HONEYSTAX TERMINAL v1.0
EverMemOS

Agent

Long-term memory OS for your agents across LLMs and platforms.


2,524
Why now: Moving now

Fresh repo activity plus visible builder pull. This is the kind of tool people test before it turns obvious.

Decision: High-conviction move

Copy the install, test the workflow, then decide if it earns a permanent slot.

Trial cost: Deep lift

This wants more setup and more teardown. Run it only if the upside is clear.

Risk: 42/100

GitHub health 50/100: no security policy, and 87 open issues. That makes this testable, but not something to trust blindly.

What You Are Adopting

AI Agent

Multiple

Model

Multiple

Build Time

Minutes

Test This In Your Stack

One command in • Clean rollback • Low commitment
Sandboxed: Installs to ~/.claude — isolated from your projects. One command to remove.

Fastest way to find out if EverMemOS belongs in your setup.

Copy the install command, run a real test, and back it out cleanly if it slows you down.

Try now
git clone https://github.com/EverMind-AI/EverMemOS ~/.claude/agents/evermemos

Run this first. You will know quickly if the workflow earns a permanent slot.

Back out
rm -rf ~/.claude/agents/evermemos

No messy cleanup loop. If it misses, remove it and keep moving.

Install Location

~/
└─ .claude/
   ├─ commands/
   ├─ agents/
   │   └─ evermemos/ ← installs here
   └─ settings.json

About

Long-term memory OS for your agents across LLMs and platforms. An open-source agent for the AI coding ecosystem.

README




Documentation • API Reference • Demo



Important

Memory Genesis Competition 2026

Join our AI Memory Competition! Build innovative applications, plugins, or infrastructure improvements powered by EverMemOS.

Tracks:

  • Agent + Memory - Build intelligent agents with long-term, evolving memories
  • Platform Plugins - Integrate EverMemOS with VSCode, Chrome, Slack, Notion, LangChain, and more
  • OS Infrastructure - Optimize core functionality and performance

Get Started with the Competition Starter Kit

Join our Discord to ask anything you want. AMA sessions are open to everyone and occur biweekly.


Welcome to EverMemOS

Welcome to EverMemOS! Join our community to help improve the project and collaborate with talented developers worldwide.

Community and purpose:

  • Discord: Join the EverMind Discord community to connect with other users
  • WeChat: Join the EverMind WeChat group for discussion and updates

Use Cases


OpenClaw Long-Term Memory Plugin (coming this week)

Claw is putting the pieces of his memory together. Imagine a 24/7 agent with continuous learning memory that you can carry with you wherever you go next.




Live2D Character with Memory

Add long-term memory to your anime character that can talk to you in real-time powered by TEN Framework. See the Live2D Character with Memory Example for more details.




Computer-Use with Memory

Use computer-use to capture screenshots and run analysis, all grounded in your memory. See the live demo for more details.




Game of Thrones Memories

A demonstration of AI memory infrastructure through an interactive Q&A experience with "A Game of Thrones". See the code for more details.




EverMemOS Claude Code Plugin

Persistent memory for Claude Code. Automatically saves and recalls context from past coding sessions. See the code for more details.




Visualize Memories with Graphs

A Memory Graph view that visualizes your stored entities and how they relate. This is a pure frontend demo that has not yet been connected to the backend; we are working on it. See the live demo.


Quick Start

Prerequisites

  • Python 3.10+
  • Docker 20.10+
  • uv package manager
  • 4GB RAM

Verify Prerequisites:

# Verify you have the required versions
python --version  # Should be 3.10+
docker --version  # Should be 20.10+

Installation

# 1. Clone and navigate
git clone https://github.com/EverMind-AI/EverMemOS.git
cd EverMemOS

# 2. Start Docker services
docker compose up -d

# 3. Install uv and dependencies
curl -LsSf https://astral.sh/uv/install.sh | sh
uv sync

# 4. Configure API keys
cp env.template .env
# Edit .env and set:
#   - LLM_API_KEY (for memory extraction)
#   - VECTORIZE_API_KEY (for embedding/rerank)

# 5. Start server
uv run python src/run.py

# 6. Verify installation
curl http://localhost:1995/health
# Expected response: {"status": "healthy", ...}

✅ Server running at http://localhost:1995 • Full Setup Guide
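If you script the setup, the step-6 health check can be polled instead of run once. The helper below is a minimal sketch that assumes only the documented `/health` endpoint and its `{"status": "healthy", ...}` response shape; `is_healthy` and `wait_for_server` are illustrative names, not part of EverMemOS.

```python
import json
import time
import urllib.request

HEALTH_URL = "http://localhost:1995/health"  # documented health endpoint

def is_healthy(payload: dict) -> bool:
    """Interpret the /health JSON body: healthy iff status == 'healthy'."""
    return payload.get("status") == "healthy"

def wait_for_server(url: str = HEALTH_URL, timeout: float = 60.0,
                    interval: float = 2.0) -> bool:
    """Poll the health endpoint until it reports healthy or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if is_healthy(json.load(resp)):
                    return True
        except OSError:
            pass  # server not up yet; retry after a short pause
        time.sleep(interval)
    return False
```

Useful in CI or demo scripts where `uv run python src/run.py` is started in the background and requests must wait for startup.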


Basic Usage

Store and retrieve memories with simple Python code:

import requests

API_BASE = "http://localhost:1995/api/v1"

# 1. Store a conversation memory
requests.post(f"{API_BASE}/memories", json={
    "message_id": "msg_001",
    "create_time": "2025-02-01T10:00:00+00:00",
    "sender": "user_001",
    "content": "I love playing soccer on weekends"
})

# 2. Search for relevant memories
response = requests.get(f"{API_BASE}/memories/search", json={
    "query": "What sports does the user like?",
    "user_id": "user_001",
    "memory_types": ["episodic_memory"],
    "retrieve_method": "hybrid"
})

result = response.json().get("result", {})
for memory_group in result.get("memories", []):
    print(f"Memory: {memory_group}")

📖 More Examples • 📚 API Reference • 🎯 Interactive Demos
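The two calls above can be wrapped in a thin client so callers don't repeat URLs and payload keys. This is a hypothetical sketch, not an official SDK: `MemoryClient` is an illustrative name, and it assumes only the `POST /memories` and `GET /memories/search` endpoints shown in Basic Usage.

```python
import requests

class MemoryClient:
    """Hypothetical thin wrapper over the two EverMemOS endpoints shown above."""

    def __init__(self, base_url: str = "http://localhost:1995/api/v1"):
        self.base_url = base_url.rstrip("/")

    def store(self, message_id: str, create_time: str,
              sender: str, content: str) -> dict:
        """POST one conversation message to /memories."""
        resp = requests.post(f"{self.base_url}/memories", json={
            "message_id": message_id,
            "create_time": create_time,
            "sender": sender,
            "content": content,
        })
        resp.raise_for_status()
        return resp.json()

    def search(self, query: str, user_id: str,
               memory_types=("episodic_memory",),
               retrieve_method: str = "hybrid") -> list:
        """Query /memories/search and return the memory groups list."""
        resp = requests.get(f"{self.base_url}/memories/search", json={
            "query": query,
            "user_id": user_id,
            "memory_types": list(memory_types),
            "retrieve_method": retrieve_method,
        })
        resp.raise_for_status()
        return resp.json().get("result", {}).get("memories", [])
```

With a running server, usage mirrors the snippet above: `MemoryClient().search("What sports does the user like?", "user_001")`.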


Demo

Run the Demo

# Terminal 1: Start the API server
uv run python src/run.py

# Terminal 2: Run the simple demo
uv run python src/bootstrap.py demo/simple_demo.py

Try it now: Follow the Demo Guide for step-by-step instructions.

Full Demo Experience

# Extract memories from sample data
uv run python src/bootstrap.py demo/extract_memory.py

# Start interactive chat with memory
uv run python src/bootstrap.py demo/chat_with_memory.py

See the Demo Guide for details.


Advanced Techniques

  • Group Chat Conversations - Combine messages from multiple speakers
  • Conversation Metadata Control - Fine-grained control over conversation context
  • Memory Retrieval Strategies - Lightweight vs Agentic retrieval modes
  • Batch Operations - Process multiple messages efficiently

Documentation

  • Quick Start: Installation and configuration
  • Configuration Guide: Environment variables and services
  • API Usage Guide: Endpoints and data formats
  • Development Guide: Architecture and best practices
  • Memory API: Complete API reference
  • Demo Guide: Interactive examples
  • Evaluation Guide: Benchmark testing

Evaluation & Benchmarking

EverMemOS achieves 93% overall accuracy on the LoCoMo benchmark, outperforming comparable memory systems.

Benchmark Results


Supported Benchmarks

  • LoCoMo - Long-context memory benchmark with single/multi-hop reasoning
  • LongMemEval - Multi-session conversation evaluation
  • PersonaMem - Persona-based memory evaluation

Quick Start

# Install evaluation dependencies
uv sync --group evaluation

# Run smoke test (quick verification)
uv run python -m evaluation.cli --dataset locomo --system evermemos --smoke

# Run full evaluation
uv run python -m evaluation.cli --dataset locomo --system evermemos

# View results
cat evaluation/results/locomo-evermemos/report.txt

📊 Full Evaluation Guide • 📈 Complete Results


GitHub Codespaces

EverMemOS supports GitHub Codespaces for cloud-based development. This eliminates the need to set up Docker, manage local network configurations, or worry about environment compatibility issues.

Open in GitHub Codespaces

divider divider

Requirements

  • 2-core (Free tier): ❌ Not supported. Insufficient resources for infrastructure services
  • 4-core: ✅ Minimum. Works but may be slow under load
  • 8-core: ✅ Recommended. Good performance with all services
  • 16-core+: ✅ Optimal. Best for heavy development workloads

Note: If your company provides GitHub Codespaces, hardware limitations typically won't be an issue since enterprise plans often include access to larger machine types.

Getting Started with Codespaces

  1. Click the "Open in GitHub Codespaces" button above
  2. Select a 4-core or larger machine when prompted
  3. Wait for the container to build and services to start
  4. Update API keys in .env (LLM_API_KEY, VECTORIZE_API_KEY, etc.)
  5. Run make run to start the server

All infrastructure services (MongoDB, Elasticsearch, Milvus, Redis) start automatically and are pre-configured to work together.


Questions

EverMemOS is available on these AI-powered Q&A platforms. They can help you find answers quickly and accurately in multiple languages, covering everything from basic setup to advanced implementation details.

  • DeepWiki: Ask DeepWiki


🌟 Star and stay tuned with us



Contributing

We love open-source energy! Whether you’re squashing bugs, shipping features, sharpening docs, or just tossing in wild ideas, every PR moves EverMemOS forward. Browse Issues to find your perfect entry point—then show us what you’ve got. Let’s build the future of memory together.


Tip

Welcome all kinds of contributions 🎉

Join us in building EverMemOS better! Every contribution makes a difference, from code to documentation. Share your projects on social media to inspire others!

Connect with one of the EverMemOS maintainers @elliotchen200 on 𝕏 or @cyfyifanchen on GitHub for project updates, discussions, and collaboration opportunities.


Code Contributors


Contribution Guidelines

Read our Contribution Guidelines for code standards and Git workflow.


License & Citation & Acknowledgments

Apache 2.0 • Citation • Acknowledgments


Tech Stack

Python • Docker • FastAPI • Go • MongoDB • Vite • Redis • Claude • LangChain

Installation

Configure .env, then run make run.

Open Live Project • Audit Repo

Reviews: 0

Active: last commit today
87 open issues
Submitted October 28, 2025
