Honeystax
HONEYSTAX TERMINAL v1.0
Acontext

SKILL

Agent Skills as a Memory Layer

Copy the install, test the workflow, then decide if it earns a permanent slot.

3,352
Why now: Moving now

Fresh repo activity plus visible builder pull. This is the kind of tool people test before it becomes obvious.

Decision: High-conviction move

Copy the install, test the workflow, then decide if it earns a permanent slot.

Trial cost: Medium lift

Not hard to test, not trivial to unwind. Worth trying if it closes a sharp gap.

Risk: 22/100

GitHub health 75/100; no security policy. Fresh repo health and a manageable issue load keep the risk controlled.

What You Are Adopting

AI Agent: Universal

Model: Multiple

Build Time: Minutes

Test This In Your Stack

One command in · Clean rollback · Low commitment

Local: clones to the current directory. Delete the folder to remove.

Fastest way to find out if Acontext belongs in your setup.

Copy the install command, run a real test, and back it out cleanly if it slows you down.

Try now
# Visit: https://github.com/memodb-io/Acontext

Run this first. You will know quickly if the workflow earns a permanent slot.

Back out
# No automated removal — visit https://github.com/memodb-io/Acontext

No messy cleanup loop. If it misses, remove it and keep moving.

Install Location

~/
└─ .claude/
    ├─ commands/
    ├─ agents/
    │   └─ acontext/ ← installs here
    └─ settings.json
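Given the tree above, and since removal is described as "delete the folder to remove", backing out can be as simple as deleting the installed directory. This is a minimal sketch assuming the default install path shown above; adjust if your install landed elsewhere.

```shell
# Remove the installed Acontext skill folder
# (path taken from the Install Location tree; verify before deleting)
rm -rf "$HOME/.claude/agents/acontext"
```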

About

Agent Skills as a Memory Layer. An open-source skill for the AI coding ecosystem.

README

Acontext - The Agent Memory Stack

🌐 Website | 📚 Document


Acontext is the memory stack for production AI agents. Think of it as Supabase for agent memory.

It unifies short-term memory, mid-term state, and long-term skill for production AI agents.

❓ Why use Acontext

The Problem

  • Context data is scattered — messages, files, and skills live in different storages with no unified interface
  • No observability on agent state — you can't track success rates, replay trajectories, or know if your agent is actually working
  • Your agent's memory is a black box — vector stores and key-value memory are opaque, not inspectable, and not version controllable

Acontext's Approach

  • Short-term Memory — unified storage for messages, files, and artifacts — integrated with Claude Agent SDK, AI-SDK, OpenAI SDK...
  • Mid-term State — replay trajectories, track success rates, and monitor agents in real-time
  • Long-term Skill — agents distill successful/failed task outcomes into reusable, human-readable skill files, improving with every run
Acontext Memory Stack — Short-term, Mid-term, Long-term
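To make "human-readable skill files" concrete, here is a hypothetical sketch of rendering a learned skill as markdown. The names and structure are illustrative assumptions, not Acontext's actual skill-file format; see the docs for the real thing.

```python
# Hypothetical illustration of a human-readable skill file.
# The field names (name, description, steps) are assumptions for this sketch.
def render_skill(name: str, description: str, steps: list[str]) -> str:
    """Render a learned skill as a readable markdown document."""
    lines = [f"# Skill: {name}", "", description, "", "## Steps"]
    lines += [f"{i}. {step}" for i, step in enumerate(steps, 1)]
    return "\n".join(lines)

print(render_skill(
    "deploy-to-staging",
    "Deploy the API service to the staging environment.",
    ["Run the test suite", "Build the container image", "Apply the staging manifest"],
))
```

The point of a format like this is that, unlike a vector store, it can be inspected, diffed, and version controlled.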

💡 Core Features

  • Short-term Memory
    • Session: save agent history from any LLM, any modality
  • Mid-term State
    • State Tracking: collect agent tasks and results in near real-time
  • Long-term Skill
    • Skill Memory: agents automatically build and update skills from successful/failed sessions
Dashboard

🚀 Step-by-step Quickstart

Connect to Acontext

  1. Go to Acontext.io and claim your free credits.
  2. Complete the one-click onboarding to get your API Key (it starts with sk-ac).
Dashboard
💻 Self-host Acontext

We provide acontext-cli to help you run a quick proof of concept. Install it in your terminal first:

curl -fsSL https://install.acontext.io | sh

You need Docker installed and an OpenAI API key to start an Acontext backend on your machine:

mkdir acontext_server && cd acontext_server
acontext server up

Make sure your LLM can call tools. By default, Acontext uses gpt-4.1.

acontext server up will create or reuse .env and config.yaml for Acontext, and create a db folder to persist data.

Once it's done, you can access the following endpoints:

  • Acontext API Base URL: http://localhost:8029/api/v1
  • Acontext Dashboard: http://localhost:3000/

Install SDKs

We maintain Python (PyPI) and TypeScript (npm) SDKs. The snippets below use Python.

See the docs for the TypeScript SDK quickstart.

pip install acontext

Initialize Client

import os
from acontext import AcontextClient

# For cloud:
client = AcontextClient(
    api_key=os.getenv("ACONTEXT_API_KEY"),
)

# For self-hosted:
client = AcontextClient(
    base_url="http://localhost:8029/api/v1",
    api_key="sk-ac-your-root-api-bearer-token",
)

The Memory Stack in 3 Steps

Store a message, get agent state, and retrieve learned skills — one API for each layer.

# Setup: create a session and a learning space, then link the session for learning
session = client.sessions.create()
space = client.learning_spaces.create()
client.learning_spaces.learn(space.id, session_id=session.id)

# 1. Short-term Memory — store messages in any LLM format
client.sessions.store_message(
    session_id=session.id,
    blob={"role": "user", "content": "Deploy the new API to staging"},
)
# ... your agent runs ...
msgs = client.sessions.get_messages(session_id=session.id)

# 2. Mid-term State — flush to trigger processing, then get state
client.sessions.flush(session.id)
summary = client.sessions.get_session_summary(session_id=session.id)
print(summary)

# 3. Long-term Skill — wait for learning, then retrieve skills
client.learning_spaces.wait_for_learning(space.id, session_id=session.id)
skills = client.learning_spaces.list_skills(space.id)
for skill in skills:
    print(f"{skill.name}: {skill.description}")

flush and wait_for_learning are blocking helpers for demo purposes. In production, task extraction and learning run in the background automatically — your agent never waits.
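If you want the same wait-for-completion behavior in your own code without the blocking helpers, a generic poll loop works. This is a sketch; the client call in the usage comment is only an illustration of how it might be applied.

```python
import time

def wait_until(check, timeout=30.0, interval=0.5):
    """Poll check() until it returns a truthy value or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = check()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met before timeout")

# Hypothetical usage: block until the learning space has produced skills
# skills = wait_until(lambda: client.learning_spaces.list_skills(space.id))
```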

More Features

  • Context Engineering — Compress context with summaries and edit strategies
  • Disk — Virtual, persistent filesystem for agents
  • Sandbox — Isolated code execution with bash, Python, and mountable skills
  • Agent Tools — Disk tools, sandbox tools, and skill tools for LLM function calling

🧐 Use Acontext to build Agents

Download end-to-end example scripts with the acontext CLI:

Python

acontext create my-proj --template-path "python/openai-basic"

More Python examples:

  • python/openai-agent-basic: OpenAI Agent SDK template
  • python/openai-agent-artifacts: agent that can edit and download artifacts
  • python/claude-agent-sdk: Claude Agent SDK with ClaudeAgentStorage
  • python/agno-basic: Agno framework template
  • python/smolagents-basic: smolagents (Hugging Face) template
  • python/interactive-agent-skill: interactive sandbox with mountable agent skills

TypeScript

acontext create my-proj --template-path "typescript/openai-basic"

More TypeScript examples:

  • typescript/vercel-ai-basic: agent in @vercel/ai-sdk
  • typescript/claude-agent-sdk: Claude Agent SDK with ClaudeAgentStorage
  • typescript/interactive-agent-skill: interactive sandbox with mountable agent skills

Note

Check our example repo for more templates: Acontext-Examples.

We're cooking up more full-stack Agent Applications! Tell us what you want!

🔍 Document

To learn more about long-term skill and what Acontext can do, visit our docs or start with What is Long-term Skill?

❤️ Stay Updated

Star Acontext on GitHub to support the project and receive instant notifications

🏗️ Architecture

graph TB
    subgraph "Client Layer"
        PY["pip install acontext"]
        TS["npm i @acontext/acontext"]
    end
    
    subgraph "Acontext Backend"
      subgraph " "
          API["API<br/>localhost:8029"]
          CORE["Core"]
          API -->|FastAPI & MQ| CORE
      end
      
      subgraph " "
          Infrastructure["Infrastructures"]
          PG["PostgreSQL"]
          S3["S3"]
          REDIS["Redis"]
          MQ["RabbitMQ"]
      end
    end
    
    subgraph "Dashboard"
        UI["Web Dashboard<br/>localhost:3000"]
    end
    
    PY -->|RESTful API| API
    TS -->|RESTful API| API
    UI -->|RESTful API| API
    API --> Infrastructure
    CORE --> Infrastructure

    Infrastructure --> PG
    Infrastructure --> S3
    Infrastructure --> REDIS
    Infrastructure --> MQ
    
    
    style PY fill:#3776ab,stroke:#fff,stroke-width:2px,color:#fff
    style TS fill:#3178c6,stroke:#fff,stroke-width:2px,color:#fff
    style API fill:#00add8,stroke:#fff,stroke-width:2px,color:#fff
    style CORE fill:#ffd43b,stroke:#333,stroke-width:2px,color:#333
    style UI fill:#000,stroke:#fff,stroke-width:2px,color:#fff
    style PG fill:#336791,stroke:#fff,stroke-width:2px,color:#fff
    style S3 fill:#ff9900,stroke:#fff,stroke-width:2px,color:#fff
    style REDIS fill:#dc382d,stroke:#fff,stroke-width:2px,color:#fff
    style MQ fill:#ff6600,stroke:#fff,stroke-width:2px,color:#fff

🤝 Stay Together

Join the community for support and discussions:

  • Discuss with Builders on Acontext Discord 👻
  • Follow Acontext on X 𝕏

🌟 Contributing

  • Check our roadmap.md first.
  • Read contributing.md

🥇 Badges

Made with Acontext Made with Acontext (dark)

[![Made with Acontext](https://assets.memodb.io/Acontext/badge-made-with-acontext.svg)](https://acontext.io)

[![Made with Acontext](https://assets.memodb.io/Acontext/badge-made-with-acontext-dark.svg)](https://acontext.io)

📑 LICENSE

This project is currently licensed under Apache License 2.0.

Tech Stack

Go · Vite · LLM · TypeScript · Python · FastAPI · PostgreSQL · Redis · Supabase · OpenAI · Claude · GPT · Docker · Vercel

Installation

We maintain Python and TypeScript SDKs. The snippets above use Python; see the docs for the TypeScript SDK quickstart.

pip install acontext


Reviews: 0


Active: last commit today
30 open issues
Submitted July 16, 2025
