HONEYSTAX TERMINAL v1.0

deer-flow

Featured Skill

An open-source long-horizon SuperAgent harness that researches, codes, and creates. With the help of sandboxes, memories, tools, skills, subagents, and a message gateway, it handles tasks that can take minutes to hours.


64,234
Why now: Moving now

Fresh repo activity plus visible builder pull. This is the kind of tool people test before it turns obvious.

Decision: High-conviction move

Copy the install, test the workflow, then decide if it earns a permanent slot.

Trial cost: Medium lift

Not hard to test, not trivial to unwind. Worth trying if it closes a sharp gap.

Risk: 44/100

GitHub health 75/100; no security policy; 804 open issues. Testable, but not something to trust blind.

What You Are Adopting

  • AI Agent: Universal
  • Model: Multiple
  • Build Time: Days

Test This In Your Stack

One command in · Clean rollback · Low commitment
Local folder: clones to the current directory. Delete the folder to remove.

Fastest way to find out if deer-flow belongs in your setup.

Copy the install command, run a real test, and back it out cleanly if it slows you down.

Try now
# Visit: https://github.com/bytedance/deer-flow

Run this first. You will know quickly if the workflow earns a permanent slot.

Back out
# No automated removal — visit https://github.com/bytedance/deer-flow

No messy cleanup loop. If it misses, remove it and keep moving.

Install Location

~/
└─ .claude/
    ├─ commands/
    ├─ agents/
    │   └─ deer-flow/ ← installs here
    └─ settings.json

About

An open-source long-horizon SuperAgent harness that researches, codes, and creates. With the help of sandboxes, memories, tools, skills, subagents, and a message gateway, it handles different levels of tasks that can take minutes to hours. An open-source skill for the AI coding ecosystem.

README

🦌 DeerFlow - 2.0

DeerFlow (Deep Exploration and Efficient Research Flow) is an open-source super agent harness that orchestrates sub-agents, memory, and sandboxes to do almost anything — powered by extensible skills.

(Demo video: deer-flow-720p.mp4)

Note

DeerFlow 2.0 is a ground-up rewrite. It shares no code with v1. If you're looking for the original Deep Research framework, it's maintained on the 1.x branch — contributions there are still welcome. Active development has moved to 2.0.

Official Website

Learn more and see real demos on our official website.

deerflow.tech


Table of Contents

  • Quick Start
  • Sandbox Mode
  • From Deep Research to Super Agent Harness
  • Core Features
    • Skills & Tools
    • Sub-Agents
    • Sandbox & File System
    • Context Engineering
    • Long-Term Memory
  • Recommended Models
  • Documentation
  • Contributing
  • License
  • Acknowledgments
  • Star History

Quick Start

Configuration

  1. Clone the DeerFlow repository

    git clone https://github.com/bytedance/deer-flow.git
    cd deer-flow
  2. Generate local configuration files

    From the project root directory (deer-flow/), run:

    make config

    This command creates local configuration files based on the provided example templates.

  3. Configure your preferred model(s)

    Edit config.yaml and define at least one model:

    models:
      - name: gpt-4                       # Internal identifier
        display_name: GPT-4               # Human-readable name
        use: langchain_openai:ChatOpenAI  # LangChain class path
        model: gpt-4                      # Model identifier for API
        api_key: $OPENAI_API_KEY          # API key (recommended: use env var)
        max_tokens: 4096                  # Maximum tokens per request
        temperature: 0.7                  # Sampling temperature
  4. Set API keys for your configured model(s)

    Choose one of the following methods:

  • Option A: Edit the .env file in the project root (Recommended)

    TAVILY_API_KEY=your-tavily-api-key
    OPENAI_API_KEY=your-openai-api-key
    # Add other provider keys as needed
  • Option B: Export environment variables in your shell

    export OPENAI_API_KEY=your-openai-api-key
  • Option C: Edit config.yaml directly (Not recommended for production)

    models:
      - name: gpt-4
        api_key: your-actual-api-key-here  # Replace placeholder
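The `use:` field above follows a `module.path:ClassName` convention, and `$VAR` values in `api_key` refer to environment variables. A minimal sketch of how such a config entry could be resolved in Python (the helper names here are illustrative, not DeerFlow's actual loader):

```python
import importlib
import os

def load_model_class(use_path):
    # Resolve "module.path:ClassName" (the `use:` convention above) to a class.
    module_name, class_name = use_path.split(":")
    module = importlib.import_module(module_name)
    return getattr(module, class_name)

def expand_env(value):
    # Expand "$OPENAI_API_KEY"-style references to environment variables.
    if isinstance(value, str) and value.startswith("$"):
        return os.environ.get(value[1:], "")
    return value
```

With a real config entry, `load_model_class("langchain_openai:ChatOpenAI")` would return the LangChain chat-model class, and `expand_env("$OPENAI_API_KEY")` would pull the key from the environment rather than hard-coding it.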

Running the Application

Option 1: Docker (Recommended)

The fastest way to get started with a consistent environment:

  1. Initialize and start:

    make docker-init    # Pull sandbox image (Only once or when image updates)
    make docker-start   # Start services (auto-detects sandbox mode from config.yaml)

    make docker-start launches the provisioner only when config.yaml uses provisioner mode (sandbox.use: src.community.aio_sandbox:AioSandboxProvider with provisioner_url).

  2. Access: http://localhost:2026

See CONTRIBUTING.md for detailed Docker development guide.

Option 2: Local Development

If you prefer running services locally:

  1. Check prerequisites:

    make check  # Verifies Node.js 22+, pnpm, uv, nginx
  2. (Optional) Pre-pull sandbox image:

    # Recommended if using Docker/Container-based sandbox
    make setup-sandbox
  3. Start services:

    make dev
  4. Access: http://localhost:2026

Advanced

Sandbox Mode

DeerFlow supports multiple sandbox execution modes:

  • Local Execution (runs sandbox code directly on the host machine)
  • Docker Execution (runs sandbox code in isolated Docker containers)
  • Docker Execution with Kubernetes (runs sandbox code in Kubernetes pods via provisioner service)

For Docker development, service startup follows the sandbox mode set in config.yaml; in Local and Docker modes, the provisioner is not started.

See the Sandbox Configuration Guide to configure your preferred mode.

MCP Server

DeerFlow supports configurable MCP servers and skills to extend its capabilities. See the MCP Server Guide for detailed instructions.

From Deep Research to Super Agent Harness

DeerFlow started as a Deep Research framework — and the community ran with it. Since launch, developers have pushed it far beyond research: building data pipelines, generating slide decks, spinning up dashboards, automating content workflows. Things we never anticipated.

That told us something important: DeerFlow wasn't just a research tool. It was a harness — a runtime that gives agents the infrastructure to actually get work done.

So we rebuilt it from scratch.

DeerFlow 2.0 is no longer a framework you wire together. It's a super agent harness — batteries included, fully extensible. Built on LangGraph and LangChain, it ships with everything an agent needs out of the box: a filesystem, memory, skills, sandboxed execution, and the ability to plan and spawn sub-agents for complex, multi-step tasks.

Use it as-is. Or tear it apart and make it yours.

Core Features

Skills & Tools

Skills are what make DeerFlow do almost anything.

A standard Agent Skill is a structured capability module — a Markdown file that defines a workflow, best practices, and references to supporting resources. DeerFlow ships with built-in skills for research, report generation, slide creation, web pages, image and video generation, and more. But the real power is extensibility: add your own skills, replace the built-in ones, or combine them into compound workflows.

Skills are loaded progressively — only when the task needs them, not all at once. This keeps the context window lean and makes DeerFlow work well even with token-sensitive models.

Tools follow the same philosophy. DeerFlow comes with a core toolset — web search, web fetch, file operations, bash execution — and supports custom tools via MCP servers and Python functions. Swap anything. Add anything.

# Paths inside the sandbox container
/mnt/skills/public
├── research/SKILL.md
├── report-generation/SKILL.md
├── slide-creation/SKILL.md
├── web-page/SKILL.md
└── image-generation/SKILL.md

/mnt/skills/custom
└── your-custom-skill/SKILL.md      ← yours

Sub-Agents

Complex tasks rarely fit in a single pass. DeerFlow decomposes them.

The lead agent can spawn sub-agents on the fly — each with its own scoped context, tools, and termination conditions. Sub-agents run in parallel when possible, report back structured results, and the lead agent synthesizes everything into a coherent output.

This is how DeerFlow handles tasks that take minutes to hours: a research task might fan out into a dozen sub-agents, each exploring a different angle, then converge into a single report — or a website — or a slide deck with generated visuals. One harness, many hands.
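The fan-out/converge pattern described here can be sketched with plain `asyncio` (an assumption-level illustration of the orchestration shape, not DeerFlow's actual API):

```python
import asyncio

async def run_subagent(angle):
    # Stand-in for a scoped sub-agent exploring one angle of the task.
    await asyncio.sleep(0)  # placeholder for real agent work
    return {"angle": angle, "findings": f"notes on {angle}"}

async def fan_out(angles):
    # Lead agent spawns sub-agents in parallel, then synthesizes the
    # structured results they report back.
    results = await asyncio.gather(*(run_subagent(a) for a in angles))
    return {r["angle"]: r["findings"] for r in results}
```

A research task over `["pricing", "competitors", "regulation"]` would run three scoped workers concurrently and hand the lead agent one merged mapping to synthesize.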

Sandbox & File System

DeerFlow doesn't just talk about doing things. It has its own computer.

Each task runs inside an isolated Docker container with a full filesystem — skills, workspace, uploads, outputs. The agent reads, writes, and edits files. It executes bash commands and runs code. It views images. All sandboxed, all auditable, zero contamination between sessions.

This is the difference between a chatbot with tool access and an agent with an actual execution environment.

# Paths inside the sandbox container
/mnt/user-data/
├── uploads/          ← your files
├── workspace/        ← agents' working directory
└── outputs/          ← final deliverables
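Zero contamination between sessions amounts to each task getting a fresh copy of this layout. A minimal sketch of that idea (a hypothetical helper; DeerFlow's real sandbox is an isolated Docker container, not a temp directory):

```python
import tempfile
from pathlib import Path

def new_session_workspace():
    # Create a fresh, isolated per-session layout mirroring the sandbox
    # paths above: uploads in, workspace for scratch, outputs for deliverables.
    root = Path(tempfile.mkdtemp(prefix="deerflow-session-"))
    for sub in ("uploads", "workspace", "outputs"):
        (root / sub).mkdir()
    return root
```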

Context Engineering

Isolated Sub-Agent Context: Each sub-agent runs in its own isolated context and cannot see the context of the lead agent or of other sub-agents. This keeps each sub-agent focused on its own task rather than distracted by unrelated state.

Summarization: Within a session, DeerFlow manages context aggressively — summarizing completed sub-tasks, offloading intermediate results to the filesystem, compressing what's no longer immediately relevant. This lets it stay sharp across long, multi-step tasks without blowing the context window.
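The summarize-and-offload behavior can be sketched as follows (an assumed shape for illustration only; the function and its summary format are not DeerFlow's internals):

```python
def compact_context(messages, keep_last=4, offload=lambda text: None):
    # Aggressive context management: persist older turns via `offload`
    # (e.g. to the filesystem) and keep only a short summary marker plus
    # the most recent messages in the live context window.
    old, recent = messages[:-keep_last], messages[-keep_last:]
    if not old:
        return messages
    offload("\n".join(old))  # full text survives on disk, not in context
    summary = f"[summary of {len(old)} earlier messages]"
    return [summary] + recent
```

The point of the pattern is that nothing is lost: compressed turns can always be re-read from disk if a later step needs the detail.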

Long-Term Memory

Most agents forget everything the moment a conversation ends. DeerFlow remembers.

Across sessions, DeerFlow builds a persistent memory of your profile, preferences, and accumulated knowledge. The more you use it, the better it knows you — your writing style, your technical stack, your recurring workflows. Memory is stored locally and stays under your control.

Recommended Models

DeerFlow is model-agnostic — it works with any LLM that implements the OpenAI-compatible API. That said, it performs best with models that support:

  • Long context windows (100k+ tokens) for deep research and multi-step tasks
  • Reasoning capabilities for adaptive planning and complex decomposition
  • Multimodal inputs for image understanding and video comprehension
  • Strong tool-use for reliable function calling and structured outputs
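Because any OpenAI-compatible endpoint works, a self-hosted model can reuse the same config.yaml schema shown in Quick Start. A hypothetical example (the model name, key variable, and `base_url` field are illustrative assumptions, not documented values):

```yaml
models:
  - name: local-model
    display_name: Local Model
    use: langchain_openai:ChatOpenAI
    model: qwen3-32b                     # placeholder model identifier
    api_key: $LOCAL_API_KEY              # assumed env var name
    base_url: http://localhost:8000/v1   # assumed field for a custom endpoint
```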

Documentation

  • Contributing Guide - Development environment setup and workflow
  • Configuration Guide - Setup and configuration instructions
  • Architecture Overview - Technical architecture details
  • Backend Architecture - Backend architecture and API reference

Contributing

We welcome contributions! Please see CONTRIBUTING.md for development setup, workflow, and guidelines.

Regression coverage includes Docker sandbox mode detection and provisioner kubeconfig-path handling tests in backend/tests/.

License

This project is open source and available under the MIT License.

Acknowledgments

DeerFlow is built upon the incredible work of the open-source community. We are deeply grateful to all the projects and contributors whose efforts have made DeerFlow possible. Truly, we stand on the shoulders of giants.

We would like to extend our sincere appreciation to the following projects for their invaluable contributions:

  • LangChain: Their exceptional framework powers our LLM interactions and chains, enabling seamless integration and functionality.
  • LangGraph: Their innovative approach to multi-agent orchestration has been instrumental in enabling DeerFlow's sophisticated workflows.

These projects exemplify the transformative power of open-source collaboration, and we are proud to build upon their foundations.

Key Contributors

A heartfelt thank you goes out to the core authors of DeerFlow, whose vision, passion, and dedication have brought this project to life:

  • Daniel Walnut
  • Henry Li

Your unwavering commitment and expertise have been the driving force behind DeerFlow's success. We are honored to have you at the helm of this journey.

Star History

Star History Chart

Tech Stack

Python · OpenAI · LangChain · GPT · LLM · Docker · Kubernetes · Nginx · pnpm

Installation

make docker-start


Reviews: 0

Active: last commit today
804 open issues
Submitted May 7, 2025
