
mlflow

Featured Agent

The open source AI engineering platform for agents, LLMs, and ML models. MLflow enables teams of all sizes to debug, evaluate, monitor, and optimize production-quality AI applications while controlling costs and managing access to models and data.


24,910 stars
Why now: Moving now

Fresh repo activity plus visible builder pull. This is the kind of tool people test before it turns obvious.

Decision: High-conviction move

Copy the install, test the workflow, then decide if it earns a permanent slot.

Trial cost: Deep lift

This wants more setup and more teardown. Run it only if the upside is clear.

Risk: 47/100

GitHub health is 75/100 with no security policy, and 2,037 open issues make this testable, but not something to trust blindly.

What You Are Adopting

AI Agent: Universal

Model: Multiple

Build Time: Days

Test This In Your Stack

One command in · Clean rollback · Low commitment
Sandboxed: installs to ~/.claude, isolated from your projects. One command to remove.

Fastest way to find out if mlflow belongs in your setup.

Copy the install command, run a real test, and back it out cleanly if it slows you down.

Try now
git clone https://github.com/mlflow/mlflow ~/.claude/agents/mlflow

Run this first. You will know quickly if the workflow earns a permanent slot.

Back out
rm -rf ~/.claude/agents/mlflow

No messy cleanup loop. If it misses, remove it and keep moving.

Install Location

~/
└─ .claude/
   ├─ commands/
   ├─ agents/
   │   └─ mlflow/ ← installs here
   └─ settings.json

About

The open source AI engineering platform for agents, LLMs, and ML models. MLflow enables teams of all sizes to debug, evaluate, monitor, and optimize production-quality AI applications while controlling costs and managing access to models and data. An open-source agent for the AI coding ecosystem.

README


Open-Source Platform for Productionizing AI

MLflow is an open-source developer platform to build AI/LLM applications and models with confidence. Enhance your AI applications with end-to-end experiment tracking, observability, and evaluations, all in one integrated platform.


Website · Docs · Feature Request · News · YouTube · Events

🚀 Installation

To install the MLflow Python package, run the following command:

pip install mlflow

📦 Core Components

MLflow is the only platform that provides a unified solution for all your AI/ML needs, including LLMs, Agents, Deep Learning, and traditional machine learning.

💡 For LLM / GenAI Developers


🔍 Tracing / Observability

Trace the internal states of your LLM/agentic applications for debugging quality issues and monitoring performance with ease.

Getting Started →


📊 LLM Evaluation

A suite of automated model evaluation tools, seamlessly integrated with experiment tracking to compare across multiple versions.

Getting Started →


🤖 Prompt Management

Version, track, and reuse prompts across your organization, helping maintain consistency and improve collaboration in prompt development.

Getting Started →
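
As a rough illustration of the workflow, here is a minimal sketch of registering and reusing a versioned prompt. It assumes the prompt registry API (mlflow.genai.register_prompt / load_prompt) available in recent MLflow 3.x releases; the prompt name and template are made up for the example.

import mlflow

# Register a versioned prompt template ({{text}} is a template variable)
prompt = mlflow.genai.register_prompt(
    name="summarize",
    template="Summarize the following text in one sentence: {{text}}",
)

# Later, load a pinned version and fill in the variables
loaded = mlflow.genai.load_prompt("prompts:/summarize/1")
print(loaded.format(text="MLflow is an open-source AI platform."))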


📦 App Version Tracking

MLflow keeps track of many moving parts in your AI applications, such as models, prompts, tools, and code, with end-to-end lineage.

Getting Started →

🎓 For Data Scientists


📝 Experiment Tracking

Track your models, parameters, metrics, and evaluation results in ML experiments and compare them using an interactive UI.

Getting Started →
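
Autologging is shown in the Usage Examples below; for anything it does not capture, the core tracking API logs parameters and metrics explicitly. A minimal sketch (the run name, parameter, and metric values are illustrative):

import mlflow

# Manual logging with the core tracking API
with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_metric("rmse", 0.72)
    # Metrics logged per step render as training curves in the UI
    for step, loss in enumerate([0.9, 0.7, 0.55]):
        mlflow.log_metric("loss", loss, step=step)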


💾 Model Registry

A centralized model store designed to collaboratively manage the full lifecycle and deployment of machine learning models.

Getting Started →
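
One common path into the registry is registering a model at logging time. A minimal sketch assuming the scikit-learn flavor; the model and registry name are illustrative:

import mlflow
from sklearn.linear_model import LinearRegression

model = LinearRegression().fit([[0.0], [1.0]], [0.0, 1.0])

# Log the model and register it under the name "demo-regressor"
# (a new registry version is created on each registration)
with mlflow.start_run():
    mlflow.sklearn.log_model(model, "model", registered_model_name="demo-regressor")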


🚀 Deployment

Tools for seamless model deployment to batch and real-time scoring on platforms like Docker, Kubernetes, Azure ML, and AWS SageMaker.

Getting Started →
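
Whatever the target platform, the entry point is the same model URI. A minimal sketch of loading a registered model for batch scoring with the generic pyfunc API ("models:/demo-regressor/1" is an illustrative URI); the same URI also works with the mlflow models serve CLI for a local REST endpoint:

import mlflow.pyfunc
import numpy as np

# Load a registered model by URI and score a batch
model = mlflow.pyfunc.load_model("models:/demo-regressor/1")
print(model.predict(np.array([[0.5]])))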

🌐 Hosting MLflow Anywhere


You can run MLflow in many different environments, including local machines, on-premise servers, and cloud infrastructure.

Trusted by thousands of organizations, MLflow is now offered as a managed service by most major cloud providers:

  • Amazon SageMaker
  • Azure ML
  • Databricks
  • Nebius

For hosting MLflow on your own infrastructure, please refer to this guidance.

🗣️ Supported Programming Languages

  • Python
  • TypeScript / JavaScript
  • Java
  • R

🔗 Integrations

MLflow is natively integrated with many popular machine learning frameworks and GenAI libraries.


Usage Examples

Tracing (Observability) (Doc)

MLflow Tracing provides LLM observability for various GenAI libraries such as OpenAI, LangChain, LlamaIndex, DSPy, AutoGen, and more. To enable auto-tracing, call mlflow.xyz.autolog() before running your models. Refer to the documentation for customization and manual instrumentation.

import mlflow
from openai import OpenAI

# Enable tracing for OpenAI
mlflow.openai.autolog()

# Query OpenAI LLM normally
response = OpenAI().chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hi!"}],
    temperature=0.1,
)

Then navigate to the "Traces" tab in the MLflow UI to find the trace records for the OpenAI query.
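The documentation linked above also covers manual instrumentation for code that no autologging integration handles. A minimal sketch using the mlflow.trace decorator (available in recent MLflow releases); the functions are stand-ins:

import mlflow

@mlflow.trace
def retrieve(query: str) -> list[str]:
    # Stand-in for a real retrieval step
    return ["doc-1", "doc-2"]

@mlflow.trace
def answer(query: str) -> str:
    # Nested traced calls appear as child spans in the same trace
    docs = retrieve(query)
    return f"Answer based on {len(docs)} documents."

answer("What is MLflow?")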

Evaluating LLMs, Prompts, and Agents (Doc)

The following example runs automatic evaluation for question-answering tasks with several built-in metrics.

import os
import openai
import mlflow
from mlflow.genai.scorers import Correctness, Guidelines

client = openai.OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# 1. Define a simple QA dataset
dataset = [
    {
        "inputs": {"question": "Can MLflow manage prompts?"},
        "expectations": {"expected_response": "Yes!"},
    },
    {
        "inputs": {"question": "Can MLflow create a taco for my lunch?"},
        "expectations": {
            "expected_response": "No, unfortunately, MLflow is not a taco maker."
        },
    },
]


# 2. Define a prediction function to generate responses
def predict_fn(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini", messages=[{"role": "user", "content": question}]
    )
    return response.choices[0].message.content


# 3. Run the evaluation
results = mlflow.genai.evaluate(
    data=dataset,
    predict_fn=predict_fn,
    scorers=[
        # Built-in LLM judge
        Correctness(),
        # Custom criteria using LLM judge
        Guidelines(name="is_english", guidelines="The answer must be in English"),
    ],
)

Navigate to the "Evaluations" tab in the MLflow UI to find the evaluation results.
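Built-in judges can also be mixed with deterministic custom checks. A rough sketch, assuming the @scorer decorator from mlflow.genai.scorers in MLflow 3.x; the conciseness rule is made up for the example:

from mlflow.genai.scorers import scorer

@scorer
def is_concise(outputs: str) -> bool:
    # Deterministic check on the prediction output: at most 50 words
    return len(outputs.split()) <= 50

# Pass it alongside the built-in scorers, e.g.:
# mlflow.genai.evaluate(data=dataset, predict_fn=predict_fn,
#                       scorers=[Correctness(), is_concise])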

Tracking Model Training (Doc)

The following example trains a simple regression model with scikit-learn, while enabling MLflow's autologging feature for experiment tracking.

import mlflow

from sklearn.model_selection import train_test_split
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Enable MLflow's automatic experiment tracking for scikit-learn
mlflow.sklearn.autolog()

# Load the training dataset
db = load_diabetes()
X_train, X_test, y_train, y_test = train_test_split(db.data, db.target)

rf = RandomForestRegressor(n_estimators=100, max_depth=6, max_features=3)
# MLflow triggers logging automatically upon model fitting
rf.fit(X_train, y_train)

Once the above code finishes, run the following command in a separate terminal and access the MLflow UI via the printed URL. An MLflow Run should be created automatically, tracking the training dataset, hyperparameters, performance metrics, the trained model, dependencies, and more.

mlflow server
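
By default the server listens on http://127.0.0.1:5000. To log future runs to this server rather than to local files, point the client at that address before running your training code (the experiment name here is illustrative):

import mlflow

# Send runs to the tracking server started above
mlflow.set_tracking_uri("http://127.0.0.1:5000")
mlflow.set_experiment("diabetes-rf")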

💭 Support

  • For help or questions about MLflow usage (e.g. "how do I do X?"), visit the documentation.
  • In the documentation, you can ask questions of the AI-powered chatbot via the "Ask AI" button at the bottom right.
  • Join virtual events such as office hours and meetups.
  • To report a bug, file a documentation issue, or submit a feature request, please open a GitHub issue.
  • For release announcements and other discussions, please subscribe to our mailing list ([email protected]) or join us on Slack.

🤝 Contributing

We happily welcome contributions to MLflow!

  • Submit bug reports and feature requests
  • Contribute to good-first-issue and help-wanted issues
  • Write about MLflow and share your experience

Please see our contribution guide to learn more about contributing to MLflow.

⭐️ Star History

[Star history chart showing GitHub stars over time]

✏️ Citation

If you use MLflow in your research, please cite it using the "Cite this repository" button at the top of the GitHub repository page, which will provide you with citation formats including APA and BibTeX.

👥 Core Members

MLflow is currently maintained by the following core members with significant contributions from hundreds of exceptionally talented community members.

  • Ben Wilson
  • Corey Zumar
  • Daniel Lok
  • Gabriel Fu
  • Harutaka Kawamura
  • Joel Robin P
  • Matt Prahl
  • Pat Sukprasert
  • Serena Ruan
  • Tomu Hirata
  • Weichen Xu
  • Yuki Watanabe

Tech Stack

Go · Python · LLM · TypeScript · JavaScript · Java · OpenAI · LangChain · LlamaIndex · GPT · Docker · Kubernetes · AWS
Open Live Project · Audit Repo

Reviews (0)


Active · Last commit today
2,037 open issues
Submitted June 5, 2018
