Generate and evaluate agent skills for code agents like Claude Code, OpenCode, and OpenAI Codex
Copy the install, test the workflow, then decide if it earns a permanent slot.
Still active enough to matter. Good candidate for a fast stack test instead of a long evaluation loop.
Not hard to test, not trivial to unwind. Worth trying if it closes a sharp gap.
GitHub health 50/100: no security policy, 8 open issues. Testable, but not something to trust blindly.
AI Agent: Multiple
Model: Multiple
Build Time: Instant
Fastest way to find out if upskill belongs in your setup.
Copy the install command, run a real test, and back it out cleanly if it slows you down.
# Visit: https://github.com/huggingface/upskill
Run this first. You will know quickly if the workflow earns a permanent slot.
# No automated removal; visit https://github.com/huggingface/upskill
No messy cleanup loop. If it misses, remove it and keep moving.
Install Location
~/
└─ .claude/
   ├─ commands/
   ├─ agents/
   │  └─ upskill/   ← installs here
   └─ settings.json
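Since there is no automated removal, backing the skill out means deleting the directory shown in the tree above. A minimal cleanup sketch, assuming that `~/.claude/agents/upskill/` is the only thing the install creates (the path and the absence of other side effects are assumptions, not confirmed by upskill's docs):

```shell
#!/bin/sh
# Hypothetical cleanup: removes the skill directory from the install-location
# tree above. The path is an assumption; check the repo's README before
# deleting anything else (e.g. entries it may have added to settings.json).
SKILL_DIR="${HOME}/.claude/agents/upskill"
if [ -d "$SKILL_DIR" ]; then
  rm -rf "$SKILL_DIR"
  echo "removed: $SKILL_DIR"
else
  echo "nothing to remove at $SKILL_DIR"
fi
```

Either branch leaves you in the same state: the directory is gone and the rest of `~/.claude/` is untouched, which is what makes this a low-risk test to unwind.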
Generate and evaluate agent skills for code agents like Claude Code, OpenCode, and OpenAI Codex. An open-source skill for the AI coding ecosystem.