Clawify Documentation
Quick Start
For AI Agents (zero install)
One command to teach your AI agent how to use the registry:
curl -sL https://clawify.world/api/agent/bootstrap | bash
After this, tell your agent: "Find a code review skill on clawify and install it for me"
For Developers (CLI)
pip install clawify
clawify match "help me write unit tests"
clawify install subagent-dev
For Applications (API)
curl -X POST https://clawify.world/api/agent/match \
-H "Content-Type: application/json" \
-d '{"task":"code review","limit":5}'
Bootstrap for Agents
The bootstrap script installs a meta-skill into your AI tool (Claude Code, Codex, OpenClaw). After installation, the agent can autonomously search and install skills from the registry.
/api/agent/bootstrap
Returns a shell script that auto-detects installed AI tools and writes the clawify meta-skill.
curl -sL https://clawify.world/api/agent/bootstrap | bash
# Output:
# ✓ Installed to Claude Code (~/.claude/commands/clawify.md)
# ✓ Installed to Codex CLI (~/.codex/skills/clawify/SKILL.md)
Agent Discovery
/.well-known/clawify.json
Standard discovery endpoint for agents to find the registry automatically.
{
"registry": "https://clawify.world/api/agent",
"version": "1.0",
"capabilities": ["search", "match", "install", "download", "publish", "versions"],
"auth": { "type": "api_key", "prefix": "clw_" },
"skill_count": "..."
}
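An agent can consume this discovery document to construct endpoint URLs instead of hard-coding them. A minimal Python sketch, reusing the fields from the example above (the capability-to-path mapping is an assumption for illustration, though it matches the /api/agent/match path documented below):

```python
import json

# The JSON mirrors the /.well-known/clawify.json example above;
# field names come from that example, not a formal spec.
discovery = json.loads("""
{
  "registry": "https://clawify.world/api/agent",
  "version": "1.0",
  "capabilities": ["search", "match", "install", "download", "publish", "versions"],
  "auth": {"type": "api_key", "prefix": "clw_"}
}
""")

def endpoint(capability: str) -> str:
    """Build an endpoint URL for a capability the registry advertises."""
    if capability not in discovery["capabilities"]:
        raise ValueError(f"registry does not advertise {capability!r}")
    return f"{discovery['registry']}/{capability}"

print(endpoint("match"))  # https://clawify.world/api/agent/match
```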
Roles & Tags
/api/agent/roles
List role categories with skill counts. ~200 tokens; the cheapest layer-0 discovery call.
{"roles": [
{"name": "operations", "count": 108},
{"name": "research", "count": 79},
{"name": "engineering", "count": 30},
...
]}
/api/agent/tags?limit=20
List the most common tags; limit caps how many are returned.
List Skills
/api/agent/skills?q=&role=&detail=summary&page=1&per_page=20&sort=quality
Progressive Disclosure
The detail parameter controls response density, conserving the agent's context window:
| Level | Fields | ~Tokens | Use case |
|---|---|---|---|
| minimal | name, slug, role | ~15 | Listing, autocomplete |
| summary | + description, tags, quality_score | ~30 | Selection, comparison |
| full | + version, signals, install commands | ~80 | Decision, install |
Examples
# Minimal
curl 'https://clawify.world/api/agent/skills?detail=minimal&per_page=50'
# Summary
curl 'https://clawify.world/api/agent/skills?q=testing&detail=summary'
# By role
curl 'https://clawify.world/api/agent/skills?role=engineering&detail=summary'
Smart Match
/api/agent/match
Server-side intelligent matching. The core agent endpoint.
Describe a task in natural language. The server matches skills using keyword relevance + quality signals, returning the top results. Typically ~400 tokens total.
curl -X POST https://clawify.world/api/agent/match \
-H "Content-Type: application/json" \
-d '{"task": "help me optimize database queries", "limit": 5, "detail": "summary"}'
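A client typically takes the server's top-ranked hit directly. A sketch of that selection step in Python; the response shape (a "skills" list carrying summary-level fields such as quality_score) is an assumption based on the detail levels documented above, not a published schema:

```python
# Hypothetical /api/agent/match response at detail=summary.
sample_response = {
    "skills": [
        {"name": "db-tuner", "slug": "db-tuner", "role": "engineering",
         "quality_score": 0.82},
        {"name": "sql-review", "slug": "sql-review", "role": "engineering",
         "quality_score": 0.67},
    ]
}

def best_skill(response: dict) -> dict:
    """Pick the highest-quality match. The server already ranks results,
    but re-sorting guards against client-side merging of multiple pages."""
    return max(response["skills"], key=lambda s: s["quality_score"])

print(best_skill(sample_response)["name"])  # db-tuner
```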
Skill Detail
/api/agent/skills/{name}?detail=full
Resolve by human_name, source-{id} slug, or source_key.
Skill Content
/api/agent/skills/{name}/content
Returns raw SKILL.md (text/markdown). Increments download count.
Download with Tracking
/api/agent/skills/{name}/download
Returns JSON with content + metadata. Set X-Client header for tracking.
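For example, a Python client would attach the X-Client header like this (the client identifier "my-agent/0.1" is an illustrative assumption; no request is actually sent in this sketch):

```python
from urllib.request import Request

# Build a tracked download request; the endpoint path is from the
# documentation above. urllib stores header names capitalized.
req = Request(
    "https://clawify.world/api/agent/skills/subagent-dev/download",
    headers={"X-Client": "my-agent/0.1"},
)

print(req.get_header("X-client"))  # my-agent/0.1
```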
Publish a Skill
/api/agent/skills
Requires authentication. Creates a new skill in the registry.
SKILL.md Frontmatter
---
name: my-code-review
role: engineering
tags: [review, quality, testing]
depends: [git-conventional-commits]
requires: [tool:git]
---
# My Code Review
...
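Before publishing, a client can sanity-check the frontmatter. A minimal sketch assuming the simple "key: value" layout shown above; a real client would use a YAML parser, and the required-field set here (name, role) is an assumption:

```python
SKILL_MD = """---
name: my-code-review
role: engineering
tags: [review, quality, testing]
---
# My Code Review
"""

def parse_frontmatter(text: str) -> dict:
    """Split out the --- delimited block and parse key: value lines."""
    _, raw, _body = text.split("---", 2)
    fields = {}
    for line in raw.strip().splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    return fields

meta = parse_frontmatter(SKILL_MD)
assert {"name", "role"} <= meta.keys()
print(meta["name"])  # my-code-review
```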
Update a Skill
/api/agent/skills/{name}
Only the original author or admin can update. Auto-bumps patch version.
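The auto-bump behavior can be illustrated with a small sketch (this mirrors standard semver patch increments; the registry's exact versioning code is not shown in these docs):

```python
def bump_patch(version: str) -> str:
    """Increment the patch component of a MAJOR.MINOR.PATCH version."""
    major, minor, patch = version.split(".")
    return f"{major}.{minor}.{int(patch) + 1}"

print(bump_patch("1.2.3"))  # 1.2.4
```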
Version History
/api/agent/skills/{name}/versions
Dependencies
/api/agent/skills/{name}/dependencies
API Keys
/api/agent/auth/api-keys
Create an API key. The full key (clw_...) is returned only once.
Auth Methods
| Method | Format | Use case |
|---|---|---|
| API Key | Authorization: Bearer clw_... | Agent/CLI access, publishing |
| Session Token | Authorization: Bearer tkn-... | Web UI, trial jobs |
Most read endpoints work without authentication. Auth is required for publishing and API key management.
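Both methods use the same Bearer scheme; only the credential prefix differs. A sketch of building the header (the prefix check mirrors the table above; the function names are illustrative):

```python
def credential_kind(credential: str) -> str:
    """Classify a credential by its documented prefix."""
    if credential.startswith("clw_"):
        return "api_key"
    if credential.startswith("tkn-"):
        return "session_token"
    raise ValueError("unrecognized credential prefix")

def auth_header(credential: str) -> dict:
    credential_kind(credential)  # validate the prefix before use
    return {"Authorization": f"Bearer {credential}"}
```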
CLI Commands
pip install clawify
| Command | Description |
|---|---|
| clawify match "task" | Find best skills for a task |
| clawify search --role eng | Search by keyword/role/tag |
| clawify install <name> | Install to Claude/Codex/OpenClaw |
| clawify uninstall <name> | Remove skill |
| clawify update [name] | Update installed skills |
| clawify list | Show installed skills |
| clawify info <name> | Skill details + quality signals |
| clawify publish <file> | Publish SKILL.md to registry |
| clawify versions <name> | Version history |
| clawify deps <name> | Show dependencies |
| clawify roles | List categories |
| clawify login | Authenticate |
clawify match
clawify match "help me design a REST API"
clawify match "write Xiaohongshu marketing copy for me" --limit 10
clawify match "database optimization" --json
clawify search
clawify search "code review"
clawify search --role engineering --limit 20
clawify install
clawify install subagent-dev # all detected tools
clawify install subagent-dev --target claude # Claude Code only
clawify publish
clawify publish ./SKILL.md --role engineering
clawify publish ./SKILL.md --update --changelog "Added security checks"
Quality Score
Each skill has a composite quality score (0-1) computed from multiple signals. The ranking function is pluggable.
quality_score = f(
text_relevance, // query match
trial_rating, // user ratings (1-5)
usage_frequency, // log(trial + download count)
success_rate, // completed / total trials
community_signal // log(github_stars + likes)
)
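One plausible concrete instance of this pluggable function, combining the signals listed above. The weights, caps, and normalization here are assumptions for illustration; the docs only state that the score is a composite in 0-1:

```python
import math

def quality_score(text_relevance, trial_rating, trials, downloads,
                  success_rate, github_stars, likes):
    """Sketch of a composite score; weights are illustrative, not official."""
    usage = math.log1p(trials + downloads)          # log(trial + download count)
    community = math.log1p(github_stars + likes)    # log(github_stars + likes)
    raw = (0.35 * text_relevance
           + 0.25 * (trial_rating / 5)              # ratings are 1-5
           + 0.15 * min(usage / 10, 1.0)            # cap log-usage at 1
           + 0.15 * success_rate
           + 0.10 * min(community / 10, 1.0))
    return round(raw, 3)

score = quality_score(0.9, 4.2, trials=120, downloads=800,
                      success_rate=0.88, github_stars=300, likes=40)
```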
Usage Signals
Signals are collected automatically as skills are used:
| Signal | Source | Updated when |
|---|---|---|
| trial_count | Trial worker | Job completes or fails |
| trial_avg_rating | Rating endpoint | User rates a job |
| download_count | Content endpoint | GET /content called |
| install_count | Download endpoint | CLI/agent installs |
| view_count | Web UI | Skill card viewed |
More usage data = better recommendations. This is the core feedback loop.