cli-jentic scores your OpenAPI spec across 6 dimensions and gives it a letter grade — so you know exactly what to fix before connecting it to an LLM.
Most OpenAPI specs are written for humans. AI agents have different needs — and most specs fail silently.
Run it against any OpenAPI 3.x spec and get a scored report in milliseconds. No account, no upload, no waiting.
Dimensions are weighted for what LLMs and agents actually need: semantic operation IDs, error schemas, intent descriptions.
Every failing check includes the exact JSON path and a plain-English explanation so you know what to fix and where.
Structured exit codes (0 / 1 / 2) and --json output make it trivial to gate deploys on API quality.
Pure Python CLI. Runs locally, in Docker, or in any CI runner. Your spec never leaves your machine.
A single A–F grade tells the whole story. Drill down per-dimension to understand exactly where quality is lost.
clitic — CLI Intelligence & Compliance Tester — probes any command-line tool and scores it across 5 dimensions. Know exactly how well your CLI will work inside an AI agent loop before you ship.
Each dimension probes a different aspect of agent-friendliness by actually running the tool.
Each dimension targets a distinct quality axis that determines whether an AI agent can discover, call, and reason about your API.
Both tools aim to make APIs work better with AI — but they solve fundamentally different problems at different points in your workflow.
| Feature | cli-jentic | Jentic |
|---|---|---|
| Primary purpose | Score & improve OpenAPI specs for AI-readiness | Discover & execute APIs at runtime for AI agents |
| When you use it | Before & during API development (design time) | When building AI agents that call APIs (runtime) |
| Input | Local OpenAPI / Swagger YAML or JSON file | Natural language query to discover APIs |
| Output | Scored report with letter grade + fixes | Executable workflow / API call for an agent |
| Open Source | Fully open source (MIT) | SDK open source, platform is SaaS |
| Runs locally | 100% offline — spec never leaves your machine | Requires internet access to Jentic cloud registry |
| CI/CD integration | Exit codes + JSON output, drop into any pipeline | Not designed for CI quality gates |
| Actionable fix guidance | Exact JSON path + plain-English recommendation | No spec improvement guidance |
| Multi-dimension scoring | 6 weighted dimensions with per-dimension grades | No quality scoring model |
| API execution / calling | Analysis only — does not call your API | Core feature — finds and executes API calls |
| API registry / discovery | Works on a single spec you provide | Searches a large registry of third-party APIs |
| Requires account | No — zero signup, just install and run | Free tier available, full registry needs API key |
| Best used by | API developers, platform engineers, DevOps | AI agent developers integrating third-party APIs |
Use cli-jentic to verify your OpenAPI spec is high-quality and AI-ready. Then use Jentic to let agents discover and call it at runtime. A higher cli-jentic grade means Jentic agents can use your API more reliably.
No account. No config. Just point it at an OpenAPI spec.
Official release — same tool as on this site. One command and you are ready to score specs.
| Exit code | Meaning |
|---|---|
| `0` | Score ≥ 60 — passing |
| `1` | Error reading or parsing the spec |
| `2` | Score < 60 — failing grade |
Gate your CI pipeline on API quality:

```shell
cli-jentic score spec.yaml || exit 1
```
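The one-liner treats every nonzero exit the same. If your pipeline should report a failing grade differently from a spec that could not be parsed, you can branch on the exit codes listed above. A minimal sketch — `interpret_exit` is a hypothetical helper for illustration, not part of cli-jentic itself:

```shell
# Sketch: map cli-jentic exit codes to distinct CI outcomes.
# Contract from the docs: 0 = passing, 1 = read/parse error, 2 = failing grade.
interpret_exit() {
  case "$1" in
    0) echo "pass: score >= 60" ;;
    1) echo "error: could not read or parse the spec" ;;
    2) echo "fail: score < 60" ;;
    *) echo "unknown exit code: $1" ;;
  esac
}

# In a real pipeline you would run the tool first, then inspect $?:
#   cli-jentic score spec.yaml
#   interpret_exit "$?"
interpret_exit 2   # -> fail: score < 60
```

This keeps a hard error (bad spec file) from being silently reported as a quality failure in your CI logs.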
Questions, feedback, or want to contribute? Reach out on any of these channels.