cli-jentic

Is your API ready for
AI agents?

cli-jentic scores your OpenAPI spec across 6 dimensions and gives it a letter grade — so you know exactly what to fix before connecting it to an LLM.

Get Started View on GitHub
$ cli-jentic score openapi.yaml
╭─ cli-jentic · OpenAPI AI readiness ───────────────╮
│ Task Manager API v1.0.0                           │
│ Overall Score: 42.3/100   Grade: F                │
╰───────────────────────────────────────────────────╯

Foundational Compliance 75.0/100 C ██████████░░░░
Developer Experience 38.5/100 F █████░░░░░░░░░
AI-Readiness & Agent UX 31.2/100 F ████░░░░░░░░░░
Agent Usability 28.0/100 F ███░░░░░░░░░░░
Security & Governance 15.0/100 F ██░░░░░░░░░░░░
AI Discoverability 33.3/100 F ████░░░░░░░░░░

! 4/6 operations lack descriptive intent
! 5/6 operations have no documented error responses
! Security schemes defined but operations not secured

$ cli-jentic score openapi.yaml --json | jq '.overall_score'
42.3
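The --json output can also be consumed from a script instead of jq. A minimal sketch; only the overall_score key is confirmed by the jq example above, and the grade key is an assumption about the payload shape:

```python
import json

# Hypothetical --json payload from `cli-jentic score openapi.yaml --json`.
# Only "overall_score" is confirmed above; "grade" is an assumed field.
payload = '{"overall_score": 42.3, "grade": "F"}'

report = json.loads(payload)
score = report["overall_score"]
print(score)  # → 42.3
```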
Why cli-jentic

Ship APIs that AI agents
can actually use

Most OpenAPI specs are written for humans. AI agents have different needs — and most specs fail silently.

Instant feedback

Run against any OpenAPI 3.x spec and get a scored report in milliseconds. No account, no upload, no waiting.

🤖

AI-first scoring

Dimensions are weighted for what LLMs and agents actually need: semantic operation IDs, error schemas, intent descriptions.

🎯

Actionable issues

Every failing check includes the exact JSON path and a plain-English explanation so you know what to fix and where.

🔌

CI/CD ready

Structured exit codes (0 / 1 / 2) and --json output make it trivial to gate deploys on API quality.

📦

Zero infra dependencies

Pure Python CLI. Runs locally, in Docker, or in any CI runner. Your spec never leaves your machine.

📊

Letter grades at a glance

A single A–F grade tells the whole story. Drill down per-dimension to understand exactly where quality is lost.

New feature · clitic

Does your CLI tool speak
AI agent?

clitic — CLI Intelligence & Compliance Tester — probes any command-line tool and scores it across 5 dimensions. Know exactly how well your CLI will work inside an AI agent loop before you ship.

$ cli-jentic clitic demo

(*) Feeding git into the tester...
(*) Eating git (nom nom nom)...
(*) Digesting the results for git...

╭──── (*) clitic — CLI Intelligence & Compliance Tester ────╮
│ git                                     /usr/bin/git      │
│ Overall Score: 77.5/100   Grade: C                        │
╰───────────────────────────────────────────────────────────╯
Help & Discoverability 100.0/100 A ████████████████████
Machine-Readable Output 15.0/100 F ███░░░░░░░░░░░░░░░░░
Exit Code Semantics 100.0/100 A ████████████████████
Error Handling 100.0/100 A ████████████████████
Argument & Interface Design 91.7/100 A ██████████████████░░

x No JSON output support — agents cannot reliably parse tool output
! No mention of JSON or structured output flags in help text
! 0/3 subcommands mention output format flags

5 dimensions, 100 points

Each dimension probes a different aspect of agent-friendliness by actually running the tool.

🔍
Help & Discoverability · --help quality, subcommands, --version
25%
📤
Machine-Readable Output · JSON support, no raw ANSI in pipes
25%
🚦
Exit Code Semantics · 0 on help/version, non-zero on bad args
20%
⚠️
Error Handling · Errors to stderr, informative messages, no stack traces
15%
🎛️
Argument & Interface Design · GNU flags, kebab-case, standard conventions
15%
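The overall number appears to follow from a weighted average of the per-dimension scores. A minimal sketch using the weights above and the demo's git scores; the exact formula used by clitic is an assumption, though this simple weighting does reproduce the 77.5 shown in the demo:

```python
# clitic dimension weights from the table above (sum to 100%)
WEIGHTS = {
    "help_discoverability": 0.25,
    "machine_readable_output": 0.25,
    "exit_code_semantics": 0.20,
    "error_handling": 0.15,
    "interface_design": 0.15,
}

# Per-dimension scores from the `clitic demo` run against git
scores = {
    "help_discoverability": 100.0,
    "machine_readable_output": 15.0,
    "exit_code_semantics": 100.0,
    "error_handling": 100.0,
    "interface_design": 91.7,
}

overall = sum(scores[d] * WEIGHTS[d] for d in WEIGHTS)
print(round(overall, 1))  # → 77.5, matching the demo's Overall Score
```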
Try it
$ cli-jentic clitic score git
$ cli-jentic clitic score gh
$ cli-jentic clitic score curl
$ cli-jentic clitic score your-tool
Scoring model

6 dimensions, 100 points

Each dimension targets a distinct quality axis that determines whether an AI agent can discover, call, and reason about your API.

🏗️
Foundational Compliance · OpenAPI version, info object, valid paths
20%
👩‍💻
Developer Experience · Operation IDs, summaries, examples
15%
🤖
AI-Readiness & Agent Experience · Descriptions, error schemas, response detail
20%
🧠
Agent Usability · Semantic IDs, tags, parameter schemas
20%
🔐
Security & Governance · Auth schemes, operation security, docs
15%
🔍
AI Discoverability · API description, tags, external docs
10%
A
90–100 · Agent-ready
B
80–89 · Near ready
C
70–79 · Needs work
D
60–69 · Significant gaps
F
< 60 · Not ready
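The grade bands map directly to a threshold lookup. A minimal sketch; the band edges come from the table above, while the function name is illustrative:

```python
def letter_grade(score: float) -> str:
    """Map a 0-100 score to the A-F bands listed above."""
    if score >= 90:
        return "A"  # Agent-ready
    if score >= 80:
        return "B"  # Near ready
    if score >= 70:
        return "C"  # Needs work
    if score >= 60:
        return "D"  # Significant gaps
    return "F"      # Not ready

print(letter_grade(42.3))  # → F (the demo spec)
print(letter_grade(77.5))  # → C (git, per clitic)
```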
Comparison

cli-jentic vs Jentic

Both tools aim to make APIs work better with AI — but they solve fundamentally different problems at different points in your workflow.

Feature · cli-jentic · Jentic
Primary purpose · Score & improve OpenAPI specs for AI-readiness · Discover & execute APIs at runtime for AI agents
When you use it · Before & during API development (design time) · When building AI agents that call APIs (runtime)
Input · Local OpenAPI / Swagger YAML or JSON file · Natural language query to discover APIs
Output · Scored report with letter grade + fixes · Executable workflow / API call for an agent
Open Source · Fully open source (MIT) · SDK open source, platform is SaaS
Runs locally · 100% offline — spec never leaves your machine · Requires internet access to Jentic cloud registry
CI/CD integration · Exit codes + JSON output, drop into any pipeline · Not designed for CI quality gates
Actionable fix guidance · Exact JSON path + plain-English recommendation · No spec improvement guidance
Multi-dimension scoring · 6 weighted dimensions with per-dimension grades · No quality scoring model
API execution / calling · Analysis only — does not call your API · Core feature — finds and executes API calls
API registry / discovery · Works on a single spec you provide · Searches a large registry of third-party APIs
Requires account · No — zero signup, just install and run · Free tier available, full registry needs API key
Best used by · API developers, platform engineers, DevOps · AI agent developers integrating third-party APIs

They're complementary, not competing

Use cli-jentic to verify your OpenAPI spec is high-quality and AI-ready. Then use Jentic to let agents discover and call it at runtime. A higher cli-jentic grade means Jentic agents can use your API more reliably.

Get started

Up and running in 30 seconds

No account. No config. Just point it at an OpenAPI spec.

Python Package Index

Install from PyPI

Official release — same tool as on this site. One command and you are ready to score specs.

pip install cli-jentic
# Install from PyPI
pip install cli-jentic

# Or with uv
uv pip install cli-jentic

# Score your spec
cli-jentic score path/to/openapi.yaml

# Try the built-in demo
cli-jentic demo

# Score a CLI tool with clitic
cli-jentic clitic score git

Exit codes


0 Score ≥ 60 — passing
1 Error reading or parsing the spec
2 Score < 60 — failing grade

Gate your CI pipeline on API quality:
cli-jentic score spec.yaml || exit 1
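In a richer pipeline step you may want to treat a spec-parsing error (exit 1) differently from a failing grade (exit 2) rather than failing on both. A minimal sketch of that branching, based on the exit codes documented above; the function name is illustrative:

```python
def classify(exit_code: int) -> str:
    """Interpret cli-jentic's documented exit codes for a CI step."""
    if exit_code == 0:
        return "pass"        # score >= 60, ship it
    if exit_code == 1:
        return "spec-error"  # could not read or parse the spec
    if exit_code == 2:
        return "fail"        # score < 60, failing grade
    return "unknown"

print(classify(2))  # → fail
```

In shell, the same value is available as `$?` immediately after running `cli-jentic score`.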

Get in touch

Contact

Questions, feedback, or want to contribute? Reach out on any of these channels.

GitHub
@SheepSeb
🐛
Open an Issue
Bug reports & feature requests
💬
Discussions
Questions & ideas
🍴
Fork & Contribute
PRs welcome
