Agent Tool Use Documentation Specification
Agent-readable tool documentation requires JSON Schema inputs/outputs, semantic descriptions, idempotency hints, error catalogs, and at least three worked examples per tool. Documentation written for humans alone fails because agents cannot infer prerequisites and side effects.
TL;DR
Writing docs for AI agents is different from writing docs for humans. Agents need rigid input/output schemas, machine-readable side-effect declarations, and disambiguating examples. This spec defines the minimum doc surface for agent-grade tool integration.
Why agent-readable docs matter
LLM-driven agents construct tool calls from documentation alone. Sloppy docs produce:
- Wrong arguments (missing required fields, wrong types)
- Repeated calls when one would do (no idempotency hint)
- Destructive calls when read-only would do (no side-effect declaration)
- No retries on transient errors, or futile retries on permanent ones (no retryability hint)
Required components
1. Tool name
Kebab-case, action-oriented, ≤ 30 characters. Examples: search-articles, get-citation, create-invoice.
2. Description
One sentence, ≤ 25 words, describing the action and primary use. Avoid marketing language.
3. Input JSON Schema
Strict schema with:
- type: object
- required array
- Per-property description, type, format, and enum where applicable
- Examples per property
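For instance, a property with a constrained vocabulary should carry a type, an enum, a description, and an example. A minimal sketch (the format property and its values are hypothetical):
input_schema:
  type: object
  required: [format]
  properties:
    format:
      type: string
      enum: [bibtex, apa, mla]
      description: "Citation output format."
      example: "apa"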
4. Output JSON Schema
Same rigor as input.
5. Side-effect declarations
annotations:
  read_only: true | false
  destructive: true | false
  idempotent: true | false
  requires_confirmation: true | false
  open_world: true | false
These hints let agent runtimes route tool calls correctly.
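What this enables, as a sketch: annotation-driven routing in a hypothetical agent runtime. The decision rules below are illustrative, not mandated by this spec.

# Hypothetical runtime policy: route a tool call from its side-effect
# annotations. Field names match the annotations block above.
def route_tool_call(annotations: dict) -> str:
    if annotations.get("requires_confirmation") or annotations.get("destructive"):
        return "confirm"             # surface a user-confirmation prompt first
    if annotations.get("read_only"):
        return "execute"             # safe to call without ceremony
    if annotations.get("idempotent"):
        return "execute_with_retry"  # transient failures can be retried safely
    return "confirm"                 # undeclared side effects: default to caution

assert route_tool_call({"read_only": True, "idempotent": True}) == "execute"
assert route_tool_call({"destructive": True}) == "confirm"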
6. Error catalog
For every error code:
- Code (not_found, auth_required, rate_limited, validation_error, temporary_failure)
- HTTP status (if HTTP)
- Retryability (retry_after, backoff_seconds)
- Suggested agent action
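One catalog entry, following the YAML conventions of the example doc below (the suggested_action field name is illustrative, not standardized):
errors:
  - code: rate_limited
    http_status: 429
    retryable: true
    retry_after: 30
    suggested_action: "Wait retry_after seconds, then reissue the identical call."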
7. At least three worked examples
Each with:
- Realistic input
- Realistic output
- Brief commentary on when this example applies
8. Prerequisites
Which tools must be called first; which auth scopes are required.
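One possible machine-readable shape for this component (the field names are illustrative; the spec only requires that the information be declared):
prerequisites:
  auth_scopes: [citations:read]
  call_first: [search-articles]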
Example tool doc
name: get-citation
description: Fetch a publisher's recommended citation phrasing for a given slug.
input_schema:
  type: object
  required: [slug]
  properties:
    slug:
      type: string
      description: "Article slug, e.g., 'citation-readiness-score-framework'"
      example: "citation-readiness-score-framework"
output_schema:
  type: object
  required: [recommended_phrasing, canonical_url]
  properties:
    recommended_phrasing: {type: string}
    canonical_url: {type: string, format: uri}
    license: {type: string}
annotations:
  read_only: true
  idempotent: true
  destructive: false
errors:
  - code: not_found
    http_status: 404
    retryable: false
examples:
  - input: {slug: "citation-readiness-score-framework"}
    output:
      recommended_phrasing: "According to Geodocs, ..."
      canonical_url: "https://geodocs.dev/..."
      license: "CC-BY-4.0"
prerequisites: "None."
Format options
- OpenAPI 3.1 — widest compatibility; agents synthesize tool docs from it.
- MCP server tool definitions — native to Claude Desktop and supported by the OpenAI Agents SDK.
- JSON Schema files — simplest; pair with markdown narrative for descriptions and examples.
Anti-patterns
- Optional fields without defaults — leads to inconsistent agent inputs.
- Polymorphic outputs — returning different shapes depending on the case; agents fail to parse them reliably.
- Hidden side effects — if calling the tool also writes a log or mutates state, declare it.
- Long-form prose-only docs — missing schemas; unusable for agents.
- Inconsistent naming — get_x vs getX vs fetch-x across tools.
Validation
- Run schema validators on inputs/outputs.
- Lint for required components.
- Provide a sandbox runtime where agents can dry-run tool calls.
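A minimal validation sketch in Python using the jsonschema library; the schema and payload are illustrative stand-ins for a real tool's input_schema.

# pip install jsonschema
from jsonschema import Draft202012Validator

# Illustrative input schema; substitute your tool's published input_schema.
input_schema = {
    "type": "object",
    "required": ["slug"],
    "properties": {"slug": {"type": "string"}},
    "additionalProperties": False,
}

validator = Draft202012Validator(input_schema)

# A call an agent might construct; validation catches the wrong type early.
candidate_call = {"slug": 12345}
for err in validator.iter_errors(candidate_call):
    print(f"validation_error at {list(err.path)}: {err.message}")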
How to apply
- Audit current tool docs against the required components list.
- Migrate to OpenAPI 3.1 or MCP tool definitions.
- Author at least three worked examples per tool.
- Publish a changelog so agents can detect schema changes.
- Re-test with at least one major agent runtime per quarter.
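A changelog entry might look like this (the shape is illustrative; any machine-readable format agents can diff will do):
changelog:
  - tool: get-citation
    version: "2.0.0"
    date: "2025-06-01"
    breaking: true
    summary: "input_schema changed; agents pinned to 1.x are unaffected."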
FAQ
Q: How is this different from API documentation?
Agent-readable docs add side-effect declarations, retryability hints, and disambiguating examples: information humans typically infer from context but LLMs cannot.
Q: Can I auto-generate from OpenAPI?
Yes — but you must hand-author rich description fields and per-property examples. Auto-generated descriptions are usually too thin.
Q: Should I version tool schemas?
Yes — semver per tool. Breaking changes warrant a major version; agents pin to versions.
Q: Does MCP replace OpenAPI?
Not necessarily — MCP is more specialized for agents. Many publishers ship both.
Q: What is the most common documentation mistake?
Missing side-effect declarations. Without them, agent runtimes treat read-only tools as potentially destructive and surface unnecessary user-confirmation prompts.