/.well-known/ai-plugin.json Manifest Specification
ai-plugin.json is a JSON manifest published at /.well-known/ai-plugin.json that declares plugin metadata, authentication mode, and a link to an OpenAPI specification, allowing AI agent runtimes to discover and call an API. Although OpenAI retired the original ChatGPT Plugins surface in 2024, the manifest format remains in active use by Custom GPTs, LibreChat, and several open-source agent runtimes.
TL;DR
Host a JSON file at /.well-known/ai-plugin.json with schema_version, name_for_model, description_for_model, an auth block, and an api block pointing at an OpenAPI document. Choose an auth mode (none, service_http, user_http, or oauth) that matches your API's actual access model. ChatGPT Plugins were wound down by OpenAI in 2024, so for new integrations pair the manifest with a Model Context Protocol (MCP) server and treat ai-plugin.json as a compatibility surface for older clients.
Definition
ai-plugin.json is the manifest file originally specified by OpenAI for the ChatGPT Plugins program. It is hosted at the well-known path /.well-known/ai-plugin.json on the API's own origin, contains plugin metadata and an authentication declaration, and links to an OpenAPI document that describes the actual endpoints. The manifest is the discovery contract: any agent runtime that supports ai-plugin.json fetches the file, validates it, presents the metadata to the model, and uses the linked OpenAPI to call endpoints.
The core fields are:
- schema_version: manifest version, currently "v1".
- name_for_model / name_for_human: machine-friendly slug and human-readable name.
- description_for_model / description_for_human: model-facing prompt and human-facing description.
- auth: authentication mode (none, service_http, user_http, or oauth).
- api: object with type (typically "openapi") and url pointing at the OpenAPI document.
- logo_url, contact_email, legal_info_url: presentation and trust metadata.
The manifest is consumed by clients such as ChatGPT Plugins (deprecated), Custom GPTs (via Actions), LibreChat, and several open agent frameworks that support the same discovery convention.
Why this matters
For publishers and API authors, ai-plugin.json is the lowest-cost way to expose an existing HTTP API to AI agents. The manifest itself is small, the OpenAPI document is reusable across many consumers, and the description_for_model field gives the publisher a concise way to instruct the model how and when to call the API. Even though ChatGPT Plugins is retired, the manifest still drives integrations into Custom GPTs (which evolved into Apps), LibreChat, and other tools that read the discovery file.
The broader trend, however, is toward Model Context Protocol (MCP) servers as the canonical agent integration surface. MCP defines a richer tool schema, supports streaming and prompt templates, and is consumed natively by Claude Desktop, ChatGPT desktop apps, and a growing list of agent runtimes. New integrations should publish both: ai-plugin.json for legacy or HTTP-only consumers, and MCP for richer agent runtimes. Treating ai-plugin.json as a compatibility surface—not the primary integration path—future-proofs the publisher against further client churn.
How it works
A minimal manifest that mirrors OpenAI's reference example:
{"schema_version":"v1","name_for_human":"TODO Manager","name_for_model":"todo_manager","description_for_human":"Manages your TODOs!","description_for_model":"An app for managing a user's TODOs","api":{"type":"openapi","url":"https://example.com/openapi.json"},"auth":{"type":"none"},"logo_url":"https://example.com/logo.png","legal_info_url":"https://example.com/legal","contact_email":"hello@example.com"}
Field summary:
| Field | Type | Required | Notes |
|---|---|---|---|
| schema_version | Text | Yes | Currently "v1". |
| name_for_model | Text | Yes | Machine slug, lowercase, no spaces. |
| name_for_human | Text | Yes | Display name. |
| description_for_model | Text | Yes | Model-facing prompt; describe when to use the plugin. |
| description_for_human | Text | Yes | One-line user-facing summary. |
| auth | Object | Yes | Auth mode and parameters; see below. |
| api | Object | Yes | {"type":"openapi","url":"..."}. |
| logo_url | URL | Yes | Square logo, typically PNG. |
| contact_email | Email | Yes | Contact for the publisher. |
| legal_info_url | URL | Yes | Terms or privacy policy URL. |
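The field set above can be captured as a typed sketch in Python; `AiPluginManifest`, `AuthBlock`, and `ApiBlock` are illustrative names introduced here, not types defined by any spec.

```python
from typing import TypedDict


class ApiBlock(TypedDict):
    type: str  # typically "openapi"
    url: str   # absolute URL of the OpenAPI document


class AuthBlock(TypedDict, total=False):
    # Only "type" is universal; the remaining keys depend on the auth mode.
    type: str                        # "none", "service_http", "user_http", or "oauth"
    authorization_type: str          # e.g. "bearer"
    client_url: str                  # oauth
    scope: str                       # oauth
    authorization_url: str           # oauth
    authorization_content_type: str  # oauth
    verification_tokens: dict        # service_http / oauth


class AiPluginManifest(TypedDict):
    schema_version: str
    name_for_model: str
    name_for_human: str
    description_for_model: str
    description_for_human: str
    auth: AuthBlock
    api: ApiBlock
    logo_url: str
    contact_email: str
    legal_info_url: str
```

Declaring the auth block with `total=False` mirrors the manifest's behavior: every key is conditional on the chosen auth mode, so only the presence of the block itself is mandatory.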
The auth block supports four modes:
- {"type":"none"} — no authentication; suitable for public read APIs.
- {"type":"service_http","authorization_type":"bearer","verification_tokens":{...}} — service-level bearer token shared by the agent runtime.
- {"type":"user_http","authorization_type":"bearer"} — per-user bearer token captured by the agent runtime.
- {"type":"oauth","client_url":"...","scope":"...","authorization_url":"...","authorization_content_type":"application/x-www-form-urlencoded","verification_tokens":{...}} — OAuth 2 flow brokered by the runtime.
During discovery the runtime fetches the manifest, validates that all required fields are present, fetches the OpenAPI document at api.url, and synthesizes a tool description for the model. OpenAPI operations whose operationId and description fields are well-written produce the most reliable model behavior.
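The validation half of that discovery sequence can be sketched in a few lines of Python; `validate_manifest` and `REQUIRED_FIELDS` are illustrative names, not part of any runtime's API.

```python
import json
from urllib.parse import urlparse

# Required top-level fields in a "v1" manifest, per the field summary above.
REQUIRED_FIELDS = [
    "schema_version", "name_for_model", "name_for_human",
    "description_for_model", "description_for_human",
    "auth", "api", "logo_url", "contact_email", "legal_info_url",
]

AUTH_TYPES = ("none", "service_http", "user_http", "oauth")


def validate_manifest(raw: str) -> list[str]:
    """Parse a fetched manifest document and return a list of problems (empty if OK)."""
    try:
        manifest = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]

    problems = [
        f"missing required field: {field}"
        for field in REQUIRED_FIELDS
        if field not in manifest
    ]

    # api.url must be absolute, or the follow-up fetch of the OpenAPI doc fails.
    api = manifest.get("api", {})
    if isinstance(api, dict) and not urlparse(api.get("url", "")).scheme:
        problems.append("api.url must be an absolute URL")

    # auth.type must be one of the four supported modes.
    auth = manifest.get("auth", {})
    if isinstance(auth, dict) and auth.get("type") not in AUTH_TYPES:
        problems.append("auth.type must be one of none/service_http/user_http/oauth")

    return problems
```

A runtime would run checks like these before fetching api.url; running the same checks in CI catches a broken manifest before any client sees it.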
Practical application
A recommended rollout:
- Author the OpenAPI document first. Use clean operationId values, explicit parameter schemas, and descriptive operation summaries—the model relies on these as instructions.
- Write the manifest. Fill in every required field; pay particular attention to description_for_model, which acts as a system-prompt addendum for clients that synthesize prompts from the manifest.
- Pick the smallest auth mode that fits. Public read APIs should use "none"; authenticated APIs should prefer oauth or user_http over service_http to keep request authorization tied to the end user.
- Host both files at well-known paths. /.well-known/ai-plugin.json and (commonly) /.well-known/openapi.yaml keep the discovery surface predictable.
- Validate. Use a JSON Schema validator on the manifest, then run a discovery test against a runtime that supports it (LibreChat is a common reference because it documents the field set explicitly).
- Pair with MCP. Stand up an MCP server exposing the same tools and link to it from your agent integration documentation; this future-proofs the integration as more runtimes adopt MCP as primary.
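For the TODO Manager manifest shown earlier, the OpenAPI document authored in the first step might look like this minimal sketch; the paths, schemas, and operation names are illustrative, not prescribed by any spec.

```yaml
openapi: 3.0.3
info:
  title: TODO Manager API
  version: 1.0.0
  description: Create and list a user's TODOs.
servers:
  - url: https://example.com
paths:
  /todos:
    get:
      operationId: listTodos
      summary: List all TODOs for the current user.
      responses:
        "200":
          description: A JSON array of TODO items.
          content:
            application/json:
              schema:
                type: array
                items:
                  type: string
    post:
      operationId: createTodo
      summary: Add a new TODO item.
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                todo:
                  type: string
      responses:
        "201":
          description: The TODO was created.
```

Note the clean operationId values (listTodos, createTodo) and plain-language summaries: these are the strings the runtime feeds the model, so they do double duty as documentation and tool instructions.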
Common mistakes
- Vague description_for_model. This field instructs the model when to invoke the plugin. Generic descriptions cause both over- and under-invocation; be specific about the use cases the plugin serves.
- Broken api.url. A relative URL or a path that returns HTML instead of an OpenAPI document fails discovery silently in most clients.
- Missing schema_version. Older runtimes accept the manifest without it; newer ones reject it. Always include "v1".
- Wrong auth scope. Using service_http for a per-user API leaks data across users; using user_http when the API actually requires a service token causes 401 responses every time.
- Skipping logo_url. Most runtimes require it; an empty or unreachable URL causes manifest validation failures in some clients.
- Treating ai-plugin.json as the only integration surface. New runtimes prefer MCP; ship both for full coverage.
FAQ
Q: Is ai-plugin.json still used after the ChatGPT Plugins sunset?
Yes, in narrower scope. OpenAI retired the original ChatGPT Plugins surface in 2024, but the manifest format is still consumed by Custom GPTs (via Actions), LibreChat, and several open agent runtimes. Most new integrations now publish both an ai-plugin.json manifest for compatibility and an MCP server for richer agent runtimes.
Q: How does ai-plugin.json relate to MCP servers?
They solve the same problem at different layers. ai-plugin.json describes an HTTP API plus an OpenAPI document for runtimes that call HTTP endpoints directly. MCP defines a protocol with richer typed tools, prompt templates, and streaming, and is consumed natively by Claude Desktop and ChatGPT desktop apps. New publishers should ship both; ai-plugin.json acts as a fallback for HTTP-only consumers.
Q: Should I publish ai-plugin.json alongside an MCP manifest?
For most publishers, yes. The two formats target different runtimes and are inexpensive to publish together. Reuse the same OpenAPI document across both surfaces where possible, and keep the description_for_model and MCP tool descriptions aligned so the model sees a consistent capability set regardless of which path the runtime takes.
Q: What auth mode should I choose?
Use none for public read APIs, oauth for any user-scoped data, and service_http only when the integration is genuinely shared and not per-user. Most authenticated APIs end up on oauth because it lets the runtime broker token refresh and scope enforcement without the publisher trusting the runtime with long-lived credentials.
Q: Where should I host the manifest if my API is on a CDN?
The manifest must be served from the same origin as the OpenAPI document and the API endpoints to avoid cross-origin issues. If you front the API with a CDN, configure the CDN to pass /.well-known/ai-plugin.json through to the origin or serve a static copy from edge with the same Content-Type: application/json and CORS headers as the API.
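As a sketch, an nginx-based edge configuration implementing that advice might look like the following; the static path and upstream name are placeholders for your own setup.

```nginx
# Serve a static copy of the manifest from the edge with a JSON content type
# and permissive CORS, so agent runtimes on other origins can fetch it.
location = /.well-known/ai-plugin.json {
    root /var/www/static;            # static copy kept in sync with the origin
    default_type application/json;
    add_header Access-Control-Allow-Origin "*";
}

# Pass everything else under /.well-known/ through to the API origin,
# including the OpenAPI document.
location /.well-known/ {
    proxy_pass https://api-origin.internal;   # placeholder upstream
    proxy_set_header Host $host;
}
```

The exact-match `location =` block takes precedence over the prefix block in nginx, so the manifest is served from the edge while other well-known paths fall through to the origin.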
Sources
- OpenAI. "ChatGPT plugins" introduction — verified 2026-05-04 — reference manifest example. https://openai.com/index/chatgpt-plugins/
- OpenAI. "Winding down the ChatGPT plugins beta," 2024 — verified 2026-05-04 — sunset notice. https://help.openai.com/en/articles/8988022-winding-down-the-chatgpt-plugins-beta
- OpenAI chatgpt-retrieval-plugin GitHub. "ai-plugin.json" reference — verified 2026-05-04 — user_http and oauth auth examples. https://github.com/openai/chatgpt-retrieval-plugin/blob/main/.well-known/ai-plugin.json
- LibreChat docs. "Using official ChatGPT Plugins / OpenAPI specs" — verified 2026-05-04 — ongoing manifest support. https://github.com/fuegovic/Libre-Chat/blob/main/docs/features/plugins/chatgpt_plugins_openapi.md
Related Articles
What Is an MCP Server? Architecture and Citation Implications
An MCP server exposes tools, resources, and prompts to AI agents over a standardized protocol. Definition, architecture, comparisons, and citation implications.
404 Page AI Crawler Handling: Avoiding Citation Loss During Migrations
Migration playbook for keeping AI citations during URL changes — hard 404 vs soft 404, 410 Gone, redirect chains, sitemap cleanup, and refetch monitoring.
Accept-Encoding (Brotli, Gzip) for AI Crawlers
Specification for serving Brotli, gzip, and zstd to AI crawlers via Accept-Encoding negotiation: which bots support which codecs, fallback rules, and Vary handling.