LLM Citation Anchor Text Patterns: How Generative Engines Phrase Source Mentions
Each major AI engine renders citations differently — numbered superscripts on Perplexity, inline domain pills on ChatGPT, source chips on Gemini, parenthetical attributions on Claude, card carousels on Google AI Overviews. Knowing each anchor format lets writers craft quote-ready spans that surface cleanly in the engines they target.
TL;DR
- Perplexity uses numbered superscripts ([1], [2]) inline with a numbered sources list at the bottom of the answer.
- ChatGPT (Search) uses inline clickable domain pills (example.com) and footnote-style links, with a Sources tray.
- Gemini / Google AI Overviews use pill-style chips and a Sources card carousel; phrases like "according to…" are common.
- Claude uses inline parenthetical attributions, e.g. (Source, 2026), plus a sources tray, and exposes web_search_result_location spans via API.
- Copilot uses numbered superscripts similar to Bing/Perplexity, with hover preview cards.
Why anchor text patterns matter
AI citation surfaces are the new "snippet." The phrase, format, and location of an attribution determine whether a user clicks through, whether your brand name reaches the reader, and whether the engine's confidence model treats your content as a primary source. AI search has been shown to display citations inconsistently and inaccurately, which means structuring content for predictable extraction is part of optimization.
Per-engine reference
ChatGPT (Search and connectors)
- Inline anchor: Clickable domain pill rendered next to the cited claim, e.g. “…reduces latency by 27% (example.com).”
- Secondary anchors: Footnote-style superscript numbers in long answers; expandable Sources tray.
- Phrasing cues: “According to {brand}…”, “{brand} reports…”, “per {publication}…”
- What it rewards: Pages with a strong brand entity in the page title and a clean canonical URL. ChatGPT favors Wikipedia and large publishers (Wikipedia ~48% of citations in some datasets).
Perplexity
- Inline anchor: Numbered superscripts [1], [2], [3] directly after the claim sentence.
- Secondary anchors: Numbered sources list with thumbnail favicons, source titles, and short excerpts.
- Phrasing cues: Often answer-first without an explicit “according to,” relying on the superscript to carry attribution.
- What it rewards: Reddit (~47% of citations in some datasets), long-form blog posts, and content with extractable lists. Perplexity heavily quotes the first 2-3 sentences of well-structured paragraphs.
Gemini / Google AI Overviews
- Inline anchor: Pill-style chips embedded in the prose, sometimes with the source brand visible.
- Secondary anchors: Sources card carousel below the answer with thumbnail, title, snippet.
- Phrasing cues: “According to {brand}…”, “As explained on {publisher}…”, “{brand} notes that…”
- What it rewards: First-party official sites and content with strong on-page entity match (Yext data on 17.2M citations shows Gemini favors official websites more than the other engines).
Claude
- Inline anchor: Parenthetical (Source, YYYY) style attribution within the prose; clickable in claude.ai.
- Secondary anchors: Per-message sources tray; via API, web_search_result_location returns url, title, cited_text (≤150 chars), and encrypted_index (see the sketch after this list).
- Phrasing cues: “Per {publisher}…”, “{brand}’s documentation states…”, “As {author} notes…” — named-author phrasing is more common than on other engines.
- What it rewards: Smaller, niche outlets and standards-body documentation; sentence-span chunkability.
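Those API spans are easy to inspect directly. A minimal sketch, assuming the official anthropic Python SDK and the web search tool described in the API docs cited below; the model string and the query are placeholders:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model name
    max_tokens=1024,
    tools=[{"type": "web_search_20250305", "name": "web_search", "max_uses": 3}],
    messages=[{"role": "user", "content": "What anchor formats do AI engines use for citations?"}],
)

# Text blocks carry a `citations` list; web results appear as
# web_search_result_location entries with url, title, and cited_text.
for block in response.content:
    if block.type == "text" and getattr(block, "citations", None):
        for c in block.citations:
            if c.type == "web_search_result_location":
                print(f"{c.title} <{c.url}>")
                print(f'  cited_text (≤150 chars): "{c.cited_text}"')
```

Logging cited_text over a batch of queries is a cheap way to see which of your sentences Claude actually treats as quotable spans.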
Microsoft Copilot
- Inline anchor: Numbered superscripts (¹, ²) styled similarly to Bing's.
- Secondary anchors: Hover preview cards on the superscript.
- Phrasing cues: Mix of “according to…” and unattributed sentences; relies on superscript.
- What it rewards: Content well-indexed in Bing with valid schema and crisp meta descriptions.
Anchor format reference table
| Engine | Inline anchor | Secondary anchor | Common phrasing |
| --- | --- | --- | --- |
| ChatGPT (Search) | Domain pill (example.com) | Sources tray; footnote links | "According to {brand}…" |
| Perplexity | Superscript [1] | Numbered sources list | Often unattributed prose |
| Gemini / AI Overviews | Pill chip | Sources card carousel | "As {publisher} explains…" |
| Claude | Parenthetical (Source, 2026) | Sources tray; API spans | "Per {publisher}…" / "As {author} notes…" |
| Copilot | Superscript ¹ | Hover preview card | Mixed |
Implications for content writers
- Make brand entity unambiguous in the page title and first 100 words. ChatGPT and Gemini surface it in "according to…" phrasing; if the engine can’t find a clean entity, it falls back to bare domain pills.
- Front-load the citable claim. Perplexity and Copilot quote near-verbatim from the first 2-3 sentences of well-structured paragraphs.
- Date your claims inline. Claude prefers parenthetical (Source, 2026) spans; an explicit date is a magnet for that pattern.
- Use named-author bylines and Person schema. Claude and increasingly Gemini cite “As {author} notes…” style; without a named author you forfeit that anchor.
- Keep self-contained spans ≤ 150 characters. Claude’s cited_text cap is 150 chars; a tightly written answer-first sentence often becomes the literal citation (see the linter sketch after this list).
- Avoid clickbait or vague openers. Engines down-weight ambiguous openers when selecting which span to anchor a citation on.
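Most of these rules are mechanical enough to lint at draft time. A hypothetical checker, sketched under the assumptions above; the example claim sentence is invented for illustration:

```python
import re

MAX_SPAN = 150  # Claude's cited_text cap; a practical target for all engines

def check_span(sentence: str) -> list[str]:
    """Flag problems that make a sentence less likely to be lifted verbatim."""
    issues = []
    if len(sentence) > MAX_SPAN:
        issues.append(f"too long ({len(sentence)} chars > {MAX_SPAN})")
    if not re.search(r"\b(19|20)\d{2}\b", sentence):
        issues.append("no inline year for (Source, YYYY)-style attribution")
    if re.match(r"\s*(we|our|i)\b", sentence, re.IGNORECASE):
        issues.append("first-person opener; engines prefer neutral claims")
    return issues

# Hypothetical brand and statistic, used only to exercise the checks.
span = ("Acme's 2026 benchmark found that edge caching "
        "reduces median API latency by 27%.")
print(check_span(span) or "quote-ready")
```

Running every opening sentence of every H2 section through a check like this catches most spans that would otherwise be skipped during citation selection.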
Common misconceptions
- “Inline domain pills are stable.” They aren’t. ChatGPT regularly drops domain pills in favor of footnote links across UI revisions. Optimize for either format.
- “The [1] style is universal.” Only Perplexity and Copilot consistently use it. ChatGPT, Gemini, and Claude all render attributions differently.
- “AI engines always link the source.” No. Tow Center research found AI search tools frequently fabricate or misattribute citations. Brand strength reduces the risk of being misquoted without attribution.
FAQ
Q: Do AI engines link to specific page anchors?
Mostly to the page URL, occasionally to a #section anchor when the cited content lives in a clearly named section. Adding an id to each H2 increases the chance of a deep-linked citation, especially in Perplexity and Gemini.
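One way to add those ids at publish time, sketched in Python for a raw-HTML pipeline; most templating systems and markdown renderers have an equivalent built in:

```python
import re

def add_h2_ids(html: str) -> str:
    """Give each <h2> a slugified id so engines can deep-link the section."""
    def slugify(match: re.Match) -> str:
        text = match.group(1)
        slug = re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")
        return f'<h2 id="{slug}">{text}</h2>'
    return re.sub(r"<h2>(.*?)</h2>", slugify, html)

print(add_h2_ids("<h2>Anchor format reference table</h2>"))
# -> <h2 id="anchor-format-reference-table">Anchor format reference table</h2>
```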
Q: What is the difference between an AI citation and a brand mention?
A citation is a clickable link to your page; a mention is your brand name appearing in the answer text without a link. Both matter — mentions still drive recall and downstream branded search even when the user does not click.
Q: Do citations include the publication date?
Claude routinely includes the year in parenthetical citations. Gemini and ChatGPT sometimes display “published {date}” under the source card. Perplexity shows freshness color cues but no inline date.
Q: Should the citation phrase appear in my own content?
Yes — writing in a quote-ready style (answer-first, dated, attributed) increases the chance an engine will lift the sentence verbatim. Avoid first-person marketing voice in spans you want extracted.
Citations
- Columbia Journalism Review, "AI Search Has a Citation Problem" (2025).
- Discovered Labs, "AI Citation Patterns: How ChatGPT, Claude, and Perplexity Choose Sources" (2025).
- Yext Research, "AI Citation Behavior Across Models: 17.2 Million Citations" (2026).
- Anthropic, "Web search tool," Claude API Docs. Retrieved 2026-04-28.