AI Search Snippet Character Limits Reference
AI search engines pull citable snippets from source pages within predictable size budgets — Google AI Mode typically truncates passage selections near 160 characters (DEJAN, 2025), paragraph-style featured snippets cluster around 320 characters (Portent, 2021), and conversational engines like ChatGPT search, Perplexity, and Bing Copilot quote at the sentence or short-paragraph level rather than enforcing a published hard cap. Writers optimizing for AEO should fit each core answer into roughly 250-320 characters and re-measure their niche every 90 days.
TL;DR
- Google AI Mode passage snippets truncate near ~160 characters with an ellipsis while preserving complete thoughts (DEJAN, 2025).
- Google paragraph featured snippets (the legacy zero-click block reused inside AI Overviews) cluster around ~320 characters, with no observed paragraphs of 5+ sentences; list snippets display up to 8 items (Portent, 2021).
- ChatGPT search, Perplexity, and Bing Copilot quote citations at the sentence or short-paragraph level with no published hard cap; the observation-only target is ≤25 words / ~150 characters per cited sentence.
- Re-measure every 90 days. Limits drift as engines retune retrieval; always pair an internal number with a sample size and a measurement date.
Definition
This reference documents the observed character and word budgets that major AI search engines apply when extracting snippets from source pages. A snippet is the short answer-passage an engine quotes inline (a Google AI Overview bullet, a Perplexity answer block, a ChatGPT search citation, a Bing Copilot summary line) before linking back to the source.
Two distinct constraints matter here. First is the display limit — what the user sees before truncation. Google AI Mode, for example, cuts most passage selections at roughly 160 characters and signals continuation with "…" while keeping complete thoughts intact (DEJAN, 2025). Second is the selection window, which is the chunk size the engine considers when scoring a passage for inclusion. Selection windows are typically larger than display limits and are not officially published by any major engine. Where primary documentation is silent, this reference reports empirical observations and labels them as such.
Why this matters
Most AEO and GEO advice treats snippet length as a single number, but the engines disagree. A 50-word answer that fits comfortably inside a ChatGPT search citation can be cut off mid-sentence inside a Google AI Mode passage, and a paragraph that looks clean in a Perplexity answer block may fail to qualify for a Google paragraph featured snippet at all because it exceeds the ~320-character cluster Portent measured (Portent, 2021).
That mismatch creates a concrete authoring risk: a single answer paragraph cannot be optimized for every surface unless it is structured at the sentence level. Writers who design answers as a chain of self-contained sentences — each ≤25 words and citable on its own — give every engine a quotable unit that fits within its truncation window. This sentence-first approach is also what ChatGPT search and Perplexity tend to lift verbatim, because their citation logic prefers atomic claims with clear publisher attribution.
The second reason this matters is drift. Engines retune their retrieval and presentation layers continuously. Featured snippet display lengths shifted measurably between earlier Portent studies and the 2021 update (Portent, 2021), and AI-era systems iterate even faster. Treating a single number as canonical without a measurement date is the most common error in AEO playbooks today.
How it works
Each engine applies its own combination of display truncation, selection window, and presentation format. The table below summarizes the most reliable public numbers as of May 2026, with primary sources where available and explicit observation labels where not.
| Engine / Surface | Typical display limit | Source label | Notes |
|---|---|---|---|
| Google AI Mode (passage selection) | ~160 characters, truncated with "…" | Empirical analysis (DEJAN, 2025) | Selected passages preserve complete thoughts; short content is shown in full. |
| Google paragraph featured snippet (reused by AI Overviews) | ~320 characters typical cluster | Empirical study, desktop SERPs (Portent, 2021) | No paragraph snippets observed with 5+ sentences. |
| Google list featured snippet | Up to 8 bullet points displayed | Empirical study (Portent, 2021) | List length caps before character count becomes the constraint. |
| Google table featured snippet | 5×3 and 5×2 most common shapes | Empirical study (Portent, 2021) | Tables wider/taller than this typically truncate. |
| ChatGPT search (citation passage) | Sentence-level; no published hard cap | Observation only — no primary documentation | Cites at sentence or short-paragraph granularity; long source passages are summarized rather than displayed verbatim. |
| Perplexity (answer-block citation) | Sentence to short-paragraph; no published hard cap | Observation only — no primary documentation | Inline citations link the source after each cited sentence cluster. |
| Bing Copilot (summary line) | Sentence-level; no published hard cap | Observation only — no primary documentation | Surfaces 1-3 sentence excerpts per source citation. |
Methodology note. Where a row is labeled empirical, the cited study reports a measurement window and sample that the original publisher documented. Where a row is labeled observation only, no engine has published a character or word ceiling for snippet selection, and the claim reflects pattern observations across competitive SERPs rather than a single audited corpus. Writers who need internal numbers for their niche should run a measurement of ≥30 queries per engine, record the observation date, and re-measure on a 90-day cadence to catch drift.
For sentence-level construction specifically, the safe authoring target is ≤25 words / ~150 characters per sentence. Sentences in that range fit cleanly inside Google AI Mode's 160-character window, qualify as quotable units inside ChatGPT search and Perplexity citations, and chain into paragraphs that stay under Portent's ~320-character paragraph-snippet ceiling.
Practical application
To use this reference in a content workflow, treat each engine's limit as an authoring constraint on the smallest extractable unit, not on the whole article.
- Lead every answer with a self-contained sentence ≤25 words. This is what AI Mode, ChatGPT search, and Perplexity are most likely to lift. Number-first or definition-first phrasing maximizes citation lift because it survives truncation gracefully.
- Keep the answer paragraph under ~320 characters. That keeps the paragraph eligible for a Google paragraph featured snippet (Portent cluster) while giving Perplexity and Bing Copilot a clean two-to-three-sentence block to cite.
- Use lists when the answer is enumerative. Cap visible items at eight; engines that surface list snippets typically truncate beyond that (Portent, 2021). Each list item should also be self-contained at the ~150-character mark.
- Avoid cliffhanger sentences. AI Mode preserves "complete thoughts" when truncating (DEJAN, 2025), so a sentence that reads as a complete idea is more likely to survive selection than one that ends mid-clause.
- Mirror the answer in your meta description and on-page TL;DR. Engines that hesitate to extract from body copy will fall back to the meta description. Keeping a 150-160 character version of the core answer in the meta description gives the snippet selector a low-risk option.
- Re-measure on a 90-day cadence. Set review_cycle_days: 90 in your content frontmatter and run a small measurement (≥30 queries) for each engine you care about. Log observed character ranges and the measurement date in a tracking sheet so drift is visible.
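The frontmatter and tracking fields described above might be shaped like this. This is a minimal sketch: only `review_cycle_days: 90` comes from this reference; every other key and value is illustrative, not a measured number.

```yaml
---
title: "AI Search Snippet Character Limits Reference"
review_cycle_days: 90          # re-measurement cadence from this reference
last_measured: 2026-05-01      # illustrative date -- record your own
snippet_observations:          # illustrative tracking fields
  google_ai_mode:
    sample_size: 30            # queries measured (>=30 recommended)
    median_chars: 152          # placeholder value
---
```

Storing the sample size and date next to each number keeps the drift history auditable.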
A practical authoring template for the lead block:
[Sentence 1: ≤25 words, answer-first, self-contained.]
[Sentence 2: ≤25 words, adds the most important qualifier.]
[Sentence 3: ≤25 words, names the source authority or scope.]

That three-sentence stack lands at roughly 250-320 characters total — the cross-engine safe zone.
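The budgets in the template above can be enforced with a small checker in a build or editorial step. This Python sketch uses the thresholds from this reference; the function name and the naive sentence-splitting heuristic are assumptions, not part of any engine's documented behavior.

```python
import re

# Budgets from this reference (measured values drift; re-check every 90 days).
MAX_WORDS_PER_SENTENCE = 25   # ~150 chars, fits AI Mode's ~160-char window
MAX_PARAGRAPH_CHARS = 320     # Portent (2021) paragraph-snippet cluster
MIN_PARAGRAPH_CHARS = 250     # lower edge of the cross-engine safe zone

def check_lead_block(paragraph: str) -> list[str]:
    """Return a list of budget violations for a lead answer paragraph."""
    problems = []
    # Naive sentence split on ., !, ? followed by whitespace -- a heuristic,
    # not a full tokenizer (abbreviations like "e.g." will fool it).
    sentences = [s.strip()
                 for s in re.split(r"(?<=[.!?])\s+", paragraph.strip())
                 if s.strip()]
    for i, sentence in enumerate(sentences, start=1):
        words = len(sentence.split())
        if words > MAX_WORDS_PER_SENTENCE:
            problems.append(
                f"sentence {i}: {words} words (max {MAX_WORDS_PER_SENTENCE})")
    total = len(paragraph.strip())
    if total > MAX_PARAGRAPH_CHARS:
        problems.append(f"paragraph: {total} chars (max {MAX_PARAGRAPH_CHARS})")
    elif total < MIN_PARAGRAPH_CHARS:
        problems.append(
            f"paragraph: {total} chars (below the ~{MIN_PARAGRAPH_CHARS}-char safe zone)")
    return problems
```

An empty return value means the lead block fits the cross-engine safe zone; anything else names the sentence or paragraph that blew its budget.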
Common mistakes
- Treating a single number as universal. A "300-character limit" optimization rule fails as soon as the answer has to fit AI Mode's ~160-character display window. Build for the smallest relevant constraint, not the average.
- Ignoring complete-thought preservation. Engines truncate at sentence or clause boundaries, not at exact character counts. A 300-character paragraph that ends in a dependent clause will be cut earlier than a 320-character paragraph composed of complete sentences (DEJAN, 2025).
- Burying the answer. If the citable sentence is the third sentence of the paragraph, ChatGPT search and Perplexity may still find it, but Google AI Mode and AI Overviews — which favor the leading passage — will skip the page in favor of a competitor whose answer leads the paragraph.
- Citing a number without a date. Reference tables that omit measurement dates lose value within a quarter. Always pair a snippet-length claim with a publisher and a year.
- Mixing list and paragraph snippets in one answer. Engines extract one snippet shape at a time. A hybrid block (intro sentence + bullets) often loses to a clean list or a clean paragraph from a competitor.
FAQ
Q: Are AI search snippet limits hard or soft?
Most engines apply soft limits with truncation at sentence boundaries. Google AI Mode cuts passage selections near 160 characters but preserves complete thoughts rather than chopping mid-word (DEJAN, 2025). Featured snippet paragraphs cluster around ~320 characters, but Portent's study did not find a single hard ceiling either; the engine prefers complete sentences over hitting an exact character target (Portent, 2021).
Q: How do I measure snippet limits for my own niche?
Run a controlled query corpus of at least 30 representative searches per engine, capture the snippet text and source URL for each, and record the measurement date. Compute the median and 90th-percentile character counts per engine, label each row with the sample size, and store the result in a 90-day review document. This is the same shape of methodology Portent and DEJAN used for their published studies (Portent, 2021; DEJAN, 2025).
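The median and 90th-percentile computation described above takes only a few lines of Python. The observation numbers and field names below are illustrative assumptions, not real measurements; substitute your own captured snippet lengths.

```python
from statistics import median, quantiles

# Observed snippet character counts per engine -- illustrative values only.
observations = {
    "google_ai_mode": [142, 155, 160, 138, 159, 147, 161, 150, 156, 149],
    "perplexity":     [118, 240, 190, 210, 175, 160, 130, 205, 188, 150],
}

def summarize(lengths: list[int]) -> dict:
    """Median and 90th-percentile character counts for one engine's sample."""
    # quantiles(..., n=10) returns 9 cut points; index 8 is the 90th percentile.
    return {
        "n": len(lengths),
        "median_chars": median(lengths),
        "p90_chars": quantiles(lengths, n=10)[8],
    }

for engine, lengths in observations.items():
    row = summarize(lengths)
    # When logging, pair every number with its sample size and measurement date.
    print(engine, row)
```

Logging the p90 alongside the median makes truncation risk visible: author to the median, but expect the engine to occasionally display up to the p90.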
Q: Do snippet limits change over time?
Yes — measurably. Featured snippet character clusters shifted between earlier Portent measurements and the 2021 update, and the introduction of AI Overviews and AI Mode has added new passage-selection logic on top of the legacy snippet stack (Portent, 2021; DEJAN, 2025). A 90-day re-measurement cadence is the minimum to catch material drift before it affects citation share.
Q: Why do limits differ across engines?
Each engine optimizes its snippet for a different surface. Google AI Mode renders passages inline within a conversational response and benefits from short, scannable units (~160 chars). Featured snippets sit above the ten-blue-link list and tolerate ~320-character paragraphs because users read them as a primary answer. ChatGPT search and Perplexity present citations alongside a synthesized answer, so they quote at the sentence level rather than reproducing whole paragraphs. Bing Copilot blends both modes and surfaces 1-3-sentence excerpts. The shapes are different because the user experiences are different.
Q: Should I optimize for the shortest limit or the longest?
Optimize for the smallest extractable unit — the sentence at ≤25 words / ~150 characters. A page whose lead sentence fits AI Mode's 160-character window will also qualify for the wider featured snippet ceiling, the ChatGPT search citation, the Perplexity answer block, and the Bing Copilot summary. The reverse is not true: a page tuned only to a 320-character paragraph budget will lose AI Mode placements to competitors whose lead sentence is tighter.
Q: Where do these numbers come from?
The two empirical anchors in this reference are Portent's featured snippet length study (Portent, 2021, desktop SERPs) and DEJAN's AI Mode passage-selection analysis (DEJAN, 2025). Sentence-level estimates for ChatGPT search, Perplexity, and Bing Copilot are observation-only because none of those vendors has published a character or word ceiling for citation extraction. Writers who need a primary source for those engines should generate one with a documented measurement methodology rather than borrow a vendor blog estimate without a date.
Related Articles
AEO Snippet Length Framework: Tuning Answer Block Word Counts by Engine and Intent
AEO snippet length framework that maps answer block word counts to engine and query intent so your content lands in featured snippets and AI quotes.
How to Write AI-Citable Answers
How to write answers that AI engines like ChatGPT, Perplexity, and Google AI Overviews extract and cite — answer-first prose, length, entities, and source-anchoring.
AI Snippet Truncation Patterns: How ChatGPT, Perplexity, and Google AI Overviews Cut Answers
AI snippet truncation patterns reference: how ChatGPT, Perplexity, and Google AI Overviews cut citations, where breaks occur, and how to author for them.