GEO Topical Decay Framework: When and How AI Citations Fade by Content Type
AI citations decay at a baseline half-life of about 4-5 weeks across major LLMs, but the rate is not uniform. The GEO Topical Decay Framework segments content into six types—news, tutorial, comparison, guide, framework, and reference—each with a distinct decay curve, refresh trigger, and recommended cadence.
TL;DR
- The average source loses half its AI citations in 4-5 weeks, but content type changes that curve dramatically.
- News and trend pages decay fastest (1-2 week half-life); reference definitions decay slowest (9-12+ months).
- Use the decay tier of a page, not a generic calendar, to schedule refreshes—and pair every refresh with a substantive change, not a date bump.
Why a content-type-aware decay model matters
Most GEO advice treats citation decay as a single number. Scrunch's analysis of 3.5 million citation events (Sept 2025-Mar 2026) found an average half-life of 4-5 weeks across platforms, and Ahrefs reported that AI-cited content is 25.7% fresher than the organic results for the same queries. These averages are useful, but they hide an important reality for editorial teams: a glossary entry and a market-recap article do not lose citations on the same schedule.
A single half-life forces every page onto the same refresh treadmill. Some pages get over-refreshed (wasting editorial capacity), while others rot quietly inside a quarterly cycle. The GEO Topical Decay Framework replaces that flat curve with six content-type-specific decay tiers, each with its own refresh trigger.
How AI citation decay actually works
LLMs and AI search engines blend three retrieval signals when deciding which sources to cite: relevance, authority, and recency. Recency is weighted more aggressively than in classic search. Three findings frame the model:
- Recency bias is platform-dependent. ChatGPT shows the strongest recency preference—its top-cited pages average 458 days younger than organic results on the same queries, and 76.4% of ChatGPT's most-cited pages were updated within the last 30 days.
- Decay is exponential, not linear. A longitudinal study tracking 500+ citations across ChatGPT, Perplexity, and Gemini found 62% of month-1 sources were gone by month 3, and only 18% were still cited at month 6.
- Freshness has a sweet spot. A Search Engine Land analysis found pages 30-89 days old performed best in ChatGPT citations—pages newer than 30 days underperformed because they had not built retrieval signals yet, and pages over two years old underperformed because they were treated as stale.
In other words, decay is real, but the right answer is not "refresh everything weekly." It is "match refresh cadence to the content's intrinsic rate of change."
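As a rough mental model, the half-life figures above can be plugged into a simple exponential-decay function. This is a sketch, not any study's actual model: the 31.5-day value is just the midpoint of the reported 4-5 week baseline, and the longitudinal data (18% of sources still cited at month 6) shows real curves flatten for durable content rather than decaying purely exponentially.

```python
def retention(days: float, half_life_days: float) -> float:
    """Fraction of a page's original citations expected to survive
    after `days`, assuming simple exponential decay."""
    return 0.5 ** (days / half_life_days)

# Baseline: 4-5 week half-life; 31.5 days is the midpoint (an assumption).
print(round(retention(90, 31.5), 2))  # → 0.14, i.e. ~14% of citations left at month 3
```

Swapping in a tier-specific half-life (for example, 70 days for a T2 tutorial) is what turns this from a flat curve into the tiered model below.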
The six decay tiers
The framework defines six tiers, ranked from fastest to slowest decay. Each tier has a recommended refresh cadence and a primary refresh trigger.
| Tier | Content type | Citation half-life | Refresh cadence | Primary refresh trigger |
|---|---|---|---|---|
| T1 | News / trend recap | 1-2 weeks | Weekly while live, then archive | New event in topic |
| T2 | Tutorial / how-to | 6-10 weeks | Every 60-90 days | Tooling/UI change |
| T3 | Comparison / vs page | 8-12 weeks | Every 90 days | Competitor pricing or feature change |
| T4 | Guide / playbook | 12-20 weeks | Every 4-6 months | New best-practice or data point |
| T5 | Framework / model | 6-12 months | Every 6-12 months | Model invalidation or new evidence |
| T6 | Reference / definition | 9-12+ months | Annual minimum | Terminology drift or canonical change |
These ranges are derived by applying the platform-level half-life curve from Scrunch and the citation-rate lift from Quattr (3.2x for pages refreshed within 30 days) to content-type performance data from Presence AI's content-type benchmark and Search Engine Land's structure study.
T1 — News and trend recaps
This tier covers launch coverage, funding rounds, model releases, and any page tied to a dated event. Its half-life is the shortest because the underlying topic itself ages: once a new event in the same theme occurs, older recaps lose context and citations migrate to newer URLs.
- Refresh cadence: weekly while the topic is active; archive or merge when the news cycle ends.
- Trigger: any new event in the same topic cluster (release, ruling, acquisition).
- Anti-pattern: keeping news pages in active rotation with date bumps and no new substance.
T2 — Tutorials and how-tos
Tutorials decay when the underlying tool, API, or UI changes. ChatGPT's preference for pages 30-89 days old bites hardest in this tier, because tutorials must show current screenshots and command syntax to be cited.
- Refresh cadence: every 60-90 days.
- Trigger: version bump, deprecated parameter, changed UI flow, or new prerequisite tool.
- What to update: screenshots, code blocks, version pins, error messages, and the validation step.
T3 — Comparison pages
"X vs Y" pages decay whenever either side changes pricing, plan structure, or feature parity. Comparison content has the highest extraction rate among AI engines (Claude in particular favors structured comparisons), but only when the data table is current.
- Refresh cadence: every 90 days.
- Trigger: competitor pricing or plan change, new feature in either product, new entrant.
- What to update: the comparison table, the verdict, and the "when to use X / when to use Y" sections.
T4 — Guides and playbooks
Guides describe how to do something well, not how to operate one specific tool. They decay when best-practice consensus shifts or a new dataset reframes the topic. Stacker's source-decay research found that distributed, comprehensive content lasts roughly twice as long in LLM responses as one-off pages, which lengthens this tier's effective half-life.
- Refresh cadence: every 4-6 months.
- Trigger: new authoritative study, methodology change, or measurable shift in industry benchmarks.
- What to update: opening data points, internal links to newer references, examples, and the FAQ.
T5 — Frameworks and models
Frameworks (this article being one) describe a model for thinking about a problem. They decay slowly because the model itself is the artifact, but they require version bumps when underlying assumptions change—for example, when platform behavior invalidates a tier.
- Refresh cadence: every 6-12 months.
- Trigger: model invalidation, new platform behavior, or a competing framework that supersedes a tier.
- What to update: the framework version, decay tier ranges, and any tier whose evidence base has shifted.
T6 — Reference pages and definitions
Reference pages and glossary entries are the slowest tier. Definitions of stable terms (for example, "canonical URL" or "schema markup") can hold citations for a year or more if the terminology itself has not drifted. The risk is semantic drift—the term acquires new connotations, and AI engines start preferring competitors that frame the term in current language.
- Refresh cadence: annual minimum, plus an out-of-band update on terminology drift.
- Trigger: the canonical question changes, a new alias becomes dominant, or a related concept emerges.
- What to update: the definition itself, aliases, related concepts, and the canonical question in the frontmatter.
Refresh triggers vs refresh calendars
A scheduled refresh calendar will always trail reality. The framework recommends running calendar floors and triggers in parallel:
- Calendar floor: the minimum cadence in the table above. No page in a tier should miss this.
- Trigger ceiling: if a trigger fires, the page jumps the queue regardless of when it was last refreshed.
Triggers worth instrumenting:
- A monitored competitor URL changes (T3 trigger).
- A canonical platform doc gets an `updated_at` bump (T2 trigger).
- An LLM stops citing the page for two consecutive weekly checks (any tier, escalation trigger).
- A new term appears in three or more cited competitor pages (T6 semantic-drift trigger).
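Of the triggers above, the citation-loss escalation is the easiest to instrument, because it only needs a weekly yes/no check per page. A minimal sketch—the two-miss window matches the "two consecutive weekly checks" rule, but the class name and API are my own:

```python
from collections import deque

class CitationLossAlert:
    """Escalation trigger: fire when a page goes uncited for two
    consecutive weekly checks (any tier)."""

    def __init__(self, window: int = 2):
        # Keep only the most recent `window` check results.
        self.history = deque(maxlen=window)

    def record_check(self, was_cited: bool) -> bool:
        """Record one weekly check; return True if the alert fires."""
        self.history.append(was_cited)
        return (len(self.history) == self.history.maxlen
                and not any(self.history))

alert = CitationLossAlert()
alert.record_check(True)          # cited this week → no alert
alert.record_check(False)         # one miss → not yet
print(alert.record_check(False))  # second consecutive miss → True
```

A single cited week resets the streak, so transient retrieval noise does not page the editorial team.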
What counts as a real refresh
Quattr observed that pages refreshed within 30 days are cited at 3.2x the rate of older pages—but only when the refresh is substantive. Date bumps without content change do not generate the lift. A real refresh, in this framework, requires at least one of:
- New evidence—an added data point, source, or quoted statistic from the last 12 months.
- Structural change—a new section, a rebuilt FAQ, or a corrected comparison table.
- Re-grounding—replacing a generic claim with one tied to a verifiable source.
When you make any of those changes, also update `updated_at`, `last_reviewed_at`, and the `version` field in the frontmatter so that retrieval pipelines see a coherent freshness signal.
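One way to enforce the "substantive change or no date bump" rule is to gate the frontmatter update behind an explicit flag, so a date-only bump is impossible by construction. A sketch with hypothetical helpers; the field names match those used in this framework, but your CMS's frontmatter schema may differ:

```python
from datetime import date

def bump_freshness(frontmatter: dict, substantive: bool) -> dict:
    """Update freshness fields only when the refresh was substantive,
    so date signals stay coherent with real content change."""
    if not substantive:
        raise ValueError("refusing a date-only bump: add new evidence, "
                         "structure, or re-grounding first")
    today = date.today().isoformat()
    fm = dict(frontmatter)  # copy; do not mutate the caller's dict
    fm["updated_at"] = today
    fm["last_reviewed_at"] = today
    # Minor version bump, e.g. "1.2" -> "1.3".
    major, minor = map(int, fm.get("version", "1.0").split("."))
    fm["version"] = f"{major}.{minor + 1}"
    return fm
```

In practice `substantive` would be set by the editor checking at least one of the three boxes above, not hardcoded.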
Applying the framework
A practical workflow for an editorial team:
- Tag every page with its tier. Tier is a property of content type, not of individual pages, but check edge cases (a tutorial that depends on a stable CLI may belong in T4 rather than T2).
- Set the calendar floor for each tier. Wire it into your CMS or Notion database as a refresh-due date.
- Instrument triggers. At minimum: competitor change watch (T3), platform doc watch (T2, T6), and citation-loss alert (any tier).
- Score every refresh. Use the citation readiness checklist to make sure the refreshed page still meets extraction quality bars.
- Re-tier after every major rewrite. A page whose underlying topic has shifted may need to move tiers (e.g. a guide that has become a framework).
The framework is intentionally simpler than a per-URL decay model. Per-URL models give a more accurate prediction but require citation-tracking infrastructure most teams do not yet have. Tier-based decay is the minimum viable model that beats a single half-life.
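The calendar-floor/trigger-ceiling logic in the workflow reduces to a few lines. The floor values below are my reading of the upper bound of each tier's recommended cadence from the table (T1 while the topic is live); a fired trigger makes the page due immediately:

```python
from datetime import date, timedelta

# Calendar floors in days per tier, taken from the upper bound of each
# recommended cadence in the tier table (T1 assumes a live news cycle).
CALENDAR_FLOOR = {"T1": 7, "T2": 90, "T3": 90, "T4": 180, "T5": 365, "T6": 365}

def refresh_due(tier: str, last_refreshed: date,
                trigger_fired: bool = False) -> date:
    """Calendar floor with a trigger ceiling: a fired trigger makes the
    page due now, regardless of its scheduled date."""
    if trigger_fired:
        return date.today()
    return last_refreshed + timedelta(days=CALENDAR_FLOOR[tier])

print(refresh_due("T3", date(2026, 1, 1)))  # → 2026-04-01
```

Wiring this into a CMS or Notion database means recomputing the due date on every refresh and on every trigger event, then sorting the editorial queue by it.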
FAQ
Q: What is the GEO topical decay framework in one sentence?
It is a six-tier model that maps content types to AI-citation half-lives and refresh triggers, so editorial teams can schedule refreshes by tier instead of by a single global cadence.
Q: How is this different from a generic content-decay model?
Generic models report one half-life curve for all content. This framework segments content into six tiers (news through reference) because the empirical decay rate differs by an order of magnitude across them, and because each tier has a distinct refresh trigger.
Q: Which AI platform decays citations fastest?
ChatGPT shows the most aggressive recency bias: 76.4% of its most-cited pages were updated within the last 30 days, and its citations average 458 days younger than the organic results for the same queries. Perplexity is close behind, with about 50% of its 2026 citations published or updated in the same year.
Q: How often should I refresh a reference or glossary page?
Annually at minimum, plus an out-of-band refresh whenever you detect terminology drift—a new alias for the term appearing in three or more cited competitor pages, or a measurable change in the canonical question users ask. Reference pages are the slowest tier, but they still decay.
Q: Does refreshing the date alone help?
No. Citation lift requires a substantive change—new evidence, structural updates, or re-grounded claims. Date-only bumps do not move the needle and may erode trust if AI engines or auditors detect the pattern.
Sources
- Scrunch + Stacker, "The half-life of AI citations," 2026. https://scrunch.com/blog/half-life-of-ai-citations
- Ahrefs, "Do AI assistants prefer to cite fresh content?" 2025. https://ahrefs.com/blog/do-ai-assistants-prefer-to-cite-fresh-content/
- ZipTie, "Content refresh strategy for AI citations," 2026. https://ziptie.dev/blog/content-refresh-strategy-for-ai-citations/
- r/GEO_optimization, "We measured how long AI citations actually last," 2026. https://www.reddit.com/r/GEO_optimization/comments/1sik2sf/
- Search Engine Land, "ChatGPT citations reward ranking and precision over length," 2025. https://searchengineland.com/chatgpt-citations-ranking-precision-length-study-474538
- Quattr, "AI search & content freshness," 2026. https://www.quattr.com/blog/content-freshness
- Presence AI, "AI citation rates research," 2025. https://presenceai.app/blog/ai-search-citation-rates-research-which-content-gets-cited
- Stacker, "Most AI citations fade in weeks," 2026. https://stacker.com/blog/source-decay-research-the-stacker-network-effect-on-ai-citation-persistence
- The HOTH, "Understanding semantic drift and content decay," 2026. https://www.thehoth.com/blog/semantic-drift/