Citation Half-Life Refresh Cadence Framework: Platform-Specific Update Schedules for AI Search
AI citation half-life averages 4.5 weeks across platforms (Scrunch and Stacker, 3.5 million events, March 2026) but varies sharply by engine: ChatGPT 3.4 weeks, Perplexity 5.8 weeks, Google AI Mode 4.3 weeks, AI Overviews 4.7 weeks, Gemini 4.6 weeks. This framework converts that data into per-platform refresh cadences, tiered by content value, with triggers that escalate updates when AI Overview inclusion or citation share drops.
TL;DR
If you optimize for ChatGPT, refresh top pages every 3 to 4 weeks. For Perplexity, every 5 to 6 weeks is enough. For Google AI Overviews and AI Mode, target a 4 to 5 week cadence. Tier the rest of your library by traffic and citation value. Treat AI Overview drop-off and citation share decline as out-of-cycle triggers, not quarterly review items.
What citation half-life actually means
Citation half-life is the time it takes for half the citations a piece of content earned in week zero to fall out of AI-generated answers. It is not the page going down. It is AI engines rotating to fresher sources.
The Scrunch and Stacker study of 3.5 million citation events (September 2025 to March 2026) puts the cross-platform median at 4.5 weeks. Per-platform numbers diverge:
| Platform | Non-network half-life (weeks) |
|---|---|
| ChatGPT (OpenAI) | 3.4 |
| Google AI Mode | 4.3 |
| Gemini | 4.6 |
| Google AI Overviews | 4.7 |
| Perplexity | 5.8 |
A second longitudinal study of more than 500 citations across ChatGPT, Perplexity, and Gemini (r/GEO_optimization, six months) found 62% of cited sources were gone by month three. Only 18% held citations for the full six-month window. That sticky 18% shared traits: updated within 30 days, more than 2,000 words, original data, and corroborating mentions on independent domains.
Ahrefs separately reported AI Overviews change roughly every 2 days at the surface level, though the underlying source mix is more stable than the phrasing.
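If you log weekly citation counts per page from your own monitoring, you can estimate half-life directly instead of leaning on the industry averages above. A minimal sketch in Python, assuming a hypothetical list of weekly counts with week zero first:

```python
def citation_half_life_weeks(weekly_counts: list[int]) -> float | None:
    """Estimate the week at which citations fall to half the week-zero count.

    Uses the first downward crossing of the 50% threshold, with linear
    interpolation between the surrounding weeks. Returns None if the
    series never decays to half within the observed window.
    """
    if not weekly_counts or weekly_counts[0] <= 0:
        return None
    threshold = weekly_counts[0] / 2
    for week, count in enumerate(weekly_counts):
        if count <= threshold:
            prev = weekly_counts[week - 1]
            # Interpolate between the last week above and first week at/below 50%.
            fraction = (prev - threshold) / (prev - count) if prev != count else 0.0
            return (week - 1) + fraction
    return None  # still above half at the end of the window


# Hypothetical counts for one page: 40 citations in week zero, decaying after.
print(citation_half_life_weeks([40, 38, 31, 22, 17, 12]))  # ~3.4 weeks
```

First-crossing with interpolation is crude, but it is stable enough for cadence planning; fit an exponential decay curve instead if you have longer series.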
Why platforms differ
Engines refresh on different economics:
- ChatGPT cycles fastest because OpenAI can afford aggressive refresh against its index, prioritizing recency over source loyalty.
- Perplexity holds longest because its real-time citation model favors sources it has already vetted, conserving compute and reducing answer variance.
- Google AI surfaces (AI Mode, AI Overviews, Gemini) cluster in the middle, suggesting a shared citation refresh cycle across Google's AI ecosystem.
Quattr's analytics show pages refreshed within 30 days are cited by ChatGPT at 3.2x the rate of older pieces. ZipTie reports 76.4% of ChatGPT's top-cited pages were updated within 30 days, and AI-cited content is 25.7% fresher on average than traditionally ranked content.
The framework: tiered cadence by platform and value
Use this matrix as the default cadence; a code sketch encoding it follows the tier list. Move tiers up or down based on your own platform mix and citation analytics.
Tier 1 - flagship pillar pages
High-traffic, high-pipeline-value pages that you actively want cited.
- ChatGPT-priority: 3 weeks
- Perplexity-priority: 5 weeks
- Google AI surfaces priority: 4 weeks
- Multi-platform default: 3 to 4 weeks
Updates must be substantive (new data, new section, refreshed examples), not date stamps alone.
Tier 2 - supporting articles in cited clusters
Secondary pieces that earn citations in their cluster.
- ChatGPT-priority: 6 weeks
- Perplexity-priority: 8 weeks
- Google AI surfaces priority: 6 to 8 weeks
- Multi-platform default: 6 weeks
Tier 3 - long-tail and reference content
Pages that do not earn citations directly but support hub authority.
- All platforms: quarterly (12 to 13 weeks)
- Annual deep refresh required.
Tier 4 - product and pricing pages
Monthly cadence regardless of platform. Engines treat these as transactional and demand recency.
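To make the matrix operational, here is a minimal sketch that encodes it as a lookup table a content calendar script can query. The tier and platform keys are naming assumptions, and week ranges are collapsed to a single scheduling value:

```python
# Refresh cadence in weeks by (tier, priority platform).
# Ranges from the matrix collapse to a midpoint for scheduling.
CADENCE_WEEKS = {
    ("tier1", "chatgpt"): 3,
    ("tier1", "perplexity"): 5,
    ("tier1", "google_ai"): 4,
    ("tier1", "multi"): 3.5,
    ("tier2", "chatgpt"): 6,
    ("tier2", "perplexity"): 8,
    ("tier2", "google_ai"): 7,
    ("tier2", "multi"): 6,
}


def refresh_cadence(tier: str, platform: str) -> float:
    """Return the refresh cadence in weeks for a page's tier and priority platform."""
    if tier == "tier3":
        return 12.5        # quarterly, all platforms
    if tier == "tier4":
        return 52 / 12     # monthly regardless of platform
    return CADENCE_WEEKS[(tier, platform)]


print(refresh_cadence("tier1", "chatgpt"))  # 3
```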
Out-of-cycle triggers
Do not wait for the cadence to fire when any of these happen:
- AI Overview inclusion drops on a head term for two consecutive weeks.
- Citation share for a tracked query falls more than 30% week-over-week.
- Competitor publishes a definitive new asset that closes your coverage gap.
- Compliance, product, or pricing change invalidates a claim.
- Underlying data referenced in the page is older than 12 months.
When a trigger fires, refresh the page and re-publish with a fresh updated_at timestamp and a changelog block.
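The first two triggers are mechanical and worth automating. A minimal sketch, assuming hypothetical weekly series from your tracking tool: citation share per tracked query and a boolean AI Overview inclusion flag per head term:

```python
def share_drop_trigger(weekly_share: list[float], threshold: float = 0.30) -> bool:
    """Fire when citation share falls more than 30% week-over-week."""
    if len(weekly_share) < 2 or weekly_share[-2] == 0:
        return False
    drop = (weekly_share[-2] - weekly_share[-1]) / weekly_share[-2]
    return drop > threshold


def aio_dropoff_trigger(weekly_included: list[bool]) -> bool:
    """Fire when AI Overview inclusion is lost for two consecutive weeks."""
    return (
        len(weekly_included) >= 2
        and not weekly_included[-1]
        and not weekly_included[-2]
    )


# Hypothetical tracking data for one query and one head term.
print(share_drop_trigger([0.22, 0.21, 0.13]))           # True: ~38% WoW drop
print(aio_dropoff_trigger([True, True, False, False]))  # True: two weeks out
```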
What counts as a refresh that earns re-citation
Not all updates earn re-citation. The 18% sticky cohort in the longitudinal study shared specific traits. Treat these as the minimum bar.
- Add at least one new data point or stat with a 2026 source.
- Replace at least one example or screenshot with a current one.
- Restructure or add at least one extractable section (FAQ entry, comparison row, definition box).
- Update the visible last-reviewed date and any schema dates (dateModified in Article schema, lastReviewed in MedicalWebPage where relevant).
- Add a changelog summary at the bottom of the page.
A cosmetic date-only update will not move the needle. ChatGPT and Perplexity both detect content-level drift and weight it more than metadata recency.
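For the schema step, a minimal sketch that bumps dateModified in an Article JSON-LD block; the headline and dates are placeholders:

```python
import json
from datetime import date

# Hypothetical Article schema; bump dateModified only with substantive changes.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example pillar page",          # placeholder
    "datePublished": "2025-11-03",              # placeholder original date
    "dateModified": date.today().isoformat(),   # set on each real refresh
}

print(json.dumps(article_schema, indent=2))
```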
How to budget the cadence across your library
Most teams cannot refresh every Tier 1 page every three weeks. Use this rule of thumb:
- Identify the 20 highest-value cited pages (traffic times pipeline value).
- Stagger them across the cadence: 5 to 7 pages per week if your top platform is ChatGPT, 3 to 4 per week if your top platform is Perplexity.
- Route Tier 2 and Tier 3 through a quarterly batch process.
- Reserve roughly 20% of the team's update capacity for out-of-cycle trigger events.
If the math does not work, prune lower-tier pages rather than thin the Tier 1 cadence. Stale pillars cost more in lost citations than long-tail pruning costs in lost coverage.
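The stagger math is simple enough to script. A minimal sketch that spreads the top pages evenly across one cadence cycle so weekly workload stays flat; the page IDs are placeholders:

```python
def stagger(pages: list[str], cadence_weeks: int) -> dict[int, list[str]]:
    """Assign each page a week slot within one cadence cycle, round-robin."""
    schedule: dict[int, list[str]] = {week: [] for week in range(cadence_weeks)}
    for i, page in enumerate(pages):
        schedule[i % cadence_weeks].append(page)
    return schedule


top_pages = [f"page-{n:02d}" for n in range(1, 21)]  # 20 highest-value pages
for week, batch in stagger(top_pages, cadence_weeks=3).items():
    print(f"week {week}: {len(batch)} pages")  # 7, 7, 6 on a 3-week ChatGPT cycle
```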
Measuring whether the cadence works
Track three metrics, not one:
- Citation share for your monitored query set per platform (weekly).
- Time-to-recitation after each refresh (median days).
- Sticky rate: % of refreshed pages that retain citations beyond two half-life cycles.
A healthy program lifts citation share month over month, holds time-to-recitation under 21 days for ChatGPT and under 14 days for Perplexity, and pushes sticky rate above 40%.
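A minimal sketch of all three metrics, assuming a hypothetical refresh log that records days until the next citation (None if never re-cited) and whether the page was still cited two half-life cycles later:

```python
from statistics import median


def citation_share(cited_queries: int, tracked_queries: int) -> float:
    """Share of monitored queries where your domain appears as a cited source."""
    return cited_queries / tracked_queries


# Hypothetical per-refresh records: (days_to_recitation, still_cited_two_cycles_later).
refresh_log = [(9, True), (13, True), (20, False), (None, False), (16, True)]

recited = [days for days, _ in refresh_log if days is not None]
time_to_recitation = median(recited)                            # re-cited pages only
sticky_rate = sum(sticky for _, sticky in refresh_log) / len(refresh_log)

print(f"citation share:     {citation_share(34, 120):.0%}")     # 28%
print(f"time-to-recitation: {time_to_recitation} days")         # 14.5
print(f"sticky rate:        {sticky_rate:.0%}")                 # 60%
```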
Misconceptions to avoid
- "All AI engines refresh the same way." They do not. ChatGPT cycles 70% faster than Perplexity.
- "A date bump is enough." Engines weight content-level drift more than metadata.
- "Quarterly is fine." It is fine for Tier 3, not for Tier 1 on ChatGPT.
- "Refreshing kills SEO." Substantive refreshes lift both AI citation share and classic organic when paired with internal linking.
FAQ
Q: What is the average AI citation half-life in 2026?
Roughly 4.5 weeks across platforms, based on Scrunch and Stacker's analysis of 3.5 million citation events from September 2025 to March 2026.
Q: Which platform has the shortest citation half-life?
ChatGPT, at 3.4 weeks. Perplexity is the longest at 5.8 weeks. Google's AI surfaces cluster between 4.3 and 4.7 weeks.
Q: Why does ChatGPT cycle citations faster than Perplexity?
Leading hypothesis: OpenAI can afford aggressive refresh against its index, while Perplexity favors vetted sources it has already cited to conserve compute and reduce answer variance.
Q: Is updating only the publish date enough to re-earn citations?
No. Engines weight content-level drift (new data, restructured sections, new examples) more than metadata. Date-only updates rarely re-earn citations.
Q: How often should we refresh content if our top platform is Google AI Overviews?
Every 4 to 5 weeks for Tier 1 pillar pages, with out-of-cycle refreshes when AI Overview inclusion drops on head terms for two consecutive weeks.