Geodocs.dev

GEO Citation Velocity Framework


Citation velocity is the rate at which a page earns AI citations after publication, modeled as the change in citation share per unit time per engine. This framework gives a formula, engine-specific benchmarks, content-type tiers, and an instrumentation pattern so teams can plan and measure GEO investments instead of guessing.

TL;DR

Most GEO measurement stops at "are we cited?" Citation velocity asks the next question: how fast does citation share change after we publish, refresh, or earn a new authority signal? The framework defines citation velocity formally, gives benchmarks by engine and content type, and prescribes a weekly prompt-panel instrumentation pattern. With velocity in place, teams can size GEO investments, set release-level KPIs, and detect content decay before traffic loss surfaces in analytics.

Definition

Citation velocity (V) is the change in citation share (ΔC) over a time window (Δt) on a defined prompt panel, for a given engine and content unit:

V = ΔC / Δt

Where:

  • C = citation share: the % of prompts on the fixed panel whose answer cites the content unit.
  • Δt = time window (typically 1 week or 1 release cycle).
  • The content unit can be a single page, a topic cluster, or a brand.

Velocity is measured per engine. ChatGPT, Perplexity, Google AI Overviews, Gemini, Claude, and Copilot have distinct ingestion and refresh dynamics; mixing them obscures the signal.
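Measured this way, velocity is just a per-engine difference quotient. A minimal sketch in Python (function and parameter names are illustrative, not from any named tool):

```python
def citation_velocity(share_start: float, share_end: float, weeks: float) -> float:
    """V = ΔC / Δt, in percentage points per week.

    share_start / share_end: citation share (%) on the same fixed
    prompt panel, for one engine and one content unit.
    weeks: length of the measurement window (Δt).
    """
    if weeks <= 0:
        raise ValueError("time window must be positive")
    return (share_end - share_start) / weeks

# Example: a page moving from 7.5% to 15% share over 8 weeks
v = citation_velocity(7.5, 15.0, 8)  # ≈ 0.94 pp/week
```

Keeping the inputs tied to one engine and one fixed panel is what makes the number comparable week to week.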

Why velocity matters

A page's lifetime citation share is a stock; velocity is the flow that produces it. Without flow data, teams cannot distinguish three very different scenarios that share the same monthly citation count:

  • Climbing: a page slowly gaining trust signals and citations.
  • Plateaued: a page that has stopped accruing citations.
  • Decaying: a page losing citations faster than it gains them.

Velocity also reveals lead-lag relationships across engines: typically Perplexity rises first because of its frequent index refresh; ChatGPT lags 4-6 weeks; Google AI Overviews lags 8-12 weeks because of E-E-A-T accumulation; Copilot tracks Bing index refresh. Velocity dashboards expose these lags so teams can attribute citation gains to the correct intervention.

Engine velocity benchmarks

Reasonable defaults observed across cross-vertical GEO programs in 2025-2026 (Profound, Peec AI, and brand-side dashboards), assuming a well-structured page with clear authorship and adequate trust signals:

Engine               | Typical first-citation lag | Typical 90-day plateau citation share | Refresh sensitivity
---------------------|----------------------------|---------------------------------------|--------------------
Perplexity           | 2-6 weeks                  | 8-20%                                 | High
ChatGPT (web search) | 6-12 weeks                 | 5-15%                                 | Medium
Google AI Overviews  | 8-20 weeks                 | 1-8%                                  | Medium
Gemini               | 8-16 weeks                 | 2-10%                                 | Medium
Claude               | 4-10 weeks                 | 4-12%                                 | High
Microsoft Copilot    | 4-10 weeks                 | 3-10%                                 | Medium

Walker Sands H1 2026 B2B AI Search Visibility benchmark places median enterprise B2B citation share in AI Overviews near 3%, consistent with the Google AI Overviews row above. Treat the table as orientation, not as targets; calibrate to your vertical and prompt panel.

Content-type velocity tiers

Different content types have characteristic velocity profiles:

  • Definitional pages ("what is X"): fast first citation (2-4 weeks on Perplexity); modest plateau (5-15%); long half-life.
  • How-to / tutorial pages: medium first citation (4-8 weeks); high plateau (10-25%) on Perplexity; sensitive to recency on technical topics.
  • Comparison pages ("X vs Y"): medium first citation (4-8 weeks); high plateau (10-20%) on ChatGPT and Perplexity; decays fast if the alternatives change.
  • News and trend pages: very fast first citation (days to 2 weeks) on Perplexity; high but short-lived plateau (decays within 4-8 weeks).
  • Reference / specification pages: slow first citation (8-16 weeks); very high plateau (15-30%) once authority accrues; long half-life.

Use the tier as the prior; replace it with measured data once you have 90 days of weekly tracking.
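The tiers above can be encoded as explicit priors. The numbers below are transcribed from the list; the structure and names are illustrative:

```python
# First-citation lag (weeks) and plateau citation share (%) priors per
# content type, taken from the tiers above. The news tier has no stable
# plateau (it decays within 4-8 weeks), so its plateau is None.
# Replace these priors with measured data after 90 days of tracking.
VELOCITY_PRIORS = {
    "definitional": {"first_citation_weeks": (2, 4),  "plateau_share_pct": (5, 15)},
    "how_to":       {"first_citation_weeks": (4, 8),  "plateau_share_pct": (10, 25)},
    "comparison":   {"first_citation_weeks": (4, 8),  "plateau_share_pct": (10, 20)},
    "news":         {"first_citation_weeks": (0.5, 2), "plateau_share_pct": None},
    "reference":    {"first_citation_weeks": (8, 16), "plateau_share_pct": (15, 30)},
}

def prior_for(content_type: str) -> dict:
    """Return the tier prior, falling back to the definitional profile."""
    return VELOCITY_PRIORS.get(content_type, VELOCITY_PRIORS["definitional"])
```

Storing the priors as data (rather than prose) makes it trivial to compare each cluster's measured plateau against its expected range.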

Instrumentation pattern

The minimum viable velocity instrumentation:

  1. Prompt panel. Define 30-100 prompts that represent the queries you want to win. Group prompts by intent cluster (definitional, how-to, comparison, troubleshooting, news).
  2. Weekly run. Execute the panel weekly across all target engines, capturing the citation list per prompt.
  3. Citation share computation. For each engine and cluster, compute the % of prompts citing your content unit (page, cluster, or brand).
  4. Velocity computation. Compute week-over-week and 4-week trailing-average velocity per engine, cluster, and unit.
  5. Annotation. Tag each week with releases, content updates, distribution events (Hacker News post, trade press coverage). Velocity becomes interpretable when joined to interventions.
  6. Decay alerting. Alert when velocity goes negative for 2-3 consecutive weeks on a previously stable cluster; investigate and refresh.
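Steps 3, 4, and 6 can be sketched in a few lines. Data shapes and thresholds here are illustrative assumptions, not the API of any named tool:

```python
def citation_share(citing_prompts: int, panel_size: int) -> float:
    """Step 3: % of panel prompts whose answer cites the content unit."""
    return 100.0 * citing_prompts / panel_size

def trailing_velocity(weekly_shares: list[float], window: int = 4) -> float:
    """Step 4: average week-over-week change (pp/week) over the
    last `window` weeks of the series."""
    recent = weekly_shares[-(window + 1):]
    deltas = [b - a for a, b in zip(recent, recent[1:])]
    return sum(deltas) / len(deltas)

def decay_alert(weekly_shares: list[float], streak: int = 3) -> bool:
    """Step 6: True when velocity has been negative for `streak`
    consecutive weeks on this series."""
    deltas = [b - a for a, b in zip(weekly_shares, weekly_shares[1:])]
    return len(deltas) >= streak and all(d < 0 for d in deltas[-streak:])
```

Run this per engine, per cluster, per unit; mixing engines in one series hides the lead-lag structure described earlier.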

Profound, Peec AI, and Resonance ship most of this out of the box; OpenCite and gego (open source) provide a build-your-own substrate.

Common failure modes

  • Mixed-engine averages. Reporting a single "AI citation rate" across engines hides the lead-lag signal that makes velocity useful.
  • Drifting prompt panels. Adding or removing prompts mid-quarter breaks the velocity time series. Freeze the panel for 90-day windows; version changes explicitly.
  • One-shot snapshots. A single weekly measurement is noisy. Use a 4-week trailing average for stable comparisons.
  • Ignoring decay. Pages that plateau then quietly lose share are worse than pages that never started. Decay alerting is the highest-leverage instrumentation.
  • Confusing referrals with citations. Referral traffic from chat.openai.com or perplexity.ai is a downstream effect; velocity should measure citations in the answer itself.

Worked example

A fintech vertical landing page publishes on day 0 with a clean structure, licensed-author byline, and FinancialProduct schema. Prompt panel: 40 prompts on "best high-yield savings account for X".

  • Week 1: Perplexity citation share 0%, ChatGPT 0%, AI Overviews 0%.
  • Week 4: Perplexity 7.5%, ChatGPT 2.5%, AI Overviews 0%.
  • Week 8: Perplexity 12.5%, ChatGPT 5%, AI Overviews 0%.
  • Week 12: Perplexity 15%, ChatGPT 10%, AI Overviews 2.5%.
  • Week 16: Perplexity 17.5%, ChatGPT 12.5%, AI Overviews 5%.

Velocity (weeks 4-12): Perplexity ≈0.94 pp/week, ChatGPT ≈0.94 pp/week, AI Overviews ≈0.31 pp/week. Reading: typical fintech profile, well-structured page, no anomalies. Action: hold strategy, plan a 90-day refresh.
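The week 4-12 velocities can be reproduced directly from the series (a quick check using the numbers in the bullets above):

```python
# Citation share (%) at weeks 4 and 12, from the worked example.
shares = {
    "Perplexity":   {"week4": 7.5, "week12": 15.0},
    "ChatGPT":      {"week4": 2.5, "week12": 10.0},
    "AI Overviews": {"week4": 0.0, "week12": 2.5},
}

for engine, s in shares.items():
    v = (s["week12"] - s["week4"]) / 8  # ΔC / Δt over the 8-week window
    print(engine, round(v, 4), "pp/week")
# Perplexity and ChatGPT land at ≈0.94 pp/week, AI Overviews at ≈0.31.
```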

FAQ

Q: What is citation velocity?

Citation velocity is the rate of change of AI citation share per unit time on a defined prompt panel, measured per engine. It exposes whether content is climbing, plateaued, or decaying — distinctions invisible in a single citation snapshot.

Q: How big should the prompt panel be?

Use 30 prompts at minimum to control noise; most programs settle between 30 and 100, matching the panel-size guidance in the instrumentation pattern. Group prompts by intent cluster so you can compute cluster-level velocity, which is more actionable than page-level.

Q: How often should I run the panel?

Weekly. Daily is unnecessary noise for most engines; monthly is too coarse to detect decay. Weekly aligns with engine refresh cadence and team operating rhythm.

Q: How do I attribute velocity changes to specific interventions?

Annotate the time series with release events, content updates, and distribution events (Hacker News posts, trade press citations, podcast appearances). Velocity inflections that coincide with annotated events are strong evidence; unattributed inflections require investigation.

Q: What velocity counts as healthy?

For a Tier-2 page in a competitive vertical, +0.5 to +1 percentage point per week on Perplexity over the first 8 weeks is a healthy profile. ChatGPT and AI Overviews velocities are typically lower (0.1-0.5 pp/week). Treat your own first 90 days as the baseline; deviation matters more than absolute numbers.

Q: What does a negative velocity mean?

Negative velocity means the page is losing citation share. Common causes: a competitor publishing fresher or higher-authority content; the page becoming stale relative to a fast-moving topic; engine ingestion changes that demote the page's source class. Investigate within two weeks and refresh; uninvestigated negative velocity becomes structural.
