GEO Strategic Positioning Framework
The GEO strategic positioning framework is a four-quadrant matrix that plots each topic's strategic value against your current citation share, then prescribes one of four moves — defend, attack, harvest, or divest — so that finite content investment compounds AI visibility instead of being spread thin.
TL;DR
- The framework adapts the logic of the BCG growth-share matrix (BCG, 2024) to AI-search portfolios, swapping market growth for strategic value and market share for citation share.
- Each topic falls into one of four quadrants: Defend (high value, high share), Attack (high value, low share), Harvest (low value, high share), or Divest (low value, low share).
- Each quadrant has a default play: protect the lead, invest aggressively, run lean, or retire.
- Use it quarterly with a citation-tracking baseline to rebalance content investment instead of treating every topic as equally important.
Definition
The GEO strategic positioning framework is a portfolio-management tool that helps content teams decide what to write, what to defend, what to retire, and what to ignore in the era of generative search. It treats your content library as a portfolio of bets across topics, ranks each topic on two axes — strategic value to the business and your current citation share inside AI answers — and assigns one of four standardized strategic moves to each quadrant.
The framework's lineage is the Boston Consulting Group growth-share matrix introduced in 1968, which classifies products as stars, cash cows, question marks, and dogs based on market growth and relative market share (Investopedia, 2024). The GEO version preserves the 2x2 logic but replaces the axes with metrics that matter for generative engines: citation share replaces market share, and strategic value replaces market growth, because in AI search visibility tracks topical authority — not raw publishing volume.
The output is a one-page positioning map that turns "we should write more content" into a defensible portfolio decision: defend a few cash-flow topics, invest in a small number of attack bets, run aging assets lean, and explicitly stop work on the long tail.
Why this matters
Most content teams treat their backlog as a flat list of ideas. In generative search this is a losing posture. AI engines surface a small number of canonical sources per query, and citation breadth and citation depth diverge sharply across platforms (arxiv.org, 2026) — meaning a few topics receive most of the cited weight and the rest are functionally invisible. A flat backlog spreads investment across hundreds of low-leverage topics while underfunding the handful that actually drive citation.
Three pressures make this acute:
- Concentrated citation surface. AI engines pick a small set of cited sources per answer, so winning a topic typically requires depth, not breadth.
- Compounding authority. Once a domain becomes the canonical citation on a topic, displacement is hard. Defending a strong position is cheaper than attacking a competitor's strong position.
- Finite editorial capacity. Every team has a fixed number of senior writers, reviewers, and SMEs. Flat prioritization wastes that capacity.
The positioning framework imposes the question every portfolio owner should answer monthly: where am I investing, and is each topic a defend, attack, harvest, or divest move? Without this discipline, teams default to the lowest-friction work — incremental updates on already-strong topics or new posts on low-value tail terms — and citation share stagnates while competitors compound.
How it works
The framework plots every topic on a 2x2 grid using two axes:
- Vertical axis — Strategic value (low → high). A composite of business value (revenue intent, audience fit), topical centrality (how often the topic is referenced in your buyer journey), and AI-search query volume.
- Horizontal axis — Citation share (low → high). Your fraction of cited sources for the topic across target AI platforms (ChatGPT, Perplexity, Google AI Overviews, Gemini, Claude). Citation share replaces market share because in AI search a single canonical citation often beats a dozen loose mentions.
Each topic lands in one of four quadrants. The default play per quadrant is summarized below.
Quadrant matrix
| Quadrant | Citation share | Strategic value | Default play | Investment posture |
|---|---|---|---|---|
| Defend | High | High | Protect canonical position | Maintenance plus freshness; refuse to let the page age |
| Attack | Low | High | Build canonical depth | Concentrated investment; long-form, structured, citation-ready |
| Harvest | High | Low | Run lean for efficiency | Light touch; let it rank, redirect upgrades to Defend or Attack |
| Divest | Low | Low | Retire or merge | Stop publishing; consolidate or 410 |
Scoring topics
For each topic, score the two axes 0 to 10 using a documented rubric. A simple rubric:
- Strategic value = (revenue fit 0 to 4) + (buyer-journey centrality 0 to 3) + (query-volume tier 0 to 3).
- Citation share = your appearances divided by total distinct cited sources across a stable prompt set per topic.
Threshold each axis at the median across your portfolio. The four resulting quadrants are not absolute categories — they are relative-to-portfolio decisions.
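The rubric and median thresholding above can be sketched in a few lines. This is a minimal illustration, not part of the framework itself; the topic names, dict fields, and scores are hypothetical:

```python
from statistics import median

def strategic_value(revenue_fit, centrality, volume_tier):
    """Composite 0-10 score: revenue fit (0-4) + buyer-journey
    centrality (0-3) + query-volume tier (0-3)."""
    return revenue_fit + centrality + volume_tier

def classify(topics):
    """Assign each topic a quadrant relative to the portfolio medians.

    Each topic is a dict with a 0-10 "value" score and a 0-1
    "share" fraction; ties at the median count as high.
    """
    value_cut = median(t["value"] for t in topics)
    share_cut = median(t["share"] for t in topics)
    quadrant = {}
    for t in topics:
        high_value = t["value"] >= value_cut
        high_share = t["share"] >= share_cut
        if high_value and high_share:
            quadrant[t["name"]] = "Defend"
        elif high_value:
            quadrant[t["name"]] = "Attack"
        elif high_share:
            quadrant[t["name"]] = "Harvest"
        else:
            quadrant[t["name"]] = "Divest"
    return quadrant

portfolio = [
    {"name": "agent-orchestration", "value": 9, "share": 0.40},
    {"name": "prompt-injection",    "value": 8, "share": 0.05},
    {"name": "llm-glossary",        "value": 2, "share": 0.35},
    {"name": "old-changelog-notes", "value": 1, "share": 0.02},
]
print(classify(portfolio))
```

Because the cuts are portfolio medians, the same topic can change quadrant when the rest of the portfolio improves, which is the intended relative-to-portfolio behavior.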
Cadence and review
Run the framework quarterly:
- Sample 30 to 80 prompts per priority topic across target AI engines.
- Recompute citation share and strategic value.
- Compare with the prior quarter's map.
- Reclassify topics that have shifted quadrants and update investment plans.
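The reclassification step reduces to a diff between two quarterly maps, which can be automated. A small sketch, with hypothetical topic names:

```python
def quadrant_shifts(prev, curr):
    """Return topics whose quadrant changed between two quarterly maps.

    prev and curr map topic name -> quadrant label; topics that are
    new this quarter are reported as moving from "New".
    """
    shifts = {}
    for topic, quadrant in curr.items():
        before = prev.get(topic, "New")
        if before != quadrant:
            shifts[topic] = (before, quadrant)
    return shifts

q1 = {"agent-orchestration": "Defend", "prompt-injection": "Attack"}
q2 = {"agent-orchestration": "Defend", "prompt-injection": "Defend",
      "eval-harnesses": "Attack"}
print(quadrant_shifts(q1, q2))
```

Topics that drop out of the current map entirely are Divest candidates and can be listed separately with `prev.keys() - curr.keys()`.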
The framework is most useful when paired with a citation-tracking baseline; without measurement the axes collapse to opinion. Adopt a measurement framework first — a four-layer KPI model covering selection, absorption, brand, and conversion is a practitioner-tested pattern (Averi, 2026) — then layer positioning on top.
Practical application
A worked example: an AI-tooling SaaS with a 60-topic content portfolio runs the framework for the first time.
Step 1 — Inventory topics. The team consolidates 60 topics into 22 distinct concepts using a canonical concept ID per topic to avoid double counting near-duplicates.
Step 2 — Score strategic value. Product marketing assigns 0 to 10 strategic value scores. Five topics tied to top-funnel buyer questions for the company's flagship product score 8 to 10. Twelve mid-funnel comparison topics score 4 to 7. Five long-tail definitional topics score 0 to 3.
Step 3 — Measure citation share. The team runs roughly 50 prompts per topic across ChatGPT, Perplexity, and Google AI Overviews, deduplicates cited sources, and computes share. Results: the company is cited on three high-value topics (Defend), absent on two more high-value topics (Attack), heavily cited on six low-value topics (Harvest), and lightly cited on the remaining eleven (Divest).
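Step 3 is a deduplicated share computation per topic. A minimal sketch, assuming each prompt run yields a list of cited URLs pooled across engines; the URLs and domain are illustrative:

```python
from urllib.parse import urlparse

def citation_share(prompt_runs, own_domain):
    """Fraction of distinct cited sources that come from own_domain.

    prompt_runs is a list of lists: the cited URLs returned for each
    sampled prompt on one topic, pooled across AI engines.
    """
    distinct = {url for run in prompt_runs for url in run}
    if not distinct:
        return 0.0
    ours = {u for u in distinct if urlparse(u).netloc.endswith(own_domain)}
    return len(ours) / len(distinct)

runs = [
    ["https://example.com/geo-guide", "https://rival.io/ai-search"],
    ["https://example.com/geo-guide", "https://docs.other.dev/answers"],
    ["https://rival.io/ai-search"],
]
print(round(citation_share(runs, "example.com"), 2))
```

Deduplicating by full URL, as here, counts each distinct cited page once; deduplicating by domain instead is a stricter variant that rewards breadth of cited pages less.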
Step 4 — Assign moves.
- Defend (3 topics). Schedule quarterly freshness updates; add structured-data blocks; expand FAQs to absorb adjacent prompts.
- Attack (2 topics). Allocate the majority of the next quarter's senior-writer hours; commission a Tier 1 canonical anchor per topic with a measurement plan.
- Harvest (6 topics). No new investment; only fix factual drift if flagged. Redirect freed capacity to Attack.
- Divest (11 topics). Consolidate into 3 hub pages or set 410 status for true zombies; remove from the editorial calendar.
Step 5 — Lock the plan. Encode the assignments in the editorial calendar as a single source of truth. Block off-portfolio requests at intake unless they unlock a new Attack topic.
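The intake gate in Step 5 can be as simple as a membership check against the locked plan, with one escape hatch for requests that genuinely open a new Attack bet. The plan contents and return strings below are illustrative:

```python
# Locked quarterly plan: topic -> assigned quadrant (hypothetical values).
PLAN = {
    "agent-orchestration": "Defend",
    "prompt-injection": "Attack",
    "llm-glossary": "Harvest",
}

def intake(topic, unlocks_new_attack=False):
    """Accept a content request only if it is in the locked plan,
    or is explicitly justified as a new Attack bet."""
    if topic in PLAN:
        return f"accept: {PLAN[topic]} work"
    if unlocks_new_attack:
        return "accept: provisional Attack, rescore next quarter"
    return "reject: off-portfolio"

print(intake("llm-glossary"))
print(intake("random-trend-post"))
print(intake("eval-harnesses", unlocks_new_attack=True))
```

Provisionally accepted topics should be scored with the same rubric at the next quarterly rerun rather than grandfathered into the plan.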
After one quarter, teams typically observe citation share rising on Attack topics, holding or improving on Defend, and total publishing volume dropping while citation density rises. Concentration tends to outperform spread, which mirrors the original BCG insight that resource allocation across a portfolio outperforms equal-weight investment.
Common mistakes
- Skipping measurement. Running the framework on opinion-only scores produces a confident but wrong map. Measure citation share with a stable prompt set first.
- Treating quadrants as labels, not decisions. A topic in Attack without an allocated writer, deadline, and brief is still a wish, not a plan.
- Over-defending. Spending most capacity on Defend topics yields diminishing returns. Once a topic is canonical, the marginal hour is better spent on Attack.
- Reluctance to divest. Teams keep low-value pages alive because someone wrote them. Divestment frees capacity and reduces dilution; track it as a positive metric.
- Quarterly drift without re-scoring. Citation share changes as AI engines retrain. A static map is decorative; rerun the scoring every quarter.
- Confusing volume with share. Publishing a dozen articles on a topic with low citation share is worse than publishing one canonical anchor that captures meaningful share.
FAQ
Q: How is the GEO strategic positioning framework different from the BCG growth-share matrix?
The structure is the same — a 2x2 with four named quadrants — but the axes are reframed for AI search. BCG uses market growth and relative market share to decide where to invest in product portfolios (BCG, 2024). The GEO version uses strategic value and citation share, because in generative engines a few canonical citations dominate answers, so share-of-citation matters more than share-of-market or share-of-publishing.
Q: What metric should I use for citation share?
The cleanest metric is the percentage of distinct cited sources that include your domain across a stable prompt set per topic, run across the AI engines you care about. Sample 30 to 80 prompts per topic, deduplicate sources, and compute share. Use the same prompt set quarter over quarter so movement is comparable.
Q: How do I score strategic value without making it subjective?
Use a fixed rubric: business value (revenue fit and audience match), buyer-journey centrality (how often the topic appears in real customer questions), and query-volume tier (a coarse 0 to 3 ranking from internal analytics or a third-party tool). Document the rubric, score the whole portfolio in one sitting to keep calibration consistent, and revisit annually.
Q: How often should I rerun the framework?
Quarterly is the practical cadence for most teams. AI engines update, competitors publish, and buyer-journey priorities shift. A lighter monthly check on Defend and Attack topics is useful, but full portfolio rescoring once per quarter prevents drift without exhausting the team.
Q: What if most of my topics fall into Divest?
That is a normal first-run finding and a useful signal: most flat backlogs are dominated by low-value, low-share content. Resist the urge to defend every page. Consolidate divest topics into hub pages, redirect duplicates, and reallocate the freed capacity to a small number of Attack topics. Concentration is the point of the framework.
Q: Can I use this framework if I have no current AI citations?
Yes. If your baseline citation share is near zero across the board, every high-value topic effectively starts in Attack. The framework still helps because it forces you to pick a small number of Attack bets instead of spreading thin. Measure baseline anyway — even a 0% map is a useful starting reference for the next quarter.
Q: How does this relate to traditional SEO portfolio decisions?
It complements them. Traditional SEO already favors prioritization (pillar and cluster, topical authority), but it tends to optimize for ranking on blue links. GEO positioning optimizes for citation inside AI answers, where the win condition is being one of a few canonical sources, not appearing in a top-10 list. Run both lenses on the same portfolio; topics that win in both deserve the most defense.
Q: When should I divest versus consolidate?
Divest (410 or noindex) when a topic has near-zero strategic value and no internal-link weight worth preserving. Consolidate (301 with content merge) when several thin pages cover overlapping ground and a single canonical hub would serve users and AI engines better. Default to consolidation when in doubt; divestment is final.
Related Articles
GEO vs AEO
GEO optimizes content for broad citation across generative AI engines, while AEO targets direct answer extraction in answer boxes and voice. Use them together.
What Is GEO? Generative Engine Optimization Defined
GEO (Generative Engine Optimization) is the practice of structuring content so AI search engines retrieve, understand, synthesize, and cite it in generated answers.
AI Search Competitor Monitoring Framework: Citation Share, Sentiment, Velocity
Framework for AI search competitor monitoring covering citation share, sentiment, velocity, content mix, reporting cadence, and action triggers.