Geodocs.dev

Government & Public Sector GEO Case Study: Earning AI Citations for .gov Content Under Plain-Language and Accessibility Mandates


⚠️ Composite case study — synthesized from public patterns; not a verified single-company case.

A state public-health agency rebuilt its consumer fact sheets as question-led, schema-tagged, plain-language pages that comply with Section 508 and the Plain Writing Act of 2010. Within two quarters, the agency moved from a thin presence in AI search to consistent citation share across Google AI Overviews, ChatGPT, and Perplexity. The case demonstrates that government compliance constraints — when implemented correctly — are tailwinds, not barriers, to generative engine optimization (GEO).

TL;DR

Government communicators often assume that generative engine optimization (GEO) will conflict with Section 508, the Plain Writing Act of 2010, and agency policy. This case study shows the opposite. Machine-readable structure, short answer-first paragraphs, and explicit entity tagging — the same patterns that improve accessibility and readability — also drive AI citations. The agency profiled below moved its fact sheets from "occasionally referenced" to "consistently cited" by reorganizing existing content under a strict citation-readiness checklist, without rewriting in marketing voice and without adding any non-compliant features.

Why public sector GEO is different

Most published GEO case studies come from private B2B brands optimizing for lead generation, where copy can be aggressive, persuasive, and tightly branded. Public sector content cannot. A state agency publishing a vaccination schedule, a benefits eligibility rule, or a permit checklist is bound by overlapping legal and ethical constraints:

  • Section 508 of the Rehabilitation Act, which requires federal websites and digital products to conform to WCAG 2.0 AA, including assistive-technology compatibility, semantic HTML, and accessible PDFs. The FY 2025 Governmentwide Section 508 Assessment, published by GSA, scored federal compliance at 1.96 out of 5, well below target.
  • The Plain Writing Act of 2010, administered through plainlanguage.gov, which requires content for the public to be "clear, concise, well-organized" and audience-appropriate.
  • Agency review and legal sign-off, which means every public sentence is sourced and approved before it ships.
  • Equity of access, which often includes translation requirements and reading-level constraints — typically 6th to 8th grade for consumer health content.

These constraints rule out the looser content patterns common in commercial GEO playbooks: trend-led claims, opinionated framing, AI-generated paraphrases of third-party reports, and persuasive product comparisons. The good news, documented in this case, is that the core GEO patterns — answer-first paragraphs, structured data, declared authorship, semantic accessibility — are precisely what regulators have already mandated. The work is alignment, not invention.

The agency and the baseline

The agency in this case study is a state-level Department of Public Health publishing roughly 4,200 consumer-facing pages on a .gov domain. The content covers immunizations, food safety, environmental health, maternal and child health, and disease control. Before the project began, the communications team had three specific complaints:

  1. AI Overviews rarely included the agency's pages, even for queries where the agency was the statutory authority on the topic.
  2. ChatGPT and Perplexity answers cited national outlets — CDC, Mayo Clinic, WebMD — instead of the state authority that actually administered the program.
  3. A growing share of resident emails referenced "what the AI said", sometimes contradicting the agency's own published guidance.

The team commissioned a baseline audit covering 120 high-traffic pages. The findings were consistent with patterns documented across federal compliance assessments:

  • Only 18 pages had a clear answer in the first 60 words. Most opened with a program name, an institutional preamble, or an unrelated public-information notice.
  • 73 percent of pages used PDF as the primary delivery format. Many PDFs lacked tagged structure, and most were not indexed in a way that LLM retrieval crawlers could parse.
  • Schema.org markup was absent on 96 percent of pages.
  • "Last reviewed" dates were missing or buried in footers, even though plain-language guidance recommends prominent currency signals.
  • Authorship was institutional only ("Department of Public Health"), with no named subject-matter reviewer attached to specific pages.

The GEO framework adapted for .gov

The team built a five-layer framework that maps each compliance requirement onto a GEO outcome. Nothing in this framework conflicts with Section 508 or plain-language law; every layer reinforces them.

Layer 1: Question-led page architecture

The team rewrote each page so the H1 was a literal user question and the first paragraph was a 40 to 60 word answer in plain language. This satisfies plainlanguage.gov's "answer first" guidance and gives generative engines a clean extractable answer. Subheads followed a predictable pattern: Who qualifies, How to apply, What documents you need, Where to get help, When this changed.
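The Layer 1 pattern can be sketched as a page skeleton. This is a hypothetical illustration, not the agency's actual markup; the question, answer text, and headings are placeholders:

```html
<main>
  <!-- H1 is the literal user question -->
  <h1>Who qualifies for free childhood immunizations?</h1>

  <!-- First paragraph: a 40-to-60-word answer in plain language,
       the segment generative engines most often extract -->
  <p>Children under 19 qualify if they are uninsured, enrolled in
     Medicaid, or meet the other criteria listed below. You do not
     need an appointment at most participating clinics.</p>

  <!-- Predictable subhead pattern used across all rewritten pages -->
  <h2>Who qualifies</h2>
  <h2>How to apply</h2>
  <h2>What documents you need</h2>
  <h2>Where to get help</h2>
  <h2>When this changed</h2>
</main>
```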

Layer 2: Semantic HTML and Section 508 alignment

PDF-only content was converted to accessible HTML pages with proper heading hierarchy, descriptive alt text, programmatic labels, and tested screen-reader paths. This is straight Section 508 work — and it is also what Google, Perplexity, and OpenAI crawlers depend on to extract answer snippets. The team kept PDFs as downloadable references but no longer relied on them as the primary content surface.

Layer 3: Schema.org and structured signals

Every page received GovernmentService, FAQPage, or HowTo schema where appropriate, plus Organization markup naming the agency, dateModified, and lastReviewed. The team also added sameAs links to the agency's official identifiers. None of this markup is visible to readers; all of it is read by crawlers and used by retrieval systems to ground citations.
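A minimal JSON-LD sketch of the Layer 3 markup might look like the following. All names, dates, and URLs are placeholders, and the exact property mix would vary by page type:

```json
{
  "@context": "https://schema.org",
  "@type": "WebPage",
  "dateModified": "2025-04-01",
  "lastReviewed": "2025-03-15",
  "publisher": {
    "@type": "GovernmentOrganization",
    "name": "Example State Department of Public Health",
    "sameAs": ["https://www.example.gov/dph"]
  },
  "mainEntity": {
    "@type": "GovernmentService",
    "name": "Childhood Immunization Program",
    "provider": {
      "@type": "GovernmentOrganization",
      "name": "Example State Department of Public Health"
    }
  }
}
```

The block is embedded in each page inside a `<script type="application/ld+json">` tag, so it never changes the rendered content that Section 508 review covers.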

Layer 4: AI citation registry pattern

Borrowing the AI Citation Registry concept described by GovLoop, the team published a machine-readable index at a stable /ai-sources/ path listing every authoritative page, its canonical URL, last-reviewed date, and the topical scope it covered. The intent is to make it trivial for crawlers and retrieval systems to recognize the agency as the primary in-state source for public-health questions.
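There is no standardized format for such an index, so the shape below is one plausible sketch; the agency name, URLs, and dates are hypothetical:

```json
{
  "agency": "Example State Department of Public Health",
  "generated": "2025-04-01",
  "sources": [
    {
      "topic": "Childhood immunization schedule",
      "canonical_url": "https://health.example.gov/immunizations/schedule",
      "last_reviewed": "2025-03-15",
      "scope": ["immunizations", "school entry requirements"]
    },
    {
      "topic": "Retail food permit renewal",
      "canonical_url": "https://health.example.gov/food-safety/permits",
      "last_reviewed": "2025-02-20",
      "scope": ["food safety", "permits"]
    }
  ]
}
```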

Layer 5: Named subject-matter reviewers

Each page added a "Reviewed by" line with the named subject-matter expert and a date. This satisfies the AI-readiness expectation that authoritative content has a declared, accountable reviewer, and it dovetails with the FY 2025 Section 508 recommendations that AI-assisted content carry explicit human accountability before publication.

Execution: what actually shipped

Over two quarters, the agency:

  • Migrated 1,140 high-priority pages from PDF-only to accessible HTML.
  • Rewrote H1s and lead paragraphs on 1,800 pages to follow the question-led pattern.
  • Added schema markup to every migrated page.
  • Published the AI Citation Registry index page and updated the agency sitemap.xml and robots.txt to surface it.
  • Trained 28 communications staff on a one-page "Plain Language + GEO" checklist that combined plainlanguage.gov rules with answer-first patterns.
  • Established a quarterly review cadence so every page carries a freshness signal less than 90 days old.
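Surfacing the registry, as in the bullets above, requires nothing exotic: the /ai-sources/ path is listed in the sitemap like any other URL, and robots.txt points crawlers at that sitemap. A minimal sketch, assuming a hypothetical health.example.gov domain:

```text
# robots.txt at https://health.example.gov/robots.txt
User-agent: *
Allow: /

Sitemap: https://health.example.gov/sitemap.xml
```

The /ai-sources/ index then appears in sitemap.xml as an ordinary `<url>` entry with a `<lastmod>` date, so retrieval crawlers pick it up on their normal crawl cadence.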

The team did not generate new claims, did not synthesize content with generative AI, and did not change the agency's tone of voice. Every change was structural.

Results (illustrative composite)

The ranges below are a composite illustration drawn from the agency's internal dashboards and from patterns documented in publicly available GEO case studies. They are presented as ranges rather than precise figures to avoid implying precision the underlying data does not support.

Metric: Baseline (Q0) → After two quarters (Q2)

  • AI Overviews appearances on top 200 in-scope queries: low single digits → mid double digits
  • ChatGPT answers citing the agency on tested in-scope prompts: rare → majority
  • Perplexity citation share for in-scope topics: minority → dominant in-state
  • Organic clicks from AI surfaces: flat → clear upward trend
  • Section 508 conformance score on migrated pages: partial → full WCAG 2.0 AA
  • Plain-language readability (Flesch-Kincaid grade): 11-13 → 7-9
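The readability metric in the last row can be spot-checked without commercial tools. A minimal sketch of the standard Flesch-Kincaid grade formula, using a crude vowel-group syllable heuristic (real readability tools use dictionaries and handle edge cases this does not):

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels (incl. y).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade:
    0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return round(
        0.39 * (len(words) / len(sentences))
        + 11.8 * (syllables / len(words))
        - 15.59,
        1,
    )

# Short, plain sentences score at a low grade level.
print(fk_grade("You can get a free flu shot. Call your local clinic to book one."))
```

Running the grade check over a page's lead paragraph is enough to catch regressions back toward the 11-13 range the audit found.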

The agency's leadership treated the Section 508 and readability gains as the primary outcomes; the AI citation gains were a downstream consequence. That framing was important politically: the project never had to be defended as "AI optimization." It was an accessibility and plain-language project that happened to produce GEO results.

What worked and why

Three patterns generalized cleanly across content domains.

Question-led pages compound. Once a page has a clean answer in the first 60 words, it tends to be cited by multiple AI systems at once because they all extract from the same opening segment. Rewriting that segment is the single highest-leverage change for a regulated content team.

Schema and structure travel further than copy. Generative engines lean heavily on machine-readable signals. Government content teams that cannot change voice can still change structure — and structure is what retrieval systems actually read.

Named reviewers raise authority. AI systems weight institutional authority and explicit human review. Adding a named subject-matter reviewer to each page is a small editorial change with disproportionate authority-signaling impact.

What did not work

Aggressive AI-paraphrased FAQs. An early attempt to seed each page with 10 to 12 generated FAQs produced unreviewable claims and was rolled back. The team standardized on three to five FAQs hand-written by the subject-matter reviewer, each with a primary source link.

Deep persona segmentation. Splitting pages by audience — "for parents," "for providers," "for school nurses" — fragmented citations across thin variants. Consolidating each topic into one canonical page with on-page anchors for audience cuts performed better.

External authority chasing. Backlinks and op-eds, the staple of commercial GEO, were both slow and politically risky for a state agency. The team got more lift from on-domain structure and registry signals than from any external campaign.

Replication checklist for public sector teams

If you run web content for a federal, state, or local agency, this is the minimum viable GEO program that does not put you in conflict with regulators.

  • [ ] Audit the top 100 highest-traffic pages for a clean answer in the first 60 words.
  • [ ] Convert PDF-only content to accessible HTML with conformant heading structure.
  • [ ] Add GovernmentService, FAQPage, or HowTo schema where appropriate.
  • [ ] Add dateModified and lastReviewed to every page and surface them visibly.
  • [ ] Add a named subject-matter reviewer to every page.
  • [ ] Publish a machine-readable AI source index at a stable path.
  • [ ] Build a one-page "Plain Language + GEO" checklist your communications team can actually use.
  • [ ] Set a 90-day freshness cadence for in-scope pages.
  • [ ] Track AI Overviews, ChatGPT, and Perplexity citation share quarterly on a fixed prompt set.
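The quarterly tracking item in the checklist above needs no special tooling: run a fixed prompt set against each engine, record whether the answer cited the agency's domain, and compute the share. A minimal sketch (the data structure and sample prompts are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class PromptResult:
    prompt: str
    engine: str          # e.g. "ai_overviews", "chatgpt", "perplexity"
    cited_agency: bool   # did the answer cite the agency's .gov domain?

def citation_share(results: list[PromptResult], engine: str) -> float:
    """Fraction of tested prompts on which the engine cited the agency."""
    scoped = [r for r in results if r.engine == engine]
    if not scoped:
        return 0.0
    return sum(r.cited_agency for r in scoped) / len(scoped)

quarterly = [
    PromptResult("when is the flu vaccine available", "chatgpt", True),
    PromptResult("how do I renew a food permit", "chatgpt", False),
    PromptResult("when is the flu vaccine available", "perplexity", True),
]
print(citation_share(quarterly, "chatgpt"))  # → 0.5
```

Keeping the prompt set fixed quarter over quarter is what makes the trend comparable; changing prompts mid-stream makes the share numbers meaningless.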

FAQ

Q: Does Section 508 conflict with generative engine optimization?

No. Section 508 requires conformant semantic HTML, accessible structure, and assistive-technology compatibility — all of which improve how generative engines extract and cite content. The two regimes reinforce each other. Section508.gov publishes the official guidance and the annual Governmentwide Assessment that scores agency compliance.

Q: Can a .gov site use AI to write content faster?

Cautiously. The FY 2025 Section 508 recommendations note that agencies "may leverage generative AI to create electronic documents or web pages," provided the output is reviewed for accessibility and accuracy. The pattern that works is AI-assisted drafting plus mandatory subject-matter review and source attribution, never unattended publication.

Q: Why do AI systems often cite national outlets instead of the local agency?

Because national outlets historically had stronger structured signals, more inbound authority, and answer-first editorial patterns. State and local agencies can close the gap by fixing structure first — accessible HTML, schema, reviewer attribution, and an AI source index — before chasing external authority through backlinks or earned media.

Q: What metrics should a public sector communications team track?

Track three layers: compliance metrics (Section 508 conformance, readability grade), retrieval metrics (AI Overviews appearance rate, ChatGPT and Perplexity citation share on a fixed prompt set), and outcome metrics (organic referrals from AI surfaces, resident inquiries that reference AI answers).

Q: How does the AI Citation Registry pattern work?

It is a machine-readable index page listing every authoritative URL on the domain, with topic scope, canonical URL, and last-reviewed date. It signals to crawlers and retrieval systems which pages on the .gov are the agency-of-record source on a given topic, and it is consistent with both Section 508 and plain-language requirements.
