Geodocs.dev

Review Schema for AI Citations



Review schema (schema.org/Review) is the structured data type that AI search systems such as ChatGPT, Perplexity, and Google AI Overviews use to cite product opinions. Correct implementation requires nested Review entries paired with AggregateRating, identified third-party authorship, and adherence to Google's review-snippet spam policies.

TL;DR

Use schema.org/Review with required itemReviewed, reviewRating, author, and datePublished. Pair with AggregateRating only when underlying reviews are individually visible and independently verified. AI shopping assistants cite reviews that pass Google's anti-self-serving policies and validate cleanly in the Rich Results Test.

Definition

The Review schema is a schema.org type at https://schema.org/Review that represents a single evaluative opinion about an entity such as a Product, LocalBusiness, Service, Book, Movie, or CreativeWork. A Review carries one or more rating values, an identified author, a publication date, and a body of evaluative text. AggregateRating (https://schema.org/AggregateRating) is a related type that summarizes multiple reviews into a single rating distribution — typically ratingValue, reviewCount, bestRating, and worstRating. AI shopping experiences in ChatGPT, Perplexity Pro, Gemini, and Google AI Overviews extract these fields to ground product recommendations in citable third-party signals.
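The relationship between the two types can be sketched in minimal JSON-LD. The product name, rating values, and author are placeholders:

```json
{
  "@context": "https://schema.org",
  "@type": "Review",
  "itemReviewed": { "@type": "Product", "name": "Example Standing Desk" },
  "reviewRating": { "@type": "Rating", "ratingValue": 4, "bestRating": 5, "worstRating": 1 },
  "author": { "@type": "Person", "name": "Jane Doe" },
  "datePublished": "2025-01-15",
  "reviewBody": "Sturdy frame, quiet motor; assembly took about an hour."
}
```

This is the single-opinion unit; AggregateRating, covered below, summarizes many such units into one rating distribution.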

Why it matters

AI shopping assistants do not cite arbitrary praise. They look for structured evidence that a review is real, dated, attributable, and external to the seller. Properly marked reviews feed three downstream behaviors: ranked product carousels in Perplexity Shopping and Google AI Overviews, inline citation links in ChatGPT shopping responses, and confidence scoring in Gemini's product summarization. Reviews without structured data are typically invisible to these pipelines because the systems cannot reliably distinguish marketing copy from genuine evaluation. Review schema also feeds traditional rich-result eligibility — review snippets, star ratings in SERPs, and Merchant listings — which provides redundant signal that reinforces the AI citation layer.

Required fields

A Review entity should carry the following:

  • @type: Review
  • itemReviewed: a nested entity (Product, LocalBusiness, Organization, Service, etc.) with at least @type and name
  • reviewRating: a nested Rating with ratingValue, bestRating (default 5), worstRating (default 1)
  • author: a nested Person or Organization with a name
  • datePublished: ISO-8601 date when the review was first published
  • reviewBody: the free-text body of the review
  • publisher: the platform or site that hosts the review (recommended for third-party platforms)
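Mapping those fields onto JSON-LD produces markup like the following. All names and values are illustrative:

```json
{
  "@context": "https://schema.org",
  "@type": "Review",
  "itemReviewed": { "@type": "LocalBusiness", "name": "Harbor Coffee Roasters" },
  "reviewRating": { "@type": "Rating", "ratingValue": 5, "bestRating": 5, "worstRating": 1 },
  "author": { "@type": "Person", "name": "Sam Rivera" },
  "datePublished": "2024-11-02",
  "reviewBody": "Consistently excellent pour-overs and friendly staff.",
  "publisher": { "@type": "Organization", "name": "Example Reviews Platform" }
}
```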

For AggregateRating, include ratingValue, reviewCount (or ratingCount), bestRating, and worstRating. The aggregate must reflect verified, individually reviewable reviews — fabricated counts trigger Google's manual actions and disqualify the page from AI citation pipelines.

How AggregateRating differs

A Review describes a single opinion. An AggregateRating summarizes a set of opinions. They are typically used together: a Product carries an aggregateRating (the summary) and a review array (the constituent reviews). Confusion arises when sites publish only the aggregate without underlying reviews, or when they aggregate reviews from off-site sources without permission. Google's review-snippet policy disallows aggregating reviews collected on third-party platforms and republishing them as your own AggregateRating without clear attribution and without a destination page where each review is independently visible.
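The correct combined shape nests both under the Product. A sketch, with placeholder values; note that reviews nested inside a Product do not need their own itemReviewed, since the parent entity supplies it:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Standing Desk",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": 4.4,
    "reviewCount": 27,
    "bestRating": 5,
    "worstRating": 1
  },
  "review": [
    {
      "@type": "Review",
      "reviewRating": { "@type": "Rating", "ratingValue": 5, "bestRating": 5, "worstRating": 1 },
      "author": { "@type": "Person", "name": "Jane Doe" },
      "datePublished": "2025-01-15",
      "reviewBody": "Sturdy frame, quiet motor; assembly took about an hour."
    }
  ]
}
```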

vs Critic Review and Employer Review

Review schema is generic. Two specializations matter for AI citations:

  • CriticReview extends Review for editorial or expert reviews (publications, critics, accredited reviewers). AI search systems reportedly weight CriticReview higher for nuanced product summaries.
  • EmployerReview is used for workplace reviews and feeds Google's job-related rich results, not shopping citations.

For most ecommerce surfaces, plain Review is sufficient. Use CriticReview only when the author is a recognized publication or critic with verifiable bylines.
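A CriticReview differs from a plain Review mainly in the @type and in carrying a verifiable publication as publisher. A sketch with illustrative names:

```json
{
  "@context": "https://schema.org",
  "@type": "CriticReview",
  "itemReviewed": { "@type": "Product", "name": "Example Standing Desk" },
  "reviewRating": { "@type": "Rating", "ratingValue": 4, "bestRating": 5, "worstRating": 1 },
  "author": { "@type": "Person", "name": "Alex Chen" },
  "publisher": { "@type": "Organization", "name": "Example Tech Review Magazine" },
  "datePublished": "2025-02-20",
  "reviewBody": "After two weeks of testing, the dual motors handled a full dual-monitor load without wobble."
}
```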

Anti-spam rules and post-2024 policies

Google's review snippet policy prohibits, among other things:

  • Self-serving reviews (a business reviewing itself or its own products)
  • Aggregating reviews from third-party platforms and presenting them as first-party signal
  • Reviews that cannot be browsed individually on the page where the markup lives
  • Markup on category, hub, or homepage URLs rather than the specific item being reviewed
  • Generated or fabricated reviews

AI search systems extend these rules in practice. Pages whose review markup fails the Rich Results Test or violates self-serving rules tend to be de-ranked in AI shopping responses, and Google AI Overviews treats failed validation as evidence of low quality.

Validation workflow

Validate every page that carries Review markup using:

  1. Google's Rich Results Test (https://search.google.com/test/rich-results) for required-field coverage and policy alignment.
  2. Schema.org Validator (https://validator.schema.org/) for type-level correctness.
  3. Manual review of the rendered page to confirm each marked-up review is independently visible.
  4. Periodic spot-checks in Search Console's Enhancement reports for sudden drops in valid Review items.

Re-validate after every CMS update, theme change, or review-platform integration change.
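Before reaching for the external tools, a local pre-check in CI can catch missing required fields early. A minimal sketch — the field list comes from the requirements above; the function name and sample markup are illustrative:

```python
import json

# Required Review fields per the checklist above
REQUIRED_REVIEW_FIELDS = {
    "itemReviewed", "reviewRating", "author", "datePublished", "reviewBody",
}

def missing_review_fields(review: dict) -> set:
    """Return the required Review fields absent from a JSON-LD Review dict."""
    if review.get("@type") not in ("Review", "CriticReview"):
        return {"@type"}
    return REQUIRED_REVIEW_FIELDS - review.keys()

# Sample markup with a deliberately omitted reviewBody
markup = json.loads("""
{
  "@type": "Review",
  "itemReviewed": {"@type": "Product", "name": "Example Desk"},
  "reviewRating": {"@type": "Rating", "ratingValue": 4},
  "author": {"@type": "Person", "name": "Jane Doe"},
  "datePublished": "2025-01-15"
}
""")

print(missing_review_fields(markup))  # reports the missing reviewBody
```

A check like this does not replace the Rich Results Test — it cannot assess policy compliance or on-page visibility — but it blocks the most common structural omissions before deployment.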

AI shopping citation behavior

AI search systems differ in how they consume Review markup. Based on practitioner reports and observed citations:

  • ChatGPT (with shopping): extracts aggregateRating.ratingValue and one to three representative reviewBody excerpts to produce inline citation links to the product page.
  • Perplexity Pro Shopping: ranks products by AggregateRating-weighted authority and cites high-rated items with direct deep links.
  • Google AI Overviews: pulls review snippets into the overview when the topic is purchase-intent and the markup validates cleanly.
  • Gemini for shopping: combines review content with Merchant Center signals; markup that disagrees with Merchant feed data is typically suppressed.

Common misconceptions

  • "Higher ratingValue helps me rank." Inflated ratings are typically treated as a spam signal. Accuracy matters more than the average.
  • "I can mark up curated quotes from press." Only mark up reviews that exist on the page. Press quotes belong in Quotation or MediaReview if appropriate.
  • "AggregateRating without underlying Review entries is fine." It is not. Aggregates must derive from verifiable on-page reviews.

How to apply

  1. Map each product or service URL to the reviews displayed on that URL.
  2. Render Review JSON-LD with all required fields, one entry per displayed review.
  3. Render an AggregateRating only when at least three verified reviews exist and are individually visible.
  4. Run the Rich Results Test on the URL before deploying to production.
  5. Monitor Search Console's Review snippet enhancement report for the first 30 days post-deployment.
  6. Re-run validation quarterly and after any platform migration.

FAQ

Q: Does AggregateRating alone count for AI citations?

No. AI search systems generally require the underlying Review entries to be present and individually visible. Aggregates without underlying reviews are treated as unverified.

Q: Can I mark up reviews collected on Trustpilot or G2?

Only with documented permission and only when the reviews are republished individually on your page. Otherwise link to the source platform and let it carry its own markup.

Q: How many reviews trigger AI citation eligibility?

There is no published threshold, but practitioner reports suggest at least three verified reviews and a minimum of five to ten for stable inclusion in AI shopping carousels.

Q: Does the rating scale matter?

Yes. Always set bestRating and worstRating even when using the default 1-5 scale. Non-default scales require explicit declaration to avoid normalization errors.
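For example, a review scored on a 10-point scale (values illustrative) must declare its bounds so consumers can normalize it to the common 5-point display:

```json
{
  "@type": "Rating",
  "ratingValue": 8,
  "bestRating": 10,
  "worstRating": 1
}
```

Without the explicit bestRating of 10, a consumer assuming the default 1-5 scale would read this as an out-of-range value or misinterpret the score.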

Q: How often should review markup be revalidated?

Quarterly, plus after any platform, theme, or review-integration change. Failed validation can take Review snippets out of AI citation eligibility within days.
