Bing Webmaster Tools for GEO: Tracking Microsoft Copilot Citations with the AI Performance Report
The Bing Webmaster Tools AI Performance Report (public preview, February 2026) is the only first-party dashboard that exposes how often Microsoft Copilot and Bing's AI surfaces cite your domain. Verify your site in Bing Webmaster Tools, open the AI Performance tab, and use Total Citations, Average Cited Pages, and grounding queries to prioritize GEO content work.
TL;DR: Bing Webmaster Tools (BWT) added a free AI Performance report in February 2026. It is the first first-party way to measure how often your content is cited inside Microsoft Copilot answers, Bing AI summaries, and select partner AI integrations. This guide walks beginners through verifying a domain, reading the four core metrics, and weaving the data into a GEO operating cadence alongside third-party visibility trackers.
Why this report matters for GEO
For most of the past year, Generative Engine Optimization (GEO) measurement depended on third-party tools that scrape AI answer pages from outside. Practitioners had no first-party signal from any major AI provider. That changed when Microsoft released AI Performance inside Bing Webmaster Tools as a public preview in February 2026.
Microsoft positioned the launch as "an early step toward Generative Engine Optimization (GEO) tooling in Bing Webmaster Tools." For the first time, a major AI search provider publishes citation data directly to publishers — for free, with no quota, no sampling subscription, and no scraping required.
For a GEO program, that unlocks three things you could not do before:
- Confirm whether content actually grounds Copilot answers, not just whether it ranks in blue links.
- See the grounding queries — the sub-queries an LLM rewrites a user prompt into — that pull your domain into responses.
- Track citation trends over time so you can connect content edits to AI visibility.
If you only have time for one new GEO tool this quarter, AI Performance is the highest-leverage option because the data is first-party and the cost is zero.
What the AI Performance report actually tracks
The dashboard aggregates citation events from a fixed set of surfaces:
- Microsoft Copilot answers across web, Edge, Windows, and Copilot in Microsoft 365 surfaces that ground on the open web.
- AI-generated summaries in Bing search results.
- Select partner AI integrations that ground on Bing's index. Microsoft has not published the full partner list, but it is widely understood to include providers that license Bing grounding.
Coverage is therefore broader than just bing.com. The trade-off is that the report does not cover ChatGPT browsing without Bing grounding, Google AI Overviews, Perplexity, or other engines that use their own crawlers. Treat AI Performance as the authoritative Microsoft view, not a global GEO dashboard.
The four core metrics
The launch dashboard exposes four primary measures, each defined inside the official Bing Webmaster Tools help center:
- Total Citations — the total number of times your URLs appeared as a cited source in AI-generated answers across supported surfaces during the selected time range. It does not indicate placement, ranking, or how prominently the citation was rendered.
- Average Cited Pages — the mean number of unique pages from your site that were cited per day across the date range. This is a breadth signal: if it grows, more of your content is grounding answers.
- Grounding Queries — the actual sub-searches the LLM issues against Bing's index when constructing an answer. They are usually rewrites or fan-outs of the original user prompt, not the prompt itself.
- Page-level Citation Trend — citation counts and grounding queries broken down per URL so you can see which assets carry the AI surface.
Each metric is filterable by date range. Microsoft has indicated more dimensions — including a Citation Share metric and grounding query intent labels — are on the roadmap.
Prerequisites
Before you set up AI Performance you need:
- A live website with at least one URL that Bing has crawled and indexed.
- A Microsoft account that you control.
- Access to publish a small change to the site (DNS record, root file, or HTML meta tag) for domain verification.
- For multi-property organizations, a list of every domain and subdomain you want to track. Each needs its own verification.
There is no paid tier, no minimum traffic threshold, and no waitlist. The public preview is open to any verified site owner.
Step-by-step setup
1. Add and verify your site in Bing Webmaster Tools
If you already have BWT access, skip to step 2.
- Go to bing.com/webmasters and sign in with your Microsoft account.
- Choose Add a site. Enter your full domain, including the protocol.
- Pick one of the three verification options Bing offers: a DNS TXT record, an XML file uploaded to your site root, or an HTML meta tag added to your homepage's `<head>`.
- Apply the change, then click Verify.
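If you choose the meta-tag option, the snippet Bing gives you generally takes the form below. The `content` value here is a placeholder — use the site-specific token Bing displays during verification, and keep the tag in place after verification succeeds so ownership re-checks pass.

```html
<!-- Bing site verification meta tag: goes inside the homepage <head>.
     The content token below is a placeholder, not a real value. -->
<head>
  <meta name="msvalidate.01" content="YOUR-TOKEN-FROM-BING" />
</head>
```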
If you already use Google Search Console, BWT supports importing properties through the Import flow. That is the fastest way to onboard a large estate, but each property still needs Microsoft to confirm ownership before AI Performance unlocks.
2. Open the AI Performance tab
After verification, the left navigation in Bing Webmaster Tools shows an AI Performance entry. If you do not see it within a few hours of verification:
- Confirm Bing has crawled at least one page (check Site Explorer for any indexed URL).
- Sign out and back in to refresh your account scopes.
- Check that the property is the canonical hostname your traffic actually uses (a www variant verified separately from the apex will collect data only for that hostname).
3. Set a baseline before changing anything
Resist the urge to optimize on day one. Capture a 14- to 28-day baseline so you have a reference for later experiments. Export the dashboard or screenshot the trend lines, and note the current Total Citations and Average Cited Pages alongside the date.
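A baseline is more useful as a few summary numbers than as a screenshot alone. The sketch below computes mean, median, and spread of Total Citations from exported daily values; the dates and figures are illustrative, and the real BWT CSV schema may use different column names.

```python
from statistics import mean, median, pstdev

# Hypothetical daily export rows: (date, total_citations, avg_cited_pages).
# Values are illustrative, not a real BWT export.
daily = [
    ("2026-03-01", 120, 14),
    ("2026-03-02", 135, 15),
    ("2026-03-03", 98, 12),
    ("2026-03-04", 140, 16),
]

citations = [row[1] for row in daily]
baseline = {
    "days": len(daily),
    "mean_citations": mean(citations),
    "median_citations": median(citations),
    "stdev_citations": round(pstdev(citations), 1),
}
print(baseline)
```

Recording the spread alongside the mean tells you later whether a post-edit change is signal or ordinary day-to-day noise.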
4. Reconcile citations to your analytics layer
The AI Performance report does not pass through to Google Analytics, Adobe Analytics, or Plausible automatically. To correlate citations with downstream behavior, decide now how you will reconcile them. Two practical options:
- Append a UTM-style identifier to internal links inside the cited URLs so you can track when a Copilot click lands on the page. Microsoft preserves canonical URLs in citations, so any tracking should sit on outbound links from the cited page.
- Maintain a spreadsheet that joins BWT page-level data to your analytics export by URL. Refresh weekly.
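The spreadsheet join in the second option can also be scripted. The sketch below left-joins a BWT page-level export to an analytics export by URL using only the standard library; the column names (`url`, `citations`, `sessions`) are assumptions — adjust them to match your actual export headers.

```python
import csv
from io import StringIO

# Illustrative CSV snippets standing in for the two weekly exports.
bwt_csv = """url,citations
https://example.com/guide,42
https://example.com/faq,7
"""
analytics_csv = """url,sessions
https://example.com/guide,310
https://example.com/pricing,95
"""

bwt = {r["url"]: int(r["citations"]) for r in csv.DictReader(StringIO(bwt_csv))}
ga = {r["url"]: int(r["sessions"]) for r in csv.DictReader(StringIO(analytics_csv))}

# Left join on the BWT side: every cited URL, with sessions where available.
joined = {url: {"citations": c, "sessions": ga.get(url)} for url, c in bwt.items()}
print(joined)
```

A `None` in the `sessions` column flags a cited page with no tracked visits — worth checking before concluding citations drive no traffic.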
How to read the four metrics
Total Citations
Read it as a volume signal. Quarter-over-quarter growth tells you that your domain is becoming a more frequent grounding source. Treat single-day spikes with caution: Copilot personalization and partner traffic can swing daily totals significantly.
Useful framings:
- Compare Total Citations to your organic Bing impressions. A rising citation count without rising impressions suggests your blue-link search visibility is not the bottleneck — your content is being read into answers even when it does not rank visibly.
- Watch for sustained drops after publishing a rewrite. That is a regression signal worth investigating before the user-visible engine catches up.
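Because single-day spikes are noisy, a trailing rolling mean is a simple way to see the underlying trend before reacting. A minimal sketch, with hypothetical daily citation counts containing a one-day spike:

```python
def rolling_mean(values, window=7):
    """Trailing rolling mean; uses shorter windows at the start of the series."""
    out = []
    for i in range(len(values)):
        chunk = values[i - window + 1 : i + 1] if i >= window - 1 else values[: i + 1]
        out.append(round(sum(chunk) / len(chunk), 1))
    return out

# Hypothetical daily Total Citations with a spike on day 5.
daily_citations = [100, 104, 98, 102, 400, 101, 99, 103]
print(rolling_mean(daily_citations, window=7))
```

The spike still lifts the smoothed line, but far less violently — which is the point: react to the smoothed trend, not the raw daily total.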
Average Cited Pages
Read it as a breadth signal. If Total Citations grow but Average Cited Pages stay flat, you are concentrating GEO success in a small number of URLs. That is fragile. Investigate whether neighboring pages are missing the citation-readiness signals that make the top performers grounding-eligible — clear definitions, scannable structure, and verifiable claims.
Grounding Queries
This is the single most actionable section of the dashboard. Grounding queries are the fan-out questions an LLM generates from a user prompt — usually three to seven per answer — and they are different from search queries you would see in regular BWT performance reports.
Treat the grounding query list as a content briefing source:
- Clusters of queries that all touch the same concept reveal where Copilot still has knowledge gaps you could fill.
- Queries that surface your URLs but do not match your current page topic can suggest new sub-pages or anchor sections to add.
- Queries that exist but never cite you suggest competitor content owns that fan-out — a candidate for a comparison or definition article.
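Clustering grounding queries by topic can start very simply. The sketch below groups queries by their first non-stopword token — a crude stand-in for real clustering (embeddings or n-gram overlap), and the queries themselves are hypothetical:

```python
from collections import defaultdict

STOPWORDS = {"what", "is", "a", "the", "how", "to", "for", "of", "in"}

def cluster_by_keyword(queries):
    """Group queries by their first non-stopword token.
    A crude sketch; production clustering would use embeddings."""
    clusters = defaultdict(list)
    for q in queries:
        tokens = [t for t in q.lower().split() if t not in STOPWORDS]
        key = tokens[0] if tokens else "(other)"
        clusters[key].append(q)
    return dict(clusters)

# Hypothetical grounding queries pulled from the dashboard.
queries = [
    "what is a grounding query",
    "grounding query vs search query",
    "copilot citation sources",
    "how copilot picks citations",
]
print(cluster_by_keyword(queries))
```

Even this rough grouping is enough to spot a concept cluster that deserves a dedicated answer block on an existing page.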
Page-level Citation Trend
Filter by URL to see which assets carry your AI surface. The 80/20 here is informative: most domains will find that a small set of definitions, comparisons, and reference pages drive the bulk of citations. Use that list to prioritize linking, schema, and update cadence.
A weekly GEO operating cadence using AI Performance
Once the report is collecting data, fold it into a lightweight weekly review:
- Monday — pull the dashboard. Export Total Citations, Average Cited Pages, and the top 25 cited URLs. Diff against last week.
- Tuesday — read grounding queries. Cluster new queries by topic. Decide which need a dedicated answer block on an existing page.
- Wednesday — write or update. Make small, atomic edits — definitions tightened, FAQs added, LLM citation blocks polished. Avoid large rewrites that break trend continuity.
- Thursday — cross-check with third-party trackers. Open Otterly, Profound, or your tool of choice to see whether ChatGPT, Perplexity, and Google AI Overviews show similar movement. Discrepancies between BWT and external trackers are useful experiments, not bugs.
- Friday — log changes. Annotate the date and the URLs you touched on a single tracking sheet. Future-you needs the audit trail to attribute citation moves.
This cadence is intentionally light. AI Performance data refreshes at most daily, and the public preview signals can be noisy day-over-day. Weekly review is the right rhythm for most teams.
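The Monday diff step above reduces to plain set arithmetic over two weekly exports of top-cited URLs (the URLs here are hypothetical):

```python
# Hypothetical top-cited URL sets from two consecutive weekly exports.
last_week = {"/guide", "/faq", "/pricing"}
this_week = {"/guide", "/faq", "/comparison"}

newly_cited = sorted(this_week - last_week)  # pages that gained citations
dropped = sorted(last_week - this_week)      # pages that stopped being cited
print("new:", newly_cited, "dropped:", dropped)
```

The `dropped` list is the one to act on first: cross-reference it against Friday's change log to see whether a recent edit preceded the regression.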
Combining AI Performance with third-party trackers
AI Performance is authoritative for Microsoft surfaces only. To build a complete GEO picture, pair it with at least one third-party tracker:
- Use BWT for citation counts and grounding queries — first-party signals you cannot get elsewhere.
- Use third-party tools (for example, Profound, Otterly, or visibility scrapers in your stack) to capture user-visible AI answers across ChatGPT, Perplexity, Google AI Overviews, and Copilot UI.
Treat the two data sources as complementary, not redundant. BWT tells you whether the engine grounded on your content; third-party tools tell you whether users saw a citation in the rendered answer. Both can move independently.
Limits and common misreads
- No clicks, no rankings. AI Performance is not a click-through report. Pairing it with traditional Bing search performance is required to model the full impact.
- Total Citations does not equal placement. A citation can be a mid-paragraph footnote or a top-of-card source. The dashboard does not distinguish.
- Partner attribution is opaque. "Select partner AI integrations" is not enumerated. Treat aggregated totals as a Microsoft-controlled blend.
- Local business content is partially measured. Citations sourced from Bing Places data — common for "near me" queries — are not always counted as website citations even when your business is named.
- Grounding queries are not user prompts. Do not optimize a page title for a grounding query — it is an internal LLM artifact, not something a user typed.
What is on the roadmap
At SEO Week 2026, Microsoft previewed four future additions to AI Performance:
- Citation Share — the percentage of citations a site captures within a given grounding query.
- Grounding query intent labels — automated classification of queries by intent and topic.
- Page-level performance breakdowns beyond the current per-URL view.
- GEO recommendations — guidance tied to AI visibility across content structure, indexing, structured data adoption, and structured data quality.
None of these had a firm rollout date as of April 2026, but expect the dashboard to evolve quickly through the public preview phase. Re-read this guide quarterly.
FAQ
Q: Is the Bing Webmaster Tools AI Performance Report free?
Yes. The public preview launched in February 2026 and is included in the standard free Bing Webmaster Tools account. There is no paid tier, no quota, and no waitlist; you only need to verify your domain.
Q: Does AI Performance track ChatGPT or Google AI Overviews?
No. The report tracks citations on Microsoft Copilot, Bing AI summaries, and select partner AI integrations that ground on Bing's index. ChatGPT browsing without Bing grounding, Google AI Overviews, and Perplexity are out of scope and require third-party trackers.
Q: What is a grounding query?
A grounding query is a sub-query that an LLM generates from the user's original prompt to retrieve evidence from the web. A single user question often produces three to seven grounding queries. They are usually rewrites of the prompt, not the prompt itself, which is why they look different from the queries you see in traditional Bing search performance reports.
Q: How often does the AI Performance dashboard update?
Microsoft has not published a service-level commitment, but the dashboard typically refreshes daily during the public preview. Expect occasional latency or backfills in the first months while the team tunes the pipeline.
Q: Can I export AI Performance data?
The dashboard supports CSV export for the metrics and tables it displays. There is no public API as of April 2026, so automated pipelines must script the CSV export or rely on screenshots for archival. Watch the BWT release notes for an API announcement.
Q: How does AI Performance relate to GEO?
Microsoft itself describes the report as "an early step toward Generative Engine Optimization (GEO) tooling in Bing Webmaster Tools." It is the measurement infrastructure GEO has been missing on Microsoft surfaces — the first first-party way to confirm that content earns AI citations rather than just blue-link rankings. Pair it with third-party trackers to cover the full AI ecosystem.
Related Articles
Ahrefs for GEO: Content Gap Analysis and AI Visibility
Step-by-step Ahrefs for GEO tutorial: use Content Gap, Keywords Explorer, Brand Radar, AI Content Helper, and Site Audit to find AI search opportunities and ship cluster content.
AI Bot Log Analytics Tool Buyer's Checklist
Buyer's checklist for evaluating AI bot log analytics platforms that track GPTBot, ClaudeBot, and PerplexityBot crawl behavior across server logs.
AI Citation Monitoring Tool Buyer's Checklist: 30 Criteria for Evaluating Profound, Otterly, and Optiview in 2026
AI citation monitoring tool buyer's checklist with 30 weighted criteria for evaluating Profound, Otterly, Optiview, Nightwatch, and Peec in 2026.