AEO for How-To Queries: Winning Step-by-Step Answers in AI Engines
AEO for how-to queries is the practice of engineering atomic, ordinal step blocks so answer engines like ChatGPT, Google AI Overviews, and Perplexity can lift one step—or the entire procedure—directly from your page and cite it as the source.
TL;DR
How-to queries are won by atomicity, not length. Give every step a verb-led title, a 1-3 sentence body, an explicit prerequisite, and a time estimate, then wrap the procedure in HowTo JSON-LD even though Google retired the desktop rich result in September 2023 (Google Search Central). The same structure that lets Gemini quote one step lets Perplexity cite the whole list.
What counts as a how-to query
Answer engines bucket queries by intent before they retrieve. A how-to query asks for a procedure: an ordered set of actions that produces a defined outcome. It is structurally distinct from:
- Definition queries ("what is X") — answered by a noun-phrase paragraph.
- Comparison queries ("X vs Y") — answered by a side-by-side table or trade-off list. See AEO for comparison queries.
- Recommendation queries ("best X for Y") — answered by a ranked list with criteria.
- Troubleshooting queries ("why is X broken") — answered by a cause → fix mapping.
A how-to query usually surfaces signal words: how to, steps to, guide to, set up, install, configure, file, build, fix. The user expects an ordered output. If your page returns prose paragraphs, the engine will either rewrite your content into steps (and cite a competitor that did the work) or skip you entirely.
How-To schema today: still useful, no longer a Google rich result
In August 2023 Google announced it would no longer show How-to rich results on desktop, and the feature was deprecated by September 13, 2023 (Google Search Central). That changed the SEO calculus, not the AEO one.
Two reasons to keep HowTo JSON-LD on procedural pages:
- LLM parsing. Schema.org markup remains a clean, machine-readable signal that a block is a procedure rather than an arbitrary list (schema.org/HowTo). Models trained to recognize structured data—including the retrieval and reranker stages used by Perplexity and AI Overviews—use it as a structural hint when deciding what to extract.
- Future-proofing. Google has continued to surface how-to content inside AI Overviews even after the rich-result deprecation, and the underlying property does not violate any current structured-data policy (Google general structured data guidelines).
Use HowTo when there is a single linear procedure with a defined outcome. Use ItemList (or no schema at all) when the order is interchangeable—e.g., a "best practices" list. Mixing the two confuses extractors.
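For a single linear procedure, a minimal HowTo JSON-LD block might look like the following sketch (the name, times, and step texts are illustrative placeholders, not a complete use of the schema.org/HowTo type):

```json
{
  "@context": "https://schema.org",
  "@type": "HowTo",
  "name": "Initialize a Next.js project",
  "totalTime": "PT5M",
  "step": [
    {
      "@type": "HowToStep",
      "position": 1,
      "name": "Install Node.js 20 LTS",
      "text": "Run nvm install 20 && nvm use 20, then verify with node -v."
    },
    {
      "@type": "HowToStep",
      "position": 2,
      "name": "Scaffold the project",
      "text": "Run npx create-next-app@latest my-app --ts --app --eslint."
    }
  ]
}
```

The step array's order carries the sequence signal; an ItemList for an unordered "best practices" page would drop the HowToStep type and the position implication entirely.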
Anatomy of an extractable step block
Every step on a how-to page should pass four checks. If any fails, the engine will collapse multiple steps, drop the prerequisite, or quote the wrong block.
- Atomic action. One verb, one outcome. "Install Node.js 20" is atomic. "Install Node.js 20 and configure your shell" is two steps.
- Ordinal marker. Lead with ### Step 3: or use a numbered list. Engines key on ordinal tokens to reconstruct sequence when they extract a single step.
- Verb-led title. Start with an imperative: install, run, paste, verify, file, whisk. Vague titles ("Configuration") strip the action and lower extraction confidence.
- Prerequisite + time + outcome. A 1-3 sentence body that names what must be true before the step (requires Node 18+), how long it takes (~2 minutes), and what is true after (the dev server prints "ready on :3000").
Optional but high-leverage: a one-line result indicator the user can match against their own screen, terminal, or kitchen.
Token-budget tuning per engine
Different answer engines extract different amounts of text per cited block. Optimizing for the smallest budget makes you eligible everywhere.
| Engine | Typical extracted span | Implication |
|---|---|---|
| Google AI Overviews | 40-80 words per cited chunk | Step body must be self-contained in 2-3 sentences. |
| Perplexity | 60-100 words per cited block; 3-4 sources cited per answer | Numbered steps with verb-led titles extract cleanly. |
| ChatGPT (with browsing) | 80-150 words per quoted region | Slightly more room, but it still prefers atomic blocks. |
| Claude (with web search) | 60-120 words | Treats numbered lists like Perplexity. |
These ranges are practitioner-observed, not published quotas, and align with public reporting on Perplexity's source-selection behavior (ailabsaudit.com). Hence the hard rule: write each step as if it must stand alone in 60 words. If a step needs more, it is two steps.
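The 60-word budget is easy to enforce mechanically before publishing. A minimal sketch, assuming steps are marked with "### Step N:" headings as recommended above (function names and the sample page are illustrative):

```python
import re

WORD_BUDGET = 60  # smallest common budget across the engines in the table above

def over_budget(step_body: str, budget: int = WORD_BUDGET) -> bool:
    """Return True when a step body exceeds the word budget."""
    return len(step_body.split()) > budget

def audit_steps(markdown: str, budget: int = WORD_BUDGET) -> list[int]:
    """Split a page on '### Step N:' headings and return the 1-based
    step numbers whose bodies exceed the budget."""
    # Each chunk after the first is one step body (heading stripped).
    chunks = re.split(r"^### Step \d+:.*$", markdown, flags=re.M)[1:]
    return [i for i, body in enumerate(chunks, 1) if over_budget(body, budget)]

page = """### Step 1: Install Node.js 20 LTS
Run nvm install 20 && nvm use 20. Verify with node -v.
### Step 2: Scaffold the project
""" + "word " * 80  # deliberately oversized body

print(audit_steps(page))  # step 2 is flagged as over budget
```

Running this in CI against every procedural page keeps steps eligible for the tightest extractor without manual counting.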
Worked examples
Example 1: Dev tooling — initialize a Next.js project
### Step 1: Install Node.js 20 LTS
Requires: macOS 12+ / Ubuntu 22.04+ / Windows 11 with WSL2.
Run nvm install 20 && nvm use 20. Verify with node -v (should print v20.x).
Time: ~2 minutes.

### Step 2: Scaffold the project
Run npx create-next-app@latest my-app --ts --app --eslint.
Accept the default Tailwind + src directory prompts.
Time: ~1 minute. Outcome: a my-app/ folder with package.json.
### Step 3: Start the dev server
From my-app/, run npm run dev. Open http://localhost:3000.
Outcome: the Next.js welcome page renders.
Each step has a verb-led title, an explicit prerequisite, a time estimate, and an outcome line.
Example 2: Finance task — file a US 1099-NEC for a contractor
### Step 1: Collect the contractor's W-9
Requires: total payments to the contractor in the calendar year ≥ $600.
Request a signed W-9 before issuing the 1099. Outcome: legal name, address, and TIN on file.

### Step 2: Generate the 1099-NEC
Use payroll software (Gusto, QuickBooks) or the IRS IRIS portal.
Enter Box 1 (nonemployee compensation) with the calendar-year total.
Time: ~10 minutes per contractor.
### Step 3: File with the IRS by January 31
Submit electronically via IRIS or FIRE. Mailed paper forms must be postmarked by the same date.
Outcome: confirmation receipt with submission ID.
### Step 4: Send the recipient copy
Deliver Copy B to the contractor by January 31, by mail or secure digital portal.
Outcome: contractor has the form needed for their own filing.
Note the explicit deadlines. In AI Overviews, a procedural deadline is regularly the most-quoted sentence of a how-to answer.
Example 3: Cooking — build a tonkotsu ramen broth
### Step 1: Blanch the pork bones
Requires: 2 kg pork neck and trotter bones, 6 L water.
Bring bones to a rolling boil for 10 minutes, drain, rinse under cold water.
Outcome: bones are pale; the scum is gone.

### Step 2: Begin the long simmer
Return bones to a clean pot. Cover with 6 L fresh water. Boil hard for 8-12 hours, topping up water to keep bones submerged.
Time: 8-12 hours. Outcome: the broth turns opaque white.
### Step 3: Strain and reduce
Pour through a fine-mesh strainer lined with cheesecloth.
Reduce the strained liquid by one-third over high heat.
Outcome: ~3 L of glossy, body-rich broth ready for tare.
Each block survives in isolation. A user asking ChatGPT "how long do I simmer tonkotsu bones?" receives an extractable Step 2.
Common failure modes
- Collapsed steps — combining "install and configure" into one block. Engines pick the first verb and drop the rest.
- Missing prerequisites — the step works on the writer's machine but fails for readers. Engines cite the page once, get burned by user complaints, and lower the source's authority signal.
- Weak verb choice — handle, manage, deal with are non-extractable. Replace with concrete verbs (configure, run, paste, verify).
- Ordinal drift — step numbers reset after a heading change ("Step 1, Step 2, Step 1 again"). Reconstructors lose the sequence.
- Schema-content mismatch — the JSON-LD lists 5 steps but the visible content has 7. Validators flag this and engines downgrade trust (Google general structured data guidelines).
- Buried outcome — the success criterion lives only in a closing paragraph. Extractors cite the last step's body and miss the result.
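Two of these failure modes, ordinal drift and schema-content mismatch, can be caught automatically. A hypothetical pre-publish check, assuming "### Step N:" headings and a HowTo JSON-LD string (the sample page and schema are illustrative):

```python
import json
import re

def visible_steps(markdown: str) -> list[int]:
    """Ordinal numbers of every '### Step N:' heading, in page order."""
    return [int(n) for n in re.findall(r"^### Step (\d+):", markdown, flags=re.M)]

def has_ordinal_drift(ordinals: list[int]) -> bool:
    """True when headings do not run 1, 2, 3, ... without resets or gaps."""
    return ordinals != list(range(1, len(ordinals) + 1))

def schema_mismatch(markdown: str, jsonld: str) -> bool:
    """True when the HowTo JSON-LD step count differs from the visible headings."""
    data = json.loads(jsonld)
    return len(data.get("step", [])) != len(visible_steps(markdown))

page = "### Step 1: Collect the W-9\n### Step 2: Generate the form\n### Step 1: File it\n"
schema = '{"@type": "HowTo", "step": [{"name": "a"}, {"name": "b"}]}'

print(has_ordinal_drift(visible_steps(page)))  # True: numbering resets to 1
print(schema_mismatch(page, schema))           # True: 2 schema steps vs 3 headings
```

Either check failing is a signal that extractors will lose the sequence or downgrade trust in the page.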
Implementation checklist
- [ ] Every step has a verb-led ### Step N: heading.
- [ ] Each step body is 1-3 sentences (≤60 words).
- [ ] Prerequisites stated up-front (system requirements, tools, ingredients).
- [ ] Time estimate per step.
- [ ] Result indicator per step.
- [ ] One linear procedure per page; branching variants link to sibling pages.
- [ ] HowTo JSON-LD matches the visible step list one-to-one.
- [ ] Internal link to the AEO hub and the relevant AEO content checklist.
FAQ
Q: Should I still add HowTo schema if Google removed the rich result?
Yes, when the page is a single linear procedure. Google deprecated the desktop how-to rich result in September 2023 (Google Search Central), but the markup remains a valid schema.org type and a useful structural hint for LLM-driven retrieval pipelines.
Q: How long should each step be?
One to three sentences—target ≤60 words. That is the common floor across AI Overviews (40-80w), Perplexity (60-100w), and ChatGPT (80-150w). Writing for the smallest budget makes you eligible across all three.
Q: Numbered list or ### Step headings?
Headings, when each step has a prerequisite, time estimate, and result. Numbered lists, when each step is a single sentence (e.g., a quick fix). Pick one format per page and apply it consistently—mixed structures confuse extractors.
Q: How many steps is too many?
If your procedure exceeds about 12 steps, split it. Long sequences fail the atomicity test and force users to scroll. A "phase 1 / phase 2" pattern across two linked pages extracts more cleanly than a 25-step monolith.
Q: Should I use ItemList instead of HowTo?
Use ItemList when order does not matter (e.g., "5 ways to improve sleep"). Use HowTo when each step depends on the previous one. Engines treat the two differently; mislabeling a procedure as ItemList loses the sequence signal.
Q: How do I know my how-to was cited?
Manual: ask the target query in ChatGPT, Perplexity, AI Overviews, and Claude, and inspect the citations. Automated: use a citation-tracking tool that polls the engines daily for your tracked queries.
Q: Do videos help?
Yes, when embedded with VideoObject schema and a transcript. The transcript gives extractors text to lift; the video gives users a verifiable demonstration. Pages with both tend to earn citations in both AI Overviews and Perplexity.
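A minimal VideoObject block alongside the HowTo markup might look like this sketch (all URLs, dates, and names are placeholders):

```json
{
  "@context": "https://schema.org",
  "@type": "VideoObject",
  "name": "Initialize a Next.js project",
  "description": "Three-step walkthrough: install Node 20, scaffold the app, start the dev server.",
  "uploadDate": "2025-01-15",
  "contentUrl": "https://example.com/videos/nextjs-setup.mp4",
  "thumbnailUrl": "https://example.com/thumbs/nextjs-setup.jpg"
}
```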
Related Articles
AEO Content Checklist
A 30-point AEO content checklist across five pillars (Answerability, Authority, Freshness, Structure, Entity Clarity) to make pages reliably AI-citable in 2026.
AEO for Comparison Queries: Winning 'X vs Y' and 'Best of' Answers
Tactical guide to AEO for comparison queries: structure 'X vs Y' and 'best of' answers so AI engines extract balanced, multi-entity citations from your pages.
What Is AEO? Complete Guide to Answer Engine Optimization
AEO (Answer Engine Optimization) is the practice of structuring content so AI systems and answer engines can extract it as a direct, attributed answer.