Query fan-out and LLM decomposition have broken the one-keyword-per-page model. This guide decodes what New Zealand & Australian marketers need to do differently.
AEO / GEO / AI Search
Keyword Strategy Guide
New Zealand & Australian Marketers
The Shift
Typical SEO keyword research is no longer fit for purpose for AI Search
Four out of five New Zealanders used AI tools in the last year (InternetNZ, 2025). The majority used them via search and chatbots to ask questions and get information. If your brand isn’t showing up in those answers, you’ve already lost at the consideration stage.
Kiwi search behaviour is shifting fast. According to the IAB New Zealand 2025 Kiwi Search Habits Survey, 96.8% of New Zealanders still name Google as their primary search engine, but that moat is narrowing.
AI chatbot use is accelerating, and the consumers arriving at purchase decisions are increasingly doing so through AI-mediated answers, not a list of blue links.
At the same time, the way AI engines process search queries is fundamentally different from how Google’s traditional ranking algorithm works. Most SEO strategies haven’t caught up. This guide is about closing that gap.
We’ll walk through why traditional keyword research is structurally broken for LLM search, how the query decomposition pipeline works, and our revised methodology, the ‘FAN Framework’, which we use at The Optimisers to build citation-eligible content ecosystems for New Zealand’s mid-market and enterprise brands.
Section 01
What Happens Before a Single Result is Retrieved
When someone types a query into Perplexity, Google AI Mode, or ChatGPT Search, the AI doesn’t immediately go looking for pages. It first transforms the query through a multi-step reformulation pipeline: cleaning, clarifying, and then decomposing the original input into multiple sub-queries before a single document is retrieved.
Google officially named this mechanism “query fan-out” at Google I/O 2025: AI Mode breaks a question into subtopics and issues a multitude of queries simultaneously on your behalf. A single user query fans out into anywhere from 6 to 20 sub-queries, each searched independently, with results synthesised into the final answer.
The five stages of this pipeline are: clean the raw input, clarify the underlying intent, decompose it into themed sub-queries, retrieve results for each sub-query independently, and synthesise the retrieved results into a final answer.
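The five stages can be sketched in miniature. This is an illustrative toy, not any platform's actual implementation: real engines use an LLM for decomposition, and the stage functions and sub-query labels below are assumptions for demonstration.

```python
# Toy sketch of a query-reformulation pipeline; real engines use LLMs, not
# string templates. Stage and type names are illustrative assumptions.

SUB_QUERY_TYPES = ["definition", "comparison", "how-to", "use case",
                   "objection", "entity expansion", "metric"]

def clean(query: str) -> str:
    """Stage 1: normalise whitespace and strip trailing punctuation."""
    return " ".join(query.split()).rstrip("?")

def decompose(query: str) -> list[str]:
    """Stage 3: fan the cleaned query out into themed sub-queries."""
    topic = clean(query)
    return [f"{topic} ({sub_type})" for sub_type in SUB_QUERY_TYPES]

# Each of these seven sub-queries would then be searched independently
# (stage 4) and the results synthesised into one answer (stage 5).
sub_queries = decompose("best KiwiSaver fund for a 35-year-old in NZ")
print(len(sub_queries))  # 7
```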
Platform Comparison
How each major platform handles this differently
The fan-out mechanism is consistent across platforms, but the implementation varies. This matters for content strategy because the sub-query space each platform generates can differ.
| Platform | Fan-Out Mechanism | Key Characteristic |
|---|---|---|
| Google AI Mode | Named “query fan-out” (Google I/O 2025) | Gemini generates parallel themed sub-queries, retrieving 20–100 candidates per sub-query |
| Perplexity | Hybrid retrieval: dense vector + BM25 + multi-stage ranking | Treats document sections as atomic retrieval units; shows sub-queries in "Steps" tab |
| ChatGPT Search | Query reformulation strongly implied by retrieval behaviour | 31% of prompts trigger web search (Nectiv, 2025); OpenAI hasn't published full architecture details |
| Microsoft Copilot | Iterative grounding loop, not pure parallel fan-out | Sequential: each result informs the next query, creating a grounding chain rather than a parallel burst |
Here’s the part most explainers skip: LLM sub-queries are non-deterministic. Research suggests only around 27% of fan-out sub-queries remain consistent across repeated searches of the same query; the majority appear only once.
That is not a bug. It is the architecture. LLMs are stochastic by design: their outputs follow a probability distribution that can be analysed statistically but not predicted precisely.
Their fan-out behaviour is further shaped by user context, session history, platform, and model version. Two users asking the same question may trigger different sub-query decompositions and receive different answers.
The implication: optimising for a fixed list of specific sub-queries means chasing a moving target. The only durable approach is optimising for comprehensive semantic coverage of a topic space, so your content is eligible regardless of which sub-queries the LLM generates.
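That variability can be quantified. The sketch below, using invented sub-query sets, measures what share of distinct sub-queries survive across repeated runs of the same anchor query, which is the kind of consistency figure the research above reports at roughly 27%.

```python
# Illustrative: measuring sub-query consistency across repeated runs of the
# same anchor query. The sample sub-query sets are invented for the example.

def consistency(runs: list[set[str]]) -> float:
    """Fraction of distinct sub-queries that appear in every run."""
    all_subs = set().union(*runs)
    stable = set.intersection(*runs)
    return len(stable) / len(all_subs)

run_a = {"kiwisaver fees nz", "kiwisaver growth vs balanced", "switch kiwisaver provider"}
run_b = {"kiwisaver fees nz", "average kiwisaver balance by age", "switch kiwisaver provider"}
run_c = {"kiwisaver fees nz", "kiwisaver first home withdrawal", "switch kiwisaver provider"}

# 2 of the 5 distinct sub-queries appear in all three runs.
print(consistency([run_a, run_b, run_c]))  # 0.4
```

Optimising for the stable core alone would miss the majority of the sub-query space, which is why coverage, not targeting, is the durable strategy.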
Section 02
Why One Page, One Keyword Is Structurally Broken
Traditional SEO is built on a simple premise: optimise each page for one primary keyword, rank highly for that keyword, get traffic. In LLM search, the user’s query is immediately decomposed into 6–20 sub-queries before any content is retrieved.
A page optimised for a single keyword is eligible for exactly one sub-query slot in a system where each slot cites only 2–7 sources. A page covering only the seed query might be completely absent from the final AI response, even though it is technically relevant.
The maths is unforgiving. In traditional Google, a number one ranking means visibility to everyone searching that term. In LLM search, you need to be citation-eligible across multiple sub-queries simultaneously.
The stakes for New Zealand brands are real and growing:
- Four in five New Zealanders (80%) used AI tools in the last year (InternetNZ, 2025)
- 62% of those users primarily used AI to ask questions or get information: the exact use case that drives brand discovery
- Globally, 35% of consumers use AI tools at the discovery stage of the purchase journey, versus just 13.6% using traditional search at that same stage (Similarweb, 2026)
- New Zealand’s AI market is projected to grow at 28.55% annually from 2025–2030, reaching NZD $20.34 billion by 2030
- AI Overviews reduce website clicks by 34.5% (Ahrefs, 2025); the zero-click shift is already happening
What traditional keyword research still gets right
Declaring traditional keyword research dead is lazy. Traditional keyword data (volume, difficulty, intent classification, CPC) is still useful; it just needs to be applied differently.
In a GEO-adapted workflow, traditional keyword data answers the prioritisation question: which sub-queries in the fan-out space are worth dedicated content versus a section within a longer piece? A sub-query with 500 monthly searches warrants its own article. One with 50 searches gets a 200-word subsection. Traditional data tells you which is which.
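As a minimal sketch of that prioritisation rule (the 500 and 50 thresholds are the article's worked example, not universal benchmarks):

```python
# Sketch of the prioritisation rule: monthly search volume decides whether a
# fan-out gap becomes a standalone article or a subsection. The thresholds
# are this article's worked example, not universal benchmarks.

def content_format(monthly_searches: int) -> str:
    if monthly_searches >= 500:
        return "standalone article"
    if monthly_searches >= 50:
        return "200-word subsection"
    return "sentence-level mention"

print(content_format(720))  # standalone article
print(content_format(90))   # 200-word subsection
```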
What traditional keyword research cannot tell you: which sub-queries LLMs are generating, which content chunks are being retrieved for those sub-queries, how visible your brand is in AI-generated answers, or whether your content architecture supports chunk-level retrieval. Those require a different framework.
| Dimension | Traditional SEO | GEO-Adapted Research |
|---|---|---|
| Primary planning unit | Single keyword | Topic + full intent cluster |
| Research goal | Find rank-worthy terms | Map the complete fan-out space |
| Volume metric | Monthly search volume | Fan-out surface area coverage |
| Optimisation target | One keyword per page | Semantic coverage across the sub-query cluster |
| Success metric | Rank position in Google | AI citation frequency + share of voice |
| Content structure driver | Keyword density and placement | Chunk-level retrievability |
| Failure mode | Algorithm update drops rankings | Sub-query coverage gaps create invisible blind spots |
Section 03
The FAN Framework: A Revised Keyword Research Methodology
The FAN Framework is a three-component approach for restructuring keyword research around the reality of LLM query decomposition. It doesn’t replace traditional keyword research. It reframes what keyword research is for.
(F) Fan-Out Mapping
(A) Authority-Signal Alignment
(N) Node Architecture
(F) Fan-Out Mapping
Fan-out mapping is the new seed keyword research. Instead of starting with a short-tail keyword and expanding outward, start with the conversational anchor query your audience actually asks an AI engine, typically 15–25 words, then decompose it into its sub-query space.
Anchor queries are how people talk to AI engines. Seed keywords are how SEOs have historically thought about content strategy. GEO requires thinking in anchor queries first.
Get anchor queries from three places:
- Your sales team (what prospects actually ask in conversations)
- Customer support tickets and onboarding questions
- Direct experimentation. Prompt AI engines on your core topics and examine the sub-queries they generate in search steps.
For a NZ professional services firm, the difference looks like this:
- Seed keyword: “business accounting software NZ”
- Anchor query: “what accounting software should I use for a growing New Zealand SME with 15 staff?”
The anchor query is what someone actually types into Perplexity. The seed keyword is what SEOs used to build pages around. Only one of these maps to how LLMs decompose intent.
The Fan-out coverage audit
For each anchor query, map the probable sub-query types the LLM will generate. Every anchor query fans out across at least seven sub-query categories. The table below shows an example for a NZ financial services brand:
| Sub-Query Type | Example ("best KiwiSaver fund for a 35-year-old in NZ") | Covered? |
|---|---|---|
| Definition | "What is KiwiSaver and how does it work?" | Y / N |
| Comparison | "KiwiSaver growth fund vs balanced fund returns NZ" | Y / N |
| How-to | "How to switch KiwiSaver providers in New Zealand" | Y / N |
| Use case | "Best KiwiSaver for first home buyer NZ" | Y / N |
| Objection | "Is KiwiSaver worth it if markets are down?" | Y / N |
| Entity expansion | "Fisher Funds vs Simplicity KiwiSaver fees comparison" | Y / N |
| Metric | "Average KiwiSaver balance by age New Zealand 2025" | Y / N |
Gaps in the “Covered?” column are content briefs waiting to be written. If you have eight anchor queries and seven sub-query types each, you have a potential map of 56 content opportunities, most of which your competitors haven’t thought to target.
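The audit itself is simple enough to express as data. In this sketch, the mapping of anchor queries to covered sub-query types is invented for illustration:

```python
# Sketch of the fan-out coverage audit: anchor queries crossed with the seven
# sub-query types, with uncovered cells surfaced as a brief backlog.
# The coverage data below is invented for the example.

SUB_QUERY_TYPES = ["definition", "comparison", "how-to", "use case",
                   "objection", "entity expansion", "metric"]

def audit(coverage: dict[str, set[str]]) -> list[tuple[str, str]]:
    """Return (anchor query, missing sub-query type) pairs: the brief backlog."""
    gaps = []
    for anchor, covered in coverage.items():
        for sq_type in SUB_QUERY_TYPES:
            if sq_type not in covered:
                gaps.append((anchor, sq_type))
    return gaps

coverage = {
    "best KiwiSaver fund for a 35-year-old in NZ": {"definition", "how-to"},
}
print(len(audit(coverage)))  # 5 uncovered sub-query types for this anchor
```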
(N) Node Architecture
LLMs don’t retrieve pages. They retrieve passages. A 3,000-word article is not the unit of retrieval; individual paragraphs and sections are. Content architecture is itself a keyword research output.
Node architecture means structuring content so every significant section is a self-contained, independently retrievable unit: an atomic chunk that answers one sub-query completely, without requiring context from earlier in the article.
Three rules for node architecture
| Rule | What It Means | NZ Example |
|---|---|---|
| Every H2 opens with a standalone direct answer | The first 30–60 words must answer the section's question completely, as if it's the only content the reader sees | "KiwiSaver fees vary by provider and fund type. As of 2025, annual management fees in New Zealand range from 0.35% (Simplicity) to over 1.5% for actively managed funds." |
| Definitions are always explicit | "KiwiSaver is New Zealand's voluntary, work-based savings scheme..." not "KiwiSaver, which most Kiwis have heard of..." | LLMs match explicit definitions. Implied definitions are invisible to retrieval |
| Statistics carry full context | Every quantified claim needs: a number, a population, an action, a timeframe, and a source | "87% of NZ organisations report using AI in some capacity in 2025 (Datacom State of AI Index, July 2025)", not just "AI adoption is high in NZ" |
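The first rule is mechanically checkable. A rough sketch, using word count of the opening paragraph as a crude proxy for a standalone direct answer (a real content linter would need far more nuance):

```python
# Rough lint for the first node-architecture rule: does each H2 section open
# with an answer of roughly 30-60 words? Word count is a crude proxy; this is
# a sketch, not a production content linter.

import re

def first_answer_lengths(markdown: str) -> dict[str, int]:
    """Word count of the first paragraph under each ## heading."""
    sections = re.split(r"^## ", markdown, flags=re.MULTILINE)[1:]
    result = {}
    for section in sections:
        lines = section.split("\n")
        heading = lines[0].strip()
        body = "\n".join(lines[1:]).strip()
        first_para = body.split("\n\n")[0]
        result[heading] = len(first_para.split())
    return result

doc = """## KiwiSaver fees
KiwiSaver fees vary by provider and fund type. As of 2025, annual management
fees in New Zealand range from 0.35% to over 1.5% for actively managed funds.
Fees compound over decades, so even small percentage differences materially
change final balances.

More detail follows here.
"""
lengths = first_answer_lengths(doc)
print(all(30 <= n <= 60 for n in lengths.values()))  # True
```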
Section 04
The GEO Keyword Research Workflow: Step by Step
Here is FAN applied as an end-to-end process. This extends but does not replace traditional keyword research.
- Define your anchor queries. Collect 5–10 conversational questions your audience asks AI engines about your core topics. These should be 15–25 words and phrased as questions. Sources: sales conversations, support tickets, community forums (Reddit NZ, TradeMe forums, Seek forums), and direct AI engine testing.
- Run your Fan-Out Coverage Audit. For each anchor query, map the seven sub-query types against your existing content. Document gaps. These gaps are your prioritised content brief list.
- Run traditional keyword research on the gap list. Take every uncovered sub-query and validate with a keyword tool. Volume and difficulty determine whether a gap becomes a standalone article (500+ monthly searches) or a section within an existing piece (50–200 searches).
- Brief content to FAN standards. Every content brief must specify: the anchor query, the fan-out map, required statistics with sources (prioritise NZ-specific data from Stats NZ, MBIE, Datacom, InternetNZ, and IAB NZ), node architecture requirements, and internal linking.
- Audit existing content for fan-out coverage. Pull your top 20 traffic-driving pages and run them through the fan-out mapping exercise. How many sub-query types does each page cover? Pages addressing only one sub-query type are citation-vulnerable: they can be entirely omitted from LLM answers on their own topic despite strong traditional rankings.
- Measure AI citation frequency, not just rank. Keyword rank positions alone no longer tell you whether you are visible in AI answers. See Section 5 for the right measurement stack.
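Step 5 reduces to a simple filter once the mapping exercise is done. The page-to-types data below is invented for illustration:

```python
# Sketch of step 5: flag existing pages that cover only one sub-query type.
# The page-to-types mapping would come from a manual fan-out mapping
# exercise; the data here is illustrative.

def citation_vulnerable(pages: dict[str, set[str]]) -> list[str]:
    """Pages addressing a single sub-query type risk omission from AI answers."""
    return [url for url, types in pages.items() if len(types) <= 1]

pages = {
    "/kiwisaver-guide": {"definition", "comparison", "how-to", "metric"},
    "/kiwisaver-fees": {"metric"},
}
print(citation_vulnerable(pages))  # ['/kiwisaver-fees']
```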
Where to look
Finding anchor queries for NZ brands
New Zealand’s relatively small digital footprint means local signals are often more valuable than global ones for anchor query research. Look to:
- Sales call transcripts and Zoom recordings: what language do NZ prospects actually use?
- Google Search Console query reports filtered for question-format queries (who, what, how, where, why, should, can)
- Perplexity’s “Steps” tab: run your core topics and observe the sub-queries it generates live
- Google AI Mode’s visible search steps in the NZ SERP
- Reddit NZ and NZ-specific Facebook and LinkedIn groups (tools like Idea Ape can surface Reddit insights)
- Stats NZ, MBIE, and InternetNZ research reports: the questions their research answers are often the questions your audience is asking AI
Section 05
What to stop measuring, and what to start
The measurement shift follows directly from the methodology shift. If you are still reporting primarily on keyword rank positions, you are measuring the wrong game.
Research from SEOClarity found that 25% of the top 1,000 URLs cited by ChatGPT over one week had zero organic visibility in Google. Separately, Status Labs citation research found that only 12% of URLs cited by ChatGPT, Perplexity, and Copilot rank in Google’s top 10. Rank is still the entry fee — but it is no longer the scorecard.
Here is the measurement stack that maps to LLM visibility:
| KPI | What It Measures | Why It Matters for NZ Brands |
|---|---|---|
| AI citation frequency | How often you are cited across a tracked query cluster in LLM responses | The core GEO metric. Tracks actual presence in AI-generated answers. |
| Brand mention rate | Brand mentioned in AI responses, with or without a link | Awareness signal. NZ brands often appear without a citation link — this still influences decisions. |
| Fan-out coverage score | % of sub-query space where you have indexed, retrievable content | Identifies structural gaps before they cost citations. |
| Share of voice in AI answers | Your citations vs competitors across tracked topic clusters | Particularly relevant in NZ's concentrated market (often just 3–5 competitors per category). |
| Zero-click rate | % of searches answered in SERP without a click | High zero-click rates signal where GEO visibility matters most for that query. |
| AI-referred traffic | Direct traffic from ChatGPT, Perplexity, Copilot referral domains | Measurable in GA4. Growing fast in NZ. Benchmark now before it accelerates. |
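Two of these KPIs fall out of the same hand-tracked citation log. A sketch, with an invented log format and domains:

```python
# Sketch of two metrics from the table: AI citation frequency and share of
# voice, computed from a hand-tracked log of LLM answers. The log format
# and domain names are invented for the example.

from collections import Counter

def share_of_voice(citations: list[str], brand: str) -> float:
    """Brand's citations as a share of all citations in the tracked cluster."""
    counts = Counter(citations)
    total = sum(counts.values())
    return counts[brand] / total if total else 0.0

# Each entry: the domain cited in one tracked LLM answer.
log = ["ourbrand.co.nz", "competitor-a.co.nz", "ourbrand.co.nz",
       "competitor-b.co.nz", "ourbrand.co.nz"]
print(share_of_voice(log, "ourbrand.co.nz"))  # 0.6
```

Citation frequency is the brand's raw count in the log; share of voice normalises it against competitors, which matters in NZ's small, concentrated categories.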
A note on AI-referred traffic in the NZ context
AI-referred traffic is now measurable in web analytics platforms such as GA4 via referral domains including chat.openai.com, perplexity.ai, and copilot.microsoft.com. While absolute volumes remain modest in New Zealand compared to the US, the trajectory matters more than the current number. Brands that establish AI citation authority now will compound that advantage as adoption accelerates.
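A sketch of that referral-domain filter, using only the domains named above (a real GA4 setup would also need to catch newer domains and subdomain variants):

```python
# Sketch of classifying AI-referred sessions by referrer domain, mirroring a
# GA4 referral-domain filter. Only the domains named in the article are
# included; real setups should extend this list.

from urllib.parse import urlparse

AI_REFERRERS = {"chat.openai.com", "perplexity.ai", "copilot.microsoft.com"}

def is_ai_referred(referrer_url: str) -> bool:
    host = urlparse(referrer_url).netloc.lower()
    return host.removeprefix("www.") in AI_REFERRERS

print(is_ai_referred("https://www.perplexity.ai/search?q=kiwisaver"))  # True
print(is_ai_referred("https://www.google.co.nz/search?q=kiwisaver"))   # False
```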
New Zealand AI adoption is moving fast: business usage rose from 48% in 2023 to 87% in 2025 (Datacom). Among organisations with 200+ employees, adoption is already at 92%. The enterprise buyers you’re targeting are using AI search to evaluate vendors and solutions right now.
Section 06
The practical GEO brief template
Every piece of GEO-optimised content should be briefed against these five requirements:
| Brief Element | Requirement | Example |
|---|---|---|
| Anchor query | 15–25 word conversational question your audience asks AI engines | "What performance marketing agency should a NZ retail brand use for AI-era search?" |
| Fan-out map | All 7 sub-query types mapped: Definition, Comparison, How-to, Use case, Objection, Entity expansion, Metric | Definition: "What is GEO?"; Metric: "GEO citation rate benchmarks NZ" |
| Required statistics | Minimum one quantified claim per H2 section, with source. Prioritise NZ data. | "87% of NZ organisations report positive AI impact (Datacom, 2025)" |
| Node architecture | Each H2 opens with a standalone direct answer (30–60 words). Definitions explicit. Statistics carry full context. | First sentence answers the section question completely without referring back. |
| Internal links | Link from sub-query content to hub page and to other sub-query content within the cluster | Definition page links to Comparison page, How-to page, and Metric page within the same topic cluster |
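The template's checkable requirements can be enforced as a structure. The field names and checks below are a hypothetical sketch, not an established schema:

```python
# Sketch of the GEO brief as a checkable structure: a brief fails fast if its
# anchor query length or fan-out map misses the template's requirements.
# Field names are hypothetical.

from dataclasses import dataclass, field

REQUIRED_TYPES = {"definition", "comparison", "how-to", "use case",
                  "objection", "entity expansion", "metric"}

@dataclass
class GeoBrief:
    anchor_query: str
    fan_out_map: set = field(default_factory=set)

    def validate(self) -> list[str]:
        problems = []
        words = len(self.anchor_query.split())
        if not 15 <= words <= 25:
            problems.append(f"anchor query is {words} words, want 15-25")
        missing = REQUIRED_TYPES - self.fan_out_map
        if missing:
            problems.append(f"fan-out map missing: {sorted(missing)}")
        return problems

brief = GeoBrief(
    anchor_query="what accounting software should I use for a growing "
                 "New Zealand SME with fifteen staff and multiple branches",
    fan_out_map=REQUIRED_TYPES,
)
print(brief.validate())  # [] -- both checks pass
```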
FAQ
GEO Keyword Research for New Zealand Marketers
What is query fan-out in LLM search?
Query fan-out is the process by which an LLM-powered search engine decomposes a single user query into multiple sub-queries before retrieving any content. Google officially named this mechanism at Google I/O 2025. A typical query fans out into 6–20 sub-queries, each searched independently, with results synthesised into a single AI-generated answer.
Does traditional keyword research still matter for GEO?
Yes, but its role has shifted from primary target to prioritisation tool. Traditional keyword data (volume, difficulty, intent) answers which sub-queries in your fan-out space warrant a standalone article versus section-level coverage. What traditional research cannot tell you is which sub-queries LLMs are generating or how retrievable your content is at the chunk level.
Is AI search relevant for New Zealand businesses yet?
It already is. 80% of New Zealanders used AI tools in the last year (InternetNZ, 2025). 62% primarily used them to ask questions and get information. Among enterprise buyers, the audience that NZ’s mid-market B2B brands are targeting, AI search adoption is even higher. The question is no longer whether AI search is relevant. It’s whether your brand is showing up in the answers.
Why does keyword stuffing hurt GEO performance?
Keyword stuffing performs below the unoptimised baseline in LLM citation research (Princeton/Georgia Tech/IIT Delhi, 2023). LLMs reward semantic relevance, structural clarity, and quantified data, not keyword repetition. The same study found that adding statistics to content improved citation rates by up to 41%, while keyword stuffing actively degraded visibility.
Does this approach work across ChatGPT, Perplexity, Claude, and Google AI Mode?
Largely yes. Research confirms that Gemini, GPT, and Claude share 78–84% of content preference rules, meaning citation-worthiness signals are substantially consistent across platforms. The FAN framework’s core principles (comprehensive fan-out coverage and data-dense atomic chunks) apply across all major AI engines.
What's the difference between query fan-out and topic clustering?
Query fan-out is what LLMs do: decomposing a single user query into multiple sub-queries at retrieval time. Topic clustering is what you do: organising your content into hub pages and supporting articles that cover a subject comprehensively. Topic clustering is your content strategy response to the reality of query fan-out.
When you build topic clusters that map to the seven sub-query types (Definition, Comparison, How-to, Use case, Objection, Entity expansion, Metric), your content becomes retrievable across the full fan-out space.
The Window Is Open Right Now
Build Authority Before Your Competitors Do
Traditional SEO answered one question: what exact phrase should this page rank for? GEO-adapted keyword research answers a fundamentally different question: across all the sub-queries that will fan out from our audience’s AI search behaviour, what is the complete semantic space where our content needs to exist and be citation-eligible?
That second question is harder to answer. It requires mapping anchor queries rather than seed keywords, auditing fan-out coverage rather than keyword density, and measuring AI citation frequency rather than rank position.
But the practitioners who understand that AI engine visibility is won in the sub-query space, not in the main keyword, will establish topical authority in LLMs before the rest of the industry catches up.
In New Zealand’s concentrated market, being the first brand in your category to build genuine GEO authority is a durable competitive advantage.
Traditional SEO is still the entry point. You need to be indexed and discoverable for the sub-queries LLMs fan out to. But ranking is no longer the full game. The full game is building a content ecosystem comprehensive enough that regardless of which sub-queries an LLM generates from your audience’s anchor queries, your content is citation-eligible for most of them.
That’s not a minor update to your keyword brief. That’s a new job description for keyword research itself.
About The Optimisers
The Optimisers is Auckland's AI-era performance marketing agency, specialising in SEO, GEO (Generative Engine Optimisation), AEO (Answer Engine Optimisation), and Agentic SEO. We work with New Zealand's mid-market and enterprise brands to build the content ecosystems and technical foundations required to win in AI-mediated search.
