## AI Assistant Tracking Requires New Metrics
**Fewer clicks can still mean more revenue if you're cited.** Illustratively, with $AOV$ held constant, if the share of incremental AI sessions (net of cannibalization) is $\approx 0.10$ (10%) and AI-referred sessions convert at $\approx 4.4\times$ the baseline rate, then the relative revenue lift is $\approx 44\%$.[^30] This shows how AI citations can help offset lower search $CTR$, provided we measure what matters.
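In symbols (a sketch only; $s_{\mathrm{AI}}$ for the incremental AI session share and $m_{\mathrm{CVR}}$ for the conversion-rate multiple are names introduced here, not standard notation):

$$\text{relative revenue lift} \;\approx\; s_{\mathrm{AI}} \times m_{\mathrm{CVR}} \;=\; 0.10 \times 4.4 \;=\; 0.44 \;=\; 44\%.$$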
So $CTR$ alone no longer captures the full picture. Across assistants (AIO, AI Mode, ChatGPT, Perplexity, Gemini, Claude, etc.), you need to track AI-referred sessions and outcomes with [Google Analytics 4 (GA4)](https://en.wikipedia.org/wiki/Google_Analytics): use standardized UTMs plus a referrer allowlist; use server-side tagging; and track $\mathrm{CVR}$, $\mathrm{RPS}$, $AOV$, $LTV$, and refunds/returns. But those are just the business outcomes.
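A minimal sketch of the allowlist idea, assuming GA4 data exported with referrer and `utm_source` fields; the hostnames and UTM values below are illustrative assumptions, not a vetted list:

```python
from urllib.parse import urlparse

# Illustrative allowlists (assumptions): assistants change domains,
# so maintain and review your own list regularly.
AI_REFERRER_HOSTS = {
    "chatgpt.com", "chat.openai.com",  # ChatGPT
    "perplexity.ai",
    "gemini.google.com",
    "claude.ai",
}
AI_UTM_SOURCES = {"chatgpt", "perplexity", "gemini", "claude"}

def is_ai_referred(referrer: str | None, utm_source: str | None) -> bool:
    """Classify one session as AI-referred via UTM source or referrer host."""
    if utm_source and utm_source.lower() in AI_UTM_SOURCES:
        return True
    host = (urlparse(referrer).hostname or "") if referrer else ""
    return host.lower().removeprefix("www.") in AI_REFERRER_HOSTS
```

Tag sessions with this before segmenting $\mathrm{CVR}$, $\mathrm{RPS}$, $AOV$, and $LTV$ by AI-referred vs. not.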
If you want to optimize for AI assistants, you need to measure content, platform, and assistant performance against those business metrics. You need to (i) know how often you're being cited (and to what extent); (ii) find which industry blogs/news sites are driving rival citations; (iii) expose your topical coverage gaps; and (iv) quantify changes after strategy modifications (whether to content, platform, or assistant).
To pull this off, we need to develop a robust benchmark:
1. **Build** a query set $Q$ of ~100–200 representative, long-tail queries: across intents (prefer informational), buyer journeys, and user profiles; branded and unbranded; clustered by topic (an illustrative record schema appears after this list).
2. **Generate** 3–10 sub-queries per query (captures “query fan-out”).
3. **Sample** the following (with adequate sample sizes to account for variance) per query, per sub-query, and per query cluster, across the full query set, platforms, and assistants:
   - **AI Citations:** a simple count of your brand's mentions or links to your company site/blog in responses; consider weighting by depth/placement (title, top answer, footnote) and link type (direct/indirect).
   - **Share of Answer (SOA):** your AI Citations relative to all brands' citations.
     - Per-query $SOA$ (single response): $\mathrm{SOA}(q)=\frac{\text{your brand citations in the response to }q}{\text{all brand citations in that response}}$
     - Aggregate $SOA$ (over a query set $Q$): $\mathrm{SOA}(Q)=\frac{1}{|Q|}\sum_{q\in Q}\mathrm{SOA}(q).$
     - *Notes: (i) "citations" = count of your brand's mentions or links to your company site/blog; (ii) you may use a weighted version if you score prominence (e.g., top answer > collapsed source > footnote); (iii) to aggregate across assistants to measure lift and manage investments, average $\mathrm{SOA}_a(Q)$ over assistants $a$, or use impression-weighted averaging. A minimal $SOA$ sketch follows this list.*
     > Are you growing $SOA$ across your query set fast enough to turn declining $CTR$ into higher $RPS$, $LTV$, and total revenue?
   - **Sub-query Rank (SQR):** traditional SERP rank adapted for "query fan-out": for each query in $Q$, we generate ~3–10 diverse, assistant-style sub-queries (step 2), pull our best organic SERP rank for each, and measure the fraction of sub-queries where we appear in the top $k$ organic results.
     - Aggregate $SQR$ (over a sub-query set $Q$): $\mathrm{SQR}@k=\frac{\left|\{\,q\in Q:\ r_q\le k\,\}\right|}{|Q|}.$
     - *Notes: (i) here $Q$ is the assistant-style sub-query set for a query, which form topic clusters; (ii) $r_q$ is your best organic rank for $q$ (set $r_q=\infty$ if you don't rank within the window), so $\mathrm{SQR}@k$ is equivalent to recall@$k$; (iii) pick $k$ to match your funnel (e.g., $k\in\{3,10\}$); (iv) de-duplicate hosts when computing $r_q$ if you care about domain coverage rather than page count; (v) compute $\mathrm{SQR}_e@k$ per engine/assistant $e$ and average (optionally impression-weighted). A matching $SQR$ sketch follows this list.*
     > Across the sub-questions assistants might generate, how often (and how well) do you rank organically?
4. **Re-sample** at regular intervals to measure change over time.
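To make steps 1–2 concrete, here is one way (an assumption, not a prescription) to represent each benchmark query in Python:

```python
from dataclasses import dataclass, field

# Illustrative schema for entries of the query set Q; field names are
# assumptions chosen to mirror the dimensions listed in step 1.
@dataclass
class BenchmarkQuery:
    text: str                 # the representative long-tail query
    intent: str               # e.g., "informational" (preferred)
    journey_stage: str        # e.g., "awareness", "consideration", "decision"
    profile: str              # user profile / persona label
    branded: bool             # branded vs. unbranded
    cluster: str              # topic cluster label
    sub_queries: list[str] = field(default_factory=list)  # step 2 fan-out
```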
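A minimal sketch of the $SOA$ formulas above; the response shape (a `{brand: citation count}` mapping per assistant response) is an assumption, and prominence weights can be folded into the counts upstream:

```python
from statistics import mean

def soa_per_query(citations: dict[str, float], brand: str) -> float:
    """SOA(q): your brand's citations over all brand citations in one response."""
    total = sum(citations.values())
    return citations.get(brand, 0.0) / total if total else 0.0

def soa_aggregate(responses: list[dict[str, float]], brand: str) -> float:
    """SOA(Q): mean of per-query SOA over the query set Q."""
    return mean(soa_per_query(c, brand) for c in responses) if responses else 0.0

# Example: cited 2 of 5 times in one response, 1 of 1 in another -> 0.7.
print(soa_aggregate([{"us": 2, "rival": 3}, {"us": 1}], "us"))
```

Average `soa_aggregate` per assistant (or impression-weight it) to get the cross-assistant view from note (iii).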
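And a matching sketch for $\mathrm{SQR}@k$, with `None` standing in for $r_q=\infty$ (the sub-query strings are made up for the example):

```python
def sqr_at_k(best_ranks: dict[str, int | None], k: int = 10) -> float:
    """SQR@k: fraction of sub-queries whose best organic rank r_q <= k."""
    if not best_ranks:
        return 0.0
    hits = sum(1 for r in best_ranks.values() if r is not None and r <= k)
    return hits / len(best_ranks)

# Example: ranked within the top 10 for 2 of 3 sub-queries -> ~0.67.
print(sqr_at_k({"best crm for smb": 3, "crm pricing": None, "crm reviews": 7}))
```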