## AI Assistant Tracking Requires New Metrics
**Fewer clicks can still mean more revenue, provided you're cited.** If AI adds an incremental share of sessions and those sessions convert better (AOV held constant), the revenue lift decomposes cleanly into a product of two ratios.
Let $S_0$ be baseline (non-AI) sessions, $S_{\mathrm{ai}}$ be **incremental** AI sessions (net of cannibalization), $\mathrm{CVR}_{\mathrm{avg}}$ be baseline conversion rate, and $\mathrm{CVR}_{\mathrm{ai}}$ be the conversion rate of AI-referred sessions. With AOV constant,
$
R_0=\mathrm{AOV}\cdot S_0\cdot \mathrm{CVR}_{\mathrm{avg}},\qquad
R_1=\mathrm{AOV}\cdot\big(S_0\cdot \mathrm{CVR}_{\mathrm{avg}}+S_{\mathrm{ai}}\cdot \mathrm{CVR}_{\mathrm{ai}}\big).
$
Therefore the relative lift is
$
\frac{\Delta R}{R_0}
= \frac{R_1-R_0}{R_0}
= \underbrace{\frac{S_{\mathrm{ai}}}{S_0}}_{s}\;\cdot\;
\underbrace{\frac{\mathrm{CVR}_{\mathrm{ai}}}{\mathrm{CVR}_{\mathrm{avg}}}}_{u}.
$
*Notes: defining $s$ against **baseline** sessions makes this an **exact** identity (no approximation needed). If you instead prefer $s'=\frac{S_{\mathrm{ai}}}{S_0+S_{\mathrm{ai}}}$ (share of new total sessions), then $\frac{\Delta R}{R_0}=\frac{s'}{1-s'}\,u$.*
Illustratively, if $s\approx 0.10$ and $u\approx 4.4$, then $\Delta R/R_0\approx 44\%$. In practice, validate incrementality with holdouts, verify tagging and attribution, check that AI-referred intent is comparable, and watch LTV and operational constraints.
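As a quick numeric check, here is a minimal sketch of the identity in Python (the function names are ours, not from any library):

```python
def relative_revenue_lift(s: float, u: float) -> float:
    """Exact lift Delta_R / R_0 = s * u, where s = S_ai / S_0
    (incremental AI sessions over baseline) and u = CVR_ai / CVR_avg,
    with AOV held constant."""
    return s * u

def lift_from_total_share(s_prime: float, u: float) -> float:
    """Equivalent form when s' = S_ai / (S_0 + S_ai) is the AI share
    of the *new total* sessions."""
    return s_prime / (1 - s_prime) * u

print(relative_revenue_lift(0.10, 4.4))         # 0.44 -> the ~44% example above
print(lift_from_total_share(0.10 / 1.10, 4.4))  # same lift, computed via s'
```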
Essentially, AI citations can help offset lower search CTR, provided we measure what matters. Track performance across assistants (Google AI Overviews, AI Mode, ChatGPT, Perplexity, Gemini, Claude, etc.) with three metric families:
1. **AI Citations:** your brand’s mentions/links in responses; consider weighting by prominence (e.g., top answer > collapsed source > footnote).
2. **AI-referred Sessions:** traffic from assistants; identify it with standardized UTMs plus a referrer allowlist, and prefer server-side tagging (a classification sketch follows this list).
3. **AI Session Outcomes:** CVR, RPS (revenue/session), AOV, LTV, refunds/returns.
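Since assistant referrers aren't standardized, metric 2 typically needs an allowlist plus a UTM convention. A minimal classification sketch follows; the hostnames and the `utm_medium=ai-assistant` value are illustrative assumptions you would maintain yourself, not GA4 built-ins:

```python
from urllib.parse import urlparse, parse_qs

# Illustrative allowlist: real assistant referrer hostnames vary and change,
# so maintain this set from your own server logs.
AI_REFERRER_HOSTS = {
    "chat.openai.com", "chatgpt.com",
    "perplexity.ai", "www.perplexity.ai",
    "gemini.google.com", "claude.ai",
}

def is_ai_referred(referrer: str, landing_url: str) -> bool:
    """Classify a session as AI-referred via referrer allowlist or UTM tag."""
    host = urlparse(referrer).hostname or ""
    if host in AI_REFERRER_HOSTS:
        return True
    # Standardized UTM fallback; utm_medium=ai-assistant is a convention
    # you define yourself, not a GA4 built-in.
    params = parse_qs(urlparse(landing_url).query)
    return params.get("utm_medium", [""])[0] == "ai-assistant"
```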
[Google Analytics 4 (GA4)](https://en.wikipedia.org/wiki/Google_Analytics) can capture AI-referred sessions and conversions, but tracking **AI citations** is more involved:
1. **Build** a set of 100–200 representative, long-tail queries across intents and buyer-journey stages; include branded and unbranded queries, and group them by topic.
2. **Sample** responses across assistants; record brand mentions or links to your company site/blog.
3. **Re-test** at set intervals (e.g., weekly) with adequate sample sizes; responses are stochastic, so sample each (assistant, query) pair several times per run (a sampling-loop sketch follows this list).
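A minimal sketch of the sample-and-record loop (steps 2–3), assuming a hypothetical `ask_assistant(name, query)` client and placeholder brand patterns:

```python
import re
from datetime import date

# Hypothetical brand patterns: mentions of the brand name or links to its site.
BRAND_PATTERNS = [re.compile(p, re.I) for p in (r"\bAcme\b", r"acme\.com")]

def count_citations(response_text: str) -> int:
    """Count brand mentions or links to the company site/blog in one response."""
    return sum(len(p.findall(response_text)) for p in BRAND_PATTERNS)

def sample_run(queries, assistants, ask_assistant, n_samples=5):
    """ask_assistant(name, query) -> response text is a stand-in for each
    assistant's own API client. Responses are stochastic, so each
    (assistant, query) pair is sampled n_samples times."""
    rows = []
    for q in queries:
        for a in assistants:
            for _ in range(n_samples):
                rows.append({
                    "date": date.today().isoformat(),
                    "assistant": a,
                    "query": q,
                    "citations": count_citations(ask_assistant(a, q)),
                })
    return rows
```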
Tracking AI citations also surfaces topics and third-party sites driving *rival* citations. To compare coverage more completely, compute **Share of Answer (SOA)** at the response and query-set levels.
**Per-query SOA (single response):**
$
\mathrm{SOA}(q)=\frac{\text{your brand citations in the response to }q}{\text{all brand citations in that response}}
$
**Aggregate SOA over a query set $Q$:**
$
\mathrm{SOA}(Q)=\frac{1}{|Q|}\sum_{q\in Q}\mathrm{SOA}(q).
$
*Notes: (i) “Citations” = count of your brand’s mentions or links to your company site/blog; (ii) you may use a weighted version if you score prominence.*
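Translated directly into code, with the weighted variant from note (ii) (the prominence weights shown are illustrative):

```python
def soa_per_query(your_citations: int, all_brand_citations: int) -> float:
    """Per-query SOA: your brand's citations over all brand citations in the
    response; scoring a response with no brand citations as 0.0 is a choice."""
    return your_citations / all_brand_citations if all_brand_citations else 0.0

def soa_aggregate(per_query_scores: list[float]) -> float:
    """Aggregate SOA over a query set Q: the unweighted mean of per-query SOA."""
    return sum(per_query_scores) / len(per_query_scores) if per_query_scores else 0.0

def weighted_soa(citations: list[tuple[bool, float]]) -> float:
    """Weighted variant per note (ii): each citation is (is_your_brand, weight),
    e.g. top answer=3.0, collapsed source=2.0, footnote=1.0 (illustrative)."""
    total = sum(w for _, w in citations)
    return sum(w for ours, w in citations if ours) / total if total else 0.0
```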
**Next, adapt traditional SERP rank for the AI era** with **Sub-query Ranking (SQR)**. Using the assistant-generated sub-query set $Q$ for a topic cluster, define your best organic rank for sub-query $q$ as $r_q\in\mathbb{N}\cup\{\infty\}$ (set $r_q=\infty$ if you don’t rank within the tracked window, e.g., the top 100 results). Then
$
\mathrm{SQR}@k=\frac{\left|\{\,q\in Q:\ r_q\le k\,\}\right|}{|Q|}.
$
In words, SQR@k measures the fraction of assistant-generated sub-queries where you appear in the top $k$ organic results. Track SQR by topic, across engines, and over time to expose coverage gaps and quantify gains after content updates.
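A minimal sketch of SQR@k; the sub-queries and ranks below are hypothetical:

```python
import math

def sqr_at_k(ranks: dict[str, float], k: int) -> float:
    """SQR@k: fraction of sub-queries q in Q whose best organic rank r_q <= k.
    Use math.inf where you don't rank within the tracked window."""
    if not ranks:
        return 0.0
    return sum(r <= k for r in ranks.values()) / len(ranks)

# Hypothetical sub-queries for one topic cluster:
ranks = {"best crm for startups": 3,
         "crm pricing comparison": 12,
         "crm migration checklist": math.inf}
print(sqr_at_k(ranks, k=10))  # 1/3, i.e. ~0.33
```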