## AI Assistant Tracking Requires New Metrics
**Fewer clicks can still mean more revenue—if you’re cited.**
Hold **average order value** ($AOV$) constant and let
- $S_0$ = baseline sessions
- $S_{\mathrm{ai}}$ = *incremental* AI-referred sessions (net of cannibalisation)
- $\mathrm{CVR}_{\mathrm{avg}}$ = baseline conversion rate
- $\mathrm{CVR}_{\mathrm{ai}}$ = conversion rate of AI-referred sessions
If $S_{\mathrm{ai}}/S_0 \approx 0.10$ (10%) and $\mathrm{CVR}_{\mathrm{ai}}/\mathrm{CVR}_{\mathrm{avg}} \approx 4.4$ (4.4×), then the expected lift in revenue is
$$
\frac{\Delta R}{R_0}
= \frac{S_{\mathrm{ai}}}{S_0}\;
\frac{\mathrm{CVR}_{\mathrm{ai}}}{\mathrm{CVR}_{\mathrm{avg}}}
\approx 0.44 \quad (44\%).
$$
This shows why measuring the **right** assistant-specific metrics is critical: traditional $CTR$ alone can’t explain the upside.
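The arithmetic above is easy to script as a sanity check; a minimal Python sketch using the example's illustrative figures:

```python
# Relative revenue lift = incremental session share x CVR ratio,
# holding AOV constant (as in the derivation above).
def revenue_lift(ai_session_share: float, cvr_ratio: float) -> float:
    """Expected relative revenue lift dR/R0 from incremental AI sessions."""
    return ai_session_share * cvr_ratio

lift = revenue_lift(0.10, 4.4)  # 10% incremental sessions at 4.4x baseline CVR
print(f"{lift:.0%}")  # prints "44%"
```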
---
### Business-level Metrics (track in GA4)
| Metric | Why it matters | Implementation hints |
| --- | --- | --- |
| **AI-referred Sessions** | Size of the new channel | Standardised UTMs, referrer allow-list, server-side tagging |
| **AI Session Outcomes** | Profitability of the channel | $\mathrm{CVR}$, $\mathrm{RPS}$, $AOV$, $LTV$, refunds/returns |
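A minimal sketch of the referrer allow-list idea in Python; the hostnames below are illustrative, and you would maintain and version your own list:

```python
# Classify a session as AI-referred by matching its referrer hostname
# against an allow-list (including subdomains).
from urllib.parse import urlparse

AI_REFERRERS = {
    "chatgpt.com",
    "perplexity.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def is_ai_referred(referrer_url: str) -> bool:
    host = urlparse(referrer_url).hostname or ""
    # Match the host itself or any subdomain of an allow-listed host.
    return host in AI_REFERRERS or any(
        host.endswith("." + d) for d in AI_REFERRERS
    )

print(is_ai_referred("https://chatgpt.com/c/abc123"))       # True
print(is_ai_referred("https://www.google.com/search?q=x"))  # False
```

In production this check would run in your server-side tagging layer, where referrers stripped by the browser can still be recovered.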
---
### Assistant/content-level Metrics (benchmark loop)
1. **Build** a representative query set $Q$ (100–200 long-tail prompts across intents, buyer stages, user profiles; branded & unbranded; clustered by topic).
2. **Generate** 3–10 assistant-style sub-queries per prompt (captures “query fan-out”).
3. **Sample** responses across target assistants and record:
* **AI Citations** – raw count of your brand mentions/links.
* **Share of Answer** ($SOA$) – your slice of citations vs competitors.
* **Sub-query Rank** ($SQR$) – how often you appear in the top $k$ organic results for the generated sub-queries.
4. **Re-sample** at fixed intervals (e.g., weekly) to monitor lift or decay.
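The four steps above can be sketched as a single loop; `expand_subqueries` and `sample_assistant` are hypothetical stand-ins for your own fan-out and sampling clients:

```python
# One benchmark pass over the query set Q; schedule it weekly (step 4).
from dataclasses import dataclass

@dataclass
class Sample:
    assistant: str
    prompt: str
    sub_queries: list[str]   # step 2: assistant-style query fan-out
    citations: list[str]     # step 3: brands cited in the sampled response

def run_benchmark(prompts, assistants, expand_subqueries, sample_assistant):
    samples = []
    for prompt in prompts:                       # step 1: query set Q
        subs = expand_subqueries(prompt)
        for assistant in assistants:
            citations = sample_assistant(assistant, prompt)
            samples.append(Sample(assistant, prompt, subs, citations))
    return samples
```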
#### Share of Answer (SOA)
Per-query:
$$
\mathrm{SOA}(q)=
\frac{\text{your brand citations in the response to }q}
{\text{all brand citations in that response}}.
$$
Aggregate over a query set $Q$:
$$
\mathrm{SOA}(Q)=
\frac{1}{|Q|}
\sum_{q\in Q}\mathrm{SOA}(q).
$$
*Notes:* Weight citations by prominence if desired (e.g., primary answer $>$ collapsed source $>$ footnote).
Average $\mathrm{SOA}_a(Q)$ across assistants $a$—raw or impression-weighted—to spot where to invest.
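Both formulas reduce to a few lines of Python, assuming each response has already been reduced to a list of cited brand names (the names here are hypothetical):

```python
# Unweighted SOA: your brand's share of all brand citations in a response,
# then averaged over the query set Q.
from collections import Counter

def soa(citations: list[str], brand: str) -> float:
    """Per-query Share of Answer."""
    if not citations:
        return 0.0
    return Counter(citations)[brand] / len(citations)

def soa_aggregate(per_query_scores: list[float]) -> float:
    """Mean SOA over the query set Q."""
    return sum(per_query_scores) / len(per_query_scores)

print(round(soa(["acme", "rival", "acme"], "acme"), 2))  # 0.67
```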
#### Sub-query Rank (SQR)
For each assistant-generated sub-query $q\in Q$, let $r_q$ be your best organic rank (set $r_q=\infty$ if you don’t rank in the window). Then
$$
\mathrm{SQR}@k=
\frac{\lvert\{\,q\in Q:\ r_q\le k\,\}\rvert}{|Q|},
$$
which is recall@$k$. Choose $k$ to match funnel depth (e.g., $k=3$ or $k=10$).
Compute $\mathrm{SQR}_e@k$ per engine/assistant $e$ and average or weight by impressions.
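A sketch of $\mathrm{SQR}@k$ in Python, using `math.inf` for sub-queries where you do not rank in the window (matching $r_q=\infty$ above):

```python
# SQR@k as recall@k over the generated sub-queries.
import math

def sqr_at_k(best_ranks: dict[str, float], k: int) -> float:
    """Fraction of sub-queries whose best organic rank r_q is <= k."""
    if not best_ranks:
        return 0.0
    return sum(r <= k for r in best_ranks.values()) / len(best_ranks)

ranks = {"q1": 2, "q2": 7, "q3": math.inf}  # q3: no rank in the window
print(round(sqr_at_k(ranks, 3), 2))  # 0.33
```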
> **Key question:** Are you growing $SOA$ fast enough—and improving $\mathrm{SQR}@k$ on the sub-questions assistants actually generate—to turn declining $CTR$ into higher $RPS$, $LTV$, and total revenue?
---
### Implementation Gotchas
* Recover missing referrers with **server-side tagging** plus **allow-listed referrers**.
* Maintain a **brand dictionary** (name variants, product lines, tickers) for reliable citation matching.
* Log **assistant model/version, locale, timestamp** to control for response variance.
* Track **citation position** so you can weight $SOA$.
* Deduplicate hosts when computing $r_q$ if domain coverage (not page count) is what you care about.
* Measure **cannibalisation** explicitly (uplift vs holdout) when estimating $\Delta R/R_0$.
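For the brand-dictionary point above, a minimal matching sketch; the brand and its variants are hypothetical, and a production version would also handle ticker symbols and URL mentions:

```python
# Map surface variants (name spellings, product lines) to one canonical
# brand key so citation counts are consistent across responses.
import re

BRAND_DICTIONARY = {
    "acme": ["Acme", "Acme Corp", "ACME Inc"],
}

def match_brands(text: str) -> list[str]:
    hits = []
    for canonical, variants in BRAND_DICTIONARY.items():
        pattern = "|".join(re.escape(v) for v in variants)
        if re.search(rf"\b(?:{pattern})\b", text, flags=re.IGNORECASE):
            hits.append(canonical)
    return hits

print(match_brands("Sources: Acme Corp blog, rival.com"))  # ['acme']
```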