**TLDR:** While Evans correctly identifies the importance of "expert POV" and earned media, several of her core claims are dangerously misleading. Honestly, of all the marketing/AI frameworks I've analyzed, this one might be the worst. The general issues are:

1. Over-reliance on a few studies (and no references)
2. Bias against technical SEO
3. A very rigid system, with no adaptation for:
	- domains
	- emerging platforms or assistants
	- business models
	- user intents: in B2B vs B2C contexts, at different customer journey steps, and across avatars

I hope you didn't read the article. In case you did, I'm going to clear things up. Most importantly, you should never imagine any fixed hierarchies of content types, platforms, or assistants. There are no silver bullets; focus on disciplined iteration. Let's go:

## Q: Does Paid Media Have "zero influence" on AI Rankings?

**Wrong:** "Paid media has zero influence on AI rankings / You can't buy your way in."

**Reality:** Research demonstrates that paid distribution absolutely influences visibility. Wire-distributed press releases "dominate early windows" due to recency bias (65% of citations from the past year).[^18] More importantly, world-class publications (Forbes, WSJ, The Verge) offer sponsored content that feeds AI retrieval just like editorial content. With news/blogs accounting for 41–73% of citations,[^12] sponsored expert content in these outlets directly contradicts "zero influence."

## Q: Does Aggregator and Forum Content Outrank Owned Brand Content?

**Wrong:** "Aggregator & forum content outrank owned brand content; owned is the least-trusted tier."

**Reality:** This completely misses user intent. An 8,000-citation analysis shows forums/UGC at <4% overall.[^12] The citation mix reflects what users need at each stage, not universal trust tiers. News/blogs dominate not because they're inherently superior, but because they match broad informational intent.

## Q: Does Social Media Carry "medium impact" for AI Citations?

**Wrong:** "Social (Reddit, Quora, LinkedIn) carries 'medium' impact."

**Reality:** Social platforms show <4% citation rates overall,[^12] but this misses the nuance of user intent. Someone diving deeper into purchase decisions may trigger more Reddit citations in AIO product comparisons—potentially indicating _higher_ buyer intent, not medium impact. These platforms are tactical opportunities for specific query types and buyer stages, not strategic pillars.

## Q: Is Owned Brand Content the "least trusted" Source?

**Wrong:** "Owned brand content is least trusted unless structured and cited."

**Reality:** This claim is dangerously wrong. In B2B, owned content achieves the second-highest citation rate at ~17%[^27]—that's 4.25× more citations than all social platforms combined. How can owned content be both "least trusted" and the "foundation" in ROSE? The reality: B2B buyers seeking technical documentation trust vendor expertise; structure is table stakes, not a trust determinant.

## Q: Is Technical SEO an "emerging" Impact Layer?

**Wrong:** "Engine (technical SEO) is an emerging impact layer."

**Reality:** This reveals a classic PR vs. SEO bias. Technical SEO isn't "emerging"—it's foundational. Research states clearly: "AI assistants can't cite what they can't see or understand."[^30][^31] Top rankings don't even guarantee AIO citations (46.5% of cited pages rank outside the top 50),[^21] but without fetchability, you have zero chance.
Whether "emerging" means "new" (it's not) or "increasingly important" (it's always been critical), the claim ignores that technical foundations are non-negotiable across ALL platforms assistants use for retrieval, not just Google.

## Q: Does a Fixed Hierarchy (Responsive > Owned > Social > Engine) Apply Universally?

**Wrong:** "A fixed hierarchy (Responsive > Owned > Social > Engine) applies universally."

**Reality:** No universal hierarchy exists. Citation patterns shift dramatically by customer profile (B2B vs B2C),[^27] buyer journey stage (discovery vs evaluation), content type (technical docs vs reviews), platform (ChatGPT favors Wikipedia at 27%, Gemini favors YouTube),[^12] and assistant (each has distinct retrieval preferences).[^12] Google's "query fan-out" method generates multiple sub-queries,[^20] meaning visibility depends on comprehensive coverage, not hierarchical positioning.

## Q: Do Conversational Interfaces Favor Simple, Structured Answers over Branded Storytelling?

**Wrong:** "Conversational interfaces favor clear, structured answers—not branded storytelling."

**Reality:** Wrong for technical B2B contexts. Research shows assistants reward "unique data or definitive expert synthesis."[^22][^23] For complex technical areas, users need detailed documentation from businesses. For product comparisons, yes, review platforms matter. But dismissing branded storytelling ignores that proprietary data and expert analysis—often requiring narrative depth—earn citations when combined with proper structure.

## Q: Is Visibility Just About Being "the best source" Rather Than Distribution?

**Wrong:** "Visibility isn't about distribution. It's about being the best source."

**Reality:** This fundamentally misunderstands how AI assistants work. Research shows assistants have strong biases for particular content formats across platforms—video content receives 3.1× more citations than equivalent text.[^13] Being the "best source" means nothing if you're not distributed where assistants look. Plus, "best" depends entirely on user intent: technical specs for B2B evaluation, third-party reviews for B2C comparison.

## The Real Framework

Success requires understanding that AI citation patterns reflect **user intent**, not fixed hierarchies. That means you need to match content to buyer stage (across avatars) and business model.[^27]

What else do you have to do to earn citations? Read my full report: [[The No Nonsense Guide to Getting Cited by AI v9]]

[^1]: [Ahrefs (May 19, 2025)](https://ahrefs.com/blog/insights-from-56-million-ai-overviews/) found that AIO now appears in 12.8% or more of all Google searches by volume (skewing toward longer, non-branded informational queries)—nearly 2 billion appearances daily, with frequency increasing monthly and occupying most screen real estate previously held by traditional search results.

[^2]: [OpenAI (July 22, 2025)](https://openai.com/global-affairs/new-economic-analysis/) reported that ChatGPT handles ~2.5 billion prompts daily from global users (~330 million from U.S. users).

[^3]: [Ahrefs (April 17, 2025)](https://ahrefs.com/blog/ai-overviews-reduce-clicks/) analyzed 300,000 keywords and found that AIO presence correlated with organic CTR drops of 34.5% for rank-1 pages compared to similar informational keywords without AIO.

[^4]: [Seer Interactive (February 4, 2025)](https://www.seerinteractive.com/insights/ctr-aio) found that AIO reduces both organic CTR (dropping from 1.41% to 0.64% year-over-year) and paid CTR.
[^5]: [Adobe Digital Insights Quarterly Report (June 2025)](https://business.adobe.com/content/dam/dx/us/en/resources/reports/adobe-digital-insights-quarterly-report/adobe-digital-insights-quarterly-report.pdf) analyzed trillions of visits and billions of transactions, finding that: AI-referred traffic (from ChatGPT, Perplexity, Copilot, Gemini) grew 10–12× from July 2024 to February 2025; engagement metrics improved with 23% lower bounce rates, 12% more page views, and 41% longer sessions versus other traffic; conversion gap narrowed from 43% lower in July 2024 to 9% lower by February 2025; revenue per visit reached parity during 2024 holidays with travel showing 80% higher RPV and banking showing 23% higher application starts from AI referrals.

[^6]: [Semrush (June 9, 2025)](https://www.semrush.com/blog/ai-search-seo-traffic-study/) studied 500+ high-value digital marketing and SEO topics, finding that: 50% of ChatGPT 4o response links point to business/service websites; the average AI-referred visitor (from non-Google sources like ChatGPT) converts ~4.4× more than the average Google Search visitor (via AIO or not); 90% of pages cited by ChatGPT typically rank 21+ in traditional search for related queries.

[^7]: RALM (Retrieval-Augmented Language Models) encompasses various implementations differing in timing (pre‑training, fine‑tuning, inference), supervision (learned vs. fixed retrievers), and conditioning (prompted, fused, or generative). [Ram et al. (August 1, 2023)](https://arxiv.org/abs/2302.00083) formalized the RALM acronym with In-Context RALM. [Hu & Lu (June 29, 2025)](https://arxiv.org/abs/2404.19543) established RALM as the umbrella term spanning [Retrieval Augmented Generation (RAG)](https://en.wikipedia.org/wiki/Retrieval-augmented_generation) and Retrieval Augmented Understanding (RAU).

[^8]: [Zhang et al. (October 22, 2023)](https://arxiv.org/abs/2310.14393) demonstrated that pairing AI-generated passages with retrieved sources improves accuracy when knowledge conflicts exist by: generating multiple passages from parametric knowledge; retrieving external sources that may agree or conflict; matching generated and retrieved passages into compatible pairs; and processing matched pairs together using compatibility-maximizing algorithms.

[^9]: [Google's U.S. Patent No. US11769017B1 (September 26, 2023)](https://patents.google.com/patent/US11769017B1/en) describes two approaches for AIO citation: Search-First retrieves content based on query relevance then generates responses, ensuring grounding but potentially missing parametric insights; Generate-First creates responses from parametric knowledge then searches for verification sources, leveraging model understanding but requiring post-hoc verification.

[^10]: [Wu et al. (February 7, 2025)](https://arxiv.org/abs/2404.10198) found that models with lower confidence in initial responses (measured via token probabilities) more readily adopt retrieved content, while confident models resist contradictory information.

[^11]: [Xie et al. (February 27, 2024)](https://arxiv.org/abs/2305.13300) found that RALMs encountering both supportive and contradictory external sources exhibit strong [confirmation bias](https://en.wikipedia.org/wiki/Confirmation_bias) toward parametric knowledge rather than synthesizing conflicting viewpoints.
[^12]: [Search Engine Land (May 12, 2025)](https://searchengineland.com/how-to-get-cited-by-ai-seo-insights-from-8000-ai-citations-455284) analyzed ~8,000 citations across 57 queries, finding platform-specific patterns: news steady at 20–27%, blogs varying 21–46%, UGC typically under 4% (often <0.5%); Wikipedia leads ChatGPT (27%), YouTube dominates Gemini, expert review sites prominent for Perplexity (9%), Reddit most-cited for AIO; product/vendor blogs appear across engines (~7% for AIO/Gemini/Perplexity).

[^13]: [GoDataFeed (February 14, 2025)](https://www.godatafeed.com/blog/google-ai-overview-prefers-video) analyzed video citations in AIO, finding YouTube citations in 78% of product comparison searches with significant industry variation, and video content 3.1× more likely to be cited than equivalent text content.

[^14]: [SparkToro (March 10, 2025)](https://sparktoro.com/blog/new-research-google-search-grew-20-in-2024-receives-373x-more-searches-than-chatgpt/?utm_source=chatgpt.com) analyzed U.S. desktop behavior, finding Google processes ~14B searches daily versus ChatGPT's ~37.5M search-like prompts—a 373× gap. All AI tools combined represent <2% of search volume.

[^15]: [Ahrefs (February 6, 2025)](https://ahrefs.com/blog/ai-traffic-study/#) studied 3,000 websites and found ChatGPT drives 50% of AI-referred traffic, potentially delivering disproportionate value.

[^16]: [Search Engine Land (May 29, 2025)](https://searchengineland.com/mike-king-smx-advanced-2025-interview-456186) interviewed Michael King, who argued search functions as a branding channel despite industry focus on performance metrics. AI surfaces make this branding function undeniable by exposing brand information without requiring clicks, transforming non-branded searches into branded awareness within search results, creating more qualified traffic.

[^17]: [Semrush (February 3, 2025)](https://www.semrush.com/blog/chatgpt-search-insights/) analyzed 80 million clickstream records, finding that: Google showed higher navigational intent while ChatGPT showed more informational intent; SearchGPT-enabled distribution mirrored Google with increased navigational, commercial, and transactional searches; SearchGPT-disabled prompts leaned heavily informational with many falling into "unknown" intent due to longer, detailed nature.

[^18]: [Seer Interactive (June 15, 2025)](https://www.seerinteractive.com/insights/study-ai-brand-visibility-and-content-recency/) analyzed 5,000+ cited URLs across ChatGPT, Perplexity, and AIO, finding strong recency bias: 65% of citations from past year (2025), 79% from past two years, 89% from past three years. AIO showed strongest bias (85% from 2023–2025), followed by Perplexity (80%) and ChatGPT (71%).

[^19]: Content decay refers to the gradual decline in performance and relevance of online content over time, leading to decreased traffic, lower rankings, and diminished engagement. It's a natural part of the content lifecycle.

[^20]: [Google's Developer Search Documentation (June 19, 2025)](https://developers.google.com/search/docs/appearance/ai-features) confirms AIO and AI Mode use "query fan-out" technique (also called "query expansion")—issuing multiple related searches (e.g., intents, related or subtopics, specific named entities, and adjacent needs) to develop responses. Assistants retrieve sites ranking for synthetic sub-queries beyond the original query.
[^21]: [Advanced Web Ranking (July 1, 2024)](https://www.advancedwebranking.com/blog/ai-overview-study) analyzed 8,000 keywords across 16 industries, finding that top rankings don't guarantee AIO citations: 33.4% of AIO links ranked top 10 organically while 46.5% ranked outside the top 50.

[^22]: [Aggarwal et al. (June 28, 2024)](https://arxiv.org/abs/2311.09735) found that adding substance (quotes, statistics, sources) and improving writing quality increases AI citation rates more than stylistic optimization like adding technical terms or unique keywords.

[^23]: [Wan et al. (August 9, 2024)](https://arxiv.org/abs/2402.11782) showed LLMs prioritize relevance over credibility indicators humans value (scientific references, neutral tone, authoritative sources). Substantive text additions may improve AI visibility by increasing information density rather than traditional credibility, providing more semantic relevance hooks.

[^24]: Knowledge conflicts describe contradictions between parametric memory and contextual evidence. [Longpre et al. (January 12, 2022)](https://arxiv.org/abs/2109.05052) formalized conflicts as contextual contradictions to learned knowledge. [Xu et al. (June 22, 2024)](https://arxiv.org/abs/2403.08319) systematized taxonomies (context‑memory, inter‑context, intra‑memory) and mitigation guidance.

[^25]: [Qian et al. (October 15, 2024)](https://arxiv.org/abs/2310.00935) found LLMs can identify knowledge conflicts but struggle to pinpoint specific conflicting segments and provide appropriately nuanced responses.

[^26]: [Jin et al. (February 22, 2024)](https://arxiv.org/abs/2402.14409) found RALMs follow [bandwagon effect](https://en.wikipedia.org/wiki/Bandwagon_effect)/[majority rule](https://en.wikipedia.org/wiki/Majority_rule) when facing conflicting evidence, trusting evidence appearing more frequently.

[^27]: [Search Engine Land (May 12, 2025)](https://searchengineland.com/how-to-get-cited-by-ai-seo-insights-from-8000-ai-citations-455284) revealed RALMs cite company content 4.25× more in B2B versus B2C queries (17% vs. <4% across platforms): B2C queries favor review sites, tech blogs, Wikipedia, Reddit/Quora with minimal company citations; B2B queries show company sites/blogs at ~17%, plus industry publications and analyst reports; mixed queries show ~70% news and blog citations.

[^28]: [OpenAI (April 10, 2025)](https://x.com/OpenAI/status/1910378768172212636) announced that ChatGPT memory now references all past chats to provide personalized responses based on user preferences and interests for writing, advice, learning, and other applications.

[^29]: [Google's U.S. Patent No. US11769017B1 (September 26, 2023)](https://patents.google.com/patent/US11769017B1/en) indicates AIO/AI Mode may rerank retrieved content by relevancy to recent queries and update overviews based on user interaction to reflect familiarity with certain sources/content; same patent cited for a different claim.
[^30]: E.g., ensure [robots.txt](https://en.wikipedia.org/wiki/Robots.txt) and login or paywall gates don’t hide primary content from crawlers; prioritize [server-side rendering](https://en.wikipedia.org/wiki/Server-side_scripting) or static [HTML](https://en.wikipedia.org/wiki/HTML); keep pages lean; avoid heavy [DOM](https://en.wikipedia.org/wiki/Document_Object_Model) gymnastics; keep [interstitials](https://en.wikipedia.org/wiki/Interstitial_webpage) away from primary content; for complex apps (e.g., a [single-page application](https://en.wikipedia.org/wiki/Single-page_application)), provide fallbacks such as prerendered routes or static [JSON](https://en.wikipedia.org/wiki/JSON) [API endpoints](https://en.wikipedia.org/wiki/Web_API); hybrid “serve crawlers differently” setups can work (not "[cloaking](https://en.wikipedia.org/wiki/Cloaking)")—just budget for maintenance. (A sketch follows these notes.)

[^31]: E.g., write self-sufficient paragraphs; provide structured metadata using [JSON-LD](https://en.wikipedia.org/wiki/JSON-LD) with [schema.org](https://en.wikipedia.org/wiki/Schema.org) types (Organization, Person, Product) that loads without [JavaScript](https://en.wikipedia.org/wiki/JavaScript); include explicit bylines and credentials, publication and last-modified dates, and simple [changelogs](https://en.wikipedia.org/wiki/Changelog); keep schema consistent across pages (a shared `schema.json` helps); use [semantic HTML](https://en.wikipedia.org/wiki/Semantic_HTML) for headings, stable terminology, and consistent internal hyperlinks to canonical entity pages. (A sketch follows these notes.)

[^32]: With baseline sessions $S_0$ and incremental net new AI-referred sessions $S_{\mathrm{ai}}$ (gross AI sessions minus cannibalized ones), relative revenue lift is $\Delta R/R_0 = \frac{S_{\mathrm{ai}}}{S_0} \cdot \frac{\mathrm{CVR}_{\mathrm{ai}}}{\mathrm{CVR}_{\mathrm{avg}}}$ where $\mathrm{CVR}_{\mathrm{ai}}/\mathrm{CVR}_{\mathrm{avg}}$ is the conversion rate multiplier; AOV held constant. If cannibalization is unmeasured, treat this as an upper bound. (A worked example follows these notes.)

[^33]: This may require implementing UTMs, a referrer allowlist, or server-side tagging in conjunction with [Google Analytics 4 (GA4)](https://en.wikipedia.org/wiki/Google_Analytics). Additionally, as of June 17, 2025, AI Mode traffic is now included in [Google Search Console (GSC)](https://en.wikipedia.org/wiki/Google_Search_Console) performance reports, aggregated with regular search traffic; AIO data is also included, but it's not possible to isolate the performance of citations within GSC. Still, combine GSC and GA4 data for a more comprehensive view of traffic from AIO / AI Mode. (A sketch follows these notes.)

[^34]: E.g., compile ~100–200 long-tail prompts across intents, buyer stages, and user profiles; branded & unbranded; clustered by topic; mimic "query fan-out" by generating ~3–15 diverse sub-queries (e.g., adjacent needs).

[^35]: Test different levels (per query, sub-queries, topical clusters, full set), platforms (Google Search, YouTube), and AI assistants (AIO, AI Mode, ChatGPT, Perplexity, Gemini, Claude); use adequate sample sizes to account for variance; measure sentiment in AI-generated answers.

[^36]: $\mathrm{SOA}(q) = \frac{\text{your brand citations}}{\text{all brand citations}}$ for query $q$; aggregate (full set) as $\mathrm{SOA}(Q) = \frac{1}{|Q|}\sum_{q \in Q}\mathrm{SOA}(q)$. Weight by depth/placement (title, top answer, footnote) and link type (direct/indirect), and average across assistants (raw or impression-weighted).
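    A minimal Python sketch of this math, assuming a plain mapping from each query to the list of brands cited in its answer (all names hypothetical); the depth/placement and link-type weighting above is left out:

    ```python
    # Per-query and aggregate share-of-answers (SOA), unweighted version.
    from collections import Counter

    def soa_per_query(cited_brands: list[str], brand: str) -> float:
        """Your brand's citations divided by all brand citations for one query."""
        counts = Counter(cited_brands)
        total = sum(counts.values())
        return counts[brand] / total if total else 0.0

    def soa_aggregate(citations: dict[str, list[str]], brand: str) -> float:
        """Unweighted mean of per-query SOA over the full prompt set Q."""
        if not citations:
            return 0.0
        return sum(soa_per_query(c, brand) for c in citations.values()) / len(citations)

    # Two queries: SOA = (1/3 + 0/2) / 2, roughly 0.17
    demo = {"best crm for smb": ["Acme", "Rival", "Other"],
            "crm pricing": ["Rival", "Other"]}
    print(soa_aggregate(demo, "Acme"))
    ```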
[^37]: With sub-queries of the original query as set $Q$, measure coverage as % of sub-queries where you rank at any position ($\mathrm{SQR}@\infty$, i.e., $k = \infty$), with finer granularity via $\mathrm{SQR}@k = \frac{|\{q \in Q: r_q \leq k\}|}{|Q|}$ where $r_q$ is your organic rank for sub-query $q$ (set $r_q = \infty$ if unranked). Choose $k$ for preferred ranking threshold (e.g., top 3, top 10).
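    A matching sketch for coverage, assuming you track your organic rank per sub-query (hypothetical data; `None` stands in for the unranked $r_q = \infty$ case):

    ```python
    # SQR@k over a fan-out sub-query set Q.
    import math

    def sqr_at_k(ranks: dict[str, int | None], k: float = math.inf) -> float:
        """Share of sub-queries where your organic rank r_q <= k."""
        if not ranks:
            return 0.0
        hits = sum(1 for r in ranks.values() if r is not None and r <= k)
        return hits / len(ranks)

    demo = {"crm for startups": 4, "crm pricing comparison": 12, "crm api docs": None}
    print(sqr_at_k(demo, k=10))  # 1/3: only rank 4 makes the top 10
    print(sqr_at_k(demo))        # 2/3: SQR@inf counts any ranked position
    ```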
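For the fetchability checklist in footnote 30, a hedged sketch using only Python's standard-library robots.txt parser. The user-agent tokens listed are the publicly documented ones at the time of writing; verify current names before relying on them, and note this only checks robots.txt, not rendering or paywalls:

```python
# Check whether common AI crawlers are allowed to fetch a given URL.
from urllib.robotparser import RobotFileParser

# Documented tokens as of this writing; Google-Extended is a robots.txt
# control token for AI training rather than a separate crawler.
AI_CRAWLERS = ["GPTBot", "OAI-SearchBot", "PerplexityBot", "ClaudeBot", "Google-Extended"]

def fetchability_report(site: str, path: str = "/") -> dict[str, bool]:
    rp = RobotFileParser()
    rp.set_url(f"{site.rstrip('/')}/robots.txt")
    rp.read()  # fetches and parses the live robots.txt
    return {bot: rp.can_fetch(bot, f"{site.rstrip('/')}{path}") for bot in AI_CRAWLERS}

# Hypothetical domain:
# print(fetchability_report("https://example.com", "/docs/"))
```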
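For the structured-metadata advice in footnote 31, a minimal sketch that renders Organization JSON-LD as static markup at build time, so it loads without client-side JavaScript (all field values are placeholders):

```python
# Emit schema.org Organization JSON-LD into a static <script> tag.
import json

org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Co",          # placeholder
    "url": "https://example.com",  # placeholder
    "sameAs": ["https://www.linkedin.com/company/example-co"],
}

# Inject into the page <head> server-side or at static-build time:
print(f'<script type="application/ld+json">{json.dumps(org, indent=2)}</script>')
```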
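A worked numeric example of footnote 32's lift formula; the inputs are invented, with the ~4.4× conversion multiplier from footnote 6 used purely as an illustration:

```python
# Delta R / R0 = (S_ai / S0) * (CVR_ai / CVR_avg); AOV held constant.
def revenue_lift(s0: float, s_ai: float, cvr_multiplier: float) -> float:
    return (s_ai / s0) * cvr_multiplier

# 100k baseline sessions, 3k net-new AI-referred sessions, 4.4x CVR multiplier:
print(revenue_lift(s0=100_000, s_ai=3_000, cvr_multiplier=4.4))  # 0.132, i.e. ~13% lift
```

Remember the caveat in the footnote: if the 3k sessions include unmeasured cannibalization, treat the result as an upper bound.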
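Finally, for the referrer allowlist in footnote 33, a hedged sketch of the bucketing logic; the hostnames below are common examples only, not an exhaustive or guaranteed-current list, so confirm them against referrers you actually see in GA4:

```python
# Bucket a session as AI-referred based on its referrer hostname.
from urllib.parse import urlparse

AI_REFERRER_HOSTS = {"chatgpt.com", "chat.openai.com", "perplexity.ai",
                     "gemini.google.com", "copilot.microsoft.com"}  # examples; verify

def is_ai_referred(referrer_url: str) -> bool:
    host = urlparse(referrer_url).hostname or ""
    return host.removeprefix("www.") in AI_REFERRER_HOSTS

print(is_ai_referred("https://chatgpt.com/"))           # True
print(is_ai_referred("https://www.google.com/search"))  # False
```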