Things I will be monitoring in 2026:
1. In general: Search engines are shifting from generating the best average answer to the best individual answer by ingesting and understanding more user context.
1. Because specific answers are better, there's a huge incentive to collect even more user data.
2. New dynamic: how niche does your on-domain content need to be when LLMs can apply it to individual users for you?
3. Obviously, your content still needs to be "searchable" for certain terms. But the tried-and-true advice has been: niche down, niche down. I expect this to get more complicated as we move toward total adoption of LLMs.
2. Monitor AI assistant usage / user preferences (who, when, and why).
1. Which assistants are addressing specific market segments?
2. Will industry-specific AI assistants emerge to challenge generalists?
3. Are assistants moving up-funnel (discovery), moving down-funnel (decision), or collapsing everything?
4. This is less about "specialized models" (although that is certainly a factor in niche products like medicine or law), and more about specialized assistants.
5. Could also mention here what we discussed about OpenAI's new browser.
3. Monitor when major AI labs announce partnerships that could reshape retrieval sources
1. E.g., OpenAI partnership with WSJ
2. Copyright / legislation is worth mentioning here. For example, if a big media company sues an AI lab, those placements could be dropped entirely by that lab's assistant.
4. Monitor copyright legislation that could affect AI generation
5. Monitor citation biases, which differ between assistants and are likely to change over time
1. Industry-specific trust patterns: how AI assistants model them to provide better answers for users
2. In general: recent third-party placements get citations
3. Specific platform: Google's AI Overviews (AIO) favor YouTube for product comparisons
6. New crawlers or technical SEO requirements (e.g., robots.txt → llms.txt)
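On the crawler point: so far the emerging pattern is AI-specific user agents controlled through the existing robots.txt mechanism, alongside the proposed llms.txt file. A sketch of a robots.txt policy for AI crawlers (the crawler names are real user agents; the allow/block choices and paths are just an example):

```text
# robots.txt — example policy for AI crawlers
User-agent: GPTBot            # OpenAI's crawler
Allow: /

User-agent: Google-Extended   # controls use of content for Gemini training
Disallow: /

User-agent: ClaudeBot         # Anthropic's crawler
Allow: /blog/
Disallow: /
```

This only covers crawling/training opt-outs; llms.txt, by contrast, is a proposal for proactively giving LLMs a curated, markdown summary of your site.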
LLM-specific topics that might be too technical or not actionable enough:
1. LLM inference costs are going down and context windows are growing
1. AI assistants will analyze hundreds or thousands of sources per query instead of dozens
2. Genuine relevance and unique value will be more important than any "optimization"
3. Google specific: rank becomes less important
2. LLM release cycles: is the time / investment per release increasing or decreasing? That affects how quickly brands can influence LLMs (via training data).
3. "Dumb LLM" ↔ "AI agent" spectrum
Topics I have not looked into in depth:
1. How AI changes user intent (psychological)
2. MCP (Model Context Protocol): Can it give LLMs private data in a way that is not public to consumers? https://daisyui.com/docs/editor/claudecode/
1. Quick update: MCP servers and the MCP protocol are gaining mass adoption among developers, so LLM (AI assistant) tool calling (integrations) will get easier and easier. Eventually, assistants will know how to use any tool you give them, with no hard coding, which makes LLMs more useful. Put another way: the protocol is ushering in an era of standardization. It's like an app store for LLMs, except the "apps" are tools; this will be a new ecosystem.
3. "AI agents" (not AI bots) (probably over-hyped)
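To make the MCP point above concrete: under the hood, MCP is JSON-RPC 2.0. A host lists a server's tools (`tools/list`) and then invokes one (`tools/call`). A minimal sketch of the invocation request, where only the envelope and method name follow the spec; the tool name and arguments are hypothetical:

```python
import json

# MCP is built on JSON-RPC 2.0. This is the shape of the request a host
# sends to invoke a tool on an MCP server via the "tools/call" method.
# The tool name and its arguments below are hypothetical examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_order_status",          # hypothetical tool exposed by a server
        "arguments": {"order_id": "A-1042"}, # hypothetical tool input
    },
}

print(json.dumps(request))
```

The standardization is the point: any assistant that speaks this envelope can call any tool any server exposes, without per-integration hard coding.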