By 2026, your homepage won’t matter. A chat model will decide if anyone even reaches it. 👻
The front door to your content is now Perplexity, ChatGPT, Claude, and Google’s AI Overviews. They crawl, parse, and cite. My AI research agent pulled the docs and logs - OpenAI lists GPTBot and OAI-SearchBot, Perplexity lists PerplexityBot, Anthropic lists ClaudeBot. You can allow or block them. The gatekeepers are machines.
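Here's what that allow/block decision looks like in robots.txt - the user-agent tokens are the ones each vendor documents; the disallowed path is a placeholder:

```txt
# Answer-engine crawlers (tokens from each vendor's docs)
User-agent: GPTBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /

# Everyone else gets the default rules
User-agent: *
Disallow: /admin/
```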
If your site doesn’t speak model-native, you’re invisible. “White coding” is not AI-generated code. It’s building pages that models can extract without sweating:
- Server-render the actual words - no JS labyrinth.
- Tight sections - H2/H3, anchors, stable IDs per section.
- JSON-LD - a small data box that says who-what-when, with author, sameAs links, dates.
- Freshness signals everywhere - dateModified, sitemaps with lastmod, RSS/Atom, IndexNow if you have it.
- Provenance - citations to sources, credentials, canonical URL, clear license.
- Optional but deadly - a per-article JSON mirror like /api/article/slug.json with summary, claims, references, sections.
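The first three bullets in one minimal sketch - a server-rendered heading with a stable anchor ID, plus a JSON-LD box using schema.org's Article type. Every name, URL, and date is a made-up placeholder:

```html
<h2 id="pricing-2025">Pricing changes in 2025</h2>
<p>Plans now start at $19/month.</p>

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Pricing changes in 2025",
  "author": {
    "@type": "Person",
    "name": "Jane Doe",
    "sameAs": ["https://www.linkedin.com/in/janedoe"]
  },
  "datePublished": "2025-11-02",
  "dateModified": "2025-11-20",
  "mainEntityOfPage": "https://example.com/blog/pricing-2025"
}
</script>
```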
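The freshness bullet, as a sitemap fragment per the sitemaps.org protocol - URL and date are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/blog/pricing-2025</loc>
    <lastmod>2025-11-20</lastmod>
  </url>
</urlset>
```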
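And the per-article JSON mirror from the last bullet. There's no standard for this, so the shape below is my assumption - adapt the fields to your stack; all values are placeholders:

```json
{
  "url": "https://example.com/blog/pricing-2025",
  "title": "Pricing changes in 2025",
  "summary": "Plans now start at $19/month; annual billing adds a 20% discount.",
  "dateModified": "2025-11-20",
  "claims": [
    "Base plan price dropped from $29 to $19 per month."
  ],
  "references": [
    "https://example.com/docs/pricing"
  ],
  "sections": [
    { "id": "pricing-2025", "heading": "Pricing changes in 2025" }
  ]
}
```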
Why this works: models think in chunks, entities, and timestamps. Clean structure makes extraction easy. sameAs prevents name mix-ups. Visible dates win recency battles. Server-side HTML lowers crawl cost. And citations are the new clicks - quotable pages get surfaced.
No, WordPress and Shopify aren’t dead. Bloated themes are. If your output is soup - heavy scripts, popups, thin metadata - you’ll underperform. If it’s crisp facts with structure, you’ll win. Allow legit bots in robots.txt, and remember robots.txt is a request, not an enforcement mechanism - if you really want to block a crawler, verify it against the vendor’s published IP ranges or enforce the block at your firewall. Don’t overfit to one vendor’s quirks - the internals shift, but every system converges on structure, provenance, freshness, and access.
One proof point: Perplexity already behaves like an answer engine. It pulls sources, quotes lines, and shows your favicon. Pages with clean JSON-LD, clear summary boxes, and stable anchors get cited more than “pretty” single-page apps that hide content behind scripts.
The quiet downside - optimize for machines and the web turns to oatmeal. The antidote is simple: ship unique data, show your work, cite sources. Models reward that too.
Hard take: build for the gatekeepers first, then delight the humans they send. Same country, same language - or get waved back at the border. 🛂
Want a compact starter pack - JSON-LD snippet and a /api/article example - or a Perplexity vs non-white-coded A/B test breakdown?