SEO After LLMs Killed Traditional Search Traffic

My Google Search Console numbers for January 2026 told a story I'd been expecting but dreading: organic clicks were down 34% year-over-year on Grizzly Peak Software. Not because my content got worse. Not because a competitor outranked me. Because a significant portion of the people who would have clicked through to my articles were getting their answers directly from ChatGPT, Claude, Perplexity, and whatever Google's AI Overview decided to synthesize from my content without sending me a single visitor.

The old SEO playbook — research a keyword, write a 2,000-word article, optimize your title tag, build some backlinks, wait for traffic — is not dead. But it has been fundamentally altered. And if you're a solo developer or small publisher who relies on organic search traffic to drive revenue, you need to understand what changed and what to do about it.

I've spent the last year adapting my approach. Some of what I tried worked. Some didn't. Here's what I've learned.


The Traffic That Disappeared

Let me be specific about what happened, because vague complaints about "AI killing SEO" aren't useful.

The traffic that disappeared falls into a clear pattern: informational queries with short, definitive answers. Things like "what is an API gateway," "how to parse JSON in Python," or "difference between REST and GraphQL." These are queries where an LLM can synthesize a perfectly adequate answer from its training data without the user ever needing to visit a source.

Google's AI Overviews made this worse. For a query like "how to set up Express middleware," Google now shows a generated answer at the top of the results page that's good enough for most people. The ten blue links still exist below it, but click-through rates on those links have cratered.

Here's the thing that took me a while to accept: this is actually reasonable behavior from the user's perspective. If I search "what port does MongoDB use" and the answer is right there in the AI overview, why would I click through to someone's blog post that spends 800 words building up to telling me it's port 27017? I wouldn't. And neither would you.

The traffic that survived is different. It's people searching for opinions, experiences, comparisons, and implementation details that require context. "Should I use Postgres or MongoDB for my SaaS" still drives clicks because people want to hear from someone who's actually done both. "How I deployed my Node.js app on DigitalOcean" still drives clicks because the answer is long, involves screenshots, and benefits from a specific person's experience.

The distinction is roughly: factual lookups are dying, experience-based content is holding.


What Google Search Console Actually Shows Now

If you haven't looked at your Search Console data through this lens, you should. Here's what I found when I segmented my content:

Tutorial-style articles with code examples: Down 20-40% in clicks, but impressions mostly stable. People are seeing the listings but clicking less because the AI overview already gave them the gist. The ones that held up best are tutorials for uncommon tool combinations — things the LLMs aren't great at synthesizing because there isn't enough training data.

Opinion and strategy pieces: Roughly flat or slightly up. Articles like "why I stopped using microservices for side projects" actually gained traffic. My theory is that as LLMs handle the factual queries, people are using Google more deliberately for opinion and analysis content.

"How I built X" case studies: Up 15-25%. These are inherently resistant to LLM summarization because the value is in the specific details of one person's experience. An LLM can tell you how to build a job board in general. It can't tell you the specific mistakes I made building mine.

Reference and glossary content: Down 50-60%. This is the category that got crushed hardest. If you built a bunch of "What is X?" pages hoping to capture top-of-funnel traffic, those pages are largely worthless now.

The takeaway: the type of content you produce matters more than ever. SEO used to be a game you could win with volume and keyword targeting. Now the content itself has to offer something an LLM can't replicate.
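If you want to run the same segmentation on your own site, the bookkeeping is easy to script. Here's a rough sketch: it assumes you've exported a Search Console performance report with page URLs, clicks, and impressions, and that your URLs encode a content type by path prefix (the prefixes below are hypothetical — map them to your own URL structure).

```javascript
// Segment Search Console rows by content type and sum clicks/impressions.
// A row looks like: { page: "https://example.com/tutorials/foo", clicks: 120, impressions: 4000 }
// The path prefixes here are made up -- adjust to your own site's layout.
const CATEGORIES = {
  "/tutorials/": "tutorial",
  "/opinions/": "opinion",
  "/case-studies/": "case-study",
  "/glossary/": "reference",
};

function categorize(pageUrl) {
  const path = new URL(pageUrl).pathname;
  for (const [prefix, category] of Object.entries(CATEGORIES)) {
    if (path.startsWith(prefix)) return category;
  }
  return "other";
}

function segmentClicks(rows) {
  const totals = {};
  for (const row of rows) {
    const cat = categorize(row.page);
    if (!totals[cat]) totals[cat] = { clicks: 0, impressions: 0 };
    totals[cat].clicks += row.clicks;
    totals[cat].impressions += row.impressions;
  }
  return totals;
}
```

Run it once over this year's export and once over last year's, and the per-category click deltas fall out of a simple comparison.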


The New SEO Playbook

Here's what I'm doing differently in 2026. This isn't theory — this is what's actually working on my sites.

Write From Experience, Not Research

The single biggest change I've made is this: I stopped writing articles based on research and started writing articles based only on things I've actually done. If I haven't personally implemented it, debugged it, or shipped it, I don't write about it.

This sounds obvious, but the old SEO model incentivized the opposite. You'd find a high-volume keyword, research the topic, and write a comprehensive article about it — even if your personal experience with the topic was shallow. That model produced a lot of content that was technically accurate but fundamentally interchangeable. Any competent writer could produce the same article. And now any competent LLM can too.

Experience-based content has a moat. When I write about the specific challenges of running a Node.js application on DigitalOcean's App Platform, I'm drawing on months of actual deployment experience. The edge cases I mention, the gotchas I warn about, the specific configuration decisions I made — those details can't be synthesized from general documentation. An LLM might give you the documentation answer. I can tell you what actually happens at 2 AM when your app runs out of memory.

Optimize for the Query After the AI Answer

Here's a pattern I've noticed: people ask an LLM a question, get a general answer, and then search Google for something more specific. The initial query goes to the LLM. The follow-up query — the one where they need depth — goes to Google.

So instead of targeting "how to implement rate limiting in Express," I target "Express rate limiting production issues" or "rate limiting Redis vs memory store tradeoffs." These are the queries people type after the LLM gave them the basics.

In practice, this means my keyword research now starts with: "What would someone search for after getting the generic answer from ChatGPT?" The answer is usually something more specific, more opinionated, or more experience-based than the original query.

Build Entity Authority, Not Just Page Authority

LLMs don't just pull from random web pages. They synthesize information and — increasingly — they attribute it. Being a recognized entity in your niche matters more now than it did in the pure-Google era.

What does this mean practically?

// JSON-LD structured data I add to every article;
// `article` is the post object from my CMS
const structuredData = {
  "@context": "https://schema.org",
  "@type": "Article",
  "author": {
    "@type": "Person",
    "name": "Shane Larson",
    "url": "https://grizzlypeaksoftware.com",
    "sameAs": [
      "https://x.com/grabordev",
      "https://www.amazon.com/stores/Shane-Larson/author/B0DX2LNMMZ"
    ],
    "jobTitle": "Software Engineer",
    "worksFor": {
      "@type": "Organization",
      "name": "Grizzly Peak Software"
    }
  },
  "publisher": {
    "@type": "Organization",
    "name": "Grizzly Peak Software",
    "url": "https://grizzlypeaksoftware.com"
  },
  "headline": article.title,
  "datePublished": article.publishDate,
  "description": article.synopsis
};

Structured data tells search engines and LLMs who you are, not just what your page says. The sameAs links connect your identity across platforms. The author information establishes you as a real person with verifiable credentials.

But structured data alone isn't enough. You need consistent identity signals: the same name, the same bio, the same areas of expertise across your website, social profiles, Amazon author page, and anywhere else you show up. LLMs are trained on all of these sources. The more consistent your identity, the more likely you are to be recognized as an authority in your space.
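To actually get that object into a page, it has to be serialized into a JSON-LD script tag. A minimal sketch for a server-rendered Node app follows — the helper name is mine, not a library API, and the `<` escaping is there so a stray `</script>` inside a string value can't close the tag early:

```javascript
// Serialize a structured-data object into a JSON-LD script tag.
// Escaping "<" as \u003c keeps any "</script>" inside string values
// from terminating the tag; JSON parsers read \u003c back as "<".
function jsonLdTag(data) {
  const json = JSON.stringify(data).replace(/</g, "\\u003c");
  return `<script type="application/ld+json">${json}</script>`;
}
```

Drop the result of `jsonLdTag(structuredData)` into the page's head during rendering and every article ships its markup automatically.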

Diversify Beyond Google

This is the one that took me the longest to act on, even though it's the most important.

If Google sends you 60% of your traffic and LLMs eat 30% of that, you just lost 18% of your total traffic with no way to get it back through traditional SEO. The math doesn't work anymore if Google is your only channel.

What I'm doing instead:

X (Twitter) as a distribution channel. I post threads summarizing key insights from articles, then link to the full piece. The engagement-to-click ratio is surprisingly good for technical content. More importantly, X content gets indexed quickly and shows up in search results, and it feeds into LLM training data.

Email newsletter. Old school, but it's traffic you own. Nobody can algorithmically decide to stop sending your newsletter to your subscribers. I should have started this years ago.

YouTube. I'm not a natural video creator, but even simple screen recordings of me walking through code get decent views. YouTube search is a different ecosystem from Google web search, and it's less affected by LLM competition because people searching YouTube specifically want video content.

Direct traffic through brand building. The percentage of my traffic that comes from people typing "grizzlypeaksoftware.com" directly has increased. This is the most resilient traffic source possible. Nobody can take it from you.


What About AI-Generated Content for SEO?

I have to address this because everyone asks: should you use LLMs to generate SEO content at scale?

My honest answer: it depends on what you're trying to accomplish, and the window is closing.

In early 2025, you could generate mediocre content at scale and rank for long-tail keywords. Google was slow to catch up. That worked for a while. I know people who made real money doing it.

By mid-2025, Google's spam detection improved significantly. The sites that were ranking with pure AI content started getting hammered. Not all of them — some are still doing fine — but the risk increased dramatically.

Here in 2026, my position is this: AI-generated content that doesn't have genuine human expertise behind it is a bad long-term bet. Google is getting better at detecting it. More importantly, if the content an AI generates is the same content any other AI could generate, what's your competitive advantage? You're producing commodity content in a market that's being flooded with commodity content.

Where AI helps me with content: outlining, editing, catching errors, formatting code examples, generating structured data. Where it doesn't: the actual insights, opinions, and experiences that make content worth reading. Those have to come from me.


The Metrics That Actually Matter Now

I've changed what I measure. Old metrics like "total organic clicks" and "average position" are still useful, but they're no longer the primary indicators of success.

What I track now:

Engaged traffic. Not just clicks, but people who stay, scroll, and interact. A visitor who reads your entire article and bookmarks it is worth more than ten visitors who bounce after reading the AI overview snippet.

Email signups per article. This tells me whether the content is compelling enough to earn trust. If someone reads your article and gives you their email address, that content is working regardless of what your click count says.

Revenue per visitor. As total traffic decreases, revenue per visitor needs to increase. This means better monetization, better affiliate placement, better calls to action. Fewer visitors spending more is a viable model. Fewer visitors spending the same is a death spiral.

Brand search volume. Are more people searching for you by name? This is the ultimate signal that your content strategy is building something durable.

LLM citations. This is new and imperfect, but I check periodically whether ChatGPT and Claude mention Grizzly Peak Software or my articles when asked relevant questions. Being cited by LLMs is the new form of organic reach.
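The checking itself is manual and fuzzy — I paste prompts into the chat interfaces and read the answers — but the bookkeeping half can be scripted. A sketch of that half, assuming you collect answer texts for a fixed set of prompts over time (the brand terms are the ones I'd watch for my own site; swap in yours):

```javascript
// Given an LLM's answer text, find which brand terms it mentions.
// Collect answers for a fixed prompt set periodically and track
// the citation rate over time.
const BRAND_TERMS = ["Grizzly Peak Software", "grizzlypeaksoftware.com", "Shane Larson"];

function citedTerms(answerText, terms = BRAND_TERMS) {
  const haystack = answerText.toLowerCase();
  return terms.filter((t) => haystack.includes(t.toLowerCase()));
}

// Fraction of collected answers that mention at least one brand term.
function citationRate(answers, terms = BRAND_TERMS) {
  if (answers.length === 0) return 0;
  const cited = answers.filter((a) => citedTerms(a, terms).length > 0).length;
  return cited / answers.length;
}
```

It's crude substring matching, but run monthly against the same prompts it turns "am I being cited?" from a feeling into a number you can chart.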


The Uncomfortable Truth

Here's what I think a lot of SEO-focused content creators don't want to hear: the era of building a business primarily on Google organic traffic is over. It's not coming back. LLMs will continue to improve. AI Overviews will expand to more query types. The percentage of searches that result in a click to an external website will continue to decline.

This doesn't mean SEO is worthless. It means SEO is necessary but not sufficient. You need it, but you can't rely on it alone.

The content creators who will thrive in this environment are the ones who have genuine expertise, build audiences they own, diversify their traffic sources, and produce content that can't be replicated by an LLM trained on the same documentation everyone else reads.

That's a higher bar than "write a 2,000-word article targeting a keyword with 5,000 monthly searches." But it's also a more defensible position. If your content strategy is built on things only you can write — because only you have the experience — then LLMs are a tailwind, not a headwind. They handle the commodity information. You provide the insights that actually matter.

I'm still figuring this out. My traffic numbers aren't where I want them to be. But the traffic I do get is more engaged, more valuable, and more likely to convert than the traffic I was getting two years ago. That's the tradeoff, and I think it's the right one.


Shane Larson is a software engineer and the founder of Grizzly Peak Software. He writes about API development, AI applications, and the business of software from his cabin in Caswell Lakes, Alaska. His book on training LLMs with Python and PyTorch is available on Amazon.
