Switching from Cursor to VS Code + Claude: Productivity Before/After

I used Cursor for about four months. I was a paying customer. I evangelized it to friends. I thought it was the future of how we'd write software.

Then I switched to VS Code with Claude Code, and I'm not going back. But this isn't a hit piece — Cursor is a good product. This is an honest accounting of what happened to my actual productivity when I made the switch, with real numbers and specific examples.


Why I Tried Cursor in the First Place

The pitch was compelling: an IDE built from the ground up around AI. Not an extension bolted onto an existing editor, but a fork of VS Code where AI was a first-class citizen integrated into every workflow.

I was tired of the extension treadmill. I had Copilot, Copilot Chat, various snippet extensions, and a growing collection of custom keybindings trying to glue it all together. Cursor promised to replace that mess with a single coherent experience.

And honestly? The first two weeks were magic. Cursor's tab completion felt smarter than Copilot's. The inline editing with Cmd+K was genuinely faster for small changes. The chat panel had better context awareness because it could read my entire codebase without me manually selecting files.

So what went wrong?


The Context Window Problem

Cursor's big selling point is its codebase-wide context. It indexes your project and can answer questions about files you haven't opened. That sounds great until you realize what it costs.

On my main project — a Node.js application with about 200 files, nothing massive — Cursor's indexing would occasionally choke. The AI responses would slow down. Sometimes the context would just be wrong, pulling in code from a completely different part of the codebase that happened to have similar variable names.

I started noticing a pattern: the bigger the context window, the more diluted the responses. When I asked Cursor to help me write a new Express route, it would pull in patterns from five different route files, average them together, and give me a Frankenstein suggestion that matched none of them exactly.

Claude Code takes a different approach. It doesn't try to index everything upfront. When I'm working in VS Code and I invoke Claude Code in the terminal, I feed it specific context. I paste the relevant code, I describe the architecture, and I get responses that are focused on exactly what I'm working on.

Less context, counterintuitively, gives better results, because the context is curated rather than comprehensive.


My Productivity Tracking Setup

I'm a nerd about this stuff, so when I decided to make the switch, I actually tracked my work for two weeks on each side. I used a simple spreadsheet with these columns:

  • Task description
  • Estimated time (before starting)
  • Actual time
  • Number of AI suggestions accepted vs. rejected
  • Number of times I had to fix AI-generated code
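Once the log exists, the per-task averages fall out of a few lines of Node.js. A minimal sketch, with illustrative field names and made-up numbers rather than my actual spreadsheet:

```javascript
// Illustrative tracking entries — values are invented, but the fields
// mirror the spreadsheet columns described above.
const tasks = [
  { task: "endpoint A", estimatedMin: 20, actualMin: 18, accepted: 4, rejected: 2, fixes: 1 },
  { task: "endpoint B", estimatedMin: 25, actualMin: 22, accepted: 5, rejected: 1, fixes: 0 },
];

// Average of one numeric column across all tracked tasks.
const avg = (key) => tasks.reduce((sum, t) => sum + t[key], 0) / tasks.length;

console.log(`avg actual time:    ${avg("actualMin")} min`);
console.log(`avg fixes per task: ${avg("fixes")}`);
console.log(`acceptance rate:    ${avg("accepted") / (avg("accepted") + avg("rejected"))}`);
```

Nothing fancy — the point of the exercise was the discipline of logging, not the analysis.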

Here's what I found across comparable tasks.

Task: Building a New API Endpoint

Cursor (average of 6 endpoints):

  • Average time: 22 minutes
  • AI suggestions accepted: 8 per endpoint
  • Post-acceptance fixes: 3 per endpoint
  • Net time saved vs. manual: ~35%

VS Code + Claude Code (average of 6 endpoints):

  • Average time: 18 minutes
  • AI suggestions accepted: 4 per endpoint
  • Post-acceptance fixes: 0.5 per endpoint
  • Net time saved vs. manual: ~45%

The difference isn't that Claude Code generates faster. It's that the code it generates requires fewer fixes. I accept fewer suggestions, but the ones I accept are right more often.

Task: Debugging a Production Issue

This is where the gap widened.

Cursor:

  • Average time to identify root cause: 35 minutes
  • The AI would suggest potential causes, but they were often generic. "Check if the database connection is active" — thanks, already did that.

VS Code + Claude Code:

  • Average time to identify root cause: 20 minutes
  • I paste the error log, the relevant code, and my hypothesis. Claude responds with specific analysis of my code, not generic debugging advice.

The key difference: with Cursor, I was waiting for the AI to figure out my codebase. With Claude Code, I was telling it about my codebase and getting targeted responses.


What Cursor Actually Does Better

I'm not going to pretend it's all one-sided. There are specific things Cursor excels at.

Inline Editing

Cursor's Cmd+K (or Ctrl+K on Windows) inline editing is genuinely best-in-class. You highlight a block of code, type a natural language instruction, and it modifies the code in place. The diff view shows you exactly what changed before you accept.

VS Code with Claude Code doesn't have this. When I want Claude to modify existing code, I copy it to the terminal, describe what I want changed, and paste the result back. It works, but it's more steps. Claude Code does have the ability to edit files directly, but the workflow is different — it's agentic rather than inline.

Multi-File Refactoring

If you need to rename a function across fifteen files, Cursor handles this more smoothly. Its codebase awareness means it can find and modify all references in one operation.

With Claude Code, I'd typically do this with a combination of VS Code's built-in rename symbol feature and Claude for the trickier cases where the rename has semantic implications.
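In practice the split looks something like this: let plain `grep` (or VS Code's search) enumerate the files that still reference the old name, run the mechanical renames through rename symbol, and hand only the semantically tricky files to Claude. A sketch — the demo files and the `getUserData` name are invented for the example:

```shell
# Create a couple of throwaway files so the example is self-contained.
mkdir -p /tmp/refactor-demo/src
echo 'const user = getUserData(id);' > /tmp/refactor-demo/src/routes.js
echo '// no references here' > /tmp/refactor-demo/src/util.js

# List every file that still references the old name
# (-r recurse, -l print filenames only).
grep -rl 'getUserData' /tmp/refactor-demo/src --include='*.js'
```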

Onboarding to New Codebases

When I clone an unfamiliar repository, Cursor's ability to answer questions about the entire codebase immediately is valuable. "What does this project do? Where are the database models? How does authentication work?" — Cursor can answer these without me pointing it at specific files.

For my own projects where I know the codebase intimately, this advantage disappears entirely.


What VS Code + Claude Does Better

Conversation Quality

This is the biggest difference and the hardest to quantify. Claude's responses are just better. More nuanced, more aware of edge cases, more willing to tell me when my approach is wrong.

With Cursor, I'd sometimes get responses that felt like they were optimizing for speed over correctness. "Here's the code you asked for" — without mentioning that the approach has a race condition, or that there's a simpler way to achieve the same result.

Claude Code regularly pushes back on my requests. "You could do it that way, but here's why you might want to consider this alternative." That kind of feedback from an AI tool is worth more than faster code generation.

Terminal Integration

Claude Code runs in the terminal, which means it's right next to my git commands, my npm scripts, my server logs. When I'm debugging, I can paste an error message directly from the terminal output into the same terminal window where Claude is running.

In Cursor, the AI lives in a sidebar panel. There's a context switch between "terminal work" and "AI work" that adds friction. It's small, but across a full day of development, those transitions add up.
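The loop looks roughly like this — a sketch assuming the `claude` CLI and its `-p` flag for one-shot, non-interactive prompts (check `claude --help` on your install); the failing command here is a stand-in:

```shell
# Capture a failing command's output (stderr included) to a log file
# while still seeing it in the terminal.
ls /nonexistent-dir 2>&1 | tee /tmp/error.log

# Hand the log to Claude Code in the same terminal. Guarded so the
# sketch still runs on machines without the CLI installed.
if command -v claude >/dev/null 2>&1; then
  claude -p "Explain this error and suggest a fix: $(cat /tmp/error.log)"
fi
```

No window switching, no copy-pasting across panes: the log and the question travel together.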

Model Flexibility

This might be the most pragmatic advantage. Claude Code gives me access to Claude's latest models, and the quality improvements between model versions are substantial. With Cursor, I'm using whatever model they've integrated, and I don't have control over which version or how it's configured.

When Anthropic ships a new Claude model that's better at reasoning about code, I get that improvement immediately in Claude Code. With Cursor, I'm waiting for them to update their integration.

Cost Predictability

Cursor's Pro plan is $20/month with usage limits that were never quite clear to me. I'd hit rate limits during heavy development sessions and have to wait or switch to a slower model. The exact boundaries of "fast" vs. "slow" requests were opaque.

Claude Code with an Anthropic API key gives me clear per-token pricing. I know exactly what I'm spending, and I'm never rate-limited during a critical debugging session. My monthly cost averages about $40-60 for heavy use, which is more than Cursor's base price but less than Cursor's higher tiers, and I never hit a wall.


The Workflow That Actually Works

Here's my current daily workflow, refined over several months:

Morning (architecture and planning): I open Claude Code in a terminal and describe what I'm building that day. I paste relevant code snippets and get Claude's input on the approach before I write a line of code. This planning phase typically saves me from at least one dead-end per day.

Active development: VS Code with Copilot handles the inline completions — the auto-complete suggestions as I type. For anything more substantial than a single line, I switch to the Claude Code terminal. I describe what I need, review the output, and paste it into my editor.

Debugging: Error logs go straight into Claude Code. I paste the error, the relevant function, and what I expected to happen. Claude's debugging analysis is the single biggest time-saver in my entire workflow.

Code review: Before committing, I'll paste my changes into Claude Code and ask for a review. It catches things I miss: potential null references, missing error handling on actual I/O operations, edge cases I hadn't considered.

The key insight is that I'm using two AI tools for different purposes. Copilot is my autocomplete — fast, inline, low-friction. Claude Code is my senior developer — thoughtful, thorough, willing to tell me I'm wrong.


The Migration Wasn't Painless

I should be honest about the friction. When I first switched back to VS Code from Cursor, I missed the integrated experience. Having to context-switch between the editor and a terminal for AI interactions felt like a step backward.

It took about a week to build new muscle memory. I set up keybindings to quickly switch between the editor and terminal panes. I created a VS Code workspace layout with the terminal taking up the right third of the screen so I could see code and Claude's responses simultaneously.
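The keybindings amounted to two entries in `keybindings.json`. The command IDs are VS Code built-ins; the key choices are mine and entirely arbitrary:

```json
// keybindings.json (VS Code accepts comments in this file)
[
  // Jump from the editor into the terminal pane where Claude Code runs
  { "key": "alt+j", "command": "workbench.action.terminal.focus" },
  // Jump back to the active editor
  { "key": "alt+k", "command": "workbench.action.focusActiveEditorGroup" }
]
```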

I also missed Cursor's inline diff view for about two weeks before I stopped thinking about it. Turns out, most of my AI-assisted code changes are either small enough to type directly or large enough that I want to review them in the terminal anyway.


The Numbers After Three Months

After tracking loosely for three months post-switch, here's where I landed:

  • Time to complete typical feature: Down ~15% compared to Cursor, down ~50% compared to no AI
  • Bugs caught before commit: Up roughly 30% (Claude Code's review catches more than Cursor did)
  • Context switches per hour: Down about 20% (terminal integration reduces mode-switching)
  • Monthly cost: Up $20-30 compared to Cursor Pro, but the productivity gains more than offset it
  • Satisfaction: Significantly higher. I feel more in control of the AI relationship

That last point matters more than the numbers. With Cursor, I sometimes felt like I was working for the AI — accepting its suggestions to keep up with its pace. With Claude Code, I feel like the AI is working for me. The human stays in the driver's seat.


Who Should Make the Switch?

Not everyone. Here's my honest assessment:

Stay with Cursor if:

  • You work on many different codebases and need quick onboarding
  • Inline editing is central to your workflow
  • You prefer a single integrated experience over composing tools
  • Your budget is fixed and you need predictable costs

Switch to VS Code + Claude if:

  • You work deeply in a few codebases you know well
  • You value response quality over response speed
  • You're comfortable working in the terminal
  • You want control over which AI model you're using
  • You're doing complex debugging or architecture work

For me, as someone who spends most of my time in a small number of projects that I've built from scratch, the switch was clearly the right call. The AI quality difference is real, and it compounds over weeks and months of daily use.

But I won't pretend this is the right choice for every developer. Cursor is doing genuinely innovative work, and for certain workflows, it's still the better tool. The best advice I can give is: track your own productivity for a couple of weeks with each tool and let the data tell you what works for you.


Shane Larson is a software engineer and technical author based in Caswell Lakes, Alaska. He builds things at Grizzly Peak Software and has been arguing with editors — both human and artificial — for three decades. His book on training large language models is available on Amazon.
