Modern Creator Network
Chris Raroque · YouTube · 12:53

I was wrong about Claude Code

A 13-minute first-person switch report from a developer who abandoned Cursor after one week with Claude Code.

Posted
10 months ago
Duration
12:53
Format
Tutorial · sincere
Channel
Chris Raroque
§ 01 · The Hook

The bait, then the rug-pull.

Chris Raroque had already made the video. Two months ago he shipped his AI coding workflow — Cursor, agents, the whole setup. Then Anthropic introduced the Max plan and everything changed. This is the correction, delivered by someone who builds apps for a living and has the git commits and a $3,400 API-equivalent bill to prove it.

§ · Stated Promise

What the video promised.

Stated at 00:22: "we're gonna cover why I switched to Claude code, my workflow, and practical tips to get the most out of Claude code, and then some general thoughts on where I think AI coding is heading."
Delivered at 11:53.
§ · Chapters

Where the time goes.

00:00–01:09

01 · Hook + overview

Self-correction hook, intro, promises three topics: why he switched, his workflow, and where AI coding is heading.

01:09–01:50

02 · How Claude Code works

Terminal-based agent, not an editor. Run the claude command in any project directory.

01:50–02:24

03 · Step 1: Plan mode

Shift+Tab activates plan mode — thinking only, no code changes. Always review the plan before executing.

02:24–03:14

04 · Step 2: CLAUDE.md

/init auto-generates a project memory file. Claude follows it thoroughly. Keep it accurate.

03:14–03:55

05 · Step 3: Git as checkpoints

Claude Code has no Cursor-style restore. Commit frequently; revert if Claude goes wrong.

03:55–04:42

06 · Steps 4 + 5: Screenshots + multiple codebases

Drag screenshots for visual context. Drag external project folders for cross-codebase context.

04:42–06:33

07 · Steps 6-9: Web, sub-agents, double-check, review

Paste URLs for live doc reading. Spin up sub-agents for parallel tasks. Ask it to find edge cases. Always review output like a PR.

06:33–08:22

08 · Benchmarks: 3 wins over Cursor

Custom drag-and-drop animations (30 min vs hours), calendar-to-task feature (1 hr vs 1+ year stuck), iOS to Android port.

08:22–09:27

09 · Pricing backstory

Avoided Claude Code for months due to API token pricing. The $200 Max plan changed everything.

09:27–10:31

10 · Model selection

Opus 4 for everything on the Max plan. Sonnet 4 reportedly fast and comparably good.

10:31–10:59

11 · Why it beats Cursor: token theory

Cursor compresses tokens to serve cheap plans. Claude Code burns full context. More tokens = better reasoning. Real API cost equivalent: $3,400+ in one week.

10:59–11:53

12 · 3 downsides

Cost ($200/mo floor), no checkpoint/restore system, very slow (30+ min tasks).

11:53–12:47

13 · Who should use it

Pro devs making money from software: no-brainer. Hobbyists or beginners: stick with Cursor.

12:47–12:53

14 · Dog outro

Luna the cockapoo closes the video.

§ · Storyboard

Visual structure at a glance.

00:00 · open (hook)
00:31 · agenda card (promise)
01:50 · my 9-step card (value)
06:33 · benchmarks card (value)
10:31 · why better card (value)
10:59 · downsides card (value)
11:56 · who should use card (CTA)
12:47 · dog outro (CTA)
§ · Frameworks

Named ideas worth stealing.

01:50 · list

The 9-Step Claude Code Workflow

  1. Use Plan mode first (Shift+Tab)
  2. Generate and maintain a CLAUDE.md file
  3. Use Git as a checkpoint system
  4. Drag in screenshots for visual context
  5. Give it multiple codebases as context
  6. Paste URLs — it has browser access
  7. Use sub-agents for massive parallel tasks
  8. Ask it to double-check its own work
  9. Always review generated code like a PR

A practical 9-step workflow for maximizing Claude Code output with guardrails.

Steal for: CLAUDE.md template, LFB line curriculum, Claude Code onboarding guide
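Step 3 (git as a checkpoint system) can be sketched as a self-contained shell demo. Everything here is illustrative: the scratch repo, file names, and commit messages are invented, and an `echo` stands in for Claude Code's edits, since the agent itself is interactive.

```shell
#!/bin/sh
set -e

# Scratch repo so the demo never touches a real project.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

# 1. Checkpoint BEFORE letting the agent touch anything.
echo "original code" > app.txt
git add -A
git commit -qm "checkpoint: before agent run"

# 2. The agent edits files (simulated here with echo)...
echo "agent rewrite" > app.txt
git add -A
git commit -qm "agent: attempted feature"

# 3. ...you review the diff like a PR; if the change is bad,
#    hard-reset back to the checkpoint. This is the undo button.
git reset --hard -q HEAD~1
cat app.txt   # prints "original code" again
```

The same loop works in a real project: commit before each agent run, keep the commit if you like the diff, and `git reset --hard` (or `git revert` if the commit was already shared) if you don't.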
09:30 · concept

Token Burn Theory

Cursor heavily compresses token usage to sustain cheap plans at scale. Claude Code burns the full context window unoptimized. More tokens = deeper reasoning = better output. Evidence: $3,400+ real API cost vs $200 flat in one week.

Steal for: Explaining to audience why Claude Code outperforms Cursor despite using the same underlying model
§ · Quotables

Lines you could clip.

01:09
the solutions it came up with were genuinely better every time I compared the two
Direct claim with comparison framing — no setup needed · TikTok hook
07:22
I have been struggling with this feature for over a year now... Claude Code was able to deal with all of that complexity, and I was able to successfully ship this in less than one hour.
One year vs one hour — concrete, emotional, believable · IG reel cold open
10:21
I have used over $3,000 and over a billion tokens in a little over the week, which is absolutely crazy.
Dollar amount + visual proof on screen = instant credibility + shock value · TikTok hook
10:31
I think Claude Code is losing Anthropic a ton of money because I'm just paying a flat $200.
Contrarian insider take — makes people stop scrolling · IG reel cold open
11:53
if you make money from your app, I think this is a no brainer at $200 a month
Clean verdict line — works standalone as a recommendation clip · newsletter pull-quote
§ · Pacing

How they spent the runtime.

Hook length · 69s
Info density · high
Filler · 5%
§ · Resources Mentioned

Things they pointed at.

00:22 · product · Cursor
00:50 · product · Claude Code
02:24 · tool · CLAUDE.md (/init)
10:10 · tool · Token usage tracker script (community-built)
§ · CTA Breakdown

How they asked for the click.

12:20 · subscribe
check out my Instagram and TikTok. I post almost every other day about building productivity apps. And obviously if you like this content, don't forget to subscribe.

Soft and genuine — placed after the verdict, not before. Instagram shown on phone screen as B-roll. Low pressure, high trust.

§ · The Script

Word for word.

Tags: HOOK (opening / re-engagement) · CTA (the pitch) · story
00:00 · HOOK · So I wasn't sure if I should make this video because I recently did a video two months ago about my AI coding workflow, but so much has changed in that time that I felt like I had to do it. If you're new here, welcome to the video. My name is Chris, and I build productivity apps. And today, I'm gonna share my updated AI coding workflow. If you saw my workflow video, you know that I use Cursor and Cursor agents to do the AI coding, which most developers are currently using right now. But I've recently switched to Claude code, and I basically haven't used Cursor's AI features like agents in over a week. In this video, we're gonna cover why I switched to Claude code, my workflow, and practical tips to get the most out of Claude code, and then some general thoughts on where I think AI coding is heading in general. Okay. So what happened? Why did I change to Claude code? So I've been using Cursor just like everyone else for over a year now. There's absolutely nothing wrong with Cursor, but about a week ago, Anthropic introduced some new pricing to Claude Code. If you're not familiar, it is basically a coding agent. So it actually lives in the terminal. So it's not a code editor like Cursor. It just lives in the terminal, and you can put it into any code base and just ask it to do anything. You can ask it questions. You can ask it to add features, and I have been genuinely surprised by the results that I've been getting. A lot of these things were things that Cursor, even with Claude Opus four max,
01:06 · which is again one of the best models for coding, was really struggling to help me with. Claude code was able to come up with solutions to these issues really quickly. It seemed to think about complex problems way better than Cursor's agent, and the solutions it came up with were genuinely better every time I compared the two. So how does this stuff actually work? The way it works is you go to any project in your terminal,
01:25 · and then you just run the Claude command. And when you run this command, you can start chatting with it. You could ask it questions. You can tell it to work on features exactly like Cursor Agent. And my workflow barely deviates from how I use Cursor Agent. I'm just constantly asking the agent to do things. I'm checking the code. Technically, I am still using Cursor, but I'm not really using any of the AI stuff. I just use Cursor or Xcode as my editor, and then I have Claude code in a terminal window, usually on the right side.
01:51 · First thing is Claude code does have something called plan mode. So if you hit shift tab on your computer, you'll see that it'll change to plan mode. And what this is going to do is it's only going to think through the problem, and it's not gonna actually generate or modify any code. So the way that I use Claude code is I always first use plan mode. I ask it to make a change. So if I ask it something like, can you go modify this? I make sure I'm in plan mode. I hit enter, and then it's going to think through for a pretty long time usually, and then it's going to spit out its game plan. I review this game plan very thoroughly,
02:24 · and then if I'm happy with it, I tell it, okay. Go ahead. Go try to execute the plan. If not, I hit no, and then I revise the plan. And so that's tip number one and step one in my workflow is I always use plan mode first. So number two is I always generate a Claude MD file. So this is a file that's basically the brain and the memory of Claude when it's working on your project. You can kind of think of it like Cursor rules, basically Claude rules, but it is very important, and Claude really follows this thoroughly. If you hit slash init in any code base, it's actually gonna automatically generate this file for you, and then you can go ahead and make modifications, or you can even tell Claude code itself like, hey. Can you remember to do this next time? And it'll go ahead and add to the Claude MD file. So step number two is generating this and making sure that it's accurate and up to date and exactly how I want it to be. So step number three, and I'm about to start actually using it and modifying code, is I always make sure to commit frequently
03:14 · and use git as almost a checkpoint system. So if you're familiar with Cursor, they have this really nice restore feature in the chat. You can go anywhere back in your chat, hit the restore button, and it'll basically go back to that point in time of the chat. It's really great when Cursor Agent is going down the wrong path or you made a mistake, and I was constantly using that feature. Claude code has nothing like this. So the way I'm getting around this is by using git almost as my checkpoint system. When I'm happy with the changes, I make a commit. And if I don't like the changes, I just discard or revert the commit. It's kind of a hacky system, and I haven't found a better way to do this, but it gets the job done. It gives me the ability to undo changes if Claude does something that I'm not happy with. Number four is using screenshots. So you can actually drag screenshots
03:56 · into Claude code, so that way it has context. Very similar to Cursor Agent, you can drag images into the chat. I do this with errors. I do this with design screenshots, but I'm constantly dragging in images. Number five is similar to images. I usually also drag in entire folders, and I'm not talking about folders in your code base. I'm talking about other folders in other code bases. So something I'm frequently doing for Ellie, for example, is I'm working on the Ellie front end. I like to drag in the folder for the back end and tell it, hey. By the way, this is what the back end looks like. I found it is very helpful to give it additional context on something like how does the back end work. It can actually make changes to the folders if you give it permission. So sometimes if I'm asking it to build something on the front end, it can go ahead and make changes for the back end for me. I heard that working with multiple code bases is not officially supported, but this is a way to get around that. Number six is giving it URLs.
04:42 · So something a lot of people don't know is that Claude Code actually has access to a web browser. So you can just paste in the link to documentation, kinda similar to what you can do with Cursor. But when you give it the link to documentation, it will go to the website, read the documentation, and get whatever context that it needs. It can also run Google searches and go find documentation. So sometimes I just tell it things like, make sure to use the latest Google Calendar API, and it'll actually go do a web search and go find the documentation. So I don't even have to paste the link in. I'm constantly doing that, especially when I'm working with newer APIs. Number seven is sometimes I use sub agents. Claude Code has the ability to spin up sub agents. So these are instances of Claude Code with their own context that will go off in parallel. So if I'm doing a task that's pretty massive, like trying to port the entire Ellie iOS app to Android, I told it for the sake of time, can you actually break this problem down and run sub agents where necessary? I actually spun up, like, 10 agents that all ran in parallel at the same time. If I ran this without sub agents, this probably would have taken over an hour to run. But since I ran it with sub agents in parallel, they all ran at the same time, and it was able to finish much faster. Number eight is that I actually ask it to double check its work. So when it's done, I often ask it, hey. Can you make sure that it didn't break anything else, or can you try to find some edge cases and just confirm that everything is working? I've been surprised that sometimes it actually does find things that I've originally missed, and it gives me a little bit more peace of mind about the code that it generated. Number nine, which really isn't a tip. This is what I do is I always review the code that it generates. This thing is so good that I can easily see people just blindly accepting whatever it's producing.
But my advice and for my workflow, I always review the code that it produces almost as if it was another developer, and I'm basically just kind of reviewing a pull request and reviewing their changes. If you're using any of these AI tools, you should be doing that anyway. But I think it's worth saying because especially with Claude code, this can get really tempting to just blindly accept things because it is really good at generating some of this stuff.
06:36 · I wanted to share some real examples of things I was able to build with Claude code that I wasn't able to do with Cursor. Specifically, I've been using Cursor with Claude Sonnet four as the model, and sometimes I even use Claude Opus four max, which is one of the most expensive and powerful models on Cursor, and it still wasn't able to get some of these things. So one example was very custom drag and drop animations in the Ellie iOS app. This is where you can hold down and reorder list items. This isn't using the default SwiftUI drag and drop. It's a completely custom drag and drop experience, which Cursor did seem to be struggling with after a couple hours. But the minute I switched to Claude Code, it was able to get it in, like, thirty minutes. Second example is a feature in Ellie where you can take an external calendar event, like a Google Calendar event, and convert it into a task. I have been struggling with this feature for over a year now because it is extremely
07:22 · complex. It touches three different calendar integrations. It touches recurring tasks. It's just an overall very complicated feature the way that it's built into Ellie. Because of the complexity, Cursor had a really hard time helping me with this. Every time it changed something and something was fixed, another thing ended up breaking. But Claude Code was able to deal with all of that complexity, and I was able to successfully ship this in less than one hour. So the last example is a very extreme one. I actually started the process of porting over the Ellie iOS app to Android. And I had attempted to do this with Cursor in the past, but it kinda struggled to do this because this is a pretty big migration. Claude Code actually made substantial progress, and I was thoroughly surprised by the results I got with this. And if you don't believe me on the timeline, I was live tweeting the entire thing, so you can go check that out if you want some proof. But these were three concrete examples, but I had five or six other features that I used as benchmarks to test Cursor versus Claude Code. And every single time, Claude Code gave me better results and much, much faster than Cursor could. Claude Code has been around for a few months now, and I've heard really good things. But the reason I was hesitant to try it is it was only available through API based pricing, which means I had to provide my own API key, and I had to pay based on the amount of tokens that I was using with Claude code. And as someone who does a ton of AI coding, that scared me because I'd rack up a huge bill if I used this the same way I was using Cursor. So I've always stayed away from it until they introduced their new $200
08:42 · Claude Max plan, which allows you to have borderline unlimited usage and not have to worry about the token based pricing. Obviously, the big caveat is, to use it the way I'm using it, it costs about $200 a month. They do have some cheaper plans, but in my experience, they're way too limited. I used Claude code on their $20 a month plan, and I hit the limit in, like, ten minutes. I used it on the $100 plan, and I hit the limit in an hour. Realistically, my recommendation is to use the $200 plan.
09:07 · So when you subscribe to Claude code, you can choose between using the Claude Sonnet four model or the Claude Opus four model, or you can have it auto select and try to use the best one for the task. Personally, I just have everything set to Opus because I'm paying for the max plan. And even using Opus for almost every request, I rarely hit the limit on the max plan. But I have heard people say that Sonnet four is actually good and sometimes even better than Opus in some cases.
09:31 · Why is this thing performing better than Cursor Agent if Cursor Agent is using the exact same model? So that was the first question I had, and I did a little bit of research. I have no definitive proof here, so this is just my opinion. My hypothesis is that Cursor is super optimized from a token usage standpoint, that it does a lot of things like compression and not using the full context window to try to save cost, which makes sense because at the scale that Cursor's operating at, they have to do whatever they can to try to get the token usage down. So that way as much as possible can fit in the $20 a month plan. I think that Claude code is not doing optimizations like this. I think it is eating a ton of money. Someone wrote a program you can run to actually see how many tokens you consumed and how much it would have cost if you were on the API based pricing. Here's what my usage looked like in a little over a week. If this is accurate, I have used over $3,000
10:21 · and over a billion tokens in a little over the week, which is absolutely crazy. If that's accurate, I think Claude Code is losing Anthropic a ton of money because I'm just paying a flat $200. I think it's not as optimized as Cursor, and it's just consuming tokens. And what that does is it probably allows for better output. This makes me really question how Anthropic is doing this. They're probably doing this to try to take more market share. Anthropic has probably one one hundredth the number of users on Claude Code than Cursor does. So I think that they can sustain this a little bit longer, but if developers start picking up on this and start using Claude Code, who knows how long this is gonna last. I probably just accelerated that timeline by making this video, but I think everyone's gonna figure it out eventually. So I think it's worth sharing with you guys.
11:02 · CTA · Okay. So what are the downsides to it? Number one, the cost is extremely high. I think to use this effectively, you have to be on the $200 a month plan, which is just not affordable to most developers. But if you can afford it, I highly recommend trying it. Maybe I'll do a whole separate video about coding tools at different price points, but the price point is probably the biggest downside. The second downside, which I already mentioned, is that there is no checkpoint functionality like Cursor Agent where you can just restore to a specific point in the chat. You have to do that manual workaround with git that I'm doing, which kinda sucks, but I'm kinda used to it. So it's not even that big of a deal breaker anymore. And number three, it takes a very long time to run. Some of the actions that I take have taken over thirty minutes to run, which really can be disruptive to flow. But, again, since you can in theory run sub agents and run things in parallel, that is a way to cut back the time. You can also just open up multiple terminal windows and use multiple instances of Claude code at the same time, which I am frequently doing.
11:56 · CTA · So who is this for? I think that if you are a developer who does a lot of coding, and if you make apps and you make money from your app, I think this is a no brainer at $200 a month because I am getting substantially more value than $200 a month, at least in my case. So I think if you do this stuff as a professional and you make money from software development, this is a really good investment. I think if coding is just a hobby or you're just getting started, I actually recommend just sticking with Cursor and using that instead. Again, there's nothing wrong with Cursor. It is still an incredible tool. It's just that Claude code has been better in my experience, so that's why I've switched to it right now. Please share your experience below, and if you have any tips on using Claude code or you found something even better, please leave a comment down below. I'm always looking for new tools and ways to improve my workflow. But I hope you guys found this interesting. If you like this kind of content, check out my Instagram and TikTok. I post almost every other day about building productivity apps. And, obviously, if you like this content, don't forget to subscribe. But thank you guys so much for watching, and I'll see you guys in the next video.
§ · For Joe

Steal the workflow, not the price tag.

Claude Code playbook for JoeFlow / LFB

The 9-step system Chris uses is the real deliverable — and most of it works whether you're on the $20 plan or the $200 plan.

  • Plan mode first, always. Shift+Tab = no accidental code changes while it thinks. This alone prevents wasted sessions.
  • CLAUDE.md is your system prompt for the codebase. Every project should have one. Run /init, then customize it.
  • Git is the only checkpoint system Claude Code has. Commit before every major change. Revert is your undo button.
  • Drag in screenshots of errors and designs — visual context dramatically improves output quality.
  • Multi-codebase context (drag in back-end folder while working on front-end) is an officially unsupported hack that works.
  • Sub-agents for parallelism. If a task can be split into independent threads, tell it to spin sub-agents. He ran 10 at once.
  • Always review output like a PR. The temptation to blindly accept is real — resist it.
  • The $200/mo Max plan is what unlocks this workflow at full power. The $20 and $100 plans hit limits in minutes to an hour.
  • The token burn theory is worth internalizing: Claude Code is probably subsidized right now. This window may close.
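The CLAUDE.md bullet above can be made concrete. The sketch below writes a hypothetical starter file; every project detail in it is invented for illustration, and in practice you would let /init generate a version tailored to your actual repo and then edit it.

```shell
#!/bin/sh
# Write a hypothetical starter CLAUDE.md into the current directory.
# All project specifics below are illustrative, not from the video.
cat > CLAUDE.md <<'EOF'
# CLAUDE.md

## Project
Productivity app front end (SwiftUI). The back end lives in a
sibling repo and is sometimes dragged in for extra context.

## Conventions
- Always use Plan mode (Shift+Tab) before modifying code.
- Commit before and after every agent task; git is the checkpoint system.
- After finishing a task, double-check for edge cases and regressions.

## Memory
- Calendar integrations are fragile; plan changes to them carefully.
EOF
```

Keeping the file short and accurate matters more than covering everything: Chris's point is that Claude follows it thoroughly, so stale rules do active damage.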
§ · For You

What this means if you write code for a living.

For working developers

If you ship software professionally and Claude Code's $200/month is less than the value of one feature you'd otherwise spend a week on, it pays for itself.

  • Start with Plan mode before letting it touch your files — it forces you to review the approach before any code changes happen.
  • Create a CLAUDE.md for every active project with /init. It gives Claude persistent memory of your architecture.
  • Treat Git commits as your undo button. Commit when happy; revert when Claude goes sideways.
  • Paste documentation URLs directly — Claude Code reads them live so you don't have to copy-paste API docs.
  • For complex multi-part tasks, ask it to use sub-agents. Tasks that would take an hour serially can run in parallel.
  • Always review generated code before accepting it. Output quality is high enough it's tempting to skip — don't.
§ · Frame Gallery

Visual moments.