Modern Creator Network
Parker Rex · YouTube · 14:39

How I Make Cursor 10x More Effective using Augment & Claude Code

Parker Rex demos Augment Tasks, buries Claude Code as a daily driver, and maps a clean 0-to-1 vs 1-to-n workflow — all in 14 minutes.

Posted
10 months ago
Duration
14:39
Format
Tutorial
educational
Channel
Parker Rex
§ 01 · The Hook

The bait, then the rug-pull.

The opener lands with a hard credential drop — $23M exit in the first breath — before Parker Rex has even explained what the video is about. From there it's a tight three-promise setup: demo the feature, predict the extinctions, show the workflow.

§ · Stated Promise

What the video promised.

Stated at 00:22: "We're about to cover three things. One, the details of this new task feature. Two, how I think it's actually gonna kill off a lot of the AI tools in the space, and three, we'll go through my workflow." Delivered at 10:20.
§ · Chapters

Where the time goes.

00:00–00:22

01 · Cold open + credential drop

Cinematic title card, then talking head: $23M exit established immediately. Promises three things: Tasks demo, AI tool extinction thesis, personal workflow.

00:22–01:45

02 · The old way vs the new way

Shows the painful old workflow — hand-crafting CLI task lists, repo-prompting Gemini for token budget, building custom pull frameworks. Sets up the before/after contrast.

01:45–03:41

03 · Augment Tasks demo — live codebase

Live demo inside VAI codebase. Drop raw task list into chat, hit the enhance icon, auto-agent creates structured tasks with context. Shows filtering and export to markdown.

03:41–05:45

04 · Zod schema example — real sauce

Concrete example: needed Zod schemas for all tRPC routers. Agent reads directory, builds context, creates task list automatically. Sequential thinking MCP is the recommended pairing.

05:45–08:00

05 · Claude Code vs Augment — the Claude dogging problem

Claude Code burns context too fast, loses grip on codebase, caused a symlinked env-var bug that went undetected for hours. Augment wins on persistent codebase knowledge.

08:00–10:20

06 · The AI Tool Extinction Event

Figma slides: tools without deep model partnerships will die on pricing. Compares Augment ($50/600 chats) vs Claude Code Max ($200, bull in a china shop). IQ bell curve meme: both extremes just use Auggie.

10:20–13:47

07 · My two-workstream workflow

Hand-drawn whiteboard. 1-to-n: Auggie context → PRD → .augment/guidelines → tasks. 0-to-1: Opus 4 research (steel-man the input) → researched spec → PRD → ai/specs → tasks.

13:47–14:39

08 · Credits CTA + VAI pitch

Comment with use case to win Augment credits. Soft pitch for VAI community platform. Ends on Dwight Schrute reaction meme.

§ · Storyboard

Visual structure at a glance.

00:00 · hook · title card
00:02 · hook · explosion talking head
00:27 · promise · "what we cover" slide
01:08 · setup · old task list screen
02:07 · value · Augment Tasks panel
03:56 · value · Zod schema tasks
08:36 · entertainment · IQ bell curve meme
09:03 · value · extinction event slide
10:20 · value · workflow outline slide
11:00 · value · 1-to-n whiteboard
12:50 · value · 0-to-1 whiteboard
14:15 · cta · credits CTA talking head
14:31 · cta · Dwight Schrute outro
§ · Frameworks

Named ideas worth stealing.

10:23 · model

Two Workstreams: 1-to-n vs 0-to-1

  1. 1-to-n: Auggie context → PRD → guidelines → tasks
  2. 0-to-1: Opus research → spec → PRD → ai/specs → tasks

Distinguishes between iterating on familiar code (start with context questions) versus greenfield (start with Opus 4 deep research to steel-man the approach).

Steal for: Any AI coding tutorial or workflow post — the 0-to-1 vs 1-to-n split maps cleanly to real decision points every developer faces.
11:52 · concept

.augment/guidelines (codebase memory file)

A project-level file that codifies coding conventions: data fetching patterns, state management, types, REST/tRPC layer. The agent reads it on every task run. Equivalent to CLAUDE.md.

Steal for: Frame your CLAUDE.md as this — the thing that makes the AI know your codebase.
09:01 · concept

Deep Model Partnership Moat

Tools with direct partnerships with foundational model providers survive. Tools without them face pricing pressure and reliability gaps.

Steal for: Content about AI tool consolidation, or picking your coding stack.
04:40 · tool

Sequential Thinking MCP

Recommended as the best MCP to pair with Augment Tasks — forces the model to reason step-by-step before acting, improving task decomposition quality.

Steal for: An MCP recommendations post or AI workflow video.
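
Hooking that MCP up is typically a one-entry client config. The package name below is the reference implementation from the modelcontextprotocol/servers repo — an assumption, since the video never shows the config; your client's file location and key names may differ:

```json
{
  "mcpServers": {
    "sequential-thinking": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-sequential-thinking"]
    }
  }
}
```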
§ · Quotables

Lines you could clip.

02:00
Context is king.
Three words, no setup needed, lands as a standalone thesis · TikTok hook
03:22
They are probabilistic geniuses.
Memorable framing of LLM reliability — funny and precise · IG reel cold open
06:41
It literally symlinked an environment variable file to somewhere else and I didn't even notice it. And then it didn't notice it either.
Concrete, relatable war story — every developer has a version of this · TikTok hook or newsletter pull-quote
09:32
When I look at a tool that does not have a deep partnership like Augment and Claude does — that makes me not wanna use it. Because then your pricing gets jacked up, you are not winning.
Strong takes clip — quotable prediction about the tool landscape · Twitter/X clip, newsletter pull-quote
10:04
Claude Code, if you wanna ride the dragon, it's $200 and it just seems like a bull in a china shop.
Two vivid metaphors in one sentence · IG reel cold open
§ · Pacing

How they spent the runtime.

Hook length: 22s
Info density: high
Filler: 8%
Sponsors
  • 00:09–00:21 · Augment (credits setup)
  • 13:47–14:20 · Augment (credits CTA)
§ · Resources Mentioned

Things they pointed at.

04:40 · tool · Sequential Thinking MCP
02:13 · tool · Gemini (repo-prompting for token budget)
07:51 · tool · GitHub MCP
12:28 · channel · Indie Dev Dan
12:50 · tool · Opus 4 (Claude — for research and specs)
§ · CTA Breakdown

How they asked for the click.

13:47 · product
Make a comment below as to why you want to be using Augment. Give me a specific use case of how you plan on using it.

Withholds the credits info until the very end — classic forced retention anchored in the 0:27 promise. The ask is specific (use case, not just comment) which filters for quality entries.

§ · The Script

Word for word.

Tags: HOOK = opening / re-engagement · CTA = the pitch · metaphor · story
00:02 · HOOK · I led tech for a startup that sold for $23,000,000 and I've been using AI coding tools daily since. I wanna talk about how Augment's new task feature is very overpowered and probably shakes things up for a lot of companies. As a bonus, they gave me credits to give the audience, which is really cool. So we're about to cover three things. One, the details of this new task feature.
00:27 · HOOK · Two, how I think it's actually gonna kill off a lot of the AI tools in the space, and three, we'll go through my workflow. Stick around to the end of the video to learn about the credits. Like many of you, I've been overwhelmed by the sheer volume of tools and changes and updates that have been coming out. But I've found solace in sticking to a maximum of three tools: Cursor,
00:48 · Claude Code, and then Augment's been a mainstay because of their context engine. If you wanna learn more about that, I have other videos on my channel. But this new task thing saves me so much time. It is just great. So why does it still rip? Let's just show you. Let's say you have a set of tasks and you want to get really specific about them, and you've maybe watched some videos on somebody using
01:12 · a CLI tool, or you've come up with your own prompts on how it maps to your Jira, your Notion, or wherever you're working out of. It might look like this. And this was, like, awesome two weeks ago. And you had to go in and make these very specific tasks, and you wanted to have as much specificity in there as possible, and you would prompt, prompt, prompt your way. Maybe you'd go over to Gemini and repo-prompt because we need more tokens.
01:39 · You'd be optimizing for this. And this is a decent example. And this is one that I built. I was that guy who built their own pull framework around this, which is just kludge. And I said that when I built it. A lot of these tools will get replaced, and that's what's happening right now with what we're talking about. So do you want this? And now, you just have this.
01:59 · Oh, what's this little guy over here? It's bananas. What it does — and I'll show you in my code base in a second — but a lot's happening in here. You basically have your task list, which you might have hid on the left side in one of these little notepads. I've seen people do that. I don't do that. You might have made a bunch of prompts and tried to fling yourself around and do it that way. But you don't have to do that anymore. You just have this right here. Now, there are ways to turbocharge it, of course. But this primitive, it's clear as day. Throw it in the IDE.
02:34 · That's brilliant. You can swap between file change mode, which, if you're familiar with Augment, is when you see that running log of all the things that are going on, or your task manager. If I wanna switch into task mode, I can do it in one of two ways. I can either go and manually type in the tasks. That's loser mode. That's not the mode. But typically, you'll have
02:57 · the tasks or the things that you think you need to do before you got to this step. So if that's you, then you can literally just drop it into the chat. And then you flip on auto and hit go, and it will automatically separate those out for you. Now, the benefit of doing it this way is you can take advantage of the enhanced prompt. You always want to check the outputs of LLMs. They are probabilistic
03:22 · geniuses. So they might not nail it as hard as you want it to nail it. That's on you. Go read what the enhanced prompt thing does. Always read your outputs. It's gonna pay you in spades. But leverage it. So you drop in whatever that task list you need to do here, that's step one. Step two is hit the magical icon, let it do its sizzle, let it work its sauce, and then it will actually
03:46 · give you even better context, because context is king. And then you can send it off to the auto agent, it will get to work and actually create the list for you. So, a good example of this was I just did this and I was like, I have to record a video because it's too good. It's actual sauce. What happened was, I needed to beef up the schemas. You need Zod validation
04:07 · because your types don't exist at runtime, right? Like, you see the little errors and the squigglies if you're familiar with TypeScript. But you don't get that same thing when you're building an API and it's out in the world. So what I needed to do was make these Zod schemas for all the different routers — all the functionality that exists within VAI, which is our private network of builders, a bunch of nerds basically. Wanted to bring San Francisco and that vibe onto the Internet, because I live in a tough desert. So that's the community building platform. That's what the routers are for. Needed the Zod schemas.
04:40 · Great. I don't wanna write all those, and I know how to. I can go do that, but I don't want to. And you don't want to either. You maybe write one of them, but no, you don't wanna write all of them. So in my case, I always pair it with Sequential Thinking. I think that's one of the best MCPs if you're gonna use an MCP. That's a good one. But I'll go ahead and I'll ask it specifically to make tasks for me. I obviously asked it to refine the prompt. And then you can see, as I scroll down,
05:06 · you have the updated prompt. And then it goes and reads the directory. It's building the context around what it's about to deliver. Reading the lines, showing you what's going on. Seeing that some of them are done, some of them are not. And then boom, add task 21. You can see if it updated any existing ones. So, it's doing all that right in front of you, and then we can get off to the races, where I can literally just click run all tasks. It will go to completion.
05:32 · Couple other pro tips when you have a lot of tasks. How do you deal with it? Well, they've got filtering here, which is pretty nice. So I can click on the little filter. Guess what? It does a filter. Were you able to ascertain? Were you able to figure that part out? But, yeah, you can filter it. And then also, if you want to kick it into a new chat, or if you just have so many that you wanna kick it out into a markdown file — not sure, maybe you wanted to sync that with the GitHub MCP or integration — then you could. But, yeah, those two are very, very helpful. Now, side note. You see this, Claude Code. I wanna talk about that too. Because I'm not just using one, right? Like I mentioned at the beginning of the video, I try to stick to three max. Cursor's the home base, it's still got the tabby thing. That's great. And then I have Claude Code, which is like, alright, let's jump in and let's rip around and have some fun in this.
06:23 · But I gotta say, it can just get you lost really fast, because you get, like, excited about the fact that you have all these things. And I find that it burns through the context window too quickly, and it doesn't have a grip on the code base like Augment does. So while I just rode the dragon, which was Claude Code, for the last few days, like, just literally trying to push it as hard as I could, it caused a lot of bugs that —
06:48 · skill issue. Probably a skill issue, but no. I mean, it literally symlinked an env var, an environment variable file, to somewhere else and I didn't even notice it. And then it didn't notice it either. I just find that this can be almost, like, less pointed, and it knows more. It just seems smarter. That's all. What about Cursor? Well, Cursor, I just think it's in a different bucket. I think maybe we're at a peak. I don't wanna say it. I don't wanna say it. I just find that every day that goes by,
07:17 · there's so much stuff that they have to be doing, and I know they're orchestrating a lot of different models underneath the hood, but it's not nailing my use case for one area of coding, which is actually doing the tasks. And I don't wanna go and make these task lists. And even little things like, if you compare the way that this can output stuff versus
07:36 · this, like — I don't know. It just doesn't nail markdown a lot of the time. It doesn't have Mermaid built in. MCPs are less reliable. Any Cursor watchers, this is great for you. This is an opportunity. You can transcribe this, chuck it into the roadmap, and throw remote agents on it. Just command-E it, right? And so now, let's get into the next segment. We're at eleven minutes already. Wow. Here I am. I thought I was, like, the next star. I thought I was gonna get through this a lot quicker, but now I'm in Figma. Oh, here we go. Okay. If you don't follow me on X, I post once every three years. But these are pretty good memes, right? Because this is me, literally.
08:13 · Like, that is me. I was like, I need a local LLM, and I have these seven prompt chains, and they're tied to the AI SDLC, which is the software development life cycle, and there's elements to it that still definitely make sense. I'm rethinking a lot of stuff around this, where the Opus research agent is bananas, and that should be used as part of specs.
08:36 · HOOK · And then some of the stuff from Indie Dev Dan, I think, is really good. I just need to simplify it. But yeah, this is what I'm wondering. What's going on here? Do we know? Look at those eyes. I don't know. I didn't say it. You said it. Did you say it? So now let's talk about the AI tool extinction event. Yeah, of course, unfounded, but I think that there's something to be said with looking at
09:00 · HOOK · these tools and being realistic. Let's be realistic together for a second. Who's gonna win over the long term? I think it's the ones that have deep partnerships with foundational models. Right? So OpenAI, first in the space, crushing it. I think Claude is way better at coding across the board. I think they're better at writing. So I don't even find myself using ChatGPT, to be honest, or any of their models at the moment besides transcription and embeddings. But I should use Nomic. But I think they're fine. When I look at a tool that does not have
09:32 · HOOK · a deep partnership like Augment and Claude does, for example, that makes me not wanna use it. Because then your pricing gets jacked up, you are not winning. Now, I'm not gonna name them. We don't name names here, but you can think of them. And so, you can go and Augment all around. If you're price conscious, that's the obvious answer, because you get, I wanna say, 600 chats for $50.
09:57 · HOOK · So run the math on that, and the chat is literally the thread. Now, Claude Code, if you wanna ride the dragon, it's $200, and it just seems like a bull in a china shop. I like it, but probably not as productive. You feel like you're being more productive, but yeah. And so, comparing it to Claude Code Max, the Claude dogging problem. Yeah, I already talked about that nightmare.
10:19 · HOOK · And so, I was gonna end this by just, like, walking you through another example, but I kinda wanna just talk about how I'm thinking I'm gonna work for two different work streams. What is a work stream, Parker? I don't know. It's a cool word that I just said, but it also means how you will go do a set of tasks for a given goal. So let's say I'm going one to n.
10:45 · Yeah. We're throwing n around. We're so mathematical. Then in this case, I would probably do the Auggie
10:53 · context. So I'm asking a lot of questions about how I need to accomplish this task. Now, for any work stream, it's the old "it depends." Because if you don't know the code base like the back of your hand, you don't know the libraries, you don't know how things are done, I'm not really sure about state management because I'm new here — then yeah, you're gonna have to spend a lot more time upfront.
11:13 · But if you do know it like the back of your hand and you're committing code to it daily, then you probably don't need a lot of deep research and Auggie context questioning over and over. So you can probably get away with a thread. Especially if it's something that you've already built in a different way before. But I'd be going in a simple state like this. I'd be like, okay. A, as I go through with the context, and then B, coming up with some sort of plan,
11:39 · and that's actually, like, maybe something like a PRD. And then I can throw that PRD into ai/specs,
11:48 · and then I get into tasks. And that's literally it. Now, this does not mention the fact that you should have a .augment/guidelines. How about that handwriting? I had to take extra lessons on handwriting. I'm a lefty. Okay. But can you play tennis lefty and righty? Probably not. Man, that's tough. I'm tough. Okay. So this is something that I'd be doing, where it's, okay, this is how we do data fetching, this is how we do state management, this is how we do our types, this is how our REST API layer works, this is how our tRPC layer works. All those things kinda get outlined. And there are additional artifacts.
12:23 · You can put them in the guidelines if you'd like. But I really think that the beauty of this is really just, hey, this is how I like to make PRDs. This is how I like to make tasks. And this is where we store them. All those things. Whereas if I were going zero to one — and again, it depends. But if I'm going zero to one, then I'm probably throwing in another step before that.
12:46 · So it's more: oh, Opus for research. Now, if you don't have that, sorry, you should get it. It's tight. Claude, send me a shirt, would you? But you could also use ChatGPT, of course, or Gemini. And this is just you doing research, figuring out what your input for this is. So let's do input
13:06 · CTA · equals your code base. So you can literally just dump it in there and then ask a bunch of questions around, hey, here are the things that we want to accomplish around the problem. You can ask it to steel-man it, because you're probably coming up with a solution that's overkill, over-engineered. You don't need to. So you can really do the mental jousting there. And then the output would be the researched spec. So it looks something like that, right? And then you go on to B, C, D. So that's where I'm thinking currently as of June 18. But yeah, very excited about all the Auggie stuff. What about the credits, dude? Oh, wow. Yeah, my bad. Okay. So, if you want the credits, make a comment below as to why you want to be using Augment.
13:47 · CTA · Give me a specific use case of how you plan on using it. And that's it. Literally, just comment it below and I'll pick those out by the end of next week. See you. If you want to learn more about what we're doing with the engineers inside of VAI, with people from Microsoft and Google — love to be able to say that — we have a new platform coming out. The price is gonna double, because it's not a little school emoji,
14:08 · CTA · I'm-your-guru thing. It's, like, an actual network with a bunch of tools in it that I'm excited about. So you can collaborate on projects in there. You can share different resources. We have a prompt library. There's gonna be public access for some of it too, for discoverability purposes and also just because I think it's a great way to build a brand. So that's it for today. If you learned one thing, then go like the video so that more people see this. And then if you learned two things, or you didn't learn anything at all, you should subscribe. Alright, I'll see you in the next one.
§ · For Joe

Steal the workflow architecture.

Builder playbook

The real unlock is not the tool — it is the pipeline: context first, spec second, tasks third. That order works regardless of which AI you use.

  • Keep a CLAUDE.md (or .augment/guidelines) that documents your codebase conventions. Every good AI tool reads this — it is the thing that makes the model feel like it knows your stack.
  • Split your work into two modes before you start: are you 1-to-n (iterating on known code) or 0-to-1 (greenfield)? The upfront research budget is very different.
  • For 0-to-1: dump the codebase or problem into Opus 4 first. Ask it to steel-man two approaches. The thing you think you want to build is usually over-engineered.
  • For 1-to-n: skip deep research, go straight to context questions then PRD then tasks. You already know the code.
  • The extinction event thesis is a strong content frame: 'Which AI tools survive the next 12 months and why?' Make that video — the model-partnership moat angle is specific and defensible.
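
The artifacts in that pipeline imply a simple repo layout. Only `.augment/guidelines` and `ai/specs` are named in the video; the file names below are hypothetical placeholders:

```
repo/
  .augment/
    guidelines            # codebase conventions the agent reads on every run
  ai/
    specs/
      prd-<feature>.md    # PRD produced in the planning step
      tasks-<feature>.md  # Augment task list exported to markdown
```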
§ · For You

How to actually use AI for coding without going crazy.

For developers trying to level up

The problem is not that AI tools are bad — it is that most people never define what they want before asking the AI to do it.

  • Before you open your AI tool, write down what you are trying to accomplish in plain language. Even a rough bullet list makes the AI 10x more useful.
  • If you are new to a codebase: ask the AI questions about it first before asking it to change things. Context questions are not a waste — they are the whole game.
  • If you are starting something from scratch: use a research-first model (Claude Opus, Gemini) to challenge your assumptions before you build. Tell it to argue against your plan.
  • Keep a file in your project that explains how you like things done. The AI will read it every time and stay consistent with your patterns.
  • Pick two or three tools and stick with them. The overwhelm is real — Parker uses Cursor (tab completion), Augment (tasks and context), and Claude Code (exploration). Three max.
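
A sketch of what that conventions file might contain. The section names mirror what Parker lists on the whiteboard (data fetching, state, types, REST/tRPC layers); the specific rules in the bullets are invented examples, not from the video:

```markdown
# Project guidelines (.augment/guidelines or CLAUDE.md)

## Data fetching
- All reads go through our shared query hooks; no raw fetch() in components.

## State management
- Server state stays in the data layer; UI state stays local to components.

## Types
- Derive TypeScript types from Zod schemas with z.infer; never duplicate by hand.

## API layers
- REST handlers and tRPC routers each live in their own directory; every new
  procedure must declare a Zod input schema.
```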
§ · Frame Gallery

Visual moments.