Modern Creator Network
Austin Marchese · YouTube · 10:43

How to 10x Your Claude Code Projects (Karpathy's Method)

Austin Marchese translates Andrej Karpathy's viral AI workflow post into three copy-paste systems for Claude Code: a compounding wiki, an auto-research feedback loop, and surgical context engineering.

Posted: 3 weeks ago
Duration: 10:43
Format: Tutorial · educational
Channel: Austin Marchese
§ 01 · The Hook

The bait, then the rug-pull.

Andrej Karpathy went viral. Austin Marchese watched, took notes, and built a tutorial that strips the jargon out of Karpathy's LLM knowledge system and hands it back as three copy-paste strategies. The promise is ten minutes to a Claude Code workflow that compounds instead of restarts.

§ · Stated Promise

What the video promised.

Stated at 00:12: "I'm gonna break down and simplify the three key strategies Karpathy uses, show you how each one works, and give you actionable advice you can apply today to 10x your Claude Code projects." Delivered at 09:01.
§ · Chapters

Where the time goes.

00:00–00:24

01 · Cold open + hook

Authority borrow via Karpathy name-drop, simplification promise, three-strategy preview.

00:24–02:42

02 · Strategy 1: LLM Knowledge Bases

Core problem: AI starts from scratch every session. Fix: Claude-maintained wiki with three layers. Raw (immutable), wiki (cross-referenced summaries), schema/CLAUDE.md (librarian instructions). Karpathy: Humans abandon wikis. LLMs do not get bored.

02:42–06:50

03 · Strategy 2: Auto-Research

Karpathy's propose/test/evaluate/keep/discard loop. 11% gain from 20 improvements. Shopify CEO: 19% gain from 37 experiments overnight. Austin's reframe: use chat history as quality signal for non-measurable work. Hooks trigger improve-system skill on session start.

06:50–07:27

04 · Strategy 3: Context Engineering (intro)

Karpathy definition: the delicate art and science of filling the context window with just the right information. Bad results are a skill issue.

07:27–09:01

05 · How to properly context engineer

CLAUDE.md prompt and scoped knowledge via expert-advice skill. BuildPartner.ai plug.

09:01–10:43

06 · Live demo + close

One master prompt sets up all three strategies. Obsidian graph view shown. Subscribe CTA.

§ · Storyboard

Visual structure at a glance.

00:00 · hook · Open + Karpathy clip
00:15 · promise · 3 strategies preview
00:57 · value · Architecture slide: 3 layers
01:56 · value · Live file explorer demo
02:42 · value · Strategy 2 card: Auto-Research
02:58 · value · Auto-research flowchart
04:05 · value · Auto-research progress chart
05:00 · value · Measure the unmeasurable callout
06:10 · value · Automate: Loop + Schedule
06:50 · hook · Strategy 3 card
07:27 · value · CLAUDE.md prompt card
08:34 · value · Expert advice skill terminal
09:01 · value · Master prompt in Claude Code
09:52 · value · Folder structure in VS Code
10:17 · cta · Obsidian graph view payoff
§ · Frameworks

Named ideas worth stealing.

00:51 · model

3-Layer LLM Knowledge Base

  1. Raw: immutable source documents
  2. Wiki: LLM-maintained summaries and cross-references
  3. Schema: CLAUDE.md as librarian instruction file

A folder-based wiki Claude builds and maintains from raw sources. The schema file tells Claude how to ingest, organize, and health-check the wiki.

Steal for: Any project with recurring context. Drop sources into raw/, let Claude build the wiki, never start cold again.
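The three layers above are just folders plus one instruction file. A minimal scaffold, sketched in Python: the folder names (raw/, wiki/) follow the video's convention, but the CLAUDE.md wording here is an assumption, not the video's exact prompt output.

```python
from pathlib import Path

# Hypothetical librarian instructions for the schema layer; the real
# file would be tailored to your project.
SCHEMA = """\
# Librarian instructions
- raw/ is immutable: read from it, never edit it.
- wiki/ holds summaries and cross-references; every wiki page links
  back to the raw files it was built from.
- When a new file lands in raw/, create or update the matching wiki
  page and its cross-references.
- On request, run a health check: flag contradictions, stale info,
  and raw files with no wiki coverage.
"""

def scaffold(project: Path) -> None:
    """Create raw/, wiki/, and a starter CLAUDE.md in a project folder."""
    (project / "raw").mkdir(parents=True, exist_ok=True)
    (project / "wiki").mkdir(parents=True, exist_ok=True)
    schema_file = project / "CLAUDE.md"
    if not schema_file.exists():  # never clobber an existing schema
        schema_file.write_text(SCHEMA)

scaffold(Path("my-project"))
```

From here, everything else is Claude's job: you drop files into raw/ and ask it to maintain wiki/ according to CLAUDE.md.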
02:42 · model

Auto-Research Loop

  1. Propose
  2. Test
  3. Evaluate
  4. Keep or Discard
  5. Repeat

Karpathy's agentic improvement loop. For measurable work: runs autonomously. For non-measurable: use chat history as quality signal, feed it back via improve-system skill.

Steal for: Any workflow where you iterate toward quality. Replace the human prompt-next-step bottleneck with a hook-triggered improvement skill.
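The five steps above can be sketched as a toy loop. This is an illustration of the propose/test/evaluate/keep-or-discard shape on a deliberately simple measurable objective (minimizing a function), not Karpathy's actual open-source tool; the objective and proposal rule are placeholders.

```python
import random

def objective(x: float) -> float:
    """Lower is better -- stands in for eval loss, runtime, etc."""
    return (x - 3.0) ** 2

def auto_research(start: float, rounds: int = 50, seed: int = 0) -> tuple[float, int]:
    """Run the propose/test/evaluate/keep-or-discard loop."""
    rng = random.Random(seed)
    best, best_score, kept = start, objective(start), 0
    for _ in range(rounds):
        candidate = best + rng.uniform(-1.0, 1.0)  # 1. propose a change
        score = objective(candidate)               # 2. test it
        if score < best_score:                     # 3. evaluate
            best, best_score, kept = candidate, score, kept + 1  # 4. keep
        # otherwise discard, then repeat
    return best, kept

best, kept = auto_research(start=0.0)
print(f"kept {kept} improvements, best x = {best:.2f}")
```

The point of the shape: no single proposal has to be good, because bad ones are discarded for free and kept ones stack, which is how 20 small wins compound into an 11% gain.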
07:27 · model

Context Engineering Hierarchy

  1. CLAUDE.md: session-level what/structure/mistakes
  2. Skill context injection: auto-load expert frameworks per topic
  3. Wiki navigation: LLM reads wiki to find raw, not scan all raw

Three tiers of context control that compound together. CLAUDE.md is the baseline; skills add dynamic context; the wiki adds navigable depth.

Steal for: Any Claude Code project. Start with CLAUDE.md, add skill-level context injection as complexity grows.
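Tier 3, wiki navigation, is the easiest to misread, so here is a minimal sketch of the mechanic: load one wiki page, then pull in only the raw sources it links to, instead of scanning all of raw/. The `[[wikilink]]` syntax is an assumption (Obsidian-style links, which the video's graph-view demo suggests), as are the folder and file names.

```python
import re
from pathlib import Path

WIKILINK = re.compile(r"\[\[([^\]]+)\]\]")

def context_for(wiki_page: Path, raw_dir: Path) -> str:
    """Build a context string from a wiki page plus only its linked raw files."""
    text = wiki_page.read_text()
    parts = [text]
    for name in WIKILINK.findall(text):
        source = raw_dir / f"{name}.md"
        if source.exists():  # skip dangling links
            parts.append(source.read_text())
    return "\n\n---\n\n".join(parts)

# Tiny demo corpus (hypothetical file names, for illustration only).
raw = Path("kb/raw"); raw.mkdir(parents=True, exist_ok=True)
wiki = Path("kb/wiki"); wiki.mkdir(parents=True, exist_ok=True)
(raw / "podcast-1.md").write_text("Karpathy on context engineering.")
(raw / "podcast-2.md").write_text("Karpathy on knowledge bases.")
(wiki / "karpathy.md").write_text("Covers [[podcast-1]] and [[podcast-2]].")

ctx = context_for(wiki / "karpathy.md", raw)
```

In practice Claude does this traversal itself when the schema tells it to consult the wiki first; the sketch just shows why the cross-references keep irrelevant raw files out of the context window.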
§ · Quotables

Lines you could clip.

02:42
The LLM is rediscovering knowledge from scratch on every question. There is no accumulation.
Karpathy quote that names the pain clearly, standalone · TikTok hook
04:31
Humans abandon wikis because the maintenance burden grows faster than the value. LLMs do not get bored.
Visceral contrast, no setup needed · IG reel cold open
05:20
You have to remove yourself as the bottleneck. You cannot be there to prompt the next thing.
Builder mindset hook, universal pain point · TikTok hook
08:03
It's a skill issue.
Karpathy being blunt, clip-worthy because of delivery · IG reel cold open
§ · Pacing

How they spent the runtime.

Hook length: 24s
Info density: high
Filler: 5%
Sponsors
  • 06:12–06:50 · BuildPartner 5-day email series (own product)
  • 08:41–09:01 · BuildPartner.ai (own product, free plug)
§ · Resources Mentioned

Things they pointed at.

02:42 · tool · auto-research (Karpathy, open source)
§ · CTA Breakdown

How they asked for the click.

10:17 · next-video
If you got this far, you are an absolute legend and I'm confident that you'll love this video where I walk through how Anthropic's team, the creators of Claude Code, actually use Claude Code.

Embedded next-video suggestion with warm compliment close. Subscribe card appears at 10:38.

§ · The Script

Word for word.

Tags: HOOK (opening / re-engagement) · CTA (the pitch) · metaphor · analogy · story
00:00 · HOOK · Andrej Karpathy, the former head of AI at Tesla, just went viral for this post titled LLM Knowledge Bases. And that's because he shared the secret to 10x-ing your output with Claude Code. But unfortunately, a lot of what he says sounds complicated when in reality, it's actually pretty simple. So I'm gonna break down and simplify the three key strategies Karpathy uses, show you how each one works, and give you actionable advice you can apply today to 10x your Claude Code projects. Strategy number one is LLM knowledge bases. Right now, most people use AI like a search engine. You ask a question, get an answer, close the window. Tomorrow, you start from scratch; nothing compounds. Karpathy nailed the problem in one line: the LLM is rediscovering knowledge from scratch on every question. There's no accumulation. His fix: have Claude build and maintain a knowledge base for you. He calls it a wiki. Think of this like a personal encyclopedia
00:47 · except Claude writes every page, keeps it organized, and updates it automatically when you add new stuff. There's no database, there's no infrastructure, just folders on your computer. Something that my mom could set up. And his system has three layers. In the demo that I go through later in this video, I'll share a prompt that you can copy and paste to set this all up. But it's important you understand the concepts first. Layer one is your raw resources.
01:09 · This is a folder where you drop in articles, transcripts, notes, PDFs, whatever training data could be helpful for your project. Think of this like a data dump. Claude can read from it, but never changes it, because this serves as the source of truth. Layer two is the knowledge base, the wiki. This is where Claude organizes everything for you. Summaries, concepts, breakdowns, comparisons,
01:28 · profiles on people or tools, all cross-referenced to the raw knowledge. And layer three is the schema. This is an instruction file that tells Claude how the knowledge base should be structured, what conventions to follow, and what to do when you add a new source. You can also tell Claude to do a health check, basically auditing the whole thing for contradictions, stale info, and gaps. Think of this like the librarian of the whole system. Sounds complex, but let me break down a simple example. Let's say you have raw transcripts from five podcasts of Karpathy talking about AI best practices.
01:56 · What you would do is upload those into the raw data folder; then a wiki would be created about Andrej Karpathy that would clearly reference these five transcripts as well as the topics that are covered there. So Claude would then look at the wiki and know where to look for specific information in the raw database. That way it doesn't have to look through all five of these raw transcripts,
02:16 · Instead, it can be more precise. You're creating a web of information that makes Claude's life easier, in turn making the output that much better. And the reason that this can work long term, directly from Karpathy: humans abandon wikis because the maintenance burden grows faster than the value. LLMs don't get bored. The best part with this setup is it creates a foundation you can build on top of. So that's strategy one. You build the knowledge base, it compounds over time. But what if Claude could improve things without you thinking about it? That's strategy two, which is auto-research. Karpathy open-sourced a project called auto-research. What he did was he had a small AI model he was training. Instead of manually tuning the code to make it better, he pointed an AI agent at it and said, find ways to improve this. No other guidance, just a goal and a way to measure it. And in this project, he was able to create an auto-research loop. This is a loop where you propose a solution, you test a solution, evaluate it, keep or discard it, and then repeat. So the agent does exactly this. It proposes the change, it runs the training, it measures if it improved, keeps it or throws it out, and then proposes the next change over and over again. When Karpathy used this tool to test performance, it found 20 improvements that stacked up to about 11%
03:22 · performance gain. And actually, the Shopify CEO saw this experiment and ran his own auto-research loop on his own data. 37 experiments and a 19% improvement, all while he was sleeping. This is the mindset behind this whole concept. To get the most out of the tools that have become available now, you have to remove yourself as the bottleneck. You can't be there to prompt the next thing. You need to take yourself out of it. Okay. All this is really great, but there's some limitations here. So Karpathy's version works because he's measuring something that's just extremely measurable to computers. Right? Code runs faster or it doesn't. There's a clear number associated with whether it's doing better or worse. But the reality is most of what you and I are building doesn't really look like this. How do you quantify if an app looks good? How do you quantify if a script resonates? Is this email draft good? You can't really put numbers to this. So, yes, you can use auto-research straight up, but I wouldn't take it at face value. Instead, think about what the concept is and apply it to how you actually work. Ultimately, auto-research is all about creating a system that gets better every time you use it because you're feeding results back into it. And there's different ways to do this in whatever you're working on. Let's say you have a landing page and you wanna improve your conversion rate. Within the project itself, you can tell Claude, review my landing page headline, write five variations of the headline, split test them simultaneously,
04:35 · and track results using PostHog to determine which is the best. You've created a system to track performance, and at the end of the week, you can come back to Claude, have it pull the numbers and tell you which is better, make the decision, and then move on to the next experiment. The results aren't instant like Karpathy outlined, but it follows the same improvement principle. But this is still measurable. What about extending this concept to non-measurable things? Well, this is actually what excites me the most. So for example, let's say I use AI to create a report for a client. It'll generate the report, and then I'll go back and forth with AI until it's exactly what I want. I then can have a Claude skill. I use something called buildpartner:improve-system, where I'll look at the back and forth I already had with Claude and enhance my knowledge base so the next output is better. Essentially, I'm using my chat history as a proxy
05:19 · for whether the output was good or bad. In my experience, this is a phenomenal way to improve your systems over time. I personally run this skill manually, but if you want to automate this process so it's a little bit closer to auto-research, you can use something called a loop or a schedule, which are two features that the creator of Claude Code actually calls the most powerful features in Claude Code. A loop lets you set Claude to run a specific command every so often in your session, and a schedule lets you set Claude to run it at a specific time and day, and it runs entirely on the cloud. Personally, I don't use loops or schedule features too much. Instead, I actually use something called hooks to help me. A hook essentially automates specific commands
05:57 · CTA · based on things that happen as you use Claude Code. So I set up a hook that every time I start a new Claude Code session, if I haven't run buildpartner:improve-system in a while, it will remind me to run it. I then manually run it, and it looks at my historical conversations to do the improvement for me. This is how I've created my version of an auto-research loop for these less measurable things. And I know we are going through concepts quickly, so don't worry. If you do wanna go deeper on strategies like this, where you can follow at your own pace, I put together a free five-day email series where I walk through the concepts I'm covering here. And based on thousands of people that have gone through it, I am highly confident you're gonna love it. But if you don't, you can just unsubscribe anytime. Now, up to this point, we've gone through setting up your LLM knowledge base, and we now understand what auto-research is and how you can apply it to whatever you're working on. And before we get to a demo where I'll give you a single prompt where you can set up your whole machine, there's one more strategy that ties it all together. Strategy three is context engineering. From Karpathy: context engineering is the delicate art and science of filling the context window with just the right information for the next step. And usually, context engineering is the difference between people getting good and bad results. Here's a clip of him talking about when people complain about AI not working. Like, so many things, even if they don't work, I think to a large extent, you feel like it's a skill issue. It's not that the capability is not there. It's that you just haven't found a way to string it together of what's available. Like, I just didn't give good enough instructions. He says it's a skill issue. Karpathy is not holding back, but he's frankly correct. So how do you properly context engineer? Well, there's two things. First, your CLAUDE.md file.
This is the instruction file that Claude reads at the start of every session. Most people either don't have one or it's three lines. This is key because it tells Claude what your project is, how it's structured, what conventions to follow, and what it tends to get wrong. We touched on this a bit in section one, but it is super critical. Here's a prompt you can paste right into Claude Code. Create a CLAUDE.md for this project; include what this project is, the folder structure, what I'm currently building, and common mistakes to avoid. Keep it under 50 lines. The key is that last part, keep it under 50 lines, because we don't want it to have too much bloat. That's an arbitrary number. It can extend past that, but you get the concept. The second is scoping what Claude sees. If you're writing a script, Claude doesn't need your entire code base. It needs your script frameworks, your voice patterns, maybe a few examples of finished scripts. But the more irrelevant stuff you load into it, the worse the output will get. And this is why the LLM knowledge base we covered earlier is so important: it creates a web of knowledge that the LLM can effectively navigate. I also use skills to help simplify this too. I have a skill called buildpartner:expert-advice, where when someone asks a business question, the skill automatically loads the right expert framework. Let's say it's a pricing question, it references Hormozi.
08:41 · CTA · Let's say it's social media, it references MrBeast and Gary Vee. Let's say you're starting a business, it references Elon Musk. All contextual information that is only important based on the specific topic I have a question about. The person asking doesn't have to know what context to provide; the skill handles it. If you are interested in some of these skills that I'm referencing, you can get them for free on buildpartner.ai.
09:01 · It's a plugin I created, so you can go check that out. It's entirely free if you want. Now, we've covered a lot here. Right? LLM knowledge bases, auto-research, context engineering. Let me show you how to actually set this up. You don't need my exact system. You just need Claude Code and one prompt, and it'll get it all started. Open Claude Code and paste in this prompt, which is also in the description of this video. There's a lot here, and this one prompt sets up all three strategies, and Claude will just build it based on the project you're working on. But I do wanna call out some key things happening in this prompt. So based on your back and forth with Claude Code, you can just make sure that it applies them. The first is it creates the folders for you. This is the general structure, but you may have subfolders if you have a bunch of resources. Here you can see mine, where in raw I have different partitions, and in wiki there's different partitions.
09:46 · CTA · So as you build it out, you may have subfolders that are needed. The second part of this is a hook: when you drop in resources, it'll bring them into the raw folder, and then Claude will automatically process them, update the wikis, and then create the necessary linkages. You may wanna consider making a Claude skill that the hook calls to make this more consistent. I have one called ingest-source. Once you get that all set up, if you're using Obsidian to view your files, which I personally highly recommend, hit Command-G to see the graph view. You'll then be able to see all of your files and all of your folders and the information
10:19 · CTA · and the web and how they're linked together. Here, you can see mine. It's pretty cool. It's productivity porn. Let's call it what it is. But it does help you make sure that you're properly linking files within your wiki and setting up a proper LLM knowledge base. Now, if you got this far, you are an absolute legend, and I'm confident that you'll love this video where I walk through how Anthropic's team, the creators of Claude Code, actually use Claude Code. Go check that out and I'll see you over there.
§ · For Joe

Steal the three-layer system.

Claude Code workflow playbook

The gap between mediocre and 10x Claude Code output is almost entirely a context problem, and this video shows exactly how to solve it with folders, not infrastructure.

  • Set up raw/, wiki/, and CLAUDE.md in every project. One prompt does it; grab it from the video description.
  • The schema/CLAUDE.md is the lever most people skip. Write the librarian instructions first, not last.
  • Auto-research for non-measurable work: use your own chat history as the quality signal. Run an improve-system skill after every session that took iteration.
  • Hook the improve-system skill to session start so it reminds you automatically.
  • Expert-advice skills that auto-load the right framework per topic are the highest-leverage context injection move. Build one per domain you work in.
  • Obsidian graph view is not just pretty; it shows you immediately when your wiki has islands (unlinked nodes equal gaps in the knowledge web).
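The session-start reminder in the bullets above can be wired up as a Claude Code hook. A hedged sketch: the `SessionStart` event and the `.claude/settings.json` layout follow Claude Code's hooks configuration as I understand it, but treat the exact schema as an assumption and check the current docs; `check-improve-reminder.sh` is a hypothetical script you would write yourself.

```python
import json
from pathlib import Path

settings_path = Path(".claude/settings.json")
settings = json.loads(settings_path.read_text()) if settings_path.exists() else {}

# Register a SessionStart hook that runs a reminder script whenever a
# new session begins. The script (a placeholder here) would check when
# improve-system last ran and print a nudge if it has been too long.
settings.setdefault("hooks", {})["SessionStart"] = [
    {
        "hooks": [
            {
                "type": "command",
                "command": "sh .claude/check-improve-reminder.sh",
            }
        ]
    }
]

settings_path.parent.mkdir(exist_ok=True)
settings_path.write_text(json.dumps(settings, indent=2))
```

The hook only reminds; the video's workflow keeps the actual improve-system run manual, so you stay in the loop on what gets fed back into the knowledge base.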
§ · For You

Stop starting from scratch with AI.

If you use Claude or ChatGPT for real work

Every time you start a new AI conversation it knows nothing about you or your project, but it does not have to.

  • Create a folder called raw/ and dump your notes, articles, and past work into it. Ask Claude to build you a wiki from it.
  • Ask Claude to write a CLAUDE.md for your project, a file it reads at the start of every session so you never have to re-explain your context.
  • After a session where you went back and forth to get something right, ask Claude to update your knowledge base based on what worked today.
  • Treat Claude like a junior employee you are onboarding. The more you document what works, the better the output gets over time.