May 11th 2026

Austin Marchese translates Andrej Karpathy's viral AI workflow post into three copy-paste systems for Claude Code: a compounding wiki, an auto-research feedback loop, and surgical context engineering.
Andrej Karpathy went viral. Austin Marchese watched, took notes, and built a tutorial that strips the jargon out of Karpathy's LLM knowledge system and hands it back as three copy-paste strategies. The promise is ten minutes to a Claude Code workflow that compounds instead of restarts.
Stated at 00:12, delivered by 09:01: “I'm gonna break down and simplify the three key strategies Karpathy uses, view how each one works, and give you actionable advice you can apply today to 10x your Claude Code projects.”

Authority borrowed via the Karpathy name-drop, a simplification promise, and a three-strategy preview.

Core problem: AI starts from scratch every session. Fix: a Claude-maintained wiki with three layers: raw sources (immutable), wiki pages (cross-referenced summaries), and schema/CLAUDE.md (librarian instructions). Karpathy's point: humans abandon wikis because maintenance outpaces value; LLMs do not get bored.
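The three-layer layout can be scaffolded in a few lines. This is a minimal sketch, not the video's actual setup: the folder names, README contents, and librarian instructions are all illustrative assumptions.

```python
from pathlib import Path

# Sketch of the three-layer wiki described above.
# Layer names and file contents are assumptions, not taken from the video.
LAYERS = {
    "raw": "Immutable source material: transcripts, docs, notes. Never edited.",
    "wiki": "Claude-maintained summaries that cross-reference raw sources.",
}

def scaffold(root: str) -> Path:
    base = Path(root)
    for name, purpose in LAYERS.items():
        layer = base / name
        layer.mkdir(parents=True, exist_ok=True)
        (layer / "README.md").write_text(f"# {name}\n\n{purpose}\n")
    # Third layer: librarian instructions Claude reads every session.
    (base / "CLAUDE.md").write_text(
        "# Librarian instructions\n"
        "- Ingest new files from raw/ into wiki/ as cross-referenced summaries.\n"
        "- Never modify raw/.\n"
        "- Health-check wiki/ for stale or orphaned pages.\n"
    )
    return base

root = scaffold("knowledge")
```

The point of the split is that Claude only ever rewrites `wiki/` and reads `CLAUDE.md` as standing policy, so the raw layer stays a trustworthy source of truth.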

Karpathy's propose/test/evaluate/keep/discard loop: an 11% gain from 20 kept improvements; Shopify's CEO reports a 19% gain from 37 experiments run overnight. Austin's reframe: for non-measurable work, use chat history as the quality signal. Hooks trigger the improve-system skill on session start.
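The keep/discard loop is easy to state in code. Below is a hedged sketch: `propose_change` and `score` are hypothetical stand-ins for a real change generator and evaluation harness, and the toy usage at the bottom is mine, not the video's.

```python
import random

random.seed(0)  # deterministic toy run

def improvement_loop(system, propose_change, score, rounds=20):
    """Propose, test, evaluate; keep a change only if it beats the baseline."""
    best = score(system)
    kept = 0
    for _ in range(rounds):
        candidate = propose_change(system)
        s = score(candidate)
        if s > best:  # keep
            system, best, kept = candidate, s, kept + 1
        # else: discard and try again
    return system, best, kept

# Toy usage: the "system" is a number, proposals are random nudges,
# and the score rewards closeness to a target of 100.
target = 100
final, best, kept = improvement_loop(
    system=50,
    propose_change=lambda s: s + random.uniform(-5, 5),
    score=lambda s: -abs(target - s),
    rounds=200,
)
```

The design choice that matters is the asymmetry: the baseline only ever moves up, so the loop can run unattended overnight without regressing, which is what makes the 20- and 37-experiment numbers plausible.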

Karpathy's definition: the delicate art and science of filling the context window with just the right information. Bad results are a skill issue.

CLAUDE.md prompt and scoped knowledge via expert-advice skill. BuildPartner.ai plug.

One master prompt sets up all three strategies. Obsidian graph view shown. Subscribe CTA.
A folder-based wiki Claude builds and maintains from raw sources. The schema file tells Claude how to ingest, organize, and health-check the wiki.
Karpathy's agentic improvement loop. For measurable work it runs autonomously; for non-measurable work, use chat history as the quality signal and feed it back via the improve-system skill.
Three tiers of context control that compound together. CLAUDE.md is the baseline; skills add dynamic context; the wiki adds navigable depth.
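The tiering above can be sketched as a single assembly function. This is an illustrative model, not Claude Code's actual loading mechanism: the keyword triggers and all the example strings are assumptions.

```python
def assemble_context(baseline: str, skills: dict[str, str],
                     wiki: dict[str, str], task: str) -> str:
    """Tier 1 always loads; tiers 2 and 3 load only when the task mentions them."""
    task_l = task.lower()
    parts = [baseline]                                                  # tier 1: CLAUDE.md baseline
    parts += [text for trig, text in skills.items() if trig in task_l]  # tier 2: dynamic skills
    parts += [text for name, text in wiki.items() if name in task_l]    # tier 3: wiki depth
    return "\n\n".join(parts)

# Hypothetical usage with made-up project content.
ctx = assemble_context(
    baseline="Project conventions: TypeScript, tests first.",
    skills={"research": "Skill: run the auto-research loop before answering."},
    wiki={"billing": "Wiki: billing module summary with links to raw specs."},
    task="Refactor the billing code after a research pass",
)
```

The compounding effect falls out of the structure: the baseline stays small and constant, while skills and wiki pages only spend context tokens when the task actually needs them.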
“The LLM is rediscovering knowledge from scratch on every question. There is no accumulation.”
“Humans abandon wikis because the maintenance burden grows faster than the value. LLMs do not get bored.”
“You have to remove yourself as the bottleneck. You cannot be there to prompt the next thing.”
“It's a skill issue.”
“If you got this far, you are an absolute legend and I'm confident that you'll love this video where I walk through how Anthropic's team, the creators of Claude Code, actually use Claude Code.”
Embedded next-video suggestion with warm compliment close. Subscribe card appears at 10:38.
The gap between mediocre and 10x Claude Code output is almost entirely a context problem, and this video shows exactly how to solve it with folders, not infrastructure.
Every time you start a new AI conversation, it knows nothing about you or your project. But it does not have to.