Modern Creator Network
Sean Kochel · YouTube · 14:49

5 Engineer-Only Claude Skills Every Vibe Coder NEEDS

A 14-minute live demo of five Matt Pocock skills that close the gap between vibe coders and professional engineers.

Posted: 2 days ago
Duration: 14:49
Format: Tutorial · educational
Channel: Sean Kochel
§ 01 · The Hook

The bait, then the rug-pull.

Matt Pocock's skill library hit 70,000 GitHub stars with a label that stung: not for vibe coders. Sean Kochel took that as a provocation, not a verdict. Fourteen minutes later he had walked through five skills live, in a real project, that reframe the whole argument.

§ · Stated Promise

What the video promised.

Stated at 00:47: "I'm gonna show you five of the skills from it that I use on the daily, how they work, and where in your projects you can use them." Delivered by 14:49.
§ · Chapters

Where the time goes.

00:00–00:55

01 · Hook: Not for vibe coders?

Matt Pocock skill library intro. Sean reframes vibe coder vs vibe engineer. Promises 5 daily-use skills.

00:55–03:37

02 · Skill 1: Improve Codebase Architecture

Dispatches exploration subagent to find shallow modules and propose deepening opportunities. Demo on content-intel-v2. Surfaces top 5 refactor candidates with files, problem, solution, benefits.

03:37–06:34

03 · Skill 2: Grill Me

Socratic decision-tree interrogation before writing code. Resolves every downstream branch of each answer. Demo catches false architectural premise after 7-8 questions. Sean draws branching tree diagram.

06:34–09:24

04 · Skill 3: Caveman

Ultra-compressed communication mode drops filler and pleasantries while keeping technical accuracy. Demo: 768 to 502 tokens on same prompt. Auto-clarity exception for destructive ops.

09:24–12:36

05 · Skill 4: Zoom Out

Gives high-level map of how code fits into the system using domain vocabulary. Debunks false duplicate threshold-read flag from skill 1.

12:36–14:49

06 · Skill 5: Handoff

Compacts session into structured markdown brief for a fresh context window. Better than native compaction because explicitly formatted for a different recipient.

§ · Storyboard

Visual structure at a glance.

00:00 · hook · hook talking head
00:12 · promise · GitHub skill library README
00:55 · value · skill 1 intro card
01:30 · value · Claude Code running improve-codebase-architecture
03:37 · value · grill-me SKILL.md
04:57 · value · decision tree whiteboard diagram
06:34 · value · caveman SKILL.md
08:08 · value · OpenAI tokenizer 768 vs 502 tokens
09:24 · value · zoom-out SKILL.md
11:40 · value · zoom-out result debunks premise
12:36 · value · handoff SKILL.md
14:00 · cta · VS Code handoff markdown output
14:23 · cta · mattpocock/skills README full skill list
§ · Frameworks

Named ideas worth stealing.

02:16 · concept

Deepening Opportunities

Shallow modules have an interface nearly as complex as their implementation. The deletion test: delete the module. If complexity vanishes, it was a pass-through.

Steal for: Any make-this-project-more-AI-navigable content or workshop
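The deletion test can be made concrete with a toy sketch. The function names and the config scenario below are hypothetical, purely for illustration; they are not code from the video or the skill:

```python
# Illustrating the deletion test with hypothetical module names
# (not code from the video or the skill library).

import json


def load_config(path):
    """Shallow: a pass-through whose interface is as complex as its body.
    Delete it and callers lose nothing; the complexity vanishes."""
    with open(path) as f:
        return json.load(f)


def load_validated_config(path, required_keys=("api_key", "model")):
    """Deep: hides validation and defaulting behind a small interface.
    Delete it and every caller has to re-grow this complexity."""
    with open(path) as f:
        cfg = json.load(f)
    missing = [key for key in required_keys if key not in cfg]
    if missing:
        raise ValueError(f"config missing keys: {missing}")
    cfg.setdefault("temperature", 0.7)
    return cfg
```

A "deepening opportunity" in the skill's sense is turning the first kind of module into the second: same small interface, more complexity absorbed behind it.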
04:57 · model

Decision Tree Grilling

Most tools ask 5 high-level questions then resolve hidden assumptions at code-write time. Grill Me walks every branch recursively until every leaf is settled before implementation starts.

Steal for: Pre-implementation planning sessions, spec review, architecture discussions
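The recursive-versus-flat distinction can be sketched in a few lines. The questions, answers, and tree shape below are invented for illustration (loosely echoing the video's ranker example); this is not how the skill is implemented:

```python
# Toy sketch of decision-tree grilling: each answer can open new branches,
# and nothing is considered settled until every leaf is resolved.
# Questions and answers are illustrative, not from the skill's source.

def grill(question, answer_fn, followups):
    """Resolve a question, then recursively resolve every follow-up
    that its answer opens, returning decisions in depth-first order."""
    answer = answer_fn(question)
    decisions = [(question, answer)]
    for child in followups.get((question, answer), []):
        decisions += grill(child, answer_fn, followups)
    return decisions


# A flat Q&A tool would stop after the first level of this tree.
tree = {
    ("What problem are we solving?", "slow ranking"): [
        "What shape should the fix take?",
    ],
    ("What shape should the fix take?", "extract a scoring service"): [
        "Which callers must change?",
        "What goes inside the service?",
    ],
}

answers = {
    "What problem are we solving?": "slow ranking",
    "What shape should the fix take?": "extract a scoring service",
    "Which callers must change?": "ranker and recompute stage",
    "What goes inside the service?": "threshold logic only",
}

settled = grill("What problem are we solving?", answers.get, tree)
```

One root question yields four settled decisions here, because each answer narrows which follow-ups even exist, which is exactly the branching Sean draws on the whiteboard.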
07:09 · concept

Auto-Clarity Exception

Caveman mode automatically drops out for security warnings, irreversible actions, multi-step sequences, or when user asks to stop.

Steal for: Any token-compression scheme (always needs a safety valve)
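A safety valve of this kind is easy to prototype. The trigger list follows the video (security warnings, irreversible actions, multi-step sequences, user opt-out), but the detection heuristics and marker strings below are assumptions, not the skill's actual logic:

```python
# Sketch of an auto-clarity safety valve for a compressed-output mode.
# Trigger categories follow the video; the string-matching heuristics
# are illustrative placeholders, not the skill's implementation.

DESTRUCTIVE_MARKERS = ("rm -rf", "drop table", "force push", "delete")


def should_exit_caveman(message, user_opted_out=False, step_count=1):
    """Return True when terse mode risks a misread and full prose is safer."""
    text = message.lower()
    if user_opted_out:
        return True  # user asked for regular mode
    if any(marker in text for marker in DESTRUCTIVE_MARKERS):
        return True  # irreversible or destructive operation
    if "security" in text or "credential" in text:
        return True  # security warning
    if step_count > 1:
        return True  # multi-step sequence
    return False
```

The point is the shape, not the heuristics: compression is the default, and clarity wins automatically whenever the cost of a misread exceeds the cost of the extra tokens.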
12:52 · concept

Handoff vs Compaction

Compaction summarizes for continuity. Handoff summarizes for transfer: explicit problem framing, solution reached, key decisions, specifics resolved. The recipient changes, not just the context window.

Steal for: Multi-agent workflows, planning-to-building handoffs, parallel tangent sessions
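A transfer-oriented brief might look like the sketch below. The headings follow the four items Sean dictates in the demo (problem framing, solution reached, key decisions, resolved specifics); the layout and the details filled in are illustrative, not the skill's actual output format:

```markdown
# Handoff: ranking threshold refactor

## Problem framing
Quality scoring acts as a god orchestrator; threshold logic is tangled
across the ranker and the recompute stage.

## Solution reached
Extract threshold resolution into a single scoring service that both the
ranker and the recompute stage call.

## Key decisions
- Bootstrap fallback lives only in the service.
- The tuning log keeps reading prior-row values (no duplication bug).

## Resolved specifics
Pass this brief to the spec-driven development tool for implementation.
```

Compaction would summarize the same session for the same conversation; a brief like this is written for a different recipient with zero shared context.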
§ · Quotables

Lines you could clip.

00:13
I think vibe engineering is a better term for that middle ground, exit level beginner into intermediate, where we're trying to actually build awesome stuff, but do it in a very systematic way.
Clean reframe of the vibe coder stigma. Standalone clip. (TikTok hook)
06:27
A lot of other tools are gonna ask you five questions that get your directional insight, and then they're gonna go off and address all of these underlying assumptions at game time when they go to write the stuff.
Nails the failure mode of current AI coding tools in one sentence. (IG reel cold open)
06:30
What the grill me command does is that once you pick a direction, it's gonna go deep down the rabbit hole to resolve all of the other issues that crop up because of that decision.
Perfect contrast pair with the previous quote. (IG reel cold open)
12:52
This is kind of like an alternative to compacting because we're still gonna get all of that information, but we can then just use that document as the context for our next session.
Tight explanation, no setup needed. (newsletter pull-quote)
§ · Pacing

How they spent the runtime.

Hook length: 55s
Info density: high
Filler: 8%
§ · Resources Mentioned

Things they pointed at.

00:30 · tool · OpenSpec
00:39 · tool · OBRA
00:40 · tool · Compound Engineering
§ · CTA Breakdown

How they asked for the click.

14:03 · next-video
If you like this video, I will link you to a playlist where I have a bunch of other awesome skill libraries and vibe engineering plugins that I use on a daily or weekly basis.

Soft and clean. No newsletter push, no merch. Playlist link only. Earned by the content.

§ · The Script

Word for word.

00:00 · A really talented AI educator and engineer named Matt Pocock has a skill library that has soared to over 70,000 stars pretty quickly, but the description says that it's not for vibe coders. So it really makes me wonder, am I actually a vibe coder?
00:16 · Are you? I don't use tools like Replit and ask it to build me horse Tinder, but I also don't have a decade's experience as a software engineer. So I think vibe engineering is a better term for that middle ground, exit level beginner into intermediate, where we're trying to actually build awesome stuff, but do it in a very systematic way.
00:34 · Because this skill library is pretty amazing, and I use it alongside tools like OpenSpec, OBRA, and Compound Engineering every single day.
00:43 · So I'm gonna show you five of the skills from it that I use on the daily, how they work, and where in your projects you can use them. Starting with one of my favorites, improving code base architecture. One of the big downsides of vibe coding is that you can go along your merry way making a bunch of changes that contradict what you've already done.
01:02 · And so your project gets very complicated and convoluted over time, which makes it really difficult to build things on top of it. So this skill aims to solve that by improving the underlying structure of your project. So let's take a look at how to use it.
01:16 · So here we're looking at a project that I've been working on inside of my paid community (shameless plug), which is a Twitter intelligence tool that was built using BMAD. Now it's pretty simple how a user interacts with it, but behind the scenes, there's a bit going on for identifying trends, clustering topics together, ranking them.
01:33 · Again, as a Twitter research tool that's meant to identify trends and surface them to me proactively. So let's see what happens when we run this improve code base architecture command in this project. So one of the things that I really like about how this skill works is it makes sure that it's actually using the language of your app to explain things to you.
01:52 · And this is important because when we're talking with the language model and when the language model's responding to us, we need to make sure that we're using the same vocabulary to describe things so that we don't go off and make changes inadvertently. So now what this is doing is it's going out, it's dispatching an exploration agent to actually go through the code base and find any architectural friction.
02:12 · Now if you're interested in exactly what it's doing, you can go check out the repo. But basically, what it's looking for are moments of friction inside of your app. And so the premise behind this is that it's gonna propose to you deepening opportunities, where you can take shallow modules and turn them into deeper modules.
02:31 · And the intent behind that is that it's meant to make your code base more testable and easier for an AI to navigate. And so now what we get out the other side are the top five deepening opportunities in priority order.
02:45 · Now, obviously, you wanna use your own judgment to see which of these actually makes sense for you to implement. But, again, as we move through, we can see that it's recommending a change.
02:55 · It's talking about the files related to this change, the fundamental problem that is surfacing based on what you've built in these files and how they interact. It'll then provide you a concrete solution and the benefits of that solution.
03:10 · And so this process continues now through, in this case, the top five issues that it found. So having something like this that can kind of surface the high level changes you should make is really valuable.
03:20 · But what happens when some of these objections that it is bringing up need a deeper dive? So in this case, as I read through these, the one that actually stuck out to me as the biggest concern is this quality scoring system that I have inside of the app.
03:34 · They're calling it a god orchestrator tangled with a bunch of other stuff. And so fixing this might be something that I want to explore.
03:42 · Well, the next skill solves that problem, and it is called grill me. So we can run the command and then talk about what it's doing while it goes. So one of the things that's a real pain about a lot of these tools is they don't really push you to have an understanding of what it is exactly that you're about to change and how that might impact things.
04:00 · So they might ask, like, two, three, four questions trying to clarify a few, like, really big things. But often, it lets a lot of hidden assumptions just slide through, and they make their way into your project.
04:12 · And so this skill helps that by really stress testing exactly what you want to change and why and the ramifications of that change. So after about seven or eight questions and maybe ten minutes of back and forth, this has gone from, like, that one fundamental thing that we wanted to address all the way down to an actual design that we can implement.
04:31 · Now the thing that I think is really valuable about this skill is the way that it actually works. So, again, a lot of other tools are gonna ask you five questions maybe that get your directional insight on which way you wanna take the implementation, and then they're gonna go off and address all of these underlying assumptions at game time when they go to write the stuff.
04:50 · But what the grill me command does is that once you pick a direction, it's gonna go deep down the rabbit hole to resolve all of the other issues that crop up because of that decision. So for example, in question one, we addressed, like, what is the actual problem that we are solving specifically. And then based on that specific problem, we had to define, well, what's the right shape of this solution?
05:13 · And then based on that shape, how is that going to impact the other functions that interact with this service?
05:20 · And then based on our response to that, well, then what needs to go inside of that? Right? And it continues spiraling down until every branch of this design tree has actually been resolved.
05:30 · So just to draw this out real quick, with a lot of other tools, we're gonna get just our five primary questions that address something like at a high level. But then let's say that based on what we respond to here, there's technically new decisions now. So maybe now we have a whole new subset of four different directions that we could take based on what we responded to there.
05:51 · Well, now based on what we choose to do here, there's again another branch of decisions. And then from here, another branch of decisions. And it continues down the tree, making sure that we, as the humans in the loop, are actually resolving these decisions in a way that makes sense for us.
06:08 · Now, again, the thing that's really nice about this is that it is open source. So if you wanna come in here and customize this to make it something that maybe explains things in more detail to you or generally makes things clearer for you, you have the capacity to very easily do that. Now one objection I can already hear people shouting from the void is, how many tokens this cost, bro?
06:28 · Well, the next skill I'm gonna show you can help reduce your token costs allegedly by up to 75%, and that skill is called caveman. So as it implies, this skill forces your language model to reply to you like a smart caveman.
06:41 · And so it claims to cut token usage by 75% by dropping fillers, articles, and pleasantries while keeping full technical accuracy.
06:51 · And that's a really big thing because there's been a lot of implementations of this caveman mode. But the problem with some of those things is that they dropped the important technical language. Now one of the things that I really like is that it will automatically kick you out of caveman mode when there's really important stuff.
07:07 · So if there's security warnings, irreversible actions, or multi step sequences where talking like a caveman might risk you misreading it, or if the user asks you to, hey, stop talking like a caveman for a second.
07:22 · All of those things will automatically kick it out of this mode and explain to you in regular mode. And so in order to show you the difference, I'm gonna run this now without the caveman skill with the same exact prompt so that we can see the difference in the outputs that we get. So let's look through, like, the first few lines just to compare.
07:39 · It says the plan asserts the threshold read bootstrap fallback logic is duplicated between the ranker when it applies thresholds per post and the recompute stage when it logs old values for the tuning log per enforcement guideline number 10. I read the code. The recompute stage doesn't read threshold with the bootstrap fallback.
07:57 · It reads post created at count. Right? And then it goes on to explain all of this stuff.
08:03 · So we can come through here actually, and let's just pop this into a token counter. So now this isn't perfect because we're using Opus and they tokenize things differently, but roughly 768 tokens for that response without the caveman approach.
08:18 · Now we can hop in and check that same exact output from the caveman mode. So plan claim, ranker recompute threshold and divergence makes tuning log a lie. That is significantly shorter.
08:30 · Right? Read code carefully. Ranker does this.
08:33 · Recompute does this. Question one, is the premise wrong? So this is significantly shorter than the last one, and let's actually just go test that.
08:42 · If we pop back into our tokenizer, the first one was 768. This one is five zero two. So in that case, that's, I think, like, roughly, like, a 30% reduction in the number of tokens that were used just to basically check-in with us and ask us a question.
08:56 · And that is pretty significant, especially when you consider how these conversations are going to compound over time. So now if we had used that caveman skill with this original design that we got from the previous step, we would have had significantly less token usage. So in a second, I'm gonna show you how to carry forward a decision like this to use in future sessions.
09:16 · But first, what if these first three skills are generating questions for you that are hitting a little bit higher than your understanding of things? So the next skill that I really like is simply called zoom out.
09:29 · And what this does is it tells the agent to zoom out and give you broader context or a higher level perspective. So if you're ever unfamiliar with a section of code or need to understand how it fits into the bigger picture, this is a really helpful skill to call. So we will come down and we will run the zoom out command.
09:45 · And so one of the things is, when you're working in a domain that you're not really comfortable with, you will very often defer to the recommendation of the model. But if you really care about what you're building, you really need to look at those things and make sure that they are something that you're at least understanding and on board with.
10:04 · So let's say that, you know, the first code base architecture improver recommended that we look at this thing specifically. And now that we've run it, the caveman skill or the grill me skill is telling us that, hey. This isn't actually an issue.
10:15 · And let's say we wanna really understand if it is or not, so we ask it to zoom out and explain where we are. And so the first thing that it's gonna look at is the domain vocabulary so that what it's about to explain to you is grounded in the actual language your project uses.
10:31 · So in this case, it's breaking down some of the domain vocabulary about how our ranking algorithm works. It's telling us these are the actual modules involved in this thing.
10:42 · It's showing us where certain files are read, where certain files are written. And so now it's gonna actually break down, like, does the claim that we were making in this case actually sit in the context of all of this stuff that it just broke down for us. And so in this case, it actually gives us, like, a really clear mental model breakdown.
10:58 · It says the ranker is looking at what threshold it should apply right now, which does need to have a fallback value, but the logging is just showing what the prior row actually held. And so the initial premise that we found in that code base architecture improvement was kind of unfounded, and we don't really need to worry about it.
11:17 · Now in this case, this is still using that caveman skill, which is why it's being kind of terse with its outputs to us. So if you ran this without caveman mode on, you would obviously get, like, a little bit more explanation and story behind what it's doing and why. And again, this is an open source skill.
11:32 · So if you wanna make modifications to this to explain things in different ways or do something else, that's something that you can very easily do by just downloading the repo and modifying the skill files. So I love the zoom out skill. I use it on a weekly basis.
11:46 · But like I said earlier, how do we now pull all of this different context that we've been generating together and take it forward into a new session? So like I said earlier in the video, I really love to use this skill library with other tools. I personally like to use spec driven development tools, but I find that processes like this to improve the code base, grill you about those changes, and some of these other commands, they're really valuable in helping you get to a decision on the plan.
12:16 · And then those spec driven tools are really great at taking that plan and actually implementing it systematically. So in this case, what we're gonna do is we're gonna loop back through and run that code base architecture again. We're gonna run through a grill me exercise, and then I'm gonna show you how you can take that output and make a lot of really great use of it.
12:34 · So our last skill is a really simple but high utility skill called handoff. And so this solves the problem of needing to continue your train of thought, but you really need a fresh context window. And so this is kind of like an alternative to compacting because we're still gonna get all of that information, but we can then just use that document as the context for our next session.
12:55 · So there's a lot of really solid use cases for this, but two that come to mind. Number one, if you wanna switch from planning to implementing, the handoff command will distill what you've talked about down into a very concrete brief.
13:07 · Number two, maybe you're mid session and you need to go down a tangent in the middle of a task in a separate window. But everything you've accumulated so far is really valuable context that you don't wanna pollute with a side conversation. So let's say we go back down to our code base here for the Twitter tool.
13:23 · And so we just went through, like, another very extensive round of the code base review with a grill me, and we've reached a conclusion about what we wanna do. So what we can do is we can come through and run this handoff command, and then we can give instructions about what the next session is gonna actually use this for.
13:40 · So in this case, we're gonna say, hey. This is getting passed to a spec driven development tool for implementation. It should have adequate problem framing, the solution we came to, the key decisions we made, and any other specifics that got resolved.
13:53 · And so what we get out the other side of this is a markdown file that is basically a version of compacting. So everything that we just discussed in that chat has been properly ported over into this file. So now what we could do is come down and just clear the window out, and we could run our command for a spec-driven tool to take this over.
14:12 · And now it's gonna move through, and it's gonna start building all of the planning artifacts with this context in mind. So the thing that I really like about all of his skills is that they are very flexible and to the point.
14:24 · You can integrate them with pretty much any process you already use, and just simply make those things better, and make, like, your daily quality of life as you move through and do these things just a little bit easier. So if you like this video, I will link you to a playlist where I have a bunch of other awesome skill libraries and vibe engineering plugins that I use on a daily or weekly basis.
14:45 · But that is it for this video. I will see you in the next one.
§ · For Joe

Five skills. One real project. No toy demos.

Vibe engineering playbook

The credibility comes from watching a real architectural debate play out live, including the false alarm that Zoom Out had to debunk.

  • Run skills inside a real project you're actually building, not a contrived example.
  • Let uncertainty breathe on camera. Sean doesn't skip the 10-minute grill-me session.
  • Stack the skills: codebase-architecture finds the problem, grill-me designs the fix, caveman keeps tokens low, zoom-out sanity-checks the premise, handoff carries the decision forward.
  • The decision-tree diagram is worth stealing: draw the branching structure to show why linear Q&A tools miss hidden assumptions.
  • Caveman mode is instantly deployable. Run /caveman in your next Claude Code session and measure the token difference yourself. That is content.
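Measuring that difference is one line of arithmetic; plugging in the counts from the demo's tokenizer screenshots:

```python
# Token savings from the demo's before/after counts (768 without caveman,
# 502 with), as shown in the video's tokenizer comparison.
before, after = 768, 502
reduction = (before - after) / before  # about 0.35, i.e. roughly a third
```

(768 − 502) / 768 works out to about 34.6%, which Sean rounds on camera to "roughly 30%"; the skill's advertised "up to 75%" is a best-case claim, not what this single demo measured.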
§ · For You

How to actually stay in control when AI writes your code.

If you've been frustrated with AI coding tools

These five skills put you back in the driver's seat by forcing the AI to surface hidden assumptions before writing a single line of code.

  • Before starting any feature, run /improve-codebase-architecture to understand what's already fragile.
  • When you have a design idea, run /grill-me before touching the code. It will find the holes you missed.
  • Turn on caveman mode for long coding sessions to cut token costs without losing technical precision.
  • When something feels confusing, /zoom-out gives you the map. It costs almost nothing and can save you from a wrong refactor.
  • At the end of every planning session, run /handoff so tomorrow-you can pick up exactly where you left off.
§ · Frame Gallery

Visual moments.

§ · Watch next

More from this channel + related dossiers.