December 17th 2025
How a folder of markdown files beat BMAD, GSD, and most of the agent-framework ecosystem in 90 days.
One folder. No runtime. No orchestrator. No twelve-agent hierarchy. Just markdown the agent reads on demand, and 68,000 GitHub stars in ninety days, more than Next.js collected in its first three years. Matt Pocock open-sourced his ~/.claude/skills/ directory and accidentally wrote the thesis statement for a new era of AI-assisted development.
“So what's actually in this folder? And why are real engineers throwing out SpecKit and BMAD to copy it?” (stated at 00:13, delivered at 01:01)

Cold open with star count, comparison to Next.js, sets up the core question: why are real engineers abandoning BMAD and SpecKit to copy a folder of text files?

Walks the folder structure: engineering/, productivity/, personal/, misc/ — each leaf contains a single SKILL.md file. No runtime, no orchestrator.
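The "read on demand" idea can be sketched in a few lines: nothing is indexed or preloaded, a skill is just a file the agent opens when a task calls for it. This is an illustrative sketch, not code from the repo; the helper names and the toy folder it builds are invented here.

```python
# Sketch of on-demand skill loading over the layout described above.
# `list_skills` / `load_skill` are hypothetical names, not from the repo.
from pathlib import Path
import tempfile

# Build a toy copy of the layout: category/leaf/SKILL.md.
root = Path(tempfile.mkdtemp()) / "skills"
for category in ("engineering", "productivity", "personal", "misc"):
    leaf = root / category / "example-skill"
    leaf.mkdir(parents=True)
    (leaf / "SKILL.md").write_text(
        f"# {category} skill\nInstructions the agent reads on demand.\n"
    )

def list_skills(skills_root: Path) -> list[Path]:
    """Discover every SKILL.md; no content is loaded at discovery time."""
    return sorted(skills_root.glob("*/*/SKILL.md"))

def load_skill(skill_file: Path) -> str:
    """Read one skill's markdown only when the agent decides it is relevant."""
    return skill_file.read_text()

skills = list_skills(root)
print(len(skills))                              # one SKILL.md per leaf folder
print(load_skill(skills[0]).splitlines()[0])    # first line of the first skill
```

The whole "framework" is the file convention: forking a skill is copying a file, deleting one is `rm`.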

Names the failure mode: vague request → agent guesses → ships the wrong thing → prompt-fixing loop. Introduces the repo as a kill-switch for this pattern.

Agent interviews you relentlessly before writing any code. One question at a time. The fix for misalignment is friction applied to the developer, not the agent.

Shared language file compresses repeated domain context from 28 words to 8. Same bug, same agent, half the tokens. Pocock calls it the single coolest technique in the repo.
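The compression mechanic is easy to demonstrate with an invented example: define a domain term once in a shared context file, then reference it by name in every later prompt instead of re-explaining it. The term and word counts below are made up for illustration; only the technique comes from the video.

```python
# Illustrative sketch of the shared-language technique.
# The term "checkout-saga" and its definition are invented here.
SHARED_CONTEXT = {
    "checkout-saga": (
        "the multi-step process where the cart service reserves stock, "
        "the payment service captures funds, and the order service confirms"
    ),
}

# Without the shared file, every prompt re-explains the domain concept.
verbose_prompt = (
    "Fix the bug in the multi-step process where the cart service reserves "
    "stock, the payment service captures funds, and the order service confirms."
)

# With the term defined once, later prompts just name it.
compressed_prompt = "Fix the bug in the checkout-saga."

print(len(verbose_prompt.split()), "->", len(compressed_prompt.split()))
```

Same request, same agent, a fraction of the repeated context on every turn.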

Bans horizontal slicing (write all tests first). Enforces red-green-refactor one slice at a time. Tests verify actual behavior, not imagined behavior.
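One vertical slice of red-green-refactor can be sketched in plain Python (no test framework needed). The function name and behavior are invented here to show the shape of the rule: one failing test, one minimal implementation, then the next slice.

```python
# Minimal red-green sketch of a single vertical slice.
# `slugify` is a hypothetical example, not from the video.

def slugify(title: str) -> str:
    """Written only after the test below existed and failed (the RED step)."""
    return "-".join(title.lower().split())

def test_slugify_one_slice():
    # RED: this assertion failed while slugify() was still a stub.
    # GREEN: the minimal implementation above makes it pass.
    assert slugify("Hello World") == "hello-world"
    # The next slice (e.g. punctuation handling) gets its own failing test
    # later, instead of writing every test up front.

test_slugify_one_slice()
print("slice passes")
```

The banned alternative, writing all tests first, would mean asserting behavior the code does not have yet across every slice at once, which is exactly the "imagined behavior" the skill warns against.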

Six phases: Reproduce, Minimize, Hypothesize, Instrument, Fix, Regression-Test. Real trick: rank and pick the right feedback loop first. Failing automated test is best.
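The "failing automated test as feedback loop" step can be sketched concretely. The bug, function, and inputs below are invented for illustration; the point is that the minimized reproduction and the regression test are the same artifact.

```python
# Sketch of Reproduce -> Minimize -> Fix -> Regression-Test.
# `median` and its bug are hypothetical examples.

def median(values):
    """Fixed version; the original (hypothetical) bug indexed the
    unsorted input instead of sorting first."""
    ordered = sorted(values)   # the fix: sort before indexing
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

def test_median_regression():
    # Reproduce + Minimize: the smallest input that triggered the bug.
    assert median([3, 1, 2]) == 2
    # Regression-Test: stays in the suite so the bug cannot return silently.
    assert median([4, 1, 3, 2]) == 2.5

test_median_regression()
print("regression covered")
```

Ranking feedback loops works the same way in reverse: a failing test like this is cheap and automatic, while "bash-driving a human" (asking someone to click through the app after each change) is the slowest loop available.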

Bar chart: SpecKit 93K, GSD 61K, BMAD 46.7K, mattpocock/skills 68.8K. Just markdown beats two of three.

Big frameworks own the process. When something breaks you do not know which layer to fix. Skills are single files: read it, fork it, delete it. No lock-in. No magic. The catalog is the artifact.

Pocock shipped an installer: npx skills@latest add. Distribution as a feature. Skills are the new package.json. Steal his version.
Folder of markdown files, each a SKILL.md. No runtime, no orchestrator. Agent reads on demand. Fork any one, delete any one.
Agent interviews developer relentlessly before writing any code. One question at a time. Kills vibe coding.
Domain-specific terms defined once in context.md. Cuts repeated prompt verbosity in half. Same bug, same agent, roughly 50% fewer tokens.
Bans horizontal slicing. Red-green-refactor per feature slice. Tests verify actual behavior, not imagined behavior.
Structured bug diagnosis. Build the right feedback loop before anything else. Failing test is best. Bash-driving a human is last resort.
Ship an installer alongside the content. One command picks skills, picks agent, wires them in. Most open-source projects forget this.
“Interview me relentlessly until we reach a shared understanding.”
“Same bug, same agent, half the tokens.”
“Build the right feedback loop, and the bug is 90% fixed.”
“Big frameworks own the process. Skills do the opposite.”
“Skills are the new package.json. Start curating yours or someone else will.”
“The fix for misalignment is friction applied to you.”
“Command on screen. Repos in the description. Skills are the new package dot JSON. Start curating yours or someone else will. Sub for the next one.”
Install command shown visually on screen, repo URL shown, subscribe ask buried at the very end. Extremely clean, non-pushy.
Pocock proved that a curated folder of markdown skill files, no framework, no runtime, is worth more than any 12-agent system. Joe is already building this.
The reason your AI assistant ships the wrong thing is not the AI. It is that you gave it a vague request and expected it to read your mind.