He Open-Sourced His Claude Folder. 68K Stars on GitHub.
How a folder of markdown files beat BMAD, GSD, and most of the agent-framework ecosystem in 90 days.
May 10th, 2026

A 19-minute live demo of Auto Claude, the free open-source Kanban orchestrator that runs parallel Claude Code agents in git worktrees while you sleep.
The title and the first spoken line are the same sentence — a clean pattern-interrupt that wastes zero frames before stating the promise. André Mikalsen opens with a question, answers it with a product name, and is inside the demo by the 40-second mark.
“Get 10 times the work done on your projects with the planning and the quality coding and testing that you should demand from your AI coding system.” (stated at 00:04, delivered at 01:04)

10x the work done — promise stated, creator introduced, product named. No warm-up.

File picker -> .autocloud folder initialized. One-click onboarding.

Planning -> In Progress -> AI Review -> Human Review -> Done. Creates bug-fix task by pasting a screenshot. Shows model, thinking level, and human review gate controls.
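The board columns map naturally onto a small state machine. A sketch of that idea (the stage names come from the board in the demo; the transition table and `advance` helper are assumptions, not Auto Claude's internals):

```python
from enum import Enum

class Stage(Enum):
    PLANNING = "Planning"
    IN_PROGRESS = "In Progress"
    AI_REVIEW = "AI Review"
    HUMAN_REVIEW = "Human Review"
    DONE = "Done"

# Forward-only transitions: a task reaches a human only after the AI
# has already reviewed its own work.
NEXT = {
    Stage.PLANNING: Stage.IN_PROGRESS,
    Stage.IN_PROGRESS: Stage.AI_REVIEW,
    Stage.AI_REVIEW: Stage.HUMAN_REVIEW,
    Stage.HUMAN_REVIEW: Stage.DONE,
}

def advance(stage: Stage) -> Stage:
    """Move a task one column to the right."""
    if stage is Stage.DONE:
        raise ValueError("task is already done")
    return NEXT[stage]
```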

System auto-classifies task as simple (90% confidence). Introduces worktrees (git sandboxes per task) and the merge conflict AI layer. Live log panel shows tool calls.

Up to 12 simultaneous Claude Code terminals, renameable. Tasks can be created from terminal view. Session restore.

Insights = persistent project-aware chat. Roadmap = AI-generated feature priority breakdown. Planned Canny integration.

Project Index auto-parses codebase (Electron + Python detected). Graph memory + semantic RAG accumulates session insights — claims to become cheaper than raw Claude Code over time.

Changelog Generator pulls from completed tasks or Git history since a tag. One-click GitHub Release creation with emoji support. v2.2.0 generated in ~30s.
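The changelog flow is easy to picture: list commit subjects since the last release tag, then bucket them into release-note sections. A minimal sketch (the conventional-commit grouping is an assumption about how the draft could be organized, not confirmed by the video):

```python
from collections import defaultdict

def log_range_command(since_tag: str) -> list[str]:
    """git invocation that lists commit subjects since a release tag."""
    return ["git", "log", f"{since_tag}..HEAD", "--no-merges", "--pretty=format:%s"]

def group_commits(subjects: list[str]) -> dict[str, list[str]]:
    """Bucket subjects by conventional-commit prefix to draft changelog sections."""
    groups: dict[str, list[str]] = defaultdict(list)
    for subject in subjects:
        kind, _, rest = subject.partition(": ")
        # Subjects without a "type: " prefix land in a catch-all bucket.
        groups[kind if rest else "other"].append(rest or subject)
    return dict(groups)
```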

Supports multiple Claude Max accounts with auto-switching on rate limits. GitHub Issues integration incoming.
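Auto-switching on rate limits is, at heart, a retry loop over a pool of credentials. A minimal sketch, assuming the underlying call signals throttling with an exception (all names here are hypothetical):

```python
class RateLimited(Exception):
    """Raised when an account hits its Claude Max rate limit."""

def run_with_rotation(accounts, call):
    """Try each account in order, moving on whenever one is throttled."""
    last_err = None
    for account in accounts:
        try:
            return call(account)
        except RateLimited as err:
            last_err = err  # this account is throttled; try the next one
    raise RuntimeError("every account is rate-limited") from last_err
```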

Download zip -> open in Cursor -> install Node.js + Python + Docker Desktop -> pnpm install + pnpm run start. Live macOS install demo.

Discord community plug, subscribe ask. Clean end.
Each task runs in its own git worktree (isolated branch). Parallel tasks cannot clobber each other. Merge conflict AI layer resolves diffs when tasks complete.
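The worktree mechanics are plain git underneath. A sketch of the commands one task's lifecycle implies (the helper names and the `.worktrees/` layout are assumptions; the video does not show Auto Claude's internals):

```python
from pathlib import Path

def open_sandbox(repo: Path, task_id: str) -> list[str]:
    """Give a task its own branch, checked out into its own directory."""
    branch = f"task/{task_id}"
    sandbox = repo / ".worktrees" / task_id
    return ["git", "-C", str(repo), "worktree", "add", "-b", branch, str(sandbox)]

def close_sandbox(repo: Path, task_id: str) -> list[list[str]]:
    """Merge the finished task back, then tear the sandbox down."""
    branch = f"task/{task_id}"
    sandbox = repo / ".worktrees" / task_id
    return [
        ["git", "-C", str(repo), "merge", "--no-ff", branch],
        ["git", "-C", str(repo), "worktree", "remove", str(sandbox)],
        ["git", "-C", str(repo), "branch", "-d", branch],
    ]
```

Because each agent writes only inside its own worktree, twelve of them can run at once without touching each other's files; conflicts surface only at merge time, which is exactly where the merge conflict AI layer sits.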
Auto Claude classifies each task before coding begins. Simple tasks get a quick spec + one test. Complex tasks get full spec + multiple subtasks + deeper review. Controls token spend automatically.
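The classification step amounts to routing each task to either a cheap or an expensive pipeline. A sketch with illustrative numbers (the 0.8 confidence cutoff and the subtask counts are invented, not taken from the video):

```python
from dataclasses import dataclass

@dataclass
class Plan:
    spec_depth: str      # "quick" or "full"
    subtasks: int
    tests: int
    review_passes: int

def plan_for(complexity: str, confidence: float) -> Plan:
    """Route simple, confidently classified tasks to the cheap pipeline;
    everything else, including low-confidence calls, gets the full treatment."""
    if complexity == "simple" and confidence >= 0.8:
        return Plan(spec_depth="quick", subtasks=1, tests=1, review_passes=1)
    return Plan(spec_depth="full", subtasks=4, tests=3, review_passes=2)
```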
The Kanban columns represent real agent states. Tasks only surface for human review after the AI has reviewed its own work. Human time is reserved for final acceptance, not QA.
As Auto Claude accumulates session memory, it retrieves relevant context with fewer tokens; the claim is that, over time, each task costs less than it would in raw Claude Code. Compounding efficiency.
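The compounding-efficiency idea reduces to retrieving a few stored insights under a token budget instead of re-reading the codebase cold. A toy sketch (keyword scoring stands in for whatever graph/embedding search Auto Claude actually uses):

```python
def retrieve(notes, query_terms, budget):
    """Pick the highest-scoring session notes that fit a token budget.

    notes: list of (text, token_cost) pairs accumulated across sessions.
    """
    scored = sorted(
        notes,
        key=lambda note: sum(term in note[0].lower() for term in query_terms),
        reverse=True,
    )
    picked, spent = [], 0
    for text, cost in scored:
        if spent + cost <= budget:
            picked.append(text)
            spent += cost
    return picked
```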
“10 times the work done on your projects with the planning and the quality coding and testing that you should demand from your AI coding system.”
“A work tree is basically a sandbox or environment where the coding is happening in one place and it won't touch any of the other files.”
“The more you use Auto Claude, the smarter it becomes at actually retrieving context at a smaller token usage. So it will become cheaper to actually use Auto Claude over Claude Code when you use it over time.”
“I get a lot of tasks done while I sleep.”
“Join our Discord community. If you have liked the video, be sure to subscribe and like it.”
Soft and brief — Discord first, then subscribe. No product pitch, no upsell. Matches the free/open-source positioning.
The worktree-per-task pattern is the unlock — it is what makes running 12 parallel agents safe rather than chaotic.
You're probably using one Claude session at a time — Auto Claude lets you run many in parallel without manually managing any of them.