Gemini CLI – the real Claude Code killer?
Brandon Hancock installs Google's open-source coding agent, runs it live through three escalating tests on a real production codebase, and lets the results speak.
June 26th 2025
Brandon Hancock spends 35 minutes putting Gemini CLI through three live tests: a one-line styling fix, a full memory-feature build, and a from-scratch landing page. He lands on a single rule: Gemini CLI thrives with context, dies without it.
Brandon Hancock opens face-cam in front of an American flag and a bookshelf and lands the same three nails Google itself led the launch with: direct competitor to Claude Code, insanely powerful, completely free. Then he promises the only thing a developer actually wants — three real coding tests, no demos, watch it succeed and fail in real time.
Stated at 00:14, delivered by 34:16: “I'm gonna break down everything you need to know about Gemini CLI. So we're gonna cover what makes it so special, how to set it up, and then finally, we're gonna put it to the test on a few different code examples so that you can see firsthand should you use this tool, should you not.”

Talking-head intro promising the breakdown: what makes Gemini CLI special, how to set it up, and three real-world coding tests.

Walks through Google's launch blog post. Open-source CLI for the terminal, two modes (interactive REPL + single-shot prompt), mentions Gemini Code Assist as Google's Cursor competitor. Frames Google as 'AI everywhere developers are.'

Full-screen card: 60 req/min, 1,000 req/day, Gemini 2.5 Pro (1M context), open source, available free of charge. Brandon does the math: a single 1M-context request would cost roughly $3 on the paid API, so 60 of them is about $180 of tokens, handed to you for free.

Built-in Google Search from the terminal, MCP server support (image-gen + Veo video gen demo from launch post), GEMINI.md custom prompt file, scripting/automation. Then setup: npm install -g @google/gemini-cli, run gemini, log in with personal Google account for free tier, or paste GEMINI_API_KEY into .env to bypass limits. Tours /version, /theme, /editor, /tools slash commands.
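The setup flow Brandon walks through condenses to a few commands. Package name and the single-shot -p flag come from the launch post; check gemini --help for the flags current in your installed version:

```shell
# Install the CLI globally, then launch the interactive REPL.
npm install -g @google/gemini-cli
gemini                          # sign in with a personal Google account for the free tier

# Single-shot (non-interactive) mode instead of the REPL:
gemini -p "explain the structure of this repo"

# Or bypass free-tier limits with a paid API key in the project's .env:
echo 'GEMINI_API_KEY=your-key-here' >> .env
```

Inside the REPL, /version, /theme, /editor, and /tools are the slash commands Brandon tours.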

Real bug: profile page won't scroll, content cut off at the bottom. Brandon @-references the file, Gemini investigates, proposes a CSS overflow fix, opens the diff in his external editor (Cursor) for review, applies. Then he teaches Gemini a project rule on the fly: 'only run npm lint, never run npm run' — update memory — Gemini writes it to GEMINI.md so the rule persists. Verdict: clean pass.

Adds an entire new Memories tab to his ShipKit chat template: new sidebar entry, CRUD page, Postgres schema migration, API changes so every chat injects memories into the system prompt. Uses an AI-driven workflow — screenshot + task_template.md + GEMINI.md as context, asks Gemini to plan first, reviews the multi-phase plan, requests Phase 0 (schema), then implements phase by phase. Gemini lints between steps, fixes its own errors, ships working feature in minutes. End-to-end test in-app: types a memory, sends a chat, response respects it. Verdict: massive pass.
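The core of the feature, injecting stored memories into every chat's system prompt, can be sketched roughly like this. Function and field names are hypothetical illustrations, not ShipKit's actual API:

```python
# Hypothetical sketch of Case 2's pattern: prepend the user's saved memories
# to the base system prompt before each chat completion.
def build_system_prompt(base_prompt: str, memories: list[str]) -> str:
    if not memories:
        return base_prompt
    memory_block = "\n".join(f"- {m}" for m in memories)
    return f"{base_prompt}\n\nKnown facts about this user:\n{memory_block}"

prompt = build_system_prompt(
    "You are a helpful assistant.",
    ["Prefers TypeScript examples", "Works on a Next.js app"],
)
```

Brandon's end-to-end test exercises exactly this path: a saved memory shows up in the next chat's behavior because it now rides along in the prompt.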

Empty folder, .env with API key. Prompt: build a Next.js landing page for the AI With Brandon channel, look me up on YouTube first, make it beautiful and modern. Gemini researches the channel, drafts copy. Hits a wall trying to run create-next-app interactively (CLI wizards confuse it) — Brandon escapes, runs npx manually, hands the scaffolded project back. Gemini styles it, but the result is generic and ugly. Second pass with a screenshot + 'do not stop until it's absolutely beautiful' — slightly better, still underwhelming. Verdict: fail. 'Gemini CLI thrives with context, struggles without it.'
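The escape hatch Brandon uses, running the scaffold himself, can also be done non-interactively so an agent never hits the wizard. Flags are from create-next-app's documented options; verify against npx create-next-app@latest --help for your version:

```shell
# Scaffold a Next.js app with the wizard's questions pre-answered,
# then hand the resulting project back to the agent for styling.
npx create-next-app@latest my-landing --ts --eslint --tailwind --app --no-src-dir
```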
Brandon's review structure: escalating-difficulty live coding tests, each with an explicit pass/fail call. Lets the viewer make their own verdict without trusting his opinion.
Brandon's whole AI coding workflow, taught in passing inside the Case 2 demo. The pattern (template + context file + plan-then-execute) is portable across any agentic CLI.
When Gemini makes a mistake, type the correction and add the words 'update memory'. Gemini writes the rule into GEMINI.md automatically; use /memory show to view all current rules.
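The persisted rule ends up as plain markdown in GEMINI.md. The exact wording is Gemini's; this is an illustrative sketch of the rule Brandon teaches it in Case 1:

```
# Project rules
- Only run `npm lint` to verify changes; never run `npm run` scripts.
```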
Don't just list specs; convert them to dollars. Sixty 1M-context Gemini 2.5 Pro requests at roughly $3 each is about $180 in free tokens. A concrete dollar figure kills the 'is this real?' skepticism in one sentence.
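Brandon's back-of-the-envelope math, spelled out (the $3 figure is his estimate for one full 1M-context Gemini 2.5 Pro request on the paid API):

```python
# Back-of-the-envelope value of the free tier, per Brandon's pitch.
COST_PER_MAX_REQUEST = 3      # dollars, Brandon's estimate for a full 1M-context call
FREE_REQUESTS_PER_MIN = 60    # free-tier rate limit from the launch card
free_token_value = COST_PER_MAX_REQUEST * FREE_REQUESTS_PER_MIN
print(f"${free_token_value} of tokens for one minute of max-context requests")  # $180
```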
“If you ran 60 of those massive requests, you're easily getting up to a $180 worth of tokens from Gemini 2.5 Pro completely for free.”
“I never wrote a single line of code. I added in a whole new feature, database change, UI change, and updated API calls, and it worked in under two minutes.”
“Gemini CLI thrives with context, and without context, it's struggling.”
“Use Gemini CLI on your existing projects to add in new features and small changes. I would not recommend it right now for creating brand new projects.”
“I have a ton of other AI related content right here on this channel. Everything from Agent Development Kit, LangChain, CrewAI, Next.js. I have it all right here, and I definitely recommend checking out those videos and whichever video is popping up right now on the screen.”
Soft channel CTA at the end — no hard ask for likes, no link drop, just 'next video is on screen.' Honest video earns the trust to skip the hard sell. The real sales pitch is the ShipKit.ai mention woven into Case 2 as the codebase being demoed.
Stop reviewing AI tools by 'first impressions' — escalate three real tasks from trivial to greenfield and give each one an explicit pass/fail.
Yes — but only on existing projects where you can hand it real context (a codebase, a task template, an agent-memory file). It's free and Gemini 2.5 Pro is genuinely good. Don't ask it to build a brand-new app from scratch yet.