The bait, then the rug-pull.
The cold open does double duty: dangle a fast outcome ('10x more productive') AND name the cost of inaction ('amnesia' — the moment your model starts speaking Spanish halfway through a thread). Inside sixty seconds you know the promise, the problem, and the format of the answer.
What the video promised.
Stated at 00:05, delivered at 16:50:

“I'm gonna show you exactly how to set up your own second AI brain that works across all apps with an incredibly simple setup. So you can have a memory operating system that makes you 10 times more productive and stops wasting your time.”
Where the time goes.

01 · Cold open + credibility
Promise (10x productivity, second AI brain across apps), pattern interrupt ('amnesia'), then quick credibility flex — sold last startup, builds AI businesses, drops a graph-view memory map as visual proof.

02 · What does great look like?
Defines the four properties of a great memory system before building anything: remembers everything, lets you edit on the fly, plugs into every platform (kills info silos), fuels every answer with context.

03 · Memory as input, not vault
Reframes the mental model. Every prompt silently pulls who you are, what you're shipping, what you started last month. Surfaces the failure mode of long threads ('Claude has amnesia' / 'speaking Spanish').

04 · Three levels framework
Names the architecture: Short (who am I), Mid (what am I doing), Long (what happened before + expert knowledge). The same architecture holds across Claude/OpenClaw/ChatGPT.

05 · Layer 1 - Operating Manual (who am I)
First tier: identity, role, goals, tone, non-negotiables. Stuff that does not change weekly. ~200 words max. Lives natively in every platform's global settings; example walkthrough on Claude desktop + Anti-Gravity Customization.

06 · Explicit vs implicit memory + the rule
Two flavors: hard-coded instructions you write, plus the model's own learned memory growing as you converse. Lands the principle: 'the outcome of a conversation should never depend on chat history' — if it matters, write it down.

07 · Layer 2 - The Workshop (what am I doing)
Mid-term, project-scoped. Ask Claude to organize life/business into 6-8 categories (community, agency, startup, personal/health). One project folder per category.

08 · Project CLAUDE.md + memory folder
Each project gets a CLAUDE.md at root (mission, stack, decisions, memory map, references — keep under 200 lines) plus a memory/ subfolder for evolving artifacts: decisions, current-strategy, next-actions, session-summaries.

09 · Mutable layer + workflow
How you actually use it: open the project folder for whatever you're working on right now. Designed to be rewritten as priorities shift. Demos the same setup in Claude desktop projects, Claude Code, and Anti-Gravity — same idea, different UI.
10 · Layer 3 - The Arcade (long-term memory)
Third tier answers 'what happened before?' Two storage options introduced: Pinecone (vector DB for semantic search at scale) and Obsidian (markdown + graph view for hand-editable memory). Most people over-complicate this layer.
11 · Obsidian vs Pinecone tradeoff
Obsidian when you want to read and edit memory by hand (graphs, backlinks, strategy notes, decision frameworks). Pinecone when you want indexed semantic search across thousands of records, scale, anywhere access. Jack personally uses Pinecone.
12 · Conversation archive + wrap-up skill
First sub-layer of long-term: every meaningful conversation ends with a /wrap-up skill that summarizes decisions, next actions, metadata, and embeds the result into Pinecone. Indexed and searchable later by date and topic.
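The wrap-up step can be sketched as a small record builder. This is a sketch, not Jack's actual skill: the function name `build_wrapup_record` and the metadata fields (`topic`, `date`, `decisions`, `next_actions`) are assumptions about what a date-and-topic-searchable archive would need, and the Pinecone call at the bottom is illustrative.

```python
# Hypothetical shape of a /wrap-up output before it is embedded and
# upserted into Pinecone. Field names are assumptions, not confirmed
# from the video.
from datetime import date
import hashlib

def build_wrapup_record(summary: str, decisions: list[str],
                        next_actions: list[str], topic: str) -> dict:
    """Package one conversation's wrap-up so it can be searched
    later by date and topic."""
    text = summary + "\n" + "\n".join(decisions + next_actions)
    return {
        # Stable id: re-running wrap-up on the same session overwrites.
        "id": hashlib.sha1(text.encode()).hexdigest()[:16],
        "text": text,  # the string that gets embedded
        "metadata": {
            "topic": topic,
            "date": date.today().isoformat(),
            "decisions": decisions,
            "next_actions": next_actions,
        },
    }

# Illustrative final step (index name is a placeholder): embed
# record["text"] with your embedding model, then
#   index.upsert([(record["id"], vector, record["metadata"])])
```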
13 · Expert knowledge bases
Second sub-layer of long-term: domain-specific corpora (YouTube expertise, Hormozi business strategy). Layer 2 CLAUDE.md tells Claude which Pinecone indexes to consult for which questions — this is where the three layers interconnect.
14 · Building knowledge with NotebookLM
Workflow: ask Claude/ChatGPT to research a topic, auto-generate a 50-resource NotebookLM notebook, then download and vectorize into Pinecone (or keep in Obsidian).
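The "download and vectorize" step implies chunking each resource before embedding. A minimal sketch, assuming overlapping word windows; the window and overlap sizes are illustrative defaults, not values from the video.

```python
# Split a downloaded resource into overlapping word-window chunks
# sized for an embedding model, so an idea cut at a boundary still
# lands whole in at least one chunk.
def chunk(text: str, size: int = 200, overlap: int = 40) -> list[str]:
    words = text.split()
    if len(words) <= size:
        return [text] if words else []
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, len(words) - overlap, step)]
```

Each chunk then gets embedded and upserted into the Pinecone index (or saved as a note in Obsidian) with the source document in its metadata.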
15 · Firecrawl as MCP connector
Walkthrough adding Firecrawl as a Claude custom MCP connector (Connectors -> Add custom connector -> paste API key). Claims ~80% cost savings and better accuracy for agentic deep research vs default browsing.
16 · Recap + open loop to super-skills
Restates the three layers (who / what / before) and frames memory as only as strong as the skills supporting it. Hard cut into next-video CTA on 'super-skills that make your memory system more powerful'.
Named ideas worth stealing.
Three-Level Memory System
- L1 Short / Operating Manual - Who am I?
- L2 Mid / The Workshop - What am I doing?
- L3 Long / The Arcade - What happened before + expert knowledge
Stratified memory across timescales and scopes. Each tier answers a different question and lives in a different location (global settings / project folder / vector store).
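The tier map reads naturally as data. A sketch of the taxonomy as a lookup table, with a toy router on top; the `lives_in` strings paraphrase the video, and exact platform settings names vary.

```python
# The three-tier memory map: each tier answers a different question
# and lives in a different location.
MEMORY_LAYERS = {
    "L1 short": {
        "question": "Who am I?",
        "lives_in": "global settings / custom instructions",
        "examples": ["identity", "role", "tone", "non-negotiables"],
    },
    "L2 mid": {
        "question": "What am I doing?",
        "lives_in": "project folder (CLAUDE.md + memory/)",
        "examples": ["mission", "stack", "decisions", "next actions"],
    },
    "L3 long": {
        "question": "What happened before?",
        "lives_in": "vector store or vault (Pinecone / Obsidian)",
        "examples": ["session summaries", "expert knowledge bases"],
    },
}

def route(question: str) -> str:
    """Toy router: return the tier whose question matches;
    everything else is history, so fall back to long-term."""
    for layer, spec in MEMORY_LAYERS.items():
        if spec["question"].lower() == question.lower():
            return layer
    return "L3 long"
```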
Four properties of a great memory system
- Remembers everything you said
- Lets you change the important stuff on the fly
- Plugs into every platform (kills info silos)
- Fuels every answer with context
Spec sheet for evaluating any memory architecture. Used as the pre-build checklist before showing the implementation.
Project Operating Manual template
- What is the folder / what is the goal / why does it exist
- The stack (what you're building it with)
- Decisions already made (so we don't relitigate)
- Memory map - where each memory lives
- References
Per-project CLAUDE.md skeleton. Under 200 lines because it gets prepended to every conversation in that scope.
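The skeleton translates directly into a scaffolding script. A sketch, assuming the section headings above; the placeholder text and the example category names are illustrative, not Jack's exact wording.

```python
# Scaffold the Layer-2 layout: one folder per life/business category,
# each with a CLAUDE.md at the root and a memory/ subfolder for the
# evolving artifacts.
from pathlib import Path

CLAUDE_MD_SKELETON = """\
# {name}

## Mission
What this folder is for and why it exists.

## Stack
What you're building it with.

## Decisions already made
So we don't relitigate them.

## Memory map
Where each memory lives (see memory/ beside this file).

## References
Links and source material.
"""

MEMORY_FILES = ["decisions.md", "current-strategy.md",
                "next-actions.md", "session-summaries.md"]

def scaffold(root: Path, categories: list[str]) -> None:
    """Create one project folder per category (the video suggests 6-8)."""
    for name in categories:
        project = root / name
        (project / "memory").mkdir(parents=True, exist_ok=True)
        (project / "CLAUDE.md").write_text(CLAUDE_MD_SKELETON.format(name=name))
        for fname in MEMORY_FILES:
            (project / "memory" / fname).touch()
```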
Memory is an input, not a vault
Reframes memory as plumbing into every prompt rather than a passive store you look things up in. Every prompt silently pulls who you are, what you're shipping, what you started last month.
Outcomes-should-never-depend-on-history rule
Stress test for your memory system: open a new chat with zero context — does it still give the best advice? If not, the implicit/chat-history layer is doing work that should be explicit.
Lines you could clip.
“Claude memory systems are a cheat code, but only if you use them properly.”
“Memory is not a vault, it's an input. Every prompt silently pulls from your stack.”
“How many times have you spoken with Claude or ChatGPT to get halfway through the conversation and it's talking complete Spanish?”
“The outcome of the conversation should never ever depend on a chat history.”
“Models forget things, they get truncated, they hallucinate. If it matters, we need to make sure we're writing it down.”
“Most people over-complicate it or don't set it up properly, meaning they get none of the benefits but all the complexity.”
How they asked for the click.
“Your memory system is only as strong as the skills that support it. So what we need to do next is learn what I call super skills that can make your memory system even more powerful, and we're gonna learn that by watching this video right here.”
Open-loop close — no subscribe ask, no link, no product. Pure retention CTA pointing to the next video. Clean for tutorial format because it preserves the 'I just gave you something useful' afterglow.
Steal the three-question taxonomy.
Memory is not storage — it's the plumbing that makes every prompt sharper than the last.
- Frame any AI-tooling content with the three-question collapse: Who am I? / What am I doing? / What happened before? It's screenshot-friendly and survives the platform port.
- Build the L2 Workshop as an actual file system: 6-8 project folders, CLAUDE.md at root, sibling memory/ directory with decisions / current-strategy / next-actions / session-summaries.
- Lift Jack's 'memory is an input, not a vault' line — rhetorical sibling to Joe's 'plumbing you own vs utilities you rent'.
- Use the testable rule as a sharp closer: 'the outcome should never depend on chat history' — if it matters, write it down.
- Steal the slide aesthetic: treasure-map / parchment over generic-AI gradient cards. Visual differentiation matters in this niche.
- Open-loop close to a 'super-skills' follow-up rather than a subscribe-ask — preserves the gift afterglow on tutorial content.
Build your own AI memory system this week.
Stop re-explaining yourself to your AI every morning — install a three-layer memory once and every conversation gets sharper.
- Layer 1 (today, 10 minutes): write a 200-word identity file. Name, role, tone preferences, non-negotiables, current stack. Paste it into Claude's Instructions, Cursor/VS Code global rules, and ChatGPT's Custom Instructions.
- Layer 2 (this week): ask Claude to bucket your work into 6-8 categories. Make a folder per category with a CLAUDE.md at the root and a memory/ subfolder beside it. One per real life-area (not per app).
- Layer 3 (when you're ready): pick ONE — Obsidian if you like reading your notes; Pinecone if you want semantic search and don't care about reading. Don't set up both. Don't make it complicated.
- End every meaningful AI conversation with a wrap-up: ask the model to summarize the decisions, next actions, and key insights from this session. Save the result into your memory folder.
- Test the system: open a fresh chat window with no history. If you can't get great advice immediately, the memory layer is incomplete — something important is only living in chat history.
- Don't over-build. Most people fail Layer 3 by piling on tools (Pinecone + Obsidian + ChromaDB + a custom RAG). Pick one. Cap project folders at eight. If it takes more than an evening to set up, you're doing it wrong.
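The caps in the checklist are easy to enforce mechanically. The 200-word identity cap, 200-line CLAUDE.md cap, and eight-folder cap come from the video; the `audit` function itself is just a convenience sketch.

```python
# Self-audit against the budgets above; returns a list of violations
# (empty list means the system passes).
def audit(identity_text: str, claude_md_text: str, n_projects: int) -> list[str]:
    problems = []
    if len(identity_text.split()) > 200:
        problems.append("Layer 1 identity file exceeds ~200 words")
    if len(claude_md_text.splitlines()) > 200:
        problems.append("CLAUDE.md exceeds 200 lines (it is prepended to every chat)")
    if n_projects > 8:
        problems.append("more than 8 project folders")
    return problems
```

Run it whenever you edit Layer 1 or a project's CLAUDE.md; a non-empty result is the signal to trim, not to add tooling.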