Modern Creator Network
Mark Kashef · YouTube · 25:11

7 Things You Can Build with Claude Code Agent Teams

A 25-minute operating manual for Claude Code agent teams aimed at non-technical users - six of the seven use cases have nothing to do with code.

Posted
2 months ago
Duration
25:11
Format
Tutorial · educational
Channel
Mark Kashef
§ 01 · The Hook

The bait, then the rug-pull.

Mark Kashef opens with a deliberate misdirect: everyone assumes Claude Code agent teams are for developers. Six of the seven workflows he is about to demonstrate have nothing to do with code -- and one of them is an AI advisory board that can tell you whether to launch your next product.

§ · Stated Promise

What the video promised.

Stated at 00:24: "By the end of this, you won't look at agent teams in the same way." Delivered at 24:28.
§ · Chapters

Where the time goes.

00:00–00:27

01 · Hook + promise

Cold-camera open: six of seven use cases are non-technical. Each example is a single prompt. Sets the curiosity gap.

00:28–00:43

02 · Overview of all 7

Quick tour of the seven use cases. Explains the demo structure: diagram then prompt then output.

00:44–04:53

03 · Use Case 1: Content Repurposing Engine

One transcript to four parallel platform writers. Agents share angles to prevent repetition. Postmortem synthesis report flags inconsistencies.

04:54–09:05

04 · Use Case 2: Research and Pitch Deck Builder

Sequential handoff Researcher to Slide Writer to Designer (.pptx output). Human-in-the-loop approval gate. 3-5 agent rule from Anthropic cited.

09:06–12:57

05 · Use Case 3: RFP and Proposal Response

Two parallel waves with shared data pool. Response to a real WorkSafeBC AI scribe RFP. Outputs capability matrix and full markdown proposal.

12:58–15:02

06 · Use Case 4: Competitive Intelligence Report

One analyst per competitor plus synthesis lead. Claude Code given creative freedom to define the team. Agents share top-3 findings before synthesis.

15:03–18:18

07 · Use Case 5: AI Advisory Board

Five agents debate a $7,500 bootcamp launch: Market Researcher, Audience Gap Analyst, Financial Modeler, Competitive Strategist, Devil's Advocate. Output: conditional go/no-go brief.

18:19–21:02

08 · Use Case 6: Marketing Campaign Launch

Email Marketer, Social Media Manager, Ad Copywriter with psychological-framework variants, Landing Page Creator, consistency-check synthesis agent.

21:03–24:28

09 · Use Case 7: Personal AI Assistant (MarkClaw)

Sub-agents clone and analyze OpenClaw repo for cheap context, then agent team builds Architect, Telegram Interface, Skill Router, Memory, CLI.

24:29–25:11

10 · Resources and CTA

Free prompts in description (Gumroad link). Community plug (Skool). Subscribe CTA.

§ · Storyboard

Visual structure at a glance.

00:00 · hook · open
00:44 · value · use case 1: repurpose
04:54 · value · use case 2: pitch deck
09:06 · value · use case 3: RFP
12:58 · value · use case 4: competitive intel
15:03 · value · use case 5: advisory board
18:19 · value · use case 6: marketing
21:03 · value · use case 7: MarkClaw
24:29 · cta · CTA
§ · Frameworks

Named ideas worth stealing.

01:24 · concept

Agent Team vs Sub-Agents

  1. Sub-agents: parallel execution, NO inter-agent communication
  2. Agent teams: parallel or sequential, WITH agent-to-agent communication
  3. Always say "create an agent team" -- "spawn agents" alone is ambiguous

The fundamental architectural distinction that determines whether agents can share context and coordinate on output angles.

Steal for: Any tutorial, newsletter, or course on Claude Code or multi-agent systems
07:31 · concept

3-to-5 Agent Rule

  1. 3-5 agents is the Anthropic-recommended sweet spot
  2. Beyond 5: diminishing returns, over-engineering, token explosion
  3. Token benchmarks: simple ~150K, sequential ~180K, technical tasks ~300K+

Rule of thumb from Anthropic for sizing agent teams. Cited directly in the pitch deck use case.

Steal for: Any content teaching multi-agent prompt design
05:25 · model

Sequential Handoff vs Parallel Waves

  1. Sequential handoff: each agent waits for the prior output
  2. Parallel wave: agents tackle mutually exclusive tasks simultaneously
  3. Hybrid: two parallel phases with a merge step in between

The two primary topologies for agent teams. Choosing correctly prevents wasted tokens and dependency errors.

Steal for: Diagrams and explainers on agentic workflow design
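The hybrid topology (a parallel wave, a merge step, then a second parallel wave) can be sketched with ordinary functions standing in for agents. This is a toy model under the assumption that each agent is a plain function; the agent names and return values are illustrative stand-ins, not Claude Code APIs.

```python
from concurrent.futures import ThreadPoolExecutor

# Stub "agents": in Claude Code these would be real teammates defined in
# the prompt; here they are placeholder functions so the topology is visible.
def rfp_analyst(rfp_text):
    return {"requirements": ["scope", "pricing", "security"]}

def capability_researcher(company_profile):
    return {"capabilities": ["AI scribes", "40+ projects"]}

def section_writer_a(pool):
    return f"Exec summary covering {', '.join(pool['requirements'])}"

def section_writer_b(pool):
    return f"Qualifications: {', '.join(pool['capabilities'])}"

def run_hybrid(rfp_text, company_profile):
    # Wave 1: mutually exclusive research tasks run in parallel.
    with ThreadPoolExecutor() as ex:
        f1 = ex.submit(rfp_analyst, rfp_text)
        f2 = ex.submit(capability_researcher, company_profile)
        shared_pool = {**f1.result(), **f2.result()}  # the merge step
    # Wave 2: the writers consume the shared pool, again in parallel.
    with ThreadPoolExecutor() as ex:
        drafts = list(ex.map(lambda w: w(shared_pool),
                             [section_writer_a, section_writer_b]))
    return drafts

drafts = run_hybrid("rfp text", "company profile")
```

A pure sequential handoff would simply call the functions in order, each taking the previous one's output; the point of the sketch is that wave 2 cannot start before the merge completes.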
06:47 · concept

Human-in-the-Loop Interrupt Pattern

  1. Add "require plan approval from [agent] before they start building" to the prompt
  2. Triggers the ask-user-input tool inside Claude Code
  3. Agents present: approve as-is / approve with notes / reject with rework

A prompting pattern that inserts a human review checkpoint mid-workflow without breaking the agent team flow.

Steal for: Any workflow where a human sign-off gate is needed before expensive computation begins
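The three outcomes of the gate (approve as-is, approve with notes, reject with rework) reduce to a small decision function. This is a hypothetical sketch that only mirrors the outcomes; the real ask-user-input tool is interactive inside Claude Code, and this function is not part of it.

```python
def approval_gate(plan: str, decision: str, notes: str = ""):
    """Model the human-in-the-loop checkpoint as a pure function.

    decision: one of "approve", "approve_with_notes", "reject".
    Returns whether the downstream agent may proceed, and with what plan.
    """
    if decision == "approve":
        return {"proceed": True, "plan": plan}
    if decision == "approve_with_notes":
        return {"proceed": True, "plan": plan + "\nReviewer notes: " + notes}
    if decision == "reject":
        return {"proceed": False, "rework": notes or "revise and resubmit"}
    raise ValueError(f"unknown decision: {decision}")
```

The useful property to notice is that the expensive build step only runs when `proceed` is true, which is exactly what the prompt phrase buys you.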
02:36 · concept

Condition Gates

  1. "Before writing, each teammate should identify the 3 most compelling insights"
  2. Agents cannot advance until the condition is met
  3. Use to enforce quality bars and prevent agents rushing ahead

Inline criteria that act as checkpoints inside a prompt, forcing agents to satisfy a requirement before proceeding.

Steal for: Any multi-output pipeline where consistency or quality gating matters
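As a sketch, the gate reduces to a check that every teammate has reported the required number of insights before anyone is allowed to write. The function below is an illustration of the pattern, not anything Claude Code exposes.

```python
def condition_gate(insights_by_agent: dict, required: int = 3) -> bool:
    """Block the team until every agent has reported `required` insights.

    insights_by_agent: {agent_name: [insight, ...]}
    Raises if any agent is short, naming who the team is waiting on.
    """
    short = sorted(a for a, ins in insights_by_agent.items()
                   if len(ins) < required)
    if short:
        raise RuntimeError(f"gate not met, waiting on: {short}")
    return True
```

In the prompt this is just a sentence; the value of thinking of it as a gate is that downstream work is structurally impossible until the criterion is satisfied.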
03:06 · concept

Postmortem Synthesis Agent

  1. Add a final team-lead agent whose only job is to review all outputs
  2. Checks for: consistent tone, no contradictions, all requirements addressed
  3. Produces a postmortem report alongside the deliverables

A meta-agent that audits the rest of the team's work. High-value pattern for any multi-output pipeline.

Steal for: Content repurposing pipelines, proposal writing, campaign launches
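A minimal stand-in for the audit step, assuming each teammate's output is reduced to a lead angle plus text. The two checks here (duplicate lead angles, unmet requirements) mirror the list above, but the implementation is illustrative; the real postmortem agent does this by reading the deliverables.

```python
from collections import Counter

def postmortem(outputs: dict, requirements: list) -> dict:
    """Audit a team's deliverables.

    outputs: {agent_name: {"lead_angle": str, "text": str}}
    Returns a report flagging repeated angles and requirements
    no deliverable mentions (a crude substring check).
    """
    angle_counts = Counter(o["lead_angle"] for o in outputs.values())
    all_text = " ".join(o["text"].lower() for o in outputs.values())
    return {
        "duplicate_angles": [a for a, n in angle_counts.items() if n > 1],
        "unmet_requirements": [r for r in requirements
                               if r.lower() not in all_text],
    }
```

A clean report is the signal that the deliverables are safe to ship without a human re-reading every one.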
21:36 · concept

Sub-Agent Offloading (Token Preservation)

  1. Use a sub-agent for grunt work (clone repo, read codebase) before spinning up the main agent team
  2. Sub-agent output feeds into the agent team as context
  3. Avoids burning agent-team token budget on reading/research phases

A hybrid sub-agent + agent-team pattern for complex tasks where research and build are distinct phases.

Steal for: Any workflow with a heavy research/ingestion phase before creative or analytical work begins
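The token math behind the pattern can be sketched with a crude length-based estimate (roughly four characters per token is a common rule-of-thumb approximation). The repo contents below are fabricated; the point is only that the team sees a condensed brief instead of the raw corpus.

```python
def research_subagent(repo_files: dict) -> str:
    """Grunt-work phase: read everything once, emit a condensed brief.

    Here the 'analysis' is just grabbing each file's first line; a real
    sub-agent would summarize architecture, patterns, and dependencies.
    """
    headlines = [text.splitlines()[0] for text in repo_files.values()]
    return "ARCHITECTURE BRIEF:\n" + "\n".join(headlines)

def rough_tokens(text: str) -> int:
    return len(text) // 4  # ~4 chars/token, a common approximation

# Fabricated repo: 20 modules, each mostly boilerplate.
repo = {f"module_{i}.py": f"# module {i}: does thing {i}\n" + "x = 1\n" * 500
        for i in range(20)}

brief = research_subagent(repo)
full_cost = rough_tokens("".join(repo.values()))   # what the team would pay
brief_cost = rough_tokens(brief)                   # what it actually pays
```

The agent team's context receives only `brief`, so the reading cost is paid once, in the cheap sub-agent phase.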
§ · Quotables

Lines you could clip.

01:24
Sub agents can work in parallel, but they don't speak to each other. With agent teams, they can have that agent to agent communication.
Clearest one-line definition of the agent-team vs sub-agent distinction · TikTok hook
02:35
The more intentional you are on telling it exactly where the inputs lie, what the criteria is, and where it should output, the more control and predictability you have over a pretty unpredictable process.
Tight principle applicable to any agentic workflow · IG reel cold open
07:31
Three to five agents is the sweet spot. Anything beyond that can lead to diminishing returns, over engineering, overthinking, and most importantly, a huge consumption of tokens.
Cites Anthropic authority + practical cost warning · TikTok hook
16:44
Once consensus or informed disagreement emerges, synthesize into a single executive brief.
The phrase "informed disagreement" is a linguistic gem · newsletter pull-quote
17:15
This is where prompt engineering meets agentic workflows in a way where both become really powerful.
Clean thesis-level statement for the whole video · IG reel cold open
§ · Pacing

How they spent the runtime.

Hook length: 27s
Info density: high
Filler: 5%
§ · Resources Mentioned

Things they pointed at.

  1. Free prompt pack (second link in the description, via Gumroad)
  2. Skool community
  3. Claude for PowerPoint extension
  4. OpenClaw repository (cloned as the basis for the MarkClaw build)
  5. HTML-to-PPTX library (used by the designer agent)

§ · CTA Breakdown

How they asked for the click.

24:36 · link
I'm gonna make all the prompts I showed you available to you for free in the second link in the description below.

Double CTA -- free prompts (Gumroad) as primary pull, community (Skool) as upgrade. Subscribe ask framed as algo help. Clean and non-pushy.

§ · The Script

Word for word.

00:00When agent teams first dropped in Claude Code, pretty much everyone was solely using them for technical tasks. But of the seven use cases that I'm about to show you, six of them have absolutely nothing to do with code. And the last one helps you build a personal assistant from scratch.
00:14Each example I'm about to show you is composed of a single prompt. You paste it in, a team of agents spins up, and they take care of what would normally take you an entire session. By the end of this, you won't look at agent teams in the same way.
00:27Let's dive in. Now I already prepared all seven use cases I'm about to show you right here. And just to accompany them from a conceptual standpoint, each one, I'll walk through the high level of what the agent team is planning to do, and then we'll go into the actual prompt, and I'll show you the associated output.
00:43Now this first case is really straightforward. It is a content repurposing engine, and this is something that you might have built with things like n8n, Make, or Zapier in the past. But in my opinion, this is infinitely more dynamic and flexible and malleable if you do it in Claude Code.
00:59So the idea is I will give it one of my YouTube scripts, and then it will spawn up an agent team where you have a LinkedIn writer, a thread writer, a newsletter writer, and a blog writer. So the goal is to have one input and multiple outputs, which you will see will be the main theme across all of these different examples.
01:17So when it comes to the prompt, you can say create an agent team to repurpose a video transcript into content for four platforms. The most important magic words you always need to say are "create an agent team" or "spawn an agent team". If you just say "spawn agents", it could get confused between sub agents, which are very different in the way they work, versus agent teams.
01:36And the core difference TLDR is sub agents can work in parallel, but they don't speak to each other. With agent teams, they can have that agent to agent communication.
01:45So all you have to do is just tell it exactly where the transcript is. This could also be in the cloud. You could connect it to some form of API or webhook.
01:52And here's the important part. So when it comes to asking it to spawn the team of agents, you have to be very intentional here. Do you wanna leave it up to Claude Code to decide what those agents should be, or do you wanna have some form of autonomy over it?
02:05In many cases, it makes sense that you should dictate what these agents should be so you can make sure it's executed in the exact way you expect. So in my case, I'm asking for a blog writer where I specify exactly what their role is, same thing with a LinkedIn writer, same thing with a newsletter writer, and I'm also telling it where to output things.
02:22So the more intentional you are on telling it exactly where the inputs lie, what the criteria is, and where it should output, the more control and predictability you have over a pretty unpredictable process. And one key thing you can do is provide conditions. So you can say before writing, each teammate should read the full transcript and identify the three most compelling insights.
02:43So in a way, you're now dictating that they can't move forward until they meet this specific criteria. And one thing you can do to encourage the communication and kind of force the communication is you can say, have them share their chosen insights with each other to ensure that no two platforms lead with the same angle.
03:00Each piece should feel fresh and not repetitive. And along with the normal deliverables, you can also push it to do something like synthesizing a summary, comparing the angles each teammate chose, and flag any messaging inconsistencies.
03:13So you can basically get a postmortem report on top of the deliverables you're asking for. So in terms of this process, it was very straightforward. Because we dictated everything, it could create exactly the agents we asked for.
03:24It spawned them all. It then went back and forth. We got that criterion met of three insights per agent.
03:31But one interesting thing that happened was it said, good. Three of four teammates reported their insights, but there seems to be heavy overlap.
03:39All three picked the three level loading system and the kitchen analogy skills plus MCPs. I need to wait for the Twitter writer's picks before I assign unique lead angles. So now when you have Claude Code and these agent teams, Claude Code takes this third person perspective looking at what's happening so it can better observe, survey, and intervene when needed.
04:00So then Claude Code can look at all of the different angles, make sure that each one has a unique take, and then it can assign it right here. So it tells you the blog writer, LinkedIn writer, newsletter writer, and Twitter writer.
04:12This is the lead angle for each one of them, and this is why they're doing it. So that's the rationale by the Claude Code agent itself.
04:19And at the end, we get a summary of everything that was completed. And then if we go to the bottom here, this goes through all the material that was shared between the agents, any form of inconsistencies that were flagged, and then we get the outputs exactly where we ask for them.
04:32So the beauty of this is I can click on this URL right here, click on command, open it up, and we can take a look at what the blog post looks like. We can intervene. And if you want, you could re-spin up the whole agent team to edit it, but in that case, it might make more sense to spin up sub agents to make edits in parallel, since they don't need to speak to each other if you can identify independently what needs to change.
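The intervention Claude Code performed here (waiting for every writer's picks, then assigning a unique lead angle to each) can be sketched as a small greedy assignment, purely to illustrate the logic. Claude Code does this in natural language as the team lead, not with code like this.

```python
def assign_lead_angles(picks: dict) -> dict:
    """picks: {writer: [insights ranked 1..3]} -> unique lead angle per writer.

    Greedy pass in writer order: take the highest-ranked insight not already
    claimed; fall back to a lower-ranked pick when the top pick is taken.
    """
    assigned, taken = {}, set()
    for writer, ranked in picks.items():
        choice = next((p for p in ranked if p not in taken), None)
        if choice is None:
            raise RuntimeError(f"no unique angle left for {writer}")
        assigned[writer] = choice
        taken.add(choice)
    return assigned
```

Run on overlap like the one in the video (every writer leading with the same insight), each platform still ends up with a distinct angle.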
04:55So the second use case might be very helpful, and this is meant to research and create a pitch deck on a particular topic of our choice. So in stage one, you have the researcher, and the goal of the researchers come back with certain data points. Then we have the plan approval on our end.
05:10And then once approved, we have the slide writer. And the slide writer comes up with what content should be on each and every slide based on the research. And then beyond that, in stage three, we have the designer.
05:22And the designer's role is to actually take all the research, take all the slides, and physically create the PowerPoint file using the HTML to PPTX library. So this is a really good example of a sequential handoff workflow where you can't really have the agent teams work in parallel like you would with something like sub agents.
05:40You need each one to wait for the prerequisite to go to the next stage. So the prompt for this one looks as follows.
05:46Create an agent team to build a 12 slide pitch deck about how AI automation is transforming small business operations in 2026. So once again, I say spawn three teammates with task dependencies. We have the researcher, whose role is to find eight to 10 data points, stats, and supporting evidence.
06:05And then we have the slide writer. Now in this case, I went down to a very deep level of granularity where I said exactly what should be on each slide. So you can always choose to relinquish control or take control.
06:16It's just a matter of the prompt you put together. And then I say exactly what each slide's criteria should be. So a max of eight words, three to four bullets, and some speaker notes at the bottom of the slide, and then I tell it exactly where to save to.
06:29And then we have the designer. So using the slide writer's content, I'm implying the whole sequential handoff from here. Build the actual file using Python.
06:38Now this is overkill. It would figure it out on its own, but, again, the less thinking you have to make Claude Code do, the more accurate the results. And one last key nugget here is you can force the agent team to interrupt itself by asking for your input.
06:52So when you say require plan approval for the designer before they start building, once the designer goes, it will usually invoke what's called the ask user input tool, and I'll show what that looks like. So I screenshotted this while the agent teams were running, because you can't recall it once it's done.
07:07In this case, once the designer came up with an idea, it asked me to review the designer's plan and approve as is, approve with notes, or reject with some rework. And the great part is when you say involve me, you essentially create a human in the loop process yourself, and the agents are really good at actually spinning that up and interrupting their flow.
07:26So in this case, it spins up three agents as we specified, the researcher, slide writer, and designer. And the rule of thumb, by the way, from Anthropic is three to five agents is the sweet spot. Anything beyond that can lead to diminishing returns, over engineering, overthinking, and most importantly, a huge consumption of tokens.
07:46So here you can see the research has finished and notified the slide writer. We see the sequential flow. We see exactly what it's come up with in terms of stats from its research.
07:55We then see the pipeline status. This is the example of the plan from the designer that I was asked to approve. So the color palette, the typography, the slide dimensions, and everything we specified, plus a little bit more.
08:07After approval, they went, and it was actually pretty quick. Typically, for technical tasks, some of these runs can take up to thirty, forty, fifty minutes and 300,000-plus tokens.
08:17This still took 150,000 tokens, but it was very efficient. And then like I said, you can always hover over this URL, click it, open up this pitch deck.
08:26It's not gonna be absolutely beautiful, but it's respectable. So if you go through it, everything is well organized.
08:33Everything looks pretty straightforward. We have our speaker notes at the very bottom like we requested. And you can imagine if you could dictate exactly a certain brand guide or a brand style, then you can make this very business friendly.
08:44And one extra super nugget for you is you can actually install this new extension called Claude for PowerPoint, and the whole point of it is you can open it up, authorize using your existing Claude account, and you can make specific tidbit updates without having to waste your tokens in Claude Code or spin up a brand new agent team.
09:01So you can make any specific or surgical changes here and take it from 80% to a 100%. So this next one will appeal to all the consultants out there. So if you've ever responded to an RFP, a request for proposal, or a tender opportunity for a government contract, you'll know that the tender descriptions are really long, and the amount of work you have to do to actually complete and satisfy them is just as long, if not longer.
09:26So this will take the requirements of these proposals, and then it will go create an agent team where you have an RFP analyst to look at all the different requirements you have to satisfy in your proposal. And then you have the capability researcher, which you could give examples of who works on your team, who has what experience, what case studies do you have for it to draw from to help with the creation of this RFP.
09:49Then they will share their data, and then it will pool everything. Then you'll have writer a, and you'll have writer b. The goal of writer a will be to create an executive summary, talk about technical management, assuming there's a technical component to the RFP, and writer b will have the qualifications, past performance, and pricing.
10:08So they will work together again to cross reference and build the whole proposal. So in this case, instead of having a sequential handoff, we essentially spawn these agents to work in parallel here and then spawn them to work in parallel here.
10:21But the sequence is that this parallel task comes first, then the second parallel task. So for this, the prompt is create an agent team to respond to a request for proposal, then we just provide it access to the URL right here, which corresponds to an actual RFP to create an AI scribe and dictation solution. And you'll see here, if you're not as familiar, you have bidding details, you have eligibility conditions, you have contact information, you have more information about the proposal in general.
10:47And then you give it more information about your particular organization. So you could say, we are a 15 person AI consulting firm specializing in building custom automation workflows for mid market companies. We use tools like Cloud Code, etcetera.
11:01Our average project size is between this and this, and we've completed 40 plus projects. Now naturally, you might wanna feed more information in the form of markdown files so you can really dial in the proposal. Then you spawn the four agents like we specified, the RFP analyst, then the company capability researcher, then the section writer a, and the section writer b, each of them with their own details, and then you basically dictate the flow like we said before.
11:25After both section writers have finished, review all sections for consistent tone and terminology, no contradictions between sections, and every RFP requirement addressed. Then flag any requirements that we didn't address.
11:38In terms of the setup, it's pretty straightforward. As you go down, we see the first two agents are launched. They run in parallel right here, and you can see that the section a and section b writers are blocked until those first two finish.
11:50So once they go through that entire process, this took around 180,000 tokens. I'm just telling you that so you can gauge your limit based on your plan. It comes back with the deliverables that I actually asked for in markdown format, just because I didn't wanna create a docx or PDF just yet.
12:06I want to review it first, just to preserve those tokens. Then you have task five right here, which is the final team lead boss, and you could see the output if you click right here. Then you have each part.
12:18You have the capability matrix of everyone in the company, obviously, hypothetical. Then you have the full proposal that you can review in pure markdown. And assuming it fits your requirements, then you can say, okay.
12:29Cool. Can you go and create a PDF or a .docx out of this? And then it will be able to invoke the skill that comes out of the box from the Anthropic team that can create that file.
12:39So then you have both proposal sections as well, so you can audit each and every deliverable from each and every agent. So this next use case is a spicy one. We're gonna use agent teams to do competitive analysis, comparing Claude Code to four other competitors: Antigravity, Cursor, Codex, and Copilot.
12:58Then we have a synthesis lead whose sole job is to take the independent research from each one of these platforms and bring it all together. So our prompt here is a bit nuanced, and I'll show you how.
13:08So in this case, I say create an agent team to build a competitive intelligence report. I tell it what the target product is. I wanna say Claude Code, the AI powered coding assistant CLI, and these are the competitors to analyze.
13:21Document the following for each platform, latest and greatest info as of 2026. I give it all the criteria, but I want you to notice one nuance between this prompt and the ones before it. In this case, I'm not specifying it to create an agent team with specific agents to my instruction.
13:38I'm allowing it the creative freedom to do it itself. So at the end, all I say is have each analyst share their top three findings with the group before the synthesis begins. Meaning, again, I'm encouraging communication between these agents.
13:52So now it spins it all up. We have the cursor analyst, the copilot analyst, the codex analyst, and the antigravity analyst, and then we have that team lead that does the synthesis.
14:02And notice how it says me. So Claude from the third person perspective is taking on the persona. So it tells you what each analyst will do.
14:11It goes to the process. It does the research, and then it comes back with a synthesis file. Each one comes back with a deliverable.
14:17So each one has a markdown file of the full synthesis, and it goes through the top strategic takeaways. It creates this competitive intel file right here that I can bring up. Each one has analysis on how each and every IDE and platform works, what it brings to the table versus the other ones, and then you could see there's an overall synthesis report where it goes through and compares and contrasts each one of them.
14:42So I'm using products here, but this could be competitors as in other companies, other platforms, other frameworks, whatever you want.
14:50This next one has to be one of my favorite examples, which is the AI advisory board. And the point of this use case is you take a very meaty problem or meaty question or opportunity and you just pose it, and you create a very comprehensive prompt to help you split up that task in a way where different agents can take different perspectives on it to come up with a cohesive analysis.
15:11So in this case, let's say I wanted to launch a $7,500 higher ticket boot camp for more affluent CEOs and VPs, etcetera. Should I launch it or wait?
15:21Could be the overall question or premise. So then you can have a market researcher, a financial modeler, a devil's advocate, a competitive strategist, and an audience analyst all work together to act as the voice of the customer, voice of the consumer, and most importantly, the voice of the market.
15:38So if we go, you'll see that this is a behemoth of a prompt. And we started the same way saying create an agent team to analyze a complex business decision.
15:47I posed the question, should we launch a $7,500 live six week AI leadership boot camp for execs and CEOs who want to manage AI teams and integrate it into their operations. And then I give it context about myself, my agency, my community, the fact that I used to teach boot camps all the time, and then we say spawn five agents to investigate different angles.
16:09I say the market researcher who goes through and analyzes the executive AI education market, the audience gap analyst to investigate the gap between our current audience and the target audience, then the financial modeler, competitive strategist to see exactly who's selling something like that out there, and then the devil's advocate who takes all the analysis and steps in to say, maybe you shouldn't do this at all, or maybe you should do it in a completely different way or at a different price point.
16:36So then this is the key deliverable, and this is where you can get creative. So I say, once consensus or informed disagreement, really good nuance here, emerges, synthesize into a single executive brief with go, no go, or conditional recommendation.
16:53Top three reasons for the recommendation, top three risks regardless of the decision, and suggested next steps. This is where prompt engineering meets agentic workflows in a way where both become really powerful.
17:05So then we tell it exactly where to save it. This one spins them all up. In this case, it can make sense that all of them run in parallel, because they're all taking on mutually exclusive tasks.
17:16So it's not necessarily a sequential handoff. Then as we go down, we get the analysis from each one. We can take a look at all the files they came up with from the audience strategy to the competitive framework.
17:29Obviously, a lot of reading. You might wanna ask it for a TLDR of the TLDR, but it gives you one out of the box.
17:34It tells you conditional go: start with a $2,000 course, then upgrade to $7,500 within four to six months. Then you can go through the risks, the key debates, everything that we used to see before, the revised numbers of what it would look like from a financial standpoint, all the permutations of it.
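The go / no-go / conditional-go synthesis reduces to a simple aggregation over the agents' verdicts. A hypothetical sketch, assuming each advisory agent boils its analysis down to a vote plus any attached conditions (the real team lead writes a prose brief, not a data structure):

```python
def executive_brief(verdicts: dict) -> dict:
    """verdicts: {agent: (vote, [conditions])}, vote in {"go", "no_go"}.

    Majority no_go -> no go; any conditions among the rest -> conditional go;
    otherwise a clean go. Conditions are surfaced so the human sees them.
    """
    votes = [vote for vote, _ in verdicts.values()]
    conditions = [c for _, cs in verdicts.values() for c in cs]
    if votes.count("no_go") > len(votes) / 2:
        return {"recommendation": "no go", "conditions": conditions}
    if conditions:
        return {"recommendation": "conditional go", "conditions": conditions}
    return {"recommendation": "go", "conditions": []}
```

With the bootcamp debate from the video, a devil's-advocate objection plus a "validate demand with a cheaper course first" condition lands on conditional go rather than a clean launch.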
17:53And this runs for a while, and this really helps you if you have a very deep problem you wanna go through and you might not have colleagues of your own to push back on you. This is a great way to get an initial lens from the fresh eyes of different agents. And for our penultimate example, we'll take on a marketing use case.
18:11So let's assume you have a full campaign launch and you are looking to market your new FocusPods Pro.
18:17I know, very original name. You'll need an email marketer to come up with the three-email sequence for the launch. You'll need the social media manager.
18:25You'll need the ad copywriter, and you'll need the landing page creator for the product itself. And then you need some way to have consistency, some fluency between each and every part of this process.
18:36So, again, you have that team lead or that synthesis agent at the end that makes sure that each individual output has some cohesion. So if we go over, we also have a pretty legendary prompt here as well, but notice that most of these are very granular. You don't necessarily have to go to this level.
18:52So even though I go to levels like this where I'm pretty much spoon feeding and dog fooding it exactly what to do, you could just say email marketer. You could just say social media manager. And in this case, I say the product.
19:05I explain exactly what it is. I say that we're creating the agent team to build a complete marketing campaign for the product launch. So notice now we're at this stage.
19:13This is always the goal. So having your objective function or your goal you're optimizing for is the most important part. And then we're just contextualizing it.
19:20We're giving it all the nuance. With the ad copywriter, this is where it's valuable to add some granularity. I'm saying, I don't want you just to create three variations of ad copy in general.
19:30I want you to create variation a where it's problem agitation, so the friction point. Variation b would be the social proof angle, and then variation c, us versus them comparison.
19:40A lot of more social psychology grounded variations, and you can really have fun with this. You could go and say, you know what?
19:46I want variation d to also take on the persona of Edward Bernays, who, if you don't know, kind of invented all those trends, the whole notion of bacon and eggs being a breakfast couple. He was the mastermind behind all those marketing campaigns. So if you added that as an extra lens, now you have the capacity to have one agent take on different lenses without diluting any one of them, since it's so focused on the task at hand.
20:12So after that, this runs as expected. We get all the email sequences. We get the entire marketing campaign as a series of markdown files.
20:21And once again, you can spin these up. If we take a look at the email sequence, for example, you have email number one, the teaser, the subject line. It tells you how long to release it before the launch.
20:31I don't see too many em dashes. I see a pseudo em dash here with two dashes, but you could probably just tell it to avoid that.
20:39Looks decent, but still AI. So you can focus on the copywriting to make it better. I see some italics here.
20:45So it's not completely AI slop, but you could deslopify it with the right instructions. And for the final use case, I'm gonna show you how you can create the 80/20 version of your own OpenClaw that fits exactly what you're looking for in the easiest way possible. I dropped a whole video kind of alluding to this earlier in the week.
21:04Some people loved it. Some people hated it. All good.
21:07I'm still gonna show you how to one shot it very closely with a pretty comprehensive prompt. We're gonna use both sub agents and agent teams to work cohesively together in a way where they complement each other. Sub agents will take on more grunt work.
21:21We'll use explore sub agents to go and take a look at the existing OpenClaw repo to see what it is we want from it. And then the agent team will have the architect, the blueprinter, and you'll have everything else in terms of your core requirements or wish list.
21:35So one in charge of the Telegram setup, one for the skill setup, one creating a version of memory that fits your use case, and one to help you create the CLI experience. And like I promised, this is probably the beefiest prompt of all of them. It's a small essay, like The Hobbit, and it starts off saying: create an agent team to build a personal AI assistant, a better, customized version of OpenClaw built specifically to run my company, Prompt Advisers.
22:04Now I give it the URL of my company for a reason. I wanted to tailor what the build of the personal assistant should be that drives business value based on what I do day to day. So the end goal, a working CLI command line interface.
22:18If you don't know what that is, it's this, right? Where you have OpenClaw, in this case called MarkClaw, and we complete the command by saying we want to connect it to Telegram, understand our business context, pick the right tool for any task, and help me run my company day to day.
22:32And then I say: first, before spawning the agent team, use a sub-agent to clone and analyze the repo. So now we're offloading that task to preserve tokens, just contextualizing exactly what's there. This is where we specify everything for the sub-agent.
22:46We tell it exactly what to do: clone this repo, read through the codebase, and summarize the overall architecture and how components connect, the design patterns used, what technologies and dependencies it relies on, the three best ideas we should steal for our version, and the three things we should do differently.
23:04So with that, the agent team spins up in step two, and we tell it to spawn five teammates to build our custom version: one for the architect, one for the Telegram interface, one for the skill router and tools, one for memory and context, and one for integration and CLI.
23:21So pretty much everything that you would need to put everything together. And then if you go to the very bottom here, we have the dependency chain it walks through.
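The two-step structure Mark describes condenses into a skeleton like the one below. This is a paraphrase of what the video shows on screen, not his exact prompt pack wording, and the angle-bracket placeholders are mine:

```text
Create an agent team to build a personal AI assistant: a better,
customized version of OpenClaw, built specifically to run my company
(<your company URL>). End goal: a working CLI that connects to
Telegram, understands my business context, picks the right tool for
any task, and helps me run my company day to day.

Step 1 - Before spawning the agent team, use a sub-agent to clone and
analyze <OpenClaw repo URL>. Summarize: the overall architecture and
how components connect, design patterns used, technologies and
dependencies, the 3 best ideas we should steal for our version, and
the 3 things we should do differently.

Step 2 - With that summary as context, spawn five teammates:
(1) architect, (2) Telegram interface, (3) skill router and tools,
(4) memory and context, (5) integration and CLI. The architect leads
the dependency chain; the others build from its blueprint.
```

The point of step 1 is token economy: the sub-agent burns its own context reading the repo and hands back only a summary, so the five-agent team starts from a compact brief instead of raw source files.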
23:29We have the architect taking care of everything we need to progress it, and it takes around probably twenty to thirty minutes to go from zero till the very end.
23:41It originally comes up with this name, Prompt Advisor Assistant, which I didn't really like. A little bit blinding on the eyes when you open it up. So then I switched it up to MarkClaw, and that was not using the agent team.
23:52So the agent team shut down at the point where it completed the original version of the command line interface. I just wanted to now individually go back and forth with Claude code to get it to the point where I could look at it and be like, This works.
24:06This is cool. And then, in literally one shot, after some more aesthetic updates, we can add our Telegram token, onboard, and have this up and running in a matter of minutes.
24:16So hopefully this walkthrough shows you the power of agent teams applied to non-technical and pseudo-technical use cases.
24:24So you can start using it everywhere it makes sense to break down heavy problems or heavy tasks. And I know you're probably waiting for me to say it. Yes.
24:32I'm gonna make all the prompts I showed you available to you for free in the second link in the description below. But what I teach here on YouTube is just a sliver of what I go through in my exclusive community, and I wanna do a whole masterclass on setting up and configuring my own version of OpenClaw, MarkClaw, whatever claw you want.
24:50So if you wanna check that out, along with every other resource we have, check out the first link in the description below, and I'll see you inside. And for the rest of you: if you enjoyed this video and it helped illuminate more places where you can practically use agent teams, I'd super appreciate a like and a comment, good or bad, all good, just so the video can get some more recognition in the algo, and I'll see you all in the next one.
§ · For Joe

Non-technical agent teams are the unlock.

Agent team playbook

Six of the seven use cases Mark shows are pure prompt engineering -- no code required -- which means anyone with Claude Code Pro can run these today.

  • Use the AI Advisory Board pattern (Case 5) on any business decision -- JoeFlow pricing, MCN+ positioning, new-product go/no-go.
  • Add a postmortem synthesis agent to every multi-output pipeline you already run (content repurposing, batch recording, Mod Producer).
  • Always specify "create an agent team" explicitly -- "spawn agents" alone can silently fall back to sub-agents with no inter-agent communication.
  • Use the human-in-the-loop interrupt (require plan approval before X) on any workflow where one expensive step follows a cheaper research phase.
  • For complex builds, offload repo-reading or data-gathering to a sub-agent first, then hand off context to the agent team -- saves 30-50K tokens in the main session.
  • Keep teams to 3-5 agents. The 7th use case (MarkClaw) uses 5 -- and even that pushed close to the complexity ceiling.
§ · For You

What one prompt can actually do.

If you want to try it yourself

You do not need to be a developer to use Claude Code agent teams -- six of the seven workflows in this video are purely about telling agents what to research, write, or decide.

  • Start with the content repurposing engine: paste any long piece of writing and ask for a LinkedIn post, newsletter section, and tweet thread -- tell the agents to share their angles before writing so nothing repeats.
  • Try the AI Advisory Board on a real decision: write a prompt that defines the question, gives 4-5 different analysis lenses, and asks for a go/no-go brief with top-3 risks.
  • When you want a checkpoint before an expensive step, add "require plan approval from [agent] before they start building" to your prompt -- the agent will pause and ask you to review.
  • Grab Mark's free prompt pack from the description -- all seven prompts are there ready to paste.
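For the advisory-board tip above, a starting skeleton might look like this. The five lens names come from the video's Case 5; the wording and output spec are a paraphrase, not Mark's actual prompt (his free pack has the real thing):

```text
Create an agent team to act as my AI advisory board for one decision:
<one-sentence question, e.g. "Should I launch product X at price Y?">

Spawn five agents, each with a different lens:
1. Market Researcher - demand, market size, timing
2. Audience Gap Analyst - who this serves, who it misses
3. Financial Modeler - pricing, costs, break-even
4. Competitive Strategist - how rivals would respond
5. Devil's Advocate - the strongest case against going ahead

Have the agents share their top findings with each other before any
conclusions are written. Require plan approval from me before the
final synthesis. Output: a conditional go/no-go brief with the top 3
risks and the conditions that would reverse the recommendation.
```

Swap the lenses for whatever the decision actually needs; the pattern that matters is distinct perspectives, a forced cross-share, and one synthesized verdict.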
§ · Frame Gallery

Visual moments.