What Happens When You Let 6 AI Agents Write Code at the Same Time
A field report from the bleeding edge of AI-assisted development
Steve Yegge released Gas Town on January 1st, 2026. An agent orchestrator for Claude Code. Multiple AI agents working in parallel, coordinated through git-backed task tracking, communicating via an internal mail system. The pitch: stop babysitting one Claude session. Run twenty.
His first rule: don’t use this in its first weeks.
I used it in its first week.
Hey, I’m Lakshmi — I help developers build, deploy, and distribute their SaaS without hiring a team. I also run Stacksweller and Supabyoi.
New here? Start with Why Your AI Wakes Up Every Morning With No Memory or Clean Code Is Dead.
Why I Couldn’t Wait
I work across four projects solo. SaaS products, open source tools, content — the usual indie dev plate-spinning. Every Claude Code session I run is one session I’m not running somewhere else. The promise of parallel agents shipping code while I context-switch between projects was too compelling to resist.
So I installed Gas Town, added my projects as “rigs,” groomed six tasks into “beads,” and spawned six workers simultaneously.
My M2 MacBook responded by becoming a space heater that couldn’t render a terminal.
What Gas Town Actually Is
Before I explain what went wrong, let me translate the concepts. Gas Town uses Mad Max-inspired naming, which is either charming or maddening depending on your patience.
Town — Your workspace root (~/gt/). Think of it as the factory floor where everything lives.
Rig — A project container. Each of your repos becomes a rig inside the town. Not a git clone itself, but a wrapper that manages clones, worktrees, and workers for that project.
Beads — A git-backed issue tracker, also built by Steve. Every task, bug, or feature is a “bead” with a unique ID like supabyoi-9ue. They live in your repo’s .beads/ directory, committed alongside your code. Dependencies between beads create a task graph. I wrote about why this matters — AI agents lose all context when sessions end. Beads solve this by making work persist in git. This is the piece that genuinely works well (there’s a quick shell-level sketch right after this list).
Mayor — The global coordinator agent. You talk to the mayor, the mayor dispatches work. It sits above all rigs and orchestrates across projects.
Polecat — An ephemeral worker agent. Gets spawned with a task, works in its own git worktree, signals completion, gets cleaned up. The grunt labor.
Witness — Per-rig monitor that watches polecats. Detects stuck workers, nudges them, handles cleanup.
Deacon — Town-level watchdog that patrols all rigs. Monitors witnesses, refineries, everything.
Refinery — Per-rig merge queue processor. When a polecat finishes, the refinery handles the PR/merge workflow.
Convoy — Batch tracker for related work. Group six beads into a convoy, dispatch them, track progress as a unit.
Molecules — Reusable workflow templates. Formula defines the pattern, molecule is the running instance.
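To ground the beads idea before the jargon piles up, here’s roughly what it looks like from the shell. The .beads/ directory and the supabyoi-9ue ID come from above; the bd subcommands are a sketch, and their exact names and flags may differ in your version.
ls .beads/ # the task database is just files, committed with your code
git log --oneline -- .beads/ # task history travels with the repo, so it survives restarts
bd ready # sketch: list beads whose dependencies are done (flags may vary)
bd show supabyoi-9ue # sketch: inspect one bead and its dependency edges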
That’s ten concepts before you write a line of code. Steve’s mental model is a steam engine: agents are pistons, work flows through hooks, everything runs on the “Propulsion Principle” — if you find work on your hook, you execute immediately.
The architecture borrows from Erlang’s supervisor trees (I think) — a pattern from telecom systems where processes are organized in a hierarchy. Each parent monitors its children: if a child crashes, the parent restarts it. In Gas Town, the Deacon watches Witnesses, Witnesses watch Polecats, and failures cascade upward. This is a proven pattern that runs phone switches serving millions of calls. The catch: Erlang processes are lightweight (microseconds to spawn, kilobytes of memory). Claude Code sessions are heavy (seconds to spawn, gigabytes of memory). When each “process” is a full AI session burning tokens, the economics of cheap failure recovery invert.
What Actually Happened
Week One: The Learning Curve
The first session was pure orientation. I needed Claude to explain Gas Town to me while inside Gas Town. The cognitive overhead of mapping “polecat” to “worker” and “rig” to “project” consumed real mental energy that should have gone to actual work.
The 80/20 path is supposed to be:
gt up # Boot everything
gt mayor attach # Talk to the mayor

In practice, gt up failed because the bd (beads daemon) version check timed out. This led me down a rabbit hole patching the version comparison in Go — changing time.Equal() to time.Unix() because JSON serialization was losing nanosecond precision. I was debugging the orchestrator instead of using it.
Week Two: Six Polecats and a Space Heater
Once things stabilized, I got ambitious. Six beads groomed, six polecats spawned:
gt sling supabyoi-9ue supabyoi
gt sling supabyoi-abc supabyoi
# ... four more

Each polecat is a full Claude Code session in its own tmux pane with its own git worktree. Six of those plus a mayor, witnesses, refineries, deacons, and multiple bd daemons meant my M2 was running 20+ processes competing for resources.
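If you want to see what that fleet looks like on your own machine, plain git, tmux, and ps are enough. A minimal sketch (the claude process name is an assumption; match whatever your ps output actually shows):
git worktree list # one worktree per polecat, plus the main checkout
tmux list-panes -a # one pane per running agent session
ps aux | grep -c "[c]laude" # rough count of live Claude Code processes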
The system didn’t crash. It degraded. Commands took minutes to respond. Shell execution broke mid-session. I had to kill processes manually and nuke the setup.
But here’s the thing — that wasn’t entirely Gas Town’s fault. Six concurrent Claude sessions will hammer any laptop. The real issue was that Gas Town spawned orphaned daemon processes that accumulated across restarts. I found six bd daemons running simultaneously, plus stuck bd mol burn processes from days ago that were never cleaned up.
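Cleaning that up by hand comes down to something like this. The match patterns are illustrative, not exact command lines; check the pgrep output before you pkill anything.
pgrep -fl bd # list every process with "bd" in its command line
pgrep -fl "bd mol burn" # look for the stuck molecule runs here
pkill -f "bd mol burn" # kill them once you've confirmed the matches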
The Doctor Loop
Gas Town has a gt doctor command — a health check that reports errors and warnings. I ran it constantly.
First run: 1 error, 11 warnings. After gt doctor --fix: 4 fixed, 7 remaining. After restart: new errors. After bd daemon restart: timeout errors. After updating bd from v0.47.0 to v0.47.2: different errors.
Each fix revealed the next problem. The mayor’s CLAUDE.md was 280 lines (should be under 30). Environment variables from dead sessions broke prefix routing. Beads databases pointed to wrong paths. Symlinks needed codesigning to avoid macOS killing the binary.
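The codesigning one is worth a concrete note, since it looks scary but the fix is small. A sketch with an illustrative path; point it at wherever your bd binary actually lives:
codesign --force --sign - "$HOME/go/bin/bd" # ad-hoc sign the binary so macOS stops killing it
codesign --verify --verbose "$HOME/go/bin/bd" # confirm the signature took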
It felt less like using a tool and more like being a system administrator for a tool.
What Genuinely Works
Beads: The git-backed issue tracker is solid. Creating tasks, tracking dependencies, finding ready work — this layer does its job. It survived every crash and restart because it’s just files in git. Steve built Beads as a standalone tool before Gas Town, and it’s the strongest foundation in the stack.
Worktree isolation: Each worker gets its own git worktree. No merge conflicts between parallel work. Clean separation. This is the right primitive (there’s a short sketch at the end of this section).
The hub/worker model: Having a coordinator dispatch tasks to isolated workers is correct. The mental model of “groom beads, dispatch to workers, merge results” is sound.
gt doctor: Despite the loop, having a comprehensive health check that can auto-fix common issues is genuinely useful infrastructure.
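On the worktree point, the primitive itself is tiny, which is exactly why it’s the right one. A minimal sketch, with branch and directory names made up for illustration:
git worktree add -b agent/supabyoi-9ue ../polecat-supabyoi-9ue # isolated checkout for one task
git worktree list # each worker gets its own branch and directory
git worktree remove ../polecat-supabyoi-9ue # clean up after the work is merged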
What Doesn’t Work Yet
Daemon management: Orphaned processes are the #1 pain. bd daemons accumulate, stuck processes never clean up, version checks time out. This is being fixed — v0.5.0 added process group killing — but it was brutal in weeks one and two.
The naming: I’m not trying to be uncharitable. But “polecat” adds zero information over “worker.” “Molecule” adds confusion over “workflow.” Every conversation about Gas Town requires a glossary. When a Hacker News commenter pointed out the irony — Steve Yegge wrote “Execution in the Kingdom of Nouns” mocking over-abstraction — it stung because it’s accurate.
Human as dispatcher: This is the core limitation. Despite all the automation, the mayor waits for you. Issue #694 on GitHub tracks exactly this: “Mayor lacks automated dispatch patrol molecule.” Community members built external cron scripts to poke the system. That tells you everything.
Cost and resource usage: Multiple reports of $100/hour token burn rates. DoltHub’s field test found none of the PRs were good enough to merge. The economics only work if the agents produce mergeable code reliably.
What I’m Building Instead
Gas Town taught me what I need. It also taught me what I don’t.
I wrote about why I’m building my own agent orchestrator. It’s called wt. The core idea: keep the infrastructure that works (beads, worktrees, tmux), strip the ceremony that doesn’t (polecats, molecules, deacons, refineries).
Where Gas Town has ten concepts, wt has three: hub, worker, task. That’s it.
The hub coordinates. Workers execute in isolated worktrees. Tasks are beads with dependencies. No mail system, no witness layer, no convoy abstraction. If a worker finishes, the hub sees it in the dashboard. If a worker gets stuck, you look at the terminal. No intermediate monitoring agent needed.
The key difference: wt is a pluggable orchestrator. Each project gets its own config — yolo mode for prototypes (no tests, auto-merge, maximum speed), strict mode for production code (tests required, PR review, quality gates), or anything in between. Gas Town is one-size-fits-all. Real projects aren’t.
It’s early. But two weeks of wrestling Gas Town gave me the blueprint for what comes next.
What I’m Taking Away
Gas Town is a research prototype that got released into the wild. Steve warned people. I didn’t listen.
But I don’t regret it. Two weeks of wrestling gave me clarity about what agent orchestration actually needs:
Beads (or equivalent) is non-negotiable. Git-backed task tracking with dependencies is the foundation. Without it, agents have no memory across sessions.
Worktree isolation is the right primitive. One agent, one worktree, no conflicts. Simple and correct.
The hub/worker model works — if the hub is smart. The dispatcher problem is the real unsolved challenge. Manual dispatch defeats the purpose.
Simplicity beats power. Three concepts (hub, worker, task) cover 90% of the use cases. Ten concepts with Mad Max names cover 95% but cost you 5x the cognitive overhead.
Your laptop has limits. Two to three concurrent workers is practical on a MacBook. Six is aspirational. Twenty is a data center problem.
Steve Yegge is doing genuinely new work here. Nobody else has shipped a multi-agent orchestrator for Claude Code with this level of ambition. The HN comment that stuck with me: “Gas Town is cackling mad laughter from someone both insane and prescient simultaneously. Today it’s insane. But expect serious versions in the future informed by these early experiments.”
I broke the first rule. I’d do it again. Just maybe with fewer polecats next time.

