The Next Chapter

The first series — Building with AI — documented what it looks like to develop software with AI agents: the workflows, the failures, the calibration. This series documents what comes next.
The shift: In Series 1, I was a developer using AI tools. In this series, I’m a founder running an AI development organization. The code still gets written. I’m just not writing it.

What This Series Covers

An autonomous development organization where:
  • Requirements written in Markdown auto-generate tasks as GitHub Issues
  • An executor agent (running on cron) picks tasks and implements them using Claude CLI
  • A verifier agent independently runs tests and reviews PRs against the original requirements
  • Rejected PRs get reopened as tasks — the executor retries with the verifier’s feedback
  • Completed requirements close automatically and notify you via Telegram
No human in the implementation loop.
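The lifecycle above can be sketched as a small state machine. Everything in this sketch is illustrative, not the project's actual code: the real executor shells out to the Claude CLI and opens a PR, and the real verifier runs the test suite against the original requirement.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """Illustrative stand-in for a GitHub Issue generated from a requirement."""
    title: str
    state: str = "open"          # open -> in_review -> closed
    attempts: int = 0
    feedback: list = field(default_factory=list)

def executor(task: Task) -> str:
    """Pick an open task and 'implement' it.

    A real executor would run on cron and shell out to the Claude CLI,
    then open a PR; here we just return a fake PR id.
    """
    task.attempts += 1
    task.state = "in_review"
    return f"pr-{task.attempts}"

def verifier(task: Task, pr: str) -> bool:
    """Independently review the PR against the requirement.

    Stand-in policy: reject the first attempt, accept once the executor
    has had feedback to retry with.
    """
    return bool(task.feedback)

def run_loop(task: Task, max_attempts: int = 3) -> Task:
    while task.state != "closed" and task.attempts < max_attempts:
        pr = executor(task)
        if verifier(task, pr):
            task.state = "closed"    # would also notify via Telegram
        else:
            task.feedback.append(f"{pr}: verification failed")
            task.state = "open"      # reopened with the verifier's feedback
    return task
```

The point of the shape, not the stubs: the executor and verifier never talk directly. They coordinate only through task state and feedback, which is what lets each one stay small and independently replaceable.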

Who This Is For

Engineers

Concrete architecture, real code, and a step-by-step replication guide. The executor and verifier are each under 500 lines of Python.

Founders & VCs

What this means for the unit economics of early-stage software. How the founder’s bottleneck shifts from implementation to decision-making.

The Series

Episode 1: The Orchestration Problem — Why One AI Isn’t Enough

The gap between “AI helps you code” and “AI builds without you” is an engineering problem. Here’s what attempting to close it taught us — and what we built instead.

Episode 2: Memory That Survives the Session

The loop was closing tasks. But every session started blank. Here’s how structured GitHub Issue beads and an MCP retrieval tool gave our agents memory without adding infrastructure.

Episode 3: The Agent That Couldn’t See What It Was Breaking

As the loop took on larger tasks, the reactive compile-and-fix loop became a trust problem. Here’s the impact graph architecture — Tree-sitter, KuzuDB, MCP — that fixes it.