This article is part of the “Building with AI” series documenting my journey using multi-agent AI workflows to build production systems. All examples are from personal projects and do not represent employer technologies.

The Challenge

By December 2025, my platform had 20+ Rust crates, each one built by asking AI to “implement feature X.” The result? Architectural inconsistency everywhere. Some crates used capsule isolation. Some didn’t. Some followed naming conventions. Some invented their own.

  • The Problem: How do you maintain architectural consistency when AI agents build your code?
  • AI Focus: Can architecture documentation serve as enforceable constraints for AI implementation?
  • System Example: Architecture Decision Records (ADRs) that AI reads, understands, and automatically complies with during code generation.

The Turning Point

I discovered ADRs aren’t just decision logs—they’re architectural contracts that AI can enforce. Here’s what changed: Instead of telling AI “build it this way,” I wrote ADR-0010 (Capsule Isolation) defining table naming, partition key patterns, and scope rules. Then I told AI: “Implement this feature. Follow ADR-0010.” The result? AI generated code that perfectly followed the isolation patterns because it could parse the ADR’s structured constraints.
The AI read ADR-0010 and extracted 4 enforceable rules:
  1. Table names must use capsule prefix: {CAPSULE_CODE}_{table}
  2. Partition keys must include capsule: TENANT#...#CAPSULE#...
  3. EventEnvelope.capsule_id is required (not Optional)
  4. All GSI patterns include capsule boundary
AI then generated code complying with all 4 rules without human reminders.
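
To make that concrete, here is a minimal Rust sketch of what rule-compliant output looks like. It is illustrative only: `EventEnvelope` and `capsule_id` come from ADR-0010, while the other field and function names are my assumptions, not the platform’s actual code.

```rust
// Sketch of ADR-0010-compliant shapes; field and function names are assumptions.

/// Rule 3: capsule_id is a required String, never Option<String>.
pub struct EventEnvelope {
    pub event_id: String,
    pub tenant_id: String,
    pub capsule_id: String, // required, not Optional
}

/// Rule 1: capsule-prefixed table name, e.g. "PRODUS_crm".
pub fn table_name(capsule_code: &str, table: &str) -> String {
    format!("{capsule_code}_{table}")
}

/// Rule 2: partition keys always carry the capsule boundary,
/// e.g. "TENANT#t-123#CAPSULE#PRODUS#ENTITY#lead-42".
pub fn partition_key(tenant_id: &str, capsule_code: &str, entity_id: &str) -> String {
    format!("TENANT#{tenant_id}#CAPSULE#{capsule_code}#ENTITY#{entity_id}")
}
```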

How ADRs Guide AI

Traditional ADRs document decisions. AI-friendly ADRs define constraints in structured formats.

Before (Decision Log):

```markdown
We will use DynamoDB single-table design.

## Rationale

Reduces operational overhead.
```

This tells humans what we decided. It doesn’t tell AI how to implement it.

After (Constraint Definition):

```markdown
### 2. Table Naming Convention

Capsule-level tables use `{CAPSULE_CODE}_` prefix:

    {CAPSULE_CODE}_{table_name}

Examples:
  PRODUS_crm      (Production US)
  DEVUS_crm       (Development US)
  STGEU_events    (Staging EU)

Rationale:
- Clear visual distinction in AWS console
- Physical isolation per capsule
- Aligns with existing Infrastructure Principles §2
```

See the difference? The second format gives AI:
  • Exact naming patterns to follow
  • Code examples to reference
  • Validation rules to enforce
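
As an example of that last point, here is a hedged sketch of the kind of validation rule a verifier or CI step could derive from this ADR section. The naming convention is the ADR’s; the function name and error handling are assumptions.

```rust
/// Hypothetical check derived from the ADR's naming convention:
/// a table name must look like `{CAPSULE_CODE}_{table_name}`,
/// e.g. "PRODUS_crm" or "STGEU_events".
pub fn validate_table_name(name: &str) -> Result<(), String> {
    let Some((prefix, table)) = name.split_once('_') else {
        return Err(format!("`{name}` is missing the `{{CAPSULE_CODE}}_` prefix"));
    };
    if prefix.is_empty() || !prefix.chars().all(|c| c.is_ascii_uppercase() || c.is_ascii_digit()) {
        return Err(format!("capsule code `{prefix}` must be upper-case, e.g. PRODUS"));
    }
    if table.is_empty() {
        return Err(format!("`{name}` has no table name after the capsule prefix"));
    }
    Ok(())
}
```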

The ADR Workflow

Here’s how ADRs integrate with multi-agent development:

1. Problem Detection

```
Human: "AI, why is dev data appearing in prod?"
AI Evaluator: "Analyzing... CRM entities lack capsule_id in partition keys."
```

2. ADR Creation (Human)

```markdown
# ADR-0010: Capsule Isolation Enforcement

Problem: Products like CRM are tenant-scoped only. A lead created
in dev capsule appears in prod. This is fundamentally broken.

Decision: Enforce capsule isolation with these patterns:
- Table naming: {CAPSULE_CODE}_{table_name}
- Partition keys: TENANT#...#CAPSULE#...#ENTITY#...
- EventEnvelope.capsule_id: Required (not Optional)
```

3. Implementation (AI Builder)

```
Human: "Refactor CRM to comply with ADR-0010."
AI Builder: "Updating 48 files, adding capsule_id to all entities..."
```

4. Verification (AI Verifier)

```
AI Verifier: "Checking ADR-0010 compliance..."
- Table names: PASS (all use PRODUS_ prefix)
- Partition keys: PASS (all include CAPSULE#)
- EventEnvelope: PASS (capsule_id is String, not Option)
```
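
Step 4 doesn’t have to stay a chat transcript; the same checks can be captured as ordinary tests. A minimal sketch, assuming the hypothetical `table_name`, `partition_key`, and `validate_table_name` helpers from the earlier snippets:

```rust
// Illustrative only: ADR-0010 compliance checks as unit tests,
// reusing the hypothetical helpers sketched above. Sample IDs are made up.
#[cfg(test)]
mod adr_0010_compliance {
    use super::*;

    #[test]
    fn table_names_use_capsule_prefix() {
        assert_eq!(table_name("PRODUS", "crm"), "PRODUS_crm");
        assert!(validate_table_name("PRODUS_crm").is_ok());
        assert!(validate_table_name("crm").is_err()); // missing prefix must fail
    }

    #[test]
    fn partition_keys_include_capsule_boundary() {
        let pk = partition_key("t-123", "PRODUS", "lead-42");
        assert!(pk.starts_with("TENANT#") && pk.contains("#CAPSULE#"));
    }
}
```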

What Went Wrong

Mistake: Initially wrote ADRs like decision logs: “We decided to do X because Y.”
Why it failed: AI couldn’t extract actionable constraints from prose. It would comply with the spirit but miss specific patterns.
How we fixed it: Reformatted ADRs with explicit Decision sections using code blocks, tables, and examples. Added a Consequences section with migration checklists.
Lesson: ADRs for AI need structured constraints, not narratives. Use code blocks, tables, and bullet lists—not paragraphs.

The 4 ADRs That Changed Everything

Between Dec 31, 2025 and Jan 30, 2026, we wrote 4 critical ADRs:

Problem: Dev data leaking to production environments.
Constraint: All capsule-scoped entities must:
  • Use capsule-prefixed table names
  • Include capsule in partition keys
  • Require capsule_id (not optional)
Impact: 48 files changed in the CRM refactor, zero cross-environment data leaks.

Problem: Deleting a parent entity leaves orphaned children.
Constraint: Use the SagaStep macro for multi-entity operations with compensation logic.
Impact: Enabled complex workflows like account merges with automatic rollback.

Problem: 16 AWS SDKs used inconsistently, no scope enforcement.
Constraint: 4 client types (Platform, Tenant, Capsule, Operator) with mandatory scope parameters (sketched below).
Impact: Eliminated 600 lines of boilerplate per crate, enforced isolation at the SDK level.

Problem: Inconsistent test coverage across crates.
Constraint: 4-level test pyramid (Unit, Integration, E2E, Contract) with coverage targets.
Impact: Increased platform test coverage from 60% to 85%.
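
For the scoped-client ADR, the core idea is that scope is a mandatory constructor argument rather than an optional setting, so isolation is enforced at compile time. A rough Rust sketch under that assumption; the type and method names are mine, not the platform’s:

```rust
// Hypothetical shapes for the four client scopes; all names are assumptions.
pub struct PlatformClient;                                        // platform-wide, no extra scope
pub struct TenantClient   { tenant_id: String }
pub struct CapsuleClient  { tenant_id: String, capsule_code: String }
pub struct OperatorClient { operator_id: String }

impl CapsuleClient {
    /// Scope is required at construction time; callers cannot forget it.
    pub fn new(tenant_id: impl Into<String>, capsule_code: impl Into<String>) -> Self {
        Self { tenant_id: tenant_id.into(), capsule_code: capsule_code.into() }
    }

    /// Every table access goes through the capsule scope baked into the client.
    pub fn scoped_table(&self, table: &str) -> String {
        format!("{}_{table}", self.capsule_code)
    }
}
```

In this sketch, `CapsuleClient::new("t-123", "PRODUS").scoped_table("crm")` yields `PRODUS_crm`, which is how an SDK layer could guarantee the table-naming rule from ADR-0010 without per-crate boilerplate.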

Key Learnings

AI Strength

AI excels at following structured constraints. Give it a table of partition key patterns, and every entity it creates will match exactly. No drift, no variance.

AI Weakness

AI can’t write good ADRs. It doesn’t know your pain points or operational constraints. ADR authorship must be human-driven.

Human Role

Humans write ADRs capturing architectural constraints and operational learnings. AI implements features following those constraints automatically.

Process Insight

The best process is: Human writes ADR → AI reads ADR → AI generates code → Human verifies ADR compliance. This scales to any number of AI agents.

Actionable Takeaways

If you’re building with AI assistants:
  1. Write ADRs before coding - Define your constraints as ADRs before asking AI to implement features. ADRs become the “specification” AI codes against.
  2. Use structured formats - Code blocks, tables, bullet lists. Avoid prose. AI parses structure better than narratives.
  3. Reference ADRs in commits - We enforce ADR-XXXX references in commit messages via pre-commit hooks (a sketch of such a check follows below). This creates an audit trail.
Pro tip: Start every AI implementation request with “Follow ADR-XXXX.” This primes the AI to check compliance before generating code. Our ADR compliance went from 40% to 100% with this one change.
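
For reference, here is one way the commit-message check from takeaway 3 could look. This is a sketch only: the real hook isn’t shown in this article, so the small Rust helper below, its name, and the exact `ADR-XXXX` matching rule are assumptions. Git invokes commit-msg hooks with the path to the commit message file, which is what the sketch reads.

```rust
// Hypothetical commit-message checker a pre-commit / commit-msg hook could run.
use std::{env, fs, process};

fn main() {
    // Git passes the commit message file path as the first argument.
    let path = env::args().nth(1).expect("usage: check-adr <commit-msg-file>");
    let msg = fs::read_to_string(&path).expect("cannot read commit message");

    // Accept "ADR-" followed by exactly four digits, e.g. ADR-0010.
    let has_adr_ref = msg
        .split(|c: char| !c.is_ascii_alphanumeric() && c != '-')
        .any(|tok| {
            tok.strip_prefix("ADR-")
                .map(|n| n.len() == 4 && n.chars().all(|c| c.is_ascii_digit()))
                .unwrap_or(false)
        });

    if !has_adr_ref {
        eprintln!("commit message must reference an ADR, e.g. `ADR-0010`");
        process::exit(1);
    }
}
```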

Metrics

  • ADRs written: 4 major architectural decisions
  • Commits referencing ADRs: 67 in 30 days
  • ADR compliance rate: 100% (after pre-commit hooks)
  • Architectural drift incidents: 0

The ADR Template We Use

```markdown
# ADR-XXXX: [Title]

**Status:** Proposed | Accepted | Deprecated
**Date:** YYYY-MM-DD
**Authors:** [Name]
**Reviewers:** [AI Agents]

## Context
[What problem are we solving? Include metrics if available.]

## Decision
[The constraint(s) we're enforcing. Use code blocks and tables.]

### 1. [Constraint Name]
[Exact pattern with examples]

### 2. [Constraint Name]
[Exact pattern with examples]

## Consequences

### Positive
- [Benefit 1]
- [Benefit 2]

### Negative
- [Drawback 1]
- [Migration effort]

### Migration Strategy
1. [Step 1]
2. [Step 2]

## Related Decisions
- ADR-YYYY
- ADR-ZZZZ
```


Next in This Series

Week 6 (Configuration Governance): How we used ADR-driven middleware to eliminate 100% of manual configuration lookups, via the middleware pattern that made configuration hierarchical and automatic.

Discussion

Share Your Experience

Do you use ADRs? How do you keep AI implementations consistent with architecture? Connect on LinkedIn or comment on the YouTube Short.

Disclaimer: This content represents my personal learning journey using AI for a personal project. It does not represent my employer’s views, technologies, or approaches. All code examples are generic patterns or pseudocode for educational purposes.