The Challenge: Build a CRM Domain Layer

Build the entire CRM foundation for a multi-tenant platform. The scope:
  • 7 core domain models (Account, Contact, Lead, Opportunity, Activity, Product, Address)
  • Event sourcing integration
  • DynamoDB repository pattern
  • Financial configuration system
  • ISO reference data (countries, currencies)
  • Full API layer with permissions
  • Integration tests with LocalStack

The Journey in Detail

Phase 1: Planning

I started with a single GitHub issue: “Initialize CRM crate with domain models”. Instead of diving into code, I opened an Evaluator (Opus) session:
Planning session for CRM crate initialization.

Context:
- Multi-tenant SaaS platform
- Need CRM domain models (Account, Contact, Lead, Opportunity, etc.)
- Must integrate with existing event sourcing infrastructure
- DynamoDB single-table design

Explore the codebase to understand:
1. Existing domain model patterns
2. Event sourcing conventions
3. DynamoDB entity patterns
4. Repository trait patterns

Then design the CRM crate architecture with:
- Domain model structure
- Event integration strategy
- DynamoDB schema design
- Testing approach
Evaluator’s exploration:
  1. Read existing domain models from the auth crate
  2. Grep for event sourcing patterns in the kernel crate
  3. Analyze DynamoDB entity implementations
  4. Review repository trait patterns
Output: A 12-page plan document that included:

Decision 1: Single-Table Design

  • Use one DynamoDB table for all CRM entities
  • PK pattern: TENANT#{tenant_id}#ACCOUNT#{account_id} (see the sketch below)
  • GSI patterns for cross-entity queries
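A minimal sketch of those key builders (function names are my own illustration, not the real crate’s API):

fn account_pk(tenant_id: &str, account_id: &str) -> String {
    format!("TENANT#{tenant_id}#ACCOUNT#{account_id}")
}

// Hypothetical GSI key for "all accounts in a tenant" queries.
fn accounts_gsi_pk(tenant_id: &str) -> String {
    format!("TENANT#{tenant_id}#ACCOUNTS")
}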

Decision 2: Event Sourcing Integration

  • Each domain model emits events via the EventStore trait (sketched below)
  • Events follow naming convention: {Entity}{Action} (e.g., AccountCreated)
  • Repository pattern wraps both state storage and event publishing
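An illustrative shape for that integration (the real EventStore trait lives in the kernel crate; these signatures are my sketch, not its actual API):

use async_trait::async_trait;

pub struct EventId(pub String);

// Events follow the {Entity}{Action} naming convention.
pub enum CrmEvent {
    AccountCreated { account_id: String },
    ContactAdded { contact_id: String },
}

#[async_trait]
pub trait EventStore: Send + Sync {
    // Append a domain event and return its identifier.
    async fn append(&self, event: CrmEvent) -> anyhow::Result<EventId>;
}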

Decision 3: Macro-Driven DynamoDB Entities

  • Create #[derive(DynamoDbEntity)] macro
  • Auto-generate partition key, sort key, GSI attributes
  • Eliminate boilerplate across 7 domain models

Decision 4: Four-Level Testing

  • L1: Domain model validation (unit tests)
  • L2: Repository CRUD (LocalStack integration tests; example sketch after this list)
  • L3: Event publishing flow (EventBridge → SQS verification)
  • L4: End-to-end CRM workflows
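For flavor, here is what an L2 test might look like. This is a sketch assuming the aws-sdk-dynamodb crate and LocalStack’s default endpoint (plus dummy AWS credentials in the environment, which LocalStack accepts), not the project’s actual harness:

use aws_sdk_dynamodb::{types::AttributeValue, Client};

#[tokio::test]
async fn account_entity_round_trips() {
    // Point the SDK at LocalStack instead of real AWS.
    let config = aws_config::defaults(aws_config::BehaviorVersion::latest())
        .endpoint_url("http://localhost:4566")
        .load()
        .await;
    let client = Client::new(&config);

    // Write one entity using the single-table key scheme from Decision 1.
    client
        .put_item()
        .table_name("platform_data")
        .item("PK", AttributeValue::S("TENANT#t1#ACCOUNT#a1".into()))
        .item("SK", AttributeValue::S("METADATA".into()))
        .send()
        .await
        .expect("put_item against LocalStack should succeed");
}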
Human approval: “Proceed with this plan. Start with the DynamoDB macro first.”

Phase 2: Implementation

Instead of one massive implementation session, I broke it into 7 sub-tasks:
  1. Create DynamoDbEntity derive macro
  2. Implement Account domain model
  3. Implement Contact domain model
  4. Implement Lead domain model
  5. Implement Opportunity domain model
  6. Implement Activity domain model
  7. Add API layer with routes
Each sub-task followed the same pattern: Plan (brief) → Implement → Verify → Fix → Next

Sub-Task Example: DynamoDB Macro

Builder session (Sonnet):
Implement DynamoDbEntity derive macro per plan:
.plans/275-crm-initialization.md - Section 3.2

Requirements:
1. Parse struct fields to identify PK, SK, GSI keys
2. Generate impl blocks for DynamoDB attribute conversion
3. Support #[pii] attribute for sensitive fields
4. Follow existing macro patterns from eva-api-macros crate

Create comprehensive tests showing all attribute combinations.
Builder’s work:
  • Created eva-dynamodb-derive crate
  • Implemented proc macro with syn and quote (skeleton sketched below)
  • Generated 15 unit tests covering various attribute patterns
  • All tests passing locally
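For context, a minimal skeleton of such a derive with syn and quote (a sketch only; the real eva-dynamodb-derive also parses the #[pk]/#[sk]/GSI attributes and generates the conversion impls):

use proc_macro::TokenStream;
use quote::quote;
use syn::{parse_macro_input, DeriveInput};

// Registers the helper attributes the struct is allowed to carry.
#[proc_macro_derive(DynamoDbEntity, attributes(pii, pk, sk, table_name))]
pub fn derive_dynamodb_entity(input: TokenStream) -> TokenStream {
    let input = parse_macro_input!(input as DeriveInput);
    let name = &input.ident;

    // A real implementation would inspect input.attrs and the struct
    // fields here; this skeleton just emits a marker impl.
    let expanded = quote! {
        impl #name {
            pub fn entity_type() -> &'static str {
                stringify!(#name)
            }
        }
    };
    expanded.into()
}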
Verification time. I opened a fresh Verifier session (critical - not reusing Builder’s session):
Verify DynamoDB macro implementation for issue #284.

Read:
- Plan: .plans/275-crm-initialization.md - Section 3.2
- Implementation: git diff main...HEAD
- Tests: eva-dynamodb-derive/tests/

Check:
- Does macro handle all PK/SK/GSI patterns from plan?
- Are #[pii] attributes properly propagated?
- Edge cases: optional fields, nested structs, enums?
- Test coverage adequate?
Verifier found 2 issues:

Issue #1: Missing GSI projection

Builder implemented GSI key generation but forgot to add the projection type (KEYS_ONLY vs ALL).

Why Builder missed this: The plan mentioned projection types in a footnote; Builder focused on the main requirements.

Impact if shipped: DynamoDB queries would fail at runtime with “projection type not specified” errors.
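A sketch of the missing piece, with illustrative names rather than the macro’s real internals:

// Per-GSI metadata, including the field Builder originally omitted.
pub enum ProjectionType {
    KeysOnly, // DynamoDB KEYS_ONLY
    All,      // DynamoDB ALL
}

pub struct GsiSpec {
    pub index_name: &'static str,
    pub partition_key: &'static str,
    pub sort_key: Option<&'static str>,
    pub projection: ProjectionType, // ← the forgotten projection type
}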

Issue #2: PII attribute not enforced

The macro allowed #[pii] on non-String fields (e.g., #[pii] amount: i64).

Why Builder missed this: Tests only covered valid cases. No negative tests.

Impact if shipped: Silent failures in PII encryption (which expects String), leading to unencrypted sensitive data.
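The fix for the testing gap is a compile-fail harness. A sketch assuming the trybuild crate (the post doesn’t name the tool actually used):

// tests/compile_fail.rs - assert that misusing #[pii] fails to compile.
#[test]
fn pii_rejects_non_string_fields() {
    let cases = trybuild::TestCases::new();
    // tests/ui/pii_on_integer.rs would contain `#[pii] amount: i64`
    // and is expected to produce a compile error.
    cases.compile_fail("tests/ui/pii_on_integer.rs");
}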

Builder fixed both issues

Re-verification: PASSED ✅ Merged to main. This pattern - finding issues in verification that Builder missed - repeated across all 7 sub-tasks.

The Pattern That Emerged

After several iterations, I noticed something fascinating: Verifier caught different types of bugs than Builder’s own tests.
What Builder tested:
  • Happy path functionality
  • Basic edge cases (empty strings, null values)
  • Compilation and type safety
Success rate: ~80% of requirements covered
Critical insight: Builder optimizes for “make tests pass.” Verifier checks “does this meet requirements?” These are fundamentally different evaluation criteria, and using the same AI session for both introduces bias.

Phase 3: Integration

With all 7 domain models implemented and verified individually, I assumed integration would be smooth. Wrong.

Integration task: Wire all CRM entities to the API layer with permission-based routes.

Builder session (Sonnet):
Add API routes for all CRM entities per plan section 5.

Use existing #[eva_api] macro pattern from eva-auth crate.
Wire to handlers with proper permission checks.
Builder’s implementation:
  • Created API handlers for 7 entities (35 routes total)
  • Applied #[eva_api] macro to each route
  • Wired to DynamoDB repositories
  • Tests passing
Verification session:
Verify API integration for CRM entities.

Check:
- All routes follow RESTful conventions?
- Permissions match authorization design?
- Error handling consistent?
- OpenAPI specs generated correctly?
Verifier found a systematic issue:

Cross-Entity Permission Inconsistency

Builder used different permission patterns for different entities:
  • Account routes: "crm.account.read"
  • Contact routes: "crm:contact:read" (wrong delimiter)
  • Lead routes: "crm-lead-read" (completely different pattern)
Why Builder did this: No single source of truth for permission naming. Builder inferred from incomplete examples.

Impact if shipped: Authorization middleware would fail to match permissions, blocking all non-Account CRM operations.
This is where the workflow really shone. Instead of fixing permissions manually, I asked Builder:
Create type-safe permission constants in eva-api-common crate.

Follow pattern:
pub const CRM_ACCOUNT_READ: &str = "crm.account.read";
pub const CRM_ACCOUNT_WRITE: &str = "crm.account.write";
...

Then refactor all routes to use constants instead of strings.
Builder’s fix:
  • Created eva-api-common/src/permissions.rs with 45 typed constants
  • Refactored all 35 routes to use constants
  • Added compile-time enforcement (routes won’t compile without valid permission constant)
Re-verification: PASSED ✅ But wait - there’s more. Verifier then said:
“Routes now use type-safe constants, but the authorization middleware still accepts arbitrary strings. Recommend: update middleware to only accept permission constants.”
This is where AI really impressed me. Verifier didn’t just check requirements - it suggested architectural improvements.
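One way to realize that suggestion (my sketch, not the actual eva-api-common code) is a newtype with a private field, so the middleware can only receive values built from the published constants:

// Only this module can construct a Permission, so arbitrary strings
// can no longer reach the authorization middleware.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct Permission(&'static str);

pub const CRM_ACCOUNT_READ: Permission = Permission("crm.account.read");
pub const CRM_ACCOUNT_WRITE: Permission = Permission("crm.account.write");

// The middleware check now takes the typed value, not a &str.
pub fn is_granted(granted: &[Permission], required: Permission) -> bool {
    granted.contains(&required)
}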

The Verification Report That Mattered Most

The most valuable verification happened during event sourcing integration, when Verifier caught an issue that would’ve been catastrophic in production.

Context: Implementing the event sourcing integration between CRM domain events and the kernel’s EventStore.

Builder’s implementation: Straightforward - emit events after each repository save.

Verifier’s question:
“What happens if DynamoDB save succeeds but EventStore append fails? You now have state divergence between the database and the event log.”
Uh oh. Builder had implemented this pattern:
// ❌ WRONG - not atomic
pub async fn create_account(&self, account: Account) -> Result<()> {
    // Save to DynamoDB
    self.dynamodb_repo.save(&account).await?;

    // Emit event (might fail after save succeeds)
    self.event_store.append(AccountCreated { ... }).await?;

    Ok(())
}
Verifier flagged this as HIGH RISK:
“DynamoDB and EventStore are separate systems. If event append fails, account exists in DB but has no audit trail. Violates event sourcing guarantees.”
The fix required rethinking the architecture. After discussing with Evaluator (in a new planning session), we introduced a two-phase commit pattern:
// ✅ CORRECT - atomic with rollback
pub async fn create_account(&self, account: Account) -> Result<()> {
    // Phase 1: Append event first (source of truth)
    let event_id = self.event_store.append(AccountCreated {
        account_id: account.id,
        ...
    }).await?;

    // Phase 2: Save to DynamoDB with event reference
    let mut entity = AccountEntity::from(account);
    entity.last_event_id = event_id;

    match self.dynamodb_repo.save(&entity).await {
        Ok(_) => Ok(()),
        Err(e) => {
            // Rollback: mark event as failed
            self.event_store.mark_failed(event_id).await?;
            Err(e)
        }
    }
}
This pattern became the standard for all 7 CRM entities.
Key Learning: Verifier’s independent perspective caught an architectural flaw that Builder’s tests would never find. Tests can’t verify what you didn’t think to test.

What We Built

By the end of the iteration, the CRM crate was complete:

Domain Models

7 core entities:
  • Account (companies)
  • Contact (people)
  • Lead (prospects)
  • Opportunity (deals)
  • Activity (tasks/events)
  • Product (catalog items)
  • Address (multi-address support)
Financial Config:
  • Tenant-scoped settings
  • Currency preferences
  • Fiscal year configuration

Infrastructure

Custom DynamoDB macro:
  • #[derive(DynamoDbEntity)]
  • Auto-generates PK/SK/GSI
  • PII field encryption support
Repository pattern:
  • Trait-based abstraction
  • In-memory + DynamoDB implementations
  • Event publishing integration

Event Sourcing

21 domain events:
  • AccountCreated, AccountUpdated
  • ContactAdded, ContactEmailChanged
  • LeadConverted, OpportunityWon
  • etc.
Integration:
  • EventStore trait implementation
  • DynamoDB Streams → EventBridge
  • Audit trail for compliance

Testing

4-level test suite:
  • L1: 85 unit tests (domain validation)
  • L2: 42 integration tests (LocalStack)
  • L3: 12 event flow tests
  • L4: 3 E2E workflows
Coverage: 92% of domain logic
Total:
  • 6,800 lines of production code
  • 2,400 lines of test code
  • 23 files created
  • 216 commits
  • 0 bugs in production

What We Learned: Real Lessons from Real Bugs

Lesson 1: Fresh Verifier Sessions Are Non-Negotiable

Experiment: Early in the process, I got lazy and reused Builder’s session for verification (to “save time”).

Result: Verifier found 0 bugs.

Wait, what? The code was perfect? No. I opened a fresh Verifier session later and found 4 bugs immediately.

Analysis: When you reuse Builder’s session:
  • Verifier inherits Builder’s mental model
  • Verifier “remembers” the shortcuts Builder took
  • Verifier validates assumptions instead of challenging them
Rule established: Always use fresh sessions for verification, even if it feels wasteful.

Lesson 2: AI Hallucinates Requirements, Not Just Code

Builder’s hallucination: During Contact domain model implementation, Builder added a preferred_name field:
pub struct Contact {
    pub id: ContactId,
    pub email: Email,
    pub preferred_name: Option<String>, // ← NOT IN REQUIREMENTS
    ...
}
Builder’s commit message: “Add preferred_name field per PRD section 3.2.1”

Problem: PRD section 3.2.1 says nothing about preferred names.

Verifier caught this:
“Contact struct includes preferred_name field. This field is not mentioned in the PRD. Is this intentional or hallucinated?”
Root cause: Builder saw first_name and last_name and “reasoned” that users would want a preferred name too. A reasonable inference - but not in the requirements.

Fix: Removed the field. Created a GitHub issue: “Consider adding preferred_name field to Contact” for future discussion.

Lesson: AI doesn’t just hallucinate code - it hallucinates features. Verifier prevents scope creep.

Lesson 3: Test Coverage ≠ Requirement Coverage

Builder proudly reported: “92% test coverage across all domain models!”

Verifier asked: “Which requirement does test #14 verify?”

Builder: “…”

Turns out: Builder wrote tests that covered code paths but didn’t map to requirements. Example:
#[test]
fn test_account_name_validation() {
    let account = Account::new("", "tech");
    assert!(account.is_err()); // ✅ Test passes
}
Requirement (from PRD): “Account name must be 3-100 characters, alphanumeric plus spaces and hyphens.”

Builder’s test: Only checks the empty string (1 of ~6 edge cases).

Verifier’s suggestion:
#[test]
fn test_account_name_validation_per_prd_section_2_1() {
    // Too short
    assert!(Account::new("ab", "tech").is_err());

    // Too long
    assert!(Account::new(&"x".repeat(101), "tech").is_err());

    // Invalid characters
    assert!(Account::new("Acme Corp!", "tech").is_err());

    // Valid cases
    assert!(Account::new("Acme Corp", "tech").is_ok());
    assert!(Account::new("Acme-Corp", "tech").is_ok());
}
New practice: Every test must reference a requirement in the test name.

Principles We Established

Based on the bugs we caught and the patterns that worked, we codified these principles:
What we learned: When integrating event sourcing with a database, always append events BEFORE updating state.

Rule: Event append failure = operation fails. Database save failure = rollback event.

Never: Save to database first, then emit event (leads to orphaned state).
What we learned: The DynamoDbEntity macro generated code that compiled but violated DynamoDB best practices (wrong projection types).

Rule: All macro-generated code must be reviewed by Verifier, including expansion inspection.

Tool added: cargo expand to verify macro output.
What we learned: String-based permissions led to typos and inconsistencies across 35 routes.

Rule: Use typed constants for all cross-cutting concerns (permissions, event names, table names).

Example:
// ❌ BEFORE: Runtime error if typo
#[eva_api(permission = "crm.account.raed")]  // typo!

// ✅ AFTER: Compile error if typo
#[eva_api(permission = CRM_ACCOUNT_READ)]
What we learned: Builder’s tests found code-level bugs. Verifier found requirement-level gaps.

Rule: Both are necessary. Tests verify “does it work?” Verifier verifies “is it right?”

Metrics tracked:
  • Builder bugs: Compile errors, test failures
  • Verifier bugs: Requirement gaps, edge cases, consistency issues

The Mistake I Made (And What It Taught Me)

After initial verification: After Verifier approved the Opportunity domain model, I merged to main and moved on.

Later, during integration testing: Integration tests failed. The cross-entity query between Opportunity and Account was broken.

What happened? Verifier checked Opportunity in isolation. Tests passed. Requirements met. But Opportunity has a foreign key relationship to Account: opportunity.account_id → account.id.

The bug: Opportunity’s account_id field used a different ID format than Account’s id field.
// Account
pub struct Account {
    pub id: AccountId(Uuid),  // Format: UUID v4
    ...
}

// Opportunity (WRONG)
pub struct Opportunity {
    pub account_id: String,  // Format: String "ACC-{ulid}"
    ...
}
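One type-level remedy (my sketch; the post’s own fix below is a process change) is to share the typed ID across the relationship, so a format mismatch becomes a compile error:

use uuid::Uuid;

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct AccountId(pub Uuid);

// Opportunity (FIXED): reuses Account's typed ID instead of a raw String.
pub struct Opportunity {
    pub account_id: AccountId,
}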
Why did this happen? Verifier verified each entity independently. Account was implemented first; Opportunity was implemented later. No cross-entity consistency check.

The fix: I created a new verification step: Cross-Entity Consistency Check
After verifying individual entities, perform cross-entity validation:
  1. Foreign key consistency:
    • Do foreign key types match primary key types?
    • Are ID formats consistent across relationships?
  2. Event naming consistency:
    • Do all Created events have the same structure?
    • Are event versioning patterns consistent?
  3. Repository pattern consistency:
    • Do all repositories implement the same trait?
    • Are CRUD method signatures consistent?
  4. API route consistency:
    • Are path patterns consistent? (/accounts/:id vs /account/:account_id)
    • Are HTTP methods consistent across entities?
    • Are permission naming patterns consistent?
Report any cross-entity inconsistencies as BLOCKING issues.
Lesson learned: Verification must check both intra-entity (single model) and inter-entity (relationships) correctness.

Metrics

Planned work: Initialize CRM crate with 7 domain models

Actual completion:
  • 7 domain models ✅
  • DynamoDB macro ✅
  • Event sourcing integration ✅
  • API layer with 35 routes ✅
  • Financial configuration ✅
  • Reference data (ISO 3166, ISO 4217) ✅
  • LocalStack integration tests ✅
Speedup: 4.4x faster than manual estimate

Code Examples (Sanitized)

Here’s the final DynamoDB entity macro we built:
// Generic domain model with PII and event sourcing
#[derive(DynamoDbEntity, Debug, Clone)]
#[table_name = "platform_data"]
#[pk = "TENANT#{tenant_id}#CONTACT#{id}"]
#[sk = "METADATA"]
pub struct ContactEntity {
    pub id: ContactId,
    pub tenant_id: TenantId,

    #[pii(category = "email")]
    pub email: String,

    #[pii(category = "name")]
    pub first_name: String,

    #[pii(category = "name")]
    pub last_name: String,

    pub phone: Option<String>,
    pub created_at: DateTime<Utc>,
    pub updated_at: DateTime<Utc>,

    // Event sourcing metadata
    pub version: u64,
    pub last_event_id: Option<EventId>,
}

// Macro generates:
// - to_item() / from_item() for DynamoDB conversion
// - Partition key / Sort key builders
// - GSI key builders
// - PII encryption hooks
What the macro gave us:
  • Zero boilerplate across 7 entities
  • Compile-time validation of key patterns
  • Automatic PII field encryption
  • Type-safe event metadata