After building 7 CRM domain entities, I noticed a painful pattern: every single entity required the same boilerplate. For each domain entity (Account, Contact, Lead, etc.), I had to hand-write the same repository implementations (in-memory, DynamoDB, cached).
The Hidden Cost: Copy-paste programming creates maintenance debt that scales with entity count.

When I found a bug in the DynamoDB save method, I had to fix it manually in 7 places. When I missed one, it caused a production data corruption issue.
Rationale: Explicit macros are more verbose but prevent silent bugs. When a macro fails, it should fail at
compile time with a clear message, not at runtime with mysterious errors.
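As a small illustration of that fail-fast principle, here is the same idea in plain macro_rules! (the article's derive macros aren't shown; a derive macro gets the same effect by expanding to an error instead of to broken code):

```rust
// Illustration: reject bad input at compile time with a clear message.
// compile_error! is standard Rust.
macro_rules! require_mode {
    (explicit) => {};
    ($other:tt) => {
        compile_error!("require_mode! expects `explicit`");
    };
}

require_mode!(explicit); // compiles
// require_mode!(implicit); // fails with: require_mode! expects `explicit`
```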
Builder implemented macros for single-item entities (one DynamoDB record per domain object). But some entities use a multi-item pattern:
```rust
// Account has 2 DynamoDB items:
// 1. METADATA item (account details)
// 2. LIST_ITEM for queryable fields
// Macro didn't support generating queries for the LIST_ITEM pattern
```
Why Builder missed this: The plan showed examples for single-item entities only. The multi-item pattern was mentioned only in a footnote.

Impact if shipped: Macros would work for only 4/7 entities. The other 3 (Account, Contact, Opportunity) would need manual repository implementations.

Fix: Added #[multi_item] attribute support to the DynamoDbRepository macro (3 hours)
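For illustration, the usage surface might look something like this. Only the attribute name and the two item kinds come from the article; the argument shape and fields are my assumptions, and it won't compile without the platform-macros crate:

```rust
// Hypothetical #[multi_item] usage: the macro is told which DynamoDB items
// make up one aggregate so it can also generate LIST_ITEM queries.
#[derive(DynamoDbRepository)]
#[multi_item(metadata = "METADATA", list_item = "LIST_ITEM")]
pub struct Account {
    pub id: AccountId,
    pub tenant_id: TenantId,
    pub name: String,
}
```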
The CachedRepository macro generated write-through caching but didn't specify an invalidation strategy:
Should update invalidate cache immediately?
Should delete remove from cache?
What about bulk operations?
Impact: Inconsistent caching behavior across entities could lead to stale reads.

Fix: Added an explicit #[cache_strategy] attribute with options: write_through, write_around, write_back (1.5 hours)
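As a sketch of that surface (only the attribute name and its three option values come from the fix above; the struct and argument shape are assumptions):

```rust
// Hypothetical usage of #[cache_strategy]; the comments describe the
// standard meaning of each strategy.
#[derive(CachedRepository)]
#[cache_strategy(write_through)] // write cache and store together on every save
// #[cache_strategy(write_around)]  // write store only; cache fills on next read
// #[cache_strategy(write_back)]    // write cache now, flush to store later
pub struct Contact {
    pub id: ContactId,
    pub tenant_id: TenantId,
}
```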
With macros approved, I faced the real test: migrating the 7 existing entities to use macros.

The naive approach I almost took: “Let AI refactor all 7 entities in one session.”

Why that would have failed: Refactoring all entities simultaneously makes it impossible to verify each one independently.

The approach that worked: Migrate one entity at a time, verify, then move to the next.
```
Refactor Account entity and repositories to use new derive macros.

Current state: Manual implementation (812 lines across 3 files)
Target state: Macro-driven (43 lines)

Steps:
1. Add #[derive(DomainAggregate)] to Account
2. Replace InMemoryAccountRepository with macro version
3. Replace DynamoDbAccountRepository with macro version
4. Replace CachedAccountRepository with macro version
5. Verify all 23 tests still pass
```
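For context, the target state looks roughly like this. The derive names come from the steps above; the fields and the in-memory repository macro's name are my assumptions:

```rust
// Sketch of the macro-driven Account: three hand-written repository files
// collapse into derives on the aggregate itself.
#[derive(DomainAggregate, DynamoDbRepository, CachedRepository, InMemoryRepository)]
pub struct Account {
    pub id: AccountId,
    pub tenant_id: TenantId,
    pub name: String,
    // ... remaining fields
}
```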
The macro design process: I gave Evaluator access to all 7 existing repository implementations and asked:
```
Analyze these 7 repository implementations.
Find common patterns and identify what can be macro-generated.
```
AI’s analysis (impressive):

“All repositories follow the same pattern:
save() method: convert domain → entity → DynamoDB item
get() method: fetch item → entity → domain
Error handling: same pattern across all entities
Tenant isolation: all methods require tenant_id + capsule_id
Variation points:
Entity type (Account vs Contact vs Lead)
ID type (AccountId vs ContactId)
GSI queries (different fields per entity)
Recommendation: Single macro parameterized by entity type and ID type.”

What surprised me: AI didn’t just copy the pattern - it identified the abstraction. It recognized that the differences (Account vs Contact) were parameter values, not fundamentally different code.
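To make that concrete, here is a minimal sketch of the abstraction in plain generic Rust rather than the actual macro output (the trait, type alias, and function names are all my assumptions):

```rust
use std::collections::HashMap;

// Stand-in for aws_sdk_dynamodb's AttributeValue, to keep the sketch self-contained.
type AttributeValue = String;

// The invariant part of all 7 repositories: domain -> item on save,
// item -> domain on get. The variation points (entity type, ID type)
// become parameters - exactly what the derive macro fills in per entity.
trait DynamoDbAggregate: Sized {
    type Id;
    fn to_item(&self) -> HashMap<String, AttributeValue>;
    fn from_item(item: &HashMap<String, AttributeValue>) -> Option<Self>;
}

// One generic save works for every entity; the macro's only job becomes
// generating the small per-entity DynamoDbAggregate impl.
fn save<T: DynamoDbAggregate>(table: &mut Vec<HashMap<String, AttributeValue>>, aggregate: &T) {
    table.push(aggregate.to_item());
}
```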
What’s wrong: No validation that aggregate.tenant_id matches self.client.tenant_id.

Impact: Potential cross-tenant data corruption (saving an Account from Tenant A into Tenant B’s partition).

Caught by: Manual code inspection using cargo expand.

Fixed with:
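The original snippet isn't preserved here; a minimal sketch of the guard such a fix would add to the generated save() (the error type and field names are illustrative assumptions):

```rust
// Sketch of the generated tenant-isolation check; without it, an Account
// from Tenant A could be written into Tenant B's partition.
fn save(&self, aggregate: &Account) -> Result<(), RepositoryError> {
    if aggregate.tenant_id != self.client.tenant_id {
        return Err(RepositoryError::TenantMismatch {
            expected: self.client.tenant_id.clone(),
            found: aggregate.tenant_id.clone(),
        });
    }
    // ... convert to a DynamoDB item and write, as before
    Ok(())
}
```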
```
Write comprehensive README.md for platform-macros crate.

Include:
- Overview of what each macro does
- Usage examples for all 5 macros
- Migration guide from manual to macro-driven
- Common pitfalls and how to avoid them
- Performance considerations
```
AI generated a 387-line README that included:
Complete usage examples
Side-by-side before/after comparisons
Migration checklist
Troubleshooting section
Performance benchmarks (estimated)
What impressed me: The README quality was better than what I would have written manually. Why?
Comprehensive: AI didn’t skip sections. Covered every macro thoroughly.
Consistent: Same structure for each macro (usage → example → edge cases)
Practical: Included actual migration steps, not just API docs
The twist: I still needed to review and edit the README. AI’s performance benchmarks were wrong
(hallucinated numbers). But the structure and examples were solid.
Insight: AI excels at creating structured documentation from patterns. But verify technical claims
(benchmarks, performance characteristics) independently.
The problem: This was a breaking change. Existing code called save(), but the macro now generated db_save().

Impact: 30 commits of cascading fixes across the crm crate.

I’ll cover this story in detail in the “When AI Fails: Cascading Errors” article, but the key learning: Macro changes are amplified changes. One macro bug affects every entity using that macro.
Principle 1: Macros Enforce Correctness

What we learned: Good macros don’t just reduce typing - they enforce correctness.

Example: The DynamoDbRepository macro enforces tenant isolation checks. You can’t forget them, because the macro generates them automatically.

Rule: If a pattern has invariants (tenant checks, error handling, isolation), encode them in macros so they can’t be violated.
Principle 2: Explicit Over Clever
What we learned: Macros that infer behavior are brittle. Macros that require explicit annotations are robust.

Bad:
```rust
#[derive(Repository)] // Infers everything
pub struct AccountRepository; // What type? What storage? Unclear!
```
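For contrast, an explicit version might look like the sketch below (the derive and cache_strategy names reuse those already mentioned in this article; the repository attribute's argument shape is an assumption):

```rust
// Explicit counterpart (illustrative): entity, ID, and caching strategy
// are all stated, so nothing is silently inferred.
#[derive(DynamoDbRepository)]
#[repository(entity = "Account", id = "AccountId")]
#[cache_strategy(write_through)]
pub struct AccountRepository;
```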
Principle 3: Inspect the Generated Code

Practice: After macro changes, always inspect the generated code for at least one entity.

Rule: Generated code should be readable, not just correct. If the cargo expand output is incomprehensible, the macro is too clever.
Principle 4: Migrate Incrementally
What we learned: Migrating all entities to macros simultaneously is risky.

Bad approach:
Refactor all 7 entities in one session
“Comprehensive” testing at the end
High risk of breaking changes
Good approach:
Migrate one entity
Verify it thoroughly
Use learnings to improve macro
Repeat for next entity
Benefit: Each migration surfaces edge cases that improve the macro for subsequent entities.
Principle 5: AI Excels at Boilerplate, Not Abstraction Design
What we learned: AI was 4x faster at implementing macros than at designing them.

Design phase (creative): AI needed extensive human guidance to identify the right abstractions.
Implementation phase (systematic): AI independently implemented 5 macros with comprehensive tests.
Documentation phase (systematic): AI generated an excellent README with minimal guidance.

Rule: Use AI for implementing macro logic and generating documentation. Use human judgment for designing the abstraction boundaries.
After migrating 4 entities, I was confident in the macro pattern. I got lazy.

Shortcut I took: Migrated the last 3 entities (Product, Activity, Address) in a single Builder session without individual verification.

Result: CI failed. 14 compilation errors across the 3 entities.

What happened: The Product entity had a unique pattern - pricing fields that other entities didn’t have. The DomainAggregate macro’s apply() generation didn’t handle optional Decimal fields correctly.

AI’s role in fixing: Instead of manually debugging, I asked a fresh Verifier session:
```
CI is failing for Product, Activity, Address entities after macro migration.

Errors:
[Show compilation errors]

Analyze:
1. What pattern do these 3 entities have that Account/Contact/Lead don't?
2. Does the macro handle that pattern?
3. Recommend fix
```
Verifier’s analysis:

“Product entity has Decimal fields for pricing. DomainAggregate macro generates an apply() method that assumes all fields are Clone. Decimal type doesn’t implement Clone trait.

Options:
Require Clone bound on all fields
Add special handling for Decimal
Generate field updates conditionally based on trait bounds
Recommendation: Option 1 (simplest, most explicit)”

Fix: Added a where T: Clone bound to the generated methods. All 3 entities compiled.

Lesson: An AI Verifier can debug macro issues by analyzing error patterns and generated code. Faster than manual debugging.
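For illustration, the shape of that fix in the generated code might be something like this (the actual macro output isn't shown in the article, and the helper's name is an assumption):

```rust
// Sketch: the generated field-update helper now states its requirement
// explicitly instead of silently assuming every field type is Clone.
fn apply_field<T>(current: &T, update: &Option<T>) -> T
where
    T: Clone, // the explicit bound the fix added
{
    match update {
        Some(new_value) => new_value.clone(), // update provided: take it
        None => current.clone(),              // no update: keep current value
    }
}
```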