How five derive macros eliminated 600-800 lines of boilerplate per domain entity
This article is part of the “Building with AI” series documenting my journey using multi-agent AI workflows to build production systems. All examples are from personal projects and do not represent employer technologies.
By January 2025, my SaaS platform had 20+ domain entities (Leads, Accounts, Opportunities, Products, etc.). Each one needed:
Event sourcing boilerplate (~100 lines)
DynamoDB repository (~300 lines)
In-memory test repository (~80 lines)
Caching decorator (~150 lines)
Event helper methods (~50 lines)
Total: 680 lines of repetitive code per entity. Multiply by 20 entities and that's 13,600 lines of nearly identical boilerplate.

AI Focus: Could AI analyze the patterns across entities and generate macros to eliminate this?

System Example: Five derive macros that reduce boilerplate by 80% and enforce architectural consistency.
I asked the Evaluator agent: “Look at these 5 domain entities and tell me what’s repetitive.” AI’s response:
```
Analyzing Lead, Account, Opportunity, Product, Contact...

Patterns found:
1. Event sourcing: All have version, uncommitted_events, replay()
2. Repositories: All implement save(), find_by_id(), delete()
3. Test helpers: All have test_fixture(), assert_events()
4. Event enums: All need aggregate_id(), event_type()
5. Caching: All use Moka with same TTL and write-invalidate

Recommendation: Create 5 derive macros to eliminate this.
```
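To ground pattern #1, here is roughly the shape of the event-sourcing boilerplate each entity repeated by hand. This is a minimal sketch: the EventSourced trait, the LeadEvent enum, and the field names are illustrative assumptions, not the project's actual identifiers.

```rust
// Sketch of the per-entity event-sourcing boilerplate a derive macro replaces.
// All names here are illustrative assumptions.

#[derive(Debug, Clone)]
enum LeadEvent {
    Created { name: String },
    Renamed { name: String },
}

trait EventSourced {
    type Event;

    fn version(&self) -> u64;
    fn uncommitted_events(&self) -> &[Self::Event];
    fn apply(&mut self, event: &Self::Event);

    // Rebuild state by replaying the full event history in order.
    fn replay(&mut self, events: &[Self::Event]) {
        for event in events {
            self.apply(event);
        }
    }
}

#[derive(Debug, Default)]
struct Lead {
    name: String,
    version: u64,
    uncommitted_events: Vec<LeadEvent>,
}

impl EventSourced for Lead {
    type Event = LeadEvent;

    fn version(&self) -> u64 {
        self.version
    }

    fn uncommitted_events(&self) -> &[LeadEvent] {
        &self.uncommitted_events
    }

    fn apply(&mut self, event: &LeadEvent) {
        match event {
            LeadEvent::Created { name } | LeadEvent::Renamed { name } => {
                self.name = name.clone();
            }
        }
        self.version += 1;
    }
}
```

The derive macro's job, presumably, is to stamp out everything here except the entity-specific apply logic.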
What AI Did

The Builder agent analyzed the patterns and generated the macro implementations. Total: 2,917 lines of macro code that eliminate 600-800 lines per entity.

What Human Did

I defined the macro API (what attributes to support, what methods to generate) and reviewed the generated code for edge cases. AI doesn't know your naming conventions: I had to specify snake_case for event types and METADATA/LIST_ITEM for DynamoDB patterns.

Result

5 macros eliminating 80% of boilerplate. 16 integration tests. 20+ entities refactored. Zero runtime overhead (macros expand at compile time).
Mistake: The initial DynamoDbRepository macro generated code that didn't handle GSI queries correctly.

Why it failed: AI generated Query operations but forgot to specify the GSI name via .index_name(). Queries were hitting the primary table instead of the GSI.

How we fixed it: Added attribute validation to the macro:
```rust
use proc_macro::TokenStream;

#[proc_macro_derive(DynamoDbRepository, attributes(repository))]
pub fn derive_dynamodb_repository(input: TokenStream) -> TokenStream {
    let attrs = match parse_attrs(&input) {
        Ok(attrs) => attrs,
        Err(err) => return err.to_compile_error().into(),
    };

    // Validate: if gsi_name_lookup is specified, ensure a GSI index name is too
    if attrs.gsi_name_lookup.is_some() && attrs.gsi_index_name.is_none() {
        return syn::Error::new(
            proc_macro2::Span::call_site(),
            "gsi_name_lookup requires gsi_index_name attribute",
        )
        .to_compile_error()
        .into();
    }

    // Generate code...
}
```
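For context, the underlying bug was a single missing builder call. In aws-sdk-dynamodb terms it looked roughly like this (the table, index, and attribute names are illustrative, not the project's actual schema):

```rust
use aws_sdk_dynamodb::{types::AttributeValue, Client};

// Without .index_name(), a DynamoDB Query runs against the table's primary
// key schema rather than the GSI. This is the call the generated code omitted.
async fn find_by_gsi(client: &Client, pk: String) -> Result<(), aws_sdk_dynamodb::Error> {
    let _results = client
        .query()
        .table_name("entities")
        .index_name("gsi1") // the missing line
        .key_condition_expression("#pk = :pk")
        .expression_attribute_names("#pk", "gsi1_pk")
        .expression_attribute_values(":pk", AttributeValue::S(pk))
        .send()
        .await?;
    Ok(())
}
```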
Lesson: AI writes working code for the happy path but misses validation. Proc macros need compile-time checks to catch configuration errors.
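The parse_attrs helper above is left abstract. As a sketch of what it might look like with syn 2's parse_nested_meta, assuming the input TokenStream has first been parsed into a syn::DeriveInput (the RepositoryAttrs struct is an illustrative assumption; the field names follow the validation snippet):

```rust
use syn::{DeriveInput, LitStr};

// Configuration extracted from #[repository(...)] attributes.
// Field names follow the validation snippet above; the struct itself
// is an illustrative assumption.
#[derive(Default)]
struct RepositoryAttrs {
    gsi_index_name: Option<String>,
    gsi_name_lookup: Option<String>,
}

fn parse_attrs(input: &DeriveInput) -> syn::Result<RepositoryAttrs> {
    let mut attrs = RepositoryAttrs::default();
    for attr in &input.attrs {
        if !attr.path().is_ident("repository") {
            continue;
        }
        // Walk the `key = "value"` pairs inside #[repository(...)].
        attr.parse_nested_meta(|meta| {
            if meta.path.is_ident("gsi_index_name") {
                let lit: LitStr = meta.value()?.parse()?;
                attrs.gsi_index_name = Some(lit.value());
            } else if meta.path.is_ident("gsi_name_lookup") {
                let lit: LitStr = meta.value()?.parse()?;
                attrs.gsi_name_lookup = Some(lit.value());
            }
            Ok(())
        })?;
    }
    Ok(attrs)
}
```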
Find the repetition first - Don’t write macros until you have 5+ examples of the pattern. AI needs repetition to extract patterns.
Design the API, then generate - Spend time on the macro attributes (what configuration to expose). AI can implement the macro once you define the interface; see the sketch after this list.
Add compile-time validation - Use proc-macro compile errors to catch configuration mistakes early. Better than runtime panics.
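As an illustration of what that attribute surface looks like at the call site, here is a hypothetical sketch. Only the DynamoDbRepository derive and the gsi_* attribute names appear earlier in this article; the table name and struct fields are invented for the example.

```rust
// Hypothetical call-site sketch: the macro API is the set of knobs the
// derive exposes. Generating the impls is the easy part; choosing these
// attributes is the design work.
#[derive(DynamoDbRepository)]
#[repository(
    table = "entities",
    gsi_index_name = "gsi1",
    gsi_name_lookup = "name"
)]
pub struct Lead {
    pub id: String,
    pub name: String,
    pub version: u64,
}
```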
Pro tip: Use cargo expand to inspect macro-generated code during development. It shows you exactly what the macro expands to, making debugging 10x easier.
Total: 2,917 lines of macro infrastructure that eliminates 8,283 lines of boilerplate across the platform.

ROI: Paid for itself after the 4th entity was refactored.
Do you use proc macros? How do you balance code generation vs manual implementation? Connect on LinkedIn or comment on the YouTube Short.
Disclaimer: This content represents my personal learning journey using AI for a personal project. It does not represent my employer’s views, technologies, or approaches. All code examples are generic patterns or pseudocode for educational purposes.