This is Week 7 of “Building with AI”, a 10-week journey documenting how I use multi-agent AI workflows to build a production-grade SaaS platform. This week: applying Week 6’s ADR-driven approach to configuration management. Same pattern: clear requirements → AI designs solution → human adds operational knowledge. Result: 340 lines of boilerplate vanish, a 99.2% cache hit rate, and zero config bugs in production. Previous: Week 6: AWS Runtime Adoption


The Setup: Double Down on What Works

Week 6 proved that AI excels at design when given clear constraints (ADRs, requirements docs). The success formula from Week 6:
  1. Document constraints in structured format
  2. Let AI design pattern that meets constraints
  3. Human reviews and adds operational optimizations
  4. Implement atomically, test thoroughly
Week 7’s challenge: Apply the same approach to configuration management.
The problem: My SaaS platform had configuration chaos:
  • JWT expiration hardcoded in 3 places
  • Each crate reading std::env::var() directly
  • No tenant-specific overrides
  • Zero hierarchy (Platform → Tenant → Capsule)
  • 340 lines of manual config lookups across 17 handlers
Every handler manually called ConfigService.get() with different error handling patterns. Inconsistent, error-prone, impossible to enforce.

What We Built This Week

The Challenge

Design a configuration system that:
  1. Platform-level defaults (environment variables)
  2. Tenant-level overrides (stored in DynamoDB)
  3. Capsule-level overrides (SDLC: dev/staging/prod)
  4. Zero manual ConfigService calls in handlers (automatic injection)
Traditional approach: 1-2 weeks of design + implementation
AI-assisted approach (Week 6 method): 3 days

The Multi-Agent Design Session

I gave the Evaluator agent our requirements (following Week 6’s pattern).
Input: a requirements doc with the hierarchy and constraints.
Not: “Build a config system” (too vague, Week 5’s mistake).
The Evaluator’s response surprised me: “This is a middleware problem, not a service problem.”
The Evaluator analyzed our existing CapsuleExtractor middleware (from Week 6’s AWS client work) and proposed chaining:
Request → CapsuleExtractor → ConfigMiddleware → Handler
          (extracts scope)    (loads config)    (uses config)
Key insight: Reuse the scope extraction we already built for AWS clients! The Builder agent then generated:
  • Single cache lookup per request
  • Hierarchical resolution (Platform → Tenant → Capsule)
  • Automatic injection via Actix-web extensions
  • Graceful degradation for public routes

The Architecture AI Discovered

The pattern has three layers:

1. Middleware Chain (Automatic Injection)

pub async fn login_handler(
    config: web::ReqData<ConfigContext>,
) -> Result<HttpResponse> {
    // Config already resolved by ConfigMiddleware - injected automatically,
    // no manual ConfigService lookup and no per-handler error handling
    let jwt_expiry = config.auth_config.jwt_expiration_seconds;
    // ... handler logic
}
The genius: Handlers can’t forget to load config. The compiler enforces it. If a handler declares config: web::ReqData<ConfigContext>, it won’t compile unless ConfigMiddleware is registered. Zero runtime errors.

2. Hierarchical Resolution

ConfigService resolves configuration with a fallback chain:
impl ConfigService {
    pub async fn get_auth_config(
        &self,
        tenant_id: &str,
        capsule_id: Option<&str>,
    ) -> Result<Arc<AuthConfig>> {
        // Try capsule-level override (most specific)
        if let Some(capsule_id) = capsule_id {
            if let Some(config) = self.get_capsule_config(tenant_id, capsule_id).await? {
                return Ok(Arc::new(config));
            }
        }

        // Try tenant-level override
        if let Some(config) = self.get_tenant_config(tenant_id).await? {
            return Ok(Arc::new(config));
        }

        // Fall back to platform defaults (env vars)
        Ok(Arc::new(self.get_platform_defaults()))
    }
}
Example hierarchy in action:
Platform defaults:     jwt_expiration_seconds = 3600 (1 hour)
Tenant override:       jwt_expiration_seconds = 7200 (2 hours, enterprise)
Capsule override:      jwt_expiration_seconds = 300  (5 min, dev env)

Result for dev capsule: 300 seconds ← Most specific wins
Result for prod capsule: 7200 seconds ← Tenant override (no capsule override)
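The fallback chain can be sketched framework-free: walk from the most specific scope to the least and return the first hit. This is a minimal std-only sketch; the HashMaps stand in for the DynamoDB-backed overrides, the plain default stands in for the env var, and the keys are illustrative, not the production API.

```rust
use std::collections::HashMap;

// Hierarchical resolution: capsule override → tenant override → platform default.
fn resolve_jwt_expiration(
    platform_default: u64,
    tenant_overrides: &HashMap<String, u64>,
    capsule_overrides: &HashMap<(String, String), u64>,
    tenant_id: &str,
    capsule_id: Option<&str>,
) -> u64 {
    // Capsule-level override wins (most specific)...
    if let Some(capsule_id) = capsule_id {
        if let Some(&v) =
            capsule_overrides.get(&(tenant_id.to_string(), capsule_id.to_string()))
        {
            return v;
        }
    }
    // ...then tenant-level...
    if let Some(&v) = tenant_overrides.get(tenant_id) {
        return v;
    }
    // ...then platform defaults.
    platform_default
}

fn main() {
    let mut tenant = HashMap::new();
    tenant.insert("tenant-123".to_string(), 7200); // enterprise tenant: 2 hours
    let mut capsule = HashMap::new();
    capsule.insert(("tenant-123".to_string(), "DEVUS".to_string()), 300); // dev: 5 min

    // Dev capsule: most specific wins.
    assert_eq!(resolve_jwt_expiration(3600, &tenant, &capsule, "tenant-123", Some("DEVUS")), 300);
    // Prod capsule with no capsule override: tenant override wins.
    assert_eq!(resolve_jwt_expiration(3600, &tenant, &capsule, "tenant-123", Some("PRODUS")), 7200);
    // Unknown tenant: platform default.
    assert_eq!(resolve_jwt_expiration(3600, &tenant, &capsule, "other", None), 3600);
}
```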

3. Caching Layer (Performance)

A Moka cache sits in front of DynamoDB so config reads don’t hammer it on every request:
pub struct ConfigService {
    cache: Arc<Cache<ConfigCacheKey, Arc<AuthConfig>>>,
    repository: Arc<dyn ConfigRepository>,
}

impl ConfigService {
    pub fn new(repository: Arc<dyn ConfigRepository>) -> Self {
        let cache = Cache::builder()
            .max_capacity(10_000)  // 10K tenant/capsule combos
            .time_to_live(Duration::from_secs(300))  // 5 min TTL
            .build();

        Self { cache: Arc::new(cache), repository }
    }
}
Results:
  • Cache hit rate: 99.2%
  • DynamoDB calls per 1000 requests: 8 (was 17,000)
  • P99 latency improvement: 42ms faster
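Moka handles the eviction and TTL mechanics; the cache-aside shape it enables can be sketched with the standard library alone. This hypothetical `TtlCache` is not the production service, it just shows why a fixed TTL turns almost every request into a cache hit.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Minimal TTL cache sketch: entries expire after a fixed time-to-live.
struct TtlCache<V: Clone> {
    ttl: Duration,
    entries: HashMap<String, (Instant, V)>,
}

impl<V: Clone> TtlCache<V> {
    fn new(ttl: Duration) -> Self {
        Self { ttl, entries: HashMap::new() }
    }

    // Cache-aside: return a fresh entry, or call `load` and store on miss/expiry.
    fn get_or_load(&mut self, key: &str, load: impl FnOnce() -> V) -> V {
        if let Some((stored_at, value)) = self.entries.get(key) {
            if stored_at.elapsed() < self.ttl {
                return value.clone(); // cache hit: no backend call
            }
        }
        let value = load(); // cache miss: hit the backend (DynamoDB in the post)
        self.entries.insert(key.to_string(), (Instant::now(), value.clone()));
        value
    }
}

fn main() {
    let mut cache = TtlCache::new(Duration::from_secs(300));
    let mut backend_calls = 0;
    for _ in 0..1000 {
        cache.get_or_load("tenant-123/DEVUS", || {
            backend_calls += 1;
            300u64 // pretend this came from DynamoDB
        });
    }
    // 1000 requests, one backend call: everything else is a cache hit.
    assert_eq!(backend_calls, 1);
}
```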

What Went Wrong (And How We Fixed It)

Mistake: Initially wired the chain so that ConfigMiddleware executed BEFORE CapsuleExtractor.
Why it failed: ConfigMiddleware needs CapsuleContext to know which tenant/capsule config to load. Without CapsuleExtractor running first, it had no scope information.
How we fixed it: Reordered the .wrap() calls in main.rs so CapsuleExtractor runs first (in Actix-web, the last registered middleware is the outermost layer and handles the request first):
// ❌ WRONG ORDER
App::new()
    .wrap(CapsuleExtractor::new())                // Registered first → runs second
    .wrap(ConfigMiddleware::new(config_service))  // Registered last → runs first (no scope yet!)

// ✅ CORRECT ORDER
App::new()
    .wrap(ConfigMiddleware::new(config_service))  // Registered first → runs second
    .wrap(CapsuleExtractor::new())                // Registered last → runs first, extracts scope
Lesson: AI designed both middleware pieces correctly but didn’t understand middleware execution order in Actix-web. Wraps are applied inside-out, so the last registered middleware runs first; humans need to know the framework.
Week 6 parallel: Same as the credential caching gap. AI designs pure patterns but misses framework-specific quirks.
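The ordering quirk is easy to demonstrate with plain function composition, no Actix involved: each wrap layers around what came before, so the last layer added sits outermost and sees the request first.

```rust
// Sketch of why .wrap() order is inverted: each wrap becomes a new outer layer,
// so the LAST layer added runs FIRST on the way in.
type Handler = Box<dyn Fn(&mut Vec<&'static str>)>;

fn wrap(inner: Handler, name: &'static str) -> Handler {
    Box::new(move |trace| {
        trace.push(name); // this layer runs before the one it wraps
        inner(trace);
    })
}

fn main() {
    let handler: Handler = Box::new(|trace| trace.push("handler"));
    // Mirrors App::new().wrap(ConfigMiddleware).wrap(CapsuleExtractor):
    // ConfigMiddleware wrapped first, CapsuleExtractor wrapped last...
    let app = wrap(wrap(handler, "ConfigMiddleware"), "CapsuleExtractor");

    let mut trace = Vec::new();
    app(&mut trace);
    // ...so CapsuleExtractor is outermost and executes first.
    assert_eq!(trace, vec!["CapsuleExtractor", "ConfigMiddleware", "handler"]);
}
```

Drawing this as nested layers at design time is exactly the data-flow exercise the post recommends.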

The Migration

We migrated 17 auth handlers from manual config loading to middleware injection.
Before: 17 handlers × manual loading = 340 lines of boilerplate
// Repeated in login, refresh_token, validate_token, etc.
async fn handler(
    config_service: web::Data<Arc<ConfigService>>,
    tenant_id: web::Path<String>,
    capsule: web::ReqData<CapsuleContext>,
) -> Result<HttpResponse> {
    let auth_config = config_service
        .get_auth_config(&tenant_id, Some(&capsule.capsule_id))
        .await
        .map_err(|e| ErrorInternalServerError(e))?;

    // ... use auth_config
}
After: Zero config lookups in handlers
// All handlers now
async fn handler(
    config: web::ReqData<ConfigContext>,
) -> Result<HttpResponse> {
    // Config already loaded by middleware
    let jwt_expiry = config.auth_config.jwt_expiration_seconds;
}
Deleted code:
  • 340 lines of config loading boilerplate
  • 17 error handling blocks
  • Inconsistent patterns across handlers
Added code:
  • 179 lines of ConfigMiddleware
  • Single registration in main.rs
  • 100% consistent pattern enforced by compiler
Week 6 parallel: Just like AWS client migration, we did this atomically. All 17 handlers migrated together, tested together, deployed together. No broken intermediate states.

The REST API (Bonus Discovery)

AI also proposed exposing configuration management via a REST API (10 endpoints).
Platform Config:
GET    /api/platform/config/auth
PUT    /api/platform/config/auth
POST   /api/platform/config/auth/reset
Tenant Config:
GET    /api/{tenant_id}/config/auth
PUT    /api/{tenant_id}/config/auth
DELETE /api/{tenant_id}/config/auth
Capsule Config:
GET    /api/{tenant_id}/capsules/{capsule_id}/config/auth
PUT    /api/{tenant_id}/capsules/{capsule_id}/config/auth
DELETE /api/{tenant_id}/capsules/{capsule_id}/config/auth
POST   /api/{tenant_id}/capsules/{capsule_id}/config/auth/preview
The preview endpoint is brilliant—it shows what config would resolve to without saving:
POST /api/tenant-123/capsules/DEVUS/config/auth/preview
{
  "jwt_expiration_seconds": 300
}

Response:
{
  "effective_config": {
    "jwt_expiration_seconds": 300,  // Your override
    "refresh_token_ttl_seconds": 7200,  // From tenant
    "enable_ses": true  // From platform
  },
  "source": {
    "jwt_expiration_seconds": "capsule",
    "refresh_token_ttl_seconds": "tenant",
    "enable_ses": "platform"
  }
}
This lets admins see the full hierarchy before committing changes. Prevents production misconfigurations.
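The preview response pairs each field with its source. Assuming per-field resolution (as the response above implies), the merge can be sketched as a walk over each field that records provenance. Field names mirror the example, but the numeric-only maps and the platform refresh default here are illustrative assumptions.

```rust
use std::collections::BTreeMap;

// Per-field merge with provenance: the most specific layer that defines a
// field wins, and we record which layer that was ("capsule"/"tenant"/"platform").
fn effective_config(
    platform: &BTreeMap<&'static str, u64>,
    tenant: &BTreeMap<&'static str, u64>,
    capsule: &BTreeMap<&'static str, u64>,
) -> BTreeMap<&'static str, (u64, &'static str)> {
    let mut out = BTreeMap::new();
    for (&field, &default) in platform {
        let resolved = capsule
            .get(field)
            .map(|&v| (v, "capsule"))
            .or_else(|| tenant.get(field).map(|&v| (v, "tenant")))
            .unwrap_or((default, "platform"));
        out.insert(field, resolved);
    }
    out
}

fn main() {
    // Platform defaults (refresh default is illustrative).
    let platform = BTreeMap::from([
        ("jwt_expiration_seconds", 3600u64),
        ("refresh_token_ttl_seconds", 86400),
    ]);
    let tenant = BTreeMap::from([("refresh_token_ttl_seconds", 7200u64)]);
    let capsule = BTreeMap::from([("jwt_expiration_seconds", 300u64)]); // the previewed override

    let eff = effective_config(&platform, &tenant, &capsule);
    assert_eq!(eff["jwt_expiration_seconds"], (300, "capsule"));
    assert_eq!(eff["refresh_token_ttl_seconds"], (7200, "tenant"));
}
```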

Week 7 vs Week 6: The Pattern Repeats

Week 6:
  • Input: ADR-0010 (data isolation)
  • AI output: 4-scope client factory
  • Human addition: credential caching
  • Result: 2,364 lines, 39 tests, 0 bugs
  • Time: 3 days
Week 7:
  • Input: configuration hierarchy requirements
  • AI output: hierarchical config middleware
  • Human addition: caching and middleware ordering
  • Result: zero config bugs in production
  • Time: 3 days

Key Learnings This Week

What worked: ConfigMiddleware reused the CapsuleExtractor we built in Week 6 for AWS clients.
Why it worked: Both patterns need the same scope information (tenant + capsule). Extract once, use everywhere.
New principle: When building infrastructure, design composable pieces that work together. Week 6’s middleware became Week 7’s foundation.
What we learned: Making config injection type-safe means handlers can’t compile without it.
Why it matters: Documentation can be ignored. Compilation can’t.
Example: A handler declares config: web::ReqData<ConfigContext> → it won’t compile unless the middleware is registered → zero runtime errors.
Contrast: Before, handlers could forget to call ConfigService.get() → runtime errors in production.
AI’s insight: Add a preview endpoint that shows effective config without saving.
Why it’s brilliant: Admins can see the full hierarchy (platform → tenant → capsule) before committing changes.
Real-world save: Prevented setting a dev-level JWT timeout (300s) in a production capsule. The preview showed it would override the enterprise tenant config (7200s).
Confirmation: The ADR-driven design approach from Week 6 works for config too.
The formula:
  • Clear inputs (hierarchy requirements)
  • AI design (middleware pattern)
  • Human optimization (caching, ordering)
  • Atomic implementation
  • Thorough testing
Implication: This is a repeatable pattern for infrastructure work, not a one-time success.

Metrics: Week 7 By The Numbers

Removed:
  • Boilerplate: 340 lines (17 handlers × 20 lines each)
  • Manual ConfigService.get() calls: 17
  • Inconsistent error handling blocks: 17
Added:
  • Middleware: 179 lines
  • Service layer: 892 lines
  • REST API: 456 lines
  • Tests: 942 lines (28 tests)
Net: More infrastructure code, but zero boilerplate in handlers

Actionable Takeaways

Based on Week 6 and Week 7’s successes:
  1. Middleware over manual calls - Inject dependencies via middleware so handlers can’t forget. Compiler enforcement beats documentation.
  2. Cache at the service layer - Put Moka/Redis in front of your config backend. Our 5-min TTL gives 99.2% hit rate.
  3. Preview before commit - Add preview endpoints that show effective config without saving. Prevents production misconfigurations.
  4. Reuse scope extraction - If you have tenant/capsule context in one place (Week 6), reuse it everywhere (Week 7).
  5. Type-safe injection - Use framework features (Actix web::ReqData<T>) to make dependencies compile-time checked.
Pro tip: When designing middleware chains, draw the data flow first:
Request → Middleware1 (extracts X) → Middleware2 (needs X) → Handler (uses result)
This reveals dependencies before you write code. We caught the ConfigMiddleware ordering issue at design time, not runtime.

The Consistency: Weeks 6 and 7

Week 6 taught us: AI excels at proactive design given clear constraints.
Week 7 proved: That wasn’t a fluke. The same formula works again.
The emerging pattern:
Weeks 1-3: Build foundation (multi-agent workflow, plan-implement-verify, event sourcing)
Week 4: AI excels at systematic work (testing, docs)
Week 5: AI fails at reactive debugging (cascading errors)
Weeks 6-7: AI excels at infrastructure design (when given clear inputs)
The insight: AI’s strength isn’t “coding” generically. It’s pattern extraction and systematic application given clear constraints. Give AI:
  • ❌ “Fix these errors” → Disaster (Week 5)
  • ✅ “Design pattern meeting these constraints” → Excellence (Weeks 6-7)

What’s Next

Week 8 Preview: We now have patterns (AWS clients, config middleware) that enforce isolation and hierarchy. Next: how do we test these multi-layered systems? We’ll build the testing infrastructure that validates tenant isolation, config hierarchy, and AWS client scoping. The hypothesis: if ADR-driven design worked for clients and config, it should work for comprehensive testing too.

Discuss This Week

How do you handle configuration in multi-tenant systems? Manual lookups or automatic injection? Share your middleware patterns or ask about the hierarchical config approach.
Disclaimer: All examples are from personal projects. No proprietary code or employer-specific patterns included.