The Setup: A Simple Configuration Problem
I was implementing a configuration management system. Three layers of config: defaults, environment-specific, and runtime overrides. Simple hierarchy. I’d done this dozens of times. Here’s how the conversation started.
My Initial Prompt
- How should nested objects merge? Deep merge or shallow replacement?
- What happens if environment config removes a default key?
- Should runtime overrides be able to remove keys, or only add/modify?
- How do you handle type conflicts (string in default, array in override)?
AI: If prod’s [database] section sets only the host and SSL flag, does the merged config keep pool_size and port from defaults? Or does defining [database] in prod replace the entire section?
ME: Obviously it merges. We want prod host and SSL, but keep the default pool size. This is basic stuff.
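To make the question concrete, here’s a minimal Python sketch of the two readings. Dicts stand in for the parsed config, and all values are hypothetical, not from the original conversation:

```python
# Hypothetical config layers (values are illustrative).
defaults = {"database": {"host": "localhost", "port": 5432,
                         "pool_size": 10, "ssl": False}}
prod = {"database": {"host": "db.prod.internal", "ssl": True}}

# Merging the [database] *section* keeps unspecified defaults:
merged = {"database": {**defaults["database"], **prod["database"]}}
assert merged["database"]["pool_size"] == 10               # kept from defaults
assert merged["database"]["host"] == "db.prod.internal"    # overridden by prod

# Replacing the section wholesale silently drops pool_size and port:
replaced = {**defaults, **prod}
assert "pool_size" not in replaced["database"]
```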
AI: Understood. But what about this scenario: defaults define three flags in a [feature_flags] section, and staging adds a fourth. Does the merged config contain all four flags, or only staging’s [feature_flags] section?
ME: Four flags. Obviously. Why would you throw away all the other flags?
AI: That makes sense for additive changes. But consider this case: defaults list a set of allowed domains, and prod lists a different set. Should prod’s list be appended to the defaults, or should the default domains be replaced entirely?
ME: (getting impatient) Prod replaces it. Look, the rule is simple:
- Primitive values: replace
- Objects: merge
- Arrays: replace
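Stated as code, those three rules are only a few lines. A sketch in Python, with dicts standing in for the parsed config:

```python
def merge(base, override):
    """Apply the stated rules: objects deep-merge, primitives and arrays replace."""
    if isinstance(base, dict) and isinstance(override, dict):
        out = dict(base)
        for key, value in override.items():
            # Recurse only when the key exists in both layers.
            out[key] = merge(base[key], value) if key in base else value
        return out
    return override  # primitives, arrays, type conflicts: later layer wins

assert merge({"a": True, "b": False}, {"c": True}) == {"a": True, "b": False, "c": True}
assert merge({"x": [1, 2]}, {"x": [3]}) == {"x": [3]}  # arrays replace
```

The trouble, as the rest of the conversation shows, is that these rules are simple in isolation and ambiguous in combination.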
AI: What if the default config has an oauth_config object containing a client_id and a scopes array, and the runtime layer supplies oauth_config with just a new scopes array? Does the merged config keep client_id? Or does the runtime oauth_config replace the entire object?
ME: (now annoyed) It merges! The whole point is layering. Runtime adds “write” scope, keeps client_id. This is exactly what I said three messages ago.
AI: I understand. But there’s a conflict between “objects merge” and “arrays replace.” In this case, scopes is an array inside an object. Should we:
- Deep merge (merge object, replace array): Final scopes = [“read”, “write”]
- Shallow merge (replace entire oauth_config): No client_id, just scopes
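The two options produce concretely different results. A small sketch with hypothetical values (the client_id and scope strings are illustrative, not from the post):

```python
# Hypothetical oauth_config layers.
default = {"oauth_config": {"client_id": "abc", "scopes": ["read"]}}
runtime = {"oauth_config": {"scopes": ["read", "write"]}}

# Option A: deep merge the object, replace the array wholesale.
deep = {"oauth_config": {**default["oauth_config"], **runtime["oauth_config"]}}
assert deep["oauth_config"] == {"client_id": "abc", "scopes": ["read", "write"]}

# Option B: shallow merge -- the runtime object replaces the whole thing.
shallow = {**default, **runtime}
assert "client_id" not in shallow["oauth_config"]
```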
AI: One more scenario: defaults define the database as a single url (connection string), while prod defines the individual components. Which takes precedence?
ME: (exasperated) Prod doesn’t have url, so it uses the individual components. We construct the URL from them. This isn’t that complicated.
AI: But default has url. After merging, final config has both url and the individual components. Which one does the database connection use?
ME: Look. I’ll add validation. If both exist, throw an error. Developer fixes their config. Problem solved.
Now can we please just implement this? We’ve spent 20 minutes on config merging.
AI: Understood. I’ll implement:
- Deep recursive merge for objects
- Replace for primitives and arrays
- Validation to catch conflicting keys like url vs individual components

What this design left unaddressed:
- Array merging (runtime can’t append, only replace)
- Nested object depth (no limit on recursion)
- Conflicting representations (url vs components)
- Implicit required fields (discovered at runtime)
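The validation rule we settled on can be sketched like this. The key names are my assumptions, not from the original config:

```python
# A config may define either a connection string or individual
# components, never both (hypothetical key names).
COMPONENT_KEYS = {"host", "port", "user", "password", "name"}

def validate_database(section: dict) -> None:
    overlap = COMPONENT_KEYS & section.keys()
    if "url" in section and overlap:
        raise ValueError(
            f"database config defines both 'url' and components {sorted(overlap)}; "
            "pick one representation"
        )

validate_database({"url": "postgres://localhost/app"})   # fine
validate_database({"host": "localhost", "port": 5432})   # fine
```

The catch, as I was about to learn, is that this check is only useful if it runs after each merge layer, on the merged result, not just on each file in isolation.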
[Cut to: Production, Two Weeks Later]
Slack notification at 2:47 AM.

Going Back to the Transcript
I pull up the conversation. Re-read it. Every single question the AI asked was pointing at this exact problem:
- “How should nested objects merge?” → I said deep merge, but didn’t think about the implications
- “What happens if environment config removes a default key?” → Can’t remove, only override
- “Both url and components?” → Threw it off as an edge case, said we’d validate
- “Which takes precedence?” → Assumed it wouldn’t happen
- “Should there be a way to unset a default?” → Said no, but that’s exactly what we needed
What I Did Wrong
Let me be specific about my mistakes.

Mistake 1: Assumed simplicity meant no edge cases. “Config merging is simple” is true for the happy path. But production systems hit edge cases constantly. The AI was trying to map out the edge cases. I was dismissing them as overthinking.

Mistake 2: Confused “I understand the problem” with “I’ve specified the solution.” I understood what I wanted: layered config with sensible defaults. But I hadn’t specified what “sensible” meant for conflicting keys, type mismatches, or removal semantics. The AI was asking for a specification. I was giving it vibes.

Mistake 3: Prioritized speed over clarity. “We’ve spent 20 minutes on this” was me being impatient. But 20 minutes of design discussion would have saved 3 hours of debugging at 3 AM. The AI’s questions were front-loading the complexity. I was deferring it to production.

Mistake 4: Dismissed the AI’s uncertainty as lack of capability. When the AI kept asking questions, I read it as “the AI doesn’t understand config merging.” Actually, the AI understood config merging deeply enough to know that “just merge it” is underspecified. My frustration was a signal that I was missing something.

The Fix: What AI Suggested All Along
After the production incident, I did what I should have done initially: systematically answered every question the AI asked.
The Proper Design (20 Minutes I Should Have Spent Earlier)
Merge Semantics:
- Objects: Deep merge recursively
- Primitives: Replace (later layer wins)
- Arrays: Replace entirely (no append/merge)

Required Fields:
- Declared explicitly, checked after the final merge (no more discovering them at runtime)

Unsetting Defaults:
- Explicit null: Remove the key from final config

Validation:
- Validate after each layer merge
- Fail fast with actionable errors
- Point to conflicting config sources
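Put together, the final rules fit in a short function. This is a sketch, assuming Python dicts stand in for the parsed config layers; the layer names and the validate hook are illustrative:

```python
def merge_layer(base, override):
    """Deep-merge one layer: objects merge, primitives/arrays replace,
    and an explicit null (None) unsets a key."""
    out = dict(base)
    for key, value in override.items():
        if value is None:
            out.pop(key, None)  # explicit null removes the key
        elif isinstance(out.get(key), dict) and isinstance(value, dict):
            out[key] = merge_layer(out[key], value)
        else:
            out[key] = value  # primitives, arrays, type conflicts: replace
    return out

def resolve(layers, validate=None):
    """Fold layers in order, validating after each merge (fail fast)."""
    config = {}
    for name, layer in layers:
        config = merge_layer(config, layer)
        if validate:
            validate(name, config)  # errors can name the offending layer
    return config

final = resolve([
    ("defaults", {"db": {"url": "postgres://localhost", "pool": 10}}),
    ("prod",     {"db": {"url": None, "host": "db.prod", "ssl": True}}),
])
assert final["db"] == {"pool": 10, "host": "db.prod", "ssl": True}
```

Note how the null-unset rule resolves the url-vs-components conflict cleanly: prod removes the default url instead of relying on a runtime error.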
What I Learned About Working With AI
Lesson 1: AI questions are usually cheaper than production bugs. When AI asks lots of clarifying questions, it’s not being slow. It’s pattern matching against failure modes you haven’t seen yet. 20 minutes of Q&A vs 3 hours of debugging at 3 AM is a good trade.

Lesson 2: “Obviously” is a red flag. Every time I said “obviously” in that conversation, I was papering over ambiguity.
- “Obviously it merges” → Didn’t specify merge depth
- “Obviously keep the defaults” → Didn’t define precedence rules
- “Obviously throw an error” → Didn’t say when to validate
The Broader Lesson: When to Trust AI, When to Trust Yourself
This experience taught me a framework for AI collaboration.

Trust Yourself For:
Product Intuition
- What users need
- Business priorities
- UX that “feels right”

Creative Judgment
- Novel solutions
- Unconventional approaches
- Aesthetic choices

Organizational Context
- Your codebase’s history
- Team dynamics
- Political constraints

Strategic Trade-offs
- Technical debt vs speed
- Build vs buy
- When to ship vs when to polish
Trust AI For:
Pattern Recognition
- Common failure modes
- Edge cases you haven’t seen
- Standard approaches

Specification Gaps
- When requirements are underspecified
- Ambiguity in design
- Implicit assumptions

Systematic Coverage
- Test scenarios
- Error handling paths
- Validation rules

Interface Details
- API contracts
- Type signatures
- Null/error handling
How This Changed How I Prompt
Before this incident, I treated clarifying questions as friction to get past. Now I treat them as a free design review.

The Irony
The funniest part? This entire blog is about using AI agents effectively. I have articles about multi-agent workflows, verification processes, and systematic design. And I still tried to rush past the AI’s questions because I “knew better.” Turns out, knowing about effective AI collaboration and actually practicing it are different things. The AI was trying to do exactly what I’d documented in my own processes: clarify requirements, identify edge cases, validate assumptions. I just wasn’t ready to hear it.

What Would I Do Differently?
If I could redo that conversation:

When AI asks “How should nested objects merge?”
- Old me: “Obviously merge them”
- New me: “Let me write out 3 examples with different nesting depths and show the expected output”

When AI flags conflicting representations (url vs individual components):
- Old me: “We’ll validate and error”
- New me: “Good catch. Let me define a precedence rule or use explicit strategy selection”

When I catch myself getting impatient:
- Old me: “Can we just implement this?”
- New me: “These questions are surfacing design gaps. Let me answer them systematically.”