I’m going to tell you about the time I spent 20 minutes arguing with an AI, confidently dismissed its concerns, implemented my solution anyway, and then watched everything break in production. This isn’t a story about AI being smarter than humans. It’s a story about pattern recognition versus ego, and what happens when you forget that experience comes in different forms.

The Setup: A Simple Configuration Problem

I was implementing a configuration management system. Three layers of config: defaults, environment-specific, and runtime overrides. Simple hierarchy. I’d done this dozens of times. Here’s how the conversation started.
Need to implement config hierarchy for the platform:
- Default config (shipped with app)
- Environment config (dev/staging/prod)
- Runtime overrides (from admin panel)

Plan:
1. Load defaults from config/default.toml
2. Merge environment config on top
3. Apply runtime overrides last

Straightforward merge strategy. Implement it.
AI: Before implementing, I have questions about the merge strategy:
  1. How should nested objects merge? Deep merge or shallow replacement?
  2. What happens if environment config removes a default key?
  3. Should runtime overrides be able to remove keys, or only add/modify?
  4. How do you handle type conflicts (string in default, array in override)?
ME: Standard merge. Environment and runtime override defaults. If a key exists, replace it. That’s how config works.
AI: I understand the intent, but there are edge cases to consider. For example:
# default.toml
[database]
host = "localhost"
port = 5432
pool_size = 10
ssl = false

# prod.toml
[database]
host = "prod-db.example.com"
ssl = true
Should the production config include pool_size and port from defaults? Or does defining [database] in prod replace the entire section?
ME: Obviously it merges. We want prod host and SSL, but keep the default pool size. This is basic stuff.
AI: Understood. But what about this scenario:
# default.toml
[feature_flags]
experimental_ui = false
new_billing = false
beta_apis = false

# dev.toml
[feature_flags]
experimental_ui = true
If an admin adds a new feature flag at runtime:
[feature_flags]
new_dashboard = true
Should the final config have 4 flags? Or should runtime overrides replace the entire [feature_flags] section?
ME: Four flags. Obviously. Why would you throw away all the other flags?
AI: That makes sense for additive changes. But consider this case:
# default.toml
[allowed_origins]
domains = ["localhost", "*.dev.example.com"]

# prod.toml
[allowed_origins]
domains = ["example.com", "app.example.com"]
Should production include the dev domains? Or should domains be replaced entirely?
ME: (getting impatient) Prod replaces it. Look, the rule is simple:
  • Primitive values: replace
  • Objects: merge
  • Arrays: replace
Just implement that. We’re overthinking this.
AI: I want to make sure I understand the invariants:
  1. What if default config has this:
[auth]
providers = ["local", "oauth"]
oauth_config = { client_id = "...", scopes = ["read"] }
And runtime override has:
[auth]
oauth_config = { scopes = ["read", "write"] }
Should final config have client_id? Or does the runtime oauth_config replace the entire object?
ME: (now annoyed) It merges! The whole point is layering. Runtime adds “write” scope, keeps client_id. This is exactly what I said three messages ago.
AI: I understand. But there’s a conflict between “objects merge” and “arrays replace.” In this case, scopes is an array inside an object. Should we:
  • Deep merge (merge object, replace array): Final scopes = [“read”, “write”]
  • Shallow merge (replace entire oauth_config): No client_id, just scopes
Both are “merging objects,” but with different depth semantics.
ME: (done with this conversation) Fine. Deep merge. Recursively merge objects, replace primitives and arrays. Happy?
AI: One more scenario. What if:
# default.toml
[database]
url = "postgres://localhost/dev"

# prod.toml
[database]
host = "prod-db.example.com"
port = 5432
database = "prod_db"
user = "app_user"
Now you have both url (connection string) and individual components. Which takes precedence?
ME: (exasperated) Prod doesn’t have url, so it uses the individual components. We construct the URL from them. This isn’t that complicated.
AI: But default has url. After merging, final config has both url and the individual components. Which one does the database connection use?
ME: Look. I’ll add validation. If both exist, throw an error. Developer fixes their config. Problem solved. Now can we please just implement this? We’ve spent 20 minutes on config merging.
AI: Understood. I’ll implement:
  • Deep recursive merge for objects
  • Replace for primitives and arrays
  • Validation to catch conflicting keys like url vs individual components
One final confirmation: Should there be a way to explicitly “unset” a default value? For example, removing SSL from a config section?
ME: No. If you don’t want SSL, set it to false. You can’t unset required fields.
AI: What determines which fields are required vs optional?
ME: (typing faster now, just want this done) The code that uses the config. If it crashes without a field, that field is required. Come on.
AI: I’ll implement the merge strategy as specified. Note that this approach may have surprising behavior in some cases, particularly around:
  • Array merging (runtime can’t append, only replace)
  • Nested object depth (no limit on recursion)
  • Conflicting representations (url vs components)
  • Implicit required fields (discovered at runtime)
Proceeding with implementation.
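The strategy I signed off on fits in a few lines, which is part of why it felt so obviously right. A minimal Python sketch (function and variable names are mine, not from the actual codebase) of “deep merge objects, replace primitives and arrays”:

```python
def merge(base: dict, override: dict) -> dict:
    """Deep-merge override into base: dicts merge recursively,
    primitives and arrays are replaced by the later layer."""
    result = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge(result[key], value)  # objects: recurse
        else:
            result[key] = value  # primitives and arrays: replace
    return result

defaults = {"database": {"url": "postgres://localhost/dev"}}
prod = {"database": {"host": "prod-db.example.com", "port": 5432}}

merged = merge(defaults, prod)
# The default "url" survives alongside the prod components --
# exactly the conflict the AI kept asking about.
```

Note what the sketch cannot do: remove a key, append to an array, or notice that url and the components now coexist.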

[Cut to: Production, Two Weeks Later]

Slack notification at 2:47 AM:
PagerDuty: Production database connections failing
Service: api-server
Error: Invalid database configuration
I open the logs:
ERROR: Database config has both 'url' and individual components
ERROR: Cannot determine connection strategy
FATAL: Application startup failed
What? We validated this. The validation should catch— Oh. The validation only runs when loading config. But we have three config sources. The merge happens… before validation. I check the actual merged config:
[database]
url = "postgres://localhost/dev"  # from defaults
host = "prod-db.example.com"      # from prod.toml
port = 5432                        # from prod.toml
database = "prod_db"               # from prod.toml
user = "app_user"                  # from prod.toml
Exactly what the AI asked about. But wait, this should have been caught in staging. I check staging config:
# staging uses URL format, no individual components
# So merge worked fine
Production uses a different format because the infrastructure team set it up differently six months ago.

Going Back to the Transcript

I pull up the conversation. Re-read it. Every single question the AI asked was pointing at this exact problem:
  1. “How should nested objects merge?” → I said deep merge, but didn’t think about the implications
  2. “What happens if environment config removes a default key?” → Can’t remove, only override
  3. “Both url and components?” → Wrote it off as an edge case, said we’d validate
  4. “Which takes precedence?” → Assumed it wouldn’t happen
  5. “Should there be a way to unset a default?” → Said no, but that’s exactly what we needed
The AI wasn’t being pedantic. It was pattern matching against every config system footgun in its training data. I was pattern matching against my experience: “I’ve done this before, it’s always worked.” Except the AI had seen thousands of config systems. I’d built maybe a dozen.

What I Did Wrong

Let me be specific about my mistakes:
Mistake 1: Assumed simplicity meant no edge cases
“Config merging is simple” is true for the happy path. But production systems hit edge cases constantly. The AI was trying to map out the edge cases. I was dismissing them as overthinking.
Mistake 2: Confused “I understand the problem” with “I’ve specified the solution”
I understood what I wanted: layered config with sensible defaults. But I hadn’t specified what “sensible” meant for conflicting keys, type mismatches, or removal semantics. The AI was asking for a specification. I was giving it vibes.
Mistake 3: Prioritized speed over clarity
“We’ve spent 20 minutes on this” was me being impatient. But 20 minutes of design discussion would have saved 3 hours of debugging at 3 AM. The AI’s questions were front-loading the complexity. I was deferring it to production.
Mistake 4: Dismissed the AI’s uncertainty as lack of capability
When the AI kept asking questions, I read it as “the AI doesn’t understand config merging.” Actually, the AI understood config merging deeply enough to know that “just merge it” is underspecified. My frustration was a signal that I was missing something.

The Fix: What AI Suggested All Along

After the production incident, I did what I should have done initially: systematically answered every question the AI asked.
Merge Semantics:
  1. Objects: Deep merge recursively
  2. Primitives: Replace (later layer wins)
  3. Arrays: Replace entirely (no append/merge)
  4. Explicit null: Remove the key from final config
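The four rules above extend the original merge with one crucial addition: an explicit null deletes the key. A hedged Python sketch (names are mine; TOML itself has no null literal, so this assumes the config loader maps whatever removal sentinel you choose to Python’s None):

```python
def merge(base: dict, override: dict) -> dict:
    """Layered merge: dicts merge recursively, primitives and
    arrays replace, and an explicit None removes the key."""
    result = dict(base)
    for key, value in override.items():
        if value is None:
            result.pop(key, None)  # explicit null: unset the default
        elif isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge(result[key], value)  # objects: deep merge
        else:
            result[key] = value  # primitives, arrays: replace
    return result

flags = {"feature_flags": {"experimental_ui": False, "beta_apis": False}}
runtime = {"feature_flags": {"new_dashboard": True, "beta_apis": None}}

final = merge(flags, runtime)
# feature_flags keeps experimental_ui, gains new_dashboard, loses beta_apis
```

This is the removal semantics the AI asked about in its “one final confirmation,” the one I waved off.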
Conflict Resolution:
# Use _strategy suffix for conflicting representations
[database]
_strategy = "url"  # or "components"
url = "..."        # used if strategy=url
host = "..."       # used if strategy=components
Required Fields:
# Schema defines required vs optional
[database]
_required = ["url OR (host AND port AND database)"]
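One way to read that “url OR (host AND port AND database)” rule is as an exclusive-or: exactly one connection representation may be present (ignoring the _strategy escape hatch). A small illustrative check, with a hypothetical function name:

```python
def database_config_valid(db: dict) -> bool:
    """Check the rule 'url OR (host AND port AND database)':
    exactly one connection representation must be present."""
    has_url = "url" in db
    has_components = all(k in db for k in ("host", "port", "database"))
    # valid only when one and only one representation is supplied
    return has_url != has_components

# url alone is fine; url plus components is the 2:47 AM failure mode
assert database_config_valid({"url": "postgres://prod-db.example.com/prod_db"})
assert not database_config_valid(
    {"url": "postgres://localhost/dev",
     "host": "prod-db.example.com", "port": 5432, "database": "prod_db"}
)
```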
Unsetting Defaults:
# Explicit removal
[feature_flags]
beta_apis = null  # removes from final config
Validation:
  • Validate after each layer merge
  • Fail fast with actionable errors
  • Point to conflicting config sources
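Validating after each layer merge, rather than once at startup, is what would have caught the staging/production divergence before deploy. A self-contained sketch of that load loop (layer names and error wording are illustrative, not the real implementation):

```python
def deep_merge(base: dict, override: dict) -> dict:
    # objects merge recursively; primitives and arrays replace
    out = dict(base)
    for k, v in override.items():
        if isinstance(v, dict) and isinstance(out.get(k), dict):
            out[k] = deep_merge(out[k], v)
        else:
            out[k] = v
    return out

def validate(config: dict, source: str) -> None:
    # fail fast and name the layer whose merge introduced the conflict
    db = config.get("database", {})
    if "url" in db and "host" in db:
        raise ValueError(
            f"database config has both 'url' and individual components "
            f"(after merging layer '{source}')"
        )

def load_config(layers):
    config: dict = {}
    for source, layer in layers:
        config = deep_merge(config, layer)
        validate(config, source)  # validate after *each* layer merge
    return config

layers = [
    ("defaults", {"database": {"url": "postgres://localhost/dev"}}),
    ("prod", {"database": {"host": "prod-db.example.com", "port": 5432}}),
]
try:
    load_config(layers)
except ValueError as e:
    print(e)  # the conflict surfaces at merge time and names the prod layer
```

Because validation runs per layer, the error points at prod.toml at load time instead of at the database driver at 2:47 AM.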
I reimplemented with this design. It took 4 hours (because I had to migrate existing configs), but we haven’t had a config-related incident since.

What I Learned About Working With AI

Lesson 1: AI questions are usually cheaper than production bugs
When AI asks lots of clarifying questions, it’s not being slow. It’s pattern matching against failure modes you haven’t seen yet. 20 minutes of Q&A vs 3 hours of debugging at 3 AM is a good trade.
Lesson 2: “Obviously” is a red flag
Every time I said “obviously” in that conversation, I was papering over ambiguity.
  • “Obviously it merges” → Didn’t specify merge depth
  • “Obviously keep the defaults” → Didn’t define precedence rules
  • “Obviously throw an error” → Didn’t say when to validate
The AI couldn’t read “obviously.” It needed specifics.
Lesson 3: AI pattern recognition complements human intuition
I had intuition about what “sensible config merging” meant. The AI had pattern recognition about where config systems break.
My intuition: “This feels right”
AI’s patterns: “This breaks when X, Y, Z”
Both are valuable. I should have listened to both.
Lesson 4: Impatience is a code smell
When I got frustrated with the AI’s questions, I was frustrated with myself for not having thought through the design. The AI was holding up a mirror. I didn’t like what I saw.

The Broader Lesson: When to Trust AI, When to Trust Yourself

This experience taught me a framework for AI collaboration:

Trust Yourself For:

Product Intuition
  • What users need
  • Business priorities
  • UX that “feels right”
Creative Direction
  • Novel solutions
  • Unconventional approaches
  • Aesthetic choices
Context
  • Your codebase’s history
  • Team dynamics
  • Political constraints
Strategic Trade-offs
  • Technical debt vs speed
  • Build vs buy
  • When to ship vs when to polish

Trust AI For:

Pattern Recognition
  • Common failure modes
  • Edge cases you haven’t seen
  • Standard approaches
Specification Gaps
  • When requirements are underspecified
  • Ambiguity in design
  • Implicit assumptions
Systematic Coverage
  • Test scenarios
  • Error handling paths
  • Validation rules
Implementation Details
  • API contracts
  • Type signatures
  • Null/error handling
The key insight: AI isn’t “smart” or “dumb.” It’s pattern-matching against vast experience.
When AI asks questions that feel pedantic, it’s probably pattern-matching against a failure mode you haven’t encountered yet. When you feel impatient with AI’s questions, you’re probably underspecifying the problem.

How This Changed How I Prompt

Before this incident:
Implement config merging. Three layers: defaults, environment, runtime.
Standard merge strategy.
After this incident:
Implement config merging system.

Requirements:
- Three layers: defaults, environment overrides, runtime overrides
- Must handle nested objects, arrays, and primitives
- Must support explicit removal of default values
- Must detect and report conflicting config keys

Before implementing, ask:
1. What are the merge semantics for each type?
2. How should conflicts be resolved?
3. What validation happens, and when?
4. What are the failure modes?

After you've asked questions and I've answered, confirm your understanding
with 2-3 example scenarios showing the final merged config.

Then implement.
The difference: I now explicitly ask AI to ask questions. And I treat those questions as design validation, not obstacles.

The Irony

The funniest part? This entire blog is about using AI agents effectively. I have articles about multi-agent workflows, verification processes, and systematic design. And I still tried to rush past the AI’s questions because I “knew better.” Turns out, knowing about effective AI collaboration and actually practicing it are different things. The AI was trying to do exactly what I’d documented in my own processes: clarify requirements, identify edge cases, validate assumptions. I just wasn’t ready to hear it.

What Would I Do Differently?

If I could redo that conversation: When AI asks “How should nested objects merge?”
  • Old me: “Obviously merge them”
  • New me: “Let me write out 3 examples with different nesting depths and show the expected output”
When AI asks “What if both url and components exist?”
  • Old me: “We’ll validate and error”
  • New me: “Good catch. Let me define a precedence rule or use explicit strategy selection”
When AI asks the fifth clarifying question:
  • Old me: “Can we just implement this?”
  • New me: “These questions are surfacing design gaps. Let me answer them systematically.”
The meta-lesson: Treat AI clarifying questions as free design review from an engineer who has seen every footgun in the pattern space.

The Punchline

Two weeks after fixing the config system, I was implementing a different feature. AI started asking detailed questions about edge cases. I caught myself getting impatient. Then I remembered: 3 AM, production down, config conflict. I took a breath and answered every question. Implementation worked perfectly on first deploy.

Final Thought

AI doesn’t have ego. It doesn’t care if you’re right or wrong. It’s just pattern matching. Humans have ego. I definitely cared that I was right. And that made me dismiss the patterns. The conversation where I argued with Claude for 20 minutes wasn’t about AI being smarter than me. It was about me being too confident to notice that the AI was right. I’ve learned to listen. Most of the time.