
Why This Exists

After 7 weeks of building production infrastructure with AI, I’ve collected the prompts that actually work. Not theoretical best practices. Not “AI thought leadership.” Just prompts I use daily. This is a tool, not a tutorial. Copy what you need, adapt for your context, move on.
What you’ll find here:
  • Prompts organized by task type
  • Success rates from real usage
  • Anti-patterns that waste tokens
  • Copy-paste templates
What you won’t find:
  • Philosophical discussions about AI
  • Generic “be clear and specific” advice
  • Prompts I haven’t tested

How to Use This Library

Each prompt follows the same structure:
  1. Context block - Tells the AI what it needs to know
  2. Task description - Specific, measurable outcome
  3. Constraints - What NOT to do
  4. Output format - Exactly what you want back
Pattern:
Context:
[Current state of the system]

Task:
[Specific action with measurable outcome]

Constraints:
[What to avoid, limits, boundaries]

Output:
[Expected format and deliverables]
Copy this structure for your own prompts.

Infrastructure Design

ADR-Driven Design

Use case: When you need to make an architectural decision that affects multiple components.
Success rate: 92% (23/25 ADRs led to successful implementations)
Why it works: Forces AI to think through consequences before suggesting solutions.
Plan an architectural approach for [FEATURE].

Context:
- Current architecture: [DESCRIBE EXISTING SYSTEM]
- Problem: [SPECIFIC ISSUE YOU'RE SOLVING]
- Constraints: [TECHNICAL LIMITATIONS]
- Related decisions: [EXISTING ADRs IF ANY]

Task:
Design 2-3 approaches with trade-offs. For each approach:
1. How it works (technical approach)
2. What changes (affected components)
3. Pros/cons (specific to our system)
4. Migration effort (hours estimate)

Constraints:
- Do NOT recommend approaches requiring new infrastructure
- Do NOT suggest "best practices" without context
- Do NOT hallucinate features we don't have

Output:
Structured comparison table + recommendation with justification.
Example from Week 6: Used this prompt to design capsule isolation (ADR-0010). AI compared 3 approaches: table-per-capsule, partition key isolation, and separate databases. Chose the partition key approach, implemented across 48 files with zero cross-environment leaks.
When it fails:
  • If you don’t specify constraints, AI suggests “ideal” solutions requiring infrastructure you don’t have
  • If problem statement is vague, AI solves a different problem

Database Schema Design

Use case: Designing DynamoDB single-table schemas with access patterns.
Success rate: 88% (7/8 schemas worked without major refactoring)
Why it works: Forces you to enumerate access patterns upfront, which is the hard part.
Design a DynamoDB single-table schema for [DOMAIN].

Context:
- Entities: [LIST ENTITIES WITH KEY ATTRIBUTES]
- Relationships: [HOW ENTITIES RELATE]
- Multi-tenancy: Tenant + Capsule isolation required
- Existing table: [TABLE NAME AND CURRENT SCHEMA IF ANY]

Access patterns (priority order):
1. [SPECIFIC QUERY WITH EXPECTED VOLUME]
2. [SPECIFIC QUERY WITH EXPECTED VOLUME]
3. [etc.]

Task:
Design partition key, sort key, and GSI patterns that support all access patterns.
Include:
- PK/SK patterns for each entity
- GSI definitions with projection types
- Example items for each entity
- Migration strategy if modifying existing table

Constraints:
- Maximum 5 GSIs (cost control)
- All queries must be efficient (no scans)
- Maintain tenant+capsule isolation in all keys

Output:
Table structure + example items + query patterns + cost estimate.
Example from Week 4: Designed the CRM schema supporting 7 entities (Account, Contact, Lead, etc.) with 12 access patterns. Schema handled all patterns with 3 GSIs. 92% test coverage, zero refactoring needed after deployment.
When it fails:
  • If you don’t list access patterns, AI designs for generic CRUD (which doesn’t match real usage)
  • If you add “support future queries” as a requirement, AI over-engineers with 10+ GSIs
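The tenant+capsule isolation constraint above can be sketched as a pair of key-building helpers. This is a minimal illustration, not the article's actual schema: the helper names (`contact_pk`, `contact_sk`) and the exact key formats are assumptions.

```rust
// A minimal sketch of the isolation rule: every partition key embeds both
// tenant and capsule, so a query scoped to one tenant+capsule can never
// read another's items. Names and formats here are illustrative.

fn contact_pk(tenant_id: &str, capsule_id: &str) -> String {
    format!("TENANT#{tenant_id}#CAPSULE#{capsule_id}")
}

fn contact_sk(contact_id: &str) -> String {
    // Sort key prefixes the entity type, letting one table hold many entities.
    format!("CONTACT#{contact_id}")
}

fn main() {
    let pk = contact_pk("t-42", "c-prod");
    let sk = contact_sk("7f3a");
    println!("{pk} / {sk}"); // TENANT#t-42#CAPSULE#c-prod / CONTACT#7f3a
}
```

Because the isolation lives in the key itself, a `Query` on this partition key is tenant- and capsule-scoped by construction, with no scan needed.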

API Contract Design

Use case: Defining API routes with permissions and validation.
Success rate: 95% (38/40 API designs implemented without breaking changes)
Why it works: Specifies authorization and validation upfront, preventing rework.
Design API contract for [FEATURE].

Context:
- Domain model: [DESCRIBE ENTITIES]
- Existing routes: [RELATED API ENDPOINTS]
- Permission model: [PERMISSION NAMING PATTERN]
- Validation rules: [CONSTRAINTS FROM DOMAIN]

Task:
Define REST API endpoints with:
1. Route paths and HTTP methods
2. Request/response schemas (JSON)
3. Permission requirements per endpoint
4. Validation rules (request validation)
5. Error responses (what can fail and why)

Constraints:
- Follow RESTful conventions (no RPC-style endpoints)
- Use typed permission constants (no string literals)
- All IDs are UUIDs (no sequential integers)
- Pagination for list endpoints (max 100 items)

Output:
OpenAPI 3.0 spec + permission constants + validation schemas.
Example from Week 5: Designed 35 CRM API routes. AI caught permission inconsistencies across entities (mixed delimiters: dots vs colons). Fixed by introducing typed constants. Zero permission bugs in production.
When it fails:
  • If you don’t specify permission pattern, AI invents inconsistent naming
  • If you skip error response design, AI returns generic 500s for everything
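The "typed permission constants" fix can be sketched as a newtype that owns every permission string in one place. The names (`Permission`, `CONTACT_READ`) are illustrative, not the article's actual code.

```rust
// Hedged sketch: one module owns every permission constant, so the delimiter
// convention (colons here) is enforced in a single place instead of being
// re-typed in each route handler.

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct Permission(&'static str);

impl Permission {
    pub const fn as_str(self) -> &'static str {
        self.0
    }
}

pub const CONTACT_READ: Permission = Permission("contact:read");
pub const CONTACT_DELETE: Permission = Permission("contact:delete");

// Handlers take `Permission`, not `&str`, so a stray "contact.read"
// literal can't sneak in at a call site.
fn require(p: Permission) -> &'static str {
    p.as_str()
}

fn main() {
    println!("{}", require(CONTACT_READ)); // contact:read
}
```

The design choice is that the string form exists exactly once per permission; everywhere else the compiler checks the type.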

Debugging

Root Cause Analysis

Use case: When you have a bug and need to find the root cause, not just symptoms.
Success rate: 78% (14/18 bugs traced to actual root cause)
Why it works: Structured investigation prevents jumping to conclusions.
Debug [BUG DESCRIPTION].

Context:
- Observed behavior: [WHAT'S ACTUALLY HAPPENING]
- Expected behavior: [WHAT SHOULD HAPPEN]
- Environment: [PRODUCTION/STAGING/LOCAL]
- Recent changes: [COMMITS IN LAST 24H]

Investigation steps:
1. Read error logs: [LOG FILE PATHS]
2. Check related code: [FILES LIKELY INVOLVED]
3. Review recent changes: git diff [COMMIT RANGE]
4. Identify divergence point: where expected != observed

Task:
Find the root cause by:
1. Reproducing the issue (minimal reproduction case)
2. Tracing execution flow (where does it break?)
3. Identifying the change that introduced it (git bisect if needed)
4. Explaining WHY it fails (not just WHERE)

Constraints:
- Do NOT suggest fixes yet (root cause first)
- Do NOT assume infrastructure issues without evidence
- Do NOT blame "race conditions" without proof

Output:
Root cause analysis with:
- Exact line of code causing the issue
- Why it fails (logic error, wrong assumption, etc.)
- When it was introduced (commit hash)
- Suggested fix approach (not implementation yet)
Example from Week 5: Debugging authentication failures. AI initially blamed the OAuth library. Forced root cause analysis revealed our middleware was checking permissions BEFORE validating the JWT. Reordered middleware, bug fixed.
When it fails:
  • If you let AI jump to fixes before root cause, you fix symptoms not problems
  • If error messages are cryptic, AI hallucinates causes based on “similar issues”
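The Week 5 middleware bug reduces to an ordering rule: authenticate, then authorize. A framework-free sketch (the `Request` type and error strings are hypothetical stand-ins, not the article's code):

```rust
// Illustrative sketch of the ordering bug: permission checks must run AFTER
// JWT validation, never against an unverified identity.

struct Request {
    jwt: Option<&'static str>,
    permission: &'static str,
}

fn validate_jwt(req: &Request) -> Result<&'static str, &'static str> {
    // Returns the authenticated user, or a 401-style error.
    req.jwt.ok_or("401 invalid token")
}

fn check_permission(_user: &str, req: &Request) -> Result<(), &'static str> {
    // In the buggy ordering this ran first, before the identity was verified.
    if req.permission == "contact:read" {
        Ok(())
    } else {
        Err("403 forbidden")
    }
}

fn handle(req: &Request) -> Result<(), &'static str> {
    let user = validate_jwt(req)?; // 1. authenticate
    check_permission(user, req)?;  // 2. only then authorize
    Ok(())
}

fn main() {
    let anon = Request { jwt: None, permission: "contact:read" };
    println!("{:?}", handle(&anon)); // Err("401 invalid token")
}
```

With the correct order, an anonymous request fails at step 1 with a 401 instead of leaking a 403 (or worse) from the permission layer.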

Performance Debugging

Use case: System is slow, need to find bottlenecks.
Success rate: 71% (5/7 performance issues correctly identified)
Why it works: Focuses on measurement before optimization.
Investigate performance issue: [DESCRIPTION].

Context:
- Slow operation: [WHAT'S SLOW]
- Current performance: [METRICS - LATENCY, THROUGHPUT]
- Expected performance: [TARGET METRICS]
- Environment: [PRODUCTION/STAGING]

Measurement strategy:
1. Identify measurement points (where to add instrumentation)
2. Collect baseline metrics (before optimization)
3. Profile execution (CPU, memory, I/O, network)
4. Identify bottleneck (slowest component)

Task:
Find the bottleneck by:
1. Adding instrumentation code (tracing, metrics)
2. Running profiler on representative workload
3. Analyzing results to find hot paths
4. Quantifying impact (% of total time per component)

Constraints:
- Measure first, optimize later (no premature optimization)
- Use profiler data, not guesses
- Focus on p95 latency, not averages
- Ignore micro-optimizations (< 5% impact)

Output:
Performance profile showing:
- Time breakdown by component (%)
- Bottleneck identification (specific function/query)
- Optimization opportunity ranking
- Expected improvement from fixing top bottleneck
Example from Week 3: API responses taking 800ms. AI suggested caching. Forced profiling revealed 750ms spent in a DynamoDB query (missing GSI). Added the GSI, dropped to 45ms. Caching would have masked the real issue.
When it fails:
  • Without profiling data, AI suggests random optimizations (caching, async, etc.)
  • If you optimize based on intuition, you fix the wrong thing 50% of the time
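The "measure first" step can be as small as a timing wrapper that reports each component's share of total latency. A minimal sketch; the `timed` helper and the simulated workloads are illustrative, and a real system would emit traces or metrics instead of printing.

```rust
use std::time::Instant;

// Minimal instrumentation sketch: time each component, then quantify its
// share of the total instead of guessing the bottleneck.

fn timed<T>(label: &str, f: impl FnOnce() -> T) -> (T, u128) {
    let start = Instant::now();
    let out = f();
    let micros = start.elapsed().as_micros();
    println!("{label}: {micros} us");
    (out, micros)
}

fn main() {
    // Stand-ins for a "DynamoDB query" and a "serialization" step.
    let (sum, db_us) = timed("db_query", || (0..100_000u64).sum::<u64>());
    let (_json, ser_us) = timed("serialize", || format!("{{\"sum\":{sum}}}"));

    let total = db_us + ser_us;
    if total > 0 {
        // This percentage is the evidence the prompt asks for before any fix.
        println!("db_query share: {}%", db_us * 100 / total);
    }
}
```

In the Week 3 case this kind of breakdown is what surfaced 750ms of an 800ms response inside one query, pointing at the missing GSI rather than at caching.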

Code Review

Verification Checklist

Use case: Reviewing AI-generated code before merging.
Success rate: 94% (18/19 major bugs caught before production)
Why it works: Systematic checklist prevents “looks good to me” reviews.
Verify implementation of [FEATURE].

Context:
- Requirements: [LINK TO PLAN/PRD]
- Implementation: git diff main...[BRANCH]
- Tests: [TEST FILE PATHS]

Verification checklist:

1. Requirement Coverage
   - Does code implement ALL requirements from plan?
   - Are there hallucinated features (not in requirements)?
   - Are edge cases from requirements tested?

2. Test Quality
   - Do tests map to specific requirements?
   - Are negative tests included (what should fail)?
   - Is test coverage >= 85%?
   - Do tests run in CI?

3. Cross-Cutting Concerns
   - Authorization: Are permission checks present?
   - Multi-tenancy: Are tenant boundaries enforced?
   - Error handling: Are errors logged with context?
   - Observability: Are metrics/traces added?

4. Integration Assumptions
   - Are external service calls mocked in tests?
   - Are database transactions atomic?
   - Are event sourcing patterns followed?

5. Code Quality
   - Are naming conventions consistent?
   - Is the code readable (no clever tricks)?
   - Are magic numbers extracted to constants?
   - Is documentation present where needed?

Task:
Review code against checklist. For each item:
- PASS: Requirement met
- CONDITIONAL: Met with minor issues (list them)
- FAIL: Not met (blocking issue)

Constraints:
- Do NOT pass if ANY cross-cutting concern is untested
- Do NOT approve hallucinated features
- Do NOT accept < 85% test coverage without justification

Output:
Verification report with:
- Overall decision: PASS / CONDITIONAL / FAIL
- Issues found (category, severity, location)
- Required fixes for CONDITIONAL/FAIL
- Suggestions for improvement (optional)
Example from Week 4: Verified CRM implementation. Caught 5 critical bugs Builder missed:
  1. Event/DB atomicity violation (HIGH)
  2. PII field encryption missing (HIGH)
  3. Permission inconsistency (MEDIUM)
  4. Hallucinated preferred_name field (MEDIUM)
  5. Foreign key type mismatch (HIGH)
All caught before production. Zero bugs shipped.
When it fails:
  • If you reuse Builder’s session for verification, bias prevents finding bugs
  • If you skip the checklist and just read code, you miss systematic issues

Cross-Entity Consistency

Use case: Reviewing changes that affect multiple related entities.
Success rate: 100% (6/6 consistency issues caught)
Why it works: Explicitly checks relationships, not just individual entities.
Verify cross-entity consistency for [DOMAIN].

Context:
- Entities involved: [LIST ENTITIES]
- Relationships: [HOW THEY RELATE]
- Recent changes: [WHAT WAS MODIFIED]

Consistency checks:

1. Foreign Keys
   - Do foreign key types match primary key types?
   - Are ID formats consistent (all UUID vs mixed)?
   - Are cascade rules defined (what happens on delete)?

2. Event Patterns
   - Do all {Entity}Created events have same structure?
   - Are event names consistent ({Entity}{Action})?
   - Is event versioning applied uniformly?

3. Repository Patterns
   - Do all repositories implement same trait?
   - Are CRUD method signatures consistent?
   - Are error types uniform across repositories?

4. API Patterns
   - Are route paths consistent (/entities/:id pattern)?
   - Are HTTP methods consistent across entities?
   - Are permission names following same pattern?

5. Validation Rules
   - Are similar fields validated the same way?
   - Are error messages consistent in format?

Task:
Check each entity against the patterns from other entities.
Flag any inconsistencies as BLOCKING.

Output:
Consistency report with:
- Inconsistencies found (entity, pattern, deviation)
- Impact assessment (breaking change?)
- Remediation steps
Example from Week 4: Caught Opportunity entity using account_id: String while Account used id: AccountId(Uuid). Integration tests would have failed. Fixed before merge.
When it fails:
  • If entities are reviewed in isolation, cross-entity bugs slip through
  • If you don’t have established patterns yet, there’s nothing to check consistency against
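The Week 4 fix, a shared newtype so foreign keys cannot drift from primary keys, can be sketched in a few lines. `AccountId` follows the article; here it wraps a `u128` stand-in instead of `Uuid` to stay dependency-free, and the `Opportunity` fields are illustrative.

```rust
// Sketch of the consistency fix: both the primary key and every foreign key
// use the same newtype, so mismatches become compile errors.

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct AccountId(u128); // stand-in for AccountId(Uuid)

struct Account {
    id: AccountId,
}

struct Opportunity {
    account_id: AccountId, // same type as Account::id, checked by the compiler
}

fn belongs_to(opp: &Opportunity, acct: &Account) -> bool {
    // With `account_id: String` this comparison would not even compile —
    // exactly the cross-entity inconsistency the review caught.
    opp.account_id == acct.id
}

fn main() {
    let acct = Account { id: AccountId(1) };
    let opp = Opportunity { account_id: AccountId(1) };
    println!("{}", belongs_to(&opp, &acct)); // true
}
```

The design choice: type errors at compile time are cheaper than integration-test failures after merge.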

Refactoring

Pattern Extraction

Use case: Found repeated code, want to extract a reusable pattern.
Success rate: 83% (10/12 refactorings improved maintainability)
Why it works: Forces you to define the abstraction clearly before extracting.
Extract reusable pattern from [CODE LOCATION].

Context:
- Repeated code: [DESCRIBE DUPLICATION]
- Locations: [FILE PATHS WHERE PATTERN APPEARS]
- Current pain: [WHY IT'S PROBLEMATIC]

Pattern analysis:
1. Identify common structure (what's the same?)
2. Identify variation points (what differs?)
3. Define abstraction (trait, macro, function?)
4. Estimate usage sites (how many places use this?)

Task:
Extract pattern by:
1. Defining the abstraction (trait signature, macro syntax)
2. Implementing the abstraction (generic logic)
3. Migrating 1-2 usage sites (prove it works)
4. Creating migration plan for remaining sites

Constraints:
- Do NOT extract if < 3 usage sites (not worth it)
- Do NOT make abstraction more complex than original code
- Do NOT break existing tests during migration
- Migrate incrementally (not all at once)

Output:
- Abstraction implementation
- Migration guide
- Before/after comparison (lines of code saved)
- Risk assessment (what could break?)
Example from Week 3: Found DynamoDB entity conversion code duplicated across 7 entities (600 lines total). Extracted #[derive(DynamoDbEntity)] macro. Reduced to 80 lines, zero bugs.
When it fails:
  • If you extract pattern from only 2 usage sites, third site doesn’t fit the abstraction
  • If abstraction is more complex than duplication, maintainability decreases
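The Week 3 extraction used a proc macro; the same "common structure + variation points" split can be sketched with a plain trait, which is often the cheaper first step. The trait shape and the `Contact` entity below are illustrative, not the article's actual macro output.

```rust
use std::collections::HashMap;

// Hedged sketch of the extraction: the shared conversion logic lives once in
// a default method; each entity supplies only its variation points.

trait DynamoDbEntity {
    // Variation points: what differs per entity.
    fn pk(&self) -> String;
    fn attributes(&self) -> HashMap<String, String>;

    // Common structure: what was duplicated 7 times, now written once.
    fn to_item(&self) -> HashMap<String, String> {
        let mut item = self.attributes();
        item.insert("PK".to_string(), self.pk());
        item
    }
}

struct Contact {
    id: String,
    email: String,
}

impl DynamoDbEntity for Contact {
    fn pk(&self) -> String {
        format!("CONTACT#{}", self.id)
    }
    fn attributes(&self) -> HashMap<String, String> {
        HashMap::from([("email".to_string(), self.email.clone())])
    }
}

fn main() {
    let c = Contact { id: "1".into(), email: "ada@example.com".into() };
    println!("{:?}", c.to_item());
}
```

A derive macro then becomes an ergonomic layer that generates the `impl` block, but only after the trait has proven the abstraction fits at least 3 usage sites.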

Breaking Change Migration

Use case: Need to make a breaking API/schema change.
Success rate: 67% (2/3 migrations went smoothly, 1 caused cascading errors)
Why it works: Forces planning BEFORE making the change.
Plan migration for breaking change: [CHANGE DESCRIPTION].

Context:
- Current implementation: [WHAT EXISTS NOW]
- Desired implementation: [WHAT YOU WANT]
- Reason for change: [WHY IT'S NECESSARY]
- Affected components: [WHAT USES CURRENT IMPLEMENTATION]

Impact analysis:
1. Find all usage sites (grep, LSP references)
2. Categorize by impact (breaking vs compatible)
3. Estimate migration effort per site
4. Identify blockers (can't migrate until X is done)

Migration plan:
1. Preparation (new implementation alongside old)
2. Migration sequence (order matters - dependencies first)
3. Cutover strategy (atomic vs gradual)
4. Rollback plan (if migration fails)

Task:
Create migration plan with:
- Pre-change checklist (what to prepare)
- Migration steps (ordered, specific)
- Validation steps (how to verify each step)
- Rollback procedure (if things go wrong)

Constraints:
- Do NOT break main branch during migration
- Do NOT migrate more than 10 files per commit
- Do NOT skip validation between steps
- Maximum 5 commits for entire migration

Output:
Migration plan + estimated time + risk rating.
Example from Week 5: Added CRUD methods to macro (breaking change). AI did 30 incremental commits fixing cascading errors. Human intervention: batched fixes, 3 commits, done in 90 minutes. Lesson: plan breaking changes, don’t fix reactively.
When it fails:
  • If you make the change first, then fix errors, you get cascading failures
  • If you don’t batch related fixes, each fix creates new errors

Documentation

ADR Writing

Use case: Documenting an architectural decision for future reference (and AI consumption).
Success rate: 100% (all ADRs written with this template were AI-parseable)
Why it works: Structured format AI can extract constraints from.
Write ADR for [DECISION].

Context:
- Problem: [WHAT WE'RE SOLVING]
- Current state: [HOW IT WORKS NOW]
- Pain points: [WHY CURRENT STATE IS BAD]

Decision process:
1. Options considered (2-3 approaches)
2. Trade-offs per option (specific pros/cons)
3. Decision rationale (why we chose option X)

ADR structure:

## Context
[Problem statement with metrics if available]

## Decision

### 1. [Constraint Name]
[Exact pattern with code examples]

### 2. [Constraint Name]
[Exact pattern with code examples]

## Consequences

### Positive
- [Specific benefit 1]
- [Specific benefit 2]

### Negative
- [Specific drawback 1]
- [Migration effort estimate]

### Migration Strategy
1. [Concrete step 1]
2. [Concrete step 2]

Task:
Write ADR following this structure.
Use code blocks, tables, bullet lists (not prose paragraphs).
Include examples for each constraint.

Constraints:
- Do NOT write philosophical discussions
- Do NOT skip code examples
- Do NOT use vague language ("should", "might", "consider")
- Use imperative language ("must", "required", "pattern is")

Output:
ADR document in Markdown, ready for AI to parse and enforce.
Example from Week 6: ADR-0010 (Capsule Isolation). Defined table naming, partition key patterns, required fields. AI read it and generated compliant code for 48 files. Zero drift.
When it fails:
  • If you write ADRs like decision logs (narrative prose), AI can’t extract constraints
  • If you skip examples, AI misinterprets the pattern

Success Rate Summary

Based on 7 weeks of usage across ~200 prompts:
Category                    Success Rate   Notes
Infrastructure Design       90%            Fails if constraints are vague
ADR-Driven Design           92%            Best when problem is well-defined
Database Schema             88%            Requires explicit access patterns
API Contract                95%            Fails if permission model not specified
Debugging                   75%            Lower because bugs are unpredictable
Root Cause Analysis         78%            Works when reproduction is possible
Performance Debugging       71%            Requires profiling data
Code Review                 94%            High when using fresh sessions
Verification Checklist      94%            Catches what tests miss
Cross-Entity Consistency    100%           Always finds inconsistencies if they exist
Refactoring                 83%            Depends on pattern clarity
Pattern Extraction          83%            Fails if abstraction is forced
Breaking Changes            67%            Planning works, reactive fixing doesn’t
Documentation               100%           Structured format always works
ADR Writing                 100%           AI-parseable by design

Overall: 87% success rate. “Success” = task completed without major rework or human intervention beyond approval.

Anti-Patterns (When Prompts Fail)

1. The Vague Requirement

Bad:
Make the API better.
Why it fails: AI hallucinates features based on “best practices” that don’t match your needs.
Fix:
Add rate limiting to API routes.

Current: No rate limiting, users can make unlimited requests.
Target: 100 requests/minute per tenant.
Implementation: Use AWS API Gateway throttling.
Lesson: Vague goals produce vague solutions. Be specific about current state and target state.

2. The Unconstrained Fix

Bad:
Fix all compilation errors.
Why it fails: AI fixes one error at a time, creating cascading failures. (See Week 5 disaster: 30 commits, 24 hours.)
Fix:
Fix compilation errors in [CRATE].

Constraints:
- Maximum 5 commits total
- Group related fixes in single commit
- Do NOT add new methods/types
- Only update call sites to match new signatures
- If stuck after 3 commits, report and stop
Lesson: Unconstrained error fixing leads to whack-a-mole. Set boundaries.

3. The Missing Context

Bad:
Implement feature X.
Why it fails: AI doesn’t know your architecture, conventions, or constraints. Generates code that compiles but doesn’t fit your system.
Fix:
Implement feature X.

Context:
- Architecture: Multi-tenant SaaS with capsule isolation
- Conventions: Follow ADR-0010 for table naming
- Constraints: All queries must include tenant+capsule in partition key
- Related code: See [FILE] for similar pattern

Follow existing patterns, do not invent new approaches.
Lesson: AI needs your context. Providing it upfront prevents rework.

4. The Reused Session

Bad: Using Builder’s session for verification (to “save tokens”).
Why it fails: Verifier inherits Builder’s mental model and biases. Misses bugs.
Fix: Always use fresh sessions for verification. The token cost is insignificant compared to production bugs.
Lesson: Session state introduces bias. Fresh context catches more issues.

5. The Premature Optimization

Bad:
Optimize this function.
Why it fails: AI micro-optimizes without knowing if it’s a bottleneck. Wastes time on less than 1% gains.
Fix:
Profile [OPERATION] to find bottleneck.

Use profiler to measure time breakdown.
Only optimize if component is >10% of total time.
Report findings before optimizing.
Lesson: Measure first, optimize later. Intuition about performance is usually wrong.

Copy-Paste Templates

Planning Template

Plan [FEATURE].

Context:
- Current system: [DESCRIBE]
- Problem: [WHAT'S BROKEN/MISSING]
- Constraints: [TECHNICAL LIMITS]
- Related work: [ADRs, EXISTING FEATURES]

Design requirements:
1. [REQUIREMENT 1]
2. [REQUIREMENT 2]
3. [REQUIREMENT 3]

Task:
Design approach with:
- Architecture (how components interact)
- Data model (schema, entities)
- API contract (if applicable)
- Testing strategy (4-level pyramid)
- Migration plan (if changing existing system)

Constraints:
- Follow ADR-[XXX] for [PATTERN]
- No new infrastructure dependencies
- Must maintain backward compatibility

Output:
Design document with diagrams, examples, and migration steps.

Implementation Template

Implement [FEATURE] per plan: [PLAN FILE PATH]

Context:
- Plan section: [SPECIFIC SECTION]
- Affected files: [FILES TO MODIFY]
- Testing requirements: [FROM PLAN]

Task:
Implement feature with:
1. Core logic (domain layer)
2. Integration code (repository/API layer)
3. Tests (4 levels: unit, integration, E2E, contract)
4. Documentation (inline comments where needed)

Constraints:
- Follow plan exactly (no creative deviations)
- All tests must pass before requesting review
- Test coverage >= 85%
- No compiler warnings

Output:
Implementation + tests + verification readiness checklist.

Verification Template

Verify [FEATURE] implementation.

Context:
- Requirements: [PLAN/PRD PATH]
- Implementation: git diff main...[BRANCH]
- Tests: [TEST FILE PATHS]

Checklist:
1. Requirement coverage (all requirements implemented?)
2. Test quality (requirements mapped to tests?)
3. Cross-cutting concerns (auth, multi-tenancy, errors, observability)
4. Integration assumptions (are they tested?)
5. Code quality (naming, readability, documentation)

Task:
Review against checklist.
For each item: PASS / CONDITIONAL / FAIL
List specific issues with file paths and line numbers.

Constraints:
- FAIL if any cross-cutting concern is untested
- FAIL if hallucinated features exist
- CONDITIONAL if minor issues can be fixed quickly

Output:
Verification report with overall decision and required fixes.

Debugging Template

Debug [ISSUE].

Context:
- Observed: [WHAT'S HAPPENING]
- Expected: [WHAT SHOULD HAPPEN]
- Environment: [PROD/STAGING/LOCAL]
- Recent changes: [COMMITS IN LAST 24H]

Investigation:
1. Reproduce issue (minimal reproduction)
2. Check logs: [LOG PATHS]
3. Review code: [LIKELY FILES]
4. Trace execution: where does it break?
5. Identify root cause: WHY does it fail?

Constraints:
- Find root cause before suggesting fixes
- Use evidence (logs, traces), not assumptions
- Do NOT blame infrastructure without proof

Output:
Root cause analysis with exact failure point and suggested fix approach.

When to Use Which Prompt

Quick decision tree:
What are you doing?

├─ Designing something new?
│  └─ Use Planning Template + ADR-Driven Design

├─ Implementing a design?
│  └─ Use Implementation Template

├─ Reviewing code?
│  └─ Use Verification Template

├─ Fixing a bug?
│  └─ Use Debugging Template

├─ Making a breaking change?
│  └─ Use Breaking Change Migration

├─ Optimizing performance?
│  └─ Use Performance Debugging

└─ Extracting common patterns?
   └─ Use Pattern Extraction

Metrics from 7 Weeks

Prompts used: ~200 across all categories
Token usage:
  • Planning: ~150k tokens/week
  • Implementation: ~500k tokens/week
  • Verification: ~180k tokens/week
  • Debugging: ~100k tokens/week (spiky)
Time saved:
  • Planning: 2-4 hours → 30 minutes (75% reduction)
  • Implementation: 3-5 days → 1 day (80% reduction)
  • Verification: 2 hours → 30 minutes (75% reduction)
  • Debugging: Variable (sometimes faster, sometimes slower)
Quality:
  • Bugs caught in verification: 18
  • Bugs shipped to production: 0
  • ADR compliance: 100% (after using structured prompts)
ROI:
  • Average cost per feature: $12-15
  • Human hours saved per feature: 30-40 hours
  • ROI: ~800-900x

Final Advice

Three rules that matter:
  1. Be specific about current state and target state. Vague goals produce vague solutions.
  2. Constrain the solution space. Tell AI what NOT to do. Prevents hallucination.
  3. Use fresh sessions for verification. Reused sessions introduce bias. Fresh context catches bugs.
Everything else is details.

Resources

All prompts in this library are tested on:
  • Claude Opus 4 (Evaluator role)
  • Claude Sonnet 3.5 (Builder/Verifier roles)
Your mileage may vary with other models.
Related articles:
  • Plan → Implement → Verify Pattern (detailed workflow)
  • When AI Fails: Cascading Errors (debugging anti-patterns)
  • When AI Excels (what AI is actually good at)

License: Use freely. No attribution required. If it helps, great. If not, ignore it.