Stage Gate Workflow - Complete Documentation

Progressive Elaboration Framework for Software Development with Claude Code

Author: Sarfaraz Mulla

Published: December 28, 2025

1 Overview

Version: 1.0
Last Updated: 2025-12-28
Purpose: Structured, phase-gated workflow for enterprise software development
Audience: Engineers, Product Managers, and Technical Leads working with Claude Code

This document defines an 8-phase progressive elaboration workflow in which each phase builds on the previous one, with a clear decision gate between phases. The gates prevent common issues such as:

  • Jumping to implementation without understanding requirements
  • Choosing wrong technologies without justification
  • Missing architectural constraints
  • Building solutions that don’t solve the business problem

2 Phase 1: Business Context & Requirements

Goal: Understand the “why” before the “what”

2.1 Key Questions to Answer

  1. Business Problem
    • What business problem are we solving?
    • What pain points does this address?
    • What happens if we don’t solve this?
  2. User Persona
    • Who will use this feature/system?
    • What is their technical skill level?
    • How will they interact with it?
    • Are there external users (API consumers)?
  3. Success Criteria
    • What defines success for this project?
    • How will we measure it?
    • What are the acceptance criteria?
  4. Constraints
    • What are the non-negotiables?
    • Budget/timeline constraints?
    • Technical constraints (must use X framework)?
    • Compliance/security requirements?
    • Future scalability requirements?

2.2 Deliverables

  • Business Requirements Document (in plan file)
    • Problem statement
    • Target users
    • Success metrics
    • Constraints list
    • Future considerations

2.3 Phase Gate Criteria

Proceed if:

  • User confirms understanding of business problem
  • Success criteria are clear and measurable
  • Constraints are documented
  • Future roadmap is understood

Go back if:

  • Business problem is unclear
  • Multiple interpretations exist
  • Constraints are unknown

2.4 Example Output

## Business Context

**Problem**: Manufacturing teams are manually validating CSV data, leading to a 15% error rate in production systems.

**Users**:
- Supply chain managers (daily use)
- Data analysts (weekly reports)
- External systems via API (automated)

**Success Criteria**:
- Catch 95% of data errors before production
- Process 10k+ records in <30 seconds
- API-first design for microservice extraction

**Constraints**:
- Must integrate with existing AI Tools Suite
- Must use company-standard tech stack
- Must be extractable as a microservice in Q2 2025

3 Phase 2: Technical Architecture

Goal: Design the system structure before selecting tools

3.1 Key Questions to Answer

  1. Architecture Style
    • Monolith, microservice, serverless, hybrid?
    • Why this choice?
    • How does it align with business constraints?
  2. Integration Points
    • How does this fit into existing systems?
    • What APIs/interfaces are needed?
    • Data dependencies?
  3. Data Flow
    • How does data move through the system?
    • Where is data stored?
    • What transformations occur?
  4. External Dependencies
    • What external systems are required?
    • Third-party services?
    • Network requirements?

3.2 Deliverables

  • Architecture Diagram (in plan file)
    • System components
    • Integration points
    • Data flow
  • Dependency Map
    • External services
    • Internal services
    • Data sources

3.3 Phase Gate Criteria

Proceed if:

  • Architecture aligns with business constraints
  • Integration points are identified
  • Data flow is understood
  • Scalability is addressed

Go back if:

  • Architecture conflicts with constraints
  • Integration points are unclear
  • Performance concerns exist

4 Phase 3: Solution Design (Tech Stack)

Goal: Choose the right tools with clear justification

4.1 Key Questions to Answer

  1. Technology Selection
    • What technologies will we use?
    • Why each choice over alternatives?
    • How does it fit existing stack?
  2. Dependency Audit
    • What can we remove?
    • What can we simplify?
    • Are we adding unnecessary dependencies?
  3. Design Patterns
    • What patterns will we follow?
    • How does this ensure maintainability?
    • Consistency with existing code?

4.2 Deliverables

  • Tech Stack Document (in plan file)
    • Technology choices with justifications
    • Comparison with alternatives
    • Dependency list
    • Design patterns

4.3 Phase Gate Criteria

Proceed if:

  • Technology choices are justified
  • Alternatives were considered
  • Dependencies are minimized
  • Patterns are consistent with codebase

Go back if:

  • Unjustified technology choices
  • Excessive dependencies
  • Conflicts with existing stack

5 Phase 4: Detailed Design

Goal: Plan the specific implementation

5.1 Key Questions to Answer

  1. API Contract
    • What endpoints will we expose?
    • Request/response formats?
    • Error handling?
  2. Data Models
    • What Pydantic models do we need?
    • Validation rules?
    • Serialization formats?
  3. File Structure
    • What files will we create?
    • What files will we modify?
    • Directory organization?
  4. Critical Code Paths
    • What are the main execution flows?
    • Error handling paths?
    • Edge cases?

5.2 Deliverables

  • API Specification (in plan file)
    • Endpoint definitions
    • Request/response schemas
    • Error codes
  • Data Model Definitions (see the sketch after this list)
    • Pydantic models
    • Validation rules
  • File Structure Plan
    • New files to create
    • Existing files to modify
    • Critical code paths
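
To make these deliverables concrete, here is a minimal sketch of what the data models and one endpoint definition might look like for the CSV validator used as the running example. Every name in it (RuleViolation, ValidationReport, /api/validate, the single non-empty rule) is a hypothetical placeholder, and a FastAPI-style service is assumed; substitute the actual contract from your plan file.

```python
# Hypothetical Phase 4 artifacts for the CSV validator running example.
# Model names, fields, the endpoint path, and the single inline rule are
# illustrative placeholders; a FastAPI-style service is assumed.
import csv
import io

from fastapi import FastAPI, HTTPException, UploadFile
from pydantic import BaseModel, Field


class RuleViolation(BaseModel):
    """One rule violation found in the uploaded CSV."""
    row: int = Field(ge=0, description="Zero-based data row index")
    column: str
    rule: str
    message: str


class ValidationReport(BaseModel):
    """Response schema for the validation endpoint."""
    total_rows: int
    error_count: int
    errors: list[RuleViolation]


app = FastAPI()


@app.post("/api/validate", response_model=ValidationReport)
async def validate_csv(file: UploadFile) -> ValidationReport:
    """Validate an uploaded CSV and return a structured error report."""
    if not (file.filename or "").endswith(".csv"):
        raise HTTPException(status_code=400, detail="Expected a .csv file")
    text = (await file.read()).decode("utf-8")
    rows = list(csv.DictReader(io.StringIO(text)))
    # One illustrative rule: no cell may be empty.
    errors = [
        RuleViolation(row=i, column=col, rule="non_empty",
                      message=f"Empty value in column {col!r}")
        for i, record in enumerate(rows)
        for col, value in record.items()
        if not (value or "").strip()
    ]
    return ValidationReport(total_rows=len(rows),
                            error_count=len(errors), errors=errors)
```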

6 Phase 5: Implementation

Goal: Write the code following the detailed design

6.1 Implementation Order

  1. Backend First
    • Reason: Frontend depends on backend APIs
    • Start with data models
    • Then implement endpoints
    • Test with curl/Postman (or the Python sketch after this list)
  2. Core Logic
    • Reason: Endpoints depend on validator
    • Modify validator interface
    • Test with sample CSVs
    • Validate rule execution
  3. Frontend UI
    • Reason: Consumes backend APIs
    • Create page component
    • Wire up API calls
    • Add interactivity
  4. Integration
    • Register routes
    • Test end-to-end flow
    • Handle edge cases
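
For step 1, a few lines of Python can stand in for curl/Postman when checking the backend by hand. This sketch assumes the hypothetical /api/validate contract from the Phase 4 example and a dev server on localhost:8000; adjust both to your project.

```python
# Manual backend check, equivalent to a curl/Postman call.
# URL and response fields follow the hypothetical Phase 4 sketch.
import requests

SAMPLE_CSV = b"part_id,quantity\nA-100,5\nA-101,\n"  # second row has an empty cell

resp = requests.post(
    "http://localhost:8000/api/validate",
    files={"file": ("sample.csv", SAMPLE_CSV, "text/csv")},
    timeout=30,
)
resp.raise_for_status()
report = resp.json()
print(f"{report['error_count']} error(s) in {report['total_rows']} row(s)")
for err in report["errors"]:
    print(f"  row {err['row']}, column {err['column']}: {err['message']}")
```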

6.2 Best Practices

  • Commit frequently: Small, atomic commits
  • Test as you go: Don’t wait until the end
  • Follow style guide: Consistent with codebase
  • Document as you code: Comments, docstrings
  • Handle errors gracefully: User-friendly messages

7 Phase 6: Testing & Validation

Goal: Verify the solution works correctly

7.1 Testing Levels

  1. Unit Tests (example after this list)
    • Test individual functions
    • Each validation rule independently
    • Data model validation
    • Edge case handling
  2. Integration Tests
    • Test full workflows
    • Upload → Validate → Display
    • Comprehensive test flow
    • Export functionality
  3. Edge Cases
    • Empty CSV files
    • Invalid schema types
    • Very large files (100k+ rows)
    • All records pass validation
    • All records fail validation
    • Malformed CSV data
    • Network errors
  4. User Acceptance Testing
    • Does it solve the business problem?
    • Is the UI intuitive?
    • Performance acceptable?
    • Error messages helpful?
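
As an illustration of level 1, the pytest sketch below exercises a single validation rule in isolation, covering the empty-input and all-records-pass edge cases from the list above. The non_empty_errors helper is hypothetical; real tests would import the rule from the validator module instead of defining it inline.

```python
# Unit tests for a single validation rule in isolation (pytest).
# `non_empty_errors` is a hypothetical pure helper; real tests would
# import the rule from the validator module instead of defining it here.

def non_empty_errors(rows: list[dict[str, str]]) -> list[tuple[int, str]]:
    """Return (row_index, column) for every empty cell."""
    return [(i, col) for i, record in enumerate(rows)
            for col, value in record.items() if not (value or "").strip()]


def test_empty_input_produces_no_errors():
    # Edge case from the list above: an empty CSV must not crash the rule.
    assert non_empty_errors([]) == []


def test_all_records_pass():
    rows = [{"part_id": "A-100", "quantity": "5"}]
    assert non_empty_errors(rows) == []


def test_blank_cell_is_flagged_with_location():
    rows = [{"part_id": "A-101", "quantity": "  "}]
    assert non_empty_errors(rows) == [(0, "quantity")]
```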

8 Phase 7: Deployment

Goal: Ship to production safely

8.1 Deployment Steps

  1. Local Deployment
    • Test on local dev server
    • Verify all features work
    • Check logs for errors
  2. Version Control
    • Commit all changes
    • Push to Forgejo
    • Create pull request
    • Code review
  3. Staging Deployment (if applicable)
    • Deploy to staging environment
    • Run smoke tests (see the sketch after this list)
    • Monitor for issues
  4. Production Deployment
    • Deploy to production
    • Monitor logs
    • Verify health check
    • Test critical paths
  5. Rollback Plan
    • Document rollback steps
    • Keep previous version ready
    • Monitor for issues
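
Steps 3 and 4 both call for smoke tests and a health check; the sketch below shows a minimal scripted version. The base URL and /health path are assumptions, so substitute whatever endpoint your service actually exposes.

```python
# Minimal scripted smoke test: fail fast if the deployed service is unhealthy.
# The base URL and /health path are assumptions; adjust to your deployment.
import sys

import requests

BASE_URL = "https://staging.example.com"  # use the production URL in step 4


def smoke_test() -> None:
    resp = requests.get(f"{BASE_URL}/health", timeout=10)
    resp.raise_for_status()
    print(f"health check OK ({resp.status_code})")


if __name__ == "__main__":
    try:
        smoke_test()
    except requests.RequestException as exc:
        print(f"smoke test FAILED: {exc}", file=sys.stderr)
        sys.exit(1)  # non-zero exit lets CI halt and trigger the rollback plan
```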

9 Phase 8: Post-Deployment

Goal: Iterate and improve based on real-world usage

9.1 Activities

  1. User Feedback Collection
    • Surveys
    • User interviews
    • Support tickets
    • Usage analytics
  2. Performance Monitoring
    • Response times
    • Error rates
    • Resource usage
    • User adoption
  3. Backlog Prioritization
    • What to improve next?
    • What features to add?
    • What bugs to fix?
  4. Documentation
    • Update user guides
    • API documentation
    • Runbooks
    • Lessons learned

10 Workflow Summary

10.1 Linear Progression

Phase 1: Business Context
    ↓
    Gate: User approves requirements
    ↓
Phase 2: Technical Architecture
    ↓
    Gate: User approves architecture
    ↓
Phase 3: Solution Design (Tech Stack)
    ↓
    Gate: User approves technology choices
    ↓
Phase 4: Detailed Design
    ↓
    Gate: User approves implementation plan
    ↓
Phase 5: Implementation
    ↓
    Gate: Code works locally
    ↓
Phase 6: Testing & Validation
    ↓
    Gate: All tests pass
    ↓
Phase 7: Deployment
    ↓
    Gate: Production deployment successful
    ↓
Phase 8: Post-Deployment
    ↓
    Iterate: Return to Phase 1 for improvements

10.2 Branching Back

Sometimes you need to revisit earlier phases:

Phase 5 (Implementation)
    ↓
    Discover: New constraint (e.g., pandas has NaN serialization issues)
    ↓
    Branch back to Phase 3 (Tech Stack)
    ↓
    Re-evaluate: DuckDB vs pandas
    ↓
    Get user approval
    ↓
    Resume Phase 5 with new approach
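
The pandas constraint in this example is easy to reproduce: pandas represents missing CSV values as float NaN, and the standard library's json module either emits non-compliant JSON for NaN or rejects it outright, as the snippet below shows.

```python
# Reproducing the NaN serialization problem referenced above.
import io
import json

import pandas as pd

df = pd.read_csv(io.StringIO("part_id,quantity\nA-100,5\nA-101,\n"))
records = df.to_dict(orient="records")   # missing quantity becomes float('nan')

print(json.dumps(records))               # emits the literal NaN: not valid JSON
json.dumps(records, allow_nan=False)     # raises ValueError instead
```

DuckDB, by contrast, returns SQL NULLs as Python None, which json.dumps serializes cleanly as null; that difference is the kind of evidence a re-evaluation at this gate would weigh.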

11 Key Principles

  1. Phase Gates: User approves before moving to next phase
  2. Question First, Code Later: Understand before implementing
  3. Justify Tech Choices: Always explain “why X instead of Y?”
  4. Progressive Elaboration: Each phase adds more detail
  5. Branching Allowed: Can revisit earlier phases if new info emerges
  6. Documentation as You Go: Update plan file at each phase
  7. User Involvement: Regular checkpoints prevent misalignment

12 Template for New Projects

Use this template when starting a new feature/project:

# [Feature Name] - Stage Gate Plan

## Phase 1: Business Context
**Business Problem**: [What problem are we solving?]
**Users**: [Who will use this?]
**Success Criteria**: [How do we measure success?]
**Constraints**: [What are the non-negotiables?]

**Gate Decision**: [ ] Approved  [ ] Needs Clarification

---

## Phase 2: Technical Architecture
**Architecture Style**: [Monolith/Microservice/Hybrid]
**Integration Points**: [How does this fit into existing systems?]
**Data Flow**: [Diagram or description]
**External Dependencies**: [What external systems are needed?]

**Gate Decision**: [ ] Approved  [ ] Needs Revision

---

## Phase 3: Solution Design
**Technology Choices**:
- [Tech 1]: [Why chosen over alternatives]
- [Tech 2]: [Why chosen over alternatives]

**Design Patterns**: [What patterns will we follow?]
**Dependencies**: [What can we remove/simplify?]

**Gate Decision**: [ ] Approved  [ ] Needs Revision

---

[Continue through remaining phases...]

13 Lessons from Data Validator Implementation

13.1 What Went Wrong (Original Approach)

  • ❌ Jumped straight to implementation without understanding tech stack preference
  • ❌ Used pandas without questioning if it was the right tool
  • ❌ Perpetuated NaN serialization issues
  • ❌ Didn’t verify rule count claims

13.2 What Went Right (After Applying Workflow)

  • ✅ User questioned tech choices (“why pandas?”)
  • ✅ Re-evaluated architecture (DuckDB vs pandas)
  • ✅ Got user approval before proceeding
  • ✅ Verified claims (60 rules → actually 41 rules)
  • ✅ Designed for future microservice extraction

13.3 Key Takeaways

  1. Always justify technology choices before coding
  2. Question inherited patterns: just because old code uses X doesn’t mean new code should
  3. Verify metrics and claims: don’t trust stale documentation
  4. Design for future needs: microservice extraction should influence architecture decisions
  5. User involvement at each gate prevents costly rework

14 Quick Reference

For a condensed version of this workflow, see: Stage Gate Workflow - Quick Reference


End of Stage Gate Workflow Document