From e2f9e27fdb3730d056e3779e9eb587226eaec5c7 Mon Sep 17 00:00:00 2001 From: "codegen-sh[bot]" <131295404+codegen-sh[bot]@users.noreply.github.com> Date: Sun, 30 Nov 2025 15:34:08 +0000 Subject: [PATCH 1/3] Add Developer Guide for spec-driven feature implementation Co-authored-by: Jake Ruesink --- .devagent/DEVELOPER-GUIDE.md | 1296 ++++++++++++++++++++++++++++++++++ 1 file changed, 1296 insertions(+) create mode 100644 .devagent/DEVELOPER-GUIDE.md diff --git a/.devagent/DEVELOPER-GUIDE.md b/.devagent/DEVELOPER-GUIDE.md new file mode 100644 index 0000000..0cbc980 --- /dev/null +++ b/.devagent/DEVELOPER-GUIDE.md @@ -0,0 +1,1296 @@ +# Developer Guide: Spec-Driven Feature Implementation + +This guide walks you through implementing new features using the DevAgent workflow system. Follow these steps to go from idea to implementation with proper documentation and validation. + +## Table of Contents + +1. [Quick Start](#quick-start) +2. [Workflow Overview](#workflow-overview) +3. [Step-by-Step Process](#step-by-step-process) +4. [Command Reference](#command-reference) +5. [Example Interactions](#example-interactions) +6. [Best Practices](#best-practices) + +--- + +## Quick Start + +For a simple feature, the typical flow is: + +```bash +# 1. Scaffold feature hub +/new-feature "Add user profile editing" + +# 2. Research existing patterns +/research "How do we handle form editing in this codebase?" + +# 3. Clarify requirements +/clarify-feature + +# 4. Create specification +/create-spec + +# 5. Plan implementation tasks +/plan-tasks + +# 6. Generate task prompts +/create-task-prompt + +# 7. Implement (using Cursor AI with task prompts) +# 8. Validate code +/validate-code +``` + +--- + +## Workflow Overview + +The DevAgent system uses a structured workflow to ensure features are well-researched, clearly specified, and properly implemented. 
Here's how the pieces fit together: + +``` +Feature Idea + ↓ +[new-feature] → Scaffold feature hub + ↓ +[research] → Investigate technical patterns & constraints + ↓ +[clarify-feature] → Validate requirements with stakeholders + ↓ +[create-spec] → Write detailed specification + ↓ +[plan-tasks] → Break down into implementation tasks + ↓ +[create-task-prompt] → Generate AI-ready task prompts + ↓ +Implementation → Code using Cursor AI + ↓ +[validate-code] → Lint, typecheck, test + ↓ +Complete +``` + +**For complex features**, use the full workflow. **For simple enhancements**, you can skip directly to `research` → `create-task-prompt`. + +--- + +## Step-by-Step Process + +### Step 1: Scaffold Feature Hub + +**Command:** `/new-feature` + +**When to use:** Start here for any new feature, even if it's just an idea. + +**Input:** Feature title or 1-2 sentence description + +**What it does:** +- Creates feature hub at `.devagent/workspace/features/YYYY-MM-DD_feature-slug/` +- Generates `AGENTS.md` with feature metadata +- Creates empty folders: `research/`, `spec/`, `tasks/` + +**Example:** + +``` +You: /new-feature "Add datatable to view dataset data" + +AI: Creates .devagent/workspace/features/2025-11-06_simple-datatable-to-view-data/ + with AGENTS.md and folder structure. + + Recommended next steps: + - /research "table components and data access patterns" + - /clarify-feature +``` + +**Output:** Feature hub directory with `AGENTS.md` tracking progress + +--- + +### Step 2: Research Technical Patterns + +**Command:** `/research` + +**When to use:** Before writing specs, investigate existing code patterns, libraries, and constraints. 
+ +**Input:** Research question or problem statement + +**What it does:** +- Searches codebase for existing patterns +- Reviews documentation and prior research +- Synthesizes findings into actionable recommendations +- Creates research packet in `research/YYYY-MM-DD_topic.md` + +**Example:** + +``` +You: /research "What table components exist in the codebase? + How do we query organization database tables? + What libraries are available for data tables?" + +AI: Investigates codebase, finds: + - Existing Table component in ai-elements + - TanStack Table v8 installed but unused + - Dataset access via getDatasetById() helper + - Organization databases accessed via Turso client + + Creates: research/2025-11-06_datatable-components-and-tanstack-table-research.md +``` + +**Output:** Research packet with findings, recommendations, and code references + +**⚠️ Handling Unclear Information in Research:** + +If research finds gaps or unclear information, the research packet will tag them with `[NEEDS CLARIFICATION]`. Here's how to handle it: + +**Example: Research Finds Gaps** + +``` +You: /research "How do we handle real-time data updates in tables?" + +AI: Investigates codebase, finds: + - No existing real-time update patterns + - WebSocket infrastructure exists but unused + - [NEEDS CLARIFICATION] Update frequency requirements? + - [NEEDS CLARIFICATION] Should updates be push or pull? + + Creates: research/2025-11-06_realtime-updates-research.md + with open questions section +``` + +**What to do:** +1. Review the research packet's "Risks & Open Questions" section +2. For **clarifiable questions** (stakeholder decisions): Add to clarification packet +3. For **researchable questions** (technical evidence): Run additional research +4. 
Document assumptions if proceeding without answers + +**Example: Following Up on Research Gaps** + +``` +You: [Review research packet, see open questions] + +You: /clarify-feature + [Include research questions in input] + +AI: Adds research questions to clarification packet: + - "What is the required update frequency for real-time data?" + - "Should updates use WebSocket push or polling?" + +You: [Meet with stakeholders, get answers] + +You: /research "WebSocket vs polling performance for 1000+ concurrent users" + +AI: [Provides technical evidence for decision] +``` + +--- + +### Step 3: Clarify Requirements + +**Command:** `/clarify-feature` + +**When to use:** When requirements are unclear or you need stakeholder validation. + +**Input:** Feature concept (can reference feature hub) + +**What it does:** +- Validates requirements across 8 dimensions: + 1. Problem Statement + 2. Success Criteria + 3. Users & Stakeholders + 4. Constraints + 5. Scope Boundaries + 6. Solution Principles + 7. Dependencies + 8. Acceptance Criteria +- Creates clarification packet in `clarification/YYYY-MM-DD_initial-clarification.md` +- Identifies gaps requiring stakeholder input + +**Example:** + +``` +You: /clarify-feature + +AI: Reviews feature hub and research, creates clarification packet. + Identifies gaps: + - Success metrics not defined + - Performance benchmarks missing + - Timeline unclear + + Creates: clarification/2025-11-06_initial-clarification.md + + Next: Schedule stakeholder meeting to fill gaps +``` + +**Output:** Clarification packet with validated requirements and identified gaps + +**⚠️ Handling Clarification Gaps:** + +The clarification packet will show completeness scores (e.g., "4/8 dimensions complete") and identify gaps. 
Here's how to handle them: + +**Example: Clarification Identifies Missing Requirements** + +``` +You: /clarify-feature + +AI: Creates clarification packet with: + ✓ Problem Statement: Complete + ⚠️ Success Criteria: Partial (missing metrics) + ✓ Users & Stakeholders: Complete + ⚠️ Constraints: Partial (timeline unclear) + ⚠️ Scope Boundaries: Partial (MVP vs full scope unclear) + ✓ Solution Principles: Complete + ⚠️ Dependencies: Partial (API availability unknown) + ⚠️ Acceptance Criteria: Partial (edge cases missing) + + Completeness: 3/8 dimensions complete + Status: More Clarification Needed + + Creates: clarification/2025-11-06_initial-clarification.md +``` + +**What to do next:** + +**Option A: Fill Gaps with Stakeholders (Recommended)** + +1. Review the clarification packet's "Gaps Requiring Research" and "Gaps Requiring Clarification" sections +2. Prepare specific questions for stakeholders +3. Schedule clarification session or async Q&A +4. Update clarification packet with answers + +**Example: Filling Clarification Gaps** + +``` +You: [Review clarification packet, see gaps] + +You: [Email stakeholder with specific questions] + "Hi [Stakeholder], + + I'm clarifying requirements for the datatable feature. + Need your input on: + + 1. Success Metrics: What % of users should use this feature? + (Current: Not defined) + + 2. Timeline: Is this needed for V1 launch or can it be post-launch? + (Current: Unclear) + + 3. Performance: What's acceptable load time for first page? 
+ (Current: No benchmark) + + See full clarification packet: + .devagent/workspace/features/.../clarification/2025-11-06_initial-clarification.md + " + +[Stakeholder responds with answers] + +You: [Update clarification packet manually or re-run clarify-feature with new info] + +You: /clarify-feature + [Include stakeholder answers in input] + +AI: Updates clarification packet with new information: + ✓ Success Criteria: Complete (50% adoption target) + ✓ Constraints: Complete (Post-V1 launch acceptable) + ✓ Scope Boundaries: Complete (MVP scope defined) + + Completeness: 6/8 dimensions complete + Status: Ready for Spec (with assumptions) +``` + +**Option B: Proceed with Assumptions (For MVP)** + +If stakeholders are unavailable or timeline is tight: + +1. Document assumptions explicitly in clarification packet +2. Mark assumptions with "Validation Required: Yes" +3. Proceed to spec with clear assumption log +4. Schedule follow-up validation + +**Example: Proceeding with Assumptions** + +``` +You: [Review clarification packet, stakeholders unavailable] + +You: [Edit clarification packet manually, add assumptions section] + +You: "Assumptions (to be validated): + - Success metric: 50% adoption (assumed, needs validation) + - Timeline: Post-V1 acceptable (assumed, needs validation) + - Performance: < 3s load time (assumed, needs validation) + " + +You: /create-spec + [Include note about assumptions] + +AI: Creates spec with assumptions clearly documented in Risks section +``` + +**Option C: Escalate Mission Conflicts** + +If requirements conflict with product mission: + +``` +You: /clarify-feature + +AI: Identifies mission conflict: + "Feature requires real-time updates, but product mission + emphasizes simplicity and low infrastructure costs. + + [ESCALATION NEEDED] Mission alignment check required." 
+ +You: /update-product-mission + [Include conflict details] + +AI: [Reviews mission, provides guidance on alignment] +``` + +**Iterative Clarification Cycles:** + +For complex features, expect multiple clarification cycles: + +``` +Cycle 1: Initial clarification (3/8 complete) + ↓ +[Fill gaps with stakeholders] + ↓ +Cycle 2: Updated clarification (6/8 complete) + ↓ +[Address remaining technical unknowns] + ↓ +Cycle 3: Final clarification (8/8 complete) + ↓ +Ready for spec +``` + +**Decision Point: When to Proceed?** + +- **8/8 dimensions complete:** Proceed to spec +- **6-7/8 dimensions complete:** Proceed with documented assumptions +- **< 6/8 dimensions complete:** Fill gaps before proceeding +- **Mission conflicts:** Escalate before proceeding + +--- + +### Step 4: Create Specification + +**Command:** `/create-spec` + +**When to use:** After research and clarification are complete (or sufficient for MVP). + +**Input:** Feature hub path and research/clarification references + +**What it does:** +- Synthesizes research and requirements into detailed spec +- Defines functional narrative, technical approach, and acceptance criteria +- Creates spec document in `spec/YYYY-MM-DD_feature-spec.md` +- Links to research and clarification artifacts + +**Example:** + +``` +You: /create-spec + +AI: Reads research and clarification packets, creates comprehensive spec: + - Context & Objectives + - User Stories + - Functional Requirements + - Technical Approach (TanStack Table, server-side pagination) + - Acceptance Criteria + - Risks & Open Questions + + Creates: spec/2025-11-06_datatable-specification.md +``` + +**Output:** Review-ready specification document + +**⚠️ Handling Spec Gaps:** + +If `create-spec` identifies missing information, it will flag gaps and recommend next steps: + +**Example: Spec Identifies Missing Requirements** + +``` +You: /create-spec + +AI: Reads clarification packet (6/8 complete), identifies gaps: + + "Cannot draft Technical Approach section: + - 
[MISSING] API endpoint structure unclear + - [MISSING] Error handling strategy undefined + - [ASSUMPTION] Using TanStack Table (from research, needs validation) + + Recommendation: Run /research for API patterns + or update clarification packet with technical decisions." + +You: [Choose one:] + Option A: /research "API endpoint patterns for pagination" + Option B: [Update clarification packet with technical decisions] + Option C: [Proceed with assumptions documented in Risks section] +``` + +**What to do:** +- **For technical gaps:** Run additional research +- **For requirement gaps:** Update clarification packet, re-run clarify-feature +- **For assumptions:** Document in spec's Risks section, proceed with validation plan + +--- + +### Step 5: Plan Implementation Tasks + +**Command:** `/plan-tasks` + +**When to use:** After spec is approved, before starting implementation. + +**Input:** Spec path + +**What it does:** +- Breaks spec into ordered, execution-focused tasks +- Maps tasks to specific files/modules +- Includes validation (tests, linting) in each task +- Creates task plan in `tasks/YYYY-MM-DD_task-plan.md` + +**Example:** + +``` +You: /plan-tasks + +AI: Analyzes spec, creates task breakdown: + Task 1: Create DataTable component using TanStack Table + Task 2: Implement server-side pagination endpoint + Task 3: Add route for dataset viewing + Task 4: Write tests for DataTable component + Task 5: Write tests for pagination endpoint + + Creates: tasks/2025-11-06_datatable-implementation-plan.md +``` + +**Output:** Task plan with concrete implementation steps + +--- + +### Step 6: Generate Task Prompts + +**Command:** `/create-task-prompt` + +**When to use:** When ready to start coding, convert tasks into AI-ready prompts. 
+ +**Input:** Task plan or spec path + +**What it does:** +- Converts tasks into detailed AI execution prompts +- Includes context references (research, specs, code paths) +- Creates task prompt file in `tasks/YYYY-MM-DD_task-slug.md` +- Each task includes: description, execution prompt, file hints, context refs + +**Example:** + +``` +You: /create-task-prompt + +AI: Creates task prompt file with: + Task 1.1: Create DataTable component + - Execution prompt: "Create a DataTable component using TanStack + Table v8 that displays dataset rows with server-side pagination..." + - File hints: app/components/DataTable.tsx, packages/components/src/ui/table.tsx + - Context refs: research/2025-11-06_datatable-research.md, + spec/2025-11-06_datatable-spec.md + + Creates: tasks/2025-11-06_datatable-component-task.md +``` + +**Output:** AI-ready task prompts with full context + +--- + +### Step 7: Implement Using Cursor AI + +**When to use:** During coding, reference task prompts in Cursor chat. + +**How to use:** +1. Open the task prompt file in Cursor +2. Copy the execution prompt for the current task +3. Paste into Cursor chat with context references +4. AI implements the task using provided context + +**Example:** + +``` +You: [Paste task prompt from tasks/2025-11-06_datatable-component-task.md] + + Task 1.1: Create DataTable component using TanStack Table v8... + + Reference: research/2025-11-06_datatable-research.md + Reference: spec/2025-11-06_datatable-spec.md + Reference: packages/components/src/ui/table.tsx + +AI: [Implements DataTable component following spec and research findings] +``` + +**Best Practice:** Work through tasks sequentially, validating each before moving to the next. + +--- + +### Step 8: Validate Code + +**Command:** `/validate-code` + +**When to use:** After implementing a task or before committing. 
+ +**What it does:** +- Runs `bun run lint` - checks for linting errors +- Runs `bun run typecheck` - checks for TypeScript errors +- Runs `bun run test` - runs all tests +- Provides specific fix suggestions if errors found + +**Example:** + +``` +You: /validate-code + +AI: Running lint... ✓ No errors + Running typecheck... ✗ Error in DataTable.tsx:42 + Running tests... ✗ 2 tests failing + + [Provides specific fix suggestions with file paths and line numbers] +``` + +**Output:** Validation results with fix suggestions + +--- + +### Step 9: Review Progress (Optional) + +**Command:** `/review-progress` + +**When to use:** When switching contexts, at end of day, or after interruptions. + +**Input:** Task prompt or spec path, current progress state + +**What it does:** +- Captures what's completed, in progress, and blocked +- Identifies immediate next steps +- Updates `AGENTS.md` with progress +- Creates checkpoint file for easy resumption + +**Example:** + +``` +You: /review-progress + Task: tasks/2025-11-06_datatable-component-task.md + Completed: Task 1.1 (DataTable component created) + In Progress: Task 1.2 (Server pagination endpoint) + Blocked: Need clarification on pagination API format + +AI: Creates checkpoint: + ✓ Completed: DataTable component with TanStack Table + ⏳ In Progress: Pagination endpoint (50% complete) + ⚠️ Blocked: API format clarification needed + + Next Steps: + 1. Clarify pagination API with team + 2. Complete pagination endpoint + 3. 
Write tests for endpoint + + Updates: AGENTS.md with progress log entry +``` + +**Output:** Progress checkpoint and updated `AGENTS.md` + +--- + +## Command Reference + +### Core Commands + +| Command | Purpose | Input | Output | +|---------|---------|-------|--------| +| `/new-feature` | Scaffold feature hub | Title/description | Feature hub with `AGENTS.md` | +| `/research` | Investigate technical patterns | Research question | Research packet | +| `/clarify-feature` | Validate requirements | Feature concept | Clarification packet | +| `/create-spec` | Write specification | Feature hub path | Specification document | +| `/plan-tasks` | Break down implementation | Spec path | Task plan | +| `/create-task-prompt` | Generate AI prompts | Task plan/spec | Task prompt file | +| `/validate-code` | Validate implementation | None | Lint/typecheck/test results | +| `/review-progress` | Capture progress state | Task/spec + progress | Checkpoint file | + +### Additional Commands + +| Command | Purpose | When to Use | +|---------|---------|-------------| +| `/brainstorm` | Generate feature ideas | Exploring solution space | +| `/update-product-mission` | Update product mission | Mission conflicts or updates | +| `/update-tech-stack` | Document tech stack | New project or major changes | +| `/update-constitution` | Update governance docs | Constitutional changes | + +--- + +## Handling Unclear Information: Decision Tree + +Use this decision tree when you encounter unclear information: + +``` +Encounter unclear information + ↓ +Is it a technical question? + ├─ YES → /research "specific technical question" + │ ↓ + │ Research provides answer? + │ ├─ YES → Continue workflow + │ └─ NO → Document as assumption, proceed with validation plan + │ + └─ NO → Is it a stakeholder decision? + ├─ YES → Add to clarification packet + │ ↓ + │ Stakeholder available? 
+ │ ├─ YES → Schedule clarification session + │ │ Fill gaps → Update clarification packet + │ └─ NO → Document assumption with "Validation Required" + │ Proceed to spec with assumption log + │ + └─ NO → Is it a product mission question? + ├─ YES → /update-product-mission + │ Resolve conflict → Continue workflow + └─ NO → Document in Risks & Open Questions + Proceed with best-effort assumption +``` + +--- + +## Example Interactions + +### Example 1: Simple Feature (Quick Path) + +**Scenario:** Add a "Copy to Clipboard" button to dataset cards. + +``` +You: /new-feature "Add copy dataset ID to clipboard button" + +AI: [Creates feature hub] + +You: /research "How do we handle clipboard operations in React? + Are there existing copy buttons in the codebase?" + +AI: [Finds existing clipboard patterns, creates research packet] + +You: /create-task-prompt + +AI: [Creates task prompt with implementation steps] + +You: [Implements using Cursor AI with task prompt] + +You: /validate-code + +AI: [Validates code, provides fixes if needed] +``` + +**Result:** Feature implemented with minimal overhead. + +--- + +### Example 2: Complex Feature (Full Workflow) + +**Scenario:** Build a data visualization dashboard. + +``` +You: /new-feature "Create data visualization dashboard" + +AI: [Creates feature hub] + +You: /research "What chart libraries are available? + How do we structure dashboard layouts? + What's the data access pattern for aggregated queries?" 
+ +AI: [Investigates, creates comprehensive research packet] + +You: /clarify-feature + +AI: [Creates clarification packet, identifies gaps] + "Need stakeholder input on: chart types, refresh intervals, + user permissions" + +You: [Meets with stakeholders, fills gaps] + +You: /create-spec + +AI: [Creates detailed spec with all requirements] + +You: /plan-tasks + +AI: [Breaks down into 8 implementation tasks] + +You: /create-task-prompt + +AI: [Creates task prompts for each task] + +You: [Implements Task 1 using Cursor AI] + +You: /validate-code + +AI: [Validates, provides fixes] + +You: [Continues with remaining tasks...] + +You: /review-progress + +AI: [Captures progress, updates AGENTS.md] +``` + +**Result:** Well-documented, validated feature with full traceability. + +--- + +### Example 3: Bug Fix (Minimal Workflow) + +**Scenario:** Fix form validation error. + +``` +You: /research "Form validation error on email field - + what's the current validation logic?" + +AI: [Finds validation code, identifies issue] + +You: [Fixes bug directly] + +You: /validate-code + +AI: [Validates fix] +``` + +**Result:** Quick fix without full workflow overhead. + +--- + +### Example 4: Feature with Unclear Requirements (Iterative Clarification) + +**Scenario:** Build export feature, but requirements are vague. + +``` +You: /new-feature "Add data export functionality" + +AI: [Creates feature hub] + +You: /research "What export formats are supported? + How do we handle large file exports?" + +AI: Investigates, finds: + - No existing export functionality + - [NEEDS CLARIFICATION] Which formats? (CSV, Excel, PDF?) + - [NEEDS CLARIFICATION] File size limits? + + Creates: research/2025-11-06_export-research.md + +You: /clarify-feature + +AI: Creates clarification packet: + ⚠️ Problem Statement: Partial (why export needed?) + ⚠️ Success Criteria: Missing (adoption target?) + ⚠️ Scope Boundaries: Unclear (which formats?) + ⚠️ Constraints: Missing (file size limits?) 
+ + Completeness: 2/8 dimensions complete + Status: More Clarification Needed + + Creates: clarification/2025-11-06_initial-clarification.md + +You: [Review clarification packet, prepare questions] + +You: [Email product manager] + "Hi [PM], + + Clarifying export feature requirements: + + 1. Why do users need export? (use case) + 2. Which formats? (CSV, Excel, PDF, all?) + 3. File size limits? (10MB, 100MB, unlimited?) + 4. Success metric? (% of users who export) + + See: clarification/2025-11-06_initial-clarification.md + " + +[PM responds]: +- Use case: Users need to share data with external tools +- Formats: CSV and Excel (PDF later) +- File size: 50MB limit +- Success: 30% of active users export monthly + +You: [Update clarification packet with answers] + +You: /clarify-feature + [Include PM answers] + +AI: Updates clarification packet: + ✓ Problem Statement: Complete + ✓ Success Criteria: Complete (30% monthly export rate) + ✓ Scope Boundaries: Complete (CSV/Excel MVP, PDF later) + ✓ Constraints: Complete (50MB limit) + + Completeness: 6/8 dimensions complete + Status: Ready for Spec (with assumptions) + +You: /create-spec + +AI: Creates spec with documented assumptions: + - Technical approach for 50MB file generation + - CSV/Excel implementation (PDF deferred) + - Success tracking for export usage + + Creates: spec/2025-11-06_export-specification.md +``` + +**Result:** Iterative clarification cycle fills gaps before spec work. + +--- + +### Example 5: Research Finds Conflicting Information + +**Scenario:** Research discovers conflicting patterns in codebase. + +``` +You: /research "How do we handle authentication in API routes?" + +AI: Investigates, finds: + - Pattern A: Uses session cookies (older routes) + - Pattern B: Uses Bearer tokens (newer routes) + - [CONFLICT] Two different patterns exist + - [NEEDS CLARIFICATION] Which pattern should new feature use? 
+ + Creates: research/2025-11-06_auth-patterns-research.md + with conflict documented + +You: [Review research, see conflict] + +You: /clarify-feature + [Include auth pattern question] + +AI: Adds to clarification packet: + ⚠️ Solution Principles: Partial + - [CONFLICT] Authentication pattern unclear + - [RECOMMENDATION] Standardize on Bearer tokens (newer pattern) + - [VALIDATION REQUIRED] Confirm with team lead + +You: [Check with team lead] + +Team Lead: "Use Bearer tokens for all new features" + +You: [Update clarification packet] + +You: /create-spec + +AI: Uses Bearer token pattern in spec +``` + +**Result:** Conflict resolved through clarification, spec uses consistent pattern. + +--- + +### Example 6: Stakeholder Unavailable (Proceeding with Assumptions) + +**Scenario:** Need to proceed, but stakeholder is on vacation. + +``` +You: /clarify-feature + +AI: Creates clarification packet: + ⚠️ Success Criteria: Missing (adoption target?) + ⚠️ Timeline: Missing (deadline?) + + Completeness: 6/8 dimensions complete + Status: More Clarification Needed + + Stakeholder: [Unavailable until next week] + +You: [Document assumptions in clarification packet] + +You: "Assumptions (to be validated with [Stakeholder] on return): + - Success metric: 40% adoption (industry benchmark) + - Timeline: Flexible, can adjust post-launch + - Validation required: Yes + - Validation date: [Stakeholder return date] + " + +You: /create-spec + [Include note about assumptions] + +AI: Creates spec with: + - Assumptions clearly documented in Risks section + - Validation plan included + - Proceeds with best-effort assumptions + + Creates: spec/2025-11-06_feature-spec.md + +[Later, when stakeholder returns] + +You: [Review assumptions with stakeholder] + +Stakeholder: "Adoption target should be 50%, not 40%" + +You: [Update clarification packet] + +You: /create-spec + [Re-run with updated requirements] + +AI: Updates spec with corrected success metric +``` + +**Result:** Feature proceeds with 
documented assumptions, validated later. + +--- + +## Clarification Scenarios & Solutions + +### Scenario 1: Research Finds Gaps + +**Problem:** Research identifies missing information or unclear patterns. + +**Solution:** +1. Review research packet's "Risks & Open Questions" section +2. Classify gaps: + - **Technical questions** → Additional research + - **Stakeholder decisions** → Add to clarification + - **Assumptions** → Document and proceed +3. Update research packet or create follow-up research + +**Example:** +``` +Research finds: "[NEEDS CLARIFICATION] Update frequency?" +→ Add to clarification packet +→ Get stakeholder answer +→ Update clarification packet +``` + +--- + +### Scenario 2: Clarification Incomplete + +**Problem:** Clarification packet shows < 6/8 dimensions complete. + +**Solution:** +1. Review "Gaps Requiring Clarification" section +2. Prepare specific questions for stakeholders +3. Schedule clarification session (sync or async) +4. Update clarification packet with answers +5. Re-run `/clarify-feature` if needed + +**Example:** +``` +Clarification: 3/8 complete +→ Identify missing dimensions +→ Prepare stakeholder questions +→ Get answers +→ Update clarification packet +→ Re-run clarify-feature +→ Now 7/8 complete, proceed with assumption +``` + +--- + +### Scenario 3: Stakeholder Conflicts + +**Problem:** Different stakeholders have conflicting requirements. + +**Solution:** +1. Document both positions in clarification packet +2. Identify decision maker +3. Escalate to decision maker or product mission +4. 
Do not proceed until resolved + +**Example:** +``` +Stakeholder A: "Feature must support real-time updates" +Stakeholder B: "Feature should be simple, no real-time" +→ Document both in clarification packet +→ Escalate to product manager (decision maker) +→ Get decision: "MVP without real-time, add later" +→ Update clarification packet +→ Proceed +``` + +--- + +### Scenario 4: Mission Conflicts + +**Problem:** Requirements conflict with product mission. + +**Solution:** +1. Document conflict in clarification packet +2. Escalate to `/update-product-mission` +3. Get alignment decision +4. Update clarification packet +5. Proceed with aligned requirements + +**Example:** +``` +Requirement: "Real-time updates for all users" +Mission: "Keep infrastructure costs low" +→ Conflict identified +→ Escalate to update-product-mission +→ Decision: "Real-time for premium users only" +→ Update clarification packet +→ Proceed +``` + +--- + +### Scenario 5: Technical Unknowns + +**Problem:** Technical approach is unclear or risky. + +**Solution:** +1. Document unknowns in research or clarification +2. Run additional research for technical evidence +3. Create spike/prototype if needed +4. Document assumptions and risks in spec +5. Proceed with validation plan + +**Example:** +``` +Unknown: "Can WebSocket handle 10k concurrent connections?" +→ Run research: "WebSocket scalability patterns" +→ Research finds: "Yes, with proper infrastructure" +→ Document in spec with infrastructure requirements +→ Proceed +``` + +--- + +### Scenario 6: Timeline Pressure + +**Problem:** Need to proceed quickly, but requirements incomplete. + +**Solution:** +1. Prioritize Must-have clarification (defer Should/Could) +2. Document assumptions explicitly +3. Mark assumptions as "Validation Required" +4. Proceed to MVP spec with assumption log +5. 
Schedule follow-up clarification for post-MVP features + +**Example:** +``` +Timeline: "Need MVP in 2 weeks" +Clarification: 5/8 complete (missing Should/Could items) +→ Document Must-have assumptions +→ Proceed to MVP spec +→ Defer Should/Could clarification to post-MVP +→ Schedule follow-up session +``` + +--- + +## Best Practices + +### 1. Choose the Right Workflow Path + +- **Complex features:** Use full workflow (new-feature → research → clarify → spec → plan → prompt) +- **Simple enhancements:** Skip to research → create-task-prompt +- **Bug fixes:** Research → fix → validate + +### 2. Keep Artifacts Updated + +- Update `AGENTS.md` as you progress +- Link related artifacts (research → spec → tasks) +- Document decisions in feature hub + +### 3. Use Context Effectively + +- Always reference research and specs in task prompts +- Include file paths and code references +- Link to related features or ADRs + +### 4. Validate Early and Often + +- Run `/validate-code` after each task +- Fix linting/type errors immediately +- Write tests as you implement + +### 5. Document Assumptions + +- Use `[NEEDS CLARIFICATION]` tags in research/clarification +- Document assumptions in specs with "Validation Required" flag +- Track open questions in `AGENTS.md` +- **Never proceed with undocumented assumptions** + +**Example of Good Assumption Documentation:** +``` +Assumption: 50% feature adoption target +Validation Required: Yes +Validation Method: Stakeholder confirmation +Validation Date: 2025-11-15 +Owner: Product Manager +Risk if Wrong: Medium (affects success metrics) +``` + +### 5a. Handle Clarification Gaps Systematically + +- **< 6/8 dimensions complete:** Fill gaps before proceeding +- **6-7/8 dimensions complete:** Document assumptions, proceed with validation plan +- **8/8 dimensions complete:** Proceed to spec +- **Mission conflicts:** Escalate immediately, do not proceed + +### 6. 
Progress Tracking
+
+- Use `/review-progress` when switching contexts
+- Update `AGENTS.md` Progress Log regularly
+- Create checkpoints for complex features
+
+### 7. Workflow Integration
+
+- Commands are designed to work together
+- Each workflow produces artifacts for the next
+- Follow recommended "Next Steps" from each command
+
+---
+
+## Troubleshooting
+
+### Missing Route Type Imports
+
+**Problem:** TypeScript errors about missing `./+types/[routeName]` imports.
+
+**Solution:** Run `bun run typecheck` to generate the route types. Never change the import paths.
+
+### Workflow Not Executing
+
+**Problem:** Command returns its description instead of executing.
+
+**Solution:** Ensure you're using the exact command format: `/[workflow-name]`. Check `.agents/commands/` for available commands.
+
+### Feature Hub Already Exists
+
+**Problem:** `/new-feature` fails because the folder already exists.
+
+**Solution:** The workflow appends a numeric suffix automatically, or you can specify a different slug manually.
+
+### Missing Context
+
+**Problem:** Task prompts lack necessary context.
+
+**Solution:** Ensure research and spec are complete. Use `/clarify-feature` to fill gaps before creating task prompts.
+
+### Clarification Incomplete
+
+**Problem:** Clarification packet shows a low completeness score (< 6/8).
+
+**Solution:**
+1. Review the "Gaps Requiring Clarification" section
+2. Prepare specific questions for stakeholders
+3. Schedule a clarification session
+4. Update the clarification packet with answers
+5. Re-run `/clarify-feature` if needed
+
+**Example:**
+```
+Clarification: 3/8 complete
+→ Review gaps: Success metrics, Timeline, Performance benchmarks
+→ Prepare questions for stakeholder
+→ Get answers via email/meeting
+→ Update clarification packet manually or re-run clarify-feature
+→ Now 7/8 complete, proceed with documented assumptions
+```
+
+### Research Finds Conflicting Patterns
+
+**Problem:** Research discovers multiple conflicting approaches in the codebase.
+
+**Solution:**
+1. 
Document conflict in research packet +2. Add conflict to clarification packet +3. Escalate to team lead or decision maker +4. Get decision on which pattern to use +5. Update clarification packet with decision +6. Proceed with consistent pattern + +**Example:** +``` +Research finds: Two auth patterns (session vs Bearer token) +→ Document conflict in research +→ Add to clarification: "Which pattern for new feature?" +→ Check with team lead +→ Decision: "Use Bearer tokens (newer pattern)" +→ Update clarification packet +→ Proceed +``` + +### Stakeholder Unavailable + +**Problem:** Need clarification, but stakeholder is unavailable. + +**Solution:** +1. Document assumptions explicitly +2. Mark assumptions with "Validation Required: Yes" +3. Include validation plan (who, when, how) +4. Proceed to spec with assumption log +5. Schedule follow-up validation when stakeholder returns + +**Example:** +``` +Stakeholder unavailable until next week +→ Document assumption: "50% adoption target (industry benchmark)" +→ Mark: "Validation Required: Yes, Owner: PM, Date: Next week" +→ Proceed to spec with assumption documented +→ Validate when stakeholder returns +→ Update spec if assumption was wrong +``` + +--- + +## File Structure Reference + +``` +.devagent/ +├── core/ +│ ├── workflows/ # Workflow definitions +│ └── templates/ # Templates for artifacts +└── workspace/ + └── features/ + └── active/ + └── YYYY-MM-DD_feature-slug/ + ├── AGENTS.md # Progress tracker + ├── research/ # Research packets + ├── clarification/ # Requirement clarification + ├── spec/ # Specifications + └── tasks/ # Task plans & prompts + +.agents/ +└── commands/ # Command files (symlinked to .cursor/commands) +``` + +--- + +## Getting Help + +- **Workflow questions:** Review `.devagent/core/workflows/[workflow-name].md` +- **Command reference:** Check `.agents/commands/[command-name].md` +- **Project patterns:** See `AGENTS.md` (root) and `.devagent/core/AGENTS.md` +- **Code standards:** Review 
`.cursor/rules/` for coding guidelines + +--- + +## Quick Reference Card + +``` +New Feature Workflow: +1. /new-feature "Title" +2. /research "Question" +3. /clarify-feature +4. /create-spec +5. /plan-tasks +6. /create-task-prompt +7. [Implement with Cursor AI] +8. /validate-code +9. /review-progress (optional) + +Simple Enhancement: +1. /research "Question" +2. /create-task-prompt +3. [Implement] +4. /validate-code + +Bug Fix: +1. /research "Problem" +2. [Fix] +3. /validate-code +``` + +--- + +*Last Updated: 2025-11-07* + From 244e930cc6bef9b105a4b1f7ac7955dd4a241463 Mon Sep 17 00:00:00 2001 From: "codegen-sh[bot]" <131295404+codegen-sh[bot]@users.noreply.github.com> Date: Sun, 30 Nov 2025 15:40:59 +0000 Subject: [PATCH 2/3] Add learned lessons from first-time DevAgent user experience This document captures valuable insights from a first-time implementation using DevAgent workflows, including common confusion points, iterative process learnings, and practical recommendations. Co-authored-by: Jake Ruesink --- .devagent/learned-lessons.md | 529 +++++++++++++++++++++++++++++++++++ 1 file changed, 529 insertions(+) create mode 100644 .devagent/learned-lessons.md diff --git a/.devagent/learned-lessons.md b/.devagent/learned-lessons.md new file mode 100644 index 0000000..374c21b --- /dev/null +++ b/.devagent/learned-lessons.md @@ -0,0 +1,529 @@ +# DevAgent Learned Lessons: First-Time User Experience + +**Author:** Antony Duran +**Date:** 2025-11-13 +**Context:** First implementation using DevAgent workflows for "Simple Datatable to View Data" feature + +--- + +## Table of Contents + +1. [Initial Impressions & Confusion](#initial-impressions--confusion) +2. [How Workflows Work in Practice](#how-workflows-work-in-practice) +3. [The Iterative Process & DEVELOPER-GUIDE.md](#the-iterative-process--developer-guidemd) +4. [Common Questions & Solutions](#common-questions--solutions) +5. 
[Best Practices & Recommendations](#best-practices--recommendations) + +--- + +## Initial Impressions & Confusion + +### The Overwhelming First Look + +When first encountering DevAgent, the structure can feel overwhelming. There are multiple directories (`.devagent/core/`, `.devagent/workspace/`, `.agents/commands/`), workflows, templates, and documentation scattered across different locations. + +**Key Confusion Points:** +- **`.agents/commands/` vs `.devagent/core/workflows/`** — What's the difference? How do they relate? +- **Where to start?** — The `.devagent/core/README.md` exists but isn't immediately obvious +- **Workflow vs Command** — Are these the same thing? How do they interact? + +### The Discovery Process + +Through actual usage, the structure became clearer: +- **`.devagent/core/`** = Portable agent kit (workflows, templates) that can be copied to any project +- **`.devagent/workspace/`** = Project-specific artifacts (features, research, specs, decisions) +- **`.agents/commands/`** = Command files that trigger workflows (symlinked to `.cursor/commands/`) + +**The Missing Piece:** A high-level "Getting Started" guide that explains: +- The relationship between commands and workflows +- Where to start for different types of work +- How workflows chain together +- A glossary of terms + +--- + +## How Workflows Work in Practice + +### The Actual Workflow Sequence + +Based on the datatable feature implementation, here's how workflows were used in practice: + +``` +1. /new-feature "Add datatable to view dataset data" + → Creates feature hub with AGENTS.md and folder structure + → Recommends next steps (research, clarify) + +2. /research "table components and data access patterns" + → Investigates codebase, finds existing patterns + → Creates research packet with findings + → Identifies gaps requiring clarification + +3. 
/clarify-feature + → Validates requirements across 8 dimensions + → Creates clarification packet + → Identifies missing information (4/8 complete initially) + +4. /clarify-feature (re-run after gap-fill) + → Updates clarification packet with new information + → Improves completeness (7/8 complete) + +5. /create-spec + → Synthesizes research + clarification into spec + → Creates comprehensive specification document + +6. /plan-tasks + → Breaks spec into 6 tasks with 28 subtasks + → Creates implementation plan + +7. /create-task-prompt + → Converts tasks into AI-ready execution prompts + → Includes context references and file hints + +8. [Implementation using Cursor AI] + → Execute tasks one by one using task prompts + +9. /clarify-feature (re-run for major direction change) + → Scope changed: migrate to @lambdacurry/forms + → Creates comprehensive clarification document + → Updates completeness (8/8 complete) + +10. /create-spec (re-run for v2) + → Creates new spec reflecting migration requirements + +11. /plan-tasks (re-run for migration) + → Creates migration task plan + +12. /create-task-prompt (re-run for migration) + → Creates migration task prompts +``` + +### Key Insights + +**1. Workflows Can Be Re-Run** +- If something changes, you can re-call the same command to update previous documents +- This is powerful but can create confusion about which document is "current" +- **Solution:** Use clear versioning in filenames (e.g., `spec-v2.md`) and update `AGENTS.md` references + +**2. Workflows Chain Naturally** +- Each workflow produces artifacts that feed into the next +- Research → Clarification → Spec → Tasks → Prompts +- **But:** You can skip steps for simple features (research → create-task-prompt) + +**3. Iteration is Expected** +- The datatable feature went through multiple clarification cycles +- Initial implementation (TanStack Table) was later migrated to @lambdacurry/forms +- **Lesson:** Don't be afraid to re-run workflows when scope changes + +**4. 
Workflows Don't Execute Automatically** +- After `/new-feature`, you must manually call the next workflow +- Workflows are **tools**, not autonomous agents +- **You remain the coordinator** — workflows don't talk to each other + +--- + +## The Iterative Process & DEVELOPER-GUIDE.md + +### Why DEVELOPER-GUIDE.md Was Created + +After the first few workflow executions, several issues emerged: + +1. **Lost after research** — Research recommended `/clarify-feature`, but it didn't ask clarifying questions as expected +2. **Unclear next steps** — After each workflow, it wasn't always clear what to do next +3. **No examples** — Workflow descriptions were abstract; real examples were needed +4. **Gap handling** — When research or clarification found gaps, the process wasn't clear + +### How DEVELOPER-GUIDE.md Helped + +The DEVELOPER-GUIDE.md was created to: +- **Provide step-by-step examples** — Real interactions showing how workflows chain together +- **Explain gap handling** — What to do when research finds `[NEEDS CLARIFICATION]` tags +- **Clarify decision points** — When to proceed with assumptions vs. filling gaps +- **Show iteration patterns** — How to handle scope changes and re-runs + +### The Iterative Learning Process + +``` +Initial Confusion + ↓ +First Workflow Execution (/new-feature) + ↓ +Second Workflow Execution (/research) + ↓ +Confusion: "What do I do with gaps?" + ↓ +Third Workflow Execution (/clarify-feature) + ↓ +Confusion: "It didn't ask questions?" + ↓ +Manual Clarification (gap-fill document) + ↓ +Re-run /clarify-feature + ↓ +Continue with /create-spec + ↓ +Realization: "I need examples and guidance" + ↓ +Create DEVELOPER-GUIDE.md + ↓ +Use DEVELOPER-GUIDE.md for remaining workflows + ↓ +Much smoother experience +``` + +**Key Takeaway:** The DEVELOPER-GUIDE.md emerged from **actual pain points** during first-time usage. It's not theoretical—it's a practical guide based on real experience. 
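For example, the clarification gap-fill loop in the diagram above played out roughly like this for the datatable feature (illustrative transcript; the exact wording of workflow output varies):

```
You: /clarify-feature

AI:  Clarification packet created: 4/8 dimensions complete.
     Gaps: success metrics, timeline, performance benchmarks.

You: [Write a gap-fill document answering the open questions]

You: /clarify-feature

AI:  Clarification packet updated: 7/8 dimensions complete.
     Remaining gap documented as an assumption with a validation plan.
```

The key point is that the second `/clarify-feature` run reads the gap-fill document and updates the existing packet rather than starting over.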
+ +--- + +## Common Questions & Solutions + +### Q1: How do I feed `/new-feature` output to `/research`? + +**The Question:** After `/new-feature` creates a feature hub, how do I pass that context to `/research`? + +**Solution A: Reference the Feature Hub Path** +``` +You: /research "What table components exist in the codebase? + How do we query organization database tables?" + + Feature: .devagent/workspace/features/active/2025-11-06_simple-datatable-to-view-data/ +``` + +**Solution B: Reference the AGENTS.md File** +``` +You: /research "table components and data access patterns" + + Context: See .devagent/workspace/features/active/2025-11-06_simple-datatable-to-view-data/AGENTS.md +``` + +**Best Practice:** Always include the feature hub path or `AGENTS.md` reference when chaining workflows. Workflows read from the workspace, but explicit references help ensure context is captured. + +--- + +### Q2: How do I resume a task after switching contexts? + +**The Question:** If I stop working and come back later, how do I let the LLM know what to continue working on? + +**Solution A: Use `/review-progress`** +``` +You: /review-progress + Task: tasks/2025-11-06_datatable-component-task.md + Completed: Task 1.1 (DataTable component created) + In Progress: Task 1.2 (Server pagination endpoint) + Blocked: Need clarification on pagination API format +``` + +This creates a checkpoint file and updates `AGENTS.md` with progress state. + +**Solution B: Reference AGENTS.md and Task Prompts** +``` +You: [Open feature hub AGENTS.md] + [Review "Progress Log" and "Implementation Checklist"] + [Open current task prompt file] + + Continue from Task 1.2: Implement server-side pagination endpoint + See: tasks/2025-11-06_task-prompts.md, Task 1.2 + Context: Feature hub AGENTS.md shows Task 1.1 complete +``` + +**Best Practice:** Use `/review-progress` when stopping work. 
When resuming, reference both `AGENTS.md` (for overall progress) and the specific task prompt file (for current task details). + +--- + +### Q3: What if I disagree with research findings? + +**The Question:** After `/research` execution, I don't agree with the outcome or proposed solution. What's the proper way to proceed? + +**Solution A: Document Disagreement in Clarification** +``` +You: /clarify-feature + + Note: Research recommended TanStack Table, but I want to use + @lambdacurry/forms instead. See research/2025-11-06_datatable-research.md + for historical record, but we're proceeding with @lambdacurry/forms. +``` + +The clarification packet will document this decision, and future workflows will use the clarified approach. + +**Solution B: Re-run Research with Different Focus** +``` +You: /research "How do we implement data tables with @lambdacurry/forms? + What are the server-side pagination patterns?" + + Note: Previous research focused on TanStack Table (see + research/2025-11-06_datatable-research.md), but we're exploring + @lambdacurry/forms as an alternative. +``` + +This creates a new research packet that can be referenced in the spec. + +**Best Practice:** Research packets are **historical records**. If you disagree, either: +1. Document the disagreement in clarification (recommended for stakeholder decisions) +2. Create new research with different focus (recommended for technical alternatives) +3. Keep old research for historical context, but proceed with clarified approach + +--- + +### Q4: Is `/create-task-prompt` for one task or all tasks? + +**The Question:** Does `/create-task-prompt` produce a master prompt for all tasks, or one prompt per task? + +**Answer:** It produces **one file with multiple task prompts** (one per task). Each task has: +- Execution prompt (detailed instructions) +- File hints (where to create/modify files) +- Context references (research, spec, code paths) +- Acceptance criteria + +**Usage Pattern:** +``` +1. 
/create-task-prompt (creates tasks/2025-11-06_task-prompts.md) +2. Open task prompts file +3. Copy Task 1.1 execution prompt +4. Paste into Cursor chat with context references +5. AI implements Task 1.1 +6. Repeat for Task 1.2, 1.3, etc. +``` + +**Best Practice:** Work through tasks **sequentially**. Each task builds on the previous one. Validate after each task before moving to the next. + +--- + +### Q5: How do I handle major direction changes? + +**The Question:** What if the current feature state is "good enough" but requirements change significantly? + +**Solution A: Start New Feature (Recommended if Current State is Good)** +``` +Current State: TanStack Table implementation complete and functional +New Requirement: Migrate to @lambdacurry/forms + +Decision: Keep current feature as-is (it's functional) + Create new feature: "Migrate datatable to @lambdacurry/forms" + New feature can reference old feature's artifacts +``` + +**Solution B: Re-run Workflows in Current Feature (Recommended if Current State Needs Changes)** +``` +Current State: TanStack Table implementation, but needs major refactor +New Requirement: Migrate to @lambdacurry/forms + +Decision: Re-run /clarify-feature (update scope) + Re-run /create-spec (create v2 spec) + Re-run /plan-tasks (create migration plan) + Re-run /create-task-prompt (create migration prompts) +``` + +**Decision Criteria:** +- **Current state is functional and acceptable?** → Start new feature +- **Current state needs major changes anyway?** → Re-run workflows in current feature +- **Unclear?** → Document decision in `AGENTS.md` Key Decisions section + +**Best Practice:** For the datatable feature, we used **Solution B** because: +1. The TanStack Table implementation was complete but needed migration +2. The migration was a natural evolution, not a separate feature +3. Re-running workflows kept all context in one place + +--- + +### Q6: Should I use best models for planning and auto for implementation? 
+ +**The Question:** Is DevAgent designed to work with different models for planning vs. implementation? + +**Answer:** DevAgent workflows are **model-agnostic**. They work with any LLM that can: +- Follow structured instructions +- Read and write markdown files +- Reference context from workspace + +**Current Usage Pattern:** +- **Planning workflows** (`/research`, `/create-spec`, `/plan-tasks`) → Use best available model (e.g., Claude Sonnet 4.5) +- **Implementation** (Cursor AI with task prompts) → Use auto or best available model + +**Potential Enhancement:** +- **Background agents** (Codegen) → Can run implementation tasks asynchronously +- See `.devagent/core/workflows/codegen/run-codegen-background-agent.md` for details + +**Best Practice:** +- Use best models for **planning** (research, spec, tasks) — these benefit from reasoning +- Use auto or best models for **implementation** — depends on token budget and complexity +- Consider background agents for **independent tasks** that can run in parallel + +--- + +## Best Practices & Recommendations + +### 1. Start with `/new-feature`, Then Research + +**Don't skip the feature hub.** Even for simple features, creating a feature hub provides: +- Centralized progress tracking (`AGENTS.md`) +- Organized artifact storage (research/, spec/, tasks/) +- Clear ownership and status + +**Workflow:** +``` +/new-feature "Brief description" + ↓ +/research "Specific questions" + ↓ +[Continue based on complexity] +``` + +### 2. Always Reference Feature Hub in Workflows + +When chaining workflows, always include the feature hub path: + +``` +/research "question" + Feature: .devagent/workspace/features/active/YYYY-MM-DD_feature-slug/ +``` + +This ensures workflows can: +- Read existing artifacts (research, clarification, spec) +- Update `AGENTS.md` with progress +- Maintain context across workflow executions + +### 3. 
Use AGENTS.md as Your North Star + +`AGENTS.md` is the **single source of truth** for feature progress: +- **Progress Log** — Chronological history of what happened +- **Implementation Checklist** — What's done, in progress, or pending +- **Key Decisions** — Important choices with rationale +- **References** — Links to all artifacts (research, spec, tasks) + +**Check `AGENTS.md` before:** +- Starting a new workflow +- Resuming work after context switch +- Making scope changes +- Creating new artifacts + +### 4. Document Assumptions Explicitly + +When proceeding with incomplete information: +1. **Document in clarification packet** — Mark as `[ASSUMPTION]` with validation plan +2. **Update AGENTS.md** — Add to Key Decisions section +3. **Include in spec** — Document in Risks & Open Questions section +4. **Schedule validation** — Set a date/owner for assumption validation + +**Never proceed with undocumented assumptions.** + +### 5. Re-Run Workflows When Scope Changes + +If requirements change significantly: +1. **Re-run `/clarify-feature`** — Update requirements and completeness +2. **Re-run `/create-spec`** — Create new spec version (use `-v2` suffix) +3. **Re-run `/plan-tasks`** — Create new task plan +4. **Re-run `/create-task-prompt`** — Create new task prompts +5. **Update `AGENTS.md`** — Document the change in Progress Log + +**Don't try to manually update old artifacts.** Re-running workflows ensures consistency. + +### 6. Work Through Tasks Sequentially + +Task prompts are designed to be executed **one at a time**: +1. Copy task execution prompt +2. Paste into Cursor chat with context references +3. AI implements the task +4. Validate (lint, typecheck, test) +5. Move to next task + +**Don't try to execute all tasks at once.** Each task builds on the previous one. + +### 7. 
Use `/review-progress` for Context Switches + +When stopping work (end of day, switching features, interruptions): +``` +/review-progress + Task: tasks/YYYY-MM-DD_task-prompts.md + Completed: Task 1.1, 1.2 + In Progress: Task 1.3 (50% complete) + Blocked: Need clarification on API format +``` + +This creates a checkpoint for easy resumption. + +### 8. Keep Artifacts Organized + +**File Naming:** +- Research: `research/YYYY-MM-DD_topic.md` +- Clarification: `clarification/YYYY-MM-DD_description.md` +- Spec: `spec/YYYY-MM-DD_feature-spec.md` (use `-v2` for major revisions) +- Tasks: `tasks/YYYY-MM-DD_task-plan.md` and `tasks/YYYY-MM-DD_task-prompts.md` + +**Versioning:** +- Major revisions: Use `-v2`, `-v3` suffixes +- Minor updates: Re-run workflow (overwrites old file, but history in `AGENTS.md`) + +### 9. Validate Early and Often + +After each implementation task: +``` +/validate-code +``` + +This runs: +- `bun run lint` — Linting errors +- `bun run typecheck` — TypeScript errors +- `bun run test` — Test failures + +**Fix errors immediately** before moving to the next task. + +### 10. Don't Be Afraid to Iterate + +The datatable feature went through: +- Initial research → TanStack Table approach +- Implementation → TanStack Table complete +- Scope change → Migrate to @lambdacurry/forms +- Re-clarification → Updated requirements +- Re-spec → v2 specification +- Re-plan → Migration task plan +- Re-prompt → Migration task prompts + +**This is normal.** Workflows are designed to be re-run when scope changes. + +--- + +## Summary + +### Key Takeaways + +1. **DevAgent is a tool, not an autonomous agent** — You remain the coordinator +2. **Workflows can be re-run** — Don't be afraid to iterate when scope changes +3. **AGENTS.md is your north star** — Check it before starting, update it as you progress +4. **Workflows chain naturally** — Research → Clarify → Spec → Tasks → Prompts +5. **Document assumptions** — Never proceed with undocumented assumptions +6. 
**Use `/review-progress`** — For context switches and resumption +7. **Validate early and often** — Fix errors before moving to next task +8. **Iteration is expected** — Complex features will go through multiple cycles + +### For New Devs + +**Start Here:** +1. Read `.devagent/core/README.md` (overview) +2. Read `.devagent/core/AGENTS.md` (workflow roster) +3. Read `DEVELOPER-GUIDE.md` (this document's companion) +4. Start with `/new-feature` for your first feature +5. Follow the workflow sequence, referencing examples in DEVELOPER-GUIDE.md + +**Common Mistakes to Avoid:** +- Skipping feature hub creation +- Not referencing feature hub in workflow calls +- Trying to execute all tasks at once +- Proceeding with undocumented assumptions +- Not checking `AGENTS.md` before starting work + +### For DevAgent Creators + +**Potential Enhancements:** +1. **Getting Started Guide** — High-level overview explaining commands → workflows relationship +2. **Workflow Chaining Hints** — After each workflow, suggest next steps with ready-to-run commands +3. **Gap Handling Guidance** — When research finds `[NEEDS CLARIFICATION]`, provide clear next steps +4. **Progress Resumption** — Better tooling for resuming work after context switches +5. **Model Recommendations** — Guidance on which models to use for which workflows +6. **Background Agent Integration** — Clearer documentation on using Codegen for parallel execution + +--- + +**Last Updated:** 2025-11-13 +**Related Documents:** +- `DEVELOPER-GUIDE.md` — Comprehensive workflow guide with examples +- `.devagent/core/README.md` — Core kit setup and usage +- `.devagent/core/AGENTS.md` — Workflow roster and reference + From 508a72abac28a892a499c2787a93becbdde3a207 Mon Sep 17 00:00:00 2001 From: Jake Ruesink Date: Tue, 30 Dec 2025 11:10:21 -0600 Subject: [PATCH 3/3] Update Developer Guide and Learned Lessons to Reflect Workflow Consolidation - Renamed the Developer Guide to emphasize the plan-driven feature implementation approach. 
- Consolidated the previous `create-spec` and `plan-tasks` workflows into a single `create-plan` step, streamlining the feature implementation process. - Updated all relevant commands in the Developer Guide and Learned Lessons to use the new `devagent` command format. - Enhanced documentation clarity by providing detailed descriptions of the new workflows and their execution processes. - Adjusted examples and best practices to align with the updated workflow structure, ensuring consistency across documentation. --- .devagent/DEVELOPER-GUIDE.md | 362 +++++++++++++---------------------- .devagent/learned-lessons.md | 198 +++++++++---------- 2 files changed, 228 insertions(+), 332 deletions(-) diff --git a/.devagent/DEVELOPER-GUIDE.md b/.devagent/DEVELOPER-GUIDE.md index 0cbc980..c4b3726 100644 --- a/.devagent/DEVELOPER-GUIDE.md +++ b/.devagent/DEVELOPER-GUIDE.md @@ -1,7 +1,9 @@ -# Developer Guide: Spec-Driven Feature Implementation +# Developer Guide: Plan-Driven Feature Implementation This guide walks you through implementing new features using the DevAgent workflow system. Follow these steps to go from idea to implementation with proper documentation and validation. +**Note:** This guide has been updated to reflect the consolidated workflow system. The previous `create-spec` and `plan-tasks` workflows have been merged into `create-plan`, and `implement-plan` now automates task execution. All workflows use the `devagent [workflow-name]` invocation format. + ## Table of Contents 1. [Quick Start](#quick-start) @@ -19,26 +21,22 @@ For a simple feature, the typical flow is: ```bash # 1. Scaffold feature hub -/new-feature "Add user profile editing" +devagent new-feature "Add user profile editing" # 2. Research existing patterns -/research "How do we handle form editing in this codebase?" +devagent research "How do we handle form editing in this codebase?" # 3. Clarify requirements -/clarify-feature - -# 4. 
Create specification -/create-spec +devagent clarify-feature -# 5. Plan implementation tasks -/plan-tasks +# 4. Create plan (combines spec + task planning) +devagent create-plan -# 6. Generate task prompts -/create-task-prompt +# 5. Implement tasks from plan +devagent implement-plan -# 7. Implement (using Cursor AI with task prompts) -# 8. Validate code -/validate-code +# 6. Review progress (optional, when switching contexts) +devagent review-progress ``` --- @@ -50,26 +48,20 @@ The DevAgent system uses a structured workflow to ensure features are well-resea ``` Feature Idea ↓ -[new-feature] → Scaffold feature hub - ↓ -[research] → Investigate technical patterns & constraints - ↓ -[clarify-feature] → Validate requirements with stakeholders - ↓ -[create-spec] → Write detailed specification +devagent new-feature → Scaffold feature hub ↓ -[plan-tasks] → Break down into implementation tasks +devagent research → Investigate technical patterns & constraints ↓ -[create-task-prompt] → Generate AI-ready task prompts +devagent clarify-feature → Validate requirements with stakeholders ↓ -Implementation → Code using Cursor AI +devagent create-plan → Create comprehensive plan (product context + implementation tasks) ↓ -[validate-code] → Lint, typecheck, test +devagent implement-plan → Execute tasks from plan automatically ↓ Complete ``` -**For complex features**, use the full workflow. **For simple enhancements**, you can skip directly to `research` → `create-task-prompt`. +**For complex features**, use the full workflow. **For simple enhancements**, you can skip directly to `devagent research` → `devagent create-plan` → `devagent implement-plan`. --- @@ -77,7 +69,7 @@ Complete ### Step 1: Scaffold Feature Hub -**Command:** `/new-feature` +**Command:** `devagent new-feature` **When to use:** Start here for any new feature, even if it's just an idea. 
@@ -107,7 +99,7 @@ AI: Creates .devagent/workspace/features/2025-11-06_simple-datatable-to-view-dat ### Step 2: Research Technical Patterns -**Command:** `/research` +**Command:** `devagent research` **When to use:** Before writing specs, investigate existing code patterns, libraries, and constraints. @@ -185,7 +177,7 @@ AI: [Provides technical evidence for decision] ### Step 3: Clarify Requirements -**Command:** `/clarify-feature` +**Command:** `devagent clarify-feature` **When to use:** When requirements are unclear or you need stakeholder validation. @@ -370,46 +362,45 @@ Ready for spec --- -### Step 4: Create Specification +### Step 4: Create Plan -**Command:** `/create-spec` +**Command:** `devagent create-plan` **When to use:** After research and clarification are complete (or sufficient for MVP). **Input:** Feature hub path and research/clarification references **What it does:** -- Synthesizes research and requirements into detailed spec -- Defines functional narrative, technical approach, and acceptance criteria -- Creates spec document in `spec/YYYY-MM-DD_feature-spec.md` +- Synthesizes research and requirements into a comprehensive plan +- Combines product context (objectives, users, solution principles) with implementation tasks +- Creates plan document in `plan/YYYY-MM-DD_feature-plan.md` - Links to research and clarification artifacts +- **Note:** This workflow consolidates the previous `create-spec` and `plan-tasks` workflows into a single step **Example:** ``` -You: /create-spec +You: devagent create-plan -AI: Reads research and clarification packets, creates comprehensive spec: - - Context & Objectives - - User Stories - - Functional Requirements - - Technical Approach (TanStack Table, server-side pagination) - - Acceptance Criteria +AI: Reads research and clarification packets, creates comprehensive plan: + - Product Context: Objectives, Users, Solution Principles + - Implementation Tasks: Concrete files/modules to modify + - Acceptance Criteria: 
Behavior-focused validation - Risks & Open Questions - Creates: spec/2025-11-06_datatable-specification.md + Creates: plan/2025-11-06_datatable-plan.md ``` -**Output:** Review-ready specification document +**Output:** Review-ready plan document with both product context and implementation tasks -**⚠️ Handling Spec Gaps:** +**⚠️ Handling Plan Gaps:** -If `create-spec` identifies missing information, it will flag gaps and recommend next steps: +If `create-plan` identifies missing information, it will flag gaps and recommend next steps: -**Example: Spec Identifies Missing Requirements** +**Example: Plan Identifies Missing Requirements** ``` -You: /create-spec +You: devagent create-plan AI: Reads clarification packet (6/8 complete), identifies gaps: @@ -418,11 +409,11 @@ AI: Reads clarification packet (6/8 complete), identifies gaps: - [MISSING] Error handling strategy undefined - [ASSUMPTION] Using TanStack Table (from research, needs validation) - Recommendation: Run /research for API patterns + Recommendation: Run devagent research for API patterns or update clarification packet with technical decisions." You: [Choose one:] - Option A: /research "API endpoint patterns for pagination" + Option A: devagent research "API endpoint patterns for pagination" Option B: [Update clarification packet with technical decisions] Option C: [Proceed with assumptions documented in Risks section] ``` @@ -430,136 +421,63 @@ You: [Choose one:] **What to do:** - **For technical gaps:** Run additional research - **For requirement gaps:** Update clarification packet, re-run clarify-feature -- **For assumptions:** Document in spec's Risks section, proceed with validation plan +- **For assumptions:** Document in plan's Risks section, proceed with validation plan --- -### Step 5: Plan Implementation Tasks +### Step 5: Implement Plan -**Command:** `/plan-tasks` +**Command:** `devagent implement-plan` -**When to use:** After spec is approved, before starting implementation. 
+**When to use:** After plan is approved, ready to start implementation. -**Input:** Spec path +**Input:** Plan document path (or feature hub path) **What it does:** -- Breaks spec into ordered, execution-focused tasks -- Maps tasks to specific files/modules -- Includes validation (tests, linting) in each task -- Creates task plan in `tasks/YYYY-MM-DD_task-plan.md` +- Reads implementation tasks from the plan document +- Executes tasks sequentially in dependency order +- Tracks progress in `AGENTS.md` (Implementation Checklist and Progress Log) +- Validates task dependencies before execution +- Skips non-coding tasks gracefully +- Pauses only for truly ambiguous decisions or blockers **Example:** ``` -You: /plan-tasks +You: devagent implement-plan + Plan: plan/2025-11-06_datatable-plan.md -AI: Analyzes spec, creates task breakdown: +AI: Parses plan document, extracts implementation tasks: Task 1: Create DataTable component using TanStack Table Task 2: Implement server-side pagination endpoint Task 3: Add route for dataset viewing Task 4: Write tests for DataTable component Task 5: Write tests for pagination endpoint - Creates: tasks/2025-11-06_datatable-implementation-plan.md -``` - -**Output:** Task plan with concrete implementation steps - ---- - -### Step 6: Generate Task Prompts - -**Command:** `/create-task-prompt` - -**When to use:** When ready to start coding, convert tasks into AI-ready prompts. 
- -**Input:** Task plan or spec path - -**What it does:** -- Converts tasks into detailed AI execution prompts -- Includes context references (research, specs, code paths) -- Creates task prompt file in `tasks/YYYY-MM-DD_task-slug.md` -- Each task includes: description, execution prompt, file hints, context refs - -**Example:** - -``` -You: /create-task-prompt - -AI: Creates task prompt file with: - Task 1.1: Create DataTable component - - Execution prompt: "Create a DataTable component using TanStack - Table v8 that displays dataset rows with server-side pagination..." - - File hints: app/components/DataTable.tsx, packages/components/src/ui/table.tsx - - Context refs: research/2025-11-06_datatable-research.md, - spec/2025-11-06_datatable-spec.md + Executes Task 1: [Creates DataTable component] + Updates AGENTS.md: Task 1 marked complete - Creates: tasks/2025-11-06_datatable-component-task.md -``` - -**Output:** AI-ready task prompts with full context - ---- - -### Step 7: Implement Using Cursor AI - -**When to use:** During coding, reference task prompts in Cursor chat. - -**How to use:** -1. Open the task prompt file in Cursor -2. Copy the execution prompt for the current task -3. Paste into Cursor chat with context references -4. AI implements the task using provided context - -**Example:** - -``` -You: [Paste task prompt from tasks/2025-11-06_datatable-component-task.md] + Executes Task 2: [Implements pagination endpoint] + Updates AGENTS.md: Task 2 marked complete - Task 1.1: Create DataTable component using TanStack Table v8... - - Reference: research/2025-11-06_datatable-research.md - Reference: spec/2025-11-06_datatable-spec.md - Reference: packages/components/src/ui/table.tsx - -AI: [Implements DataTable component following spec and research findings] + [Continues with remaining tasks...] ``` -**Best Practice:** Work through tasks sequentially, validating each before moving to the next. 
- ---- - -### Step 8: Validate Code - -**Command:** `/validate-code` - -**When to use:** After implementing a task or before committing. - -**What it does:** -- Runs `bun run lint` - checks for linting errors -- Runs `bun run typecheck` - checks for TypeScript errors -- Runs `bun run test` - runs all tests -- Provides specific fix suggestions if errors found - -**Example:** - -``` -You: /validate-code +**Output:** Implemented code with progress tracked in `AGENTS.md` -AI: Running lint... ✓ No errors - Running typecheck... ✗ Error in DataTable.tsx:42 - Running tests... ✗ 2 tests failing - - [Provides specific fix suggestions with file paths and line numbers] -``` +**⚠️ Important Notes:** -**Output:** Validation results with fix suggestions +- **Plan document is read-only** — the workflow does not modify the plan +- **Dependencies are validated** — tasks with incomplete dependencies are skipped +- **Non-coding tasks are skipped** — only coding tasks are executed +- **Progress is tracked** — `AGENTS.md` is updated after each task completion +- **Manual validation** — Run `bun run lint`, `bun run typecheck`, and `bun run test` manually after implementation --- -### Step 9: Review Progress (Optional) +### Step 6: Review Progress (Optional) -**Command:** `/review-progress` +**Command:** `devagent review-progress` **When to use:** When switching contexts, at end of day, or after interruptions. 
@@ -603,23 +521,26 @@ AI: Creates checkpoint: | Command | Purpose | Input | Output | |---------|---------|-------|--------| -| `/new-feature` | Scaffold feature hub | Title/description | Feature hub with `AGENTS.md` | -| `/research` | Investigate technical patterns | Research question | Research packet | -| `/clarify-feature` | Validate requirements | Feature concept | Clarification packet | -| `/create-spec` | Write specification | Feature hub path | Specification document | -| `/plan-tasks` | Break down implementation | Spec path | Task plan | -| `/create-task-prompt` | Generate AI prompts | Task plan/spec | Task prompt file | -| `/validate-code` | Validate implementation | None | Lint/typecheck/test results | -| `/review-progress` | Capture progress state | Task/spec + progress | Checkpoint file | +| `devagent new-feature` | Scaffold feature hub | Title/description | Feature hub with `AGENTS.md` | +| `devagent research` | Investigate technical patterns | Research question | Research packet | +| `devagent clarify-feature` | Validate requirements | Feature concept | Clarification packet | +| `devagent create-plan` | Create comprehensive plan | Feature hub path | Plan document (product context + tasks) | +| `devagent implement-plan` | Execute implementation tasks | Plan document path | Implemented code + progress updates | +| `devagent review-progress` | Capture progress state | Plan/task + progress | Checkpoint file | +| `devagent review-pr` | Review pull requests | PR number/URL | Review artifact | +| `devagent compare-prs` | Compare multiple PRs | PR numbers/URLs | Comparison artifact | +| `devagent mark-feature-complete` | Archive completed feature | Feature hub path | Moved to completed/ with path updates | ### Additional Commands | Command | Purpose | When to Use | |---------|---------|-------------| -| `/brainstorm` | Generate feature ideas | Exploring solution space | -| `/update-product-mission` | Update product mission | Mission conflicts or updates | -| 
`/update-tech-stack` | Document tech stack | New project or major changes | -| `/update-constitution` | Update governance docs | Constitutional changes | +| `devagent brainstorm` | Generate feature ideas | Exploring solution space | +| `devagent update-product-mission` | Update product mission | Mission conflicts or updates | +| `devagent update-tech-stack` | Document tech stack | New project or major changes | +| `devagent update-constitution` | Update governance docs | Constitutional changes | +| `devagent build-workflow` | Create new workflows | Adding new agent capabilities | +| `devagent update-devagent` | Update DevAgent core | Syncing with latest DevAgent changes | --- @@ -662,24 +583,24 @@ Is it a technical question? **Scenario:** Add a "Copy to Clipboard" button to dataset cards. ``` -You: /new-feature "Add copy dataset ID to clipboard button" +You: devagent new-feature "Add copy dataset ID to clipboard button" AI: [Creates feature hub] -You: /research "How do we handle clipboard operations in React? +You: devagent research "How do we handle clipboard operations in React? Are there existing copy buttons in the codebase?" AI: [Finds existing clipboard patterns, creates research packet] -You: /create-task-prompt +You: devagent create-plan -AI: [Creates task prompt with implementation steps] +AI: [Creates plan with implementation tasks] -You: [Implements using Cursor AI with task prompt] +You: devagent implement-plan -You: /validate-code +AI: [Executes tasks from plan, implements feature] -AI: [Validates code, provides fixes if needed] +You: [Manually run: bun run lint && bun run typecheck && bun run test] ``` **Result:** Feature implemented with minimal overhead. @@ -691,17 +612,17 @@ AI: [Validates code, provides fixes if needed] **Scenario:** Build a data visualization dashboard. 
``` -You: /new-feature "Create data visualization dashboard" +You: devagent new-feature "Create data visualization dashboard" AI: [Creates feature hub] -You: /research "What chart libraries are available? +You: devagent research "What chart libraries are available? How do we structure dashboard layouts? What's the data access pattern for aggregated queries?" AI: [Investigates, creates comprehensive research packet] -You: /clarify-feature +You: devagent clarify-feature AI: [Creates clarification packet, identifies gaps] "Need stakeholder input on: chart types, refresh intervals, @@ -709,27 +630,17 @@ AI: [Creates clarification packet, identifies gaps] You: [Meets with stakeholders, fills gaps] -You: /create-spec - -AI: [Creates detailed spec with all requirements] - -You: /plan-tasks - -AI: [Breaks down into 8 implementation tasks] - -You: /create-task-prompt +You: devagent create-plan -AI: [Creates task prompts for each task] +AI: [Creates comprehensive plan with product context and 8 implementation tasks] -You: [Implements Task 1 using Cursor AI] +You: devagent implement-plan -You: /validate-code +AI: [Executes tasks sequentially, tracks progress in AGENTS.md] -AI: [Validates, provides fixes] +You: [Manually validate after each task: bun run lint && bun run typecheck] -You: [Continues with remaining tasks...] - -You: /review-progress +You: devagent review-progress AI: [Captures progress, updates AGENTS.md] ``` @@ -743,16 +654,14 @@ AI: [Captures progress, updates AGENTS.md] **Scenario:** Fix form validation error. ``` -You: /research "Form validation error on email field - +You: devagent research "Form validation error on email field - what's the current validation logic?" AI: [Finds validation code, identifies issue] You: [Fixes bug directly] -You: /validate-code - -AI: [Validates fix] +You: [Manually run: bun run lint && bun run typecheck && bun run test] ``` **Result:** Quick fix without full workflow overhead. 
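
The manual validation step above (`bun run lint && bun run typecheck && bun run test`) relies on shell `&&` fail-fast chaining. As a minimal sketch — a hypothetical helper, not part of DevAgent — the same semantics look like this:

```python
import subprocess

def run_checks(commands):
    """Run each check in order; stop at the first failure (like `a && b && c`)."""
    for cmd in commands:
        print(f"Running: {cmd}")
        result = subprocess.run(cmd, shell=True)
        if result.returncode != 0:
            print(f"FAILED: {cmd}")
            return False
    print("All checks passed")
    return True

# In this repo the chain would be (per the guide):
# run_checks(["bun run lint", "bun run typecheck", "bun run test"])
```

The point of the fail-fast order matters: a typecheck failure often explains a test failure, so fixing errors in chain order avoids chasing downstream symptoms.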
@@ -764,11 +673,11 @@ AI: [Validates fix] **Scenario:** Build export feature, but requirements are vague. ``` -You: /new-feature "Add data export functionality" +You: devagent new-feature "Add data export functionality" AI: [Creates feature hub] -You: /research "What export formats are supported? +You: devagent research "What export formats are supported? How do we handle large file exports?" AI: Investigates, finds: @@ -778,7 +687,7 @@ AI: Investigates, finds: Creates: research/2025-11-06_export-research.md -You: /clarify-feature +You: devagent clarify-feature AI: Creates clarification packet: ⚠️ Problem Statement: Partial (why export needed?) @@ -814,7 +723,7 @@ You: [Email product manager] You: [Update clarification packet with answers] -You: /clarify-feature +You: devagent clarify-feature [Include PM answers] AI: Updates clarification packet: @@ -824,16 +733,16 @@ AI: Updates clarification packet: ✓ Constraints: Complete (50MB limit) Completeness: 6/8 dimensions complete - Status: Ready for Spec (with assumptions) + Status: Ready for Plan (with assumptions) -You: /create-spec +You: devagent create-plan -AI: Creates spec with documented assumptions: +AI: Creates plan with documented assumptions: - Technical approach for 50MB file generation - CSV/Excel implementation (PDF deferred) - Success tracking for export usage - Creates: spec/2025-11-06_export-specification.md + Creates: plan/2025-11-06_export-plan.md ``` **Result:** Iterative clarification cycle fills gaps before spec work. @@ -845,7 +754,7 @@ AI: Creates spec with documented assumptions: **Scenario:** Research discovers conflicting patterns in codebase. ``` -You: /research "How do we handle authentication in API routes?" +You: devagent research "How do we handle authentication in API routes?" 
AI: Investigates, finds: - Pattern A: Uses session cookies (older routes) @@ -858,7 +767,7 @@ AI: Investigates, finds: You: [Review research, see conflict] -You: /clarify-feature +You: devagent clarify-feature [Include auth pattern question] AI: Adds to clarification packet: @@ -873,9 +782,9 @@ Team Lead: "Use Bearer tokens for all new features" You: [Update clarification packet] -You: /create-spec +You: devagent create-plan -AI: Uses Bearer token pattern in spec +AI: Uses Bearer token pattern in plan ``` **Result:** Conflict resolved through clarification, spec uses consistent pattern. @@ -887,7 +796,7 @@ AI: Uses Bearer token pattern in spec **Scenario:** Need to proceed, but stakeholder is on vacation. ``` -You: /clarify-feature +You: devagent clarify-feature AI: Creates clarification packet: ⚠️ Success Criteria: Missing (adoption target?) @@ -907,15 +816,15 @@ You: "Assumptions (to be validated with [Stakeholder] on return): - Validation date: [Stakeholder return date] " -You: /create-spec +You: devagent create-plan [Include note about assumptions] -AI: Creates spec with: +AI: Creates plan with: - Assumptions clearly documented in Risks section - Validation plan included - Proceeds with best-effort assumptions - Creates: spec/2025-11-06_feature-spec.md + Creates: plan/2025-11-06_feature-plan.md [Later, when stakeholder returns] @@ -925,10 +834,10 @@ Stakeholder: "Adoption target should be 50%, not 40%" You: [Update clarification packet] -You: /create-spec +You: devagent create-plan [Re-run with updated requirements] -AI: Updates spec with corrected success metric +AI: Updates plan with corrected success metric ``` **Result:** Feature proceeds with documented assumptions, validated later. 
@@ -1246,8 +1155,7 @@ Stakeholder unavailable until next week ├── AGENTS.md # Progress tracker ├── research/ # Research packets ├── clarification/ # Requirement clarification - ├── spec/ # Specifications - └── tasks/ # Task plans & prompts + └── plan/ # Plan documents (product context + tasks) .agents/ └── commands/ # Command files (symlinked to .cursor/commands) @@ -1268,29 +1176,25 @@ Stakeholder unavailable until next week ``` New Feature Workflow: -1. /new-feature "Title" -2. /research "Question" -3. /clarify-feature -4. /create-spec -5. /plan-tasks -6. /create-task-prompt -7. [Implement with Cursor AI] -8. /validate-code -9. /review-progress (optional) +1. devagent new-feature "Title" +2. devagent research "Question" +3. devagent clarify-feature +4. devagent create-plan +5. devagent implement-plan +6. devagent review-progress (optional) Simple Enhancement: -1. /research "Question" -2. /create-task-prompt -3. [Implement] -4. /validate-code +1. devagent research "Question" +2. devagent create-plan +3. devagent implement-plan Bug Fix: -1. /research "Problem" -2. [Fix] -3. /validate-code +1. devagent research "Problem" +2. [Fix manually] +3. [Run: bun run lint && bun run typecheck && bun run test] ``` --- -*Last Updated: 2025-11-07* +*Last Updated: 2025-12-27* diff --git a/.devagent/learned-lessons.md b/.devagent/learned-lessons.md index 374c21b..6c7cbdc 100644 --- a/.devagent/learned-lessons.md +++ b/.devagent/learned-lessons.md @@ -49,52 +49,44 @@ Through actual usage, the structure became clearer: Based on the datatable feature implementation, here's how workflows were used in practice: ``` -1. /new-feature "Add datatable to view dataset data" +1. devagent new-feature "Add datatable to view dataset data" → Creates feature hub with AGENTS.md and folder structure → Recommends next steps (research, clarify) -2. /research "table components and data access patterns" +2. 
devagent research "table components and data access patterns" → Investigates codebase, finds existing patterns → Creates research packet with findings → Identifies gaps requiring clarification -3. /clarify-feature +3. devagent clarify-feature → Validates requirements across 8 dimensions → Creates clarification packet → Identifies missing information (4/8 complete initially) -4. /clarify-feature (re-run after gap-fill) +4. devagent clarify-feature (re-run after gap-fill) → Updates clarification packet with new information → Improves completeness (7/8 complete) -5. /create-spec - → Synthesizes research + clarification into spec - → Creates comprehensive specification document +5. devagent create-plan + → Synthesizes research + clarification into comprehensive plan + → Creates plan document with product context and implementation tasks + → Note: This workflow consolidates the previous create-spec and plan-tasks workflows -6. /plan-tasks - → Breaks spec into 6 tasks with 28 subtasks - → Creates implementation plan +6. devagent implement-plan + → Executes tasks from plan document sequentially + → Tracks progress in AGENTS.md automatically + → Validates dependencies before execution -7. /create-task-prompt - → Converts tasks into AI-ready execution prompts - → Includes context references and file hints - -8. [Implementation using Cursor AI] - → Execute tasks one by one using task prompts - -9. /clarify-feature (re-run for major direction change) +7. devagent clarify-feature (re-run for major direction change) → Scope changed: migrate to @lambdacurry/forms → Creates comprehensive clarification document → Updates completeness (8/8 complete) -10. /create-spec (re-run for v2) - → Creates new spec reflecting migration requirements - -11. /plan-tasks (re-run for migration) - → Creates migration task plan +8. devagent create-plan (re-run for migration) + → Creates new plan reflecting migration requirements -12. 
/create-task-prompt (re-run for migration) - → Creates migration task prompts +9. devagent implement-plan (re-run for migration) + → Executes migration tasks from updated plan ``` ### Key Insights @@ -102,12 +94,13 @@ Based on the datatable feature implementation, here's how workflows were used in **1. Workflows Can Be Re-Run** - If something changes, you can re-call the same command to update previous documents - This is powerful but can create confusion about which document is "current" -- **Solution:** Use clear versioning in filenames (e.g., `spec-v2.md`) and update `AGENTS.md` references +- **Solution:** Use clear versioning in filenames (e.g., `plan-v2.md`) and update `AGENTS.md` references **2. Workflows Chain Naturally** - Each workflow produces artifacts that feed into the next -- Research → Clarification → Spec → Tasks → Prompts -- **But:** You can skip steps for simple features (research → create-task-prompt) +- Research → Clarification → Plan → Implementation +- **But:** You can skip steps for simple features (research → create-plan → implement-plan) +- **Note:** The workflow has been simplified - `create-spec` and `plan-tasks` were consolidated into `create-plan` **3. Iteration is Expected** - The datatable feature went through multiple clarification cycles @@ -182,7 +175,7 @@ Much smoother experience **Solution A: Reference the Feature Hub Path** ``` -You: /research "What table components exist in the codebase? +You: devagent research "What table components exist in the codebase? How do we query organization database tables?" Feature: .devagent/workspace/features/active/2025-11-06_simple-datatable-to-view-data/ @@ -190,7 +183,7 @@ You: /research "What table components exist in the codebase? 
**Solution B: Reference the AGENTS.md File** ``` -You: /research "table components and data access patterns" +You: devagent research "table components and data access patterns" Context: See .devagent/workspace/features/active/2025-11-06_simple-datatable-to-view-data/AGENTS.md ``` @@ -203,29 +196,29 @@ You: /research "table components and data access patterns" **The Question:** If I stop working and come back later, how do I let the LLM know what to continue working on? -**Solution A: Use `/review-progress`** +**Solution A: Use `devagent review-progress`** ``` -You: /review-progress - Task: tasks/2025-11-06_datatable-component-task.md - Completed: Task 1.1 (DataTable component created) - In Progress: Task 1.2 (Server pagination endpoint) +You: devagent review-progress + Plan: plan/2025-11-06_datatable-plan.md + Completed: Task 1 (DataTable component created) + In Progress: Task 2 (Server pagination endpoint) Blocked: Need clarification on pagination API format ``` This creates a checkpoint file and updates `AGENTS.md` with progress state. -**Solution B: Reference AGENTS.md and Task Prompts** +**Solution B: Reference AGENTS.md and Plan Document** ``` You: [Open feature hub AGENTS.md] [Review "Progress Log" and "Implementation Checklist"] - [Open current task prompt file] + [Open plan document] - Continue from Task 1.2: Implement server-side pagination endpoint - See: tasks/2025-11-06_task-prompts.md, Task 1.2 - Context: Feature hub AGENTS.md shows Task 1.1 complete + Continue from Task 2: Implement server-side pagination endpoint + See: plan/2025-11-06_datatable-plan.md, Task 2 + Context: Feature hub AGENTS.md shows Task 1 complete ``` -**Best Practice:** Use `/review-progress` when stopping work. When resuming, reference both `AGENTS.md` (for overall progress) and the specific task prompt file (for current task details). +**Best Practice:** Use `devagent review-progress` when stopping work. 
When resuming, reference both `AGENTS.md` (for overall progress) and the plan document (for task details). If using `devagent implement-plan`, it will automatically resume from where it left off. --- @@ -235,7 +228,7 @@ You: [Open feature hub AGENTS.md] **Solution A: Document Disagreement in Clarification** ``` -You: /clarify-feature +You: devagent clarify-feature Note: Research recommended TanStack Table, but I want to use @lambdacurry/forms instead. See research/2025-11-06_datatable-research.md @@ -246,7 +239,7 @@ The clarification packet will document this decision, and future workflows will **Solution B: Re-run Research with Different Focus** ``` -You: /research "How do we implement data tables with @lambdacurry/forms? +You: devagent research "How do we implement data tables with @lambdacurry/forms? What are the server-side pagination patterns?" Note: Previous research focused on TanStack Table (see @@ -263,27 +256,27 @@ This creates a new research packet that can be referenced in the spec. --- -### Q4: Is `/create-task-prompt` for one task or all tasks? +### Q4: How does `devagent implement-plan` work? -**The Question:** Does `/create-task-prompt` produce a master prompt for all tasks, or one prompt per task? +**The Question:** Does `devagent implement-plan` execute all tasks automatically, or do I need to run it multiple times? -**Answer:** It produces **one file with multiple task prompts** (one per task). Each task has: -- Execution prompt (detailed instructions) -- File hints (where to create/modify files) -- Context references (research, spec, code paths) -- Acceptance criteria +**Answer:** `devagent implement-plan` **executes tasks automatically** from the plan document. 
It: +- Parses the plan document to extract implementation tasks +- Validates task dependencies against AGENTS.md +- Executes tasks sequentially in dependency order +- Updates AGENTS.md after each task completion +- Skips non-coding tasks gracefully +- Pauses only for truly ambiguous decisions or blockers **Usage Pattern:** ``` -1. /create-task-prompt (creates tasks/2025-11-06_task-prompts.md) -2. Open task prompts file -3. Copy Task 1.1 execution prompt -4. Paste into Cursor chat with context references -5. AI implements Task 1.1 -6. Repeat for Task 1.2, 1.3, etc. +1. devagent create-plan (creates plan/2025-11-06_feature-plan.md) +2. devagent implement-plan (executes all tasks from plan) +3. Review AGENTS.md to see progress +4. Manually validate: bun run lint && bun run typecheck && bun run test ``` -**Best Practice:** Work through tasks **sequentially**. Each task builds on the previous one. Validate after each task before moving to the next. +**Best Practice:** The workflow executes as much as possible without stopping. Review progress in AGENTS.md after execution. For partial execution, you can specify a task range (e.g., "tasks 1-3"). 
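
The dependency handling described above can be sketched as follows. This is a hypothetical illustration of the execution order, not DevAgent's actual implementation, and the task shape (`id`, `deps`, `coding`) is assumed:

```python
def execute_plan(tasks):
    """Execute plan tasks sequentially, honoring dependencies.

    tasks: list of dicts with 'id', optional 'deps' (list of ids),
    and optional 'coding' (bool, defaults True).
    """
    completed, skipped = [], []
    for task in tasks:  # the plan lists tasks already in dependency order
        if not task.get("coding", True):
            skipped.append(task["id"])  # non-coding tasks are skipped gracefully
            continue
        if any(dep not in completed for dep in task.get("deps", [])):
            skipped.append(task["id"])  # incomplete dependency -> skip
            continue
        # ...execute the task here, then record progress (e.g. in AGENTS.md)...
        completed.append(task["id"])
    return completed, skipped
```

Because completed task ids feed each subsequent dependency check, a task skipped for a missing dependency also causes its own dependents to be skipped, which is why reviewing `AGENTS.md` after a run shows exactly where the chain stopped.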
--- @@ -306,10 +299,9 @@ Decision: Keep current feature as-is (it's functional) Current State: TanStack Table implementation, but needs major refactor New Requirement: Migrate to @lambdacurry/forms -Decision: Re-run /clarify-feature (update scope) - Re-run /create-spec (create v2 spec) - Re-run /plan-tasks (create migration plan) - Re-run /create-task-prompt (create migration prompts) +Decision: Re-run devagent clarify-feature (update scope) + Re-run devagent create-plan (create v2 plan) + Re-run devagent implement-plan (execute migration tasks) ``` **Decision Criteria:** @@ -334,8 +326,8 @@ Decision: Re-run /clarify-feature (update scope) - Reference context from workspace **Current Usage Pattern:** -- **Planning workflows** (`/research`, `/create-spec`, `/plan-tasks`) → Use best available model (e.g., Claude Sonnet 4.5) -- **Implementation** (Cursor AI with task prompts) → Use auto or best available model +- **Planning workflows** (`devagent research`, `devagent create-plan`) → Use best available model (e.g., Claude Sonnet 4.5) +- **Implementation** (`devagent implement-plan`) → Use best available model for automated execution **Potential Enhancement:** - **Background agents** (Codegen) → Can run implementation tasks asynchronously @@ -350,18 +342,18 @@ Decision: Re-run /clarify-feature (update scope) ## Best Practices & Recommendations -### 1. Start with `/new-feature`, Then Research +### 1. 
Start with `devagent new-feature`, Then Research **Don't skip the feature hub.** Even for simple features, creating a feature hub provides: - Centralized progress tracking (`AGENTS.md`) -- Organized artifact storage (research/, spec/, tasks/) +- Organized artifact storage (research/, clarification/, plan/) - Clear ownership and status **Workflow:** ``` -/new-feature "Brief description" +devagent new-feature "Brief description" ↓ -/research "Specific questions" +devagent research "Specific questions" ↓ [Continue based on complexity] ``` @@ -371,7 +363,7 @@ Decision: Re-run /clarify-feature (update scope) When chaining workflows, always include the feature hub path: ``` -/research "question" +devagent research "question" Feature: .devagent/workspace/features/active/YYYY-MM-DD_feature-slug/ ``` @@ -407,45 +399,43 @@ When proceeding with incomplete information: ### 5. Re-Run Workflows When Scope Changes If requirements change significantly: -1. **Re-run `/clarify-feature`** — Update requirements and completeness -2. **Re-run `/create-spec`** — Create new spec version (use `-v2` suffix) -3. **Re-run `/plan-tasks`** — Create new task plan -4. **Re-run `/create-task-prompt`** — Create new task prompts -5. **Update `AGENTS.md`** — Document the change in Progress Log +1. **Re-run `devagent clarify-feature`** — Update requirements and completeness +2. **Re-run `devagent create-plan`** — Create new plan version (use `-v2` suffix) +3. **Re-run `devagent implement-plan`** — Execute updated tasks +4. **Update `AGENTS.md`** — Document the change in Progress Log **Don't try to manually update old artifacts.** Re-running workflows ensures consistency. -### 6. Work Through Tasks Sequentially +### 6. Use `devagent implement-plan` for Automated Execution -Task prompts are designed to be executed **one at a time**: -1. Copy task execution prompt -2. Paste into Cursor chat with context references -3. AI implements the task -4. Validate (lint, typecheck, test) -5. 
Move to next task +The `devagent implement-plan` workflow executes tasks automatically: +1. Parses plan document for implementation tasks +2. Validates dependencies before execution +3. Executes tasks sequentially +4. Updates AGENTS.md after each task +5. Pauses only for blockers or ambiguous decisions -**Don't try to execute all tasks at once.** Each task builds on the previous one. +**For manual implementation:** You can still work through tasks manually by referencing the plan document, but `devagent implement-plan` automates the process. ### 7. Use `/review-progress` for Context Switches When stopping work (end of day, switching features, interruptions): ``` -/review-progress - Task: tasks/YYYY-MM-DD_task-prompts.md - Completed: Task 1.1, 1.2 - In Progress: Task 1.3 (50% complete) +devagent review-progress + Plan: plan/YYYY-MM-DD_feature-plan.md + Completed: Task 1, 2 + In Progress: Task 3 (50% complete) Blocked: Need clarification on API format ``` -This creates a checkpoint for easy resumption. +This creates a checkpoint for easy resumption. When resuming, `devagent implement-plan` will automatically continue from where it left off. ### 8. Keep Artifacts Organized **File Naming:** - Research: `research/YYYY-MM-DD_topic.md` - Clarification: `clarification/YYYY-MM-DD_description.md` -- Spec: `spec/YYYY-MM-DD_feature-spec.md` (use `-v2` for major revisions) -- Tasks: `tasks/YYYY-MM-DD_task-plan.md` and `tasks/YYYY-MM-DD_task-prompts.md` +- Plan: `plan/YYYY-MM-DD_feature-plan.md` (use `-v2` for major revisions) **Versioning:** - Major revisions: Use `-v2`, `-v3` suffixes @@ -453,9 +443,9 @@ This creates a checkpoint for easy resumption. ### 9. 
Validate Early and Often -After each implementation task: +After `devagent implement-plan` execution or manual implementation: ``` -/validate-code +bun run lint && bun run typecheck && bun run test ``` This runs: @@ -463,7 +453,7 @@ This runs: - `bun run typecheck` — TypeScript errors - `bun run test` — Test failures -**Fix errors immediately** before moving to the next task. +**Fix errors immediately** before moving to the next task. Note: There is no `/validate-code` workflow - validation is done manually. ### 10. Don't Be Afraid to Iterate @@ -472,11 +462,10 @@ The datatable feature went through: - Implementation → TanStack Table complete - Scope change → Migrate to @lambdacurry/forms - Re-clarification → Updated requirements -- Re-spec → v2 specification -- Re-plan → Migration task plan -- Re-prompt → Migration task prompts +- Re-plan → v2 plan document +- Re-implementation → Migration tasks executed -**This is normal.** Workflows are designed to be re-run when scope changes. +**This is normal.** Workflows are designed to be re-run when scope changes. The workflow has been simplified - `create-spec` and `plan-tasks` were consolidated into `create-plan`, and `implement-plan` automates task execution. --- @@ -487,11 +476,13 @@ The datatable feature went through: 1. **DevAgent is a tool, not an autonomous agent** — You remain the coordinator 2. **Workflows can be re-run** — Don't be afraid to iterate when scope changes 3. **AGENTS.md is your north star** — Check it before starting, update it as you progress -4. **Workflows chain naturally** — Research → Clarify → Spec → Tasks → Prompts +4. **Workflows chain naturally** — Research → Clarify → Plan → Implement 5. **Document assumptions** — Never proceed with undocumented assumptions -6. **Use `/review-progress`** — For context switches and resumption -7. **Validate early and often** — Fix errors before moving to next task +6. **Use `devagent review-progress`** — For context switches and resumption +7. 
**Validate early and often** — Run lint/typecheck/test manually after implementation 8. **Iteration is expected** — Complex features will go through multiple cycles +9. **Workflow consolidation** — `create-spec` and `plan-tasks` merged into `create-plan` +10. **Automated implementation** — `devagent implement-plan` executes tasks automatically ### For New Devs @@ -499,13 +490,14 @@ The datatable feature went through: 1. Read `.devagent/core/README.md` (overview) 2. Read `.devagent/core/AGENTS.md` (workflow roster) 3. Read `DEVELOPER-GUIDE.md` (this document's companion) -4. Start with `/new-feature` for your first feature +4. Start with `devagent new-feature` for your first feature 5. Follow the workflow sequence, referencing examples in DEVELOPER-GUIDE.md **Common Mistakes to Avoid:** - Skipping feature hub creation - Not referencing feature hub in workflow calls -- Trying to execute all tasks at once +- Using old workflow names (`/create-spec`, `/plan-tasks` instead of `devagent create-plan`) +- Not using `devagent implement-plan` for automated task execution - Proceeding with undocumented assumptions - Not checking `AGENTS.md` before starting work @@ -515,13 +507,13 @@ The datatable feature went through: 1. **Getting Started Guide** — High-level overview explaining commands → workflows relationship 2. **Workflow Chaining Hints** — After each workflow, suggest next steps with ready-to-run commands 3. **Gap Handling Guidance** — When research finds `[NEEDS CLARIFICATION]`, provide clear next steps -4. **Progress Resumption** — Better tooling for resuming work after context switches +4. **Progress Resumption** — Better tooling for resuming work after context switches (partially addressed by `devagent implement-plan`) 5. **Model Recommendations** — Guidance on which models to use for which workflows -6. **Background Agent Integration** — Clearer documentation on using Codegen for parallel execution +6. 
**Validation Integration** — Consider adding automated validation step to `devagent implement-plan` --- -**Last Updated:** 2025-11-13 +**Last Updated:** 2025-12-27 **Related Documents:** - `DEVELOPER-GUIDE.md` — Comprehensive workflow guide with examples - `.devagent/core/README.md` — Core kit setup and usage