TLDR
- New conversation for every task. Prevents context pollution.
- Plan mode FIRST. Always review the approach before AI writes code. For fixes and debugging, this is critical.
- Minimal prompting. "Can we plan task 2.3?" is enough; AI reads your docs for context.
- Verify and score. Check the work, assign a confidence score, close the conversation, move to the next task.
The Cycle
1. Open new Cline chat / Claude Code session
2. "Can we please plan task X.X?"
3. AI reads docs, proposes approach
4. Review plan, ask questions, adjust
5. "Proceed" → execution
6. Approve commands as needed
7. AI completes work, provides confidence score
8. Verify it works (run app, check browser, run tests)
9. Close conversation
10. Next task

Every task follows this cycle. No exceptions.
Why New Conversation Per Task
Long conversations accumulate garbage: old debugging tangents, superseded decisions, conflicting context from abandoned approaches, token costs for context you're barely using. By message 80, AI is confused and you're paying for 80 messages of context on every response.
New conversation = fresh start. Your documentation provides continuity, not chat history.
Plan Mode: Non-Negotiable for Fixes
For new tasks following a sprint plan, plan mode confirms alignment. For fixes, debugging, and tweaks, plan mode is absolutely critical:
I need to fix this: [describe the problem]
Please investigate and propose a plan before making any changes.

Why this matters for fixes: without a plan, AI starts changing code based on its first guess. First guesses are often wrong for bugs, and changes made without a plan create new bugs. A 2-minute plan review saves 30 minutes of rework.
Minimal Prompting
You don't need elaborate prompts. This works:
"Can we please plan task 2.3?"
AI will read your .clinerules, ARCHITECTURE.md, the sprint plan, the task spec, and LEARNINGS.md, then propose an approach. If your docs are good, the prompt can be simple. Add context only when needed: external APIs, changed requirements, specific constraints.
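The documents listed above might be laid out like this in a repo. This tree is only an illustration of one possible arrangement; the exact paths and the example task filename are assumptions, not conventions this guide prescribes.

```
.clinerules
docs/
  ARCHITECTURE.md
  sprint-plan.md            # current sprint and task list
  tasks/
    2.3-auth-endpoints.md   # hypothetical per-task spec
  LEARNINGS.md
```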
Execution
Once the plan looks right: "Looks good. Proceed."
AI starts working — creating files, editing code, running commands. In Cline, approve terminal commands as they appear. Quick approval for standard stuff (install, test). Careful review for anything destructive (rm, database operations, git operations).
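The approval triage described above can be sketched as a simple classifier. This is a minimal Python sketch with illustrative patterns only; the command names it matches are assumptions, the lists are nowhere near exhaustive, and a human still makes the final call on every command.

```python
import re

# Destructive operations named in the guide: rm, database operations,
# risky git operations. Patterns are illustrative, not exhaustive.
DESTRUCTIVE = re.compile(r"\b(rm|drop|truncate)\b|git\s+(push\s+--force|reset\s+--hard)")

# "Standard stuff (install, test)" that gets quick approval.
ROUTINE = re.compile(r"^\s*(npm\s+(install|test|ci)|pip\s+install|pytest|go\s+test)\b")

def review_level(command: str) -> str:
    """Classify a proposed terminal command for approval speed."""
    if DESTRUCTIVE.search(command):
        return "careful-review"   # rm, database ops, destructive git
    if ROUTINE.match(command):
        return "quick-approve"    # install, test
    return "read-first"           # everything else: read before approving
```

Note the ordering: the destructive check runs first, so a command that is both routine-looking and destructive still gets flagged.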
Completion and Scoring
When AI finishes, it should provide a confidence score with evidence:
## Confidence: 8/10
**Done:**
- Login endpoint working
- Register endpoint working
- JWT tokens generating correctly
- Tests passing (8/8)
- Smoke tested in browser
**Deferred:**
- Rate limiting (Sprint 2 per roadmap)

Your job: run the app, test manually, and check that the score makes sense. Below 8/10? Fix before moving on. Then close the conversation and move to the next task.
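The gate in this section can be sketched in a few lines: pull the "Confidence: N/10" line out of the completion summary and refuse to close the task below 8. A minimal Python sketch, assuming the report format shown above.

```python
import re

def confidence(report: str) -> int:
    """Extract the N from a 'Confidence: N/10' line in the completion report."""
    m = re.search(r"Confidence:\s*(\d+)\s*/\s*10", report)
    if not m:
        raise ValueError("no confidence score found in report")
    return int(m.group(1))

def ready_to_close(report: str) -> bool:
    """Apply the guide's rule: below 8/10, fix before moving on."""
    return confidence(report) >= 8
```

The score is a prompt for your own verification, not a substitute for it; you still run the app yourself.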
When Things Go Wrong
- AI seems confused: Your docs might be incomplete or contradictory. Check them.
- AI does something unexpected: Stop execution, go back to plan mode, discuss.
- Confidence below 8: Don't move on. Ask what's missing and fix it.
- Task taking too long: Probably too big. Split into subtasks.
- Going in circles on a bug: Write a task doc, start fresh. See Context Management.
The Side-Task Trap
During a task, AI (or you) notices something else that needs fixing.

Wrong approach: "While we're here, let's fix that too."

Right approach: "That's important but not our current task. Write a quick task doc for it and let's stay focused."
Each conversation has limited context. Piling unrelated work into it increases token costs, increases confusion, and increases error risk. Write the task doc. Start a fresh conversation for it later.
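"Write the task doc" can be as mechanical as this Python sketch: capture the side issue to a file and get back to the current task. The `docs/tasks` location and the filename scheme are assumptions for illustration, not conventions this guide prescribes.

```python
from datetime import date
from pathlib import Path

def file_side_task(title: str, notes: str, docs_dir: str = "docs/tasks") -> Path:
    """Capture a side issue as a dated task doc instead of fixing it in-line."""
    slug = "-".join(title.lower().split())
    path = Path(docs_dir) / f"{date.today():%Y-%m-%d}-{slug}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(f"# {title}\n\nStatus: deferred\n\n{notes}\n")
    return path
```

The doc then becomes the seed for its own fresh conversation later, with none of the current task's context attached.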
Quick Reference
| Phase | What Happens |
|---|---|
| New conversation | Fresh context, AI reads docs |
| Plan Mode | AI proposes approach, you review |
| Execution | AI writes code, you approve commands |
| Completion | Confidence score, verification |
| Close | Done. Next task gets new conversation |