TLDR
Two real projects. Real costs. Real timelines. This isn't theoretical — these are outcomes from applying exactly the methodology described in this guide.
Case Study 1: RISE Desktop App
~$400
total token cost
4 weeks
to working MVP
1 dev
no team required
What It Is
RISE is a desktop Electron application for the Low Code Foundation community. It provides members with a unified workspace for AI-assisted workflows, project management, and community resources.
Stack: Electron, React, TypeScript, SQLite, electron-store for persistence.
The Methodology in Practice
Week 1: Foundation
A 2-hour brainstorming session with Claude Opus scoped the MVP, settled on Electron as the framework, designed the data model, and generated all foundation docs. We set strict .clinerules mandating testing and plan-mode-first.
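To make "strict .clinerules" concrete, here is a hypothetical excerpt of what such a file could contain. The guide does not reproduce the actual RISE rules; every rule below is illustrative, written to match the practices described in this case study (plan-mode-first, mandatory testing, fresh conversations, confidence scoring):

```
# .clinerules (illustrative excerpt — not the actual RISE file)
- Always start in Plan mode. Do not write code until the plan is approved.
- Every task ships with passing tests. No merge without green tests.
- One task per conversation. Start a fresh conversation for each new task.
- After completing a task, self-score confidence from 1-10. Anything
  below 8/10 must be reworked before moving on.
- Consult ARCHITECTURE.md before touching the data model; never guess
  at schemas.
```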
Weeks 2–3: Execution
Worked through sprint tasks systematically. Each task averaged 45 minutes from "can we plan task X" to merged code. Confidence scoring kept quality consistent — three tasks came back below 8/10 and were fixed before continuing.
Week 4: Polish and Audit
Phase audit caught two security issues (unvalidated user input) and five medium-priority code quality issues. Fixed before launch. Final codebase was clean enough to hand to another developer with no explanation needed.
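The "unvalidated user input" finding is the kind of issue a phase audit typically surfaces. As a hedged sketch only (the function and rules below are hypothetical, not taken from the RISE codebase), the fix usually amounts to validating untrusted renderer input before it reaches the database:

```typescript
// Hypothetical example of an audit-driven fix: validate untrusted
// renderer input before it is used in a SQLite query or stored.
// The function name, length limits, and allowed characters are
// illustrative assumptions, not the actual RISE rules.
function sanitizeProjectName(input: unknown): string {
  if (typeof input !== "string") {
    throw new TypeError("project name must be a string");
  }
  const trimmed = input.trim();
  if (trimmed.length === 0 || trimmed.length > 100) {
    throw new RangeError("project name must be 1-100 characters");
  }
  // Allow letters, digits, underscores, spaces, and safe punctuation.
  if (!/^[\w\s.,()-]+$/.test(trimmed)) {
    throw new RangeError("project name contains disallowed characters");
  }
  return trimmed;
}
```

Parameterised queries still matter on the SQL side; input validation like this is the complementary layer the audit checks for at the IPC boundary.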
What Worked
- Thorough ARCHITECTURE.md meant AI never had to guess at schemas — zero schema-related bugs
- Strict .clinerules produced consistent test coverage throughout
- The fresh-conversation-per-task rule eliminated context drift entirely
- Phase audit caught real issues that daily development had normalised
What Was Painful
- Two tasks were scoped too broadly — each generated 500+ lines and scored 6/10. We had to split and redo them.
- One debugging session went in circles for 90 minutes before we used the context rescue pattern. Should have done that at minute 30.
Case Study 2: VH Conference Toolkit
~$350
total token cost
3 sprints
to production
Open source
public GitHub
What It Is
A suite of open-source tools for B2B event professionals — agenda builders, attendee engagement widgets, session rating tools. Built as standalone embeddable components that event organisers can configure and deploy without technical expertise.
Stack: SvelteKit, TypeScript, Tailwind CSS, PostgreSQL, Drizzle ORM, Docker, Hetzner.
The Methodology in Practice
This project demonstrates the methodology at its most thorough. The public GitHub repo shows exactly what good project documentation looks like in practice — including the full ARCHITECTURE.md with complete database schemas, the sprint-based task structure, and the .clinerules file enforcing quality standards.
Key decisions made during brainstorming: JSONB for tool configuration schemas (flexible, avoids over-engineering), three-tier hosting model (managed/self-hosted/embedded), standalone deployment rather than SaaS to avoid infrastructure costs for MVP validation.
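The JSONB decision can be sketched as a single typed config column plus a defensive parser, so each tool stores flexible configuration without a schema migration per tool. This is a minimal TypeScript sketch under stated assumptions: the field names (`theme`, `maxAttendees`, `features`) and defaults are illustrative, not taken from the actual VH Conference Toolkit schema:

```typescript
// Hypothetical shape of a tool's JSONB configuration. Field names
// and defaults are illustrative, not the real project schema.
interface ToolConfig {
  theme: "light" | "dark";
  maxAttendees: number;
  features: string[];
}

const DEFAULTS: ToolConfig = {
  theme: "light",
  maxAttendees: 500,
  features: [],
};

// Parse a raw JSONB value from the database, falling back to defaults
// for missing or malformed fields instead of failing the whole row.
function parseToolConfig(raw: unknown): ToolConfig {
  const obj = (typeof raw === "object" && raw !== null
    ? raw
    : {}) as Partial<ToolConfig>;
  return {
    theme: obj.theme === "dark" ? "dark" : DEFAULTS.theme,
    maxAttendees:
      typeof obj.maxAttendees === "number" && obj.maxAttendees > 0
        ? obj.maxAttendees
        : DEFAULTS.maxAttendees,
    features: Array.isArray(obj.features)
      ? obj.features.filter((f): f is string => typeof f === "string")
      : DEFAULTS.features,
  };
}
```

Because the column is a single JSONB blob, adding a new tool type or config field is a code change, not a database migration — the "flexible, avoids over-engineering" trade-off noted above, at the cost of validating on read as shown here.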
Results
The codebase is handoff-ready: any developer can read the docs and start contributing within an hour. Zero context onboarding sessions needed. The phase audit between Sprint 2 and Sprint 3 caught auth inconsistencies that would have been expensive to fix post-launch.
Key Insight
The cost of the methodology (brainstorming session, doc generation, scoring, audits) was approximately $50 extra over three sprints. The time saved in debugging, context re-establishment, and architectural inconsistencies paid that back in the first sprint alone.
Browse the Repo →