AI Coding Guide

AI-Assisted Software Engineering

Team Workflows

By Richard Osborne, CTO at Visual Hive

TLDR

The methodology scales to teams with one key addition: shared documentation becomes the contract that keeps everyone building consistently. One person owns the architecture decisions. Everyone follows the same task → plan → act → score cycle. Code reviews focus on whether AI produced good output, not just whether humans wrote good code.

What Changes for Teams

Solo: You're the architect, developer, and reviewer. Decisions are fast. Consistency depends on your discipline.

Team: Architecture decisions need consensus. Everyone needs the same rules. Code reviews need to account for AI-generated code. Documentation becomes genuinely shared.

The Architecture Owner Role

Designate one person as Architecture Owner for each project. Their responsibilities:

  • Lead the Opus brainstorming session
  • Own and maintain ARCHITECTURE.md and .clinerules
  • Approve changes to foundational patterns
  • Run or oversee phase audits
  • Resolve technical conflicts

This doesn't mean they do all the building. It means they ensure architectural consistency while others execute tasks in parallel.

Shared Standards in .clinerules

Your .clinerules is the team contract. Every developer (and their AI) follows the same rules. No exceptions for "that's how I prefer to do it."

Add team-specific rules:

## Branch Convention
- Feature: feature/task-X.X-description
- Bug fix: fix/description
- Never commit directly to main

## PR Requirements
- Link to task spec in PR description
- Include confidence score in PR notes
- All tests must pass before requesting review

## AI Workflow
- Plan mode required for all tasks
- Confidence score required in every PR
- New conversations for every task

Dividing Work

The task-based system divides work naturally. One developer per task. Dependencies in the sprint plan prevent conflicts. Status updates keep everyone aligned.

| Task | Owner | Dependencies | Status |
|------|-------|-------------|--------|
| 2.1 Auth | Alice | 1.3 | ✅ |
| 2.2 Profiles | Bob | 2.1 | 🔄 |
| 2.3 Tools API | Carol | 1.3 | 🔄 |
| 2.4 Tool UI | Bob | 2.3 | ⬜ |

Two developers can work in parallel (Alice on 2.1, Carol on 2.3) without conflict because both tasks depend only on 1.3, not on each other.
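The parallelization logic above can be sketched in code. This is a minimal illustration, not part of the methodology: it assumes the sprint plan is kept as a plain dict of task → dependencies (task IDs mirror the table), and finds tasks that are ready to pick up.

```python
# Hypothetical sprint plan mirroring the table above.
tasks = {
    "2.1": {"owner": "Alice", "deps": ["1.3"], "done": True},
    "2.2": {"owner": "Bob",   "deps": ["2.1"], "done": False},
    "2.3": {"owner": "Carol", "deps": ["1.3"], "done": False},
    "2.4": {"owner": "Bob",   "deps": ["2.3"], "done": False},
}

# Tasks finished in earlier phases, plus anything marked done above.
completed = {"1.3"} | {t for t, info in tasks.items() if info["done"]}

def unblocked(tasks, completed):
    """Tasks not yet done whose dependencies are all complete."""
    return sorted(
        t for t, info in tasks.items()
        if not info["done"] and all(d in completed for d in info["deps"])
    )

print(unblocked(tasks, completed))  # ['2.2', '2.3']
```

Tasks 2.2 and 2.3 can proceed in parallel; 2.4 stays blocked until 2.3 lands.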

Reviewing AI-Generated Code

Code review changes when AI is the author:

  • Check the task spec — did AI build what was specified?
  • Verify tests exist — AI sometimes generates tests that test the wrong things
  • Look for hallucinated patterns — AI occasionally invents helper functions that don't exist elsewhere
  • Check the confidence score — was it honest? Does the code match the claimed score?
  • Architecture consistency — does it follow ARCHITECTURE.md conventions?
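One way to make the checklist above routine is to bake it into a pull request template. This is a hypothetical example (the file path and field names are assumptions, not prescribed by the methodology):

```markdown
<!-- .github/pull_request_template.md — hypothetical example -->
## Task
Link to task spec: <!-- e.g. the spec file for this task -->

## Confidence Score
Score from the AI session, with reasoning: <!-- e.g. 8/10 because... -->

## Review Checklist
- [ ] Output matches the task spec
- [ ] Tests exist and exercise the right behaviour
- [ ] No hallucinated helpers or invented patterns
- [ ] Follows ARCHITECTURE.md conventions
```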

LEARNINGS.md as Team Memory

Solo projects use LEARNINGS.md to capture personal discoveries. Teams use it as shared institutional memory. When any developer (with their AI) discovers a gotcha, it goes in LEARNINGS.md. Every future session across the whole team benefits.

This compounds fast. After 3 weeks, LEARNINGS.md contains hard-won knowledge that makes every AI session 20% more effective.
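A lightweight entry convention keeps the file searchable as it grows. The format below is a suggestion, not part of the methodology:

```markdown
## <date> — <one-line summary of the gotcha>
- Context: which task or area it surfaced in
- Gotcha: what went wrong and why
- Fix: what to do instead (with a code pointer if useful)
```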

Onboarding New Developers

The documentation-first approach makes onboarding significantly faster:

  1. New developer reads README, ARCHITECTURE.md, .clinerules — 45 minutes
  2. New developer reads sprint plan and task specs — 30 minutes
  3. New developer picks up the next unassigned task — immediate

No pairing sessions needed for context. No asking "how does the auth work" — it's in ARCHITECTURE.md. This is a direct benefit of documentation-first development.

Building something with AI?

Talk to Visual Hive →