AI Coding Guide

AI-Assisted Software Engineering

Phase Audits

By Richard Osborne, CTO at Visual Hive

TL;DR

At the end of each major phase (end of sprint, pre-launch), start a fresh Claude conversation with no dev context and ask it to review your codebase critically. It will find problems you've stopped seeing. Fix them before continuing.

Why Fresh Eyes Work

When you've been building something for weeks, you stop seeing its problems. The workaround that was supposed to be temporary. The inconsistency you've learned to ignore. The security gap you meant to fix.

AI with fresh context has no blind spots from your development history. It reads the code as it is, not as you remember building it.

The Audit Prompt

I'm asking you to do a structured audit of this project.
You are a senior reviewer, NOT the developer.
Be direct and honest. Do not soften findings.

Review for:
1. Security vulnerabilities
2. Error handling gaps
3. Performance issues
4. Code consistency
5. Missing tests
6. Documentation gaps
7. Anything that would concern a senior engineer

Provide a prioritised list of findings with severity (Critical/High/Medium/Low).

When to Run Audits

  • End of sprint — before planning Sprint 2
  • Pre-launch — before going to production
  • After major refactors — did the refactor introduce problems?
  • When something feels off — trust the instinct, run an audit

Acting on Audit Findings

Create a task for each Critical/High finding. Add them to the sprint backlog with appropriate priority. Medium findings: add to sprint if time allows. Low findings: add to a backlog for future cleanup.

Don't dismiss audit findings because fixing them would be inconvenient. That's exactly when they're most important.
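The triage rules above can be sketched as a small script. This is a minimal, illustrative example that parses a markdown-style audit report (the `### Severity` headings and `-` bullets shown in the example output below) and groups findings by destination; the destination labels and parsing details are assumptions, not part of any specific tool.

```python
import re
from collections import defaultdict

# Where each severity goes, per the triage rules above.
# These destination labels are illustrative.
DESTINATION = {
    "Critical": "sprint backlog (must fix)",
    "High": "sprint backlog (must fix)",
    "Medium": "sprint if time allows",
    "Low": "future cleanup backlog",
}

def triage(report: str) -> dict:
    """Group findings in a markdown audit report by destination."""
    tasks = defaultdict(list)
    severity = None
    for line in report.splitlines():
        heading = re.match(r"###\s+(Critical|High|Medium|Low)", line)
        if heading:
            severity = heading.group(1)
        elif severity and line.startswith("- "):
            tasks[DESTINATION[severity]].append(f"[{severity}] {line[2:]}")
    return dict(tasks)

sample = """\
### Critical
- No input sanitisation on user profile fields

### High
- No rate limiting on auth endpoints

### Low
- Console.log statements left in production code
"""

for destination, items in triage(sample).items():
    print(destination)
    for item in items:
        print(" ", item)
```

Even if you track tasks by hand, the mapping is the point: Critical and High go straight into the sprint, everything else waits its turn.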

Example Audit Output

## Audit Findings

### Critical
- No input sanitisation on user profile fields
  (XSS risk, fix before launch)

### High
- JWT tokens stored in localStorage
  (move to httpOnly cookies)
- No rate limiting on auth endpoints

### Medium
- Error messages leak internal implementation details
- Missing tests for edge cases in tool config

### Low
- Console.log statements left in production code
- Some inconsistent naming conventions

This is a good audit — specific, actionable, prioritised. A one-hour audit pays for itself if it catches even a single Critical issue before launch.

Auditing the Docs Too

Ask Claude to audit your documentation separately:

Review our ARCHITECTURE.md, .clinerules, and sprint docs.
Are they accurate to the current codebase?
What's missing or outdated?

Docs drift. Code changes and docs don't follow. An audit catches that before the divergence causes problems.
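A quick way to spot drift before asking for a full doc audit is to compare when the docs and the code were last touched. The sketch below uses file modification times as a rough signal; a more faithful check would compare git commit dates. The file names and the 14-day grace period are illustrative assumptions.

```python
import os
import tempfile
import time
from pathlib import Path

def stale_docs(doc_paths, code_paths, grace_days=14):
    """Return docs not updated within grace_days of the newest code change."""
    newest_code = max(p.stat().st_mtime for p in code_paths)
    grace = grace_days * 86400  # days -> seconds
    return [p for p in doc_paths if newest_code - p.stat().st_mtime > grace]

# Demo with synthetic files so the sketch is self-contained.
with tempfile.TemporaryDirectory() as tmp:
    doc = Path(tmp, "ARCHITECTURE.md")
    code = Path(tmp, "app.py")
    doc.write_text("# docs")
    code.write_text("print('hi')")
    # Backdate the doc by 30 days to simulate drift.
    old = time.time() - 30 * 86400
    os.utime(doc, (old, old))
    names = [p.name for p in stale_docs([doc], [code])]
    print(names)  # ['ARCHITECTURE.md']
```

A check like this only flags candidates; the audit prompt above is what tells you whether the doc is actually wrong or just quietly accurate.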

Building something with AI?

Talk to Visual Hive →