Review Lens

Software testing has a concept called a viewpoint.

A ruleset for what to test. Boundary values, error cases, state transitions, permissions. Define the viewpoints and anyone can cover the same ground. It is the mechanism for escaping the ad-hoc "I poked around a bit" approach.
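To make "viewpoint" concrete, here is a minimal sketch of the boundary-value viewpoint in Python. The validator, its name, and its 0-to-120 range are invented for illustration; the point is that the viewpoint, not the tester's intuition, dictates the cases.

```python
# Boundary-value viewpoint applied to a hypothetical age validator.
# The function and its accepted range (0-120) are invented for illustration.

def validate_age(age: int) -> bool:
    """Accept ages from 0 to 120 inclusive."""
    return 0 <= age <= 120


def test_boundary_values():
    # The viewpoint says: test each boundary and one step past it.
    # Anyone applying it produces the same four cases.
    assert validate_age(0) is True      # lower bound
    assert validate_age(-1) is False    # just below lower bound
    assert validate_age(120) is True    # upper bound
    assert validate_age(121) is False   # just above upper bound


test_boundary_values()
```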

Many projects have no equivalent viewpoint set for code reviews.

Nobody has defined what the reviewer should look for. The result: reviews become opinion pieces. "This could probably be written a bit more cleanly." That is the linter's job. "This variable name is unclear." That is human work, sure. Not the linter, not Prettier, not Biome. But it is not the essence of review either.

Some teams split reviews into first pass and second pass. First pass: an engineer on the team checks logic correctness, edge case coverage, consistency with existing code. Second pass: a lead or architect checks alignment with design principles, performance impact, security. Different viewpoints. When one person tries to cover everything, something slips through.

Teams that write their viewpoints down tend to run well. Build a review checklist. Line up the minimum items to inspect. Is input validated? Is error handling adequate? Any N+1 queries? Any missing authorization checks? With a checklist, even a junior reviewer can maintain a baseline of quality.
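As a concrete anchor for those items, here is a minimal sketch of a hypothetical order-listing handler with each checklist item marked where a reviewer would apply it. The endpoint, data, and names are invented; the in-memory dictionaries stand in for a database.

```python
# Hypothetical order-listing handler, annotated with the checklist items a
# reviewer would apply. Names and data are invented for illustration.

ORDERS = {1: {"user_id": 1, "item_ids": [10, 11]}, 2: {"user_id": 2, "item_ids": [12]}}
ITEMS = {10: "keyboard", 11: "mouse", 12: "monitor"}


def list_order_items(order_id: int, current_user_id: int) -> list[str]:
    # Checklist: is input validated? Reject ids that cannot exist.
    if not isinstance(order_id, int) or order_id <= 0:
        raise ValueError("order_id must be a positive integer")

    # Checklist: is error handling adequate? A missing order is an expected case.
    order = ORDERS.get(order_id)
    if order is None:
        raise KeyError(f"order {order_id} not found")

    # Checklist: any missing authorization checks? Ownership, not just login.
    if order["user_id"] != current_user_id:
        raise PermissionError("not your order")

    # Checklist: any N+1 queries? Against a real database, a per-item lookup in
    # a loop issues one query per item; fetch the batch in a single query instead.
    return [ITEMS[item_id] for item_id in order["item_ids"]]


print(list_order_items(1, current_user_id=1))  # ['keyboard', 'mouse']
```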

AI review tools like CodeRabbit have arrived. Define your viewpoints and you can delegate checklist-based review to AI as well.
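One way to make that delegation concrete: keep the viewpoints as plain data and render them into the instructions you hand the AI reviewer. This sketch is not tied to any particular tool's configuration format; the wording and structure are invented.

```python
# Viewpoints as data, rendered into review instructions for an AI reviewer.
# The categories and phrasing are illustrative, not any tool's schema.

REVIEW_VIEWPOINTS = {
    "correctness": "Check the logic against the stated requirement and its edge cases.",
    "input validation": "Every external input is validated before use.",
    "error handling": "Expected failures are handled; unexpected ones surface clearly.",
    "performance": "No N+1 queries; no per-iteration work that belongs outside the loop.",
    "authorization": "Every data access checks ownership, not just login.",
}


def render_review_instructions(viewpoints: dict[str, str]) -> str:
    lines = ["Review this pull request against the following viewpoints:"]
    lines += [f"- {name}: {rule}" for name, rule in viewpoints.items()]
    return "\n".join(lines)


print(render_review_instructions(REVIEW_VIEWPOINTS))
```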

Reviews turn contentious when viewpoints are not shared. A prioritizes readability. B prioritizes performance. C just wants it to work. All three are right. The priorities differ. Agree on viewpoints first and the review becomes confirmation, not debate.

I thought about this while having AI merge a PR with no human reviewer assigned.