Been thinking about code review as a bottleneck.

What if we didn’t review production code or unit (fast) tests, BUT:

* all code must be fully tested
* all non-unit (slow) tests must be reviewed, and written before the implementation
* any exceptions to the above (basically, untested code) need full review

You can still review the rest, just not as a blocker.
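
To make the fast/slow split concrete, here's a rough sketch of how a suite might tag the two kinds of tests (pytest and the `slow` marker are my assumptions here, and all the names are made up):

```python
import pytest


def add_vat(net: float, rate: float = 0.2) -> float:
    # Stand-in for production code (in reality this would be imported).
    return round(net * (1 + rate), 2)


def test_add_vat_default_rate():
    # Fast unit test: under the proposal it must exist, but merging
    # doesn't wait for a human to review it.
    assert add_vat(10.0) == 12.0


@pytest.mark.slow  # custom marker, registered in pytest.ini
def test_checkout_records_order():
    # Slow acceptance test: written before the implementation and
    # still reviewed as a blocker.
    pytest.skip("sketch only: would drive the real checkout flow end to end")
```

CI can then select each group with `pytest -m "not slow"` and `pytest -m slow`.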

What do you think? Would this work? Would it encourage good quality software?
I don't think tests can supplant code review; after all, one could be testing the wrong thing, or not testing according to best practices ...
@quincy Right, but can they supplant *blocking* code review?
I'd argue that it should stay blocking, so as to reduce the impact of errors and the "responsibility burden" ...

however I'm well aware that not every review has the same thoroughness, so now I'm thinking this could be the way to go –

review everything, but go casual on the parts that are already well tested?
@quincy That makes sense to me. I think I would argue that most of the time, you can go so casual that you can just not do it. 🙃

I think it’s important to be rigorous about the definition of “well-tested”. If you think it’s well-tested but then find a bug every week, change your definition.
Definitely. Defining what one means by "well-tested" is very important, right up there with "definition of done" :-)
I feel like this is a weird question in that blockers are important for writing quality software?

You need to explain your code to a reviewer as part of writing it, and you know a reviewer will call you out for doing bad.

Also - what you *meant* to do isn't tested. Just what you did.

@silverwizard I have worked in teams that wrote very high-quality software without blocking code review.

My experience has been that many people, when faced with blocking code review, simply make so many changes at once that the reviewer takes a cursory glance and says, “I guess it’s fine”.

I agree with you that what you meant to do isn’t tested. I’m a big fan of acceptance tests (which are typically slow), and I would very much prefer if those were still reviewed.
Those two paragraphs seem contradictory - not sure how to reconcile them?
It sounds like you're saying that people write quality code without review, and also that review is being done badly - if both are true, it's kinda hard to measure anything.

The value of good review comes from reviewers taking their role seriously.
@silverwizard I’m explaining my experiences, which of course may not be your experiences.

I think I agree with you, but I’m not sure why the review needs to be blocking in that case.
I guess I don't want code touching my release branch until all the quality steps are complete?
@silverwizard That’s interesting. Can you define “quality” in this case?

Is there a reason why you prefer manual review to automated review here?
@silverwizard Also, can I assume you don’t have any kind of manual testing phase, then?
So generally my processes have involved:

* code is committed to a branch
* pipelines kick off unit tests
* code review confirms code does what it means to and meets quality standards
* code hits the develop branch
* integration tests are run, both automated and manual, on staging servers
* released to prod
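
(Purely to illustrate where each suite would run in a pipeline shaped like that, here's a hypothetical gate script; a real setup would express this in CI configuration, and the pytest `slow` marker for the staging suite is an assumption.)

```python
# Hypothetical gate script, only to show where each check sits;
# not anyone's actual configuration.
import subprocess
import sys


def run(args):
    # Echo and run a command, reporting whether it succeeded.
    print("+", " ".join(args))
    return subprocess.run(args).returncode == 0


def pre_merge_gate():
    # Blocks the merge to develop: fast unit tests (plus the blocking review).
    return run(["pytest", "-m", "not slow"])


def post_merge_gate():
    # Runs on develop against staging: automated integration tests;
    # manual testing happens alongside this before the release to prod.
    return run(["pytest", "-m", "slow"])


if __name__ == "__main__":
    gate = pre_merge_gate if "--pre-merge" in sys.argv else post_merge_gate
    sys.exit(0 if gate() else 1)
```
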
@silverwizard Gotcha. So you’re talking about code/internal quality, right? I assume the staging tests are the checks for external/customer-facing quality?

If that’s so, then why does one block a merge, and the other doesn’t?

Put another way, why verify internal quality in isolation, and external quality after integration? Is there an issue with integrating first?