Been thinking about code review as a bottleneck.
What if we didn’t review production code or unit (fast) tests, BUT:
* all code must be fully tested
* all non-unit (slow) tests must be reviewed, and written before the implementation
* any exceptions to the above (basically, untested code) need full review
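For concreteness, the merge gate I have in mind boils down to something like this (just a sketch; the "tests/slow/" path convention and both inputs are made up, not a real CI API):

```python
# Sketch of the merge gate implied by the rules above. "tests/slow/" as
# the home of slow tests, and both inputs, are hypothetical conventions.

def blocking_review_required(changed_paths, uncovered_changed_lines):
    """Decide whether a change must wait for review before merging."""
    # Slow tests are always reviewed (and written before the implementation).
    touches_slow_tests = any(p.startswith("tests/slow/") for p in changed_paths)
    # Untested code is the exception that still needs full review.
    has_untested_code = bool(uncovered_changed_lines)
    return touches_slow_tests or has_untested_code

# A fully covered change to production code merges without blocking review.
assert not blocking_review_required(["src/billing.py"], [])
# A new slow test, or any uncovered line, blocks on review.
assert blocking_review_required(["tests/slow/test_checkout.py"], [])
```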
You can still review the rest, just not as a blocker.
What do you think? Would this work? Would it encourage good quality software?
Quincy
in reply to samir, a very hard worker:
I'd argue that it should, so as to reduce the impact of errors and the "responsibility burden" ...
However, I'm well aware that not every review has the same thoroughness, so now I'm thinking this could be the way to go:
review everything, but go casual on the parts that are already well tested?
samir, a very hard worker
in reply to Quincy:
@quincy That makes sense to me. I think I would argue that most of the time, you can go so casual that you can just not do it. 🙃
I think it’s important to be rigorous about the definition of “well-tested”. If you think it’s well-tested but then find a bug every week, change your definition.
silverwizard
in reply to samir, a very hard worker:
I feel like this is a weird question in that blockers are important for writing quality software?
You need to explain your code to a reviewer as part of writing it, and you know a reviewer will call you out for doing something bad.
Also - what you *meant* to do isn't tested. Just what you did.
samir, a very hard worker
in reply to silverwizard:
@silverwizard I have worked in teams that wrote very high-quality software without blocking code review.
My experience has been that many people, when faced with blocking code review, simply make so many changes at once that the reviewer takes a cursory glance and says, “I guess it’s fine”.
I agree with you that what you meant to do isn’t tested. I’m a big fan of acceptance tests (which are typically slow), and I would very much prefer if those were still reviewed.
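To make that split concrete, here's a toy pytest example (the function and the "slow" marker name are mine, purely for illustration):

```python
import pytest

def parse_amount(text):
    """Toy code under test: '9.99 USD' -> (999, 'USD')."""
    value, currency = text.split()
    return int(round(float(value) * 100)), currency

def test_parse_amount():
    # Fast unit test: pure function, no I/O. Under the proposal, this
    # merges without blocking review as long as coverage stays complete.
    assert parse_amount("9.99 USD") == (999, "USD")

@pytest.mark.slow  # register "slow" in pytest config to avoid a warning
def test_checkout_acceptance():
    # Slow acceptance test: written before the implementation, and still
    # reviewed. A real one would drive the whole system end to end; the
    # toy function stands in here to keep the sketch runnable.
    assert parse_amount("0.01 USD") == (1, "USD")
```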
silverwizard
in reply to samir, a very hard worker:
It just seems like you're saying people are writing quality code without review and review is being done badly; in that case it's kinda hard to measure anything.
The value of good review is when reviewers take their role seriously.
samir, a very hard worker
in reply to silverwizard:
@silverwizard I’m explaining my experiences, which of course may not be your experiences.
I think I agree with you, but I’m not sure why the review needs to be blocking in that case.
samir, a very hard worker
in reply to silverwizard:
@silverwizard That’s interesting. Can you define “quality” in this case?
Is there a reason why you prefer manual review to automated review here?
silverwizard
in reply to samir, a very hard worker:
* Code is committed to a branch
* Pipelines kick off unit tests
* Code review confirms code does what it means to and meets quality standards
* Code hits the develop branch
* Integration tests are run, both automated and manual, on staging servers
* Released to prod
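Roughly in code, assuming pytest markers for the unit/integration split (the commands and the deploy script are placeholders, not our actual setup):

```python
import subprocess

def gate(stage, cmd):
    """Run one pipeline stage; check=True aborts the flow on failure."""
    print(f"[{stage}] {cmd}")
    subprocess.run(cmd, shell=True, check=True)

gate("branch", 'pytest -m "not integration"')  # unit tests on the branch
# ... blocking code review happens here, before the merge ...
gate("develop", "pytest -m integration")       # integration tests on staging
gate("release", "./deploy.sh prod")            # placeholder deploy step
```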
samir, a very hard worker
in reply to silverwizard:
@silverwizard Gotcha. So you’re talking about code/internal quality, right? I assume the staging tests are the checks for external/customer-facing quality?
If that’s so, then why does one block a merge, and the other doesn’t?
Put another way, why verify internal quality in isolation, and external quality after integration? Is there an issue with integrating first?