I have some idle time and have been asked to look into producing a code review checklist. This is quite an interesting matter, as to me, much like programming, code reviewing is a highly idiosyncratic, ineffable thing, and so I wonder whether the community here has somehow managed to formalize it.<p>There are obvious things to put in such a checklist (e.g. short and legible methods, checking for potential null reference exceptions), but sometimes even glaring bugs can sneak through code review. A recent example: in a PR, I noticed a method call had been deleted and replaced with nothing, and asked the developer about it. The developer assured me the functionality that the method had provided was implemented elsewhere in the PR, and that indeed seemed to be the case.<p>It turns out that other implementation did not in fact cover all the functionality the deleted method call had provided, and an entire feature was lost due to that PR. In light of this, something else that could go in a checklist is "when a method call is deleted, verify what the method did". Another issue: deleting that method call left the method itself with no references (dead code), yet it was not deleted, so neither I nor the other reviewers saw the method body in the diff, which would have shown that a feature was being lost. So another item could be "If method calls are deleted, check for dead code".<p>Do you have a formal method for conducting code reviews? What things do you look for?
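<p>To make that concrete, here is a hypothetical sketch of the shape of the change (all class and method names are invented); the point is that the surviving method body never shows up in the diff:
<pre><code>// Hypothetical illustration only; names invented for this sketch.
// Before the PR, checkout() applied the discount itself:
class OrderServiceBefore {
    void checkout(Order order) {
        applyLoyaltyDiscount(order); // this call was deleted in the PR
        charge(order);
    }
    void applyLoyaltyDiscount(Order order) { /* the feature that was lost */ }
    void charge(Order order) { /* ... */ }
}

// After the PR the call is gone, but the method was left behind with no
// remaining references. Because its body was never touched, it did not
// appear in the diff, so reviewers had no prompt to ask what it used to do.
class OrderServiceAfter {
    void checkout(Order order) {
        charge(order); // "moved" elsewhere in the PR, but only partially
    }
    void applyLoyaltyDiscount(Order order) { /* now dead code */ }
    void charge(Order order) { /* ... */ }
}

class Order { /* fields omitted */ }
</code></pre>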
In my experience, code reviews are more about knowledge sharing than correctness checking (unit tests and CI are better direct correctness checks). You prioritize understanding the code that has changed and hope you get correctness as a side-effect of that understanding. If you try to directly construct an exhaustive correctness checklist, you'll end up producing a CR process that's overly burdensome and that no one actually follows. So create a checklist for understanding the code, not checking correctness.<p>Here's the rough, hierarchical checklist I wrote up for myself at some point: <a href="https://gist.github.com/beyang/b3b494f028996b32ab3dae237bc1c2eb" rel="nofollow">https://gist.github.com/beyang/b3b494f028996b32ab3dae237bc1c...</a><p>I don't go through this entire checklist for <i>every</i> code review obviously, but whenever there's a diff that's large/complex enough for me to say "hoo-boy", I revisit it to remind myself of what I actually need to check in order to obtain an adequate understanding of the change.<p>The other thing that matters is tooling. One of the annoying things about most CR interfaces is they lack the capabilities of an IDE that actually let you jump around and make sense of the code in a tractable manner. Shameless plug: I helped build a browser extension that adds these capabilities back into most of the mainstream CR systems: <a href="https://docs.sourcegraph.com/integration/browser_extension" rel="nofollow">https://docs.sourcegraph.com/integration/browser_extension</a>. People swear by it and I'd highly recommend trying it out.<p>To get back to your original scenario, it seems like you asked the right initial questions, but stopped short of building a satisfactory understanding of what was going on. You took the author's word for it, when you could've asked a few more probing questions to check your (and their) understanding of the change. At the end of the day, the threshold for "LGTM" is a function of both how well you understand the code and how much you trust your teammates. Better process and better tooling can help out a lot with the former, but the latter is a human element that can't be ignored either.
In my previous job we went through the same process. A review involves two parties (the author and the reviewers) with different goals and obligations. To cut a long story short, we decided that the goal of a reviewer is to check<p>- readability/maintainability<p>- coding style<p>All other things (e.g. linting, formatting, simple error checks...) have to be automated, because doing them by hand is super annoying.<p>> As a general rule, keep the process lightweight, it should not be a burden, make it a fun activity.<p>Here are some more questions that came up:<p>- What's a good size for a review? -- something up to 400 LOC; there are papers on that topic<p>- How long can the author wait, i.e. how often do reviewers have to review? -- we tried to establish a habit of doing reviews twice a day<p>- How many people have to accept? -- this question is interesting, because there is a correlation between the number of reviewers and poor review quality.<p>- Who reviews? -- opt-in; if knowledge sharing is your goal, keep it open. There are companies that mark parts of the source tree with OWNER files to make this more explicit<p>- Are there mandatory reviewers? -- yes, the owner should give their approval, unless they are on holiday<p>- How do you resolve discussions (and are they efficient)? -- IMO pairing on non-trivial comments was the most efficient approach.<p>- If you are unhappy, can you decline a review? -- that would be brutal<p>- How do you communicate/notify people?<p>- How do you handle late comments? If you use pull requests, reviewers can still comment.
- If you change the code to resolve a review comment, does that revoke all existing approvals?<p>- Do you want to enforce the behavior? -- just make it a fun activity :)
Things I look for -<p>- Is the commit message coherent? Can I understand the intent of the CL from the message?<p>- If yes, is this commit doing too much... should it be split?<p>- Is it functionally correct, after a brief look at the code?<p>Now on to the implementation -
- Are there any big functions?<p>- Are there useless one-line functions?<p>- Is there too much nesting with conditionals? Look for the snake pattern.<p>Concurrency -<p>- Make sure there is no cute "trick" to guarantee safety. Message passing is preferred; if that's not possible, then basic locking techniques and language primitives. For example, use an ExecutorService in Java rather than reinventing your own wheel (a quick sketch below).
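<p>A minimal sketch of that last point, using a made-up task, just to show the shape of "lean on the library" concurrency instead of hand-rolled threads and shared flags:
<pre><code>import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ThumbnailBatch {
    public static void main(String[] args) throws Exception {
        // A bounded pool instead of manually created Threads and shared state.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Callable<String>> tasks = List.of(
                () -> render("a.png"),
                () -> render("b.png"),
                () -> render("c.png")
            );
            // invokeAll blocks until every task has finished.
            for (Future<String> result : pool.invokeAll(tasks)) {
                System.out.println(result.get()); // rethrows any worker exception
            }
        } finally {
            pool.shutdown(); // always release the pool's threads
        }
    }

    // Stand-in for real work; purely illustrative.
    private static String render(String name) {
        return "rendered " + name;
    }
}
</code></pre>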
In 10 years I have found very little value in code reviews. I’ve seen orgs expend tremendous amounts of time reviewing their code, only to end up with bug-ridden spaghetti and missed requirements.<p>What’s more valuable to me is design and implementation reviews, where you can ensure people know about and are using the features provided by their language and framework, and are not reinventing the wheel, poorly.
I'd say that once you're at the point of formalizing reviews, you should have requirements documentation, and then, most of this becomes just an instance of "does the implementation meet the specification?".<p>When I worked in aerospace, every feature was linked to the requirement line that it implemented. Then it's easy to identify functionality which was removed (or left in) by mistake.
This should have been caught by (failing) tests. Code review is not the right tool for the scenario you describe. There's an item for the checklist: is the new (or refactored) code covered by unit tests that demonstrate its correctness as well as no regressions?
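<p>For example, a behaviour-level regression test along these lines (JUnit 5; the classes and the discount feature are hypothetical stand-ins, not the OP's actual code) would fail as soon as the functionality is dropped, no matter which class currently implements it:
<pre><code>import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class CheckoutRegressionTest {

    // Minimal stand-ins so the sketch is self-contained; in a real codebase
    // these would be the production classes touched by the PR.
    static class Order {
        final boolean loyaltyMember;
        int totalCents;

        Order(int totalCents, boolean loyaltyMember) {
            this.totalCents = totalCents;
            this.loyaltyMember = loyaltyMember;
        }
    }

    static class CheckoutService {
        void checkout(Order order) {
            if (order.loyaltyMember) {
                order.totalCents = order.totalCents * 90 / 100; // the easily-lost behaviour
            }
        }
    }

    // Asserts on observable behaviour, not on which method implements it,
    // so it still fails if the discount call is deleted during a refactor.
    @Test
    void loyalMemberStillGetsTenPercentOff() {
        Order order = new Order(100_00, true);
        new CheckoutService().checkout(order);
        assertEquals(90_00, order.totalCents);
    }
}
</code></pre>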
Besides code quality and logic checking, I think it's important to also check if all the relevant docs have been updated. This means comments, README files, external wikis...