Most code review ends up being pair/team programming gone horribly awry. There's a lot of ego involved, it's slow and cumbersome, it's emotionally painful, and the metaphor is wrong (a panel of judges inspecting a lone defendant rather than a team collaborating on an engineering artifact).

I recently had the opportunity to work with other developers on a project by projecting a laptop onto a giant screen while we all sat on a couch and drank coffee: it was fun and we produced a high-quality result. After years of code review, on the other hand, I can say that code review is painful, expensive, and mediocre.

I'm hoping that as software 'eats the world' and provides ever more value to larger numbers of people, companies will realize that putting a single developer on an engineering problem, and submitting the solution to the rest of the team for review *after it has been designed*, is both unkind to your employees and bad business.
In the "How it all started section" -<p>> E explained that the main impetus behind the introduction of code review was to force developers to write code that other developers could understand; this was deemed important since code must act as a teacher for future developers.<p>This should always be at the forefront of the code reviewer's mindset. It's quite easy for the review process to degenerate into a style argument, quest to find every inefficiency, or attempt to make code as "clever" and brief as possible.
Xoogler here, circa 2014. Noogler orientation impressed upon us that every change was code reviewed, period! However, my experience was that code reviews were heavyweight and frustrating, often to the point of just being skipped: completely different from the article's observation of latency in hours and "fast-paced iterative development."

1. My initial team was one firmware engineer plus myself doing Android development. My peer felt unqualified to review Android code, so I spent a lot of energy begging code review time from tangentially related teams. Their feedback was not deep or useful, for obvious reasons, and the days of latency posed a major problem because our project was tied to a hardware schedule.

I learned later that my teammate was simply stamping his own changes, which the tool permits, so I started doing the same. Bypassing code review resolved the problem.

2. My next team was mostly in East Asia, while I was in Mountain View. If they had a question on a change, it would add two days of latency due to time zone differences. And because I was physically detached from my peers, I believe they felt more comfortable deferring my changes. Again we were tied to a hardware schedule, so this latency was very painful.

3. The last scenario was overseeing builds on (say) a Foxconn factory floor. For every build, the hardware coming off the line would behave unexpectedly, and we would have to very quickly update some code to make things work. Code reviews in this situation would have been absurd: any changes only had to last as long as the build itself, and well, there wasn't even network access. This is admittedly an exotic and specialized scenario (but recall orientation's "every change, period!").

As a Googler, code reviews were either skipped or a major burden. I made some mistakes: I ought to have escalated quicker, increased my visibility, and been less cowed by the orientation. Google also could improve in some areas:

1. Code review expectations need to be adjusted for small heterogeneous teams. Such teams are more likely as Google ramps up its hardware efforts.

2. The engineering culture ought to recognize the burden imposed by time zones. Code reviews must be more forgiving when the communication cost is high.

3. Shift orientation towards teams instead of company-wide. CR for self-driving cars must be fundamentally different from CR for matching selfies with famous artwork, or CRs for updating factory-floor test fixtures.

Some of these may have changed since I left, and if so I'd love to hear how.
Putting the human code review aside, one of the things I've been most impressed by during my time at Google is the automated presubmits, especially the static analysis ones. I do a lot of Java, so Error Prone (https://errorprone.info/bugpatterns) has been a useful one (cf. https://errorprone.info/bugpattern/FormatString); we have some others that Tricorder runs, but I can't seem to find public references to them.

Another interesting challenge is the work it takes to make something a *blocking* presubmit: sometimes you don't get useful feedback from one, or maybe a test doesn't provide very good signal, and then people start force-submitting to bypass the check because it's not providing utility, but in the process they also skip other checks.
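For illustration, here's a minimal sketch of the class of bug that FormatString check is designed to catch. The snippet and its names are hypothetical, not from any real codebase; plain javac accepts this code, and without the analysis it only fails at runtime:

```java
public class FormatStringDemo {
  public static void main(String[] args) {
    String name = "alice";
    // %d expects an integral argument, but a String is passed here.
    // Without static analysis this compiles fine and then throws
    // IllegalFormatConversionException at runtime; with the check
    // wired in as a blocking presubmit, the change can't be submitted
    // until the format string is fixed (e.g. to "hello %s").
    System.out.println(String.format("hello %d", name));
  }
}
```

The appeal over catching this in human review is that the feedback is instant, consistent, and essentially free of false positives, which is exactly what makes a check viable as a blocking presubmit rather than a mere suggestion.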
I will be glad when people stop using the word “modern” for this kind of thing.

It’s an artifact of some imagined distinction we seem to cling to in the computer world between the present and the past. No one talks about how to write modern novels. They’re just novels. It’s presumed that they change over time, and that things are situated in a historical progression.

Also, modernism is an actual thing, and it would be nice if we could reserve the word for that.
Unpopular (?) opinion:

In most "modern" environments, with constant online updates, where shipped defects are no longer that costly, code review is a net loss. You can just skip it. This study fails to compare against "no code review whatsoever," and it measures "defects found" instead of "total effort spent" (the only metric that truly matters).

In code review, most developers will not do the kind of thorough analysis that uncovers serious bugs; if they actually did, code review would be too costly. Instead, the process promotes disagreements over superficialities and bike-shed arguments, often leading to low-value follow-up changes that can easily cause worse issues than they solve.

Code review satisfies the urge for process and structure, but it also satisfies the need for diffusion of responsibility, which is *a bad thing*. People need to own their code.

Code reviews have some value during onboarding or with junior devs, to get everybody on the same page. After that, you can disregard them; the effort is better spent on other things, like automated testing.
A link that's actually loading for me: https://ai.google/research/pubs/pub47025
I'm surprised by the finding that Google changes have a median size of 24 changed lines. While I'm a big fan of small commits, I was under the impression that some of the large open-source Google projects (Chrome, Android) have relatively large commits flowing into their repositories.
After years of trying to make it work at various companies, I have stopped supporting "modern code review" implementations.

There is a host of problems with them, in my opinion, and there are better alternatives.

MCR is typically practiced as a very shallow analysis by a colleague. This is even solidified in the typical feedback given: "Looks Good To Me". That is not even remotely near what is needed to get quality results. Most problems discovered this way are easy problems that jump right out at you and could have been prevented with just a bit more focus during the initial implementation.

The reviewer is typically prevented from doing the code review correctly. She is usually engrossed in some other problem when the review arrives, requiring her to switch context. Aside from the annoyance factor, the typical developer will want to get back to her original problem as quickly as possible.

Even more annoyingly, people take review notes personally, and it is very difficult to be the person who does most of the reviews for your team and hence constantly points out errors in their judgement.

The reviewer is forced into a needlessly artificial role of having to reject/accept colleagues' results with little time for analysis, while facing the consequences of being on the other end of the stick the next time.

There is no incentive to spend the required time on code review. Organizations are typically not willing to support this expense of time, even though everybody is willing to support the concept of review.

The review is typically done AFTER the implementation is already completed. This is possibly the worst moment to give feedback: the original developer is pressed for time to get his product shipped, and typically very little can be fixed. This causes people to get defensive, which is a straight path into conflict. It takes effort to avoid conflict if you try to be diligent in your reviews, which is the whole point. So if you don't like how the solution is structured, it is typically too late to fix, and there is an incentive to just let it go.

Finally, code review typically delays the deployment of the solution as it sits in a queue. This runs against the agile idea of getting stuff shipped as fast as possible, hopefully immediately.

A much better strategy, one that I am promoting now, is pair programming, where you physically sit with the other developer and spend time discussing the problem, the requirements, the solution, and then the implementation and verification. No work is allowed to be done outside of the pair.

The problem and requirements are analyzed by both people, so they both understand them well. The implementation is done by both people, so they have an equal, real chance of catching problems. Possible problems are caught early in the process, even at the analysis stage, preventing large amounts of work from being done before review. The discussion is friendly and does not force one of the developers into the artificial position of having to accept or reject the solution. The developers take full responsibility for the implementation, as they understand that when they ship the code there will be no further review to catch errors, only (hopefully) automated testing. Finally, when the implementation is done, it is done and immediately available for shipment, which means the process is not delayed.