I've worked with several companies, and every successful one did code review in some form. Before I get into the details, let me preface this by saying that we have always required 100% code test coverage. If even one test fails, it doesn't go out. (There's a rough sketch of that gate at the end of this comment.) We have also used GitHub everywhere I've worked. The primary flow is that a developer writes code, submits a PR, the code gets reviewed, and then it gets merged and deployed.

That said, there were two major strategies that we used:

Strategy 1: Gatekeeper

The project lead heads up the code reviews. She is the gatekeeper who determines whether code is good enough. For minor bug fixes and small changes, one extra set of eyes is generally sufficient. For larger changes and new features, she often recruits one or more developers to assist with the review. All code review takes place in GitHub, with face-to-face meetings if there are nuanced discussions that need to happen.

The major upside to this is that the lead knows the big picture. She knows whether Joe has worked on this feature and should be consulted. She knows that Sarah is very familiar with this technology and should be consulted. She knows that this feature can't go out until that one does. She knows that this bugfix might affect Henry's feature. This means she can properly coordinate releases across the team and make sure everyone's on track.

The major downside is that there's a single point of failure. If the project lead isn't thorough, bad shit can slip through. That doesn't happen often, in my experience, but often enough that there should have been more failsafes.

Strategy 2: More Eyeballs

Nobody "heads up" code review. Each pull request requires two people to approve it. Any number of people can comment on and review it, but the minute the second person says "Looks good to me" or "+1" or whatever, the PR can be merged. (GitHub can enforce the two-approval rule mechanically; see the second sketch at the end.)

The major upside to this is that you get more eyeballs on the project. You'll have a couple of different perspectives, so there's a greater chance that everything has been thought of.

The major downsides are: no bird's-eye view, choosing the reviewers, and coordinating fixes and features. The biggest hurdle is choosing the reviewers. If you have 6 people on the team and 2 of them need to approve a PR, who do you pick? How do you pick? Randomly? Round-robin? Imagine that you know Joe is a lax reviewer and Bob is a strict one, and since you're tired of working on this feature, you give it to Joe so that it goes out sooner with less hassle. Big problem.

Furthermore, imagine that Sarah opens a PR against a feature that was previously built by Mike, but she selects Joe and Henry as reviewers. Even though Mike worked heavily on that feature in the past and is probably the expert on it, it is entirely possible under this system for the change to make it into production without Mike even being aware of it.

----

Those are the two strategies I've worked with. It's probably clear which one I prefer. That said, Strategy 2 has a lot to recommend it. If your team is small enough, it's probably a good idea. For larger teams, I recommend either Strategy 1 (gatekeeper) or a mix of the two (gatekeeper + 1-2 other approvals).
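----

A couple of concrete sketches, since both strategies lean on the same tooling. First, the coverage gate. This is a minimal sketch assuming Python with pytest and coverage.py; the exact commands depend on your stack, and the script name is made up:

    # ci_gate.py -- sketch of the "100% coverage or it doesn't go out" CI step.
    # Assumes pytest and coverage.py are installed; adapt to your own stack.
    import subprocess
    import sys

    def main() -> None:
        # Run the test suite under coverage measurement.
        tests = subprocess.run(["coverage", "run", "-m", "pytest"])
        if tests.returncode != 0:
            sys.exit("A test failed: the build does not go out.")

        # coverage.py exits non-zero if total coverage is below the threshold.
        report = subprocess.run(["coverage", "report", "--fail-under=100"])
        if report.returncode != 0:
            sys.exit("Coverage below 100%: the build does not go out.")

    if __name__ == "__main__":
        main()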
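Second, the two-approval rule from Strategy 2. GitHub exposes this as branch protection; here's a sketch of setting it over the REST API. The org/repo names are hypothetical, and you should check GitHub's current API docs for the exact payload shape:

    # protect_branch.py -- sketch: require two approving reviews before merge.
    import os
    import requests

    OWNER, REPO, BRANCH = "example-org", "example-repo", "main"  # hypothetical

    resp = requests.put(
        f"https://api.github.com/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection",
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        json={
            # Require the CI gate above to pass before merging.
            "required_status_checks": {"strict": True, "contexts": ["ci"]},
            "enforce_admins": True,
            # The heart of Strategy 2: two approvals, no exceptions.
            "required_pull_request_reviews": {"required_approving_review_count": 2},
            "restrictions": None,
        },
    )
    resp.raise_for_status()

Note that this only enforces the count; it can't stop you from handing the PR to Joe instead of Mike. That part is still a people problem.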