Reviews are undeniably important. I spent a far-above-average amount of time on this problem when I was on the dev team for CodeCollaborator, so I feel qualified to suggest:

a) It's a code review, not an author review.

b) Be respectful of your reviewer's time:

1. 200 lines is a big review obligation, and 500 lines should be a hard maximum. Remember that boilerplate code does have meaning, and it counts. If you can't make time for reviews, don't rubber-stamp; just don't review.

2. Run whatever automation and analysis is available to you before submitting for review. Fix what it tells you to fix, or note the trigger and document why it's a false alarm.

3. You may count unit test code at 1/3 the normal rate, and even get 50 lines totally free (a rough sketch of that arithmetic is below). This relieves the reviewer of the burden of mentally running your code, and makes it easy to request additional test cases, which will be easier on your ego than bug reports.

All the big problems people have with code reviews come down to checking your ego, respecting your peers, and communicating clearly.
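To make point 3 concrete, here's a minimal sketch of how I'd compute the effective size of a review. The caps, the 1/3 test rate, and the 50 free test lines are just the rules of thumb above, and the function names are my own, not anything standardized:

```python
# Review-size arithmetic from the guidelines above (rules of thumb, not a standard).
SOFT_CAP = 200        # effective lines: already a big review obligation
HARD_CAP = 500        # effective lines: should be a hard maximum
TEST_RATE = 1 / 3     # unit test code counts at a third of the normal rate
FREE_TEST_LINES = 50  # first 50 test lines are free

def effective_review_size(prod_lines: int, test_lines: int) -> float:
    """Weighted size of a review: tests count at 1/3, first 50 test lines free."""
    billable_tests = max(test_lines - FREE_TEST_LINES, 0)
    return prod_lines + billable_tests * TEST_RATE

def review_verdict(prod_lines: int, test_lines: int) -> str:
    size = effective_review_size(prod_lines, test_lines)
    if size > HARD_CAP:
        return f"{size:.0f} effective lines: split the change before requesting review"
    if size > SOFT_CAP:
        return f"{size:.0f} effective lines: a big ask; warn the reviewer or split it"
    return f"{size:.0f} effective lines: reasonable review size"

if __name__ == "__main__":
    print(review_verdict(180, 140))  # 180 + (140 - 50)/3 = 210 -> a big ask
    print(review_verdict(90, 50))    # 90 + 0 = 90 -> reasonable
```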
I've had a lot of trouble with this at my current job. There's a strict policy requiring a review before anything makes it into a deliverable branch, but there's also a culture of emergency firefighting where problems are expected to be fixed quickly, often outside of working hours when a code review isn't really possible. So far our team has done full-blown reviews for all new components, but we've waived reviews for small bug fixes. One big hurdle is that we're using proprietary source control software that doesn't play nicely with any of the established review tools (another battle I'm trying to fight).