"An hour-long project ensued extracting data from our code review tool and correlating number of reviewers with time-to-review. Turned out the optimal number of reviewers to minimize time-to-review was between 1 and 2. Any more reviewers and delay grew substantially."<p>This is off topic to the point he is making but it bothered me that he assumes correlation implies causation, haha. When a code review isn't already going through is when I see people tossing on lots of reviewers.<p>Regardless, the article runs through some fun thoughts :)
I'm doing this right now. I got tired of the limitations with GitLab CI's caching, so I spent this weekend writing a custom caching layer that uses my own Google Cloud Storage bucket [1]. There are plenty of more valuable things that I could be working on, but this has been really fun.

[1] https://gist.github.com/ndbroadbent/d394f8a6890eddcaeafe9223e8b50be5
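For anyone curious, the core of a GCS-backed CI cache tends to be small: archive the directory you want to cache, key the archive by a hash of a lockfile, and push/pull it to a bucket. The sketch below is purely illustrative and is not the contents of the linked gist; it assumes the google-cloud-storage Python client and made-up names (my-ci-cache-bucket, node_modules, package-lock.json).

```python
# Minimal sketch of a GCS-backed CI cache (illustrative, not the gist above).
# Archives a directory, keys it by a hash of the lockfile, and stores it in a bucket.
import hashlib
import sys
import tarfile
from pathlib import Path

from google.cloud import storage  # pip install google-cloud-storage

BUCKET = "my-ci-cache-bucket"         # assumed bucket name
CACHE_DIR = Path("node_modules")      # directory to cache (illustrative)
LOCKFILE = Path("package-lock.json")  # file whose hash keys the cache
ARCHIVE = "cache.tar.gz"

def cache_key() -> str:
    # Cache key changes whenever the lockfile changes.
    digest = hashlib.sha256(LOCKFILE.read_bytes()).hexdigest()[:16]
    return f"cache/{CACHE_DIR.name}-{digest}.tar.gz"

def restore(bucket: storage.Bucket) -> bool:
    blob = bucket.blob(cache_key())
    if not blob.exists():
        return False
    blob.download_to_filename(ARCHIVE)
    with tarfile.open(ARCHIVE, "r:gz") as tar:
        tar.extractall(".")
    return True

def save(bucket: storage.Bucket) -> None:
    with tarfile.open(ARCHIVE, "w:gz") as tar:
        tar.add(CACHE_DIR)
    bucket.blob(cache_key()).upload_from_filename(ARCHIVE)

if __name__ == "__main__":
    client = storage.Client()  # uses the CI runner's service-account credentials
    bucket = client.bucket(BUCKET)
    if sys.argv[1:] == ["save"]:
        save(bucket)
    else:
        print("cache hit" if restore(bucket) else "cache miss")
```

In a .gitlab-ci.yml job you would run the restore step before the build and the save step after it, instead of relying on the built-in cache: directive.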
It's easy for me to be one of the curiosity cogs ... if there's an autonomous fuel tank for this, I haven't found it yet. More seriously, I think that 80% of my work is exploratory and driven to a large degree by curiosity. I get paid really well for the other 20% because the quality of that output is driven by the investigation done outside of production work.