> Me: I think engineers at FB and Google are probably familiar with using smaller repos (doesn't Junio Hamano work at Google?), and they still prefer a single huge repo for [reasons].

I'm a former such engineer; I still prefer smaller repos. There are enough engineers at both companies that I can assure you such opinions (and knowledge) vary quite a bit.

> it's often the case that it's very easy to get a dev environment set up to run builds and tests.

I've worked with both; in both cases, the workflow was essentially a checkout, followed by a build, followed by running the tests. I've found this is more a product of the environment (i.e., whether the developers care about tests being easy to run) than of the VCS in use.

> With a monorepo, you just refactor the API and all of its callers in one commit.

I'd restate this: with a monorepo, you _must_ refactor the API and _all_ of its callers in one commit. You cannot do it gradually, or you _will_ break someone. A gradual refactor is only possible across multiple repositories, specifically repositories that obey something resembling semantic versioning: you make your breaking change, and because it is a breaking change, you bump the major version to signal it. Reverse dependencies wishing to update must then make the change, but can do so at their leisure.

I've seen some truly heroic work done to get "APIs with thousands of usages across hundreds of projects" refactored. Sometimes it _is_ easy: you can track down the callers with a grep and fix them with a Perl script. But I think you must limit yourself to changes of that nature: a refactor too large for a script leaves you editing the call sites by hand. With thousands of callers that is probably true no matter how the code is hosted, but I find that moving even a couple dozen call sites through a major change (such as one where the paradigm expressed by the API is completely wrong) is difficult if you must update them all at once.

Last, the most common "monorepo" system I've seen is Perforce, and compared to git it has such stark usability issues (no staging area, no equivalent of git add -p, no bisect, no real branches) that I'd rather not go back to it. This comment, though:

> where it's impossible to do a single atomic commit across multiple files

I would hesitate to use "atomic" to describe commits in Perforce: if you check out CL X, make some changes, and "commit" ("submit" is Perforce's term), the parent of your new CL might be Y, _not_ X, and you might get no warning about this, either. Collisions on an individual file will prevent the submit from going through, but changes to separate files that together amount to a human-level merge conflict will not get caught. (They wouldn't show as merge conflicts in git either, but git will tell you that someone updated the code and refuse your push; unit tests must catch these, and in Perforce's case you must run them after your change is already permanently visible to the world.)
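To be concrete about "easy to set up": for me it looked the same in both worlds. A minimal sketch of that loop, with a hypothetical repo URL and make targets:

    git clone https://example.com/project.git
    cd project
    make          # build
    make test     # run the tests

The VCS isn't what keeps this three-step loop working; the team's habits are.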
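Here's a minimal sketch of the gradual, versioned refactor I mean; the library (libfoo) and function names (frobnicate/transmogrify) are hypothetical:

    # In the library's repo: land the breaking change, and say so in the version.
    git commit -am "Replace frobnicate() with transmogrify() (breaking change)"
    git tag v2.0.0
    git push origin main v2.0.0

    # In each downstream repo, still pinned to libfoo 1.x: nothing breaks today.
    # Teams bump their pin and migrate their call sites on their own schedule:
    git commit -am "Upgrade to libfoo 2.x; frobnicate() -> transmogrify()"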
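The script-friendly kind of refactor looks roughly like this (same hypothetical names; the \b guards against partial matches):

    # Find every file calling the old API, then rewrite the calls in place.
    grep -rl 'frobnicate(' src/ \
      | xargs perl -pi -e 's/\bfrobnicate\(/transmogrify(/g'

Anything a regex can't express correctly leaves you editing call sites by hand, which is the hard case I'm describing.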
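And here is the Perforce race I'm describing, sketched with hypothetical depot paths and changelist numbers; assume a colleague's change lands between your sync and your submit:

    p4 sync //depot/...@100          # you build and test against CL 100
    p4 edit //depot/app/caller.c
    # ...a colleague submits CL 101, changing //depot/lib/api.h...
    p4 submit -d "update caller"     # succeeds: no per-file collision.
                                     # Your CL 102 lands on top of 101,
                                     # a state you never built or tested.

In the equivalent situation, git refuses the push as non-fast-forward ("rejected... fetch first"), so you integrate and re-test before the world sees your change.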