This article sucks. Does your company have tens of thousands of employees? If so, hopefully you have some engineers working on your tooling. If not, you don't have the problems described here, and you will also happily get all the "theoretical benefits" that this guy apparently didn't experience.<p>Moving to a monorepo is often a win; also, the name sucks. We have a few "monorepos" at work, and I think that's the sweet spot. Rust and C firmware doesn't need to live with TypeScript frontend apps, but it's pretty wasteful for apps in the same ecosystem to be unable to share dependencies and utility code trivially.<p>(Where "trivially" means literally one line of code<p><pre><code> import { bleargh } from 'hoge';
</code></pre>
and not one single step more.)
I think of org-wide monorepos the same way I think of zealous microservice use, or microfrontends, or platform engineering, etc. Each is a "shiny thing" that FAANGs use, which gets a disproportionate amount of tech-evangelist coverage and then comes to be perceived as "best practice" by bored devs (myself included), who go on to create some tech-debt work.<p>The reality is that these are somewhat niche use cases for very high-scale orgs and/or very large developer workforces that can afford to reinvent the wheel. The way the vast majority of companies should go about developing their systems is not new at all: pragmatically apply tried and tested technologies, and transition to shiny new things only where there is a demonstrable need.<p>Worth saying that I'm definitely a fan of team-scoped monorepos, though. Being able to automate building and testing across all my apps, update dependencies across all apps, and deploy with a single merge/pull request is great.
TFA assumes several things which may or may not hold for a particular company or team.<p>For example, the Linux kernel works with its tens of millions of lines just fine, even though ownership is heavily distributed among many subsystem maintainers. Checking out a branch or pulling in changes rarely takes more than a few seconds.<p>On the other hand, some organizations abuse their VCS, using it as a repository for large immutable files such as build artifacts, software packages or similar non-source archives.
Orgs who let this happen are of course going to have a hard time. But that's hardly the monorepo's fault.
Having seen a large org attempt a poly -> mono repo conversion project (which took almost 2 years, failed, and was in the process of being rolled back when I left), I agree with everything in this article.<p>Once an org becomes large enough, you need lots of custom tooling to make working across a huge codebase smoother. That tooling is similar for mono and poly repos, but you need an additional metric ton of tooling to make monorepos work.<p>And please pray you didn't decide to go with Bazel for builds.<p>My least favourite monorepo experience: I want to update a dependency of my tiny service for a security patch -> oh, 400 other services depend on it and dozens of tests break when I try to update it -> this isn't worth my time, I'll do something else instead.<p>We were promised a world where every dependency was kept up to date by necessity, but we ended up in a world where all dependencies atrophied due to the increased difficulty of updating them.
I get what the article is saying. But if `ls -d ../*/.git/` prints the names of all the company's repos, and the same happens on a few coworkers' computers, I start to question things.
I worked at 4 other companies before Google, and Google's source code tooling was the best I'd ever seen, and definitely something the rest of the world can work towards.<p>In particular, it fostered a culture where accepting code changes from outside teams was normal (though obviously not always uncontentious), and prototyping those changes was pretty easy.<p>I've been using pantsbuild.org since I left, and I think it hits a good sweet spot between Blaze, which has a lot of overhead, and the rest of the world.
At a previous gig, they went down the monorepo rabbit hole. For all the reasons the author notes, in not terribly long, we were hearing about the monobuild project, and then the monodeploy project, which was strange, because I think the same people behind the monorepo project had been the ones who advocated for breaking up what had been a monolith service. Plus ça change, I suppose.
Monorepos are most painful when all the components are heavily coupled. A bunch of unrelated stuff in a single compilation unit (e.g. a package) results in a lot of waste, and then people just end up without clear interfaces, boundaries, and ownership.<p>But I guess that's not inherently a result of the monorepo. It's just the result of poor engineering leadership.
I'm in the opposite situation: poly-repo hell.<p>Our team produces 4 libraries. The code for them, along with test apps, is spread across 7 repos. So when I want to make a core change (like updating the version of Gradle we use), it results in 7 different PRs that have to go in at the same time and that everyone has to fetch in sync. Because of this, it's pretty rare that a change only requires a single PR.<p>Releases are a mess. To do a release, we tag the latest commit in each of the repos with the same release number and use Jenkins to pull that tag from each repo and put it all together. But that's just, like, a suggestion, and it's not uncommon for a release to be based off of master. Good luck doing the forensics to figure out the state of everything for a release.<p>The best part is that each repo needs to be in a specific folder on disk, because some of them look for the others via hard-coded paths.<p>There is one repo that needs to be separate, because it's public. But I've been pushing hard to get most of the repos merged into a single one just to add some sanity to my PRs.
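For the curious, the tagging step described above can be sketched in a few lines of shell. This is only a hedged illustration, not the actual Jenkins setup: the repo names are hypothetical stand-ins, and the demo creates throwaway local repos so it runs anywhere git is installed (a real version would loop over existing checkouts and push the tags).

```shell
#!/bin/sh
# Sketch: stamp the same release tag onto the HEAD of every repo.
# Repo names are made up for illustration.
set -e
RELEASE="v1.2.3"
WORK=$(mktemp -d)

for repo in core-lib util-lib test-app; do
  # Stand-in for an existing checkout of each sibling repo.
  git init -q "$WORK/$repo"
  git -C "$WORK/$repo" -c user.email=ci@example.com -c user.name=ci \
      commit -q --allow-empty -m "initial commit"
  # The release step itself: one identical annotated tag per repo.
  git -C "$WORK/$repo" tag -a "$RELEASE" -m "Release $RELEASE"
done

# Each repo now carries the same tag, which is the only thing
# tying the release together.
git -C "$WORK/core-lib" tag --list
```

The fragility is visible right in the sketch: nothing enforces that every repo actually gets the tag, or that the tagged commits are mutually compatible.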
Keeping prod stable is more important than alpha testing every team's latest commit, but monorepos tend not to accommodate staying on known-good versions of your dependencies and opting into updates when ready.