This has been the single biggest learning for me as well, but in slightly different flavours:<p>1. In machine learning. For example, suppose you want to generate an article. If you try to build a model that sequentially generates all the words, you'll have a bad time: you won't be able to train a decent-size LM on such a long sequence without running out of memory, and if you generate chunk by chunk, each new chunk loses the previous context.<p>The way to do it is to decompose the problem: generate sections, subsections, and maybe a summary; then each paragraph is conditioned on its generated section and subsection, and the paragraphs can be generated in parallel.<p>This type of solution appears in many, many places in ML.<p>2. In relational DB design. Having all the information about an entity in a single table is bad for building concurrent applications: if you acquire a lock on such a row, far more users have to wait. Decomposing the data into multiple relational tables lets you isolate changes, and if you have a tree-like dependency structure, the sibling tables can be worked on in parallel without any worry.<p>3. Codebases. If you have a single codebase with a large number of people developing on it, you'll have a slow development cycle. A decomposed codebase (and corresponding services) with relatively stable APIs as interfaces allows parallel development with fewer conflicts: QA, deployment, refactoring, etc. can all happen in parallel.<p>4. To get a numerical solution of a PDE, instead of parallelising all the matrix-vector computations, you get a much more natural and efficient method by decomposing the physical domain (say, a 3D space) and solving the PDE on the subdomains, with boundary conditions imposed on the interfaces. This is known as the domain decomposition method. These methods can be parallelized efficiently, but even run sequentially they converge faster than with no decomposition!<p>5. In system and code design, to reduce mental load.
Isolating a piece of logic or a service lets you develop and maintain it effectively. If you have many services depending on many services, keeping a mental model of all the dependencies becomes challenging and creates a lot of points of failure.
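Point 1 can be sketched in a few lines. The generate() function below is a stand-in for a real language-model call (any actual model API is an assumption here); the point is the shape of the decomposition: one cheap outline pass, then independent per-section passes fanned out in parallel.

```python
# Sketch of hierarchical generation (point 1): outline first, then
# paragraphs conditioned only on their own section heading, so the
# per-paragraph calls are independent and can run concurrently.
from concurrent.futures import ThreadPoolExecutor

def generate(prompt):
    # placeholder for a language-model call; deterministic stub
    return f"[text for: {prompt}]"

def write_article(topic):
    # step 1: one short pass produces the outline
    outline = [f"{topic} - section {i}" for i in range(1, 4)]
    # step 2: each paragraph depends only on its heading, so the
    # calls can be fanned out in parallel instead of run in sequence
    with ThreadPoolExecutor() as pool:
        paragraphs = list(pool.map(generate, outline))
    return "\n".join(paragraphs)

print(write_article("decomposition"))
```

The same shape works with process pools or async LM clients; what matters is that conditioning on the outline, not on the running text, removes the sequential dependency between paragraphs.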
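Point 2, sketched with made-up table names. SQLite is used here only so the example runs self-contained; the row-lock argument applies to engines with row-level locking such as Postgres or MySQL, where the split below means a balance update and a profile rename contend on different rows in different tables.

```python
# Sketch of point 2: split one wide "user" table into narrower
# sibling tables so hot updates are isolated from cold data.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    -- rarely-changing identity data
    CREATE TABLE user_profile (
        user_id INTEGER PRIMARY KEY,
        name    TEXT NOT NULL
    );
    -- hot, frequently-updated data in its own table
    CREATE TABLE user_balance (
        user_id INTEGER PRIMARY KEY REFERENCES user_profile(user_id),
        cents   INTEGER NOT NULL
    );
    -- append-only sibling table: writes never touch existing rows
    CREATE TABLE user_login_event (
        event_id INTEGER PRIMARY KEY AUTOINCREMENT,
        user_id  INTEGER REFERENCES user_profile(user_id),
        at       TEXT NOT NULL
    );
""")
db.execute("INSERT INTO user_profile VALUES (1, 'alice')")
db.execute("INSERT INTO user_balance VALUES (1, 1000)")

# With row-level locking, this update locks only a user_balance row;
# a concurrent rename or login insert does not wait on it.
db.execute("UPDATE user_balance SET cents = cents - 250 WHERE user_id = 1")
db.execute("INSERT INTO user_login_event (user_id, at) VALUES (1, '2024-01-01')")

print(db.execute(
    "SELECT cents FROM user_balance WHERE user_id = 1").fetchone()[0])
```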
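And a minimal instance of point 4: the alternating Schwarz method for -u'' = f on (0, 1) with two overlapping subdomains, each solved with its neighbour's current values as boundary data. Grid size, overlap width, and iteration count are illustrative choices.

```python
# Alternating Schwarz domain decomposition for -u'' = f on (0,1),
# u(0) = u(1) = 0, on a uniform grid of N interior points.
def thomas(a, b, c, d):
    """Solve a tridiagonal system (a=sub-, b=main, c=super-diagonal)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def solve_subdomain(u, lo, hi, f, h):
    """Solve -u'' = f on grid points lo..hi-1, taking the current
    values of u just outside the subdomain as Dirichlet data."""
    n = hi - lo
    a, b, c = [-1.0] * n, [2.0] * n, [-1.0] * n
    d = [h * h * f] * n
    d[0] += u[lo - 1] if lo > 0 else 0.0      # left interface value
    d[-1] += u[hi] if hi < len(u) else 0.0    # right interface value
    u[lo:hi] = thomas(a, b, c, d)

N = 99
h = 1.0 / (N + 1)
f = 2.0                 # exact solution is then u(x) = x(1 - x)
u = [0.0] * N
for _ in range(50):     # alternate over two overlapping subdomains
    solve_subdomain(u, 0, 60, f, h)    # left half, points 0..59
    solve_subdomain(u, 40, N, f, h)    # right half, points 40..98

print(u[49])            # midpoint x = 0.5; should be close to 0.25
```

Each subdomain solve is a small tridiagonal system instead of one big one, and with more than two subdomains the solves on non-adjacent pieces are independent, which is exactly what makes these methods parallelize well.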