The general-purpose answer to that question is OLTP. The transaction processing community has a number of benchmarks that look at cost per transaction, and large mainframes typically "win" in those scenarios. As for <i>why</i> they win, that is an interesting question.<p>As a systems enthusiast and someone who has watched computers get small and then big and then small and then big again, I believe the fundamental answer is rooted in state machine theory, specifically in how data becomes "entangled" with other data. That is the essence of what makes transactions hard.<p>I first ran into this while looking at scaling file systems. Unlike RAID, where all of the blocks in a stripe are related mathematically, a "file" as a sequence of octets is defined not only by the mutations that happen to it, but also by the order in which those mutations take place. So "append 1, 2, 3; back up one; append 4, 5" leaves 1, 2, 4, 5 if applied in sequence, but 1, 2, 3, 4 if the last two steps are swapped (see the sketch at the end of this comment). Thus both the operations and their order are important. To hold the state of a complex sequence stable, you generally have to have it all in memory ready to complete, rapidly verify it's stable, and then commit it.<p>Clusters of smaller systems have a hard time with this. That said, I would love to play with some of Google's Spanner systems to see how well they handle the OLTP workload with respect to cost/size/power. The paper suggests that there is a credible path there as flocks of distributed systems get cheaper and more easily connected.
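<p>To make the ordering point concrete, here is a minimal sketch (Python, with made-up "append" and "backspace" operation names, not any real file system API) showing the same set of mutations producing different bytes depending on the order they are applied in:<p>    # Apply a list of mutations to an in-memory byte list.
    def apply(ops):
        data = []
        for op, *args in ops:
            if op == "append":
                data.extend(args)   # append the given octets
            elif op == "backspace":
                data.pop()          # "back up one": drop the last octet
        return data

    in_order = [("append", 1, 2, 3), ("backspace",), ("append", 4, 5)]
    swapped  = [("append", 1, 2, 3), ("append", 4, 5), ("backspace",)]

    print(apply(in_order))  # [1, 2, 4, 5]
    print(apply(swapped))   # [1, 2, 3, 4]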