It looks like a rather oversimplified model: the examples assign the same overhead to every component, transformation, or feature, so anything monolithic, or anything that can be grouped together, comes out favoured, even if it's buggy, clunky, or poorly supported (a toy sketch of what I mean is at the end). Transformations can be awkward too (lossy, forced to make assumptions and invent missing data, or stuck dealing with poorly specified formats), which makes them unequal as well, though it's often arguable which of them is simpler. Features may have different priorities, but even if they don't, I'd expect them to come from the requirements rather than from technical decisions; if they were up to the latter, adding useless ones would be another way to game the resulting score. And setting all that aside, the model mostly just says that systems with fewer components are generally simpler.

This seems to be the case with most talk about simplicity: pretty much everyone agrees that it's good, but people disagree on what it means and how to estimate it.
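
To make the first point concrete, here's roughly the kind of uniform tally I'm picturing; the scoring rule below is my own guess at the model, not anything stated in the original:

    # Hypothetical uniform-overhead score: every item costs the same,
    # no matter how buggy, lossy, or poorly supported it is.
    def complexity(components, transformations, features):
        return len(components) + len(transformations) + len(features)

    split    = complexity(["parser", "validator", "store"], ["csv->db"], ["import"])
    monolith = complexity(["one big importer"],             [],          ["import"])
    assert monolith < split  # the monolith scores "simpler" regardless of quality

Counting this way, merging everything into one blob always lowers the score, which is exactly the bias described above.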