This is not a new concept by any means; it revives an approach that has been around since at least the very early days of the web. It's known as push-based caching, as distinct from pull-based caching.

Push caching: when something changes, you immediately propagate the change to every location, where it remains valid until replaced. That *used* to be the standard model. It's precise, but its eagerness can lead to prohibitive storage requirements (combinatorial explosion is very easy to achieve, and now you must store every computed state, not just the wrapping and the data) and to a high cost of editing if the change has to propagate far (e.g. if you put a list of the five most recent blog posts in a sidebar of a site, making a new blog post now requires that *every single page* be regenerated; there's a rough sketch of this at the end of the comment). It's also conceptually more difficult to get right: doing no caching is far, far easier.

Pull caching: responses are generated on demand and cached, typically for a limited time, meaning that you may serve stale data for up to your cache lifetime. But its laziness (it doesn't generate everything possible) works in its favour in many situations.

Hybrids are possible. Your cache may be able to programmatically invalidate entries, so that if you can calculate which pages *would* need to be regenerated, you can tell the cache to drop just those. That supplants time-based caching and can yield the most database-efficient result: no extraneous content is generated, yet pages stay cached exactly as long as they remain valid (the second sketch at the end of the comment contrasts this with plain time-based pull caching).

Build pipelines like Make are similar: you pull by specifying the resource you require, but its validity is determined by the dependency tree, and the tool essentially pushes things, materialising the necessary resources, until the requested resource is complete.

Push caching has an elegance that pull caching lacks (and I much prefer it, when feasible), but there *are* reasons why it's no longer the standard model of the web. It's tougher to implement well, and it has scaling limits that can constrain your design.

----

As a practical application of this: the article speaks of the reduced costs and better scaling of the static approach. But it doesn't account for the possibility that you generated a bunch of pages that were never requested (e.g. everyone read the blog post about widgets, but no one opened the list of posts tagged “widgets”, even though you had gone to the trouble of generating it), and so you could easily have done more generation work than you would have if you had instead had regular pull-based caching in place.
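
To make the push model concrete, here's a minimal Python sketch of the sidebar example above: every page embeds a "five most recent posts" list, so publishing a single post re-renders every stored page. All the names (posts, site, render, publish) are illustrative, not taken from any real framework.

    posts: list[tuple[str, str]] = []   # (title, body), newest first
    site: dict[str, str] = {}           # path -> rendered HTML, valid until replaced

    def render(body: str) -> str:
        sidebar = "".join(f"<li>{title}</li>" for title, _ in posts[:5])
        return f"<aside><ul>{sidebar}</ul></aside><main>{body}</main>"

    def publish(title: str, body: str) -> None:
        posts.insert(0, (title, body))
        # The push: the change propagates to every stored page immediately,
        # because each page embeds the recent-posts sidebar.
        for t, b in posts:
            site[f"/posts/{t}"] = render(b)

    publish("widgets", "All about widgets.")
    publish("sprockets", "All about sprockets.")  # also re-renders the widgets page

Note that the write amplification scales with the size of the site, not the size of the change.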
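
And a companion sketch contrasting plain pull-based (TTL) caching with the hybrid invalidation approach described above. Here render_page and affected_paths are hypothetical stand-ins for the expensive generation step and for the "which pages would need regenerating?" calculation.

    import time

    TTL = 60.0                                # seconds of acceptable staleness
    cache: dict[str, tuple[float, str]] = {}  # path -> (stored_at, rendered HTML)

    def render_page(path: str) -> str:
        # stand-in for the expensive generation step (templates + database queries)
        return f"<html>{path} rendered at {time.monotonic():.0f}</html>"

    def affected_paths(post_id: str) -> list[str]:
        # stand-in for the dependency calculation: which pages embed this post?
        return ["/", "/archive", f"/posts/{post_id}"]

    def get(path: str) -> str:
        # Pull caching: generate on demand; a possibly-stale copy is served
        # until the TTL expires.
        hit = cache.get(path)
        if hit is not None and time.monotonic() - hit[0] < TTL:
            return hit[1]
        html = render_page(path)
        cache[path] = (time.monotonic(), html)
        return html

    def on_post_published(post_id: str) -> None:
        # Hybrid: drop exactly the entries the change affects instead of waiting
        # out the TTL. Nothing stale is served, and nothing is regenerated until
        # it is next requested.
        for path in affected_paths(post_id):
            cache.pop(path, None)

    get("/posts/widgets")         # miss: generated and cached
    get("/posts/widgets")         # hit: served from cache
    on_post_published("widgets")  # invalidated; the next get() regenerates it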