I think the article was getting at this at the end - different use cases naturally call for either a point-in-time snapshot (optimally served by pull) or a live-updating view (optimally served by push). If I am gauging the health of a system, I'll probably want a live view. If I am comparing historical financial reports, a snapshot. Note that these are both "read-only" use cases. If I am preparing updates to a dataset, I may well want to work off a snapshot (and when it comes time to commit the changes, compare-and-swap if possible, else pull the latest snapshot and reconcile conflicts). If I am adjusting my trades for market changes, a live view again.<p>If I try to serve a snapshot with a push system, I'll have to either buffer an unbounded number of events, discard events, or apply back-pressure all the way up to the source to prevent events from being created. And with push alone, my snapshot would still be ephemeral: once I open the floodgates and start processing events again, the snapshot is gone.<p>If I try to serve a live view with a pull system, I'll have to either pull infrequently and sacrifice freshness, or pull more frequently and waste time and bandwidth reprocessing unchanged data. And with pull alone, I would still just be chasing freshness: every halving of the refresh interval doubles the resource cost, until the system can't keep up.<p>The complicating real-world factor that this article alludes to is that, historically, push systems lacked the expressiveness to model complex data transformations. (And to be fair, they're up against physical limitations: many transformations simply require storing the full intermediate dataset in order to compute an incremental update.)
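To make that last point concrete, here's a minimal sketch (all names are mine, not from any real library) contrasting an aggregate that can be maintained incrementally with O(1) state against one that forces you to keep the full intermediate dataset: a running sum only needs the total, but a running maximum must retain every value it has seen, because retracting the current max forces a rescan of everything else.

```python
# Sketch: incremental sum vs incremental max under inserts and retractions.
# Sum needs O(1) state; max must store the whole multiset, because a
# retraction of the current maximum forces a rescan of what remains.
from collections import Counter

class IncrementalSum:
    def __init__(self):
        self.total = 0           # O(1) state suffices

    def insert(self, x):
        self.total += x

    def retract(self, x):
        self.total -= x

class IncrementalMax:
    def __init__(self):
        self.counts = Counter()  # the full intermediate dataset

    def insert(self, x):
        self.counts[x] += 1

    def retract(self, x):
        self.counts[x] -= 1
        if self.counts[x] == 0:
            del self.counts[x]

    def current(self):
        # If the old max was retracted, this rescan finds the new one.
        return max(self.counts) if self.counts else None
```

The asymmetry is the point: for sum-like aggregates a push system can stay cheap, but for max/median/distinct-style transformations the "incremental" update is only incremental in output, not in storage.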
So the solution was either to switch wholesale to pull at some point in the pipeline (and use caching, change detection, etc. to reduce the resource cost and enable more frequent pulling), or to introduce a pulling segment in the pipeline ("windowing" joins, aggregations, etc.) and switch back to push afterward.<p>Only recently have push systems begun attempting to match the expressiveness of pull systems (e.g. Materialize, Readyset), but people are still so used to assuming pull-based compromises that they ask questions like "How fresh does this data feed really _need_ to be?". It's analogous to asking "How long does this snapshot really _need_ to last?" - a relevant question, to be sure, but maybe not one that needs to be the basis for massive architectural lifts.
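As a footnote, the "pulling segment" pattern is easy to sketch (names are illustrative, not any real API): events are pushed in, buffered into fixed-size tumbling windows, aggregated in one batch - the pull-like step - and the per-window result is pushed downstream.

```python
# Sketch of a pulling segment inside a push pipeline: buffer pushed events
# into tumbling windows of window_size items, aggregate each full window in
# one batch, and push the aggregate downstream via emit.
def tumbling_window_sum(events, window_size, emit):
    buffer = []
    for e in events:
        buffer.append(e)
        if len(buffer) == window_size:
            emit(sum(buffer))  # the batch (pull-like) aggregation step
            buffer = []
    if buffer:
        emit(sum(buffer))      # flush the final partial window
```

The window boundary is exactly where freshness is traded away: nothing downstream sees an event until its window closes.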