I'm conflicted; it's a nice write-up, and probably generally true right now, for most stuff.

However, I still live with databases big enough to need cubes, although those cubes can afford to be less refined these days. Saying 'Bigtable can do a regex over 30M rows per second' isn't saying it can't be done cheaper and quicker without paying Google, if you just have some cubes.

And I think it's going to track the usual sine wave: over time, data sets get bigger, and we keep oscillating between needing to pre-compute cubes and being able to let the reporting tool 'cube on the fly' behind the scenes.

There's also a related cycle the article doesn't mention: data lakes get faster, then the data outstrips them again, and so on.

The real strength will be tooling that transparently cubes on demand. I wish there were efficient statistics and CDC that tracked metadata, so tools could say 'this MySQL table has been written to since I last snapshotted it' (a rough sketch of that check is below), and, even better, 'this materialized view in this database is now out of date because of writes on that other database over there that affect the expression it's built from', etc. Classic data sources could do a lot of new things to help downstream tools cache better.

I have a slight problem with the terminology in the middle of the article: I'm so far down the rabbit hole that I think of cubes _as_ databases, so I get cognitive dissonance reading about shifts from cubes to databases. To me, a cube is just a fancy term for a table/view built for a particular use-case (second sketch below).

One tool that I'm terribly excited about these days is Presto. https://prestosql.io/ lets you take a constellation of different ordinary databases and query them as though they were one big database. And you can just keep adding data sources. Awesome!
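
To make the 'has this table been written to since I last snapshotted it?' wish concrete, here's a rough sketch of the kind of check I mean, assuming MySQL 5.7+ (where information_schema tracks UPDATE_TIME for InnoDB tables, though only in memory, so it resets on a server restart) and the mysql-connector-python package. The schema/table names are made up:

```python
# Rough staleness check: has this table changed since my last snapshot?
# Assumes MySQL 5.7+ and mysql-connector-python; names are hypothetical.
import mysql.connector

def table_changed_since(conn, schema, table, last_snapshot_ts):
    """Return True if the table appears to have been written to
    since last_snapshot_ts (a datetime)."""
    cur = conn.cursor()
    cur.execute(
        "SELECT UPDATE_TIME FROM information_schema.tables "
        "WHERE table_schema = %s AND table_name = %s",
        (schema, table),
    )
    (update_time,) = cur.fetchone()
    cur.close()
    # NULL means the server has no record (e.g. after a restart);
    # treat that as 'possibly changed' so the cached cube gets rebuilt.
    return update_time is None or update_time > last_snapshot_ts
```

The NULL-means-rebuild fallback is the important design choice: when the source can't tell you, assume staleness rather than serve an out-of-date cube.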
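
And here's what I mean by 'a cube is just a table/view for a use-case': a precomputed aggregate over the dimensions you report on. A minimal sketch, assuming PostgreSQL 9.5+ (for GROUP BY CUBE) and psycopg2; every name is made up:

```python
# Materialize a 'cube' as an ordinary database object: an aggregate
# over every combination of the reporting dimensions.
# Assumes PostgreSQL 9.5+ and psycopg2; all names are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=warehouse user=reporting")
with conn.cursor() as cur:
    cur.execute("""
        CREATE MATERIALIZED VIEW sales_cube AS
        SELECT region,
               product,
               date_trunc('month', sold_at) AS month,
               SUM(amount) AS total
        FROM sales
        GROUP BY CUBE (region, product, date_trunc('month', sold_at))
    """)
conn.commit()
conn.close()
```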
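
For anyone who hasn't tried Presto: each connector is mounted as a catalog, tables are addressed as catalog.schema.table, and a cross-database join is then just SQL. A toy example using the presto-python-client package; the catalog names ('mysqlprod', 'pgbilling'), host, schemas, and tables are all hypothetical things you'd configure yourself:

```python
# Federated query: join a MySQL table against a PostgreSQL table
# through a single Presto coordinator. All names are hypothetical.
import prestodb

conn = prestodb.dbapi.connect(
    host="presto.example", port=8080, user="reporting",
    catalog="mysqlprod", schema="crm",
)
cur = conn.cursor()
cur.execute("""
    SELECT c.name, SUM(o.total) AS lifetime_value
    FROM mysqlprod.crm.customers AS c
    JOIN pgbilling.billing.orders AS o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY lifetime_value DESC
    LIMIT 10
""")
for name, value in cur.fetchall():
    print(name, value)
```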