To put this into context I would recommend reading 'MonetDB/X100: Hyper-Pipelining Query Execution' [0]. Vectorized execution has been something of an open secret in the database industry for quite some time now.<p>For me, it is particularly interesting to read about the Spark achievements. I was part of a similar Hive effort (the Stinger initiative [1]) and I contributed some parts of the Hive vectorized execution [2]. I see the same solution that was applied to Hive now being applied to Spark:<p>- move to a columnar, highly compressed storage format (Parquet; for Hive it was ORC)<p>- implement a vectorized execution engine<p>- use code generation instead of plan interpretation. This last one is particularly interesting to me because for Hive it was discussed at the time and actually <i>not</i> adopted (ORC and vectorized execution had, justifiably, higher priority).<p>Looking at the numbers presented in the OP, it looks very nice. Aggregates, Filters, Sort, and Scan ('decoding') show big improvements (I would have expected these; this is exactly what vectorized execution is best at). I like that Hash-Join also shows a significant improvement; it is obvious their implementation is better than the HIVE-4850 one I did, of which I'm not too proud. The SM/SMB join is not affected, no surprise there.<p>I would like to see a breakdown of how much of the improvement comes from vectorization vs. how much from code generation. I get the feeling that, the way they did it, these cannot be separated. I think there are no vectorized plans/operators to compare against the code generation; they implemented both simultaneously. I'm speculating, but I guess the new whole-stage code generation produces vectorized code, so there is no vectorized execution without code generation.<p>All in all, congrats to the Databricks team. This will have a big impact.<p>[0] <a href="http://oai.cwi.nl/oai/asset/16497/16497B.pdf" rel="nofollow">http://oai.cwi.nl/oai/asset/16497/16497B.pdf</a>
[1] <a href="http://hortonworks.com/blog/100x-faster-hive/" rel="nofollow">http://hortonworks.com/blog/100x-faster-hive/</a>
[2] <a href="https://issues.apache.org/jira/browse/HIVE-4160" rel="nofollow">https://issues.apache.org/jira/browse/HIVE-4160</a>
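To make the distinction concrete, here is a toy sketch (plain Python, not actual Hive or Spark code; all names are made up for illustration) of tuple-at-a-time interpretation vs. vectorized, batch-at-a-time execution over columnar data:

```python
# Toy illustration only -- real engines do this in tight, JIT-compiled
# loops over primitive arrays, not Python lists.

def interpreted_filter(rows, predicate):
    # Tuple-at-a-time: one predicate call per row. In a real interpreter
    # each call involves virtual dispatch, which is hard on the CPU.
    out = []
    for row in rows:
        if predicate(row):
            out.append(row)
    return out

def vectorized_filter(batch, column, threshold):
    # Vectorized: scan one whole column of a batch in a tight loop and
    # emit a selection vector (row indices) instead of copying rows.
    col = batch[column]
    return [i for i in range(len(col)) if col[i] > threshold]

# A columnar batch: one list per column.
batch = {"price": [5, 12, 7, 30], "qty": [1, 2, 3, 4]}
sel = vectorized_filter(batch, "price", 10)  # selection vector: [1, 3]
```

Whole-stage code generation goes a step further: instead of chaining such operator functions at runtime, the engine emits fused, specialized code for the entire query stage.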
>> This style of processing, invented by columnar database systems such as MonetDB and C-Store<p>True, since MonetDB came out of the Netherlands around 1993, but KDB+ came out in 1998, well over 8 years before C-Store.<p>K4, the language used in KDB+, is 230 times faster than Spark/Shark and uses 0.2GB of RAM vs. 50GB for Spark/Shark, yet it gets no mention in the article. That seems a strange omission for such a sensational-sounding title [1]. I don't understand why big data startups don't try to remake the success of KDB+ instead of reinventing bits and pieces of the same tech, with the result being slower DB operations and more RAM usage.<p>[1] <a href="http://kparc.com/q4/readme.txt" rel="nofollow">http://kparc.com/q4/readme.txt</a>
I've personally been very impressed with Spark's RDD API for easily parallelizing tasks that are "embarrassingly parallel". However, I have found the DataFrames API to not always work as advertised, and thus I'm very skeptical of the benchmarks.<p>A prime example: I was using some very basic windowing functions, and due to the data shuffling (the data wasn't naturally partitioned) the jobs seemed very buggy, and it was not clear why they were failing. I ended up rewriting the same section of code using Hive, and it had both better performance and didn't seem to have any odd failures. I realize this stuff will improve, but I'm still skeptical.
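For context, the kind of windowing I mean computes a running aggregate within each partition key, which forces the engine to shuffle all rows for a key together first. A pure-Python sketch of the semantics (not the actual DataFrame API; data and names are invented for illustration):

```python
from collections import defaultdict
from itertools import accumulate

# Sketch of the semantics of a windowed running sum, roughly what
# Spark SQL expresses as:
#   SUM(amount) OVER (PARTITION BY key ORDER BY ts)

def running_sum_per_key(rows):
    # The "shuffle": bring all rows for a key into one partition.
    partitions = defaultdict(list)
    for key, ts, amount in rows:
        partitions[key].append((ts, amount))
    # Within each partition, order by timestamp and accumulate.
    result = {}
    for key, part in partitions.items():
        part.sort()
        result[key] = list(accumulate(amount for _, amount in part))
    return result

rows = [("a", 2, 10), ("b", 1, 5), ("a", 1, 3), ("b", 2, 7)]
# running_sum_per_key(rows) -> {"a": [3, 13], "b": [5, 12]}
```

When the data isn't already partitioned by the window key, that grouping step is a full shuffle across the cluster, which is exactly where the failures I hit seemed to originate.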
It's interesting to see that as further work is done on Spark (and I'm pleased they're actually improving the system), it behaves more and more like a database.
I'll have to benchmark Spark 2.0 against Flink; it seems like it could be faster than Flink now. It does depend on whether the dataset they used was already in memory before they ran the benchmark, but these are still some pretty impressive numbers, and some of the optimizations they made with Tungsten sound similar to what Flink was doing.