There are two lessons you could learn from this episode:

1. Use shallow trees and the clever workaround presented in the article.

2. Don't use Spark for tasks that require complex logic.

People should trace out the line of reasoning that leads them to use tools like Spark. It is convoluted and contingent: it goes back to work done at Google in the early 2000s, when the key to good price/performance was using a large number of commodity machines. Because they were cheap, those machines broke often, so you needed some really smart fault-tolerance technology like Hadoop/HDFS, which was followed by Spark.

The current era is completely different. Now the key to good price/performance is to light up machines on demand and then shut them down, paying only for what you use - perhaps on the spot market. You don't need to worry about storage - the cloud provider takes care of that - and you can't "bring the computation to the data" like in the old days, which removes one of the big advantages of Hadoop/HDFS. Because jobs are mostly doing IO and networking, and because computers are simply more reliable nowadays, they rarely fail due to hardware errors. So almost the entire rationale that led to Hadoop/HDFS/Spark is gone. But people still use Spark - and put up with "accidentally exponential behavior" - because the tech industry is so dominated by groupthink and marketing dollars.
I've hit almost the exact same issue with Hive, with a somewhat temporary workaround (like this post): reading the expression into a list [1] and rebuilding a balanced binary tree out of it (sketched below).

But we ended up implementing a single-level Multi-AND [2], so that AND expressions no longer form a tree at all and can be vectorized more cleanly than the nested structure with a function call for each node (this looks more like a tail call than a recursive function).

The ORC CNF conversion has a similarly massively exponential item inside, which is protected by a check for 256 items or fewer [3].

[1] - https://github.com/t3rmin4t0r/captain-hook/blob/master/src/main/java/org/notmysock/hive/hooks/AndOrRewriteHook.java#L155

[2] - https://issues.apache.org/jira/browse/HIVE-11398

[3] - https://github.com/apache/hive/blob/master/storage-api/src/java/org/apache/hadoop/hive/ql/io/sarg/SearchArgumentImpl.java#L288
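Roughly, the rebuild in [1] works like this (a hypothetical sketch in Scala to match the other snippets in the thread; the actual hook is Java, and all the names here are made up):

    // Sketch: flatten a deep left-leaning AND chain into a list, then
    // rebuild it as a balanced binary tree so the depth drops from
    // O(n) to O(log n).
    sealed trait Expr
    case class And(left: Expr, right: Expr) extends Expr
    case class Pred(name: String) extends Expr

    // Collect the non-AND children of a nested AND chain, in order.
    def flatten(e: Expr): List[Expr] = e match {
      case And(l, r) => flatten(l) ++ flatten(r)
      case leaf      => List(leaf)
    }

    // Rebuild a balanced AND tree by splitting the list in half recursively.
    def balance(preds: List[Expr]): Expr = preds match {
      case Nil           => sys.error("empty predicate list")
      case single :: Nil => single
      case _ =>
        val (l, r) = preds.splitAt(preds.length / 2)
        And(balance(l), balance(r))
    }

    // e.g. a 1000-deep chain becomes a tree of depth ~10
    val chain = (1 to 1000).map(i => Pred(s"p$i")).reduceLeft[Expr](And(_, _))
    val rebalanced = balance(flatten(chain))

Splitting the list in half at each step caps the tree depth at about log2(n), which is what keeps any downstream recursion over the tree from blowing up.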
Why not just...

    // bind the result once, so transform(tree.left) isn't re-evaluated
    val transformedLeftTemp = transform(tree.left)
    val transformedLeft = if (transformedLeftTemp.isDefined) {
      transformedLeftTemp
    } else None

...i.e., call transform once per child and reuse the result, instead of recomputing it on both sides of the if? (For an Option this even collapses to just val transformedLeft = transform(tree.left).)
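For anyone wondering why the temp val matters, here's a minimal self-contained sketch (made-up names, not the actual Spark code): calling the recursive function in both the condition and the branch re-traverses the whole subtree, which is where the exponential blowup comes from.

    sealed trait Expr
    case class And(left: Expr, right: Expr) extends Expr
    case class Leaf(name: String) extends Expr

    // Exponential: the recursive call runs twice per node, so a
    // left-leaning chain of depth d costs O(2^d).
    def transformSlow(e: Expr): Option[Expr] = e match {
      case And(l, r) =>
        if (transformSlow(l).isDefined) transformSlow(l).map(And(_, r))
        else None
      case Leaf(n) => Some(Leaf(n))
    }

    // Linear: compute the recursive result once and reuse it.
    def transformFast(e: Expr): Option[Expr] = e match {
      case And(l, r) =>
        val tl = transformFast(l)
        if (tl.isDefined) tl.map(And(_, r)) else None
      case Leaf(n) => Some(Leaf(n))
    }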
Good read - fwiw, if this is your blog, some of your links are broken; it looks like they're being treated as local paths:

https://heap.io/blog/%E2%80%9Dhttps://github.com/apache/spark/pull/24068%E2%80%9D
Spark is this weird ecosystem of people who take absolutely trivial concepts from SQL, bury their heads in the sand, ignore the past 50 years of RDBMS evolution, and then write extremely complicated (or broken) and expensive-to-run code. But whatever it takes to get Databricks to IPO! Afterwards the hype will die down and everyone will collectively abandon it, just like MongoDB - except for the unfortunate companies with so much technical debt that they can't extricate themselves from it.