Spark still relies on pieces of the Hadoop ecosystem underneath, typically HDFS for storage and often YARN for cluster management. The big difference is that Hadoop MapReduce writes intermediate results to disk between stages, while Spark keeps the working set in memory where it can, which is why it's so much faster for iterative jobs. And I'm not sure anything is going to kill Spark soon. Once a library gains critical mass, it's hard to replace in existing systems.
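
To make the disk-vs-memory point concrete, here's a rough PySpark sketch (the HDFS URL and file path are made up, and it assumes pyspark is installed). It reads from Hadoop's storage layer but caches the filtered data in executor memory, so repeated actions reuse it instead of hitting disk between jobs the way MapReduce would:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cache-demo").getOrCreate()

# Reading from HDFS: this is where Spark leans on Hadoop's I/O layer.
# (hdfs://namenode:9000/logs/events.txt is a hypothetical path.)
lines = spark.sparkContext.textFile("hdfs://namenode:9000/logs/events.txt")

errors = lines.filter(lambda line: "ERROR" in line)
errors.cache()  # keep the filtered RDD in memory across actions

# Both actions below reuse the in-memory copy; a MapReduce pipeline
# would materialize intermediate results to disk between jobs.
print(errors.count())
print(errors.filter(lambda line: "timeout" in line).count())

spark.stop()
```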