I want to process between 800 GB and 4 TB of binary data, split across multiple files. The current software, written in Java, is already parallel and can run on multiple nodes: it receives a chunk of data and can merge two chunks.<p>The problem is that it depends on quite messy code to run in parallel. Is there an open-source framework that could run my code in parallel, doing the same job as the current one?<p>I have considered Spark, but I'm not sure it's the best option. Each entry only needs to be read once, and the main goal is to extract a set of statistics over a grid.
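For what it's worth, the "process a chunk, merge two partial results" shape maps directly onto Spark's reduce/aggregate model: each task folds its entries into a partial statistics object, and Spark combines partials with an associative merge. A minimal sketch of such a mergeable object (the `GridStats` name, cell layout, and count/sum fields are illustrative assumptions, not the asker's actual statistics):

```java
// Hypothetical mergeable grid statistics: per-cell count and sum.
// Spark's reduce()/aggregate() needs exactly this shape — an
// associative, commutative merge of two partial results.
public class GridStats {
    final long[] count;
    final double[] sum;

    GridStats(int cells) {
        count = new long[cells];
        sum = new double[cells];
    }

    // Fold one decoded entry into this partial result
    // (each entry is touched exactly once, as in the question).
    void add(int cell, double value) {
        count[cell]++;
        sum[cell] += value;
    }

    // Merge two partial results into a new one.
    GridStats merge(GridStats other) {
        GridStats out = new GridStats(count.length);
        for (int i = 0; i < count.length; i++) {
            out.count[i] = count[i] + other.count[i];
            out.sum[i] = sum[i] + other.sum[i];
        }
        return out;
    }

    public static void main(String[] args) {
        GridStats a = new GridStats(2);
        a.add(0, 1.5);
        GridStats b = new GridStats(2);
        b.add(0, 2.5);
        b.add(1, 4.0);
        GridStats merged = a.merge(b);
        System.out.println(merged.count[0] + " entries in cell 0, sum " + merged.sum[0]);
    }
}
```

In Spark you would read the input with `JavaSparkContext.binaryFiles(...)`, build one `GridStats` per partition, and combine them with `reduce(GridStats::merge)`; the messy hand-rolled parallel plumbing disappears into the framework.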
I maintain an open-source project that does data-parallel processing,
<a href="https://sourceforge.net/projects/tymeacdse" rel="nofollow">https://sourceforge.net/projects/tymeacdse</a>
as well as an open-source project that does task-parallel processing, if that suits you better:
<a href="https://sourceforge.net/projects/tymeacse" rel="nofollow">https://sourceforge.net/projects/tymeacse</a>