The CPU and the von Neumann architecture itself are procedural. High-level code is mapped to assembly, which is then mapped to binary machine code.<p>Given that, higher-level languages are merely another layer of abstraction. C is procedural, and while it can be written in a functional style, it still maps down to assembly, and then to machine code.<p>The same goes for Lisp, or Haskell, or any other functional language. It still follows the chain down, and maps to assembly and machine code.<p>Given that, how well can a purely functional language like Haskell exploit the full features of today's multi-core CPUs?<p>For example, a common functional approach is to use a map() function to apply a specific function to a set of data. Say you have a list of 1000 strings, and you want to capitalize them all and return a new list of the capitalized strings. You'd have another function called upper() that gets applied to each element of that list.<p>So, your function call might look something like:<p><pre><code> capped_list = map( upper, my_list )
</code></pre>
And the return value, capped_list, will contain all of the strings, capitalized. Simple enough.<p>So the question is: do functional languages automatically have the ability to take this instruction and spread the data over multiple cores, or multiple processors, in order to return the final answer faster or more efficiently than a regular procedural language?
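<p>For what it's worth, in most languages the answer is "not automatically": map is sequential unless you explicitly opt in to parallelism. A minimal sketch in Python (chosen because the call above reads as Python; the list contents and worker count are placeholders) contrasting the built-in sequential map with an explicit process pool:<p><pre><code>from multiprocessing import Pool

def upper(s):
    """Capitalize one string; stands in for any per-element function."""
    return s.upper()

my_list = ["apple", "banana", "cherry"]  # placeholder data

# The built-in map is strictly sequential: one element after another,
# on a single core, no matter how many cores the machine has.
capped_list = list(map(upper, my_list))

# Parallelism is opt-in: Pool.map splits the list across worker
# processes and reassembles the results in their original order.
with Pool(processes=2) as pool:
    capped_parallel = pool.map(upper, my_list)
</code></pre><p>Haskell is similar in that GHC will not parallelize a plain map on its own, but because functions like upper are pure, the parallel library's parMap lets you request parallel evaluation deterministically, which is where functional languages have an edge over procedural ones.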