If only it were that easy: change programming languages, change programming models, and <i>poof</i>! Magical parallelism.<p>But parallelism is harder than that. It's an algorithm problem, a design problem, not a language or code problem. While OpenCL might be harder to write than plain C, for anything except the most embarrassingly parallel problems, that difficulty <i>pales</i> in comparison to making the solution parallel to begin with.<p>For every problem where you can simply split into masses of shared-nothing tasks, there are a dozen others where you can't. Rate-constrained video compression. Minimax AI search. Emulation. All of these <i>can</i> be parallelized, but it requires parallelizing the <i>algorithm</i> and making sacrifices that are light-years beyond what a dumb compiler that isn't even allowed to change program output (let alone rewrite half the program) could do.<p>Modifying an algorithm -- possibly even changing its structure entirely, changing its output, or even accepting nondeterminism -- is <i>inherent complexity</i>, to use the terminology of The Mythical Man-Month. No programming language or tool can eliminate this complexity. Good tools can make it easier to implement an algorithm efficiently, but they really can't take a complicated application and figure out how to change its design, structure, and behavior. Until we have AI-complete compilers that take program specs as "code" to be compiled, that's a human job.
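To make the contrast concrete, here's a minimal sketch of the easy case the parent describes -- "simply split into masses of shared-nothing tasks" -- in C++ with std::async. The function names and chunking scheme are mine, purely illustrative:

```cpp
// A minimal sketch of the "embarrassingly parallel" case: summing a large
// vector by splitting it into shared-nothing chunks, one std::async task each.
#include <future>
#include <numeric>
#include <vector>
#include <cstdio>

long long parallel_sum(const std::vector<int>& data, std::size_t tasks) {
    std::vector<std::future<long long>> parts;
    std::size_t chunk = data.size() / tasks;
    for (std::size_t t = 0; t < tasks; ++t) {
        auto begin = data.begin() + t * chunk;
        auto end   = (t + 1 == tasks) ? data.end() : begin + chunk;
        // Each task reads only its own slice; nothing is shared or mutated.
        parts.push_back(std::async(std::launch::async, [begin, end] {
            return std::accumulate(begin, end, 0LL);
        }));
    }
    long long total = 0;
    for (auto& p : parts) total += p.get();  // the only synchronization point
    return total;
}

int main() {
    std::vector<int> data(1'000'000, 1);
    std::printf("%lld\n", parallel_sum(data, 8));  // prints 1000000
}
```

The parent's examples (minimax search, rate-constrained encoding, emulation) don't decompose this cleanly: the tasks would have to exchange search bounds, bit budgets, or machine state, and that is where the real design work starts.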
Mutable state won't eat your children if you know how to handle it in a disciplined manner. Pure functions and immutable datatypes are definitely preferable in a variety of cases. However, just because you discover one day that there are better tools for driving screws than a hammer doesn't mean you have to start banging your nails in with a screwdriver and wearing a saffron robe and preaching to your friends that you have achieved liberation and that they won't be free until they also throw away their hammers.<p>The softwareverse (and the universe, for that matter) is still full of patterns and constructs that are inherently stateful, and our programs need to model that state as state. Heck, some constructs simply can't be made to work efficiently in a purely functional manner. Hash tables come to mind. Fortunately we already have a tool that's proven to be excellent for efficiently and effectively modeling state. No, it's not monads. It's <i>state</i>. And it does the job so well that even Haskell was forced to resort to it for its hash table implementation.<p>(Though admittedly not until after much sound and fury to the effect of, "Nobody needs hash tables; trees are Pure and just as good!" I'll say this for silver bullets - people sure can cling to them.)
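For what "disciplined" can look like in practice, here's a tiny C++ sketch (my own illustration, not from the comment): the hash table is mutated freely, but only inside one function, so the rest of the program sees nothing but the finished result.

```cpp
// Sketch: a mutable hash table used in a disciplined way. The unordered_map
// is mutated only inside count_words(); callers receive it by value and can
// treat the result as read-only data from then on.
#include <string>
#include <unordered_map>
#include <vector>

std::unordered_map<std::string, int>
count_words(const std::vector<std::string>& words) {
    std::unordered_map<std::string, int> counts;   // local, mutable, invisible outside
    for (const auto& w : words)
        ++counts[w];                               // in-place update: O(1) amortized
    return counts;                                 // ownership handed to the caller
}
```

The purely functional alternative -- threading a persistent tree map through a fold -- works too, but pays roughly an O(log n) factor per update, which is exactly the hash-table point above.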
While I agree with some things the author says, his conclusions are bullshit.<p>Very few processes want to be parallel. People who primarily do parallel programming think everything needs to be parallel, so they assume everyone must be going through the pain they are going through. This is false. Most people write sequential software.<p>Another issue is that purely functional programming is fundamentally flawed, because it attempts to eschew state. Many functional programmers will call this a virtue, but the world is stateful, and being able to reason about necessary state is how you do anything useful. Purely functional programming is great when you want a convenient test-bed for ideas, but Haskell as a practical programming language is a horrible idea. Ultimately, what we want is to eliminate unnecessary state, while reasoning about necessary state. Modern programming language researchers are doing this under the guise of "effect systems"; however, most of the research I've seen so far has left me underwhelmed.<p>Eventually functional programming researchers are going to end up somewhere that is quite far from where they started, but just a few small steps from logic programming.
The article starts by mentioning data ownership, but after noting that it didn't take hold in D it diverges into discussing just FP. Yes, FP works well for concurrency. But if you're complaining about data races, letting a thread own a piece of data and only be able to share it via channels solves the concurrency problem too. Thus imperative languages can live on if they adopt memory models in which data is owned by a single thread and cannot be shared. I notice Rust has a mechanism for sharing this kind of data by transferring ownership. That looks pretty cool to me.<p>Also, there are still plenty of applications where high-level scripting languages like Python and Ruby work great for application logic, and where single-threaded event-based systems work for handling user events. It's the data processing part that needs to be put in a safe concurrency sandbox.
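A rough C++ sketch of that ownership-transfer idea (illustrative names; note that C++ only expresses the convention, whereas Rust enforces it at compile time):

```cpp
// Sketch of "share by transferring ownership": the buffer is owned by exactly
// one thread at a time. C++ expresses the transfer with std::move on a
// unique_ptr; Rust enforces the same idea in the type system.
#include <memory>
#include <thread>
#include <vector>
#include <cstdio>

void worker(std::unique_ptr<std::vector<int>> data) {
    // This thread is now the sole owner; no locks are needed to mutate it.
    for (auto& x : *data) x *= 2;
    std::printf("worker done, first element = %d\n", (*data)[0]);
}

int main() {
    auto data = std::make_unique<std::vector<int>>(1000, 21);
    std::thread t(worker, std::move(data));   // ownership moves to the worker
    // `data` is now null here; touching *data would be a bug, and that's the point.
    t.join();
}
```

After the std::move, any further use of `data` in the parent thread is a bug the compiler won't catch; Rust's borrow checker rejects such code outright, which is what makes its version of the mechanism attractive.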
I am a big believer in FP (reasoning about functional code is much sounder even in the sequential case) but not in parallel programming. Most algorithms just do not parallelize that well, regardless of the programming paradigm used.<p>To quote Donald Knuth:
<i>'[...] So why should I be so happy about the future that hardware vendors promise? They think a magic bullet will come along to make multicores speed up my kind of work; I think it’s a pipe dream. (No—that’s the wrong metaphor! "Pipelines" actually work for me, but threads don’t. Maybe the word I want is "bubble.")'</i><p><i>'I won’t be surprised at all if the whole multithreading idea turns out to be a flop, worse than the "Itanium" approach that was supposed to be so terrific—until it turned out that the wished-for compilers were basically impossible to write.'</i>
This is a cute idea, but your computer is almost never CPU pegged. It spends most of its time waiting for input from you. For 95% of software, faster execution (through parallelism or whatever) would have no effect.<p>But don't take my word for it. Look at all the applications you have open right now and check if a magical 1e100Hz processor would make any meaningful difference. For me: iTunes: no. Vim: no. Chrome with hackernews: no. VLC: no. Terminal: no.<p>"We need to rethink software design to make our programs run faster" – Hardly. We're actually seeing the opposite trend. People are caring <i>less</i> about CPU performance now than they used to. One of the effects is that real software is being written in slow languages like Ruby and Coffeescript, performance be damned.<p>There are obviously exceptions (games and scientific simulations). But for the other 95% of code out there, we simply don't need the parallelism of functional languages to write good software.
Hunch: as hardware (multi-core) improves, we will <i>not</i> achieve a sudden breakthrough that enables us to run conventional software approaches on it (functional, imperative or otherwise). Clever people have been anticipating and investigating multi-core for decades, without result, apart from the "embarrassingly parallelizable". In later decades, actual multi-core hardware was adopted in leading-edge then mainstream applications: in supercomputers; in server farms (esp. Google); in GPUs. Mainstream desktop CPUs have been multi-core for a decade now; and, today, as the article indicates, even phones are multi-core. Phones.<p>In the big picture, multi-core will instead slowly become good enough to solve problems <i>suited to it</i> that couldn't be solved easily or cheaply before, such as massive simulation in non-government/military applications, including problems that we did not even see as "problems" before, they seemed so insoluble.<p>It will be a new golden age of computation, with utterly different concepts, approaches and values from today. Our current issues of programming languages will disappear, pushed down to a lower level, and compared to it, imperative and functional programming will be as twins.
While I do agree with the author that FP has inarguable benefits when it comes to optimizing for multithreading, I'm not sure I buy the logic pronouncing the full-fledged demise of imperative languages.<p>The notion that dev shops will switch to purely functional languages because a junior dev is bound to muck up the code base seems a little far-fetched. That's a <i>huge</i> price to pay for what amounts to a warm fuzzy feeling of immutable certainty (pun intended).
Great article, but does it answer the question it raised: Why has software evolution lagged behind?<p>Also, is the focus on parallel computing the only litmus test for judging this as the downfall of every imperative programming language?<p>To answer the first question, we have to abandon our technical computing hats in favour of the philosopher's hat.<p>What went wrong with the promise of reusable objects? Whatever happened to 4GL? Why are frameworks confined to isolated corners of language evolution, far removed from the domain problems faced by end-users?<p>Why, oh why, is every nerd developing an Order-Entry system from scratch? Why are nerds writing the same code again and again, for decades, in every conceivable new language (now FP)?<p>You see, if we start accepting structural restrictions like those imposed by FP, then philosophically we should follow that <i>maxim</i> to its logical conclusion: restrict to the point where we do not harm ourselves (by repeating).<p>Rant over. ;-)
I think this article is a really bad introduction, both to parallel and functional programming. There is just so much stuff that is badly informed, or simply wrong, e.g.:<p>> For a functional programmer there is no barrier to concurrency, parallelism, or GPU programming.<p>Amdahl's law? And GPUs, while they have recently gained branch support, still only perform well on problems that are highly data-parallel and don't need much branching.<p>> I may surprise you that the state of the art in parallel programming is OpenMP and OpenCL.<p>This did in fact surprise me a lot! Erlang? Well, if you think about it, a webserver renders webpages in parallel. If that is not state of the art and parallel, then what is?<p>While I am very interested both in parallel and functional programming, I am actually disappointed this article has made it onto the frontpage of HN...
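For concreteness, Amdahl's law is the bound in question: if only a fraction p of the work can run in parallel, n cores give at most

```latex
% Amdahl's law: speedup from parallelizing a fraction p of the work on n cores
S(n) = \frac{1}{(1 - p) + \frac{p}{n}}, \qquad
S(32)\big|_{p = 0.9} = \frac{1}{0.1 + 0.9/32} \approx 7.8, \qquad
\lim_{n \to \infty} S(n) = \frac{1}{1 - p} = 10.
```

(The 90%-parallel figure is just an illustrative number, not something from the article.)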
I guess I am kind of confused about the title, because I was under the impression Occam counts as imperative. Also, if we are talking general parallelization strategies, then Grand Central Dispatch is something to look at on the Mac.
> In fact the most popular language for parallel and distributed programming is Erlang — a functional language.<p>Erlang achieves its parallelism not through immutability (though it has that) but through a shared-nothing architecture, like smalltalk. You can implement shared-nothing concurrency imperatively. It circumvents the problem of two threads accessing the same memory location by restricting access to only one thread. Of course, then there's the problem when you do need to share data between threads - Erlang does this with STM (software transactional memory - like a DB).
Should we all take a serious look at functional programming? Absolutely. Should we start learning a functional programming language? Maybe not. C++ is a multi-paradigm language in which one can write functional programs, and it lends itself nicely to switching back to stateful imperative programming when needed.
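A tiny illustration of that mix (a sketch of my own, not from the comment): the pipeline below stays functional -- const data, pure lambdas, no visible mutation -- and the imperative escape hatch is one line away when it's the clearer tool.

```cpp
// Functional-style C++: const inputs, pure lambdas, no visible mutation,
// with ordinary imperative code available whenever it reads better.
#include <algorithm>
#include <numeric>
#include <vector>
#include <cstdio>

int main() {
    const std::vector<int> xs = {1, 2, 3, 4, 5};

    // "map": build a new vector instead of mutating xs
    std::vector<int> squares(xs.size());
    std::transform(xs.begin(), xs.end(), squares.begin(),
                   [](int x) { return x * x; });

    // "fold": reduce without any loop-carried mutable locals in sight
    const int sum = std::accumulate(squares.begin(), squares.end(), 0);
    std::printf("%d\n", sum);   // prints 55

    // ...and the stateful, imperative escape hatch when it's clearer:
    for (auto& s : squares) s += 1;   // in-place update
}
```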