It's worth noting that functional programming is hard not because of the functional paradigm, but because of the constraints you face while solving a problem (memory, for one). These force you to turn the simple, composable functions that you could reuse forever into complex ones that have to split the work into multiple stages.

Another source of inherent complexity, which has nothing to do with the solution of the problem itself, is handling the input or runtime issues. The input cannot be guaranteed to always be coherent, and runtime issues may arise during execution (a user interrupt, say). At that point, interrupting the computation is easy, but as a user I may also want to:

* know why the function stopped, with a meaningful answer (not just a stack trace)

* know the location of the error in the input data

* be able to resume the computation

If you've ever programmed functionally, you know how hard it is to do something as simple as produce meaningful error messages from deep inside a series of fold/map calls (see the sketch below).

Keeping state is a simple (and I would say equally elegant) solution to this recurring problem. Please consider that I'm saying this from a functional programming perspective (as in: objects as localized state containers, not necessarily breaking the purity assumption).

Another issue is that we, as humans, operate in a stateful world. Stateful user interfaces are sometimes more efficient because of the way _we_ work. This can be seen in something as simple as "now select a file", which brings up a dedicated section of stateful code to navigate a tree. As such, you are constantly faced with the problem of keeping state.
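To make that last point about error messages concrete, here is a minimal sketch (the parser and names are hypothetical, not from any particular library) of the usual workaround: lift the traversal from plain values into Either, tagging each element with its position, so a failure deep inside the pipeline can still say where the input went wrong.

    import Text.Read (readMaybe)

    -- Parse a list of numbers; instead of failing anonymously somewhere
    -- inside a map/fold, report which entry was malformed.
    parseAll :: [String] -> Either String [Int]
    parseAll = traverse parseOne . zip [1 ..]
      where
        parseOne (i, s) =
          case readMaybe s of
            Just n  -> Right n
            Nothing -> Left ("entry " ++ show i ++ ": cannot parse " ++ show s)

    -- parseAll ["1","2","x","4"]  ==>  Left "entry 3: cannot parse \"x\""

Even this small example shows how the error plumbing changes the shape of otherwise simple, composable functions.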
We do have a functional language that's free from imperative constraints - it's called mathematics; everything runs on it.

We just don't have the source code.
Can someone please explain:
"
An FP system cannot compute a program since function
expressions are not objects. Nor can one define new
functional forms within an FP system. (Both of these
limitations are removed in formal functional programming
(FFP) systems in which objects "represent" functions.)
Thus no FP system can have a function, apply,
such that
apply: <x,y> = x :y
because, on the left, x is an object, and, on the right, x
is a function. (Note that we have been careful to keep
the set of function symbols and the set of objects distinct:
thus 1 is a function symbol, and 1 is an object.)"<p>I understand what it says. I don't understand why<p>apply:<x,y> === x:y<p>doesn't work in the construct of functional programming?
This page doesn't load for me, which is a pity.

Why limit the question just to functional programming? It applies just as much, if not more, to imperative programming; at least in functional programming you have the option to execute any pure function in parallel on some independent chunk of hardware. Whether imperative programming can be 'liberated' from the von Neumann bottleneck is a much harder problem.

In the end both will still have to deal with Amdahl's Law, so even if you could get rid of the 'looking at memory through a keyhole' issue, you're going to have to come to terms with not being able to solve your problem faster than the sequential execution of all the non-parallelizable chunks.
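For reference, a minimal sketch of the bound Amdahl's Law gives (the function name is mine, purely for illustration): with a parallelizable fraction p of the work and n processors, the overall speedup cannot exceed 1 / ((1 - p) + p / n).

    -- Upper bound on speedup for parallelizable fraction p on n processors.
    amdahlSpeedup :: Double -> Double -> Double
    amdahlSpeedup p n = 1 / ((1 - p) + p / n)

    -- amdahlSpeedup 0.9 64 ≈ 8.8, and even with unlimited processors the
    -- limit is 1 / (1 - 0.9) = 10: the sequential chunks set the ceiling.

So even a program that is 90% parallelizable never gets more than a 10x speedup, no matter how much independent hardware you throw at it.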
This is a really interesting article (and a blog I want to come back to). Thanks, OP.

What Haskell seems to achieve, and I'm not an expert on it yet, is an incentive system that encourages small functions *especially* when you're "in" a monad, because of the "any side effects makes the whole function 'impure'" dynamic as seen in the type system (also known as "you can't get out of the monad"). Of course, it's sometimes a pain in the ass to thread your RNG state or other parameters (that remain implicit in imperative programming) through the function, and that's where you get into the rabbit hole of specialized monads (State, Reader, Writer, RWS, Cont) and, for more fun yet, monad transformers.

I think that the imperative approach to programming has such a hold because, at small scale, it's far more intuitive. At 20 lines, *defining functions* feels dry and mathematical and we're much more attracted to the notion of *doing* things (or, making the computer do things). And, contrary to what we think in functional programming, imperative programming *is* more intuitive and simpler at small scale (even if more verbose). It's in composition and at scale (even moderate scale, like 100 LoC) that imperative programming starts to reach that high-entropy state where it's hard to reason about the code.
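To come back to the RNG threading mentioned above, here's a minimal sketch (assuming System.Random's StdGen and the mtl State monad) of how the generator that would be implicit in an imperative language gets threaded through, and how State hides that plumbing:

    import Control.Monad (replicateM)
    import Control.Monad.State (State, evalState, state)
    import System.Random (StdGen, mkStdGen, randomR)

    -- Threading the generator by hand would mean every function takes a
    -- StdGen and returns a new one; State does that bookkeeping for us.
    rollDie :: State StdGen Int
    rollDie = state (randomR (1, 6))

    rollMany :: Int -> State StdGen [Int]
    rollMany n = replicateM n rollDie

    -- evalState (rollMany 5) (mkStdGen 42) runs the whole computation from
    -- an initial generator and discards the final one.

The same incentive shows up here: rollDie and rollMany stay tiny, and the statefulness is visible in their types.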