If you take a look at how the GHC Haskell compiler (a "Sufficiently Smart Compiler(tm)" in my opinion) works, for example, it does not naively allocate objects, create thunks, and emit trampoline code.<p>Instead, it analyzes the program's structure as graphs and in the end emits rather efficient machine code, not too dissimilar from what your native-code-emitting C compiler does.<p>If you look at the machine code for something as "stupid" as the Haskell example below, the output object code does not resemble the structure of the source program at all. (It's not quite as efficient as what a C compiler would produce, but it still proves the point.)<p><pre><code> foreign export ccall fac :: Int -> Int
fac :: Int -> Int
fac n = foldl (*) 1 . take n $ [1..]
</code></pre>
Compiler and programming language research is a very important field that yields real performance benefits as well as better programmer productivity. That includes using category theory to reason about program correctness.<p>If you're interested in how the Haskell compiler works, "The Implementation of Functional Programming Languages" is a good (albeit a bit outdated) starting point. The whole book is freely available here: <a href="http://research.microsoft.com/en-us/um/people/simonpj/papers/slpj-book-1987/" rel="nofollow">http://research.microsoft.com/en-us/um/people/simonpj/papers...</a><p>I do agree with the title a bit, though. Some of our programming environments are just ridiculously slow. Being slow also means consuming a lot of power, which matters more and more now that so many computers run on batteries.
Aircraft are made of metal, not fluid dynamics.
Rockets are made of metal, not ballistics equations.
Nuclear bombs are made of metal, not quantum theory.<p>Well, metal <i>is</i> important, but to put it to any use beyond very naïve fiddling, good abstractions are indispensable.<p>Flamebait titles, on the other hand, don't help in the slightest.
When you first wrote about how Hadoop is a waste of time if you don't have multi-TB worth of data (<a href="http://www.chrisstucchio.com/blog/2013/hadoop_hatred.html" rel="nofollow">http://www.chrisstucchio.com/blog/2013/hadoop_hatred.html</a>), I thought that was classic linkbait. And then I actually started seeing real benefits of not doing map-reduce for smaller datasets (small = 1GB-1TB) & just sticking to plain old Scala (POSO as opposed to POJO :)
Similarly, this article seems like linkbait on the surface but makes a lot of sense if you do anything performance-intensive. I recently tried implementing a multi-layer neural net in Scala - I eventually ended up rewriting my mappers as while loops & introducing some mutables, because at some point all this FP prettiness is nice but not very performant. It looks good as textbook code for toy examples, but takes too long to execute. I'm still a huge fan of FP, but nowadays I don't mind the occasional side effect & sprinkling some vars around tight loops. It's just much faster, & sometimes that is important too.
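Not Scala, but here's a rough sketch of the same kind of rewrite in Python (the function names and shapes are made up for illustration): an FP-style pipeline replaced by an explicit loop over a mutable accumulator.<p><pre><code> import math

 # FP-style: a zip/generator/sum pipeline, i.e. the "pretty" version
 def forward_fp(weights, inputs):
     return math.tanh(sum(w * x for w, x in zip(weights, inputs)))

 # Loop-style: explicit indexing and a mutable accumulator
 def forward_loop(weights, inputs):
     acc = 0.0
     for i in range(len(weights)):
         acc += weights[i] * inputs[i]
     return math.tanh(acc)

 print(forward_fp([0.5, -0.2], [1.0, 2.0]))    # same result...
 print(forward_loop([0.5, -0.2], [1.0, 2.0]))  # ...different machinery
 </code></pre>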
What I took from this post is that we should keep working on compilers. They could provably optimize away most of the performance issues and move us closer to doing category theory instead of dealing with the metal.
The metal is also made of category theory:<p><a href="http://conal.net/blog/posts/circuits-as-a-bicartesian-closed-category" rel="nofollow">http://conal.net/blog/posts/circuits-as-a-bicartesian-closed...</a>
I'm no expert on these matters, but it seems a bit ridiculous to call out entire swaths of the programming/computer science world while referencing the JVM as a benchmark.
Good read. Minor quibble: I'm not sure calling Julia a functional language is really a fair statement. Yes, it's LISP-like, but if you want to go there, Ruby is a LISP-like language in the same ways Julia is. I don't really encounter many folks making the claim that Ruby is a functional language.
The most important thing here is to measure. Always measure. It doesn't matter what you think is going to be fast or not. The combination of super smart (or not so smart) compilers, multi-level hierarchical caches, pipelining, branch (mis)prediction, etc. means you can't just look at a piece of code in a high-level language and know how fast it will be. You always have to run the code and measure how long it takes. For anything where you really care about latency, measure.
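For example, a minimal sketch with Python's standard timeit module (the two snippets being compared are just placeholders; the absolute numbers only mean something on the machine you actually care about):<p><pre><code> import timeit

 looped = timeit.timeit(
     "l = []\nfor i in range(1000): l.append(i * i)", number=10000)
 listcomp = timeit.timeit(
     "l = [i * i for i in range(1000)]", number=10000)

 # Print both; guessing which wins (and by how much) is exactly what
 # caches, branch prediction, and the interpreter make unreliable.
 print(f"loop: {looped:.3f}s   comprehension: {listcomp:.3f}s")
 </code></pre>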
Computers are metal but programming languages are for humans. The purpose of functional abstractions is primarily to help humans produce correct code, not necessarily to optimize for speed. Referentially transparent functions, immutable data and type checking are meant to provide guarantees that reduce complexity, make a program easier to reason about and help stop the programmer from introducing costly and dangerous bugs. It doesn't matter how fast your "fugly imperative code" is, if it's incorrect. And the lesson we've learned the hard way over and over again is that humans are just not smart enough to reliably write correct fugly imperative code.
Quick question: when the article calls a pair of mutually-recursive functions "corecursive", is this a commonly-used meaning of the term, or just a mistake (since they're certainly not corecursive in the coinductive sense)?
Computers <i>are</i> "made of metal", and category theory does often lead far away from that reality. But that doesn't mean function calls are slow, or that we need Sufficiently Smart Compilers just to use map efficiently.<p>What it means is that abstractions should be designed with both the use and the implementation in mind. One way to do that is "zero-cost abstractions" a la C++, where the abstractions are pretty minimal. Another way is things like stream fusion and TCO (tail-call optimization), where it's easy to accidentally stray out of the efficiently representable subset.<p>But there are a lot of ways to get abstractions that are both higher-level and "zero-cost" (generating the code you would have written if you hadn't used them). For example, Python generators and C# iterators (coroutine-looking functions internally transformed into state machines) look a lot like Haskell higher-order functions, but the benefits of laziness, stream fusion, and TCO are just built into the abstraction, rather than being optimizations that may or may not happen depending on the language/compiler. They also turn out to be more flexible, since the iterator state is reified.<p>Another example is entity-component systems in game engines. You still get a nice abstraction of game-world objects composed from behaviors, like you might see in an OO hierarchy, but the cache behavior is vastly improved and the behaviors are again more flexible.
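A rough sketch of the generator/iterator point in Python (a made-up pipeline, not anything from the article): the laziness lives in the abstraction itself, so no fusion pass has to come to the rescue.<p><pre><code> def squares(xs):
     # A generator function: compiled into a resumable state machine
     for x in xs:
         yield x * x

 def take(n, xs):
     # Stops pulling from the upstream generator after n items
     for i, x in enumerate(xs):
         if i >= n:
             return
         yield x

 # Composes like map/take in a lazy functional language, but there are no
 # intermediate lists for a Sufficiently Smart Compiler to fuse away.
 print(list(take(5, squares(range(10**9)))))  # [0, 1, 4, 9, 16]
 </code></pre>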
Love the title.<p>A more common mistake I notice people making is writing code that makes more memory allocations than necessary.<p><pre><code> # Bad: makes an extra instantiation of a list with 1 in
 # it, which then needs to be read
x = set([1])
# Good
x = set()
x.add(1)
</code></pre>
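If you want to see that extra step rather than take it on faith, CPython's standard dis module will show it (exact opcodes vary between versions):<p><pre><code> import dis

 # The first form builds a throwaway list and then a set from it;
 # the second (or the literal {1}) never creates the intermediate list.
 dis.dis("set([1])")
 dis.dis("s = set(); s.add(1)")
 </code></pre>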
Overall, I think it is important to remember that when you write a program, every step translates to a set of operations. And this applies to all kinds of programming, not just functional programming.
When I started programming it was ZX Spectrum BASIC. No abstractions other than some symbols. Then I moved on to Perl, then PHP, and eventually C# - all increasing the abstraction from the hardware, but all mutable state and based on how the hardware works.<p>Recently I've started moving on to F# and Haskell, and it's really opened my eyes.<p>While computers keep getting faster, the humans who write the programs do not. While programming is about getting a computer to do things, the most important part is making it do what you want it to do. Anything that helps humans reason about what the computer will do - rather than exactly how - is a good thing in my book.
One can just as easily make the counter-argument that people understand theory, not computer instructions. So the cleaner and more functional your code is, the easier (and faster) it becomes for other people to build on top of or around it.<p>But that counter-argument only works if the current application requires tons of people to work on your code. Just as the article's argument only works if you actually need to squeeze out those extra ms on your one machine.<p>My only takeaway from this is "use the right tools for the right job".
Surely the point is to use an effective language (for your particular definition of effective) and then optimize based on performance testing. Otherwise you lose time writing fast low level code that doesn't need to be fast or low level, and possibly never get to the important stuff.
Silicon is not really metal[1], it's rock. It's one of the most abundant elements in the Earth's crust (second only to oxygen) and plentiful throughout the universe, which I find interesting. Much of our planet, and of the universe, could be turned into a giant computer.<p>[1] it's a metalloid