This is an interesting read. One part jumped out at me (sorry for the long quote).<p><i>The purpose of the technical problem was to assess more directly candidates ability to write Haskell programs that can be used to solve real world problems, where memory usage and performance are important. The problem was all about evaluation order and memory behaviour. We started by asking candidates to look at a short program and say what shape they would expect the heap profile to be. That would then lead on to a discussion of what things are evaluated at what stage and how much memory they are taking in the meantime. For the final step we asked candidates to rewrite the program to run in constant space. We felt overall that the technical problem was quite useful and we allowed it to become a significant factor in our final decision making process.</i><p><i>The choice of problem is based on our belief that a good understanding of evaluation order is very important for writing practical Haskell programs. People learning Haskell often have the idea that evaluation order is not important because it does not affect the calculated result. It is no coincidence that beginners end up floundering around with space leaks that they do not understand.</i><p>I've written a lot of C++ and a fair amount of Haskell, and this kind of reasoning about the evaluation order and space usage of a Haskell program is no less of a black art than pointer arithmetic or manual memory management in C++. In both cases, the learning curve is too steep for just writing practical programs, which is why garbage-collected languages have displaced C++ in many domains.<p>It's particularly damning to say that "good understanding of evaluation order is very important for writing practical Haskell programs" because the easiest evaluation order to understand is strict evaluation order.
In that sense, lazy evaluation is to strict evaluation as manual memory management is to garbage collection, except that the latter is supposed to be an improvement on the former.
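To make the space-leak point concrete, a minimal sketch of the textbook case (not necessarily the exact program from the interview): a lazy left fold builds a chain of unevaluated thunks proportional to the input length, while the strict variant runs in constant space.

```haskell
import Data.List (foldl')

-- Lazy foldl leaves the accumulator unevaluated, so summing a list
-- builds the thunk (((0+1)+2)+3)+... in the heap before any addition
-- happens; the heap profile grows linearly with the input.
sumLazy :: [Int] -> Int
sumLazy = foldl (+) 0

-- foldl' forces the accumulator at each step, so the same fold
-- runs in constant space regardless of the list's length.
sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0

main :: IO ()
main = print (sumStrict [1 .. 1000000])
```

Both functions compute the same result, which is exactly the trap the quote describes: evaluation order never changes the answer, only the heap profile on the way there.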
<i>The next Haskell will be strict</i><p>-Simon Peyton Jones<p><a href="http://www.cs.nott.ac.uk/%7Egmh/appsem-slides/peytonjones.ppt" rel="nofollow">http://www.cs.nott.ac.uk/%7Egmh/appsem-slides/peytonjones.pp...</a><p>The latest research fads in CS aren't always beneficial.
They were looking for someone with experience in a client-facing role, but conducted the interview over IRC. This means they couldn't assess that person's body language etc. An odd decision IMO.
I am surprised that such quality control would be required with Haskell developers. I would have thought that people with such a skillset would clearly be motivated and capable enough that filtering on fairly rigorous criteria would be unnecessary.
Maybe it would be useful to gather this kind of annotated experience with evaluation order and memory behaviour into a little book of case studies (or a series of blog posts). There are some slides by dons and book chapters in RWH, but it is still a difficult topic.