I wonder how long it will be before compilers/interpreters of async-aware languages just do this by default. CPUs and low-level language compilers already jump through all kinds of hoops: out-of-order execution, branch prediction, caching, parallel execution, etc.

I picture a day maybe 10 years from now where developers in most languages don't even have to think about these things. All the old-timers will still be structuring their code "as though it didn't exist", whereas the new kids will fly along without even thinking about it. Kind of like garbage collection in its first few years.
Great talk.

For more on concurrency and parallelism in Haskell, check out Parallel and Concurrent Programming in Haskell [0], widely regarded as the best book on the subject and also written by Simon Marlow.

[0]: http://chimera.labs.oreilly.com/books/1230000000929
Haxl is a powerful abstraction with, IMHO, a beautifully simple implementation.

However, for our use case at LumiGuide (reading and writing registers of modbus devices) it wasn't simple enough. We just needed an abstraction for batching and did not need caching and the other features Haxl provides.

So I wrote monad-batcher, which, as the name implies, only provides a batching abstraction (one which can also be used to execute commands concurrently). All the other features can be built on top of monad-batcher as separate layers (separation of concerns).

The library is available on Hackage but needs a bit more documentation (a tutorial would be nice):

http://hackage.haskell.org/package/monad-batcher
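For anyone curious what "just a batching abstraction" looks like under the hood, here is a minimal sketch of the core idea: an Applicative whose <*> merges the pending requests of two blocked computations, so a scheduler can issue them as one batch. All names below (Fetch, BlockedRequest, fetch, runFetch) are illustrative only, not the actual monad-batcher or Haxl API:

    module BatchSketch where

    import Data.IORef

    -- A computation is an IO step that either finishes, or blocks on a set
    -- of pending requests plus a continuation to resume once they are answered.
    newtype Fetch a = Fetch { unFetch :: IO (Result a) }

    data Result a
      = Done a
      | Blocked [BlockedRequest] (Fetch a)

    data BlockedRequest = BlockedRequest
      { reqKey :: String                -- e.g. a register address
      , reqBox :: IORef (Maybe String)  -- filled in when the batch executes
      }

    instance Functor Fetch where
      fmap f (Fetch m) = Fetch $ do
        r <- m
        case r of
          Done a       -> pure (Done (f a))
          Blocked br c -> pure (Blocked br (fmap f c))

    instance Applicative Fetch where
      pure a = Fetch (pure (Done a))
      Fetch mf <*> Fetch mx = Fetch $ do
        rf <- mf
        rx <- mx
        case (rf, rx) of
          (Done f,        Done x)        -> pure (Done (f x))
          (Done f,        Blocked br c)  -> pure (Blocked br (fmap f c))
          (Blocked br c,  Done x)        -> pure (Blocked br (fmap ($ x) c))
          -- The key step: two independently blocked computations merge their
          -- pending requests, so they end up in the same batch.
          (Blocked b1 c1, Blocked b2 c2) -> pure (Blocked (b1 ++ b2) (c1 <*> c2))

    -- Register one request; the result becomes available after the batch runs.
    fetch :: String -> Fetch String
    fetch key = Fetch $ do
      box <- newIORef Nothing
      let resume = Fetch (maybe (error "unanswered") Done <$> readIORef box)
      pure (Blocked [BlockedRequest key box] resume)

    -- Drive the computation, executing each round of collected requests as
    -- one batch (simulated here by echoing the keys back as answers).
    runFetch :: Fetch a -> IO a
    runFetch (Fetch m) = do
      r <- m
      case r of
        Done a -> pure a
        Blocked reqs cont -> do
          putStrLn ("issuing a batch of " ++ show (length reqs) ++ " requests")
          mapM_ (\(BlockedRequest k box) ->
                   writeIORef box (Just ("value of " ++ k))) reqs
          runFetch cont

With (<*>) wired up like this, runFetch ((,) <$> fetch "reg1" <*> fetch "reg2") issues a single batch of two requests; caching, deduplication and so on could then be layered on top of a structure like this as separate concerns.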
I looked through the slides but not the video, and the slides ignore the hard problem: how do you schedule these requests? How do you know how many parallel requests you can issue without hammering the database or service? How do you batch queries so that you get acceptable latency and a query size that will not choke the database?

The last question is probably easy for most use cases where you have independent requests coming in (typical web application): in the context of a single request you can usually get away with batching as much as possible. But the scheduling problem is very similar to the promises of "free parallelism because Church-Rosser": actually taking advantage of it is an open problem. Even when you know in advance how much time each job takes, multiprocessor scheduling is NP-hard.

Anyway, if someone watched the video and the question is addressed there, please let me know so I can watch it.
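To make the "how many parallel requests" part concrete: the blunt answer most codebases settle for is a fixed concurrency cap, i.e. a semaphore around each request. A sketch in Haskell using the async package (withBounded is my own illustrative name); this of course says nothing about the harder batch-sizing and latency questions above:

    import Control.Concurrent.Async (mapConcurrently)
    import Control.Concurrent.QSem  (newQSem, waitQSem, signalQSem)
    import Control.Exception        (bracket_)

    -- Run all actions concurrently, but never more than n at a time.
    withBounded :: Int -> [IO a] -> IO [a]
    withBounded n actions = do
      sem <- newQSem n
      mapConcurrently (bracket_ (waitQSem sem) (signalQSem sem)) actions

Picking n is still guesswork or load testing; nothing here adapts to what the database can actually absorb.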
I have not fully digested it yet, but it seems very similar to Scala Parallel Collections and Java 8 Streams. There are databases which implement such interfaces.