I noticed that you send the "native threading" case through your library as well. Have you compared it to just using "naive" Java threads and a BlockingQueue?<p>Also: if the Google patches for user-mode threading are adopted, will Quasar have any advantages over a JVM that uses the same syscalls? Can you explain where such an advantage would come from?<p>I think what you've done is genuinely cool; I'm just trying to better understand what the 10x advantage actually comes from.
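For reference, here is a minimal sketch of the "naive" baseline asked about above: plain `java.lang.Thread`s handing work off through a `BlockingQueue`. The class name, queue capacity, and poison-pill value are all hypothetical; the point is only that every `put`/`take` parks a whole OS thread rather than a lightweight fiber.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class NaiveThreads {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(16);

        // Producer: a plain OS thread; put() blocks it when the queue is full.
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) {
                    queue.put(i);
                }
                queue.put(-1); // poison pill to stop the consumer
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // Consumer: take() blocks the whole thread until work arrives.
        Thread consumer = new Thread(() -> {
            try {
                int sum = 0;
                for (int v = queue.take(); v != -1; v = queue.take()) {
                    sum += v;
                }
                System.out.println("sum=" + sum);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}
```

With a handful of threads this baseline is hard to beat; the interesting comparison is at tens of thousands of concurrent tasks, where each OS thread's stack and scheduling cost is what a fiber library tries to avoid.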
"because it uses macros, the suspendable constructs are limited to the scope of a single code block, i.e. a function running in a suspendable block cannot call another blocking function; all blocking must be performed at the topmost function. It’s because of the second limitation that these constructs aren’t true lightweight threads, as threads must be able to block at a any call-stack depth"<p>Can you elaborate on this a bit? Let's say I have a function called 'fetch-url' which takes a core.async channel as an argument and makes a non-blocking http request (say, using http-kit), and in the callback handler i put the result onto the channel. If I'm in some other function, in which whose body I open a core.async go block and call fetch-url from within that go block, everything is still asynchronous is it not?
Any chance of someone putting together a Quasar benchmark for <a href="http://www.techempower.com/benchmarks/" rel="nofollow">http://www.techempower.com/benchmarks/</a>? It would be nice to see how it compares to other techniques.
Wouldn't this kind of development target be better served by optimizing small C/C++ programs instead of trying to optimize for some abstract virtual machine implemented on top of the hardware? I mean, if speed really is your goal, why not do it correctly instead of hitting yourself in the face with an extra tree before starting?