This article is wrong on so many levels...<p>- It implies that async execution is equivalent to callback hell. In reality there are excellent ways to write async code that reads just like synchronous code (generators, async/await).<p>- It benchmarks multi-core (sync) vs single-core (async) and draws conclusions from those results.<p>- It presents async execution as the opposite of clustering. In reality it's a best practice to use both together.<p>...and everything that follows from those premises is irrelevant.
I don't understand these straw men. The virtue of the asynchronous programming model is low memory overhead compared to threads AND low latency for IO-bound tasks in highly concurrent scenarios.<p>Request-per-process/thread has all the same memory-overhead implications it has always had. It's almost as if the author is unaware of the reason for node.js' success, or why it was built in the first place.<p>Also, "callback hell" is just FUD. Nobody who does this for a living and knows what they're doing really has an issue with it. Promises solve the reliability issues, and async/await solves the syntactic complexity.<p>I'd like to see this same analysis at 1000-request concurrency, measuring memory overhead and using async/await for the code comparison. Cooperative multitasking will always be capable of lower latency when you know what you're doing, and async programming is light-years simpler than multi-threaded programming.
Edit: The whole idea behind clustering is to run an application instance per thread/core for better performance and load balance requests between the application instances. This article seems absurd in its intention to force you to choose between multi-threaded synchronous application instances or a single application instance using callbacks.<p>We've been running a Koa.js API server using Cluster in production for over a year now with no hiccups (on a Windows machine).<p>I've been thinking about making the switch to iisnode, as it handles clustering, graceful shutdown and zero-downtime from within IIS (and does a couple of other things). It uses named pipes to proxy connections and also supports web sockets among other things.<p>With the nodeProcessCommandLine configuration setting, you can pass parameters to node (e.g. --harmony), use babel-node or io.js.<p>See:
<a href="http://www.hanselman.com/blog/InstallingAndRunningNodejsApplicationsWithinIISOnWindowsAreYouMad.aspx" rel="nofollow">http://www.hanselman.com/blog/InstallingAndRunningNodejsAppl...</a><p>A blog post I wrote a while ago:
<a href="https://shelakel.co.za/hosting-almost-any-application-on-iis-via-iis-node/" rel="nofollow">https://shelakel.co.za/hosting-almost-any-application-on-iis...</a>
Hogwash. It seems like this person doesn't understand that node is an event-based, asynchronous platform, and that's one of its big advantages over languages that force threads on you or don't generally offer parallel execution.<p>If this had compared Node.js with clustering and async code against a synchronous language like Ruby, it might have been interesting. Maybe. But non-asynchronous operations in Node are an antipattern that core node contributors are trying to eliminate (the *Sync functions in the node stdlib are candidates for removal).<p>Good coding conventions, promises, generators, and async/await are your friends for making callback hell go away.
OpenResty does something similar. Code can be written synchronously, but all the network IO, for example, happens in a non-blocking manner. The code still looks synchronous, though — no callback hell. This doesn't come without issues, as you need to switch libraries to use OpenResty's (Nginx) network primitives. Overall it is one of the nicest platforms I have worked with: a great web server (Nginx) that can be programmed with a great language (Lua + LuaJIT).<p>At the Nginx conference the Nginx developers were showing interest in bringing JavaScript to the platform. They said they will take a similar approach to the one OpenResty uses (i.e. no callback hell).
I recently started programming more with promises and I can say that I am very satisfied with the way they do away with the callback hell problem.<p>Instead of taking a callback, a function returns a promise, which can be chained to do further work.<p>Ex:<p>file.read().then(console.log);<p>Or, using the example from the article:<p>var a = fileA.read();<p>var b = fileB.read();<p>Promise.all([a, b]).then((data) => console.log(data[0], data[1]));
We (<a href="http://Clara.io" rel="nofollow">http://Clara.io</a>) run multiple NodeJS instances per machine and our code base is async. I believe this gives us the best of both worlds.<p>Also, the *Sync versions of calls in NodeJS are likely to be deprecated, so the synchronous approach won't even be an option in NodeJS going forward.
If you look more closely at the actual code, this exercise compares the performance of readFileSync (an operation that deliberately blocks) on 1 core vs 2 cores.
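For reference, the two styles under comparison look roughly like this (a minimal illustration, not the article's actual benchmark code):<p><pre><code> var fs = require('fs');

    // Blocking: the event loop stalls until the whole file is read.
    var data = fs.readFileSync('/etc/hosts');
    console.log(data.length);

    // Non-blocking: the read is handed off and the callback runs later,
    // leaving the event loop free to handle other requests meanwhile.
    fs.readFile('/etc/hosts', function (err, contents) {
      if (err) throw err;
      console.log(contents.length);
    });
</code></pre>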
It makes sense to have both event-based and non-event-based options for server side javascript development.<p>OS-level multitasking won't be able to achieve the same level of concurrency, but the simplicity and maintainability of the application will go up. The right choice depends on the needs of the application, of course.<p>Both evented and non-evented approaches have their place, and most server-side languages allow development with either approach: Ruby, Python, C, Java all have solid options for evented and non-evented solutions.
I've adapted the node-fibers library to write synchronous-style code. It works really well for my needs, but I do understand that my approach litters the function prototype, which is not ideal.<p>Code looks like this:<p><pre><code> var sync = require('./sync');
sync(function () {
try {
var result = someAsyncFunc1.callSync(this, 'param1', 'param2');
if (result.something === true) {
var result2 = someAsyncFunc2.callSync(this, 'param1');
} else {
var result2 = someAsyncFunc3.callSync(this, 'param1');
}
console.log(result2.message);
} catch (ex) {
// One of them returned an err param in their callback
}
});
</code></pre>
I haven't tested the performance, so I have no idea whether it runs like a dog.
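For the curious, the ./sync helper isn't shown above, but built on node-fibers it could look something like this (a rough sketch; the details of my actual module may differ):<p><pre><code> var Fiber = require('fibers');

    // sync(body): run the body inside a fiber so callSync can suspend it.
    module.exports = function sync(body) {
      Fiber(function () {
        body.call(Fiber.current); // `this` inside the body is the fiber
      }).run();
    };

    // callSync(fiber, ...args): invoke the async function, yield the fiber
    // until its callback fires, then return the result or throw the error.
    // Yes, this litters Function.prototype, as noted above.
    Function.prototype.callSync = function (fiber) {
      var args = Array.prototype.slice.call(arguments, 1);
      var fn = this;
      args.push(function (err, result) {
        fiber.run({ err: err, result: result }); // resume the fiber
      });
      fn.apply(null, args);
      var outcome = Fiber.yield();        // suspend until the callback runs
      if (outcome.err) throw outcome.err; // surfaces in the try/catch above
      return outcome.result;
    };
</code></pre>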
>Cons: Larger memory footprint<p>The more annoying con is the lack of shared memory. A single process can be much less complex when it doesn't have to worry about messaging systems and off-process caching.
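As a concrete illustration (hypothetical key/value names, not from the article): once you cluster, even a trivial in-memory cache turns into message passing between workers and the master, or an external store like Redis.<p><pre><code> var cluster = require('cluster');

    if (cluster.isMaster) {
      // The master owns the "shared" state and answers lookups from workers.
      var cache = {};
      cluster.fork();
      cluster.fork();
      Object.keys(cluster.workers).forEach(function (id) {
        cluster.workers[id].on('message', function (msg) {
          if (msg.type === 'set') cache[msg.key] = msg.value;
          if (msg.type === 'get') {
            cluster.workers[id].send({ key: msg.key, value: cache[msg.key] });
          }
        });
      });
    } else {
      // A worker can't just read a shared object; it has to ask the master.
      process.send({ type: 'set', key: 'greeting', value: 'hello' });
      process.send({ type: 'get', key: 'greeting' });
      process.on('message', function (msg) {
        console.log('worker got', msg.key, '=', msg.value);
      });
    }
</code></pre>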
This is unsuitable for real-world applications, where you will, inevitably, need at least a little mutable shared state. Async handles this reasonably well at a decent performance cost; shared-memory threads (near-certain catastrophic failure) and database-only state (awful performance) do not.<p>The only real competition is transactional memory, but it hasn't become mainstream yet.
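To make that concrete, a trivial sketch (not from the article): in a single async Node process, callbacks run one at a time, so mutating shared state needs no locks; the moment you go multi-process, each worker has its own copy of the counter.<p><pre><code> var http = require('http');

    // Shared mutable state: safe without locks in one process because the
    // event loop runs only one callback at a time.
    var hits = 0;

    http.createServer(function (req, res) {
      hits += 1;
      res.end('hit #' + hits + '\n');
    }).listen(3000);
</code></pre>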
<i>"Asynchronous event-driven programming is at the heart of Node.js, however it is also the root cause of Callback Hell."</i><p>I'd argue the root cause is... callbacks.<p>Asynchronous programming can be done elegantly, in a synchronous style using "async/await", originally (?) in C# [1], likely to be added to the next version of JavaScript [2], also in Dart [2], Hack [4], and Python 3.5 [5]. It can also be emulated in languages with coroutines/generators [6][7][8] (which in turn can be implemented by a fairly simple transpiler [9][10])<p>This:<p><pre><code> function foo(a, callback) {
bar(a, function(err, b) {
if (err) {
callback(err)
} else {
baz(b, function(err, c) {
if (err) {
callback(err)
} else {
// some more stuff
callback(null, d)
}
})
}
})
}
</code></pre>
Becomes this:<p><pre><code> async function foo(a) {
      var b = await bar(a)
      var c = await baz(b)
// some more stuff
return d;
}
</code></pre>
And you'll see even greater improvements when using other constructs like try/catch, conditionals, loops, etc.<p>[1] <a href="https://msdn.microsoft.com/en-us/library/hh191443.aspx" rel="nofollow">https://msdn.microsoft.com/en-us/library/hh191443.aspx</a><p>[2] <a href="http://jakearchibald.com/2014/es7-async-functions/" rel="nofollow">http://jakearchibald.com/2014/es7-async-functions/</a><p>[3] <a href="https://www.dartlang.org/articles/await-async/" rel="nofollow">https://www.dartlang.org/articles/await-async/</a><p>[4] <a href="http://docs.hhvm.com/manual/en/hack.async.asyncawait.php" rel="nofollow">http://docs.hhvm.com/manual/en/hack.async.asyncawait.php</a><p>[5] <a href="https://lwn.net/Articles/643786/" rel="nofollow">https://lwn.net/Articles/643786/</a><p>[6] <a href="https://github.com/petkaantonov/bluebird/blob/master/API.md#promisecoroutinegeneratorfunction-generatorfunction---function" rel="nofollow">https://github.com/petkaantonov/bluebird/blob/master/API.md#...</a><p>[7] <a href="https://github.com/kriskowal/q/tree/v1/examples/async-generators" rel="nofollow">https://github.com/kriskowal/q/tree/v1/examples/async-genera...</a><p>[8] <a href="http://taskjs.org/" rel="nofollow">http://taskjs.org/</a><p>[9] <a href="https://babeljs.io/docs/learn-es6/#generators" rel="nofollow">https://babeljs.io/docs/learn-es6/#generators</a><p>[10] <a href="https://facebook.github.io/regenerator/" rel="nofollow">https://facebook.github.io/regenerator/</a>
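To illustrate the try/catch point above (readChunk/writeChunk are hypothetical async helpers, not from the article):<p><pre><code> async function copyStuff(src, dst) {
      try {
        // Awaited rejections surface here as thrown exceptions, so loops
        // and conditionals need no extra error plumbing.
        for (var i = 0; i < 3; i++) {
          var data = await readChunk(src, i);
          await writeChunk(dst, i, data);
        }
      } catch (err) {
        console.error('copy failed:', err);
        throw err;
      }
    }
</code></pre>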
This is great. I wonder how long until the node community "discovers" that using a dedicated httpd and communicating over a standardized middleware interface (FastCGI, WSGI, Rack, etc.) is also a superior approach to handling HTTP directly.
Now forgive my naivety, but isn't this just threads?<p>I understand that because of the funny scoping rules, threading is actually surprisingly hard? But surely you'd want more control over your threaded event loops?
Do your apps run single-threaded? Why wouldn't you cluster? In-memory state is easily avoidable. Using in-memory sessions even prints a warning in express by default.