While Erlang's performance was not the best among the contenders, I really like its behavior here: load makes almost no difference in latency, so while you're serving fewer clients, you're guaranteed to be serving them well.<p>This is much better behavior than having random spikes in latency, which translates into a small but significant number of users complaining that your application is slow. Erlang is the opposite: every user gets the same experience, regardless of the load on the server.
My understanding has always been that Node.js was ideal for high-I/O workloads with a light CPU load.<p>Having it sleep for 100ms per request is a little strange for Node.js, though, because it is basically simulating CPU load, even though a Node server typically hands off work to an optimized library that might be disk- or network-I/O bound instead of CPU bound. 100ms is a lot of CPU work per request, no? That is specifically the sort of task Node is not recommended for, since it focuses on async I/O.<p>Maybe I'm wrong; I need to work on larger projects to see what loads are more typical. At my last job, the entire focus was to use Node to route work to C++ code, databases, etc., basically just request routing.<p>Still, this looks rough for Node. A 2x to 4x difference between it and the fastest is still a significant cost for such a nice backend to work with.
Unencrypted HTTP suggests this is post-load-balancer. It would have been nice to see core loads, as I'm betting Node would scale better if it weren't being its own LB layer as well. RAM usage would still be garbage, but I bet performance would get to about Go's level.<p>Go's concurrency model really fits the task best, though, and that's easy to see in the results.
I don't believe the built-in Node.js http server implements backpressure or 503 errors out of the box. Do the other systems tested do so? Could that explain the poor behavior and out-of-memory crash?