Before everyone gets excited about these big numbers, I would like to remind you that even higher concurrency can be achieved with even lower CPU and memory usage using Erlang. These numbers are good for Node, but don't use this as evidence that Node is magical and much better at handling large numbers of connections than other systems.
I use Node in production. The main thing I like about it is that, looking at the system usage graphs as the number of users grows, the only thing going UP is bandwidth ;)

I'd really like to see a story from someone who really had 100k connected browsers. My online game currently peaks at about 1000 concurrent connections, and the node process rarely lasts longer than 2 hours before it crashes. Of course, using a db like Redis to keep user sessions makes the problem almost invisible to users, as the restart is instantaneous. I'm using socket.io, express, the crypto module, etc.

I'd really like to see real figures for node process uptime from someone with 5000+ concurrent connections.
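For what it's worth, a minimal sketch of that Redis-backed session pattern (using current socket.io and node-redis APIs; the event names and the userId handshake field are made up for illustration, not taken from the game above):

    const http = require('http');
    const { Server } = require('socket.io');
    const { createClient } = require('redis');

    const redis = createClient();           // defaults to localhost:6379
    const httpServer = http.createServer();
    const io = new Server(httpServer);

    io.on('connection', async (socket) => {
      // illustrative auth field; any stable user id works
      const key = `session:${socket.handshake.auth.userId}`;

      // restore whatever state survived the last process restart
      const saved = await redis.get(key);
      socket.emit('session:restore', saved ? JSON.parse(saved) : {});

      // persist state on every update so a crash loses almost nothing
      socket.on('session:update', async (state) => {
        await redis.set(key, JSON.stringify(state));
      });
    });

    (async () => {
      await redis.connect();
      httpServer.listen(3000);
    })();

Because the session state lives in Redis rather than in the node process, clients that reconnect after a crash get their state back immediately, which is why the restart is nearly invisible to users.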
Link to his next post, where he breaks 250k: http://blog.caustik.com/2012/04/10/node-js-w250k-concurrent-connections/
It's a shame that he didn't mention kernel tuning. Without custom settings (like net.ipv4.tcp_mem), I think it's very difficult to reach these numbers.
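For reference, the usual suspects live in /etc/sysctl.conf; the values below are illustrative, not the author's actual config, and need tuning to the box's RAM:

    # illustrative values only -- tune for your own memory and workload
    fs.file-max = 1000000                        # system-wide open file descriptor ceiling
    net.ipv4.tcp_mem = 786432 1048576 1572864    # TCP memory in pages: min / pressure / max
    net.ipv4.tcp_rmem = 4096 87380 8388608       # per-socket receive buffer, bytes
    net.ipv4.tcp_wmem = 4096 65536 8388608       # per-socket send buffer, bytes
    net.core.somaxconn = 65535                   # accept() backlog limit

Apply with sysctl -p, and raise the per-process descriptor limit as well (ulimit -n or limits.conf), since every open connection is a file descriptor.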
I did 3M/node on physical servers, 800K/node on EC2 instances.

We mostly use Erlang on the server side and node.js + CoffeeScript on the client side (where they rightfully belong ;)
It struck me that the author runs his apps as root (in the screenshots). But then I remembered he's using node.js to handle "thousands of concurrent connections".
I would really love to know what he did to tune that Rackspace VM. I had a terrible time trying to get node.js (and others) past 5,000 concurrent websocket connections on an m1.large EC2 instance or on Rackspace.