From their readme:<p>"The test was only set to reach up to 100 concurrent connections (each sending 1000 messages per second) - Total of 100K messages per second."<p>So they had only 100 concurrent connections.
Is 100K messages per second on <i>8 cores</i> considered high for node/websockets microbenchmarking of the socket path?<p>That doesn't seem like much based on past experience writing high-throughput messaging code, and all this is doing is spitting out length-framed messages to a socket.
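To illustrate what "spitting out length-framed messages" means, here's a minimal sketch of simple length-prefixed framing in Node.js: a 4-byte big-endian length header followed by the payload. (This is a hypothetical illustration of the general technique, not the actual WebSocket wire format, which uses variable-length headers and client-side masking per RFC 6455.)

```javascript
// Frame a payload with a 4-byte big-endian length prefix.
function frame(payload) {
  const body = Buffer.from(payload);
  const header = Buffer.alloc(4);
  header.writeUInt32BE(body.length, 0);
  return Buffer.concat([header, body]);
}

// Read one framed message back out of a buffer.
function unframe(buf) {
  const len = buf.readUInt32BE(0);
  return buf.slice(4, 4 + len).toString();
}

console.log(unframe(frame("hello")));
```

Per-message work on the hot path is essentially this: a small allocation, a length write, and a socket write - which is why six-figure message rates per core are attainable.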
There's also sockjs (<a href="http://sockjs.org" rel="nofollow">http://sockjs.org</a>) which has some rather impressive benchmark results when using the python/tornado server with PyPy (<a href="http://mrjoes.github.io/2011/12/15/sockjs-bench.html" rel="nofollow">http://mrjoes.github.io/2011/12/15/sockjs-bench.html</a>). 155,000-195,000 messages per second on a single core.
It'd be nice to compare it with:
<a href="https://github.com/automattic/socket.io-redis" rel="nofollow">https://github.com/automattic/socket.io-redis</a><p>I wrote an example application using it here:
<a href="https://github.com/guille/weplay" rel="nofollow">https://github.com/guille/weplay</a>
Why does each worker need a separate store process? It seems that on an 8-core machine the maximum worker count can only be 3 (1 master, 3 workers, 3 stores - one process per core). If workers had in-memory stores - or at least connected to a shared Redis server - performance should increase with up to 4 more workers.
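The core budget described above can be sketched as follows (assuming one master process plus one paired store process per worker, one process per core):

```javascript
// With 1 master and a dedicated store per worker, each added worker
// costs 2 processes: 1 (master) + 2 * workers <= cores.
const cores = 8;
let workers = 0;
while (1 + (workers + 1) * 2 <= cores) workers++;
console.log(workers); // 3 -> 1 master + 3 workers + 3 stores = 7 processes
```

Dropping the per-worker store (or moving it to a shared external Redis) changes the cost per worker from 2 processes to 1, which is what would free up cores for more workers.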