I wrote a chat server with Node.js/Socket.io + node-amqp. The message flow was dictated by business goals and was therefore somewhat odd. Each client had its own queue, and a single exchange was created for each room. A message arrived at Node.js over a WebSocket, from where it was pushed to the chat room's exchange with a temporary id. Next, the message was posted to a Rails-powered backend for processing and storing in a database. If Rails responded with success, the real message ID was pushed to the exchange; otherwise a command to kill the message by its temporary id was sent.

Given all this, I wasn't able to go beyond 2k simultaneous connections. The reason? Our Rails setup couldn't handle that many requests. For the sake of testing I wrote a synthetic test where the Rails requests were mocked. RabbitMQ was running on a Thinkpad Edge (quad-core i5 @ 2.6GHz); the node.js client and server were running on a dual-core Pentium @ 3GHz.
This configuration gave me 3.5-4.5 thousand message deliveries per second, with both PCs at the top of their capacity.
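For context, here is roughly what the flow looked like. This is a minimal sketch, not the original code: the identifiers and the postToRails helper are made up, and it assumes the old node-amqp connection.exchange/exchange.publish API with a socket.io 0.x-style server.

    var amqp = require('amqp');
    var io = require('socket.io').listen(8080);

    var connection = amqp.createConnection({ host: 'localhost' });

    connection.on('ready', function () {
      io.sockets.on('connection', function (socket) {
        socket.on('chat_message', function (msg) {
          // One exchange per room; push the message right away with a temporary id.
          connection.exchange('room.' + msg.roomId, { type: 'fanout' }, function (exchange) {
            var tempId = 'tmp-' + Date.now() + '-' + Math.random();
            exchange.publish('', { tempId: tempId, body: msg.body });

            // Hand the message to the Rails backend for processing and storage.
            postToRails(msg, function (err, savedId) {
              if (!err) {
                // Backend stored it: push the real database id.
                exchange.publish('', { tempId: tempId, id: savedId });
              } else {
                // Backend rejected it: tell clients to kill the temporary message.
                exchange.publish('', { tempId: tempId, kill: true });
              }
            });
          });
        });
      });
    });

    // Hypothetical helper: the real code POSTed the message to a Rails endpoint.
    function postToRails(msg, cb) { cb(null, 42); }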
That performance looked suspiciously low to me, so I went further. Reading node-amqp's source showed that it is deeply flawed (e.g. it allocates a ~130KB buffer for each message sent). With some tweaking I managed to increase the delivery rate to almost 15 thousand messages/second (a sketch of the allocation pattern follows the list below).

The strange part of the outcome is that processing both 4k and 14k messages/second created an almost identical load on RabbitMQ.

Lessons learnt:
1. RabbitMQ is very robust and easy to work with.
2. Do not measure RabbitMQ's performance by load average or CPU consumption. If you feel stuck, try optimizing something.
3. V8's memory allocation/GC sucks. My server was constantly crashing due to std::bad_alloc being thrown on the next packet buffer allocation.
4. node-amqp sucks even more. I am going to work on it sometime soon.
5. Node.js on its own is a toy, not a tool. Do not build mission-critical systems meant to handle long-lived connections with it.
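To illustrate points 3 and 4 and the tweaking mentioned above: the core problem is allocating a large scratch buffer on every publish instead of reusing one. This is only a sketch of the pattern, not node-amqp's actual internals:

    // Problematic pattern: every publish allocates ~128KB that V8/GC later
    // has to reclaim -- this is the kind of churn behind point 3 above.
    function serializeNaive(frame) {
      var buf = new Buffer(128 * 1024);              // fresh allocation per message
      var len = buf.write(JSON.stringify(frame), 0);
      return buf.slice(0, len);                      // slice still pins the full 128KB
    }

    // One way to avoid it: allocate the scratch buffer once per connection
    // and copy out only the bytes that were actually written.
    var scratch = new Buffer(128 * 1024);            // allocated once
    function serializeReused(frame) {
      var len = scratch.write(JSON.stringify(frame), 0);
      var out = new Buffer(len);
      scratch.copy(out, 0, 0, len);
      return out;
    }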