> Each inbound WebSocket connection runs your program in a dedicated process. Connections are isolated by process.

That sounds bad; it *is* like “CGI, twenty years later”, as they say. In 2000 at KnowNow, we were able to support over ten thousand concurrent Comet connections using a hacked-up version of thttpd, on a 1GHz CPU with 1GiB of RAM. I’ll be surprised if you can support ten thousand Comet connections using WebSockets and websocketd even on a modern machine, say a quad-core 3GHz CPU with 32GiB of RAM.

Why would you want ten thousand concurrent connections? Well, normal non-Comet HTTP is amazingly lightweight on the server side, thanks to REST. To take an extreme example, this HN discussion page takes 5 requests to load, which takes about a second, but much of that is network latency; call it ½ s of actual server-side time. But the page contains about 7000 words, which take about 2048 seconds to read (a bit over 200 words per minute). Since each reader costs the server ½ s per 2048 s of reading, a single server process or thread can handle about 4096 concurrent HN readers, and a relatively normal machine can handle hundreds of thousands of concurrent users without breaking a sweat.

On the other hand, Linux *has* gotten a *lot* better since 2000 at managing large numbers of runnable processes and at things like fork and exit. httpdito (http://canonical.org/~kragen/sw/dev3/server.s) can handle tens of thousands of hits on a single machine nowadays, even though each hit forks a new child process (which then exits). http://canonical.org/~kragen/sw/dev3/httpdito-readme has more performance notes.

On the gripping hand, httpdito’s virtual memory size is at most 16KiB, and the cost of fork and exit scales with the number of pages a process has mapped, so Linux may be able to handle httpdito processes better than regular processes.
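To make the process-per-connection model concrete: websocketd’s interface to your program is just stdin and stdout, one line per WebSocket message in each direction. A minimal echo handler in C (my sketch of that interface, not code from websocketd itself) looks like this:

    /* Echo handler: websocketd runs one copy of this per connection,
       delivering each incoming message as a line on stdin and sending
       each line we print to stdout as an outgoing message. */
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char line[4096];
        setvbuf(stdout, NULL, _IOLBF, 0);     /* line-buffer so replies flush promptly */
        while (fgets(line, sizeof line, stdin)) {
            line[strcspn(line, "\n")] = '\0'; /* strip the trailing newline */
            printf("echo: %s\n", line);
        }
        return 0;
    }

Run it with something like “websocketd --port=8080 ./echo” and every browser that connects gets its own copy of this process, which is exactly where the scaling question above comes from.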
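As for the fork-per-hit model: httpdito itself is i386 assembly (server.s above), but the shape of its main loop is the classic fork-per-connection accept loop. A C sketch of that shape (mine, not httpdito’s code):

    /* Fork-per-connection accept loop: the parent only accepts;
       each hit is handled by a short-lived child that exits. */
    #include <netinet/in.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int s = socket(AF_INET, SOCK_STREAM, 0), yes = 1;
        struct sockaddr_in a = {0};
        a.sin_family = AF_INET;
        a.sin_port = htons(8080);
        a.sin_addr.s_addr = htonl(INADDR_ANY);
        setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof yes);
        signal(SIGCHLD, SIG_IGN);          /* let the kernel reap exiting children */
        if (bind(s, (struct sockaddr *)&a, sizeof a) < 0 || listen(s, 128) < 0) {
            perror("bind/listen");
            return 1;
        }
        for (;;) {
            int c = accept(s, 0, 0);
            if (c < 0) continue;
            if (fork() == 0) {             /* child: serve one hit, then exit */
                const char r[] =
                    "HTTP/1.0 200 OK\r\nContent-Type: text/plain\r\n\r\nhello\n";
                write(c, r, sizeof r - 1);
                _exit(0);
            }
            close(c);                      /* parent: back to accepting */
        }
    }

The reason the 16KiB figure matters is that fork and exit have almost nothing to copy or tear down for such a tiny address space; this C version, linked against libc, maps far more than httpdito does.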