Some random thoughts as somebody who codes a lot of Node.js at work and a lot of Go in my free time (and sometimes at work):<p>Did you try using pprof and the other tooling Go provides to better understand your performance limitations? Tooling is a lot better in the Go ecosystem for understanding CPU and memory consumption, so if/when you run into issues with Node you're going to be in a world of pain (this is basically a large portion of my job in a large Node.js code base at the $day_job). You'll basically have to resort to using lldb and heapdumps in the Node world. I'm surprised the number of concurrent clients you got with Go was so small. I know lots of people using Go and Gorilla websockets who exceed 1.5k clients with similar memory constraints. To be perfectly honest, it sounds like you're doing something wrong.<p>As of Go 1.4, the default stack size per goroutine is 2KB, not 4KB.<p>If you add in TypeScript, you'll have a better type system in the Node ecosystem than Go provides. That's a huge point for using Node.js, especially if there are multiple contributors and the code base lasts for many years.
It's interesting he's referencing the Disruptor pattern. I wrote the Go port of the Disruptor implementation that he links to (<a href="https://github.com/smartystreets/go-disruptor" rel="nofollow">https://github.com/smartystreets/go-disruptor</a>) a while back and it performed beautifully. That said, channels weren't slow either. Our findings showed that we could easily push 10-30 million messages per second through a channel, so I'm struggling to understand what he defines as slow. That said, with a few tweaks to the Go memory model I think I could take the Disruptor project to completion. Without those tweaks, I have to do memory fences in assembly.
Ryan Dahl, the creator of Node.js:<p>"That said, I think Node is not the best system to build a massive server web. I would use Go for that. And honestly, that’s the reason why I left Node. It was the realization that: oh, actually, this is not the best server-side system ever."<p>Full interview: <a href="https://www.mappingthejourney.com/single-post/2017/08/31/episode-8-interview-with-ryan-dahl-creator-of-nodejs/" rel="nofollow">https://www.mappingthejourney.com/single-post/2017/08/31/epi...</a>
OP is complaining about goroutine stack size at 4KB per connection, yet his test shows that Node 8 takes up to 150MB of memory with 5k users. 4KB*5k = 20MB for Go stacks, so I don't understand how Node.js could take less memory than Go, and without real numbers/tests I'm pretty sure he's doing something wrong somewhere.<p>From my experience on some large prod deployments, a Node.js app takes much more memory and CPU than a Go app doing similar work.
One thing to note is that they were using boltdb, which is an in-process K/V store designed for high read loads, and doing a lot of writes to it. boltdb also tends to use a lot of memory, as it's using a mmap'd file. The switch also moved them to sqlite, which I would say is a much better fit for what they are doing, but it means a lot of this is an apples-and-oranges comparison.
If instead of ditching Go he had posted some kind of help request to the 'gonuts' group / mailing list, I'm 100% certain that several people would've helped him with code reviews and feedback. I've seen this happen in the gonuts group countless times, including contributions/assistance from core Go team members who hang out there :)<p>As noted by other commenters below, the code seems to have some issues. And if it still didn't perform well after addressing those, somebody in gonuts would've taught him how to profile it, and then expert eyes could have looked over the profiler output and provided further feedback.
So, I am not a big fan of Javascript. I do not despise it or anything, I just never got to like it, I guess.<p>I did really love POE, though, so when I heard of Node.js, I thought I would probably like it very much.<p>I am not sure what happened. I think it was the tutorials always being out of date. Node.js seems to be such a fast-moving target. I do not mind asynchronous, callback-driven code. But when a tutorial that was written three months ago fails to run because some library had a breaking API change in between, that tends to drive me away.<p>Think of Go what you want, but its policy towards backward compatibility is a big plus.
I feel his pain about the lack of decent WebSockets support in Rust. There are a few WebSocket implementations but all of them are meant to run on a separate port from the web server. As in, they want you to run your web server on port 443 and the websocket on... something else. Which makes zero sense (browsers will deny access to the second port because of security features related to SSL certificates).<p>Also, unless you go low level (lower than frameworks like Tokio) you can't easily access file descriptors to watch them (e.g. via epoll) for data waiting to be read. That makes it difficult to use WebSockets for their intended purpose: real-time stuff.<p>Rust needs a web framework that has built-in support for WebSockets (running on the same port as the main web server) and also provides low-level access to things like epoll. Something like a very thin abstraction on top of mio that still gives you direct access to the mio TcpListener.<p>In my attempts to get Tokio reading a raw file descriptor I just couldn't get it working. I opened a bug and was told that raw fd support wasn't really supported (not well-tested, because it only works on Unix and cross-platform stuff is a higher priority). Very frustrating.<p>I wish the Tokio devs didn't make the underlying mio TcpListener private in their structs.
I've dabbled a lot with Go. I've found it _very_ effective for a wide variety of problems I don't really have most of the time.<p>If I wanted to implement Raft I would probably pick Go. If I want a simple REST/GraphQL server then Node.js is so much easier. `async/await` is nicer for me than goroutines and I find my code easier to reason about.<p>Full disclosure: I'm a Node.js core team member and a Go fan. Part of my reasoning might be how much nicer Node.js has gotten these last couple of years.
I've been working on a similar project lately which also uses the gorilla/websocket library. I just tested connecting 1500 connections in parallel like was done in this link for Raspchat, and my application only uses 75 MB along with all other overhead within it. I'm not sure how this would cause a Raspberry Pi with 512MB memory to thrash and come to a crawl unless Raspchat has a ton of other overhead outside of connection management.
I'm working on the exact opposite migration at the moment :)
(Most of our stack is Go, but we use the excellent Faye library written in Node)
The Node code is really well done. <a href="https://faye.jcoglan.com/" rel="nofollow">https://faye.jcoglan.com/</a>
Nothing wrong with the Node codebase. In our case we just had to add a lot of business logic. I could have done that in Node (we did for a long time), but I decided that with the latest set of changes we'd bring this component in line with the rest of our infrastructure.<p>It's hard to know without the code, but the author seems to be doing a few things wrong:<p>1. You only need a few channels, not N. Maybe 4-5 is enough.
2. In terms of goroutines you only need as many as are actively communicating with your server. So creating a new connection creates a goroutine, sending a message to a channel creates a goroutine etc.
3. You need something like Redis if you want to support multiple nodes<p>For inspiration check out this awesome project:
<a href="https://github.com/faye/faye-redis-node" rel="nofollow">https://github.com/faye/faye-redis-node</a><p>This will perform well in Node and even better in Go.
Besides that — as many here have pointed out, this sounds like a problem hiding somewhere in the Go code ruining the performance — it is certainly true that an event-handler based approach is incredibly efficient at managing a high number of simple requests with limited resources. If every request can be handled in a single event, it only has advantages: it does not require many resources and you don't have to deal with any synchronisation issues.<p>In many typical web applications you have little if any interaction between the connections, but rather complex logic running on each request. There the event-based approach, which must never block, becomes more complex to manage, and you want to use all CPUs in the system. There a goroutine-based approach should shine much more strongly, as goroutines may block and you don't have to spread your program logic across callbacks.
As someone who has no Go, Node.js, or RPi experience, the results seem surprising. Many have already commented that the author must have been doing something wrong; the code is there for everyone to see, so could some wiser gopher take a look and tell us what's actually going on here?
This is a very interesting direction to take. I've built a lot of my personal stuff on JS, and TBH, the one thing I really wish I had right now was a statically typed codebase.<p>I spend a lot of time thinking about why I'm creating a certain data model, whether I might need to change something in future, etc. About 60% of my productive time is spent thinking about how and why, so I hardly refactor. However, when the need arises, I wish I had something like Kotlin.<p>For the past few months I've been writing new JS code in TS, adding types here and there, I haven't tried out Kotlin on JS, but I'm hoping to go there.<p>I'm learning Go, but for other reasons. I find JS to be performant, my oldest active codebase has been around since the v0.8 days.
I don't use Go but I find the reasoning fueled by a confirmation bias to pick JS.<p>Goroutine stacks are now 2KB. If you use 2 goroutines per connection that's 4KB. If you have 10k connections that's roughly 40MB total, which is very reasonable.
Why two (or three) go routines per connection? Why not one net socket (for reads) and two channels (one for writes, other for pub/sub) and select() between them? It seems like the OP is trying too hard to avoid event loops.
> Since go does not have generics or unions my only option right now is to decode message in a base message struct with just the @ JSON field and then based on that try to decode message in a full payload struct.<p>If he is in control of his protocol, why did he not shape it to suit his parser library? Instead of this:<p><pre><code> message1 = { "@": "foo", "foo1": 1, "foo2": 2 }
message2 = { "@": "bar", "bar1": 1, "bar2": 2 }
</code></pre>
Do this:<p><pre><code> message1 = { "foo": { "foo1": 1, "foo2": 2 } }
message2 = { "bar": { "bar1": 1, "bar2": 2 } }
</code></pre>
Then you can read both types of messages into a single Go type,<p><pre><code> type Message struct {
Foo *FooMessage
Bar *BarMessage
}
</code></pre>
After parsing, that element which is not nil tells you which type of message was sent.
I know very little about both Node and Go (currently learning the latter but haven't done anything really interesting so far :) - but really, it's hard to believe that Elixir/Phoenix would be disregarded so quickly if the crux of the problem is having good pub/sub support.
I'm happy about this article, not because of what it says, but because of the popularity it got. Node.js is not as bad as people think and I'm really excited for when it regains status in the "pros" community.
Hi!<p>I'm implementing channels/coroutines in clojure[0] and js[1].<p>These are alpha quality right now, but I already have channels with backpressure and async go blocks. I wrote the js implementation to show that these could be ported to any language.<p>The main thought behind this is:
CSP - S = Communicating Processes = Green Threads<p>[0] <a href="https://github.com/divs1210/functional-core-async" rel="nofollow">https://github.com/divs1210/functional-core-async</a>
[1] <a href="https://github.com/divs1210/coroutines.js" rel="nofollow">https://github.com/divs1210/coroutines.js</a>
Open source software often makes use of the wisdom of the crowds to move forward, and when those crowds are on average less prepared, the results are comparably bad.<p>See the top answer for this question on StackOverflow: <a href="https://stackoverflow.com/questions/5062614/how-to-decide-when-to-use-node-js" rel="nofollow">https://stackoverflow.com/questions/5062614/how-to-decide-wh...</a> . The top answer, with 1360 upvotes, is wrong and has little to do with the question. This is a recurring thing in the Node world.<p>Go ask this same thing on an Erlang, Haskell, Rust, etc. forum and I am sure the right answer will come up quickly.
We had a pretty good number of simultaneous connections on a 512MB instance. Unfortunately there are not enough details on the methodology you used, so I can't compare the number of clients it can support. I'd appreciate it if you can do it yourself.<p><a href="https://github.com/ro31337/hacktunnel" rel="nofollow">https://github.com/ro31337/hacktunnel</a>
Regarding Rust not having a mature websockets lib, this library seems to be a decent websockets implementation at a glance and passes all of the autobahn tests:<p><a href="https://github.com/housleyjk/ws-rs/" rel="nofollow">https://github.com/housleyjk/ws-rs/</a><p>In any case, that is where I'd begin :)
I think you are right that Go is not the best language for your project; for single-threaded work that's just moving blobs of data around, Node is a good solution. I'm not sure why anyone would have a problem understanding that. (And I'm a guy who hates Node but loves Go.)
TJ Holowaychuk, the author of express.js, had the opposite opinion.<p><a href="https://medium.com/@tjholowaychuk/farewell-node-js-4ba9e7f3e52b" rel="nofollow">https://medium.com/@tjholowaychuk/farewell-node-js-4ba9e7f3e...</a>
Would Matrix be a good fit for this use case? There are a bunch of different server implementations and the (web based) protocol is relatively simple and very well documented.
Using Go's websocket library would also be viable.<p>On the receiving end, Go can use http handlers for websockets. So when a message from any websocket is received, the handler will be spawned to process it (just as for any http request), preferably dispatching it to a big central channel.<p>Keeping separate channels for each websocket seems like overkill to me. Go has maps. Create a map with all the websocket connections and maybe smaller maps for each chat room, then set n workers to listen to the central channel and dispatch messages directly to each room member.
The reverse @tjholowaychuk ;)<p>(If you're a 10x engineer, check his new tool for 1e10x engineers: <a href="https://news.ycombinator.com/item?id=15731936" rel="nofollow">https://news.ycombinator.com/item?id=15731936</a>)