I played around with a Go server to do some simple scaling numbers - looking at possibly using Go to implement a large-number-of-idle-connections notification server.<p>I found the (good) result that I could spawn a new goroutine for each incoming connection with minimal (~4k) overhead. This is pretty much what you'd expect, since a goroutine just needs a page for its stack if it's doing no real work. I had something like 4 VMs each making ~30k conns (from one process) to the central Go server, for something like 120k conns in total.<p>I found one worrying oddity, however. Resource usage would spike up on the server when I shut down my client connections (e.g. ctrl-C of a client proc with ~30k conns).<p>Reasoning about things a bit, I <i>think</i> this is due to the Go runtime allocating an OS thread for each goroutine as it goes through the blocking close() syscall. I think it has to do this to maintain concurrency. So I end up with hundreds of OS threads (each only lives long enough to close(), but I'm doing a <i>lot</i> of them at the same time).<p>Can anyone comment:<p>- is this guess at the problem likely to be correct?<p>- is this "thundering herd" a problem in practice?<p>- are there ways to avoid it? (Other than not using a goroutine per connection, which I think is the only idiomatic way to do it?)<p>My situation was artificial, but I could well imagine a case where losing, say, a reverse proxy causes a large number of connections to close() at once, and it would be a shame if that overwhelmed the server.
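For reference, here's roughly the shape of the server I was testing - a minimal sketch rather than the exact code, with the port and the do-nothing read loop as stand-ins:<p><pre><code>package main

import (
    "log"
    "net"
)

func main() {
    ln, err := net.Listen("tcp", ":9000")
    if err != nil {
        log.Fatal(err)
    }
    for {
        conn, err := ln.Accept()
        if err != nil {
            continue
        }
        // One goroutine per connection: cheap while idle (reads park on
        // the netpoller without pinning an OS thread). conn.Close() is a
        // blocking syscall, though, so - if my guess is right - a mass
        // disconnect forces the runtime to hand out extra OS threads to
        // keep the remaining goroutines running.
        go func(c net.Conn) {
            defer c.Close()
            buf := make([]byte, 1)
            for {
                if _, err := c.Read(buf); err != nil {
                    return // peer went away (e.g. client ctrl-C)
                }
            }
        }(conn)
    }
}
</code></pre>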
<i>While JavaScript drags the scars of its hasty standardization around with it, Go was designed very thoughtfully from the beginning, and as a result I find that it’s a pleasure to write.</i><p>This is very true. Go is a pleasure to write. In fact, it's such a pleasure that when you hit something that wasn't really well designed, it's horrid.
<i>> The biggest promise that Node makes is the ability to handle many many concurrent requests. How it does so relies entirely on a community contract: every action must be non-blocking. All code must use callbacks for any I/O handling, and one stinker can make the whole thing fall apart. Try as I might, I just can’t view a best practice as a feature.</i><p>Nonblocking I/O isn't just a "best practice" in the sense that consistent indentation is a "best practice"; it's a core tenet of the Node ecosystem. Sure, you could write a Haskell library by putting <i>everything</i> in mutable-state monad blocks, porting over your procedural code line for line. It's allowed by the language, just like blocking is allowed by Node. But the whole point of Haskell is to optimize the function-composition use case.<p>The Node community has the benefit of designing all its libraries from scratch with this tenet in mind, so in practice you rarely (if ever) need to hunt for "stinkers" unless they're documented to be blocking. And unless they're using badly-written blocking native code, you can just grep for `Sync` to find any blocking calls.
Node: Everyone knows JavaScript, there's a massive community, there are tons of libraries, and you get very good performance<p>Go: No one knows this language, there's a small-but-growing community, there are enough libraries to get a lot done, and you get even better performance<p>Java: They are paying me (money!) to write in this language
I've been using Go a lot lately. It's difficult to overstate just how much simpler it makes writing highly-concurrent server-type programs. Entire classes of bugs, issues, and puzzles just vanish.
<i>> There’s no arguing about whether to use semicolons, or putting your commas at the front of the line — the language knows what it wants. It’s a built-in hipster suppression mechanism.</i><p>Major point for saving man hours right there.
The fact that Node.js is being used in this equation says a lot about how much impact and penetration it has achieved in a rather short while.<p>Personally I hope that Go does just as well, if not a lot better. I am a bit of a fan of both.
Clojure is another nice alternative for fast servers, and using a concurrent, immutable and functional language is a huge win. http-kit is a good example of such server: <a href="http://http-kit.org/" rel="nofollow">http://http-kit.org/</a>
Here's a Haskell comparison (hint: it does very well).<p><a href="https://gist.github.com/jamwt/5017172" rel="nofollow">https://gist.github.com/jamwt/5017172</a><p>Haskell was GHC 7.6.1 with `ghc --make -O2`.<p>Go is go1.0.2 with "go build".
I have never understood the focus on speed as a selling point for Node. It may well be very fast, but it seems to me that the primary selling points would be the ability to share code between client and server and that you can start coding server side without learning a new language if all you know is JavaScript.
I was curious, so I actually ran both of the servers from the article on my little MacBook Air. The results are below.<p>First, Go:<p><pre><code> $ ab -c 100 -n 10000 http://localhost:8000/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software:
Server Hostname: localhost
Server Port: 8000
Document Path: /
Document Length: 1048576 bytes
Concurrency Level: 100
Time taken for tests: 10.085 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 10489017384 bytes
HTML transferred: 10487857152 bytes
Requests per second: 991.62 [#/sec] (mean)
Time per request: 100.846 [ms] (mean)
Time per request: 1.008 [ms] (mean, across all concurrent requests)
Transfer rate: 1015729.90 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1    2   0.8      2       6
Processing:    21   99   5.6     98     137
Waiting:        1    3   2.7      2      41
Total:         25  101   5.6    101     139

Percentage of the requests served within a certain time (ms)
  50%    101
  66%    102
  75%    103
  80%    103
  90%    105
  95%    106
  98%    108
  99%    112
 100%    139 (longest request)
</code></pre>
Second, Node.js:<p><pre><code> $ ab -c 100 -n 10000 http://localhost:8000/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software:
Server Hostname: localhost
Server Port: 8000
Document Path: /
Document Length: 1048576 bytes
Concurrency Level: 100
Time taken for tests: 15.765 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 10487558651 bytes
HTML transferred: 10486808576 bytes
Requests per second: 634.31 [#/sec] (mean)
Time per request: 157.653 [ms] (mean)
Time per request: 1.577 [ms] (mean, across all concurrent requests)
Transfer rate: 649639.92 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1   1.7      1      11
Processing:     2  156  34.7    159     272
Waiting:        1   47  29.7     42     136
Total:          2  157  34.7    161     273

Percentage of the requests served within a certain time (ms)
  50%    161
  66%    174
  75%    182
  80%    187
  90%    198
  95%    209
  98%    221
  99%    227
 100%    273 (longest request)
</code></pre>
Not only does Go serve the traffic more quickly, it also shows a much lower standard deviation, with far less spread between its fastest and slowest requests. Impressive.
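For context, both servers are just pushing a fixed 1 MB body (note the Document Length of 1048576 bytes above). The Go side of that is nearly a one-liner - a sketch consistent with the numbers, not necessarily the article's exact code:<p><pre><code>package main

import "net/http"

// A fixed 1 MB payload, matching the benchmark's
// Document Length of 1048576 bytes.
var payload = make([]byte, 1<<20)

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        w.Write(payload)
    })
    http.ListenAndServe(":8000", nil)
}
</code></pre>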
I run node/express for most of my web servers and each takes up about 10-15 MB of RAM. They're very basic, no fluff. Does anyone know what a comparable memory footprint in Go would be?
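One way to get hard numbers rather than guesses is to ask the runtime directly - a quick sketch that exposes the Go process's own view of its memory (endpoint path and port are arbitrary):<p><pre><code>package main

import (
    "fmt"
    "net/http"
    "runtime"
)

func main() {
    http.HandleFunc("/stats", func(w http.ResponseWriter, r *http.Request) {
        var m runtime.MemStats
        runtime.ReadMemStats(&m)
        // Alloc is live heap; Sys is roughly what the runtime has
        // taken from the OS, which is closer to the resident size
        // you'd compare against a node process.
        fmt.Fprintf(w, "alloc=%dKB sys=%dKB\n", m.Alloc/1024, m.Sys/1024)
    })
    http.ListenAndServe(":8080", nil)
}
</code></pre>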
I find this post paired with this thread confusing. Yes, Go is tempting and I'd like to try it, since a lot of people get quickly into flow with Go, the "package manager is so great" and "everything is just a breath of fresh air".<p>But what I don't like: the negativity against Node and the omission of some facts. In the replies to the original post a guy tested Node two (!) times, and once it was significantly faster (v0.6) and once it was the same speed (v0.8). So why does mjijackson get such different results at the top of this thread?? And maybe we should test it on real servers and not on a MBA. Moreover, what we have here is a micro-benchmark that possibly doesn't reflect reality well. Don't get me wrong, I appreciate any benchmarking between languages, but then please do it right and make no propaganda out of it. Further, Go's package manager seems to be nice, but it does NOT support versioning. How do you want to use this in a serious production environment? Maybe versioning will come (but then tell me how, without losing its flexibility) or not, but this is something serious and definitely not an alternative to any server environment except for some mini services.<p>EDIT: downvoting is silly, propaganda, and won't help the Go community gain credibility; better do some further benchmarks. Otherwise this post/thread is full of distinct misinformation and should be closed
One advantage of Node that wasn't mentioned is the ability to share server-side and client-side code. Avoiding discrepancies between the same form validation written in two different languages can often be more important than performance gains in server applications.
>There are also some officially maintained repositories outside of the stdlib that deal with newer protocols like websockets and SPDY.<p>Do any HNers use Go with websockets? What package do you use?
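For what it's worth, the officially maintained websocket package mentioned above looks like this in use - a minimal echo-server sketch, assuming its Handler/Conn API (the import path has moved around over time, so adjust as needed):<p><pre><code>package main

import (
    "io"
    "net/http"

    "golang.org/x/net/websocket"
)

// echoHandler copies everything the client sends straight back.
func echoHandler(ws *websocket.Conn) {
    io.Copy(ws, ws)
}

func main() {
    // websocket.Handler adapts a func(*websocket.Conn) into an
    // http.Handler that performs the opening handshake.
    http.Handle("/echo", websocket.Handler(echoHandler))
    http.ListenAndServe(":8080", nil)
}
</code></pre>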