How big is <i>defaultSendBufSize</i>?<p>If it's something like 1 Gig, then it's the OS that should be commended for the miracle of throughput, not Go. Even VB would be able to pull numbers like these with aggressive buffering.<p>A more sensible metric would be to measure the throughput <i>and</i> the longest time in transit. If you get Go to deliver 2 mil/sec with sub-ms delivery time, then we'll have something to talk about.
<p><pre><code> $ time seq 5000000 >/dev/null
1.10s user 0.00s system 99% cpu 1.107 total
</code></pre>
There, my Mac Mini can push ~5M msgs/second!!1one<p>Do I get a pony now?<p>Seriously, what is this doing on HN and what on earth are people discussing?<p>If you want to brag about benchmarks then how about providing at least a remote clue about what you are measuring...
Not bad at all, that's approximately the message-passing overhead I measured in C++ on a similar CPU a while back.<p>I think the main utility for such a benchmark though is to establish a lower limit on theoretical per-message overhead. Any practical system is likely to want to do something interesting with the content of the messages.<p>But this lets us say "expend an average of at least 5 us of useful computation on each message in order to keep the overall cost of message passing below 10%".
zeromq/czmq equivalent: <a href="https://gist.github.com/4229625" rel="nofollow">https://gist.github.com/4229625</a><p>Does 3M+ messages a second over tcp/loopback in my test.<p>(The fact that this Go code is competitive is pretty sweet.)
Slowly, Go is getting into more and more places. This is yet another nice replacement. It is probably going to be way faster than the Ruby implementation in the long run.
Hmm, if I understand correctly the messaging system being tested is <a href="https://github.com/derekcollison/nats" rel="nofollow">https://github.com/derekcollison/nats</a> and it is not written in Go but in Ruby (+EventMachine). Or is there a Go version of NATS?<p>Also, this is only testing the time it takes the Go client to write the messages to the socket, not the time the server takes to process the messages. So the benchmark would be the same with a noop server that reads and discards all incoming traffic. Am I wrong?
I'm curious to see if this is running with GOMAXPROCS above 1. I've seen the scheduler start to drag down reqs/sec with more than one thread in lightweight networking services like this.
The language isn't the important thing: <a href="http://martinfowler.com/articles/lmax.html" rel="nofollow">http://martinfowler.com/articles/lmax.html</a>