I've just read this article by Randall Degges on how ipify.org scaled to 30 billion API calls a month on a few Heroku dynos after the app was re-written in Go.<p>Have you re-written any of your applications in Go and experienced significantly higher performance?
We wrote our bidder (in-app advertising) in Go. It is globally distributed (close to the exchanges) and handles 1.5-2M requests/s (OpenRTB, ~50-90k/s per instance) with a p99 of 10-20ms (excluding network latency). Really happy with Go, especially the GC improvements the Go team has made over the last few releases. For a previous, similar project we used Ruby, which was quite a bit slower.
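For readers unfamiliar with what a bidder looks like at the HTTP layer, here is a minimal, hypothetical sketch of an OpenRTB-style endpoint using only Go's standard library. The struct fields are a tiny subset of the OpenRTB spec, and the price, port, and timeouts are placeholders rather than anything from the parent comment's system.

```go
package main

import (
	"encoding/json"
	"net/http"
	"time"
)

// Heavily simplified OpenRTB shapes; real bid requests carry many more fields.
type Imp struct {
	ID string `json:"id"`
}

type BidRequest struct {
	ID  string `json:"id"`
	Imp []Imp  `json:"imp"`
}

type Bid struct {
	ImpID string  `json:"impid"`
	Price float64 `json:"price"`
}

type SeatBid struct {
	Bid []Bid `json:"bid"`
}

type BidResponse struct {
	ID      string    `json:"id"`
	SeatBid []SeatBid `json:"seatbid"`
}

func bidHandler(w http.ResponseWriter, r *http.Request) {
	var req BidRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil || len(req.Imp) == 0 {
		w.WriteHeader(http.StatusNoContent) // 204 is the conventional "no bid"
		return
	}
	resp := BidResponse{
		ID:      req.ID,
		SeatBid: []SeatBid{{Bid: []Bid{{ImpID: req.Imp[0].ID, Price: 0.50}}}},
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(resp)
}

func main() {
	srv := &http.Server{
		Addr:         ":8080",
		Handler:      http.HandlerFunc(bidHandler),
		ReadTimeout:  100 * time.Millisecond, // exchanges enforce tight deadlines
		WriteTimeout: 100 * time.Millisecond,
	}
	srv.ListenAndServe()
}
```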
At Uber we built Jaeger (<a href="https://github.com/jaegertracing/jaeger" rel="nofollow">https://github.com/jaegertracing/jaeger</a>), which is doing something like 200K writes per minute into our Cassandra cluster.
A BitTorrent client, <a href="https://github.com/anacrolix/torrent" rel="nofollow">https://github.com/anacrolix/torrent</a>, and several projects using it. The original idea started in Python, which just didn't cope. Things are probably better now in Python with green-threading being a standard concept, but you can't easily get the throughput you need unless per-connection overhead is minimal, and that overhead is just too high in Python.
I once wrote a thing that went all over Gmail figuring out where all the pending mail was supposed to go and presented it as an interactive dashboard. It was easy to do in Go because baking in the HTML templates, static assets (like d3), and backend logic is pretty simple with Go's standard libraries and build system.<p>I wrote another thing in Go that determined the backend latency of an anti-abuse system within Google. That prober made about ten million requests per second. Again I chose Go (over C++) not for its performance but for the ease of giving that thing a fancy interactive status page.
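For anyone curious how "baking in" templates and static assets works in practice: below is a minimal sketch using the embed package added in Go 1.16 (older projects accomplished the same thing with code-generation tools; this is not a claim about what the original dashboard used). The paths, template name, and data are placeholders.

```go
package main

import (
	"embed"
	"html/template"
	"net/http"
)

// The go:embed directives compile templates/ and static/ into the binary,
// so one executable carries the dashboard HTML, the JS (e.g. d3), and the logic.
//
//go:embed templates/*.html
var templateFS embed.FS

//go:embed static
var staticFS embed.FS

var tmpl = template.Must(template.ParseFS(templateFS, "templates/*.html"))

func main() {
	// Serve /static/... straight from the embedded filesystem.
	http.Handle("/static/", http.FileServer(http.FS(staticFS)))

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Render the dashboard with some placeholder data.
		tmpl.ExecuteTemplate(w, "index.html", map[string]int{"PendingMail": 42})
	})

	http.ListenAndServe(":8080", nil)
}
```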
At SendGrid, much of our stack is Go, though it started off as Perl and Python. Our incoming clusters are geographically distributed to reduce latency, but a handful of nodes do just fine processing 40k rps. We could dramatically reduce cluster size, but we choose not to for availability reasons. These incoming requests generally create four to eight logging events that are processed and emitted for stats, tracking, and/or published to customer webhooks. Additionally, our MTA is now in Go, and each incoming request usually has some multiplier for the number of recipients.<p>We typically expect around a 20x improvement in throughput when we rewrite a service, though this depends on the nature of the service.<p>As much as reduced server costs and greater performance are awesome, one of my favorite parts is the increased maintainability of the services. Perl's AnyEvent and Python's Twisted (aptly named, btw) were much harder to reason about. Go's concurrency and simplicity make it a win for us.
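As a tiny illustration of why this kind of work is easier to reason about in Go than with callback-style AnyEvent/Twisted code, here is a hypothetical sketch of fanning request-generated log events out to a pool of background workers over a channel. It is not SendGrid's actual design; the Event type, worker count, and event kinds are made up.

```go
package main

import (
	"fmt"
	"sync"
)

// Event is a stand-in for a processed/delivered log event.
type Event struct {
	Kind string // e.g. "stats", "tracking", "webhook"
	Data string
}

func main() {
	events := make(chan Event, 1024) // buffered so request handlers rarely block
	var wg sync.WaitGroup

	// A small pool of workers drains the channel concurrently.
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for ev := range events {
				// In a real system this would publish to a stats pipeline,
				// tracking store, or customer webhook.
				fmt.Printf("worker %d handled %s event: %s\n", id, ev.Kind, ev.Data)
			}
		}(i)
	}

	// Each incoming request typically emits several events.
	for _, kind := range []string{"stats", "tracking", "webhook", "stats"} {
		events <- Event{Kind: kind, Data: "message-id-123"}
	}

	close(events)
	wg.Wait()
}
```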
A significant proportion of Segment's event-handling pipeline and processing code is written in Go. This includes our "Centrifuge" system for ensuring reliable event delivery to HTTP destinations, which we recently blogged about: <a href="https://segment.com/blog/introducing-centrifuge/" rel="nofollow">https://segment.com/blog/introducing-centrifuge/</a><p>With the exception of C (or perhaps Node.js for single-threaded programs), I can't imagine we would be running as efficiently on our AWS compute resources if we'd written our code in a different language.
I've written many distributed systems in Go for scalability reasons, and more recently I've been working on micro, an open-source toolkit to help others do the same: <a href="https://github.com/micro/micro" rel="nofollow">https://github.com/micro/micro</a>. Its core is go-micro, an RPC framework for building cloud-native applications: <a href="https://github.com/micro/go-micro" rel="nofollow">https://github.com/micro/go-micro</a>.<p>Building systems that scale is not an easy task. Go lends itself very well to it, but there's more required than just the language. The community's belief in libraries rather than frameworks actually hinders this progress for others. Hopefully more tools like my own will emerge that sway people towards the framework approach.
Hi, at UserEngage we rewrote a few modules from Python 3 to Go.<p>On the Python side we use Django, with Postgres/Citus as the database, plus RabbitMQ and Redis.<p>Our main cluster handles more than 25 million API requests daily.<p>For us, Go is >70 times faster than Django.
TiDB is a distributed HTAP database compatible with the MySQL protocol (<a href="https://github.com/pingcap/tidb" rel="nofollow">https://github.com/pingcap/tidb</a>).
Go-Jek and Grab are both very strong in Southeast Asian ride sharing, and both of their backends are (re)written in Go. Each gave a presentation on it at GopherCon 2018 in Singapore, and both are big sponsors of that event: <a href="https://2018.gophercon.sg/" rel="nofollow">https://2018.gophercon.sg/</a><p>[Edit: one of them had some impressive stats on reducing servers while demand increased]
We have built a small geo-redirection server using Golang and Redis that handles around 50M requests per day. We have optimized our stack to cut TTFB, and Golang makes this easier to achieve.
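As a rough sketch of what a Redis-backed geo redirect can look like in Go: the example below uses the go-redis client, a hypothetical "geo:" + country-code key scheme, and a country header set by an upstream proxy (Cloudflare's CF-IPCountry is one example). None of this is the poster's actual setup.

```go
package main

import (
	"net/http"

	"github.com/redis/go-redis/v9"
)

var rdb = redis.NewClient(&redis.Options{Addr: "localhost:6379"})

func redirectHandler(w http.ResponseWriter, r *http.Request) {
	// Assume an upstream proxy sets a country header for each visitor.
	country := r.Header.Get("CF-IPCountry")
	if country == "" {
		country = "default"
	}

	// Look up the destination URL for this country; fall back to a default key.
	dest, err := rdb.Get(r.Context(), "geo:"+country).Result()
	if err == redis.Nil {
		dest, err = rdb.Get(r.Context(), "geo:default").Result()
	}
	if err != nil || dest == "" {
		http.Error(w, "no destination configured", http.StatusBadGateway)
		return
	}

	// A plain 302 keeps the response tiny, which helps keep TTFB low.
	http.Redirect(w, r, dest, http.StatusFound)
}

func main() {
	http.HandleFunc("/", redirectHandler)
	http.ListenAndServe(":8080", nil)
}
```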
high performance != highly scalable. I was really hoping for some discussion of highly scalable systems, not just of optimizing database queries.
Average request time went from 8 seconds down to around 80ms. The main issue was that the original component made heavy use of the ORM; honestly, had I taken a step back, hand-crafted the queries, and used plain old PHP objects, I likely wouldn't have needed to rewrite in Go. Doing so was a really great exercise, however, and it leaves perhaps the most complex part of the application nicely separated from the CRUD side of things, so I still think it was a pretty good move.
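For contrast with the ORM-heavy original, here is a sketch of the hand-crafted-query style in Go's database/sql. The table, columns, DSN, and choice of MySQL driver are hypothetical stand-ins, not the commenter's schema.

```go
package main

import (
	"context"
	"database/sql"
	"fmt"
	"log"
	"time"

	_ "github.com/go-sql-driver/mysql" // driver registers itself as "mysql"
)

type OrderSummary struct {
	ID    int64
	Total float64
}

// recentOrders issues one explicit query instead of letting an ORM
// lazy-load dozens of statements per request.
func recentOrders(ctx context.Context, db *sql.DB, customerID int64) ([]OrderSummary, error) {
	rows, err := db.QueryContext(ctx,
		`SELECT id, total FROM orders WHERE customer_id = ? ORDER BY created_at DESC LIMIT 20`,
		customerID)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var out []OrderSummary
	for rows.Next() {
		var o OrderSummary
		if err := rows.Scan(&o.ID, &o.Total); err != nil {
			return nil, err
		}
		out = append(out, o)
	}
	return out, rows.Err()
}

func main() {
	db, err := sql.Open("mysql", "user:pass@tcp(localhost:3306)/shop?parseTime=true")
	if err != nil {
		log.Fatal(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	orders, err := recentOrders(ctx, db, 42)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(orders)
}
```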
Message queue specifically built to meet our needs, writing to an optimized XFS volume.<p>Saved us a ton over the pre-built queues we'd been using.
YTBmp3 <a href="https://www.ytbmp3.com" rel="nofollow">https://www.ytbmp3.com</a> is built completely with Go. Autoscaling of cloud instances based on load is done with a custom scaling solution, also written in Go. It handles very long-running requests for streaming transcoding and compression. Go's net/http is exposed directly to the internet with great results, which simplifies the infrastructure and makes graceful reloading (important for long-running requests) straightforward. Go allows so much infrastructure simplification that there is not even a single container involved. :)
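One standard-library pattern that helps with very long-running requests like these is http.Server's graceful shutdown, which stops accepting new connections but lets in-flight requests run to completion. A minimal sketch follows; the handler, port, and timeout are placeholders, not YTBmp3's code.

```go
package main

import (
	"context"
	"log"
	"net/http"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/convert", func(w http.ResponseWriter, r *http.Request) {
		// Placeholder for a long-running streaming transcode.
		time.Sleep(30 * time.Second)
		w.Write([]byte("done\n"))
	})

	srv := &http.Server{Addr: ":8080", Handler: mux}

	// On SIGTERM/SIGINT, stop accepting new connections but let
	// in-flight (possibly very long) requests finish.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
	defer stop()

	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatal(err)
		}
	}()

	<-ctx.Done()

	shutdownCtx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()
	if err := srv.Shutdown(shutdownCtx); err != nil {
		log.Printf("shutdown: %v", err)
	}
}
```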
> Have you re-written any of your applications in Go and experienced significantly higher performance?<p>Probably not what you're looking for, but I improved the runtime of my shell prompt by one order of magnitude by porting from Python to Go. <a href="https://blog.bethselamin.de/posts/latency-matters.html" rel="nofollow">https://blog.bethselamin.de/posts/latency-matters.html</a> and discussed at <a href="https://news.ycombinator.com/item?id=15059795" rel="nofollow">https://news.ycombinator.com/item?id=15059795</a>
The DNS server (<a href="https://github.com/abh/geodns" rel="nofollow">https://github.com/abh/geodns</a>) for the NTP Pool (<a href="https://www.ntppool.org/en/" rel="nofollow">https://www.ntppool.org/en/</a>) does close to a hundred billion queries a month across a bunch of tiny virtual machines around the world. The steady state load is about 30k qps, but with frequent brief spikes to many times that.
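geodns is built on the widely used miekg/dns package. As a toy illustration of that style of server (not geodns itself, which does GeoIP-aware answer selection), here is a minimal authoritative responder that returns a placeholder A record:

```go
package main

import (
	"log"

	"github.com/miekg/dns"
)

func handle(w dns.ResponseWriter, r *dns.Msg) {
	m := new(dns.Msg)
	m.SetReply(r)
	m.Authoritative = true

	// A real geo-aware server would pick answers based on the client's location;
	// here we always return the same placeholder address.
	if len(r.Question) > 0 && r.Question[0].Qtype == dns.TypeA {
		rr, err := dns.NewRR(r.Question[0].Name + " 300 IN A 192.0.2.1")
		if err == nil {
			m.Answer = append(m.Answer, rr)
		}
	}
	w.WriteMsg(m)
}

func main() {
	dns.HandleFunc(".", handle)
	srv := &dns.Server{Addr: ":8053", Net: "udp"}
	log.Fatal(srv.ListenAndServe())
}
```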
We wrote the <a href="https://trackcourier.io" rel="nofollow">https://trackcourier.io</a> frontend in Go. It's been really stable so far and is ridiculously fast.
I migrated an image upload service from Python, where it was using tens of servers, to Go, where it now runs on only 3 servers while handling 500% more traffic than the old version.
I rewrote a network TCP scanner in Go (about a year ago) and it performed much better. It scans a /12 for 50 common server ports in roughly 40 minutes. <a href="https://github.com/w8rbt/netscan" rel="nofollow">https://github.com/w8rbt/netscan</a>
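The core of a scanner like this is a bounded pool of goroutines making dial attempts with short timeouts. Below is a simplified sketch (not the linked project's code) that checks a handful of common ports on a single placeholder host:

```go
package main

import (
	"fmt"
	"net"
	"sync"
	"time"
)

func main() {
	host := "scanme.example.org" // placeholder target
	ports := []int{22, 25, 80, 110, 143, 443, 3306, 8080}

	sem := make(chan struct{}, 100) // cap the number of concurrent dials
	var wg sync.WaitGroup

	for _, p := range ports {
		wg.Add(1)
		sem <- struct{}{}
		go func(port int) {
			defer wg.Done()
			defer func() { <-sem }()

			addr := fmt.Sprintf("%s:%d", host, port)
			conn, err := net.DialTimeout("tcp", addr, 500*time.Millisecond)
			if err != nil {
				return // closed, filtered, or timed out
			}
			conn.Close()
			fmt.Println("open:", addr)
		}(p)
	}
	wg.Wait()
}
```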
We wrote a simple fcm/apns gateway in go. Not sure what its throughput is these days, but it must be hundreds of millions or billions of requests per day. (Life360)
When you wrote:<p>>how ipify.org scaled to 30 billion API calls<p>I was thinking "an hour" and thought "damn, that's impressive" (it would be 8.3 million per second); obviously per minute or per second would be even more impressive. (And someone is using that really heavily.) Instead the sentence ends with "per month", which works out to about 11,415 per second.<p>You don't need to "scale" for that :) you just need 1 good server.<p>Go is like a web-safe C. :)<p>By the way, here is the article the OP is talking about:<p><a href="https://blog.heroku.com/scaling-ipify-to-30-billion-and-beyond" rel="nofollow">https://blog.heroku.com/scaling-ipify-to-30-billion-and-beyo...</a>