
HTTP throughput regression from Go 1.7.5 to 1.8

152 points, by 01walid, over 8 years ago

12 comments

jerf, over 8 years ago

As I've mentioned before [1], as the number starts getting too large, "requests per second" isn't a useful way of measuring the performance of a webserver; you're really more interested in "seconds per request overhead". The former makes this sound horrible and leads to headlines that make it sound like the entire web stack has lost 20% of its performance, which is terrible. The latter shows that the "request overhead" has gone from ~100us per request to ~120us or so, which is a lot more informative and tends to lead to a better understanding of what the situation is.

This is not meant as an attack on or a defense of Go. The facts are what the facts are. The point here is to suggest that people use terminology that is more informative and easier to understand. There are people for whom an extra 20us per request is a sufficiently nasty issue that they will not upgrade. There are also a lot of people who are literally multiple orders of magnitude away from that even remotely mattering, because their requests tend to take 120ms anyhow. Using "seconds per request overhead" makes it easier both to understand the real performance impact with real times, and to see that we're just talking about the base overhead per request rather than the speed of the entire request.

It might also discourage some of our, ah, more junior developers from being too focused on this metric. Why would I want to use a webserver that can only do 100,000 requests per second when I can use this one over here that can do 1,000,000 requests per second? If you look at it from the point of view that we're speaking about the difference between 10 microseconds and 1 microsecond, it becomes easier to see that if my requests are going to take 10 *milliseconds* on average, this is not a relevant stat to worry about when choosing my webserver, and I should examine the other differences instead, which may be a great deal more relevant to my use cases.

Edit: Literally while I was typing this up I saw at least three comments already complaining about this regression. My question to you, my *honest* question to you (because some of you may well be able to answer "yes", especially with some of the tasks Go gets used for), is: are you *really* going to have a problem with this? Does the rest of your request *really* run in *microseconds*? It's actually pretty challenging in the web world to run in microseconds. It can be done, but a lot of the basic things you want to do, like "hit a database", generally end up involving milliseconds, i.e., "thousands of microseconds".

[1]: https://news.ycombinator.com/item?id=11187264
arussellsaw, over 8 years ago

Worth mentioning that this is only a noticeable performance regression in situations where the majority of the request is spent in HTTP processing, e.g. 'hello world' handlers. Here is an example of the performance improvements I've seen in a real-world application; admittedly it is heavily GC-bound, but the improvements are still considerable: https://twitter.com/arussellsaw/status/819904231759085571
Matthias247, over 8 years ago

If I understand the possible culprit commit (https://github.com/golang/go/commit/faf882d1d427e8c8a9a1be00d8ddcab81d1e848e) correctly, then real-world applications could still be faster on average than with the older versions. E.g. if a request handler starts a database request and forwards its cancellation token (context.Done) to the database call, both might be stopped immediately under the new logic, and the resources can be used for handling new requests. If in the old version the cancellation did not work properly, the database request might have needed to run to completion before anything else could be done.
bsaul, over 8 years ago

bradfitz: "That was one of the biggest architectural changes in the net/http.Server in quite some time. I never did any benchmarking (or optimizations) after that change."

Sorry, what? It's not like the stdlib's HTTP server exists only for 'hello world' code samples... You would imagine those benchmarks to be part of some CI process, along with the unit tests.
akerro, over 8 years ago

Why is it too late? He doesn't want to give any justification. Isn't the point of RCs and community-supported development to catch such cases before a stable release is published? Just make another RC.
tmaly, over 8 years ago

If you look at it, the change most attributed to the slowdown was committed in October 2016.

Why could the people making an issue about the 0.5us-per-request slowdown not have tested or run a benchmark sooner?
cameroncooper, over 8 years ago

Surprised that nobody has mentioned the true hero of this story: git bisect. An awesome tool, and perfect for pinpointing these sorts of regressions.
eternalban, over 8 years ago

The std. dev. & max numbers caught my eye:

                 avg.       std dev     max
    Latency      195.30us   470.12us    16.30ms   -- go tip
    Latency      192.49us   451.74us    15.14ms   -- go 1.8rc3
    Latency      210.16us   528.53us    14.78ms   -- go 1.7.5

That is a seriously fat distribution. Has anyone ever benched for percentiles?
sddfd, over 8 years ago

Conspiracy theory: they knew they'd take a 20-microsecond hit on every connection close, and (rightfully) did not care.

So basically this is a communication issue with a community that does not understand what to make of its own benchmarks.
siscia, over 8 years ago

As jerf mentioned, I don't believe this particular regression is going to be significant for almost all of the use cases (and the very few that are going to be touched by it are probably savvy enough to test their performance before deploying to production).

What I believe is more serious is that this wasn't caught during development. It could definitely be a worthwhile trade-off, but we should be aware of it...
OhSoHumble, over 8 years ago

"Too late for Go 1.8, but we can look into performance during Go 1.9."

That probably shouldn't be the response to a major performance regression in a release candidate.

Looks like I'm sticking with Go 1.7 for however long it takes before 1.9 is released.
reimertz, over 8 years ago

Why would it be too late? Isn't this the whole reason for release candidates? To find final major issues before releasing the next major version?

If not, could someone please educate me?