Practical Go benchmarks

148 points, by minaandrawos, about 7 years ago

11 comments

aodin, about 7 years ago

The majority of the performance difference between string concat and builder in your example is explained by memory allocation. Every loop of concat results in a new allocation, while the builder - which uses a []byte internally - only allocates when length equals capacity, and the newly allocated slice is approx. twice the capacity of the old one (see: https://golang.org/src/strings/builder.go?#L62).

Therefore, 500,000 rounds of concat is about 500,000 allocations, while 200,000,000 rounds of builder is ~27.5 allocations (= log2(200,000,000)).

I would suggest a different benchmark to approximate real-world usage:

    func BenchmarkConcatString(b *testing.B) {
        for n := 0; n < b.N; n++ {
            var str string
            str += "x"
            str += "y"
            str += "z"
        }
    }

    func BenchmarkConcatBuilder(b *testing.B) {
        for n := 0; n < b.N; n++ {
            var builder strings.Builder
            builder.WriteString("x")
            builder.WriteString("y")
            builder.WriteString("z")
            builder.String()
        }
    }

Which still shows a significant performance advantage for builder (-40% ns/op):

    BenchmarkConcatString-4     20000000    93.5 ns/op
    BenchmarkConcatBuilder-4    30000000    54.6 ns/op
tapirl, about 7 years ago

I would mention that gc (the official Go compiler) applies a special optimization to the string concatenation operator (+). If the number of strings being concatenated is known at compile time, using + to concatenate them is the most efficient approach.

    package a

    import "testing"
    import "strings"

    var strA, strB string
    var x, y, z = "x", "y", "z"

    func BenchmarkConcatString(b *testing.B) {
        for n := 0; n < b.N; n++ {
            strA = x + y + z
        }
    }

    func BenchmarkConcatBuilder(b *testing.B) {
        for n := 0; n < b.N; n++ {
            var builder strings.Builder
            builder.WriteString(x)
            builder.WriteString(y)
            builder.WriteString(z)
            strB = builder.String()
        }
    }

Result:

    goos: linux
    goarch: amd64
    BenchmarkConcatString-2     20000000     83.7 ns/op
    BenchmarkConcatBuilder-2    20000000    102 ns/op
kjksf, about 7 years ago

The string benchmarks are broken.

The way he uses b.N is wrong. b.N differs between loops, so he is, for example, timing 100 iterations of string '+' against 1000 iterations of builder.WriteString().

Also, the compiler can completely eliminate no-op functions (those without side effects), so in benchmarks it's a good idea to assign the value being calculated to e.g. a global variable.

The corrected code is: https://gist.github.com/kjk/6a7d7135ae1e5fa6cd1f0db23d2eaf4d

An example of benchmarking correctly:

    func BenchmarkConcatString(b *testing.B) {
        for n := 0; n < b.N; n++ {
            var str string
            for i := 0; i < 100; i++ {
                str += "x"
            }
            gStr = str
        }
    }

After the fixes it paints a significantly different picture:

    go test -bench=. -benchmem
    goos: darwin
    goarch: amd64
    BenchmarkConcatString-8      300000    5148 ns/op    5728 B/op    99 allocs/op
    BenchmarkConcatBuffer-8     1000000    1046 ns/op     368 B/op     3 allocs/op
    BenchmarkConcatBuilder-8    1000000    1177 ns/op     248 B/op     5 allocs/op
bpicolo, about 7 years ago

While I don't doubt that strings.Builder is quicker than += concat for many iterations, to make it a fair comparison you probably need to pull the string out at the end rather than just writing to the buffer. It's also not obvious, for example, what the difference is with just 2 strings to join, if I need to join two strings together 40 trillion times or whatnot.

Nice collection of microbenchmarks though. Interesting to see the magnitude differences from e.g. regexp compile.
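As a rough illustration of that point, here is a minimal sketch (not from the article's benchmarks; the names and strings are invented) that joins just two strings, materializes the builder's result, and assigns it to a package-level sink so the compiler cannot discard the work:

    package concat_test

    import (
        "strings"
        "testing"
    )

    // sink keeps each result alive so the compiler cannot optimize the loop body away.
    var sink string

    func BenchmarkJoinTwoPlus(b *testing.B) {
        x, y := "hello, ", "world"
        for n := 0; n < b.N; n++ {
            sink = x + y
        }
    }

    func BenchmarkJoinTwoBuilder(b *testing.B) {
        x, y := "hello, ", "world"
        for n := 0; n < b.N; n++ {
            var builder strings.Builder
            builder.WriteString(x)
            builder.WriteString(y)
            sink = builder.String() // pull the string out, as suggested above
        }
    }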
Vendan, about 7 years ago
Fun fact: the crypto rand "number" benchmark depends on the number you pass into it:

    BenchmarkCryptoRand27-8    5000000    388 ns/op
    BenchmarkCryptoRand28-8    3000000    356 ns/op
    BenchmarkCryptoRand29-8    5000000    335 ns/op
    BenchmarkCryptoRand30-8    5000000    327 ns/op
    BenchmarkCryptoRand31-8    5000000    331 ns/op
    BenchmarkCryptoRand32-8    5000000    322 ns/op
    BenchmarkCryptoRand33-8    3000000    480 ns/op
    BenchmarkCryptoRand34-8    3000000    474 ns/op

for benchmarks like

    func BenchmarkCryptoRand32(b *testing.B) {
        for n := 0; n < b.N; n++ {
            _, err := crand.Int(crand.Reader, big.NewInt(32))
            if err != nil {
                panic(err)
            }
        }
    }

This is because the crypto/rand library is very, very careful to give you unbiased random numbers.
friday99, about 7 years ago

The string benchmark has the issue that the amount of work done varies with each pass through the loop, since the string just keeps getting appended to. A proper benchmark, like the ones in the comments here, does the same amount of work on every pass.
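To make the problem concrete, here is a minimal sketch (my own example, not from the article) contrasting the two shapes: in the first benchmark the string lives across iterations, so the n-th pass copies roughly n bytes; in the second, every pass builds a fresh, fixed-size string.

    package concat_test

    import "testing"

    // out is a package-level sink so the compiler keeps the results.
    var out string

    // Flawed shape: str grows across iterations, so later passes do more work
    // and the reported ns/op depends on how large b.N happens to be.
    func BenchmarkGrowingString(b *testing.B) {
        var str string
        for n := 0; n < b.N; n++ {
            str += "x"
        }
        out = str
    }

    // Fixed shape: each pass does the same, bounded amount of work.
    func BenchmarkFixedWork(b *testing.B) {
        for n := 0; n < b.N; n++ {
            var str string
            for i := 0; i < 100; i++ {
                str += "x"
            }
            out = str
        }
    }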
jossctz, about 7 years ago

Note that you can also get the number of bytes processed per second by calling the SetBytes method. This is very useful for some benchmarks (hashing, base64, ...):

    func benchmarkHash(b *testing.B, h hash.Hash) {
        data := make([]byte, 1024)
        rand.Read(data)
        b.ResetTimer()
        b.SetBytes(int64(len(data)))
        for n := 0; n < b.N; n++ {
            h.Write(data)
            h.Sum(nil)
        }
    }
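As a usage example, here is a minimal sketch (my wrappers, assuming the benchmarkHash helper above lives in the same test package) of driving it with concrete hash functions; because SetBytes is set, `go test -bench` prints an extra MB/s column for these benchmarks:

    package hash_test

    import (
        "crypto/md5"
        "crypto/sha256"
        "testing"
    )

    // Thin wrappers that reuse the benchmarkHash helper from the comment above.
    func BenchmarkMD5(b *testing.B)    { benchmarkHash(b, md5.New()) }
    func BenchmarkSHA256(b *testing.B) { benchmarkHash(b, sha256.New()) }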
pbnjay, about 7 years ago

> The following benchmarks evaluate various functionality with the focus on real-world usage patterns.

I can't say I write much code that does one thing many times in a really tight loop. It would be a lot more interesting if the code combined multiple functions in the loop body, in a better attempt to simulate "real-world usage patterns."
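One hedged sketch of what that could look like (the mix of operations is invented for illustration, not taken from the article): a single benchmark whose loop body parses, hashes, and formats, closer in shape to a request handler than to a tight loop around one call.

    package mixed_test

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "strconv"
        "testing"
    )

    // result is a sink so the combined work is not optimized away.
    var result string

    // BenchmarkHandlerish mixes several cheap operations per iteration,
    // roughly imitating the shape of real handler code.
    func BenchmarkHandlerish(b *testing.B) {
        for n := 0; n < b.N; n++ {
            id, _ := strconv.Atoi("12345")
            sum := sha256.Sum256([]byte("user-" + strconv.Itoa(id)))
            result = fmt.Sprintf("%d:%s", id, hex.EncodeToString(sum[:8]))
        }
    }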
antoaravinth, about 7 years ago

I always wanted to ask this. I'm a full-stack developer with good knowledge of Java and JavaScript. I'm currently learning Go, especially for its concurrency idioms. It is good and easy to write concurrent code, but people always bring up actors as being very good compared with channels. I have never used actors before... What are your thoughts on this?
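Not an answer to the question, but for readers unfamiliar with the comparison, here is a minimal sketch (my own example) of how an actor-like pattern is commonly approximated in Go: one goroutine owns the state, and other goroutines interact with it only by sending messages on a channel.

    package main

    import "fmt"

    // msg is the message other goroutines send to the "actor".
    type msg struct {
        delta int
        reply chan int // the actor answers with the updated total
    }

    // counterActor owns total; no other goroutine touches it directly.
    func counterActor(inbox <-chan msg) {
        total := 0
        for m := range inbox {
            total += m.delta
            m.reply <- total
        }
    }

    func main() {
        inbox := make(chan msg)
        go counterActor(inbox)

        reply := make(chan int)
        inbox <- msg{delta: 5, reply: reply}
        fmt.Println(<-reply) // 5
        inbox <- msg{delta: 3, reply: reply}
        fmt.Println(<-reply) // 8
    }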
majewsky, about 7 years ago

Even though this is clearly a benchmarking game, I don't like that it does not explain that the things benchmarked against each other sometimes have drastically different use cases.

I can assure you that someone is going to use these numbers to argue that crypto.Rand needs to be replaced by math.Rand BECAUSE SPEED, or that MD5 should be preferred over SHA2/3.
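To make that caution concrete, here is a minimal sketch (my example, not from the thread) of choosing the generator by use case rather than by ns/op:

    package main

    import (
        crand "crypto/rand"
        "encoding/hex"
        "fmt"
        "math/rand"
    )

    func main() {
        // math/rand: fine for simulations, shuffling, or retry jitter -
        // anywhere predictability is not a security problem.
        jitterMs := rand.Intn(100)
        fmt.Println("retry jitter:", jitterMs, "ms")

        // crypto/rand: required for tokens, keys, and session IDs -
        // anything an attacker must not predict, whatever the ns/op cost.
        token := make([]byte, 16)
        if _, err := crand.Read(token); err != nil {
            panic(err)
        }
        fmt.Println("session token:", hex.EncodeToString(token))
    }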
Xeoncross, about 7 years ago

It's worth noting that the first number in a benchmark result is how many loop iterations (for n := 0; n < b.N) Go ran to produce the result.

The nanoseconds, bytes, and allocs per operation are the important part.
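Taking one line from kjksf's output above as a reading guide:

    BenchmarkConcatBuilder-8    1000000    1177 ns/op    248 B/op    5 allocs/op

The -8 suffix is GOMAXPROCS, 1000000 is the iteration count (b.N) the testing framework settled on, and the remaining columns - time, bytes allocated, and allocations per operation - are the figures to actually compare.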