I'd argue that golang is inherently not a systems language, with its mandatory GC-managed memory. I think it's a poor choice for anything performance- or memory-sensitive, especially a database. I know people will disagree (hence all the DBs written in golang these days, and Java before it), but I think C/C++/Rust/D are all superior for that kind of application.<p>All of which is to say, I don't think it matters. Use the right tool for the job - if you care about generic overhead, golang is not the right thing to use in the first place.
Go does have some form of monomorphization implemented in Go 1.18; it's just hidden behind a compiler flag.<p>Look at the assembly difference between these two examples:<p>1. <a href="https://godbolt.org/z/7r84jd7Ya" rel="nofollow">https://godbolt.org/z/7r84jd7Ya</a> (without monomorphization)<p>2. <a href="https://godbolt.org/z/5Ecr133dz" rel="nofollow">https://godbolt.org/z/5Ecr133dz</a> (with monomorphization)<p>If you don't want to use godbolt, run the command `go tool compile '-d=unified=1' -p . -S main.go`<p>I guess the flag is not documented because the Go team hasn't committed to a particular implementation yet.
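If you want to try the comparison yourself without godbolt, any small generic function will do. Here's a minimal sketch (my own example, not the one from the links above) whose generated assembly you can inspect under both modes with the command above:

```go
package main

import "fmt"

// Max is a tiny generic function whose compiled output differs
// depending on whether the compiler stencils one body per GC shape
// (with a runtime dictionary) or emits a fully specialized copy per
// type. Compile with and without '-d=unified=1' and diff the -S output.
func Max[T int | float64](a, b T) T {
	if a > b {
		return a
	}
	return b
}

func main() {
	fmt.Println(Max(1, 2))     // int instantiation
	fmt.Println(Max(1.5, 2.5)) // float64 instantiation
}
```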
This is a really long and informative article, but I would propose a change to the title here: "Generics can make your Go code slower" reads like the expected outcome, whereas the conclusion of the article leans more towards "Generics don't always make your code slower". The article also enumerates some good ways to use generics, as well as some anti-patterns.
For me Go has replaced Node as my preferred backend language. The reasons are the power of static binaries, the confidence that the code I write today can still run ten years from now, and the performance.<p>The difference in the code I’m working with is being able to handle 250 req/s in Node versus 50,000 req/s in Go, without me doing any performance optimizations.<p>From my understanding Go was written with developer ergonomics first, with performance a lower priority. Generics undoubtedly make it a lot easier to write and maintain complex code. That may come at a performance cost, but for the work I do, even if it cuts the req/s in half I can always throw more servers at the problem.<p>Now if I were writing a database or something where performance is paramount, I can understand why this would be a concern; it just isn’t for me.<p>I’d be very curious what orgs like CockroachDB and even K8s think about generics at the scale they’re using them.
Really well written article. I liked that the author tried to keep the language simple around a fair amount of complex topics.<p>Although the article paints the Go solution for generics in a somewhat negative light, it actually made me more positive about it.<p>I don't want generic code to be pushed everywhere in Go. I like Go to stay simple, and it seems the choices the Go authors have made will discourage overuse of generics. With interfaces you already avoid code duplication, so why push generics? It is just a complication.<p>Now you can keep generics to the areas where Go didn't use to work so great.<p>Personally I quite like that Go is trying to find a niche somewhere between languages such as Python and C/C++. You get better performance than Python, but they are not seeking zero overhead at any cost like C++, which dramatically increases complexity.<p>Given the huge number of projects implemented with Java, C#, Python, Node etc., there must be more than enough cases where Go has perfectly good performance. In the more extreme cases I suspect C++ and Rust are the better options.<p>Or if you do number crunching and more scientific stuff, then Julia will actually outperform Go, despite being dynamically typed. Julia is a bit the opposite of Go: Julia has generics (parameterized types) for performance rather than type safety.<p>In Julia you can create functions taking interface types and still get inlining and max performance. Just throwing it out there, as many people seem to think that to achieve max performance you always need a complex statically typed language like C++/D/Rust. No you don't. There are also very high-speed dynamic languages (well, only Julia I guess at the moment. Possibly LuaJIT and Terra).
I'm excited about generics that give you a tradeoff between monomorphization and "everything is a pointer". The "everything is a pointer" approach, like Haskell's, is incredibly inefficient wrt execution time and memory usage; the "monomorphize everything" approach can explode your code size surprisingly fast.<p>I wouldn't be surprised if we get some control over monomorphization down the line, but if Go had started with the monomorphization approach, it would be impossible to back out of, because doing so would cause performance regressions. Starting with the shape-stenciling approach means that introducing monomorphization later can give you a performance improvement.<p>I'm not trying to predict whether we'll get monomorphization at some future point in Go; I'm just saying that at least the door is open.
My first use of Go generics has been for a concurrent "ECS" game engine. In this case, the gains are pretty obvious, I think.<p>I get to write one set of generic methods and data structures that operate over arbitrary "Component" structs, and I can allocate all my components of a particular type contiguously on the heap, then iterate over them with arbitrary, type-safe functions.<p>I can't fathom that doing this via a Component interface would be even close to as fast, because it would destroy cache performance by introducing a bunch of interface tuples and pointer dereferences for every single instance. Not to mention the type-unsafe code being yucky. Am I wrong?<p>FWIW I was able to update 2,000,000 components per (1/60s) frame per thread in a simple Game of Life prototype, which I am quite happy with. But I never bothered to evaluate whether interfaces would be as fast.
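The pattern described above can be sketched like this (names are hypothetical, not from any particular engine):

```go
package main

import "fmt"

// Store keeps all components of one type in a single contiguous slice,
// so iteration is cache-friendly and no interface boxing is needed.
type Store[T any] struct {
	items []T
}

// Add appends a component to the contiguous backing slice.
func (s *Store[T]) Add(c T) {
	s.items = append(s.items, c)
}

// Each applies a type-safe function to every component in place.
func (s *Store[T]) Each(f func(*T)) {
	for i := range s.items {
		f(&s.items[i])
	}
}

// Position is an example "Component" struct.
type Position struct{ X, Y float64 }

func main() {
	var positions Store[Position]
	positions.Add(Position{X: 1, Y: 2})
	positions.Add(Position{X: 3, Y: 4})
	positions.Each(func(p *Position) { p.X += 1 }) // move everything right
	fmt.Println(positions.items[0].X, positions.items[1].X)
}
```

The interface-based alternative would store `[]Component` (an interface), turning the tight loop over a flat slice into a walk over (type, pointer) pairs with a dereference per element, which is where the cache concern comes from.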
Great article; I've just skimmed it, but will definitely dive deeper into it. I thought Go was doing full monomorphization.<p>As another datapoint I can add that I tried to replace the interface{}-based btree that I use as the main workhorse for grouping in OctoSQL[0] with a generic one, and got around a 5% speedup out of it in terms of records per second.<p>That said, compiling with Go 1.18 vs Go 1.17 got me a 10-15% speedup by itself.<p>[0]:<a href="https://github.com/cube2222/octosql" rel="nofollow">https://github.com/cube2222/octosql</a>
Some of the issues pointed out by this (very good) article may already be fixed in tip Go, with <a href="https://go-review.googlesource.com/c/go/+/385274" rel="nofollow">https://go-review.googlesource.com/c/go/+/385274</a>
The first code-to-assembly highlighting example here is beautiful. Question to the authors— is that custom just for this article?<p>Is there an open source CSS library or something that does this?
What I expect to happen, now that golang has generics and reports like these are showing up, is that golang will explore monomorphizing generics and get hard numbers. They may also choose to take some of the compilation speed they've gained from linker optimizations and spend it on generics.<p>I can't imagine monomorphization being that big of a deal during compilation if the generation is deferred and the results are cached.
This is a great article, yet with an unnecessarily sensationalist headline. Generics can be improved in performance over time, but a superstition like "generics are slow" (not the exact headline, but what it implies to the reader) can remain stuck in our heads forever. I can see developers sticking to the dogma of "never use generics if you want fast code" and resorting to terrible duplication, and more bugs.
Key tldr from me:<p>> Ah well. Overall, this may have been a bit of a disappointment to those who expected to use Generics as a powerful option to optimize Go code, as it is done in other systems languages. We have learned (I hope!) a lot of interesting details about the way the Go compiler deals with Generics. Unfortunately, we have also learned that the implementation shipped in 1.18, more often than not, makes Generic code slower than whatever it was replacing. But as we’ve seen in several examples, it needn’t be this way. Regardless of whether we consider Go as a “systems-oriented” language, it feels like runtime dictionaries was not the right technical implementation choice for a compiled language at all. Despite the low complexity of the Go compiler, it’s clear and measurable that its generated code has been steadily getting better on every release since 1.0, with very few regressions, up until now.<p>And remember:<p>> DO NOT despair and/or weep profusely, as there is no technical limitation in the language design for Go Generics that prevents an (eventual) implementation that uses monomorphization more aggressively to inline or de-virtualize method calls.
> Inlining code is great. Monomorphization is a total win for systems programming languages: it is, essentially, the only form of polymorphism that has zero runtime overhead<p>Blowing your icache can result in slowdowns. In many cases it's worth having smaller code even if it's a bit slower when microbenchmarked cache-hot, to avoid evicting other frequently used code from the cache in the real system.
Monomorphisation is a double-edged sword. Sometimes keeping the code smaller and hot in cache is better than inlining everything, especially when your application does not exclusively own all the system resources (an assumption that many “systems programming languages” sadly make). There is too much focus on “performance”, aka microbenchmarks, but they don’t tell you the whole story. If you have a heavily async environment, with multiple tasks running in parallel and waiting on each other in complex patterns, more compact, reusable code can not only speed up the work but also allow you to do more work per watt of energy.<p>I think it’s great that golang’s designers decided to follow Swift’s approach instead of specializing everything. The performance issues can be fixed in time with more tools (like monomorphisation directives) and profile-guided optimization.
This is a very interesting article. I was, however, a bit confused by the lingo calling everything generics. As I understood it, the main point of the article quite precisely matched the distinction between generics and templates as I learned it. Therefore what surprised me most was the fact that Go monomorphizes generic code sometimes. Which, however, makes sense given the way Go's module system works – i.e. imported modules are included in the compilation – but doesn't fit my general understanding of generics.
Similar to how the GC has become faster and faster with each version, we can expect the generics implementation to do so too. I wouldn’t pay much attention to conclusions about performance from the initial release of the feature. The Go team is quite open about their approach.
> there’s no incentive to convert a pure function that takes an interface to use Generics in 1.18.<p>Good. I saw a lot of people suggesting in late 2021 that you could use generics as some kind of `#pragma force-devirtualization`, and that would be awful if it became common.
Related. The introduction of Generics in Go revived an issue about the ergonomics of typesafe Context in a Go HTTP framework called Gin:
<a href="https://github.com/gin-gonic/gin/issues/1123" rel="nofollow">https://github.com/gin-gonic/gin/issues/1123</a><p>If anyone can contribute, please do.
Meh. The people who screamed loudest about Generics missing in Go aren't going to be using the language now that the language has them, and are going to find something new to complain about.<p>The language will suffer now with additional developmental and other overhead.<p>The world will continue turning.
(off-topic) Anyone else using Firefox know why the text starts out light gray and then flashes to unreadably dark gray after the page loads? (The header logo and text change from gray to blue too)
Is there any large project that has done an in-place replacement to use generics and been benchmarked? I doubt that the change is even measurable in general.
Well, sure. Not writing hand-tuned assembly can make your code slower, too. Go's value as a language is how it fills the niche between Rust and Python, giving you low-level things like control over memory layout, while still trading some performance for developer experience.