I am extremely excited for this feature, especially now that I live in "absolutely no allocations" embedded land.<p>But also, beyond that, this is pretty much the last major feature that I've wanted in Rust. I've got some long-form writing in my head about this, but previously, I would have cast it as two or 2.5 features:<p>* const generics<p>* GATs<p>* specialization (this is the half)<p>However, when I've started to think about explaining these sorts of things, GATs (and to some degree specialization) feel much more to me like a removal of a restriction on combinations of features, more than "new features" strictly speaking. I think the line between these two perspectives is fascinating. On some level, you can even cast const generics in this way too: what it's really doing is making arrays a first-class feature of the language. I think that's a bigger stretch than GATs though, so I am not fully sure I'd make that argument. (The minimum feature we're stabilizing now basically does only this, but we do plan on going farther, so it feels true on a technicality now but not later.)<p>Regardless, it's been nice to see how much we've slowed down in adding major things. November of 2019 was the last time we had a feature this large hit stable. 18 months between huge things feels much more like the cadence of more mature languages that have been around a lot longer than Rust.<p>There is still a lot of work to do removing restrictions on existing features, especially const fn and const generics. But Rust is really starting to feel "done" to me, personally.
Having const generics will also permit much more elegant implementations of low-level linear algebra.<p>For example, when writing a `GEMM` (general matrix multiplication) routine, the core building block is a so-called kernel, which gets moved over the matrices like a stencil. The size of the kernel depends on a) the numerical precision (float, double), b) the available SIMD intrinsics, and c) the architecture the code runs on (Haswell, Skylake, and Zen have different latencies and throughputs for different intrinsics).<p>Right now it's surprisingly painful to write an optimal kernel and buffer, since each architecture needs a different one and you essentially have to hardcode them all. With const generics this should become much easier.
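<p>A minimal sketch of what that could look like (the function name and panel layouts are illustrative, not from any real GEMM crate): the register-block shape of the microkernel becomes a pair of const parameters, so each architecture just instantiates different values of MR and NR instead of getting its own hardcoded kernel.<p>

```rust
// Hypothetical GEMM microkernel, generic over its register-block
// shape MR x NR. A real kernel would use SIMD intrinsics; this is a
// plain scalar version to show the const-generic structure only.
fn microkernel<const MR: usize, const NR: usize>(
    k: usize,
    a: &[f64],                // MR x k panel of A, column-major
    b: &[f64],                // k x NR panel of B, row-major
    c: &mut [[f64; NR]; MR],  // MR x NR accumulator, fully on the stack
) {
    for p in 0..k {
        for i in 0..MR {
            for j in 0..NR {
                c[i][j] += a[p * MR + i] * b[p * NR + j];
            }
        }
    }
}

fn main() {
    // A 2x2 kernel over an inner dimension k = 2:
    // C = A * B with A = [[1,2],[3,4]] and B = [[5,6],[7,8]].
    let a = [1.0, 3.0, 2.0, 4.0]; // column-major A
    let b = [5.0, 6.0, 7.0, 8.0]; // row-major B
    let mut c = [[0.0f64; 2]; 2];
    microkernel::<2, 2>(2, &a, &b, &mut c);
    assert_eq!(c, [[19.0, 22.0], [43.0, 50.0]]);
}
```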
I'm really happy that this finally landed. Unfortunately the limitations of the MVP are severe.<p>For example, you can't use an associated constant as a generic argument, which prevents constructions like this:<p><pre><code> trait HashFunction {
const OUTPUT_SIZE: usize;
fn hash(input: &[u8]) -> [u8; Self::OUTPUT_SIZE];
}</code></pre>
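<p>To make the limitation concrete: the MVP only accepts const parameters and literals in array lengths, not associated constants. A standalone sketch of what does compile today (`first_n` is a made-up helper, just for illustration):<p>

```rust
// min_const_generics allows a const parameter of an integral type to
// be used directly as an array length...
fn first_n<const N: usize>(input: &[u8]) -> [u8; N] {
    let mut out = [0u8; N];
    // Panics if input has fewer than N bytes; fine for a sketch.
    out.copy_from_slice(&input[..N]);
    out
}

// ...but `[u8; Self::OUTPUT_SIZE]`, with OUTPUT_SIZE an associated
// const as in the trait above, is rejected by the MVP.

fn main() {
    let bytes = [1u8, 2, 3, 4, 5];
    let head: [u8; 3] = first_n::<3>(&bytes);
    assert_eq!(head, [1, 2, 3]);
}
```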
Users of the serde-big-array crate can already opt into using min_const_generics by enabling the const-generics feature. The advantage: you no longer have to list a bunch of needed array sizes (or rely on the builtin defaults), but can use whatever size you want.<p>Serde proper needs something like const_evaluatable_checked before it can offer large array support.
This is one of my big frustrations with Rust and probably the only feature from C++ I really missed (C++ templates having "non-type parameters"). I'm very happy to soon see it lifted.<p>There were already workarounds for some common scenarios, but it's just so much simpler and more convenient to be able to write `fn foo<const N: u32>()`.<p>I'll finally be able to simplify a significant portion of my code.
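<p>For readers who haven't seen the syntax: here is a tiny self-contained example of a function generic over a `const N: u32`, directly analogous to a C++ non-type template parameter (`low_bits` is my own made-up example, not from the commenter's code):<p>

```rust
// N is a compile-time integer parameter, like `template<unsigned N>`
// in C++. Each instantiation is monomorphized with N baked in.
fn low_bits<const N: u32>(x: u32) -> u32 {
    x & ((1u32 << N) - 1) // keep only the low N bits
}

fn main() {
    assert_eq!(low_bits::<4>(0xFF), 0x0F);
    assert_eq!(low_bits::<8>(0x1234), 0x34);
}
```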
This is a pretty big deal. One benefit is that it becomes much easier to write code that runs at compile time. Most Rust libraries use generics, so when you use a library, compile-time support usually isn't available. With const generics, compile-time support can become widespread.
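<p>One way to read this comment, as a hedged sketch: a library type can now be generic over a size and still construct itself in a `const fn`, entirely on the stack, with the capacity fixed at compile time (the `Ring` type here is invented for illustration):<p>

```rust
// A fixed-capacity ring buffer whose size N is a type-level constant.
// `new` is a const fn, so a Ring can be built in const context.
struct Ring<const N: usize> {
    buf: [u64; N],
    head: usize,
}

impl<const N: usize> Ring<N> {
    const fn new() -> Self {
        Ring { buf: [0; N], head: 0 }
    }
    fn push(&mut self, v: u64) {
        self.buf[self.head % N] = v; // wrap around at capacity N
        self.head += 1;
    }
}

fn main() {
    let mut r = Ring::<4>::new();
    for i in 0..6 {
        r.push(i);
    }
    // Pushes 4 and 5 wrapped around and overwrote slots 0 and 1.
    assert_eq!(r.buf, [4, 5, 2, 3]);
}
```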
This is amazing. Kudos to the Rust team for reaching this milestone.<p>I use Rust for a (to be released -- comment with your GitHub if you want early access) Bitcoin smart contract embedded domain-specific language (eDSL) that operates sort of like a circuit meta-programming language. Having const generics will drastically simplify my code and enable greater "structural type safety" (checked at the Rust level rather than as a part of my eDSL).<p>They're not wrong when they say that this is the most highly anticipated feature :)
Newbie question if someone doesn't mind.<p>This seems to make it easier to do stuff without allocating heap memory.<p>How large is the stack? Can I do complex things with only the stack?
D templates can take just about anything in the language as an argument (user-defined, builtin, doesn't matter) and I can confirm that being able to do this is extremely useful: if you write a hybrid allocator, for instance, you can specify its metaparameters as template parameters.<p>e.g.<p><pre><code> struct SmallString(size_t N) { /* etc. */ }</code></pre>
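<p>For comparison, a rough Rust analogue of that D `SmallString(size_t N)` is now expressible with const generics. This is my own sketch of the idea (an inline, stack-allocated string buffer whose capacity is a type parameter), not an existing crate:<p>

```rust
// Capacity N is part of the type, so SmallString<8> and SmallString<16>
// are distinct types and the buffer lives entirely on the stack.
struct SmallString<const N: usize> {
    buf: [u8; N],
    len: usize,
}

impl<const N: usize> SmallString<N> {
    fn new() -> Self {
        SmallString { buf: [0; N], len: 0 }
    }

    // Appends s if it fits; returns false (and changes nothing) otherwise.
    fn push_str(&mut self, s: &str) -> bool {
        let bytes = s.as_bytes();
        if self.len + bytes.len() > N {
            return false;
        }
        self.buf[self.len..self.len + bytes.len()].copy_from_slice(bytes);
        self.len += bytes.len();
        true
    }

    fn as_str(&self) -> &str {
        std::str::from_utf8(&self.buf[..self.len]).unwrap()
    }
}

fn main() {
    let mut s = SmallString::<8>::new();
    assert!(s.push_str("hello"));
    assert!(!s.push_str("world!")); // 5 + 6 > 8: rejected, buffer unchanged
    assert_eq!(s.as_str(), "hello");
}
```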
It must be hard to design a zero-cost abstractions language. It seems that there are more and more terms and concepts that come up in order to support more directly-pragmatic PL features.[1] Reminds me a bit of how Haskell comes across to me from the outside with all its GHC extensions. Haskell is a research language but other more specialized functional languages have the luxury of being able to theorize and implement more orthogonal and perhaps more “elegant” concepts and approaches. (Again, merely an impression from the outside.)<p>Consider the design effort behind parametric memory allocators. That seems like a pretty cutting-edge problem. And yet I bet the Rust folks knew that they would want/need to do this way before they started doing that work in earnest, because people from C++ seem to want the same thing (if they don’t have it already?).<p>I idly wonder if one could, if one was in a similar position as Rust was some years ago, just go ahead and design a full-on unapologetic type-level programming language from the start. Because you <i>know</i> that your type-level terms will be worthy of the moniker “language” eventually (and it might not be a compliment as such).<p>Just a nice, high-level language that doesn’t bother with the “bare metal” concepts that Rust the value-level language has to deal with. (Can it even be done? Don’t ask the peanut gallery about that.)<p>Either that or you accidentally build an emergent language that Gankro can write an article about one day titled, I don’t know, Shitting Your Pants With Higher-Order Unsafe Unwind Type-Level Allocator Escape-Suppressing Storm Cellars.<p>[1] In this case: you have more use for compile-time integers than something more general like being able to describe that two nat-indexed lists are of the same length, like you can in Idris.