There are plenty of Rust features I'd love to see finalized, like GATs (HKTs, sort of), generators, async trait methods, custom test frameworks, ...<p>But there is an area that could have a big impact on certain (mostly higher-level) domains, yet doesn't seem to get much attention: better trait objects.<p>They are severely limited in a few aspects:<p>* only a single trait/vtable<p>* casting is only available with Any, and you can't cast between different traits, requiring really awkward super-traits with manual conversion methods or hacks like mopa [1]<p>* object safety rules are cumbersome and prevent certain important traits like Clone from being usable, leading to clone_boxed, clone_arc everywhere, or proc macro solutions like dyn-clone [2]<p>* ...<p>Doing anything fancier with them usually feels annoying, so the standard library and the entire ecosystem strongly favor generics and monomorphization.<p>This is generally fine and has worked out well for the language, but there are plenty of use cases where more capable trait objects could reduce code size and compile times with very little impact on performance, while also enabling some interesting new patterns.<p>I realize there are plenty of implementation challenges that make work in this area far from trivial in the current language, but it's frustrating to miss out on part of the toolbox.<p>I think Swift is an interesting comparison. The languages are similar in quite a few aspects, but Swift often prioritizes small code size and dynamic dispatch over monomorphization. Its compile times aren't that great either, though...<p>ps: it is briefly mentioned in the post, but switching to LLD has provided noticeable build-time improvements on most of the binary crates I am working on.<p>[1] <a href="https://github.com/chris-morgan/mopa" rel="nofollow">https://github.com/chris-morgan/mopa</a>
[2] <a href="https://github.com/dtolnay/dyn-clone" rel="nofollow">https://github.com/dtolnay/dyn-clone</a>
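For readers who haven't hit this, here is a minimal sketch of the clone_boxed workaround mentioned above (the trait and type names are made up for illustration): because Clone's method returns Self, it isn't object safe, so a boxed-clone method is added to the trait instead.

```rust
// Clone can't be a supertrait of an object-safe trait (it returns Self),
// so we expose a clone-into-Box method on the trait itself.
trait Shape {
    fn area(&self) -> f64;
    // Hand-written boxed clone; crates like dyn-clone [2] generate this.
    fn clone_boxed(&self) -> Box<dyn Shape>;
}

#[derive(Clone)]
struct Circle {
    radius: f64,
}

impl Shape for Circle {
    fn area(&self) -> f64 {
        std::f64::consts::PI * self.radius * self.radius
    }
    fn clone_boxed(&self) -> Box<dyn Shape> {
        Box::new(self.clone())
    }
}

fn main() {
    let original: Box<dyn Shape> = Box::new(Circle { radius: 1.0 });
    let copy = original.clone_boxed();
    assert_eq!(original.area(), copy.area());
}
```

Every implementor has to repeat the `Box::new(self.clone())` boilerplate, which is exactly the annoyance the comment is pointing at.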
I love when authors spell out how to read something aloud to help newcomers, as in<p><pre><code> fn print<T: ToString>(v: T) {
     println!("{}", v.to_string());
 }
</code></pre>
> We say that “print is generic over type T, where T implements Stringify”
This is a good article, but rather misses the point on performance of monomorphization vs. dynamic dispatch. Yes, CPU indirect branch predictors are getting better, and compilers are getting smarter about identifying opportunities to turn dynamic into static dispatch. But inlining remains the optimizer’s silver bullet, enabling a host of dependent optimizations. It’s those further optimizations that make the primary performance difference for static calls.
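A minimal sketch of the two call forms under discussion (function names are illustrative): the generic version hands the optimizer a direct call it can inline, unlocking those dependent optimizations, while the dyn version goes through a vtable unless the compiler manages to devirtualize it.

```rust
// Static dispatch: monomorphized per closure type, so the call to `f`
// is direct and a prime inlining candidate.
fn call_static(f: impl Fn(i32) -> i32, x: i32) -> i32 {
    f(x)
}

// Dynamic dispatch: the call goes through a vtable slot, which blocks
// inlining unless the optimizer can prove the concrete target.
fn call_dyn(f: &dyn Fn(i32) -> i32, x: i32) -> i32 {
    f(x)
}

fn main() {
    let double = |x| x * 2;
    assert_eq!(call_static(double, 21), 42);
    assert_eq!(call_dyn(&double, 21), 42);
}
```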
> first, modern CPUs have invested a lot of silicon into branch prediction, so if a function pointer has been called recently it will likely be predicted correctly the next time and called quickly<p>Huh, TIL. Branch prediction is normally about predicting which branch an `if` would take. But apparently this applies to indirect jumps as well: <a href="https://stackoverflow.com/a/26240197/1082652" rel="nofollow">https://stackoverflow.com/a/26240197/1082652</a>
A true gem from the "comments on the last episode":<p>> <i>The compile times we see for TiKV aren't so terrible, and are comparable to C++</i><p>So if you're already used to the terrible compile times of C++, the compile times of Rust won't seem that bad in comparison. And Mozilla, where Rust started, mostly relies on C++. That does explain a lot...
> In general, for programming languages, there are two ways to translate a generic function:<p>> 1. translate the generic function for each set of instantiated type parameters<p>> 2. translate the generic function just once, calling each trait method through a function pointer (via a “vtable”).<p>The approach in Haskell might be considered a variation of 2, since it involves indirection, but it differs a little from languages that normally use vtables: it's not selecting between different implementations at run time, just looking up the pre-determined implementation through a new parameter.<p>In particular, the function is transformed into a higher-order function accepting a new parameter representing the to_string functionality (a "dictionary"); at the call site, the appropriate concrete to_string implementation is passed to the transformed function. As with vtables, this new higher-order 'print' only needs to be compiled once.
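The dictionary-passing transformation described above can be sketched in Rust as an analogy (names are made up, and rustc would of course still monomorphize this generic function rather than compile it once): the trait's method becomes an explicit function-pointer parameter supplied at the call site.

```rust
// A hand-rolled "dictionary": the trait's methods become a struct of
// function pointers, roughly what GHC passes implicitly.
struct ToStringDict<T> {
    to_string: fn(&T) -> String,
}

// The generic function takes the dictionary as an explicit extra parameter
// and calls the method through it (an indirect call, as in option 2).
fn print<T>(dict: &ToStringDict<T>, v: T) {
    println!("{}", (dict.to_string)(&v));
}

fn main() {
    // At the call site, the compiler would insert the concrete dictionary
    // for the instantiated type; here we build it by hand.
    let int_dict = ToStringDict::<i32> {
        to_string: |n: &i32| n.to_string(),
    };
    print(&int_dict, 42); // prints "42"
}
```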
Once all of this compile-time metaprogramming and code execution starts to happen, it always makes me ask: doesn’t this just conclude with dynamic typing? I’m currently a static typing lover. But it’s almost as if we’re just looking for the full power of a programming language. Why not just use the language itself instead of a weird, stratified compile time language?
Can we have the title changed to "Generics and Compile Time in Rust"? The way it's written now, I thought for sure it would be about compile-time programming using generics.
I am convinced that `rustc` should have an "optimizer lint" phase that runs all the same checks other languages use to silently change the behavior of generated code, but instead of applying them automatically it would suggest changes that affect run time and compile time, like `Box`ing fields or variants that disproportionately affect the size of an `enum` [1], or switching generic params to trait objects or vice versa [2] when it makes sense.<p>The advantage of not doing it automatically is that the behavior of the compiled code can <i>always</i> be inferred from looking at the source: no magic, no sudden changes in behavior because some threshold was passed in some optimizer.<p>[1]: <a href="https://github.com/rust-lang/rust-clippy/pull/5466" rel="nofollow">https://github.com/rust-lang/rust-clippy/pull/5466</a><p>[2]: <a href="https://github.com/rust-lang/rust-clippy/issues/14" rel="nofollow">https://github.com/rust-lang/rust-clippy/issues/14</a>
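The enum-boxing suggestion can be illustrated with a small sketch (the types are hypothetical): an enum is as large as its largest variant, so boxing one oversized variant shrinks every value of the enum.

```rust
use std::mem::size_of;

// Every value of this enum reserves space for the largest variant,
// even though `Big` might be rare in practice.
#[allow(dead_code)]
enum Unboxed {
    Small(u8),
    Big([u8; 1024]),
}

// Boxing the large payload moves it to the heap; only a pointer
// (plus the discriminant) is stored inline.
#[allow(dead_code)]
enum Boxed {
    Small(u8),
    Big(Box<[u8; 1024]>),
}

fn main() {
    // The boxed version is dramatically smaller per value.
    assert!(size_of::<Boxed>() < size_of::<Unboxed>());
    println!(
        "unboxed: {} bytes, boxed: {} bytes",
        size_of::<Unboxed>(),
        size_of::<Boxed>()
    );
}
```

This is exactly the trade-off a lint can surface but shouldn't apply silently: boxing adds an allocation and an indirection, which only the programmer can judge.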
Nim or Rust?<p>Some Rust syntax seems overly confusing, but Nim doesn't really have strong corporate support backing it.<p>Both are still new and missing a lot of libraries.<p>Rust is annoying in that there aren't standardized libraries for common functions, just some guy's tweet telling you to use some random Cargo crate.
> <i>Note that in these examples we have to use inline(never) to defeat the optimizer. Without this it would turn these simple examples into the exact same machine code. I'll explore this phenomenon further in a future episode of this series.</i><p>I'm really eager to use more Rust, but these optimizations really turn me off. Optimizing for the compiler feels like metaprogramming.
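For context, here is a sketch of the inline(never) trick the quote refers to (the bodies are illustrative, not the article's exact code): without the attribute, the optimizer can inline both trivial bodies and emit identical machine code for the static and dynamic versions, hiding the dispatch difference being measured.

```rust
// Static dispatch: monomorphized per concrete T.
#[inline(never)]
fn print_generic<T: ToString>(v: T) {
    println!("{}", v.to_string());
}

// Dynamic dispatch: to_string is called through a vtable.
#[inline(never)]
fn print_dyn(v: &dyn ToString) {
    println!("{}", v.to_string());
}

fn main() {
    print_generic(42);
    print_dyn(&42);
}
```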