I don't get why people are scared of GC. Having worked in embedded software, mostly in C, profiling made it evident that C programs spend a lot of time allocating and releasing memory.
And those programs usually release memory inline, on the hot path, unlike GC languages where the release is deferred, done in parallel, and bounded by time-boxed pauses. Most "systems" programs use worse memory management strategies than what a modern GC actually offers.<p>Sure, some devices require static memory allocation or are quite restricted. But a lot of other "systems programming" targets far more capable machines.
This is a common trend among D, Chapel, Vale, Hylo, ParaSail, Haskell, OCaml, Swift, Ada, and now June.<p>While Rust made Cyclone's type system more manageable for mainstream computing, everyone else is trying to combine the benefits of linear/affine type systems with the productivity of automated resource management.<p>Naturally, it would be interesting to see whether some of those attempts can feed back into Rust's ongoing designs.
struct Node<'a, 'b, 'c> {
    data1: &'a Data,
    data2: &'b Data,
    data3: &'c Data,
}<p>Wow. It's like teaching C++ and starting from SFINAE. Or C# and starting from type parameter constraints.<p>Please think of real-world examples when teaching stuff. I am very eager to see the program a beginner would need to write that requires: 1) references in a struct; 2) 3 separate lifetime parameters for the same struct.
Effect systems strike again! They've come up a few times recently on HN, and region-based memory management is another thing they can express. This paper describes a type system from which region-based memory management falls out as a special case: <a href="https://dl.acm.org/doi/10.1145/3618003" rel="nofollow">https://dl.acm.org/doi/10.1145/3618003</a>
> Rust's focus on embedded and system's development is a core strength. June, on the other hand, has a lean towards application development with a system's approach. This lets both co-exist and offer safe systems programming to a larger audience.<p>I think this is a mistake, both on June's part and on Rust's. All low-level languages (by which I mean languages that offer control over all/most memory allocation) inherently suffer from low abstraction, i.e. there are fewer possible implementations of a particular interface or, conversely, more changes to the implementation force changes to the interface itself or to its clients. This is why, even though writing a program in many low-level languages can be not much more expensive than writing it in a high-level language (one where memory management is entirely or largely automatic), costs accrue in maintenance.<p>This feature of low-level programming isn't inherently good or bad -- it just is, and it's a tradeoff that is implicitly taken when choosing such a language. It seems that both June and Rust try to hide it, each in their own way: Rust by adopting C++'s "zero-cost abstraction" approach, which is low abstraction masquerading as high abstraction when it appears as code on the screen, and June by yielding some amount of control. But because the tradeoff of low-level programming is real and inescapable, ultimately (after some years of seeing the maintenance costs) users learn to pick the right tradeoff for their domain.<p>As such, languages should focus on the domains that are most appropriate for the tradeoffs they force; aiming for others usually backfires (as we've seen happen with C++). Given that ultimately virtually all users of a low-level language will be those using it in a domain where the low-level tradeoff is appropriate -- i.e. programs in resource-constrained environments or programs requiring full and flexible control over every resource, like OS kernels -- trying to hide the tradeoff in the (IMO) unattainable hope of growing the market beyond the appropriate domain will result in disappointment due to a bad product-market fit.<p>Sure, it's possible that C++'s vision of broadening the scope of low-level programming was right and only the execution was wrong, but I wouldn't bet on it, on both theoretical (low abstraction and its impact on maintenance) and empirical (for decades, no low-level language has shown signs of taking a significant market share from high-level languages in the applications space) grounds. Trying to erase tradeoffs that appear fundamental -- to have your cake and eat it too -- has consistently proven elusive.
I don't know why, but Rust's syntax just nails it for me. The more I use it, the more I appreciate it. I see many projects that diverge from Rust's syntax while being inspired by it. Why?
Related: I really like the look of Hare[1]; sadly they don't seem to be interested in a cross-platform compiler. As I understand it, some of the design decisions have basically led it to be mostly a Linux/BSD language.<p>I personally love C. I think designing a language top-down is a poor approach overall; I prefer the bottom-up approach of the C-inspired systems languages that aim to fix C rather than declare <i>this is how the world should beeee!</i><p>[1] <a href="https://harelang.org/" rel="nofollow">https://harelang.org/</a>
The discussion of grouped lifetimes reminds me of the principles of Flow-based programming (without the visual part), where one main idea is that only one process owns a data packet (IP) at a time.<p>My own experience coding in this style [1] has been extremely reassuring.<p>You can generally consider only the context of one process at a time, quite safely, since there aren't even any function calls between processes, only data passing.<p>This meant, for example, that I could port a PHP application that I had been coding on for years, fighting bugs all over, into a flow-based Go application in two weeks, with development time perfectly linear in the number of processes. I just coded each process in the pipeline one by one, tested it, and continued with the next. There were never any surprises as the application grew, as the interactions between the processes are just simple data passing, which can't really cause that much trouble.<p>This is of course a radically different way of thinking about and designing programs, but it really holds some enormous benefits.<p>[1] <a href="https://github.com/rdfio/rdf2smw/blob/master/main.go#L58-L150">https://github.com/rdfio/rdf2smw/blob/master/main.go#L58-L15...</a>
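A minimal sketch of that ownership-passing style in Rust (not the linked Go project's code), using standard library channels and a made-up Packet type: each stage exclusively owns a packet while handling it, then hands ownership to the next stage over a channel.

use std::sync::mpsc;
use std::thread;

// Hypothetical information packet; ownership moves from stage to stage.
struct Packet {
    payload: String,
}

fn main() {
    let (to_upper, from_source) = mpsc::channel::<Packet>();
    let (to_sink, from_upper) = mpsc::channel::<Packet>();

    // Stage 1: transform each packet it owns, then pass ownership along.
    let upper = thread::spawn(move || {
        for mut packet in from_source {
            packet.payload = packet.payload.to_uppercase();
            to_sink.send(packet).unwrap();
        }
    });

    // Stage 2: consume packets.
    let sink = thread::spawn(move || {
        for packet in from_upper {
            println!("{}", packet.payload);
        }
    });

    // Source: emit a few packets; dropping the sender shuts the pipeline down.
    for text in ["hello", "flow-based", "pipelines"] {
        to_upper.send(Packet { payload: text.to_string() }).unwrap();
    }
    drop(to_upper);

    upper.join().unwrap();
    sink.join().unwrap();
}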
This seems to be an "arena" or "pool" allocation approach. Conceptually quite a mature technique, but this adds the benefit of statically checking against pool lifetime?<p>Probably works quite well for systems programming, where things are either "live forever", "reallocate within some pool" (thread handles, file descriptors, etc), or "transient" (for the lifetime of a system call or similar).
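For illustration, a small sketch of that idea in today's Rust, assuming the third-party typed_arena crate (this is not June's mechanism): everything allocated in the arena shares its lifetime, freeing is one bulk drop, and the borrow checker rejects references that try to outlive the arena.

use typed_arena::Arena;

// Nodes borrow other nodes from the same arena, so they all share one lifetime.
struct Node<'arena> {
    value: i32,
    next: Option<&'arena Node<'arena>>,
}

fn main() {
    let arena: Arena<Node> = Arena::new();

    let first: &Node = arena.alloc(Node { value: 1, next: None });
    let second: &Node = arena.alloc(Node { value: 2, next: Some(first) });

    println!("{} -> {}", second.value, second.next.unwrap().value);
    // `first` and `second` cannot escape the scope of `arena`; the compiler checks it.
}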
So many comparisons to Go and C# in this thread. While at some level of abstraction all languages are the same, comparing GCed languages to non-GC languages doesn't make sense in my opinion. Rust would never have been made if its creators had been fine with Java.
> Effectively, this would mean that a data structure, like a linked list, would have a pointer pointing to the head which has a lifetime, and then every node in the list you can reach from that head has the same lifetime.<p>Right, isn't that what GhostCell and its variants (QCell) are all about? This would be great if it led to a more elegant and principled implementation of that pattern, that could also end up being fully supported in Rust itself.
This lifetimes thing is maybe not even a top-3 mistake Rust makes. I hope successor languages can have a metaprogramming system that is less dreadful than proc macros, the ability for users to write libraries that are generic over user-provided writers, readers, and allocators, and the ability to bubble up errors from functions that call fallible functions from 2 different libraries without writing your own huge error type definition every time.<p>It would also be nice if constructs that don't cause memory accesses and only ever do the correct thing or crash on the target CPU (such as integer division, or pshufb without an address operand on any Intel chip ever) were not unsafe. Placing "Well, LLVM says this arithmetic operation is UB and we won't bother to fix it" and "What if one day there's an x64 chip that does something other than crash when it encounters instructions from ISA extensions it does not have?" into the same bucket as "playing with raw pointers" is a bit weird.
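To make the error-bubbling point concrete, here is a sketch of the boilerplate being described, with hypothetical DbError and HttpError types standing in for errors from two different libraries (crates like thiserror shrink the ceremony, but the umbrella type still has to be written):

// Hypothetical error types, standing in for two third-party libraries.
#[derive(Debug)]
struct DbError;
#[derive(Debug)]
struct HttpError;

fn query_db() -> Result<String, DbError> { Ok("row".to_string()) }
fn fetch_url() -> Result<String, HttpError> { Ok("body".to_string()) }

// The hand-written umbrella type needed so `?` can bubble both errors up.
// (A real program would also impl Display and std::error::Error for it.)
#[derive(Debug)]
enum AppError {
    Db(DbError),
    Http(HttpError),
}

impl From<DbError> for AppError {
    fn from(e: DbError) -> Self { AppError::Db(e) }
}

impl From<HttpError> for AppError {
    fn from(e: HttpError) -> Self { AppError::Http(e) }
}

fn do_both() -> Result<String, AppError> {
    let row = query_db()?;   // DbError -> AppError via From
    let body = fetch_url()?; // HttpError -> AppError via From
    Ok(format!("{} {}", row, body))
}

fn main() {
    println!("{:?}", do_both());
}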
For use cases which aren't bare-metal embedded:<p>1) Prefer the stack wherever possible.<p>2) Reference counting (RC).<p>3) Unique pointers by default unless explicitly noted otherwise.<p>4) Immutable arguments and returns by default unless explicitly noted otherwise.<p>There's a language that already does all of this, and it's called Nim.
I'm not very well versed in Rust, but isn't it possible to implement this sort of checked arena allocation in Rust using lifetimes? Something like slotmap (<a href="https://docs.rs/slotmap/latest/slotmap/" rel="nofollow">https://docs.rs/slotmap/latest/slotmap/</a>), except that all of the pointers/keys have their lifetimes tied to the arena/pool/map.
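Roughly, yes -- here is a minimal hand-rolled sketch (not the slotmap API) in which keys borrow the pool, so the compiler stops a key from outliving the pool it came from:

use std::cell::{Ref, RefCell};
use std::marker::PhantomData;

struct Pool<T> {
    items: RefCell<Vec<T>>,
}

// A key carries the pool's lifetime via PhantomData; nothing exists at runtime.
struct Key<'pool, T> {
    index: usize,
    _pool: PhantomData<&'pool Pool<T>>,
}

impl<T> Pool<T> {
    fn new() -> Self {
        Pool { items: RefCell::new(Vec::new()) }
    }

    // The returned key borrows `self`, so it cannot outlive the pool.
    fn insert(&self, value: T) -> Key<'_, T> {
        let mut items = self.items.borrow_mut();
        items.push(value);
        Key { index: items.len() - 1, _pool: PhantomData }
    }

    fn get<'pool>(&'pool self, key: &Key<'pool, T>) -> Ref<'pool, T> {
        Ref::map(self.items.borrow(), |v| &v[key.index])
    }
}

fn main() {
    let pool = Pool::new();
    let a = pool.insert(10);
    let b = pool.insert(20);
    println!("{} {}", *pool.get(&a), *pool.get(&b));
    // Returning `a` from a scope that drops `pool` would be a compile error.
}

This only ties keys to the pool's lifetime; preventing a key from one pool being used with a different pool of the same type takes the invariant "branded" lifetimes that GhostCell-style APIs use.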
Better systems programming is model-based design and automatic code generation, period.<p>It is the be-all and end-all that will make those Star Trek scenes, where they alter some core starship system's programming in 2 minutes flat without mistakes, actually plausible.
Seems like the wrong problem to solve. "Systems programming" is hard, and should be hard, for reasons unrelated to the programming language used. Something like Rust, which forces you to constantly reevaluate your design before you even press the compile button, is ideal.<p>What's really lacking are safe, easy, strongly typed general-purpose languages that can leverage AOT compilation for high performance, static analysis, etc. A language with the learning curve of Python, near-C performance, and the strong safety guarantees of Rust. There is nothing that suggests to me such a language would not be possible. Swift and C# come closest, but they are warped by their respective corporate overlords.