Lots of interesting thoughts, and a good outline of ownership's limitations, but I think some big leaps were made. "Fatal flaw" is quite hyperbolic, and the comparison to OOP, while interesting, is much too hand-wavy to be used as evidence for that assertion.<p>But they were spot-on about what is arguably Rust's biggest weakness, and the things people do to work around it, and I think it's an open question whether that status quo can be improved upon. I.e. Rust right now is excellent at certain things ("linear things"), and okay at non-linear things. But could it be made really good for the latter too? Or will it remain somewhat confined to its specialization? The language occupies uncharted territory either way, so these kinds of questions are not surprising, and I for one am excited to see where things go.
I read the entire article, and the actual criticism seems to be that not everything can be represented with ownership semantics. I don't see how that would be a "fatal flaw" in any way; it just seems to be a sensationalized headline.
I’ve reached the bit about Aristotelian metaphysics (why??) and I still can’t tell what the author’s point is.<p>EDIT: Alright, I’ve skimmed to the end and it seems to me it is a very r/iamverysmart way of saying “I prefer to use RefCell”.<p>Honestly, I have trouble trusting someone’s judgement in programming language semantics when they can’t seem to be a good judge of their own written language semantics.<p>Why define an agent at the beginning, but leave “agency” undefined? What does “capability to act in a certain environment” even mean? Is a function/method something with “capability to act in a certain environment”? Is a C++ class?<p>Also:<p>> It is also a category error to treat them as “real objects” since “real objects” and “programming objects” have little connection with each other ontologically.<p>Mate, you link to the Wikipedia article about category errors, where it says that a CE is defined as comparing things that are ontologically different, which makes your whole sentence a tautology. What’s the value added for the reader??
This article compares the hierarchical nature of the type system of OOP languages (think Java) with the hierarchical nature of ownership in languages with Ownership/Move semantics (Rust and newly C++). The author notes that "composition over inheritance" is a rejection of the strictly tree-shaped OOP type system. In the same way, Rust's `RefCell` is an escape hatch from the strict hierarchy of ownership, maintaining a run-time invariant of one owner rather than a compile-time invariant.<p>One difference I see between these two escape hatches is switching costs. Rewriting the Java standard library to use composition and not inheritance borders on impossible. Rewriting the Rust standard library to use RefCell instead of &mut everywhere would be arduous and possibly automatable, but not impossible.
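A minimal sketch of the contrast being drawn here (my own toy example, not from the article): `&mut` enforces the "one mutable borrower" invariant at compile time, while `RefCell` enforces the same invariant at run time.

```rust
use std::cell::RefCell;

fn main() {
    // Compile-time invariant: only one &mut may be live at a time.
    let mut x = 1;
    let r = &mut x;
    *r += 1;
    // A second `&mut x` while `r` is still in use would be rejected by the borrow checker.

    // Run-time invariant: RefCell performs the same check dynamically.
    let c = RefCell::new(1);
    *c.borrow_mut() += 1;
    assert_eq!(*c.borrow(), 2);

    // Violating the invariant fails at run time instead of failing to compile:
    let live = c.borrow();                  // a shared borrow is outstanding...
    assert!(c.try_borrow_mut().is_err());   // ...so a mutable borrow is refused
    drop(live);
}
```

Same rule, different enforcement point, which is exactly why it works as an escape hatch from the static ownership hierarchy.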
In my simple peasant brain I always associate "ownership" with a simple chunk of memory, not with "objects" as in OOP.<p>As soon as a piece of code "knows" about this chunk of memory it has certain responsibilities, like not randomly scribbling over values or freeing the memory while other pieces of code might attempt the same.<p>"Ownership" and "Move Semantics" are entirely artificial conventions to give some structure to those responsibilities. Casting those conventions into "hard language rules" introduces tradeoffs, and evaluating those tradeoffs is important when deciding what language to use to solve certain problems. And this is where Rust comes in. Rust just enforces a couple of such "made up" conventions; none of those rules are dictated by the "real world" (e.g. the hardware the code runs on)... it's easy to get all tangled up in theoretical concepts like type systems and functional programming, but in the end the only thing that counts is the machine code that's running on a concrete CPU. Sometimes I feel that language designers are losing sight of this simple reality ;)
I suspect that a lot of the inconvenience of modelling non-linear problems in Rust actually doesn’t come down to ownership semantics alone, but the combination of ownership semantics <i>and the low-level memory model</i>. However, I don’t have any solid evidence to support this, because ownership semantics have only really been implemented with the low-level memory model, because the low-level memory model strongly benefits from it (memory safety; it more closely matches how computers work internally at that low level) and GC languages don’t <i>need</i> ownership semantics, and people don’t generally restrict things without cause (and ownership semantics are definitely a restriction).<p>My thesis here is that Rc<RefCell<_>> and similar are an effective escape hatch to ownership semantics, but they’re much less convenient than a garbage-collected type in a scripting language specifically because of the language’s low-level memory model. A GC language with ownership semantics might be able to obviate RefCell, which is the cause of Rc<RefCell<_>>’s inconvenience.<p>I’d be interested to see experimentation with ownership semantics in languages with higher-level memory models (which broadly means GC languages). My thesis could turn out to be quite wrong, and I haven’t worked out all the details of how the combination would work (especially I’m hazy on parts of removing RefCell, which is pretty critical), but I think it could be worth some research—ownership semantics for its own sake, rather than ownership semantics for memory safety.<p>> <i>10. 
Getting used to bypassing the borrow checker to reduce fighting implies people have just found a way to cope with the constraints it imposes.</i><p>Here’s the thing, though: it’s not just about finding a way to <i>cope</i> with the constraints it imposes; it’s also about <i>thriving within</i> the constraints it imposes, because it stops you from doing problematic things and guides you in the path of better designs much of the time (though non-linearity can definitely be a problem; it’s not all buttercups and daisies). When I work in JavaScript, I spend half the time missing ownership semantics, occasionally feeling downright miserable because ownership semantics would have made a problem far easier or less dangerous. Remember what I said about ownership semantics for its own sake rather than ownership semantics for memory safety? Yeah, most of the time I love Rust’s ownership semantics for the modelling rather than the memory safety.
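For readers who haven't hit it, here is the kind of ceremony the Rc&lt;RefCell&lt;_&gt;&gt; escape hatch imposes (an invented example: two views sharing one mutable value, the sort of non-linear sharing a GC language gives you for free):

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // One mutable value, shared by multiple non-hierarchical "owners".
    let score = Rc::new(RefCell::new(0));
    let view_a = Rc::clone(&score); // explicit reference-count bump
    let view_b = Rc::clone(&score);

    *view_a.borrow_mut() += 10; // every mutation goes through borrow_mut(),
    *view_b.borrow_mut() += 5;  // which can panic if a borrow is already live

    assert_eq!(*score.borrow(), 15);
}
```

The explicit `Rc::clone` calls, the `borrow_mut()` at every mutation site, and the possibility of a runtime borrow panic are precisely the inconvenience the parent comment attributes to combining ownership semantics with a low-level memory model.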
I failed to find an example of a practical problem in the article. What I did find was a load of circling around the subject from a distant philosophical perspective, all boiling down to essentially "you know what, there exist problems that ownership semantics aren't suitable for" (but using lots of wise-sounding language), while forgetting to present the problems.<p>The article also suffers from the syndrome of taking a feature of a language and pretending that everything has to be seen through the lens of that feature. Well, newsflash: ownership semantics apply to only some things in Rust. They apply to values and bindings. But they don't apply to e.g. indices. I can create as many i's, where i=5, all referring to the same Vec, which later are going to be used to mutate the Vec, as I like, and Rust won't stop me. Similarly, there is Rc and Arc.<p>It also included one of those "wise" thoughts along the lines of "You know what? The true nature of computers/economy/life/love is..." - one of the templates of meaningless sentences. In this case it was "however virtually all computers are fundamentally mutable things". Hmm... Did we suddenly forget about pipelining, instruction reordering, caching, instruction-level parallelism, C/C++'s problems with memory aliasing? C is not a low-level language[1].<p>[1]: <a href="https://queue.acm.org/detail.cfm?id=3212479" rel="nofollow">https://queue.acm.org/detail.cfm?id=3212479</a><p>If we were to write a statement analogous to the one advocated for in the article, but talking about mathematics instead, it could be "the fatal flaw of mathematics is that a lot of it is about reducing problems to linear algebra". How ridiculous does that sound?
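The index point is easy to demonstrate (a toy example of my own, not from the article): indices are plain copyable values outside the ownership hierarchy, and each one only becomes a borrow at the moment of use.

```rust
fn main() {
    let mut v = vec![10, 20, 30];

    // Indices are ordinary values: copy as many as you like,
    // all "referring" to the same Vec, with no borrow checker involvement.
    let i = 1;
    let j = i;
    let k = i;

    v[j] += 1; // a mutable borrow exists only for the duration of each statement
    v[k] += 1;

    assert_eq!(v[i], 22);
}
```

This is effectively hand-rolled indirection that sidesteps ownership entirely (at the cost of possible out-of-bounds panics), which is why "everything is ownership" is an overstatement.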
> Dealing with singular values can be very useful, but not everything is a value. Some things are fundamentally “non-values” e.g. instructions/control-flow/declarations.<p>If there are important things in your language that aren't values, that's a problem in your language. You really don't need any non-values; just look at, say, Haskell.<p>> As I have described above, both the OOP and OS both share similarities:<p>> Traditional-OOP: A (linear) value hierarchy of behaviour. The values act as agents.<p>> Ownership Semantics: A (linear) value hierarchy of responsibility. Agents are responsible for values.<p>This is just bollocks once you actually look at the details of it. An inheritance hierarchy is fixed once and for all at compile time, and can't ever change; either there's no way to change which class a given class inherits from at all, or at best there's some uncontrolled dark magic you can do. Whereas a defining characteristic of ownership semantics is that ownership can be safely passed from one owner to another in a controlled way.
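To make the asymmetry concrete (my own minimal sketch): an inheritance edge can never be rewired after compilation, whereas ownership is routinely transferred at run time, checked by the compiler.

```rust
fn takes_and_returns(s: String) -> String {
    s // ownership flows in through the parameter and back out in the return value
}

fn main() {
    let a = String::from("payload");
    let b = a; // ownership moves from `a` to `b`
    // Using `a` here would be a compile error: its value has been moved.
    let c = takes_and_returns(b); // ownership can also move through functions
    assert_eq!(c, "payload");
}
```

Nothing analogous exists for an inheritance hierarchy: a class cannot safely hand its superclass over to another class mid-program.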
This minor observation caught my eye, very well articulated:<p>"There are many criticisms of OOP456789 but my general criticism is that by placing emphasis on <i>trying to solve problems in the type system, it shifts focus from the data structures and algorithms</i>, the core of what a program fundamentally is."
> Ownership semantics are a form of an affine substructural type system1112 which means that they are fundamentally described by a linear logic, and explains why it struggles to express non-linear problems<p>What is a “non-linear logic” or a “non-linear problem” in this context?
> Object orientated programming is a form of misinterpreted and misapplied Aristotelian Metaphysics applied to a domain it was never meant to model<p>Ha! I always felt like OOP is a natural outgrowth of Scholasticism, which is basically saying the same thing.
How confused can one person be?<p>(Also: what distinction is being made between "oriented" and "orientated", and why?)<p>The author appears to believe that the analogies used to invent convenient names for language constructs are supposed to really <i>be</i> the construct, rather than a shorthand description.<p>Language features, once they map to actual semantics, become pure mechanisms with their own ironclad logic, with only an analogical relationship to whatever concept motivated and, usually, named them.<p>Thinking in terms of the concept, or (worse) the word chosen to refer to it, as the author does again and again, can only generate confusion.<p>The author is also confused about the role of inheritance in C++. Unlike in Java, which offers practically nothing else to organize a program, in C++ inheritance is wholly optional, and is typically used only where it solves a specific problem.<p>The author further confuses regular and virtual member functions, and their respective roles in design space.<p>The sorts of confusion seen here are quite common in philosophy, which has great difficulty staying grounded in facts of experience, something I think Wittgenstein complained of. In programming most of us don't have that problem, because the facts of problems to be solved and, more so, of the physical machines we use in solving them provide ground truth.<p>Computer Science perennially tries to cut itself loose from that ground truth, but is always brought to heel by the demand of its graduates for employability.