"It is not simpler. Just organised in a more systematic way."<p>This is the kicker. In most large projects(100k-million+ loc) the largest issue is complexity. Object oriented programming is just systematic way to organize that complexity.<p>The OP says that "In production code, I would split it in several files as appropriate. Or not. It doesn't really matter. What does matter is that we can easily find those procedures whenever we want to inspect or modify them."<p>It 100% does matter, and having all procedures that operate on and change the state of an object in the same file is super helpful. Being able to make guarantees that certain state cannot be changes outside of the organization unit of the class(encapsulation) is super helpful. Being able to abstract over certain related but non identical functions without having to resort to switch statements is super helpful.(polymorphism) And the ability to reuse code when different pieces of data share some identical features and we want to abstract over a portion of those features is useful(inheritance).<p>OOP doesn't give you any abilities you didn't have before, and you can write great readable structured procedural code. But it does give you more tools in your tool box that when used correctly can help structure your code in a way that limits the burden of the software's complexity.
The biggest problem I see with this article is that the author compares Simula-style OOP (prevalent in C++, Java, C#, etc.) with procedural-style programming and concludes anthropomorphism is at fault.<p>When your objects act more like very lightweight processes, a la the Smalltalk or Erlang model, that communicate with each other via messaging, OOP becomes a lot more than a "coat of syntax sugar over the procedural cake" and anthropomorphism suddenly works very well. If you see objects as data structures with methods rather than isolated processes with message-based interfaces (think actor model), it's not all that surprising anthropomorphisms don't add much value. However, when your conceptual model of objects implies each object is a concurrent and isolated process, anthropomorphisms make sense. I sometimes even give them names and imagine them as employees in an organization. Asynchronous communications can be conceptualized as inboxes and outboxes of messages, and synchronous communications can be thought of as one employee getting into a queue himself to personally deliver the message.<p>Sometimes metaphors translate surprisingly well when you become a stickler for making sure all aspects of a metaphor are accounted for.
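A toy sketch of the "employee with an inbox" framing in Python (the `Employee` class and message strings are my own illustration, not a real actor library): each object owns a mailbox and processes messages on its own schedule.

```python
from queue import Queue

class Employee:
    """An object modeled as an isolated 'worker' with an inbox of messages."""
    def __init__(self, name):
        self.name = name
        self.inbox = Queue()
        self.handled = []

    def send(self, message):
        # Asynchronous communication: drop a memo in the inbox and move on.
        self.inbox.put(message)

    def process_mail(self):
        # The worker drains its inbox when it gets around to it.
        while not self.inbox.empty():
            self.handled.append(f"{self.name} handled: {self.inbox.get()}")

clerk = Employee("clerk")
clerk.send("file report")
clerk.send("book room")
clerk.process_mail()
```

A real actor system (Erlang processes, Akka, etc.) adds concurrency and isolation on top of this, but the mental model is the same: you never reach into another worker's state, you send them mail.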
Like many people I went through an "objectify all the things!" phase, but now I find myself moving more and more to stateless service layers, which end up looking like old-fashioned procedural code coated with a sugary OOP shell.<p>Encapsulation and inheritance and stateful objects have their place, but the "object" metaphor can really lead you down the wrong path when taken too literally (as it is in practically every course that starts with "animal" or "shape" or "car" examples).
Regarding Dijkstra's quote: our brains are wired to model people with internal desires. Sometimes this leads to errors, like animism, but used correctly it can be a powerful technique.<p>I've designed concurrent systems with many processes talking to each other. Each process has a name that describes a profession, owns certain parts of a shared database, and transacts with other processes.<p>Sure, that analogy can break down at the limits, but thinking like this gives me a much clearer mental picture of what's happening.
One obvious problem with that comparison: it is a toy example.<p>It is really not about anthropomorphizing objects; it is about where the state is kept, and who can modify it, and how. And the right (or at least the best) choice heavily depends on the problem you are modeling. If you are primarily tracking the whereabouts of students, it is probably a good choice for students to keep track of that information and provide an interface to update it. If you are primarily interested in rooms and who is inside them, it may be better to make the rooms responsible and provide an interface like classroom.enter(student).
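The two choices can be sketched side by side in Python (class and attribute names are my own illustration):

```python
# Choice A: the student owns its location.
# Good fit when queries are mostly "where is this student?"
class Student:
    def __init__(self, name):
        self.name = name
        self.location = "main hall"
    def move_to(self, room):
        self.location = room

# Choice B: the room owns its occupants.
# Good fit when queries are mostly "who is in this room?"
class Classroom:
    def __init__(self, name):
        self.name = name
        self.occupants = set()
    def enter(self, student):
        self.occupants.add(student.name)

alice = Student("Alice")
alice.move_to("room 101")

room = Classroom("room 101")
room.enter(alice)
```

Neither is "more object-oriented" than the other; the difference is which queries are cheap and which invariants (a student is in exactly one room, a room has a capacity) are easy to enforce.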
It is really hard to decide on the best way to model it, since it doesn't resemble any real-world programming problem.<p>But OO culture has an unfortunate tradition of teaching with bad examples that use physical objects as analogies for objects in code. Cars, fruits, students, teachers, whatever. Problem is, objects in code rarely correspond to physical objects. In the cases where physical objects are represented, it is most often in the form of database entities, which are typically treated as data objects, which makes the discussion in the example moot.
I was disappointed by this article in the end. I was ready for a good old-fashioned OOP bash, supported by strong opinions and examples. We should always remain open to new (or old, in this case!) patterns. Instead we get a "tie" given a very simple (not real-world) example.<p>In the end I'm left with the same conclusion. Given that there is no obvious performance gain from a procedural approach, I would opt for a design pattern that is friendlier to the programmer (the human being). It's helpful to model things this way as an organizing approach. And there are obvious benefits to maintaining state within its respective concerns.<p>That isn't to say OOP is the only and best answer. It very much depends on the program, but I was hoping for a more obvious win on the procedural side here.
The author is maybe missing the following point: when there are multiple objects that need to move, and some of the move methods require state, it becomes nice to encapsulate that state within objects, rather than having messy global state and methods that access it. Global state can lead to coupling between methods, if not concurrency issues, and before you know it you end up with hard-to-understand, non-maintainable, non-refactorable spaghetti code.
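A small Python contrast of the two approaches (the `positions` dict and `Mover` class are my own illustration): in the global version any function anywhere can mutate the shared dict, so every caller is implicitly coupled to every other; in the encapsulated version, the state and the rules for changing it live in one place.

```python
# Global-state version: nothing stops arbitrary code from
# writing to `positions`, bypassing whatever rules you intended.
positions = {}

def move_global(name, room):
    positions[name] = room

# Encapsulated version: only Mover's methods touch _positions,
# so invariants can be checked in a single place.
class Mover:
    def __init__(self):
        self._positions = {}
    def move(self, name, room):
        self._positions[name] = room
    def where(self, name):
        return self._positions.get(name)

m = Mover()
m.move("Alice", "room 101")
```

The underscore is only a convention in Python, but in languages with real access control (C++, Java) the compiler enforces that the coupling surface stays this small.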
The article resonates with me.<p>I’ve spent quite a few years coaching and dealing with developers with a heavy electronics background working in OO/C++ environments, and I came to the conclusion that they would be more comfortable and competent in C. All smart people, but the bells and whistles of C++ and OOP were just confusing.<p>I’m surprised at some comments here claiming that OO is ‘friendlier to the programmer’. I would put money on the average developer being able to work their way around even the Linux kernel quicker than a comparably sized C++ code base with all its OO goodness.<p>What OOP was trying to solve back in the day was, and still is, a real problem, but it annoys me how many additional problems ‘full-blown’ OOP introduces.
The article isn't wrong, per se, but it offers a terrible strawman example. It is an example of OO done wrong, not a criticism of OO.<p>The example offers a flawed data model. Each student is in a classroom, but it doesn't make sense to model the location as an attribute of the student -- unless it is actually a coordinate position. If it's a symbolic location (a classroom), it is much more logical to treat the classroom as a container; therefore it is the responsibility of the containers to move stuff around.<p>OO tends to make much more sense once you analyze where the encapsulation should be.
Is this a joke or something? The arguments simply don't hold water. OOP will make the program run faster? Since when was running faster a promise of OOP? Walking each student over while the others wait in the main hall would take forever? Then we are talking about parallelism and the actor model; how would procedural code help with that?<p>"[OOP is] just organized in a more systematic way." Managing complexity and abstraction is a main function of programming. How is having a simple, consistent, systematic way to organize complexity not an advantage?
Using the article's example: maybe if you break through the abstraction of the iterator, you realize that moving the students to a classroom might be a single pointer assignment, doable in a few instructions. In other words, maybe the classes blind you to the fact that the students don't need to be iterated at all.<p>I feel like this is one of my bigger gripes with OOP code bases: overemphasis on abstraction sometimes makes simple and efficient solutions harder to come by.
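In Python terms (a rough sketch of the idea; real C++ code would do this with a pointer or a `std::move` of the container), "moving" a whole group can be a single rebinding rather than a per-student loop:

```python
main_hall = ["Ann", "Bob", "Cid"]
classroom = []

# Iterating version: O(n) work, one student at a time.
# while main_hall:
#     classroom.append(main_hall.pop(0))

# Reference version: the whole group "moves" with one rebinding.
# No per-student work happens at all.
classroom, main_hall = main_hall, []
```

If the class interface only exposes `move(student)`, the O(1) version is invisible from the outside, which is exactly the gripe.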
This confuses OOP/procedural semantics with low-level "CPU" work. The argument seems to be that the programming model of a language does not always expose the efficiency of the underlying implementation. And this is also true for "anthropomorphic" models.
Not really a great example; it's a straw-man argument because the code is so shallow. It's really about having relevant code in the relevant place. So yes, the student object could have that code placed in its class. That would be truly effective if you had, say, two kinds of students: one in a wheelchair and one that can walk. They'd both implement the function to move to the classroom, but internally one would walk() and the other would roll(). That's polymorphism. In the procedural case you'd have to check if (student.inWheelChair == YES) everywhere, and it would get complex.
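The two-kinds-of-students case, sketched in Python (class names are my own; the comment's example uses walk()/roll(), which I've collapsed into a recorded `mode` so the behavior is observable):

```python
class Student:
    def move_to(self, room):
        self.room = room
        self.travel()  # dispatched to the subclass at runtime

class WalkingStudent(Student):
    def travel(self):
        self.mode = "walked"

class WheelchairStudent(Student):
    def travel(self):
        self.mode = "rolled"

# One call site, no `if student.inWheelChair` check anywhere.
students = [WalkingStudent(), WheelchairStudent()]
for s in students:
    s.move_to("room 101")
```

Adding a third kind of student means adding one subclass, not hunting down every `if` in the procedural version.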
Still, I would like to see a concrete example where OOP is better. I thought that OOP had advantages in larger projects with many kinds of data, where it helped organize the code to make it more readable and editable. At that point you need to be careful how you use it to keep it manageable. I'm not sure how small an example could be before it started to show some benefits.
Bad programmers write bad programs. It's not the methodology. Good programmers have successfully written a lot of software in an OO design. That doesn't mean OO is the best solution for <i>you</i>; it just means that you can't judge a book by its cover, and in this industry, we're covered in bad programmers.
Completely disagree with the Dijkstra quote. Mentally modelling command flow as a 'person' makes total sense, as long as you keep in mind the abilities/limitations of the 'person' you're talking about.<p>That said, I agree that OOP is anthropomorphism gone crazy.