Programming is hard. A few years back, in the '90s, when most of the code in my field was still mostly structured, and bad at that, a lot of people were saying that OOP would sort it out. I was skeptical, not because I'm resistant to change, but because it was obvious to me that doing OOP right was (and is) very hard. Not that structured programming was easy.<p>Fast forward to today: programming is still hard, and it has probably gotten a lot harder. OOP did not sort it out. Most of the code in my field is object-oriented, and bad at that. A lot of people are saying that FP will sort it out. I am skeptical, not because I'm resistant to change, but because it is obvious to me that doing FP right is (and will be) very hard. Not that object-oriented programming is easy.<p>Am I alone in thinking that, fast-forward a few years, once there is enough rotten FP code written, we will be reading people ditching FP because it's the root of all evil?<p>The fact is that programming is hard. Working with legacy code is hard. Learning a paradigm well enough that the code you write in it is not total crap is very hard, and requires years of practical experience if you are proficient at another paradigm, let alone if you simply skimmed a paradigm and moved on because it was too hard...<p>It's great that people want to move on from single-platform, single-paradigm monocultures, with one caveat: breadth without depth is shallowness.<p>I'd like to read people treating languages and platforms as tools and not as cargo cults. You don't read carpenters writing that they'll ditch hammers for screwdrivers because the old cupboard they are fixing uses nails. You read carpenters debating the pros and cons of using hammers versus screwdrivers. And you read better carpenters debating how cupboards are designed, because that is ultimately more important than whether they are glued, nailed or screwed.
The title "I'm sick of object-oriented programming" is misleading. The author is sick of their work with Ruby and the fact that it is OO is one amongst several complaints.<p>I use a mixture of OOP and FP day to day (mostly with the Java/Scala/Clojure family) and have to say that both have their place. In large projects I appreciate OO design patterns for clarity and flexibility (though maybe that is just because it is what I'm used to), and FP's mandate on immutability for the same.<p>Finally I have grown to whole heartedly share the author's dislike of dynamic typing. I find Scala, not its more "pure" FP cousins Clojure and Haskell, to provide the most productive balance of the above.<p>Anyone else like me: tried both and ended up walking the middle road?
Ok, I get it. Mature Ruby codebase sucks. Integer division changes are surprising. But what does it have to do with object-orientedness?<p>> break functionality into lots of small objects
> use immutable objects as much as possible (e.g. using thin veneers over primitives or adamantium)<p>are the guidelines that I'm using in C#.<p>> separate business logic into collections of functions that act on said objects (service objects)
> minimize mutation and side effects to as few places as possible<p>How does separating out functions minimize mutation?<p>> thoroughly document expected type arguments for object instantiation and method invocation with unit tests (mimicry of a static type system)<p>Yet another argument for static typing...
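<p>To make the first two guidelines concrete, this is roughly the shape I mean (sketched in Scala for brevity; the C# version with a record and a static class looks almost identical, and every name here is made up):

    // A thin, immutable veneer over a primitive: an Email is not "just a String".
    final case class Email(value: String) {
      require(value.contains("@"), s"not an email: $value")
    }

    final case class User(name: String, email: Email)

    // A "service object": business logic as functions acting on those values.
    object UserService {
      def normalize(user: User): User =
        user.copy(email = Email(user.email.value.trim.toLowerCase))
    }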
The problems exposed by languages like Ruby are not necessarily problems with dynamic types. For example, you can reason about purity, side effects and whatnot in languages of the LISP family as well. Clojure, for example, is much, much saner than Ruby on all the points listed by TFA. Static typing, for example, is by definition anti-modularity and anti-adaptability - and note that I prefer static typing over dynamic typing and Scala over Clojure. On the issue of static versus dynamic, one has to view each as a different school of thought and apply one or the other depending on the needs of the project.<p>Uncontrolled side effects are the real issue behind most of the accidental complexity that we are seeing. We all badly need to adopt more abstractions and techniques from functional programming.<p>Also, changing languages or idioms doesn't necessarily help with the exposed problems. We also need a change of mentality in how we do software development. Let's face it: when we need to do something right now, urgent, that should have been done yesterday - no matter the language, no matter the abstractions or idioms involved - we are bound to do stupid shit, because there's accidental complexity and then there's inherent complexity, and nothing saves you from inherent complexity other than thinking really well about the problem at hand and splitting it into simpler, more manageable parts.<p>This is also why TDD is a failure and complete bullshit as it is advertised. Tests don't save you from doing stupid shit. Tests don't tell you whether your architecture is any good; they only tell you if your architecture is testable. Tests don't prove the absence of bugs; they only prove their presence. Tests only tell you if you reached a desired target, not what that target should be. And perhaps most importantly, since this touches the core of their purpose: when uncontrolled side effects are happening in your system, tests are a poor safety net - anybody who has had to deal with concurrency issues can attest to that.<p>Agile methodologies are also trying to paint a turd. Yes, we should deploy or publish as soon as we've got something to publish. We should pivot a lot. We should communicate more with the end users and within the team. And so on and so forth. But it's an indisputable fact that some problems are hard enough that they can't be solved by puking code and tests in a matter of hours or days, or by adding more people to the team.
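<p>To be clear about what "uncontrolled" means here, the discipline I have in mind looks something like this minimal sketch (hypothetical names): keep the core pure and push all I/O to one thin edge.

    object Report {
      // Pure core: no I/O, no hidden state, trivially testable.
      def summarize(lines: List[String]): String =
        s"${lines.count(_.nonEmpty)} non-empty lines"
    }

    object Main {
      // Effectful edge: all side effects confined to one small place.
      def main(args: Array[String]): Unit = {
        val lines = scala.io.Source.fromFile(args(0)).getLines().toList
        println(Report.summarize(lines))
      }
    }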
I inherited a 6-year-old Rails codebase and worked on it for 6 months straight. I am usually against rewriting things, but this was like going into one of those abandoned houses and pulling off layer after layer of crap to find more termites, cockroaches, etc. Every time I had it 'working', I found all kinds of weirdness, usually to do with gems forked to the old dev's personal GitHub, which were not only old/ancient but had stuff bolted onto them so they could not be upgraded in any way.<p>So, rewriting now... in F# (and C# where handy). I like Ruby, but for projects like this I have seen it fall over too often; most dedicated Rails companies I know deliver great projects, but they don't have to do long-term support. I would like to see how that would work out, as I see companies in the wild struggling to find devs <i>willing</i> to support their codebases. This is not a gripe with Ruby/Rails per se, but (in my experience!) Python programmers who do large Django projects (or other frameworks, but I encounter mostly Django) are more disciplined, so less goes haywire, and there are so many people who can do JS that you'll always find someone willing to work on your crap. Ruby is in a niche but popular spot: hard to find coders, some coders are not so good, and yet enormous projects are created in it.
I get this periodically. I have to walk away and go do something else for a bit before I go nuts. This is usually after debugging some patternitis rat's nest that is completely over-engineered.<p>However, occasionally amongst the muck a beautiful and elegant thing pops out and makes it all OK. That event is getting rarer for me as our product evolves, though.
The older I get, and the more I watch and help programmers (including programming myself), the more I believe this is an OO/mutability problem.<p>We created OO, at least the C++ version, in part as a way to create boxes of code -- a reusable module system. But what happened was that we created a huge stinking pile of mutability and hidden dependencies. If I'm looking at a method on an object that takes one parameter, I literally have no freaking idea what the current state of the object is, or what the current state of the parameter is or might be (and let's assume the parameter is itself an object or a graph of objects). In fact, it's impossible for me to reason about what the hell I'm looking at. That's why we are forced to use the debugger so much.<p>Pure FP takes all that away. I have data going in, I do a transform, I have data going out. If my data is clean and my transforms are broken down enough to be understandable? It just works.<p>We keep trying to bolt solutions like TDD onto a fundamentally flawed model of development. Damn, I hate to say that, because I <i>love</i> OOA/D/P. I'm not giving it up, but my current practice is to start with OCaml/F# and pure functions, then "scale up" to objects as systems get more mature. If I've got a big closure, I'm probably looking at an object. So far I've found that scaling up is not necessary: I get more mileage from composing my functions into command-line executables, a la Unix, than I do from sticking everything together. But that could change.<p>It's right to be discouraged. There's something deeply wrong here. A big change is coming to software development.
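<p>A toy sketch of the contrast (Scala here, but the shape is the same in OCaml/F#; all names are made up):

    // The OO version: to reason about process() you must know the prior
    // state of `items`, of `status`, and whatever `inStock` closes over.
    class Order(var items: List[String]) {
      var status: String = "new"
      def process(inStock: String => Boolean): Unit = {
        items = items.filter(inStock)
        status = "processed"
      }
    }

    // The pure version: data in, transform, data out. Nothing else to know.
    final case class PureOrder(items: List[String], status: String = "new")

    object Orders {
      def process(order: PureOrder, inStock: String => Boolean): PureOrder =
        order.copy(items = order.items.filter(inStock), status = "processed")
    }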
I am simply not convinced that static typing prevents as many bugs as its proponents claim. The bugs I have are very rarely related to the type of a variable. They are incomplete or incorrect implementations of business rules. The type system doesn't solve that.<p>The other point of the article is that dynamic typing makes code rot. I am not convinced that static languages do any better. Code rot is not a problem of the type system; it's a problem with the programmer writing the code, or the environment he's in. Let's see what happens when he inherits a 5-year-old Haskell codebase.<p>That said, I think the OP will do well to learn other languages. That always helps.
The problems I experienced with TDD in Rails applications (it's about the only way I use Ruby) didn't derive from object orientation or dynamic typing. They came from time and money constraints, especially in those one-hour or one-day rushes to get a small new piece of functionality into the code base and deliver it to production. If a company has internal development staff, the technical debt can be repaid later on. If a company works with consultants paid by the hour, there might never be enough money to do it. An application with reasonable test coverage can quickly turn into an application with 100% broken tests and weeks of work to fix them all. But this isn't about Ruby or OO; it's about customers and the amount of money they want to spend, no matter how good you are at explaining to them what's happening to their software.<p>I have a Rails 3.2/Ruby 1.9.3 application that will be hard to port to Rails 4/Ruby 2.1 because of that. The point of the original post could be that writing that application in Haskell would have required less work on tests, so we would have less technical debt by now. Maybe. But when integration tests break (originally run with Selenium) because the UI changed so much, and there are many new functionalities to test, I don't think the language can help. It's back to having enough budget.
I think it's fair to say that 'everything is an object' was always a bad idea. I'm a little surprised that it took Ruby devs this long to realise that (Java devs still seem fine, though).<p>My favourite example is the 'Utility' module that almost every project ends up with at some point. Why would that ever be an object? In C++ you'd open up a namespace and throw a bunch of free-standing functions into it. It doesn't have to be more complicated than that. Classes are supposed to be one method of abstraction (among many others) that programming languages offer us to structure our code.<p>The real problem with OOP isn't objects, though. It's inheritance. Complex inheritance graphs are probably the best way to couple supposedly independent parts of your code as tightly as possible, and they're notoriously hard to wrap your head around. I guess a good example is component-based scene graphs (again, in C++). Whenever you're implementing some sort of graph, chances are you start by writing a class called 'Node'. That's fantastic as long as you stop the chain there. Each individual object in your scene should be a subclass of Node that has any number of independent components (mesh component, audio component, AI component, what have you) attached to it. Favoring composition over inheritance is always a good idea as far as I'm concerned, and I'm happy to see languages like Rust adopting it.<p>I don't want to get into the whole TDD thing right now. All I'm gonna say is that assuming your code didn't break anything because all tests passed is a risky business, and testing JavaScript UIs might not be as useful as you think. Having said that, Cinder (<a href="http://libcinder.org" rel="nofollow">http://libcinder.org</a>) had a bug in its matrix multiplication code not so long ago that could have easily been detected with unit tests.
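<p>In code, the composition end-state looks something like this (my example is C++, but here is the same shape in Scala; all names hypothetical):

    // Composition over inheritance for a scene graph: the inheritance
    // chain stops at Node, behaviour comes from attached components.
    trait Component { def update(dt: Double): Unit }

    final class MeshComponent extends Component {
      def update(dt: Double): Unit = () // draw geometry here
    }
    final class AudioComponent extends Component {
      def update(dt: Double): Unit = () // play sounds here
    }

    class Node(components: List[Component], children: List[Node]) {
      def update(dt: Double): Unit = {
        components.foreach(_.update(dt))
        children.foreach(_.update(dt))
      }
    }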
He's inherited several Rails codebases and has to maintain them.<p>Sounds like the 'other people's code' problem to me.<p>Just like all non-trivial abstractions are leaky (Joel), I guess all non-trivial applications are 'hairy'.
I was sick of object-oriented programming from the first moment I had to deal with an API built with OO that replaced a previous API without it.<p>It was the WordPerfect 6-something language, replacing the WP5.1 macro language.<p>I had built a perfectly reasonable autocorrection engine; it handled typos, misplaced commas and periods, adding accents, and a few things more.<p>The object-oriented macro language in WP6 removed any possibility of making my macros work, in exchange for adding silly graphic buttons.<p>Now, I like C++ and D, but they are not 'pure OO'. Therefore I hate Java.
Most of the points he makes are a byproduct of expecting too much from other people (e.g. gem authors), or of misunderstanding why something exists (e.g. the Ruby standard library, which is a motley crew of modules meant to let people write hacky scripts a la Perl).
This is why I'm getting away from magic/cleverness as much as possible. Clever solutions usually rely on the dark corners of a language/ecosystem, which might break in the future.
When Edsger Dijkstra pronounced <i>Program testing can be used to show the presence of bugs, but never to show their absence!</i>, had your Pa even met your Ma?
I suspect Rails the framework is as much to blame here as Ruby, OOP or dynamic typing.<p>The siren song of static typing is loud today. Robert Smallshire has given convincing -- and even evidence-based -- arguments that it increases development time while catching only a very small percentage of bugs:<p><a href="http://vimeo.com/74354480" rel="nofollow">http://vimeo.com/74354480</a>
I guess we all agree that one way to get around the burden of reasoning about mutable state is restricting the language.<p>I find that an equally reasonable way is to be able to add new language constructs that make this reasoning far more (humanly) tractable. You can do this with functions, objects and so on, many a time, but without proper macros you may lose far too much in terms of performance (in a dynamic language).
I've never written tests for my projects. I hate writing tests because, almost certainly, as I'm creating new software, things change. I'm focused on the changing part all the time, and I hate being bogged down by having to write a script that tells me everything is fine.<p>On a large-scale project, I understand the need.
The original headline is this:<p>"Sick of Ruby, dynamic typing, side effects, and basically object-oriented programming"<p>He's sick of OOP in Ruby, not of OOP in general. Ruby's dynamic typing means you have exactly one type, "the dynamic type". The compiler leaves you entirely on your own, as static analysis is impossible. That means many more tests, assertions, and runtime error conditions. Most OOP techniques have evolved out of, and rely on, strong static typing; Ruby isn't that kind of language.<p>OOP in Ruby is the inevitable result of many Ruby projects growing in scope and size. When you reach that point, a better way is to move part of the project out of Ruby and into a language that can handle that scope and size.<p>P.S.: Any communication with the outside world is a side effect (e.g. network, file, or database I/O), so sick of it or not, you had better get used to it. Haskell masking side effects as monads through a language loophole isn't making things significantly better.
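<p>A toy illustration of the difference (hypothetical code, Scala here): the kind of error Ruby can only surface at runtime, and therefore via a test, is rejected before the program ever runs.

    final case class User(name: String)

    object Greeter {
      def greet(u: User): String = "hello " + u.name
      // "hello " + u.nmae  -- a typo like this fails at compile time,
      // not in production; in Ruby it needs a test or assertion to catch.
    }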
I'm surprised those shiny new languages still can't do what was considered normal in good old BASIC. And all programmers nowadays go "Meh! BASIC. Considered harmful! Will permanently damage your brain!!!"<p>Come on people, FreeBASIC has really great options for object-oriented programming and could use your skills.