A great skill I have been developing as I mature is the ability to look at a seemingly suboptimal implementation I receive and not be appalled by the perceived suboptimality, but rather appreciative of the history embedded in the little details. For me, any system producing correct and useful results is perfect under the constraints of its implementation. Even if the developer was not great, someone was great enough to compensate for it. I have seen many such ugly, perfect systems.
You can get pretty damn close if you establish some reasonable constraints.

For me, a perfect system is one that is stable over long periods of time. Achieving that first requires careful selection of tools, frameworks and platforms. Building anything on unstable foundations is never going to end well over time.

We have large chunks of code that haven't been modified since ~2014. How many shiny frameworks and languages have come and gone between then and now? You can be certain I have basically forgotten about this code. Not out of disdain or neglect, but because it just works 100% of the time now.

We have a lot of problems to deal with in front of us, so we find the courage to properly settle things and move on.
The problem for me is that if I settle once, I start to settle at every step of the way, and 0.95^n is pretty small when n is not small. If you settle here and there, you can already end up with something that's 10% of the optimal thing; it's a lot worse if you make a habit of it. I see more honour in sitting with my failure and struggling to be optimal than in settling. I know it's holding me back and that it's mentally and emotionally taxing, but I'm more terrified of descending far below mediocrity.
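A quick, purely illustrative calculation of that compounding (the 5%-per-decision figure and the assumption that losses multiply are the parent's framing, not a measurement):

    # Illustrative only: if each of n decisions settles for ~95% of optimal
    # and the losses compound multiplicatively, the result is 0.95**n of optimal.
    for n in (10, 25, 45):
        print(n, round(0.95 ** n, 3))
    # 10 0.599
    # 25 0.277
    # 45 0.099  -- roughly the "10% of the optimal thing" above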
With the type of software many of us deal with - web, business, enterprise, on general-purpose CPUs - we keep quicksand in our hands and try to shape it continuously to meet ever-changing expectations.

Lessons learned over and over: working code works, and don't let perfect be the enemy of good.

If you can change it with little effort, it's as close to perfect as it has to be.

Of course not all domains are the same, but here I am with quicksand in my hands.
One major reason:

All systems we construct are, at best, models of reality. Because the information content of reality is always bigger, the model must always fail at some point: the information a bounded region can carry is finite (roughly, it scales with surface area - the Bekenstein/holographic bound), so every model is trying to represent, with less information, a reality that contains more.

This difference means you can never have perfect predictive fidelity in any system. You do not even need to invoke human frailty, but even that is the same: the human mind can never fully model and predict the universe for the same reason, so anything we imagine will always have limits and will eventually become wrong as a prediction and a model.
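For reference, the bounds being invoked, in their usual textbook form (R and E are the radius and energy of the bounded region, A its surface area, l_P the Planck length; these are standard results, added here only for context):

    I \le \frac{2\pi R E}{\hbar c \ln 2} \ \text{bits} \qquad \text{(Bekenstein bound)}

    S \le \frac{k_B A}{4\, l_P^{2}} \qquad \text{(holographic bound)}

Either way the point survives: any bounded system carries only finitely much information about what it models.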
Pretty good article and good points. I want to add a few more based on my experience:

* On systems changing: the "perfect" system is a moving target. Have you ever encountered an entire module that could be replaced by a library call? The plot twist is that the library didn't exist 5 years ago when the code was written. Found an ugly hack that's completely unnecessary? It was written to overcome a system limitation that has since been removed. As time passes, what a "perfect" system looks like changes as well.

* On never writing the same implementation twice: we aren't all experts in everything. Being assigned to a project in a tech you have no knowledge of whatsoever is a very humbling experience. You'll make tons of mistakes and the code will be far from ideal, but it's only with time that you'll get to appreciate (and fix) those.

* On Good > Perfect: there is the matter of diminishing returns and leverage. In the previous two points we either can't or don't know how to write better systems. Sometimes we can and we know how, but it's better not to. A while back I wrote a service that's central to our infrastructure. I spent a bit of time making sure the code was clean and we had good test coverage. It has been in production a while and has never caused an issue. Sometimes I look at it and see some obvious faults: interfaces could be tidier, or some code could be cleaner. However, I force myself not to spend time fixing it. Why? It isn't a high-leverage activity. Rather than polishing a service that already works well, I should spend that time fixing open bugs or improving parts of the system that don't work as well. Even if it isn't as satisfying to the ego, the return on investment is much higher.
>I realized that even armed with all the theory and context, the perfect system still remains a mythical creature. In other words - it doesn't exist.

I wouldn't go so far as to say this.

The perfect system does exist; we just need to define it formally. That will take a long time and a lot of research, but we can get there.

Any time you hear the words "designing systems", it refers to some aspect of reality we don't understand, and we go through this "design process" where we attempt to guess-and-check our way to a very suboptimal solution.

Take, for example, the shortest distance between two points. The answer is a straight line, and we have a formal definition for it. We do not need to design the shortest distance between two points. If we complicate the problem and ask what the best way to travel between two points in the United States is... well, then the answer gets much more complicated. Do you take a car? A bus? A plane? Which one is cheaper? Which one is faster? Which one is better for the environment? All kinds of decisions make it too complicated to calculate a solution, so we turn to design. We use "design" to create systems where no closed-form solution yet exists. And in the past decade we've used machine learning as one possible way of finding solutions for these types of problems.

While we have to use design for building planes and such, I do not believe that this will always be the case for programming. I truly believe in a world where it is possible to calculate our program designs. If you really squint... I sort of see a path leading to this world within functional programming and category/type theory.

https://www4.di.uminho.pt/~jno/ps/pdbc.pdf
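A small sketch of that contrast (the numbers, the weights, and the idea of scoring modes with a weighted sum are all invented for illustration): the point-to-point distance is calculated from a closed form, while the travel question only gets an answer after someone picks the trade-offs.

    import math

    def straight_line(p, q):
        # Closed form: no design decision involved.
        return math.dist(p, q)

    # (mode, cost in USD, hours, kg CO2) -- invented numbers
    OPTIONS = [
        ("car",   120, 14.0, 180),
        ("bus",    90, 20.0,  60),
        ("plane", 250,  5.0, 250),
    ]

    def choose_travel(w_cost=1.0, w_time=10.0, w_co2=0.5):
        # One arbitrary "design": a weighted score. Change the weights and the
        # "best" answer changes -- there is no closed form to appeal to.
        return min(OPTIONS, key=lambda o: w_cost*o[1] + w_time*o[2] + w_co2*o[3])

    print(straight_line((0, 0), (3, 4)))  # 5.0, calculated
    print(choose_travel()[0])             # "bus" with these weights, and only these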
I used to think my love of making things orderly and perfect was a boon as a software engineer, where things should be clean and logical - that this was me sticking to high professional standards. But I've come to realize that these OCD tendencies (I realized I had OCD a few years back) are actually a huge setback. I have to actively work to say enough is enough: my text editor, programs, desktop, filesystem layout, etc. are never going to be perfect, let alone bug-free, and neither will anyone else's code.
Everything is about finding good trade-offs and arriving at the best compromises. Time to market, reliability, scalability, complexity, maintainability, costs, value to customers, and more are just variables you need to put into complex equations to be solved.

Experience helps. Inventiveness helps. Trying to do things better than you did before helps. Asking for input helps. Research helps.
This can simply be summed up as: we're human. Humans aren't perfect, and neither is the code we write. At best we can try to enforce checks and balances with CI and git hooks, but since things always change, you'll eventually reach a point where you need to build something in limited time and you'll have to compromise.
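For instance, a minimal sketch of the git-hook side of that (the path .git/hooks/pre-commit is git's standard hook location, but using pytest as the gate is an assumption about the project):

    #!/usr/bin/env python3
    # Hypothetical .git/hooks/pre-commit (must be executable): run the test
    # suite and abort the commit if it fails. Assumes this repo tests with pytest.
    import subprocess
    import sys

    if subprocess.run(["pytest", "-q"]).returncode != 0:
        print("pre-commit: tests failed, refusing to commit", file=sys.stderr)
        sys.exit(1)

And even then, under deadline pressure someone will eventually reach for `git commit --no-verify`, which is exactly the kind of compromise being described.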
Chaos and entropy always creep into any mechanism, but if we don't even try to get it right at the beginning, the system might not even make it to prod.