I see and hear a lot of complaints around inheriting code bases that are less than stellar. If anyone has, I'd love to hear about cases where you inherited a "good" code base, whatever that may mean: awesome test coverage, good documentation, solid organization, consistent styling/formatting, abundant best practices, whatever!
I would argue that a lot of the time, people do not inherit a "bad" codebase. They inherit a codebase that successfully made enough of the right quality-vs-speed tradeoffs to survive long enough to be inherited by someone other than its original author.

It's easy to spend a day with a codebase (that others spent years writing) and call it "bad". I'd argue it even feels pretty good to take that stance of superiority. But you're viewing it with literally zero of the context of the time in which it was written. You see none of the constraints, none of the pressures, none of the alternatives presented in the moment.

Particularly for a young or small company, if you're "inheriting" a codebase it's because it's existed and been in operation for a while. Yes, it may still be bad. But I would advise taking time to consider whether it's actually, within the lens of yesterday (or 2 years ago)... good?
The codebase I'm working on now is what I consider an exemplary Ruby on Rails project. It is 14 years old and still going strong. It is structured exactly like you'd expect a Rails project to be structured. The gems the authors chose have been reliable so far, with few exceptions. We regularly step into sections of code that are 5 or even 10 years old, and modify/extend them with no issue. Even brand new programmers (fresh out of boot camp) find it easy to work with. It is a success by all standards of software engineering.
I've inherited code where I thought sections were well written, but probably not an entire project.

"Good" is incredibly subjective, and subjectivity is temporal in nature. There have been times when I thought code wasn't "good" *at the time of inheritance*, but several years later, found appreciation for it. Perhaps not enough to consider it *good*... but some appreciation.

I've tried to let go of classifying code as good or bad, or any other subjective means.

Non-exhaustive list of things I'm more concerned with these days:

- How long does it take for a new hire to onboard and be productive?
- How quickly are we able to respond with bug fixes?
- Do we have enough test tooling to have high enough fidelity coverage to say we've implemented something, or that an issue is fixed?
- How observable is it in production?
- How reproducible are any issues?
- How accurate is the documentation?
- Is there enough test coverage to be able to reimplement a portion with confidence?

You can probably take all of those things and distill it down into a "good/bad", but I think in general it's better to look at specific concerns that are managed over time.

I'm less concerned about the current state of the code, and far more concerned with how easily I can change the state of the code without incident.
I once inherited an old ASP.NET Web Forms code base; I was a solo developer at that time.

I hated it; to my young and inexperienced eyes “everything was a mess”. It didn’t follow any good practices. The code didn’t have any layers, most of the code was written in the view, and it didn’t follow the DRY principle; it had just a few libraries to share some code, so there was a lot of repetition.

Of course, I started “improving” it. I chose a layered architecture and wrote several generic classes to implement the repository pattern. Since I was a solo developer it took me a while to migrate big pieces of code to my new implementation.

Bugs came and went and I started noticing something. Even though my implementation was “beautiful”, I hated to fix bugs in the new and “improved” code base. To debug I had to jump around a lot. The bug was always hidden. And sometimes my changes could affect other places in the code base, so I had to be really careful.
On the other hand, fixing bugs in the old implementation was pretty straightforward and it rarely broke other places.

The project came to an end. The company bought a CRM that had more features, so they shut the old code base down.

All my efforts to improve the code were a waste of time. They wasted my time, not only because the code was not going to be used in the future, but because even if it had lived longer my changes added unnecessary complexity to the code base. This complexity made it difficult to work on a really simple project, it made it fragile and it was simply not fun.

I learned a lot. I don’t judge code bases anymore. Now I can see the benefits of “ugly” code.
Not someone else's code, but I had the strange experience recently of revisiting an old project of mine (a DNS server written in TypeScript + Deno) and being intimidated by *the quality of my own code*.

I had those familiar feelings of "I could never write code like this" or "I'd never have thought to do it like that!" or "This person must really know their shit". Turns out it was me all along and I'd just forgotten I'd done it all.

Grokking it and modifying it after 2-something years was actually simple, as I'd left good comments, abstractions, and even reasonable unit test coverage (even integrated with GitHub Actions that run on push), so changes were a breeze.

There has to be a life lesson there somewhere.
I worked on some ancient FORTRAN for CommBank (formerly Commonwealth Bank of Australia). Most of the details are still under NDAs, but the project was amazing.

Every variable and function was explained in a series of physical manuals in excruciating, and up-to-date, detail. The manuals had index lists by name, function, type and concept, making it ridiculously easy to find exactly what you were looking for. The documentation felt almost like reading Knuth's The Art of Computer Programming. It explained not just how a function worked, but also the dependencies and how they worked on that particular hardware, including pieces of the FORTRAN standard library.

On top of that nugget of most people's fantasies, there was actually a test suite! It wasn't written by the original authors, and had been pieced together over the years. But it was a test suite for code running on a mainframe the size of a small room. Since when do you ever get tests for code written in the 70s!?

Working for CommBank was hard - the standards for absolutely everything that they have and do are A-grade. A single complaint from a coworker or customer can land you in front of a review board. But the work they produced, at least what I saw, is absolutely worth it.
Wait, that's an option in the probability space?

Actually once I was lucky enough to join a company where the code was written well, had good test coverage, the team implemented good code reviews, required tests on bug fixes and features, etc. The pandemic was the five hundred year storm to their leveraged business model (hospitality business targeting business travelers) and they folded within months.

At the next job I created the backend code from scratch and was the primary maintainer. I left it in pretty good shape with good test coverage. There were plenty of things I'd have done differently given the chance though. Hopefully the next guy doesn't curse me.

At the current job things are a mess again. I'm working on improving it. Git blame has me at having modified/added 8000 lines in the backend Python code since I joined three months ago. Slowly digging out of the hole as it caves in around me.
Inherited an *awful* stack of hacks once; given a list of names it printed "hello my name is" badges on a dot matrix printer. In a big, friendly font. This was before PrintShop [1], some little single purpose app running on Victor 9000 "almost PC compatibles" under DOS 2.

The list came out of a COBOL "DBMS" system that had a 64kb table limit. They had more names to print than that.

The stack of hacks consisted of scripts that ran through all the tables of the "members list" DB (multiple floppies were doable, by the time I got it they had a hard drive), creating text "dump" files of the bits that went to the printer; more scripts to assemble those dumps and reformat them (in GWBASIC); and finally a script that fed one record at a time through the pretty printer formatting program and printed it out.

My contribution was figuring out a way to feed the "pretty printer" multiple records per run instead of invoking the chain once per record, which saved days for the entire print run.

It was a horrible stack and it wasn't fun to work with and I cussed the people who had implemented it; however: given the constraints when it was built and the resources of the people using it, it was *incredibly cool*. Until I saw it I'd have said it wasn't possible with that collection of parts, but it functioned as required and did so for a decade. Eventually it was replaced with WordPerfect doing a mail merge operation; the people wearing the name badges complained that the font wasn't as pretty.

[1] https://en.wikipedia.org/wiki/The_Print_Shop
The two best:

1) Working (professionally) on a project that happens to be open source: https://github.com/metabase/metabase/

2) Coming to a Rails project mostly written by a very senior 7-person team. There was still a fair amount of jank (mostly from seed-round assumptions that weren't holding up when I joined after the Series A), but it still followed The Rails Way and nothing was too gross. It also helped that everyone important was still at the company and available for questions.

PS on (2): Seven of those original eight devs are now gone, including me; the median years of experience has gone from 8-10 to 0-3; the team size is at least 40 and I think more; and my understanding from friends still at the company is that the codebase is in general a flaming mess.
I worked for several years at Ankama on the game client of the MMORPG Dofus. When I arrived, the code base had already undergone an entire refactor and a change from ActionScript 2 to 3. At that time the developers had spent some time to break down the code into well defined libraries implementing design patterns to solve issues they had before the refactoring efforts. They'll forever have my gratitude for that.

The end result was code that was easy to maintain and very extensible. But it also taught me a lot about how to architect things, what patterns to pick, etc. To this date, I've never worked on games with a code base as good as this one. Instead I'm doomed to see all the problems those games have in terms of architecture... (sometimes I can help solve some of them, when I'm granted enough time, but it's rarely a priority for companies, since there are no user facing changes and it can induce regressions)

I had the chance to port the code to C# for some R&D in Unity, although I didn't really know C# at that time... but because the code base was so well split into libraries that made sense for the game, I could port them and test them separately and was able to progress much faster than expected. First with a client running as a console app, then later in Unity.

My love for that code base went as far as giving a lecture at the local University about its architecture and the patterns used in it :)

Fun fact: The libraries in Dofus are named after Discworld references; the world rendering library is named Atuin, for example. A terrible idea in retrospect for new developers joining the team who had no idea what Discworld was!
Absolutely, and there's a common thread in them: context. When I'm preparing to hand a project over, the README will be updated with design constraints (maybe even the RFC/project launch documents) and why certain decisions were made; basically explaining anything which would raise an eyebrow.

I do this because I received a project with this note, and it stuck with me as an excellent idea; a letter to future explorers.
On first glance, no.

After a sufficient time spent meditating on the construction techniques of that project’s Chesterton’s fence, most codebases I’ve seen become much more reasonable. Once I’ve spent enough time using it myself instead of just reading it, design decisions or organic evolutions start to make sense.

A good-faith humility is a good attitude to have as a reader, but it only really comes with professional maturity. Un-curling the sneering lip most of us seem to pick up in our late teens takes, in my experience, about a decade in a relatively attentive person.

Some are still truly horrific, but they’re relatively rare.
IMO, none of the things OP listed here are what makes code "good": test coverage, documentation, organization, consistent style, "best practices". You can have one or all of those aspects in any project and it can still be a nightmare to maintain.

What makes code good is "How hard is it to fix issues?" and "How easy is it to understand?" You can have well documented code which ultimately is hard to understand. You can have well organized code which ultimately makes fixing issues with said code hard.

In order to know if a code base is good, you have to experience maintaining it. You can't (easily) know a code base is bad with cursory glances at mental checkboxes about it.

The metric I use for good code nowadays is "How often does this wake me up in the middle of the night?" Good code is code that doesn't cause my employer to pester me off hours.
Yep, all of the above! The main front end code base at the company I joined earlier this year (https://deep6.ai/) is excellent. It is some of the best code I've worked with in my 13-year career. A lot of thought and care went into its design, implementation, and stewardship. The early investments that were made into the TypeScript-based project's quality make it very easy to extend, iterate on, and improve.
A very long time ago, I worked on a very large code base for a medical imaging platform that ran on multiple operating system platforms and had a shelf-life of 10+ years. The code had two personalities.

First, all the scaffolding/framework and inter-process/server protocol code was meticulous and beautiful. As a young C/C++-favoring engineer, I was very impressed and my design thinking was influenced a lot by this, for a long time.

Second, there were these deep algorithm implementations for digital image processing pipelines and also DICOM data protocol implementation stuff. These "plugins" or "processing elements" were heavily optimized to squeeze every last microsecond out of the start-to-finish execution wall time. So the code was hard to follow just by reading. Also, there was no way to understand this code without the domain knowledge (studying the Matlab implementation of the image processing algorithm, and understanding that required having a basic theoretical understanding of signal/image processing concepts).

But this was great, as different engineers/teams could work on different deep algorithm processing elements and the framework/scaffolding ensured there were no leakages or undue blast radius from any messy bugs.

A decade later I found myself staring at a very large PHP codebase. This codebase had a lot of sprawl but no depth. It was messy (bad idioms, wasteful of resources, functionally buggy, etc.) but it was easy to read and understand. PHP + the framework/scaffolding we had was very forgiving of these mistakes. The application would continue to chug along even with a lot of warnings and some data-induced errors. It took some mental attitude adjustment to not lose it every time I saw crazy stuff like 3-level nested for-loops that would unpack an array of huge serialized objects and iterate over their elements, only to not use the results for any final response rendering at all.
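To illustrate the framework-vs-plugin split described in that first codebase: here is a minimal sketch of the general idea, where the scaffolding owns sequencing and error containment while each hand-optimized processing element only implements its own stage. All names and structure below are hypothetical, invented for illustration; nothing here is from the actual medical imaging code.

```cpp
#include <exception>
#include <iostream>
#include <memory>
#include <vector>

// Hypothetical frame of pixel data flowing through the pipeline.
struct Frame {
    std::vector<float> pixels;
};

// Each "processing element" implements one optimized stage and nothing else.
class ProcessingElement {
public:
    virtual ~ProcessingElement() = default;
    virtual const char* name() const = 0;
    virtual void process(Frame& frame) = 0;
};

// The scaffolding owns sequencing and error containment, so a messy bug in
// one element cannot take down the whole pipeline run.
class Pipeline {
public:
    void add(std::unique_ptr<ProcessingElement> element) {
        elements_.push_back(std::move(element));
    }

    void run(Frame& frame) {
        for (auto& element : elements_) {
            try {
                element->process(frame);
            } catch (const std::exception& e) {
                std::cerr << element->name() << " failed: " << e.what()
                          << " (skipping stage)\n";
            }
        }
    }

private:
    std::vector<std::unique_ptr<ProcessingElement>> elements_;
};

// One stage; the framework neither knows nor cares how it works internally.
class GainCorrection : public ProcessingElement {
public:
    const char* name() const override { return "GainCorrection"; }
    void process(Frame& frame) override {
        for (float& p : frame.pixels) p *= 1.5f;
    }
};

int main() {
    Pipeline pipeline;
    pipeline.add(std::make_unique<GainCorrection>());
    Frame frame{{0.2f, 0.4f, 0.8f}};
    pipeline.run(frame);
    std::cout << "first pixel: " << frame.pixels[0] << "\n";  // prints 0.3
}
```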
Actually, about 15 years ago I joined a gaming company in need of a job. Not the "cool" gaming kind but the slots/gambling kind. What could I possibly learn from this shady product that was a scourge on society, I thought. You'd have to be desperate to be working here, I thought. Must be a shambles of a code base with sprint after sprint of crunch times, I thought.

Gotta say I was amazed at not only the quality of the code base, but also the engineers there. Very balanced, mature, collaborative, kind, and really the one place I learnt most about *good* software engineering. It was a C++ codebase with *just* the right amount of abstractions that you could expand when needed. No fancy syntax magic. Impressive debuggability!

Folks at WMS Gaming, thanks for an amazing learning experience and patience despite me being an entitled little *$h1t* who was a pain to work with!
Yes.

It wasn't that the code itself was badly or well written so much as the concepts: how each process was isolated from the others, how the communication protocol was well established, and how the external dependencies were kept to a minimum.

Data structures were chosen for ease of understanding rather than (run-time) efficiency, which was the appropriate choice for this application. The application(s) relied heavily on various other scripts and the operating system to establish a (secure) network communication, offloading a lot of the complexity from the application to the operating system (where it belongs, in my opinion).

I recently updated/ported the code base to work on more modern hardware and, besides some minimal updates and fixes, it worked well.

The code base was 10+ years old and mostly written in a combination of C and shell.
You have to be *really* arrogant to assume within 5 minutes of looking at somebody else's code that you know how to do it "better".

I am sure you know how to do it *differently*, and more like how *you* prefer to do it, but that is not the same thing at all.

Legacy software is *successful* software. You are never asked to maintain failed software. Only software that is successfully generating $ years after it was originally developed.

So be humble. Show a bit of respect. And don't automatically assume that you are some super genius who knows how to do everything better.

And remember that other *equally* unenlightened developers will look at *your* code and go WTF and complain about how crap *your* code is. Don't be like them.
I've now gone through several inherited codebases. At first I called them awful; however, as I've gotten older, I realize that code is harder to read than most people think. We are trained to follow a particular set of styles, linters, testing strategies, etc.

When we sit down with an inherited code base, we don't know what they were doing, so it looks terrible. There might be a very simple key that you'll never figure out until you've spent a lot of time reading the code. If you go adding code that doesn't jibe with the hidden premises, there will be conflicts that you create, which you'll blame on the old code, even though it was your lack of understanding that created them.
Surprisingly the best codebase I've taken over was written by a uni student. Everything was strikingly simple to comprehend and yet perfectly abstracted.

I think their inexperience and lack of hubris made them go to a lot of effort to be idiomatic in a language they were using for the first time.
I got to work on a (very) brief project -- I had two weeks to add features to a product to demo at a trade show. The person who had written the code I inherited had done a *really* good job of setting up a nice architecture, and used a library I had not encountered before, but which turned out to be really nice. So extending it was (nigh) trivial, and changing things was really straightforward.

The project lead was incredibly impressed that I was able to check in a solid contribution on my first day :-)
Maybe "inherited" is a strong word, but some of the open source projects I contributed to were beautifully crafted. Django is one such example, on all fronts — docs are great, there are tests, and so on.<p>At work, not so much, as it's mostly very rushed, badly designed software.
I think we are typically not as good at spotting good code as we think we are.

For example, I inherited a large code base a while back and thought the classic "I can do better than this". After a few hours of hacking together a demo, only then could I appreciate the existing code base and how nice it actually was.

I think in general I look for a good level of abstraction - but not to an insane level. The best measure is how quickly you can understand it, and how long it takes for you to contribute to it.
Yes, I work on an 11 year old Ruby on Rails codebase, and except for a few sections that are a little overengineered and crufty (Asset compilation..... enough said), the bulk of the codebase and the entirety of the "business logic" is really easy to understand and navigate. About the only time new engineers ever have problems finding something is when it's defined in a "has_" macro with dynamic interpolation, but those cases are pretty rare, and you generally learn to recognize them over time. And certainly it makes *using* and *sharing* the code much easier when those larger pieces of repetition are extracted (for example, we have an "acts_as_markdown :column_name" macro that defines e.g. column_name_as_html and column_name_as_plain_text methods). And the validation/callback structure makes it really easy to add new features without having to worry about breaking old ones, and the testing experience with RSpec is second to none.
> ... a "good" code base, whatever that may mean: awesome test coverage, good documentation, solid organization, consistent styling/formatting, abundant best practices...<p>IMO many legacy systems were coded to "good" standard for their time. This reflected the choices of idioms, styles, and robustness criteria.<p>Properly maintained codebase carries those conventions forward. When it's augmented to present day expectations, it's supposed to be done in non-destructive way possibly. There could be seams but not scars all over.<p>In my experience, the onus is on the inheritors to try and make and effort to keep the legacy code alive yet consistent.<p>Alas, those assigned to maintenance are often too junior to recognize the consistency let alone care about it. Thus the codebase degrades into a patchwork of "I've been there" marks.
All the time!

As for what that looks like - it's hard to say. I would not say I commonly find "good" documentation or organization. Sufficient test coverage has been common and extremely helpful (especially b/c tests are often implicit documentation about how functionality is expected to proceed).

I would generally say that well done code has a flow that follows the conventions of the languages & libraries that it uses. Being able to appreciate the flow means that, whatever direction you want to go, you know how to pivot from the current state.

When I get "bad" code it's code that I can't actually work on until I do weeks or months of work trying to understand what the original intent *was*.
Yes – I believe the author took the Design Patterns book and went to town.

The classes were small (< 100-200 LOC), they fit together well, and the code was well tested. It was a SyncML parsing library developed in-house in Objective-C.

What helped, I believe, was him coming from a Ruby / RoR background (extensive OOP usage) and the fact that this was his 2nd attempt at writing this, after he wrote a similar library in Ruby.

I think about some of that code to this day and try to emulate it wherever possible. Although I think the guy that wrote it was also a very smart person and experienced programmer, so I don't beat myself up if I can't quite make it to that standard.
Yes; my current project.

I inherited a codebase where the backend is written in Kotlin (using Micronaut) and the frontend in React.

Both the backend and frontend are very clean and I learnt so many new cool things just by reading the code.

The code is so easy to follow and understand, and the architecture is very nice.

The frontend consists of React functional components that are written in a way that makes them very reusable and configurable; each component can be extensively configured with props, making it very rare to have to create new components.

The backend is structured into independent microservices and is therefore very extensible, and the microservices themselves are small and easy to modify.
One that I inherited is around 10 to 12 years old now, and I think it was designed reasonably well. Certainly some things I would do differently, and some flaws, but I think they knew what they were doing and did a good, or at least decent, design.

Since it was written in a language and framework for which I have no particular expertise, I'm judging this purely off the things I do know -- the rough structure of an MVC, and database design. Maybe the way they used the framework is poor, but it looked decent to my untrained (in language and framework) eyes.
One of my favorite good code bases, which I'd sometimes describe as being advanced technology that was gifted to us by outer space aliens...

I inherited a large code base in Scheme (Lisp) from two PhD engineering domain experts. One of them had been a systems programmer before grad school, and had built the foundation from scratch, including an entire complex Web backend and frontend framework, including continuation-based Web forms UI serving, and a versioned ORM with a meta layer (extensible by customer sites using an early browser-based Web UI builder), etc.

The system evolved for over a decade, with a very small and super-productive team, and was able to respond very rapidly to new requirements.

One more conventional Web example: when we needed to be in AWS, we owned and understood the underlying framework intimately, it had good abstraction layers where we needed it, we could code the protocols and understand the distributed systems changes, and just do it... which also got us the side honor of being the first system to get a particular federal security certification for AWS.

Another Web example: when we needed a handheld app, we were able to get into the guts of the meta layer, and do an HTML5 Offline app. A large part of which was generated dynamically, as a semantic translation of complex Web forms from the meta layer to idiomatic smartphone and tablet UI. (Admin user had previously painted a form with particular spatial layout with rich controls for knowledge capture in desktop and occasionally modified it, a new algorithm did structure recovery of grouping and ordering of those fields, mapped them to modern device-responsive handheld controls more usable on small touchscreens, and the system updated the generated app package for JIT updating as necessary.)

There were numerous other examples of how the code base evolved to growing functionality and operational requirements, but those two might be most recognizable.

Of course, part of it was the team and how we were managed. And part of it was that the code base gave the team a very smart head start with a powerful foundation that let it churn out functionality at a high rate early on, yet was also amenable to evolution with a very small team. I think these parts were complementary, and affected each other.
I thought about this for a bit and came to the conclusion that I have, but not because the codebase knocked my socks off. Instead, the codebase didn't have any of the signs of bad practices. Most everything was where I expected it to be, there were sufficient, but not stifling, test and review practices, etc. It felt natural to work within.

So I don't think that a good codebase does any one particular thing well, it just avoids the bad parts of bad codebases. Via negativa in practice.
Not exactly, but I have had 2 experiences that made me think it's possible:

1) I was working for a small company that did contracts related to graphics and printing. At some point we were asked to evaluate a MacOS (8 or 9) extension from Extensys. I don't recall what the extension did beyond the general thing that all extensions do, which is patch some system routines to update their functionality. But the code was incredibly clean and clear. Despite having never written a Mac system extension before and having only a vague understanding of the process of doing so, I was easily able to follow the code and understand how it worked. Unfortunately, we ended up not taking whatever the job was (probably just updating the extension for a more recent OS or something), so I didn't actually get to work on it.

2) Many years ago I sold some video filters for popular video editing programs (Final Cut Pro, After Effects, etc.). Eventually I sold my IP to a competitor and went to work for someone else. Fast forward ~12 years, and I'm at a trade show, when I see some 3rd company still selling my products! I don't know if that means my code was actually good, but it survived for at least 12 years after I stopped working on it, so I guess someone was at least able to figure it out.
I haven't inherited it, but I've been working on an experimental fork of Stockfish as a side project.

The code isn't exactly easy to understand, but that's inherent to the complexity of the domain. But there's a lot of elegance to a lot of the data structures and how well optimised they are for the problem at hand.

And Stockfish's way of utilising multiple cores is simply beautiful to me. There are all sorts of algorithms for parallelisation of the Principal Variation Search algorithm at the core of Stockfish's search, to do with distributing nodes between threads and so on and so forth.

If you go read Stockfish, it might seem like it's just running ncpu separate single-threaded searches. Because that really is what it's doing. Which seems crazy at first.

But what it does is it has a shared, effectively constant time (technically O(n) where n is 3, the number of entries per cluster) lookup hash table for caching search results, keyed by the node. And it's even lockless. And then each thread has its own set of statistics generated through search, which then influence the order in which that thread visits nodes, because they're used for move ordering heuristics. And there are some other heuristics where the thread might jump straight from searching depth n to n+2, to inject some more randomness.

So there is a distribution of different nodes to different threads. It's just an emergent property of fairly simple things happening in each thread, whether they got a cache hit or miss, etc.

The reason this is so elegant is that the search algorithm itself is much simpler this way because it doesn't care about what other threads are doing. It looks at the transposition table, that's all; everything past that is good old single threaded programming. Then there's a very simple bit of code at the end that does a vote for the best move, based on evaluation and various other statistics (like the number of times the thread has changed its mind on the best move).

What excites me so much about this is that you could in theory do wildly different things in different threads. They only have to agree on what goes into the transposition table, what it means, and how to vote at the end. Stockfish doesn't do that, so that's one of the things I've been trying to explore in my own project.
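To make that shape concrete, here is a heavily simplified, self-contained C++ sketch of the same idea: threads run independent searches, communicate only through a shared lock-free transposition table, and vote at the end. Everything in it (the single-word entry packing, the toy "evaluation", the plain plurality vote) is an illustrative stand-in, not Stockfish's actual implementation, which packs, validates, and replaces entries differently and weights its vote by search statistics.

```cpp
#include <algorithm>
#include <atomic>
#include <cstdint>
#include <iostream>
#include <map>
#include <random>
#include <thread>
#include <vector>

// Illustrative stand-ins only: a tiny "root position" with a handful of moves.
constexpr std::uint64_t kRootKey = 0x123456789abcdef0ULL;
constexpr int kNumMoves = 8;
constexpr int kNumThreads = 4;

// Shared transposition table. Each entry is one 64-bit word (high 32 bits:
// verification key, low 32 bits: score), so a relaxed atomic load/store is
// enough -- no locks. This is just the general idea, not Stockfish's layout.
struct TTEntry { std::atomic<std::uint64_t> packed{0}; };
constexpr std::size_t kTTSize = 1 << 16;  // must be a power of two
std::vector<TTEntry> tt(kTTSize);

bool tt_probe(std::uint64_t key, int& score) {
    std::uint64_t word = tt[key & (kTTSize - 1)].packed.load(std::memory_order_relaxed);
    if ((word >> 32) != (key >> 32)) return false;  // empty or different position
    score = static_cast<int>(word & 0xffffffffULL);
    return true;
}

void tt_store(std::uint64_t key, int score) {
    std::uint64_t word = (key & 0xffffffff00000000ULL) | static_cast<std::uint32_t>(score);
    tt[key & (kTTSize - 1)].packed.store(word, std::memory_order_relaxed);
}

// Stand-in for searching one move: expensive on a TT miss, free on a hit.
int search_move(int move, std::mt19937& rng) {
    std::uint64_t key = kRootKey ^ (0x9e3779b97f4a7c15ULL * (move + 1));
    int score;
    if (tt_probe(key, score)) return score;  // another thread already searched it
    score = static_cast<int>(rng() % 100);   // pretend this took a long time
    tt_store(key, score);
    return score;
}

int main() {
    std::vector<int> thread_best(kNumThreads, -1);
    std::vector<std::thread> threads;

    for (int t = 0; t < kNumThreads; ++t) {
        threads.emplace_back([t, &thread_best] {
            // Per-thread randomness stands in for per-thread history/statistics:
            // each thread visits the same moves in a different order.
            std::mt19937 rng(t + 1);
            std::vector<int> order(kNumMoves);
            for (int m = 0; m < kNumMoves; ++m) order[m] = m;
            std::shuffle(order.begin(), order.end(), rng);

            int best = -1, best_score = -1;
            for (int move : order) {
                int score = search_move(move, rng);
                if (score > best_score) { best_score = score; best = move; }
            }
            thread_best[t] = best;  // each thread reports its own conclusion
        });
    }
    for (auto& th : threads) th.join();

    // Final vote: a simple plurality here; the real engine weights votes.
    std::map<int, int> votes;
    for (int move : thread_best) ++votes[move];
    int winner = -1, most = 0;
    for (const auto& [move, n] : votes)
        if (n > most) { most = n; winner = move; }

    std::cout << "voted best move: " << winner << "\n";
}
```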
My current workplace has what I consider good codebases. Good coding standards, good abstractions, reusability, and performance. It has plenty of tricky areas and bits of code that are poorly written or confusing, but that doesn't change the overall picture. If you have a large codebase that many developers worked on over many years and it's still doing a good job and able to be worked on, you're doing alright.
I joined a company to help modernise it, as the stack was based upon ColdFusion and they struggled to find/retain developers to maintain it.

The previous developers had done a great job of documenting and structuring the system in a way that made it easy for me to migrate it onto something maintainable.

If there were enough developers floating around to make it viable to maintain in ColdFusion, it would still be going now and doing a great job too!
Yes, at YouTube I inherited the Pilot Studio iOS codebase from Will Kiefer. This included a solid iOS application framework and a couple really nice prototypes. We hired some great folks, extended "PilotKit" quite a bit, and built a dozen incredible apps with it (Motion Stills, and the UX experiments that became YouTube Live and YouTube Stories, plus a bunch of fun internal stuff).
I worked on QRes (airline reservation system) while a contractor at ITA after it had been eaten by El Goog, but before it had been fully digested. It was really neat. It was written in Common Lisp, but that's not what made it cool, other than in a superficial I-like-Lisp sense. But it *was* written by old Lisp wizards, and it showed their meticulous attention to detail, including maintainability. It took just a couple of M-. in Emacs/SLIME to find what I was looking for, and everything was structured clearly and easy to maintain/change.

And the testing! It came with its own testing DSL that allowed you to specify templates of expected XML results and check that the actual response from the live web service matched the template. A new test could be written in a handful of lines. The test suite was HUGE and comprehensive, and when adding a new test it was easy to find a group of similar tests to put it in. I never enjoyed writing test code as much as I did on that project, and that's how it should be for every project.
Had to support an XML/SOAP interface to a legacy system that ran on AS/400; it was written in somewhat modern PHP, documented and formatted very well. Modular and easy to follow. I was very impressed. It had a very classic code feel to it, where every source file had a massive comment section listing all the functions and explaining a bunch of stuff.
Not really.

And the truth is I've probably been responsible for bad code that others have inherited, especially early in my career. My favorite way back then was to over-engineer and gold-plate. I've also been the "over-commenting guy" at times in the past. Thankfully I've mostly recovered from those ailments. Admitting it truly is the first step.

Now, I have a lot more patience for what I used to consider "bad code". I don't get too worked up by SOLID principles or other design issues, although I strive for a well-designed code base when I have a say in the matter (i.e. greenfield or refactoring).

The thing that gets to me now is if the developer shows a complete lack of understanding of the language: like they use concurrency, but they don't understand concurrency; or they misuse an ORM and fall prey to the N+1 problem.

Those are sort of fundamental problems in my view, and indicate a developer who was in over his or her head.
I once inherited a reasonably good code base. It wasn't a *large* project but it did have a certain complexity; it was a web application to write and manage HTML-formatted e-mail templates that the company then sent for various purposes from different applications.

It had no testing, and documentation was little more than a brief overview in the internal wiki, so at first it didn't feel very welcoming. But then the code turned out to be quite well organized and approachable. I added to it a couple of features that had been ignored for some time and the code really made sense. It guided you quickly towards the correct places you'd need to work on.

Ultimately the project itself was somewhat flawed because nobody wants to lay out and maintain e-mail templates, especially when somebody insists that they want "100% pixel-perfect coverage on *all* e-mail clients including Outlook Express 5.01" (in 2014).
Sort of, but mostly yes. I inherited a code base and some team members for a video management system (security-VMS). It was a fantastic code base for the feature set it provided. However, the product strategy also changed with this inheritance, and that change in the context of requirements made it less than optimal.

In no way am I faulting the original authors. They designed a system for precisely what was requested and it worked beautifully. My point is that "well done" can change meaning based on the environment. There were spots in the architecture/organization that made valid assumptions that turned out to no longer be valid and had to be re-worked to accommodate the changes. They were correct in not laying out that flexibility in the first place, but it still represented an incongruence between the problem domain and the solution.
This isn't a codebase that I inherited at work, but an open source library that I used at work which impressed me: Leaflet JS has been around for 11 years, enables web devs to do really complex mapping tasks easily, has zero dependencies, is 39KB of JS (vs 261KB for Mapbox), is extremely legible, and is easily extensible using native JS concepts rather than fancy abstractions. To me it's a shining example of how over-engineered everything on the web is today. You can make an interactive map of the world and have a smaller bundle and simpler code than even the most basic React app.

https://github.com/Leaflet/Leaflet
I like seeing (through git-blame or similar) sections of code that were written after the fact by a new coder, but that still fit in seamlessly.

I don't mean just in terms of indentation, variable naming, etc., but also in deeper ways, such as decisions about when to create new functions or extend object structures. Other clues relate to decisions about balancing code flexibility and execution speed, and also about what steps along the journey require road signs.

It's hard to define these things, but I find that the quality of code integration becomes evident when I'm sufficiently immersed in the code; it takes the form of a communication between coders.
I've worked on good (enough) code: good code coverage in tests, relatively fast tests, easy setup to develop (integrated DB, mail server, LDAP server), consistent formatting, no useless comments, good naming, code organization that was logical, even if it required some time to get accustomed to it, some good documentation as READMEs.

Not everything was perfect, but it was much better than the code changed by future generations which tried to mess it up with almost every commit, in the name of "it's good enough", "consistent style is not needed as I can still read the code", "what tests?", "we can refactor later"...
I will tentatively say "yes".

I had one project I inherited that was fairly clean. The codebase was well structured, tests existed and would run, there was documentation in place and it was relevant.

I have personal preferences that were fairly different from the original authors (namely - they chose CoffeeScript, and had a fascination with single line methods and chaining) but I can't really fault them a ton there - to each their own.

It helped that it was a very small project, so there just wasn't much space to get lost in the weeds, but it's still probably one of the better organized legacy code bases I've been handed.
I've only ever encountered one firm's codebase that I felt was terrible, and it was pervasive across everything they did. Over-abstracted with whatever the design pattern of the week was, pointless microservices to nowhere, having novice frontend devs attempt to write Go. Bluntly, it just didn't work and fell over constantly.

Everything else made sense at the time it was written. Sure, it may have outgrown its usefulness today, but if something chugs along for a few years I'm not going to call it bad code, just bad for the current situation; it was fine when it was authored.
Yes, but also it wasn't really a codebase? It was a compilation of Ansible roles, Perl and Bash scripts that all made sense and worked with each other, but it wasn't like a singular application. You still had to manually write your JSON configuration and then execute the Ansible that called the right scripts.

I think the reason it was good is that nothing was too integrated. You had of course scripts that integrated multiple others (mostly Ansible roles tbh), and those could be complex to understand, but everything else was great.

So: keep everything small?
I'm in a surprising situation right now: I've just inherited a huge, well written C# .NET project that's not generating much value to the business. Too much boilerplate, too many unused models and extensions. Things like that. It's weird because every single piece of code seems well written and well documented, but this mammoth solves a very small part of a small business, leaving the internal users to reach for spreadsheets most of the time.
> I see and hear a lot of complaints around inheriting code bases that are less than stellar.

I think this has to do with a common mindset among developers. If they don't immediately recognize familiar patterns and structures, then it's the fault of the code. Many folks value *readability* and believe that code should be written for other humans to read and understand. Yet what we consider *readable* varies greatly among individuals, programming languages, and communities.

More experienced programmers won't be so quick to jump to conclusions. They may realize that it takes time to understand why code is written or structured a certain way. They recognize that it will take time to learn and appreciate the code.

However, experienced programmers also develop a sense of taste and style. They build opinions based on experience, and if they see a pattern in use that they associate negatively with, then it's likely they will not have a good opinion of the code base.

Less experienced programmers are trying to build their sense of taste and style and will associate with whatever they perceive makes them superior. They often have an immediate and strong reaction to a code base. Their opinions and feedback are often couched in absolute terms.

Personally I've inherited great code bases. One of my favourites was a messy, old C++ web application written in the 90s by someone without much experience at the time. It didn't use any standard libraries, had no tests, and documentation was non-existent. It used the file system as the database, storing XML files all over the place. A single-threaded CGI application: something I wasn't unfamiliar with.

You would think I would have held my nose while dealing with this code base. Yet I consider it a good one because the team that came before me did a lot of work to wrap this monstrous code base in Python and started writing tests for it: a lot of tests.

Those tests enabled them to start synchronizing the data the application normally stores in XML files into a Postgres database. When I came on I had a completely different idea of what *legacy* software was. The engineer handing the project over to me gave me their dog-eared copy of Michael Feathers' *Working Effectively with Legacy Code*.

I like that code base because I learned a lot from it. I kept adding more tests and started adding more functionality and replacing the old C++ bits slowly but surely. Eventually we were able to get rid of the XML stuff and the email system and replace them with Python code. The application was running the whole time and making the company money. It was a great learning experience.

One of the qualities I've come to value the most in an engineer is *tenacity*. When someone inherits a code base and can roll up their sleeves and make it something better than it was before: that's someone I want to work with.

Too many developers raise their hands, complain that this code is terrible, and suggest rewriting it. Or they burn out and find new jobs elsewhere. Me? I like to stick around, figure things out, and make them go to 11.
“Inherit” might be the wrong word, as the guy who wrote it is still at the company, and is my manager.

The thing that struck me about the code is how well scorched the whole thing is. Portability is great, structure is clean, comments are good. When I came in, there were no tests, so I started adding them, but overall it’s been the best code base I’ve gotten to work with.
I've inherited an internal application that handles some stuff for manufacturing, written in .NET (a Windows app, used by circa 100 people, roughly 10k lines of OOP code).

The way it was written, or actually re-written, allowed even a non-programmer like me (at least not formally trained) to easily implement all the changes.
I once inherited a Verilog-A interpreter made by some guys at Motorola. My boss said the code was awful; I found it really nice, structured, and well commented.

I ripped it out and replaced it with a compiler with way better performance. It was appreciated, but I think my code, while more effective, wasn't as nice to look at.
Sure, about three years ago I picked up a Gatsby project.

The codebase was just… normal, it was fine. Really too simple for there to be anything I would object to.

It was, however, my first exposure to Gatsby. I had struggled in the past with the older approaches to React SSR; this was absolutely eye-opening and changed everything about the way I work.
I once worked on porting code for a several-million-line COBOL application into Java. My part was a truly herculean set of complex business processes (maybe 500kloc) and this task would've been damn near impossible if it wasn't for the fact that their code was immaculately consistent.
Yes-- working right now on a tool that uses Rust compiler internals. A previous contributor made a module with a clean interface to almost all of the compiler analyses I needed and without much compiler cruft. Coming across it was a borderline religious experience.
I don't think you will get a lot of "yes" answers?

I have seen that a lot of devs simply see code that was not written by them as wrong. So they tend to rewrite it to fit their mental model of what good is.

I wonder how much time is lost to this.
Not inherited, but I had an opportunity to see Alexander Stepanov's C++ STL code at Agilent Technologies (they inherited some of HP's code). The code was very well written.
My first job had a very high quality PHP monolith codebase. I’ll always defend PHP to this day, because I’ve seen it done right. I would easily choose it over Ruby or Python for a new project.
There’s no such thing as a good code base. There’s only code that you deployed to production and that is working and code you wish you wrote that never got deployed and has no value.