I have some sympathy for the idea that DI is harmful to good software design, but this article isn't an argument for it.<p>My specific issue is that DI, and a number of other things - including single-implementation interfaces and mocks in testing - are normally used as a means to an end: individually testable fragments of code. That goal, taken to its logical conclusion, converts every function into a class, possibly implementing an interface, and taking its dependencies (i.e. the other methods it calls) as arguments, either directly to the method or to the constructor (a kind of OO partial application).<p>You then end up with an atomized library of classes with names like ThingDoer and methods like doTheThing(). All the methods are now testable in isolation, since you can mock all the dependencies, and there's no risk of any pesky static references reaching out and pulling in stuff you can't easily mock.<p>Splitting everything up so aggressively means that somebody now needs to put all the pieces back together, and some automated help (DI, IoC) gets used for the job.
Some of the problems created by this style:<p>* Cognitive overload: turning every dependency into a pluggable modularization point greatly inflates the number of concepts required to understand the code, especially from outside a library, because the subcomponent parts all too often end up in the same namespace as the outer coordinating parts.<p>* Far harder to understand without debug stepping: runtime composition of code and extra levels of indirection impede IDE code navigation - go to definition on a method, and you find out it actually lands on an interface, so you have to walk the class hierarchy and find the concrete implementation - only one if you're lucky - before you can trace things through.<p>* Over-modularization / over-abstraction: since the code is split into so many tiny bits, there's an illusion that reuse or modification is possible by simply adding an extra implementation of one of the single-implementation interfaces. But extensibility needs to be designed in; pervasive, mandatory abstraction boundaries are unlikely to be good fits for ad-hoc future extension.<p>* Brittle tests: because module boundaries go all the way down, and are individually tested, a refactoring that modifies the implementation of a library becomes far more painful. Slightly chunkier tests - not quite integration tests, but unit tests at the library level, covering the semantics that library clients actually care about - go a long way towards reducing this.
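To make the pattern concrete, here's a minimal Java caricature of the atomized style (all names here - ThingDoer, ThingFetcher, and so on - are invented for illustration): every step gets its own class behind a single-implementation interface, with the dependency injected through the constructor purely so tests can mock it.

```java
// Caricature of the atomized, DI-everything style. Every name is hypothetical.

// Single-implementation interface, existing only so tests can substitute a mock.
interface ThingFetcher {
    String fetchThing(int id);
}

class DatabaseThingFetcher implements ThingFetcher {
    @Override
    public String fetchThing(int id) {
        return "thing-" + id;   // stand-in for a real database lookup
    }
}

// Another single-implementation interface wrapping a single method.
interface ThingDoer {
    String doTheThing(int id);
}

class DefaultThingDoer implements ThingDoer {
    private final ThingFetcher fetcher;   // injected dependency: OO partial application

    DefaultThingDoer(ThingFetcher fetcher) {
        this.fetcher = fetcher;
    }

    @Override
    public String doTheThing(int id) {
        return fetcher.fetchThing(id).toUpperCase();
    }
}
```

The method is now "testable in isolation" - pass a stub ThingFetcher and assert on the result - but every caller must know two interfaces and two classes where one plain method would have done.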
But once you go in this direction, the whole reason for the edifice's existence - individually unit-testable atoms of code - is called into question.<p>This is also my problem with mainstream Java code style.<p>My preferred style is to write support libraries that are individually testable at a slightly higher level, or are functional-style static methods that are generic and wholly testable with simple stubs, and to write the main business code so that its use of those libraries is as close to obviously correct as possible. Isolate any complicated logic into a testable functional static method, or into a testable general (but not necessarily complete) library. Then integration-test this higher-level business logic.<p>A common problem I see with many junior Java devs is that they write effectively procedural code split across one-method-per-class classes, interleaving business logic with more complex implementation logic. Rather than building abstractions that keep the business logic simple and free of complex implementation, you end up with a procedural call tree whose gestalt - the complex implementation - is spread across and intermixed with the business logic, all of it tied together via indirect runtime composition, because testing.<p>That's fairly abstract, so I'll make it concrete. Consider a spreadsheet report generator over data coming from entities in a database. Using a spreadsheet library (e.g. Apache POI) directly is typically quite thorny, because the library has to try to support every spreadsheet feature, so you end up with complex logic dealing with each master row, then other methods with complex logic dealing with each detail row. Code with detailed knowledge of the business domain is intermixed with code with detailed knowledge of the spreadsheet library's model.
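As a sketch of that preferred style (the groupBy example is mine, not from the article): the fiddly logic lives in a generic, pure static method that's wholly testable with plain data and no mocks, so the business code that calls it stays close to obviously correct.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Illustrative only: a generic, functional-style static method holding the
// complicated logic, testable with simple stub data rather than mocks.
final class Grouping {
    private Grouping() {}

    // Pure and side-effect-free: feed it a plain List in a test and
    // assert on the returned Map. No interfaces, no injection.
    static <T, K> Map<K, List<T>> groupBy(List<T> items, Function<T, K> key) {
        Map<K, List<T>> out = new LinkedHashMap<>();
        for (T item : items) {
            out.computeIfAbsent(key.apply(item), k -> new ArrayList<>()).add(item);
        }
        return out;
    }
}
```

The business code then reads as a one-liner - group the entities, write a section per group - and the complex part is tested once, generically, at the library level.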
Let's not even talk about tests.<p>An alternative approach - and a refactoring I made - was to create a reporting-oriented, write-only facade over the spreadsheet manipulation. The business logic then collapsed from multiple complex classes into a single simple class with straightforward code using the spreadsheet writer.
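Here's a rough sketch of what such a write-only facade might look like (all names are hypothetical, and a real version would delegate to Apache POI rather than collect strings): the business code only ever appends master and detail rows, and knows nothing about the spreadsheet library's model.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical reporting-oriented, write-only facade. A real implementation
// would create POI sheets/rows/cells here; collecting strings keeps the
// sketch self-contained and trivially testable.
class ReportWriter {
    private final List<String> rows = new ArrayList<>();

    void masterRow(String title)             { rows.add("MASTER: " + title); }
    void detailRow(String label, double amt) { rows.add("  " + label + " = " + amt); }

    List<String> rows() { return rows; }     // real version: write out the workbook
}

// The business logic collapses to a plain statement of the report's shape.
class InvoiceReport {
    static List<String> render(ReportWriter w) {
        w.masterRow("Invoice 42");
        w.detailRow("Widgets", 19.99);
        w.detailRow("Shipping", 4.50);
        return w.rows();
    }
}
```

All the knowledge of the spreadsheet model lives behind ReportWriter, so the report class can be read against the business requirements line by line.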