So you just copy and paste subsets of the Backend methods and their signatures into separate interfaces that only have the methods needed for the user function and its test? That doesn't sound very good.<p>I'd rather have one global "backend" interface and have mocks use that. If the type system gives me issues with unimplemented methods, then that's a problem with the type system itself -- allowing for partial interfaces (through keywords or something) should be a first-class feature.
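For reference, a minimal Go sketch of the pattern under discussion, with hypothetical names (UserNamer, RenderGreeting): the consumer declares only the method it actually calls, and any type with a matching method, including a large backend client, satisfies the interface implicitly, so nothing is copied into the implementing package.<p><pre><code>package profile

// UserNamer is declared by the consumer and lists only the method this
// package actually calls; any type with a matching GetUserName method
// (for example a big *backend.Backend) satisfies it implicitly.
type UserNamer interface {
    GetUserName(id string) (string, error)
}

// RenderGreeting depends on the narrow interface, not on the whole backend.
func RenderGreeting(n UserNamer, id string) (string, error) {
    name, err := n.GetUserName(id)
    if err != nil {
        return "", err
    }
    return "Hello, " + name + "!", nil
}
</code></pre>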
This is good advice, but the benefits are not fully explained.<p>The approach is that clients should declare their <i>own</i> interfaces and libraries/backends should implement those interfaces.<p>- So instead of:<p><pre><code> app > depends on > api
</code></pre>
- It's<p><pre><code> api > depends on > app interface
&
app > depends on > app interface
</code></pre>
Basically another version of [1] "The Dependency Inversion Principle", which allows you to break hard dependencies into lightweight interfaces.<p>[1] <a href="https://en.wikipedia.org/wiki/Dependency_inversion_principle" rel="nofollow">https://en.wikipedia.org/wiki/Dependency_inversion_principle</a>
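A rough Go sketch of that layout, with hypothetical package and type names (app, api, OrderStore). In Go the api package does not even need to import the interface it fulfils; carrying the matching method set is enough:<p><pre><code>// file app/checkout.go: the application declares the interface it needs.
package app

type OrderStore interface {
    SaveOrder(id string, total int) error
}

func Checkout(s OrderStore, id string, total int) error {
    // business logic would go here
    return s.SaveOrder(id, total)
}

// file api/client.go: the backend client implements the contract.
// It never imports package app; its method set alone satisfies OrderStore.
package api

type Client struct {
    baseURL string
}

func (c *Client) SaveOrder(id string, total int) error {
    // talk to the real backend over HTTP here
    return nil
}
</code></pre>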
I'm super confused about what the point is here. There seems to be something about how abstract interfaces (i.e., interfaces that are separated from the implementation) are good for testing (and more?) because there is less coupling (also at build time) to other modules.<p>It might be hard for me to get as a C programmer, because we make manually decoupled interfaces all the time (we create header files explicitly). In a sense all interfaces in C are abstract, and only the linker matches actual implementations to the interfaces. I believe with explicit header files there aren't any of the issues mentioned in the post.
It's great to see this blog post get published. Evan is one of the most creative engineers I've gotten to work with.<p>Go's structural interfaces are often harped on, but this is exactly a case where they are very useful. Perhaps an approach is to use named interfaces for variance in implementation strategy, and structural interfaces for usage sites as described in the blog post. I wouldn't discount how often this pattern comes up.
I think allowing extensions on existing classes/structs can also solve this problem without structural typing.<p>i.e., the client can create a new interface and explicitly implement it for existing classes/structs.<p>I think Swift and Rust allow this.
Great post. The author makes their point clearly and concisely - and what's more, they're absolutely right. The recommendation of this post might seem strange at first, since in some ways it contradicts some other programming heuristics. But it can make dependencies much more clear and explicit.<p>That said, Go and TypeScript are not necessarily the best languages in which to fully embrace this idea. This is a scenario where type inference is very useful.<p>As other comments say, it can be a hassle to define sub-interfaces for each function, which can lead to a fair bit of duplicated text. But in a language with type inference for function types (instead of only within-function type inference), the used interfaces can be inferred instead of explicitly written down, providing a lot of flexibility and allowing this principle to be followed in more cases.
BTW:<p>>>> the obvious thing is to do in a nominally typed language like Java or C++ is the "extract interface" refactoring, where you create an interface [...]<p>well, first of all, I still think that a library (even an internal one) should expose only interfaces by definition.
It's not only for the sake of the tests, it's also because it forces you to keep boundaries "clean".<p>And second... what I normally do is throw in Mockito[0] and let it handle the subclassing, instantiation and definition of custom methods in 2-3 lines...<p>[0] <a href="https://site.mockito.org/" rel="nofollow">https://site.mockito.org/</a><p>I'm not sure that "making it easier to define interfaces <i>a posteriori</i>" is a good selling point for structural typing.
This is a good article about structural interfaces which makes some positive points. I don’t have a rebuttal per se, but the testing part of this article is an interesting side topic.<p>In the specific example given for the product, a real-life developer would write all the code to get the UI up and running, test that it works, and probably just ship it.<p>What do you need to write tests for? Well: there’s nothing wrong with writing tests, and a better developer would include at least some testing with their new product feature. They’ll certainly want to know if their product breaks in production, and to be alerted automatically, detecting any regressions.<p>If you write a test around the backend code with a test double / mock / fake for the backend, then indeed it will be the case that when you break the code that interacts with the “backend”, your test dashboard will alert you with “BackendTest FAILED”. But goodness me it’s a lot of work to make the code testable like that: work to rethink the code into a testable form, and more importantly cognitive load on the next person who comes along and has to read your now not-so-simple backend fetching logic. The linked article certainly shows how to reduce the impact of making the code testable, but you’d still have to make the mock, which is effort as well as more lines of code to go wrong.<p>A much better tool in this sort of situation is an integration test. It mirrors what you do as a developer when you first write the code, for starters. And if your real-life backend is sufficiently stable that the integration test is reliable, or can at least be ignored when the backend is in a known-broken state, then an integration test <i>combined with hygienic version control</i> is a much better way of isolating regressions to particular diffs.<p>There are a lot of assumptions needed to make this work — linear commit history, good testing tools, developers making focused and logical commits which change one thing at a time — but having that kind of infrastructure and culture is far more beneficial to an organization anyway, with the bonus that you don’t need to write complex code that is clouded with test infrastructure wizardry mixed in with the business logic.
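For scale, this is roughly all a hand-written test double amounts to in Go once a narrow, consumer-declared interface exists (reusing the hypothetical UserNamer/RenderGreeting sketch from earlier in the thread); whether even this small amount of scaffolding beats an integration test is exactly the trade-off discussed above:<p><pre><code>package profile

import "testing"

// fakeNamer is a hand-written test double for the hypothetical
// UserNamer interface; no mocking framework is involved.
type fakeNamer struct {
    name string
    err  error
}

func (f *fakeNamer) GetUserName(id string) (string, error) {
    return f.name, f.err
}

func TestRenderGreeting(t *testing.T) {
    got, err := RenderGreeting(&fakeNamer{name: "Ada"}, "42")
    if err != nil {
        t.Fatal(err)
    }
    if got != "Hello, Ada!" {
        t.Errorf("got %q, want %q", got, "Hello, Ada!")
    }
}
</code></pre>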
> <i>But if you're not using a framework, the obvious thing is to do in a nominally typed language like Java or C++ is the "extract interface" refactoring</i><p>I've seen such statements here and there, and they make me feel I'm missing a whole subculture of our industry. Seriously, are we learning things by IDE operations nowadays? Is selecting text and applying an option from the "automatic refactor" menu a first-class programming concept?
This is wrong, bad advice, for the same reasons that structural typing is wrong and nominal typing is right. It goes against all OOP and modularity principles. Interfaces should never be declared at the use site because an interface is a contract promised by the implementor.<p>As for the example, it feels contrived and unprofessional. There should never be a monolithic "Backend" type. Functionality should be separated into services, each one with an interface, and each one injected dynamically for easy testability and upgradeability.
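For contrast, a rough Go sketch of the style this comment argues for, with hypothetical names: each service declares its own contract next to its implementation, and consumers receive an implementation through injection rather than declaring interfaces at the use site.<p><pre><code>// file users/service.go: the implementor declares and fulfils the contract.
package users

// Service is the contract this package promises to its consumers.
type Service interface {
    GetUserName(id string) (string, error)
}

type HTTPService struct {
    baseURL string
}

func NewHTTPService(baseURL string) *HTTPService {
    return &HTTPService{baseURL: baseURL}
}

func (s *HTTPService) GetUserName(id string) (string, error) {
    // call the real service over HTTP here
    return "stub", nil
}

// file web/app.go: the consumer depends on the service's own interface
// and receives an implementation via constructor injection.
package web

import "example.com/project/users" // hypothetical module path

type App struct {
    users users.Service
}

func NewApp(u users.Service) *App {
    return &App{users: u}
}
</code></pre>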
I think the bulk of this post is speaking to the interface segregation principle.<p>Also, I would not use the advice below as a _blanket rule_, as the author puts it.<p>> interfaces generally belong in the package that uses values of the interface type, not the package that implements those values.<p>It could be a good rule for many applications, but I find it difficult to swallow as a universal rule. I feel there are so many possibilities where this might not apply; for example, perhaps you have a module composed entirely of interfaces in order to decouple a layer of implementation.
It seems the larger point here is that more evolved type systems allow for better bidirectional decoupling of interfaces and implementations. Rust has a lot of things that make this easier yet still governable. The commenter (jstimpfle) who mentioned C is also right - C, properly used, allows one to do the same kind of thing, since it's not bound by OOP strictures.