
The Big TDD Misunderstanding (2022)

63 points by WolfOliver 3 months ago

12 comments

int_19h 3 months ago

Simple rule of thumb.

Always write functional tests first. Doesn't matter if they are slow - you still want something that faithfully captures the specified behavior and allows you to detect regressions automatically.

Then, *if* your resulting test suite is too slow, add finer-grained tests in areas where the perf benefits of doing so dwarf the cost of necessary black-boxing.

Getting down to the level of individual classes, never mind functions - i.e. the traditional "unit tests" - should be fairly rare in non-library code.
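A minimal sketch of that first kind of test, assuming a hypothetical Cart module and pytest-style tests; it pins the specified behavior through the public entry point only, so internal refactors can't invalidate it:

    # Functional test: exercises a hypothetical Cart through its public
    # API only; internal classes and helpers are free to change.
    from cart import Cart  # hypothetical module under test

    def test_total_applies_quantity_and_discount():
        cart = Cart()
        cart.add("apple", price=100, quantity=3)
        cart.apply_discount("SAVE10")  # hypothetical 10%-off code
        # Slow or not, this captures the specified behavior and will
        # catch regressions no matter how the internals are rewritten.
        assert cart.total() == 270
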
recursivedoubts 3 months ago

Agree w/ the author that the concept of "unit" often hurts test quality.

You should be striving to balance the long-term usefulness of your tests with the debuggability of those tests. In my experience, those tests are what most people would call "integration tests" (although that name, like so much terminology in the testing world, is confusing and poorly defined).

You want to get the tests up at as high a level of abstraction as possible, where the API and correctness assertions are likely to survive implementation-detail changes (unlike many unit tests), while at the same time avoiding the opaque and difficult-to-debug errors that come with end-to-end testing (again, the language here is confusing, I assume you know what I mean).

See https://grugbrain.dev/#grug-on-testing
atum47 3 months ago

My first job interview after college was a pair programming session, but I would not touch the keyboard: I would dictate and the interviewer would type. I had to implement a dynamic array in Java using TDD in one hour. From time to time I would break protocol and jump into implementation before the tests. Overall it went well, and I finished the task. The interviewer, however, assessed that he was afraid I would not actually use TDD on the job - and he was spot on. I did not get the job.
codr7 3 months ago

I rarely write isolated unit tests.

Most of the systems I build use a database on which all logic depends, and often a network connection.

I've worked on systems where these aspects were mocked, and they eventually grind to a halt because of the effort required to make the tiniest change.

First of all, you need a way to create a pristine database from code, preferably in memory. Second, nested transactions are nice, since you can simply roll back the outer transaction per test case; otherwise you need to drop/create the database, which is slower.

For networked servers, an easy way to start/stop servers in code and send requests to them is all you need.

Given these pieces, it's easy to write integration tests that run fast enough and give a lot of bang for the buck.

TDD is even rarer for me; I typically only do that when designing APIs I'm unsure about, which makes imagining user code difficult. And when fixing bugs, because it makes total sense to have a failing test to verify that you fixed it, and that it remains fixed.
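A minimal sketch of the database part of this setup, assuming pytest and Python's stdlib sqlite3 (the users table is hypothetical). SQLite offers savepoints rather than truly nested transactions, so this version rolls back one outer transaction per test case instead of rebuilding the pristine in-memory database:

    import sqlite3
    import pytest

    @pytest.fixture
    def db():
        # Pristine database created from code, in memory.
        conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit; we manage transactions
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
        conn.execute("BEGIN")     # wrap each test case in a transaction
        yield conn
        conn.execute("ROLLBACK")  # cheaper than drop/create per test
        conn.close()

    def test_insert_user(db):
        db.execute("INSERT INTO users (name) VALUES ('alice')")
        assert db.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 1

The same fixture shape works for the networked-server case: start the server in the fixture setup, yield a client pointed at it, and shut it down after the yield.
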
simonw 3 months ago

Wow. If "unit" in "unit test" does indeed mean that the test itself should be able to run independent of the other tests, then maybe I can get over my avoidance of calling them "unit tests"!

I dislike that term because the most valuable tests I write are inevitably more in the shape of integration tests - tests that exercise just one function/class are probably less than 10% of the tests that I write.

So I call my tests "tests", but I get frustrated that this could be confused with manual tests, so then I call them "automated tests", but that's a bit of a mouthful and not a term many other people use.

I'd love to go back to calling them "unit tests", but I worry that most people who hear me say that will still think I'm talking about the test-a-single-unit-of-code version.
Supermancho 3 months ago

> The argument for isolating the units from each other is that it is easier to spot a potential bug.

That's not the only argument. The important result of this is ensuring the "unit" of code is written to be testable. This happens to require it be simple and extensible. It does not enforce making the code or tests comprehensible.

When you don't trust someone's code, have them write detailed unit tests. They will find most of their problems on their own and learn better practices along the way.

I am in no way implying that unit tests are a replacement for integration or behavioral or E2E testing et al... depending on how you want to define those.
brumar 3 months ago

> Do not isolate code when you test it.

> Only isolate your code from truly external services

That makes tests more trustworthy, but also sometimes harder to maintain, I think. I have seen cases where small changes to the code base created strong ripple effects, with many tests to update. Arguably, the tests were not very well written or organized, with too many high-level tests. Still, this and the very long execution time of the test collection made me realize that for medium to large projects, I will be much more careful in the future before going all in on the no-mock approach.
deterministic 3 months ago

What works great for me:

I auto-test the *API* of the server/system/library/module I am responsible for. Nothing else. No auto-testing of internal details.

It lets me completely rewrite internals without breaking the tests.

The API tests need to be so good that another developer could implement the same server/system/library/module using the tests only.

And the API tests need to try as hard as possible to break the code being tested.

Using this method I have had zero bugs in production for the last 5+ years.
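A minimal sketch of this API-pinning style, assuming a hypothetical slugify function as a module's entire public API; nothing internal is imported, so a full rewrite of the internals leaves the tests untouched:

    # Only the public API is imported; internals can be rewritten freely.
    from textutil import slugify  # hypothetical module under test

    def test_specified_behavior():
        # Precise enough that someone could reimplement slugify from the tests.
        assert slugify("Hello, World!") == "hello-world"
        assert slugify("  spaces   everywhere ") == "spaces-everywhere"

    def test_tries_hard_to_break_it():
        # Adversarial inputs, per the comment above; idempotence is the
        # invariant we assume this hypothetical API promises.
        for nasty in ["", "!!!", "a" * 10_000, "emoji \U0001F600"]:
            result = slugify(nasty)
            assert result == slugify(result)
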
motorest 3 months ago

From the article:

> The argument for isolating the units from each other is that it is easier to spot a potential bug. (...) In my opinion, this does not pay out because of the huge amount of false positive test cases you get and the time you need to fix them. Also, if you know the code base a little you should have an idea where the problem is. If not, this is your chance to get to know the code base a little better.

This is at best specious reasoning, and to me reflects that the blogger completely misses the point of having tests.

To start off, there is no such thing as a false positive test. Your tests track invariants, especially those which other components depend on. The whole point of having these tests is to have a way to automatically check them each and every single time we touch the code, so that the tests warn us that a change we are doing will cause the application to fail.

If you somehow decide to change your code so that a few invariants break, these are not "false positives". This is your tests working as expected and warning you that you must pay attention to what you are doing so that you do not introduce regressions.

It's also completely mind-boggling and absurd to argue that "knowing the code" is any argument to avoid tracking invariants. The whole point of automated test suites is that you do not want the app to fail because you missed any detail or corner case or failure mode. Knowing the code does not prevent bugs or errors or regressions.

I'm perplexed by the way we have people write long articles on unit tests when they don't really seem to understand what they are supposed to achieve.
arialdomartini 3 months ago

> see here and here

I believe links are significantly more useful when they include descriptive text like the title or author, rather than just 'here'.
nobleach 3 months ago

Eh, I knew this would somehow come back to that Kent C. Dodds stuff. I can respect the guy, but his opinion on valuable testing differs from mine and quite a lot of others'. His opinion is "the more your tests represent the way your users use your app, the better they are". For E2E tests, yes, absolutely. For unit tests, no... not at all. As a very short example, an E2E test will prove that yes, I did indeed show a button on the screen. A unit test will prove the code that put it there worked. The code can "fail open" for many reasons. (In JavaScript, for example, forgetting an `await` keyword can return a truthy value!)

The worst part about it is that he called himself a thought leader, called his approach a "best practice", and had nothing really to back that up. Now people go around repeating it all the time. It's frustrating.
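The "fail open" bug described here is not JavaScript-specific; to keep this thread's sketches in one language, here is the same missing-await mistake in Python's asyncio (button_rendered is a hypothetical stand-in):

    import asyncio

    async def button_rendered() -> bool:
        await asyncio.sleep(0)  # stand-in for real async work
        return False            # the button did NOT render

    async def main():
        # Bug: without `await`, button_rendered() returns a coroutine
        # object, which is truthy, so this check always passes.
        if button_rendered():
            print("test passes, button is missing")
        # Correct version: awaiting yields the real (falsy) result.
        if await button_rendered():
            print("never reached")
        # (Python at least emits a "coroutine was never awaited"
        # RuntimeWarning when the stray coroutine is garbage-collected.)

    asyncio.run(main())
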
hansvm 3 months ago

Disclaimer: I don't have a lot of comments on TDD as a whole, other than that most software I write is very exploratory (I'll throw away the first 3-10 drafts, and by no means does that mean it takes 3-10x as long to write), and the best language for me for that exploration is often the language of the actual software I'm writing. TDD, in that environment, doesn't seem very applicable, since the whole point is that we don't know what's actually possible (or, when it's possible, whether the tradeoffs are worth it).

The author has a lot of opinions about testing, though, which conflict with what I've found to work in even that sort of dynamic environment. Their rationale makes sense on the surface (e.g., I've never seen a "mock"-heavy [0] codebase reap positive net value from its tests), but the prescription for those observed problems seems sub-optimal.

I'll pick on one of those complaints to start with, IMO the most egregious:

> Now, you change a little thing in your code base, and the only thing the testing suite tells you is that you will be busy the rest of the day rewriting false positive test cases.

If changing one little thing results in a day of rewriting tests, then either (a) the repo is structured such that small functional changes affect lots of code (which is bad, but it's correct that you'd therefore have to inspect all the tests/code to see if it actually works correctly afterward), or (b) the tests add coupling that doesn't exist otherwise in the code itself.

I'll ignore (a), since I think we can all agree that's bad (or at least orthogonal to testing concerns). For (b) though, that's definitely a consequence of "mock"-heavy frameworks.

Why?

The author's proposal is to just test observable behavior of the system. That's an easy way to isolate yourself from implementation details. I don't disagree with it, and I think the industry (as I've seen it) discounts a robust integration test suite.

What is it about "unit" tests that causes problems though? It's that the things you're testing aren't very well thought through or very well abstracted in the middle layers. Hear me out. TFA argues for integration tests at a high level, but if you (e.g.) actually had to implement a custom sorting function at your job, would you leave it untested? Absolutely not. It'd be crammed to the gills with empty sets, brute-force checking every permutation of length <20, a smattering of large inputs, something involving MaxInt, random fuzzing against known-working sorting algorithms, and who knows what else the kids are cooking up these days.

Moreover, almost no conceivable change to the program would invalidate those tests incorrectly. The point of a sorting algorithm is to sort, and it should have some performance characteristics (the reason you choose one sort over another). Your tests capture that behavior. As your program changes, you either say you don't need that sort any more (in which case you just delete the tests, which is O(other_code_deleted)), or you might need a new performance profile. In that latter case, the only tests that are broken are associated with that one sorting function, and they're broken _because_ the requirements actually changed. You still satisfy O(test_changes) <= O(code_changes); the thing the author is arguing doesn't happen because of mocks.

Let's go back to the heavily mocked monstrosities TFA references. The problem isn't "unit" testing. Integration tests (the top of a DAG) and unit tests (like our sorting example, the bottom of a DAG) are easy. It's the code in between that gets complicated, and there might be a lot of it.

What do we do then?

At a minimum, I'd personally consider testing the top and bottom of your DAG of code. Even without any thought leadership or whatever garbage we're currently selling, it's easy to argue that tests at those levels are both O(other_code_written) in cost and also very valuable. At a high level (TFA's recommendation), the tests are much cheaper than the composite product, and you'd be silly not to include them. At a low level (truly independent units, like the "sorting" case study), you'd also be silly not to include them, since your developers are already writing those tests to check if it works as they implement the feature in the first place, and the maintenance cost of the tests is both proportional to the maintenance cost of the code being tested and extremely valuable in detecting defects in that code (recall that bugs are exponentially more expensive to fix the further down the pipeline they propagate before being triaged).

Addressing the bottom of your DAG is something the article, in some sense, explicitly argues against. They're arguing against the inverted pyramid model you've seen for testing. That seems short-sighted. Your developers are already paying approximately the cost of writing a good test when they personally test a sorting function they're writing, and that test is likely to be long-lived and useful; why throw that away? More importantly, building on shaky foundations is much more expensive than most people give it credit for. If your IDE auto-complete suggests a function name that says it does the right thing and accepts the arguments you're giving it, you get an immediate 10x in productivity if that autocomplete is always right. Wizards in a particular codebase (I've been that wizard in a few, my current role as well; that isn't a derogatory assessment of "other" people) can always internalize the whole thing and immediately know the right patterns, but for everyone else with <2yrs of experience in your company in particular (keep in mind that average Silicon Valley attrition is 2-3yrs), a function doing what it says it's going to do is a godsend to productivity.

Back to the problem at hand though. TFA says to integration test, and so do I. I also say to test your "leaf" code in your code DAG, since it's about the same cost and benefit. What about the shit in between?

In a lot of codebases I've seen, I'd say to chalk it up as a lost cause and test both the integration stuff (that TFA suggests) and also any low-level details (the extra thing I'm saying is important). Early in my career, I was implementing some CRUD feature or another and explicitly coached (on finding that the reason implementation was hard was a broken function deep in the call stack) to do the one-liner fix to make my use case work instead of the ten-liner to make the function actually correct and the 1000-liner to then correct every caller. I don't think they were wrong in giving that advice. I'm sad that the code was in a state where that was reasonable advice.

If you're working on newer projects though (or plan to be at a place for a while and have the liberty to do some cleanup with every new feature (a pattern I wholly endorse and which has served me very well personally)), it's worth looking at that middling code and figuring out why it's so hard to work with. 99% of the time, the reason mocks look attractive isn't because they're the only solution. It's because they're the only solution that makes sense once you've already tied your hands. You don't need something to "unit" test the shutdown handler; you need something to test the total function which processes inputs and outputs and is called by the shutdown handler. You don't need to "unit" test a UI page that requires 3 different databases to produce any output; you need to unit test the functions which turn that output into that UI page (ideally, without mocks, since although those ostensibly do the same thing, they usually add an extra layer of complexity and somehow break all your tests), and for something that messy you might even just need an "integration" test around that UI page asserting that it renders approximately correctly.

What else? People sell all kinds of solutions. "Functional Programming" or "OOP" or whatever. Programming is imperative when you execute it, and the right representation for the human reader varies from problem to problem. I don't have any classes to sell or methodologies to recommend. I do strongly recommend taking a very close look at the abstractions you've chosen, though. I've had no problem deleting 90% of them at new jobs, making the code faster, more correct, and easier to modify (I usually do so as part of a "coup," fixing things slowly with each new feature). When every new feature deletes code, the benefits tend to snowball. I see my colleagues doing that now to code I recently wrote, and I'd personally do it again.

[0] People typically mean one of two things when they say they're "mocking" a dependency. The first is that they want a function to be "total" and have reasonable outputs for all possible inputs. They'll mock out many different interface implementations (or equivalent blah blah blah in your favorite language) to probe that behavior and ensure that your exponential backoff routine behaves reasonably when the clock runs backward, when 1000 of them are executed simultaneously, and whatnot. That tends to make for expensive tests, so I tend to see it reserved for risky code in teams which are risk-averse, but it's otherwise very good at its job. The other case is using some sort of "mock" library which lets you treat hard dependencies as soft dependencies and modify class instantiation, method return values, and all sorts of things to fit the test you're trying to write. This latter case is much more common, so it's what I'm referring to in a "heavily mocked" codebase. It's a powerful tool which could be used for good, but IME it's always overused enough that it would be better if it didn't exist.
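For concreteness, a minimal sketch of the "crammed to the gills" sort-testing battery described in the sorting case study above, assuming a hypothetical my_sort under test (scaled down to permutations of length <7 rather than <20, to keep it fast):

    import itertools
    import random

    from mylib import my_sort  # hypothetical custom sort under test

    def test_empty_and_boundary_values():
        assert my_sort([]) == []
        assert my_sort([2**31 - 1, -2**31]) == [-2**31, 2**31 - 1]

    def test_every_small_permutation():
        # Brute force: every permutation of short inputs must come out sorted.
        for n in range(7):  # the sum of n! for n <= 6 is under a thousand cases
            for perm in itertools.permutations(range(n)):
                assert my_sort(list(perm)) == list(range(n))

    def test_fuzz_against_known_good_sort():
        rng = random.Random(0)  # seeded, so any failure is reproducible
        for _ in range(200):
            data = [rng.randint(-2**31, 2**31 - 1) for _ in range(rng.randint(0, 500))]
            assert my_sort(data) == sorted(data)

As the comment argues, almost no change to the surrounding program invalidates these tests incorrectly: they break only when the requirements for the sort itself change.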