DHH is either being disingenuous, or <i>badly</i> misunderstands unit testing.<p>He opens with this:<p>> The classical definition of a unit test in TDD lore is one that doesn't touch the database. Or any other external interface, like the file system. The justification is largely one of speed. Connecting to external services like that would be too slow to get the feedback cycle you need.<p>No, "unit tests" in TDD -- and long before; TDD didn't change anything about the definition -- are tests that, to the extent practical, test <i>all</i> and <i>only</i> the functionality of the specific <i>unit</i> under test, hence the name. <i>That's</i> the reason external interactions are minimized in proper unit tests (whether or not TDD is being practiced). TDD <i>observes</i> that such tests are generally fast and builds the red-green-refactor cycle around that fact, but <i>speed</i> isn't the justification for the isolation; isolating the functionality under test from everything else is the <i>point</i> of unit testing (which is designed not only to identify errors, but to <i>pinpoint</i> them).
Am I the only one who doesn't find 'All tests in 4 minutes, all model tests in 80 seconds' very impressive? It sounds like a really long time to me.<p>You know what could increase the speed dramatically? Decoupling.<p>I also think decoupling, framed in the context of the Rails 2 to Rails 3 upgrade, where pretty much everything changed, makes perfect sense. Imagine just having a few wrapper classes that spoke to Rails and only having to adapt them. Sounds good to me!<p>Bernhardt: Boundaries <a href="http://www.confreaks.com/videos/1314-rubyconf2012-boundaries" rel="nofollow">http://www.confreaks.com/videos/1314-rubyconf2012-boundaries</a><p>Weirich: Decoupling from Rails <a href="http://www.youtube.com/watch?v=tg5RFeSfBM4" rel="nofollow">http://www.youtube.com/watch?v=tg5RFeSfBM4</a><p>Wynne: Hexagonal Rails <a href="http://www.youtube.com/watch?v=CGN4RFkhH2M" rel="nofollow">http://www.youtube.com/watch?v=CGN4RFkhH2M</a>
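A minimal sketch of what "a few wrapper classes that spoke to Rails" could look like, in the hexagonal style the linked talks describe. The names here (PersonGateway, FakeBackend) are made up for illustration; the point is that only the gateway knows about the framework.

```ruby
# The application talks to a tiny gateway; only the gateway knows
# which persistence backend (e.g. an ActiveRecord model) sits behind it.
class PersonGateway
  def initialize(record_class)
    @record_class = record_class # e.g. the ActiveRecord-backed Person class
  end

  def find_name(id)
    @record_class.find(id).name
  end
end

# Swapping the backend is now trivial -- in a fast test, or across a
# painful Rails 2 -> Rails 3 upgrade, only the gateway has to adapt.
class FakeBackend
  Record = Struct.new(:name)

  def self.find(_id)
    Record.new("Alice")
  end
end

PersonGateway.new(FakeBackend).find_name(1) # => "Alice"
```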
While reading this, I couldn't help but think of Alan Kay's biting assertion about the pop culture of programming.<p>I'm not interested in pop culture; I'm interested in being a better developer, and that requires a highly critical process of evaluating my practice. It's not enough if something works once, I want to know <i>why</i> it was effective there, and when I can use it. I want to try practices like TDD just to see how they affect the design, and then decide if I like that force. I'll use hexagonal architecture on side projects just to see how it helps, and if it's worthwhile. In short, I want to continue to <i>study</i> the art of software development rather than trusting emotion-laden blog posts with something as serious as my skill.<p>I don't believe Rails is so special it warrants revisiting all of the lessons from the past we've learned about modularity, small interfaces, and abstraction. It's just a framework.
DHH and Uncle Bob are arguing past each other at this point.<p>Uncle Bob is saying that Rails is not your application, your business objects that contain all your logic shouldn't inherit from ActiveRecord::Base because that ties you to a specific version of a specific framework (have fun migrating to a new version of Rails!) and means you have to design and migrate your schema before you can run any tests on your model code. You should be able to test your logic in isolation and then plug it into the framework.<p>DHH is saying that if you're writing a Rails application, of course Rails is your application. Why waste hours adding layers of indirection that make your code harder to understand, just to make your tests run faster?<p>Of course if it's just a prototype, who cares? But I really agree with Uncle Bob that tightly coupling your application logic to (a specific version of) Rails/ActiveRecord is a bad idea if you want to make a long-lasting, maintainable application of any non-trivial size.
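As a rough sketch of Uncle Bob's side of the argument (class and rule names invented for illustration): keep the business rule in a plain Ruby object that never touches ActiveRecord, so it can be tested with no schema, no migration, and no database connection.

```ruby
# A PORO holding a business rule. It doesn't inherit from
# ActiveRecord::Base, so it survives framework upgrades unchanged.
class PersonPolicy
  def initialize(age:, email:)
    @age = age
    @email = email
  end

  # Pure logic: testable in isolation, in memory, in microseconds.
  def eligible_for_signup?
    adult? && plausible_email?
  end

  private

  def adult?
    @age.to_i >= 18
  end

  def plausible_email?
    @email.to_s.include?("@")
  end
end

# At the framework boundary, the Rails model just delegates:
#   PersonPolicy.new(age: person.age, email: person.email).eligible_for_signup?
```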
> The justification is largely one of speed.<p>Is it?<p>I was under the impression that you don't include them because a unit test is testing a very specific piece of code and not the dependencies around it. This is why you'll mock disk/db/network, just like you'll mock <i>other</i> pieces of code.
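A hand-rolled version of that idea, with hypothetical names: the unit under test takes its collaborator as an argument, so the test can hand it a stub in place of a real database-backed object.

```ruby
# The unit under test. It doesn't know or care whether line items
# come from a database, a network call, or a test stub.
class InvoiceTotaler
  def initialize(line_item_source)
    @source = line_item_source
  end

  def total
    @source.line_items.sum { |item| item[:price] }
  end
end

# In a unit test, a Struct stands in for the DB-backed source,
# so the test exercises only InvoiceTotaler's own logic.
FakeSource = Struct.new(:line_items)
stub = FakeSource.new([{ price: 5 }, { price: 7 }])
InvoiceTotaler.new(stub).total # => 12
```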
I sense that these posts are written for a specific audience, rebutting a set of arguments familiar to that audience, and that's why they seem so reductive and narrowly-applicable, but I can't quite grasp how much of the argument translates to the rest of the world.
I always assumed the point of mocking a database response was to ensure that you were testing <i>just</i> your code, and not also the existence of a database with the right schema, the ability to connect to it, as well as the correctness of the code that rolls back any side effects.
111 assertions in 4 seconds? Why not 4 milliseconds, or 4 microseconds? These must be some pretty huge assertions. I guess I'm missing something about modern programming...
There's something to be said for DHH's point here, even though he's confused about what a unit test is. Integration and end-to-end tests are much, much more important than unit tests. They actually test the application, not a contrived, isolated scenario.<p>Much of the testing activity and literature of late has been complaining about how brittle end-to-end tests are, because all the focus is on pure unit tests. This leads to defect pile-up at release time or at the end of an iteration. The smoother teams I've worked with, by contrast, ran end-to-end and integration tests all the time. Unit tests existed too, but only when there was sufficiently complex logic or an algorithm to warrant such a test, or when we used TDD to flesh out interfaces or interactions for a feature.<p>Many web applications don't have a lot of logic; they have a lot of database transactions with complex variations for updates or queries. So, <i>especially</i> if you have an ORM, which are notoriously fiddly... it makes sense to have the majority of tests (TDD or not) hit the database, since the code will only ever be executed WITH a database.<p>Mocking or decoupling the database can introduce wasteful assumptions and complexities that aren't needed in your code base. The only time it makes sense to decouple the database is if you expect you'll need polyglot persistence down the road and your chosen persistence framework won't help you.<p>I have worked with developers who prefer test cases that run in under 1 second on every save. To me it helps to have a set of unit tests that are in-memory and very fast, covering basic sanity checks like model integrity, input validation, and any in-memory algorithms. But the bulk of tests really need to exercise your code <i>as it will be used</i>, which often involves database queries. At worst, use an in-memory database that can load test data and execute tests in a couple of seconds.
"These days I can run the entire test suite for our Person model — 52 cases, 111 assertions — in just under 4 seconds from start to finish. Plenty fast enough for a great feedback cycle!"<p>4 seconds is really slow, actually, and enough to take you out of flow. With a PORO Person object, decoupled from the system, that number will easily be sub 500 ms and possibly much less.
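A rough, self-contained illustration (plain Ruby, with a made-up <i>Person</i> struct, and not a rigorous benchmark): 111 assertions against an in-memory object finish in a tiny fraction of a second, so nearly all of a 4-second run is going to the framework and the database, not the assertions themselves.

```ruby
require "benchmark"

# A PORO-style Person with no framework behind it.
Person = Struct.new(:first, :last) do
  def full_name
    "#{first} #{last}"
  end
end

person = Person.new("Ada", "Lovelace")

# Time 111 in-memory "assertions" against it.
elapsed = Benchmark.realtime do
  111.times { raise "fail" unless person.full_name == "Ada Lovelace" }
end

puts format("111 assertions in %.6f s", elapsed)
```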
A problem with running all your tests in a single transaction is that that's not what actually happens when your code runs in production. You will have multiple transactions (unless for some reason you wrap every single web request in a transaction, which I think is a terrible idea).<p>Several things behave differently: <i>now()</i> will always return the same time, deferrable constraints/triggers are useless, and you can't have another database connection looking at the test results or modifying the database (say you are testing deadlocks or concurrent updates, or you have code that opens a new database connection to write data outside the current transaction), etc.<p>It's fine for simple, vanilla ActiveRecord use where you aren't using lots of database features, I suppose.
> Oracle abomination<p>Okay... PostgreSQL is great but it still has a bit of catching up to do.<p>> ... run your MySQL<p>Wait, Oracle is an abomination but MySQL is okay?<p>> Before each test case, we do BEGIN TRANSACTION, and at the end of the case, we do ROLLBACK TRANSACTION. This is crazy fast, so there's no setup penalty.<p>You know what is just as easy? Making SQLite databases (aka files) for each test case. Copy a file, open it, delete it. It has the added benefit of allowing you to actually commit changes and not worry about rollback. There are some compatibility issues, and I'm not familiar with all those issues in a Rails context.
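The copy-a-file pattern could be sketched roughly like this (hypothetical helper; actually opening the copy with the sqlite3 gem is left as a comment so the sketch stays dependency-free):

```ruby
require "fileutils"
require "tmpdir"

# Give each test case a pristine copy of a template SQLite file.
# Commits hit the throwaway copy, so no rollback bookkeeping is needed.
def with_fresh_db(template_path)
  Dir.mktmpdir do |dir|
    db_path = File.join(dir, "test.sqlite3")
    FileUtils.cp(template_path, db_path) # fresh copy per test case
    yield db_path                        # e.g. SQLite3::Database.new(db_path)
  end                                    # tmpdir (and the copy) removed here
end

# Usage in a test:
#   with_fresh_db("fixtures/template.sqlite3") do |path|
#     db = SQLite3::Database.new(path)
#     # commit freely; the file is discarded afterwards
#   end
```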
For local testing on Postgres where you don't care about database reliability, you can also speed things up a lot by setting `fsync = off` and `synchronous_commit = off`.<p>(Never do that on a production database, of course!)
Testing dependencies is not a bug. If there's a reason not to test them -- you need to force an error condition, or the dependency is external (OAuth, etc.) -- then certainly mock away. But if there's no reason to mock a dependency other than the dogma of some definition of "unit test", it usually isn't worth it.<p>With every test, the questions to answer are: what bugs will this catch, and which will it miss? If you mock a dependency, you are introducing cases in which it will miss bugs, and that deserves a justification.
4 seconds is a long time. I'm reminded of the SVN fans who say things like "I can commit in 2 seconds, that's plenty fast enough". Which it is, until you've experienced the alternative, and then you can't imagine going back.<p>Also, all that separation isn't free. Sure, I don't <i>need</i> to run all my unit tests every time I make a change - but if they're fast enough that I can, that's much less cognitive overhead than having to think about which tests are relevant and press the correct button.
If you're using MySQL and need to run tests, setting the following option on our <i>DEVELOPMENT</i> server really sped things up:<p>innodb_flush_log_at_trx_commit = 0
Hitting the database or not, using fixtures introduces coupling into your test suite that's often more trouble than it's worth.<p><a href="http://interblah.net/the-problem-with-using-fixtures-in-rails" rel="nofollow">http://interblah.net/the-problem-with-using-fixtures-in-rail...</a>
So here's how I summarise the whole essay: "Hardware is cheap. Instead of making your software perform well, why not just throw more hardware at the problem."<p>Well, I've tried this before and it didn't work.