Most of this is pretty good. Some of it is a little old-school.

There are a couple of things I disagree with, speaking from my own experience as a test lead. The biggest is the definition of "a good test case."

If you define your test cases at the specific UI-widget level, they have to be updated whenever any UI change happens, period. And since you usually end up with a number of test scenarios for a given area of the UI, a single UI update usually means a whole batch of test updates.

That carries the same maintenance burden as UI automation tests, but without the benefit of a widget map, reusable functions, or an automated callout when the test is wrong. And as anyone familiar with the test automation pyramid knows, UI automation tests have a huge maintenance-scaling problem even *with* those. Echoing that scaling problem into your test documentation is a huge mistake.

Tests should be defined at the level that makes the intent of the test clear to an experienced tester or user--no more, no less--unless for some reason you're shipping them to an inexperienced audience like an outsourcing firm. That usually means being very specific about input data, and *if* it's microtesting UI behavior (tab from here, end up here), being specific about that. Everything else, generalize. You do not need to spell out every "type 123abc in this field" step for the person who's been running tests for you on a daily basis.

Maybe you don't write it as "check that sign-in works," but more like "check that sign-in works with a valid username/pw, at no more than 2 seconds latency."

But you know, if you don't care about latency, just say "check that valid sign-in works." That's *fine*. You only have to get specific about what kind of *invalid* data should trip it up, and even then "without a digit," "without a symbol," "with fewer than 8 characters" is all fine too. The test will get executed correctly if your tester isn't a total loss.

Also, people suck at manual regression testing but are great at shortcutting, and they'll wander off a script that specific anyway. So you may as well accept that and write tests at the level they'd actually execute them. What you lose in repeatability, you 100% gain in maintenance and productivity.

Repeatability is overrated when humans do it: take advantage of the fuzziness. It's more coverage. Just make sure they can write a good enough bug report to tell you what they actually did that caused the issue. If you want that level of repeatability and definition in your tests, do it with automation. At least there you can define a widget map and some flow-encapsulation methods (sketched below).

Honestly, if I didn't get the job with at least the nicer, less ranty version of that answer, it's probably not the right company for me anyway. Unless you're legally required to be IEEE-compliant, or you're an aerospace/medical shop, wasting time benefits nobody.

One of the most exciting movements in QA is context-driven testing. Its 7 tenets are laid out here:

http://context-driven-testing.com/

...but they all boil down to "do the right thing for what you want done, and quit being so damned dogmatic about it." That includes not generating artifacts nobody else will read, and not defining tests past the level of detail needed to make them effective.
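Since I keep bringing up widget maps and flow-encapsulation methods, here's roughly what I mean, as a Selenium/Python sketch. Every name in it (SignInPage, the locators, the account-menu check) is invented for illustration, not taken from any real app:

  # pages/sign_in.py -- sketch of a page object: a widget map plus flow methods
  from selenium.webdriver.common.by import By

  class SignInPage:
      # Widget map: locators live in exactly one place, so a UI change
      # means one edit here instead of N edits scattered across tests.
      USERNAME = (By.ID, "username")
      PASSWORD = (By.ID, "password")
      SUBMIT = (By.CSS_SELECTOR, "button[type=submit]")
      ACCOUNT_MENU = (By.ID, "account-menu")  # only rendered when signed in

      def __init__(self, driver):
          self.driver = driver

      def sign_in(self, user, pw):
          # Flow-encapsulation method: tests say *what*, this says *how*.
          self.driver.find_element(*self.USERNAME).send_keys(user)
          self.driver.find_element(*self.PASSWORD).send_keys(pw)
          self.driver.find_element(*self.SUBMIT).click()

      def signed_in(self):
          # find_elements returns [] instead of raising when nothing matches
          return bool(self.driver.find_elements(*self.ACCOUNT_MENU))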
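On top of that, the "valid sign-in at no more than 2 seconds latency, plus the invalid-password variants" case from above comes out to something like this (pytest; I'm assuming a "browser" fixture in your conftest.py that yields a ready WebDriver):

  # test_sign_in.py -- sketch; assumes the SignInPage object above and a
  # "browser" fixture that hands each test a configured WebDriver.
  import time

  import pytest

  from pages.sign_in import SignInPage

  VALID_USER, VALID_PW = "qa_user", "CorrectHorse9!"  # made-up test data

  def test_valid_sign_in_under_two_seconds(browser):
      page = SignInPage(browser)
      start = time.monotonic()
      page.sign_in(VALID_USER, VALID_PW)
      assert page.signed_in(), "valid credentials should sign in"
      assert time.monotonic() - start <= 2.0, "sign-in took over 2 seconds"

  @pytest.mark.parametrize("bad_pw", [
      "NoDigitsHere!",  # without a digit
      "NoSymbols123",   # without a symbol
      "Ab1!xyz",        # fewer than 8 characters
  ])
  def test_invalid_password_is_rejected(browser, bad_pw):
      page = SignInPage(browser)
      page.sign_in(VALID_USER, bad_pw)
      assert not page.signed_in(), "invalid password should be rejected"

Note the tests read at about the same level of detail as the written test case. That's the point: the "how" lives in one place, the intent lives in the test.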
QA has a crappy reputation because it's slow, ponderous, and often not very effective. Maybe part of that is because people are updating overly detailed docs all the time because other people told them they should, rather than because they really need to.

If you need that level of product documentation, write the product documentation and point the tests at it. At least that way you have a single point of truth for flow. Fragmenting and repeating the docs across a bunch of disparate tests is just like copy-paste coding: a maintenance nightmare.

I'm personally in the middle of helping my org move to checklist-based tests. They can be used as loose regression tests, or as missions for exploratory testing, and they only have to be updated when the basic layout of the app changes instead of on every single flow detail. It'll get us out from under the currently crushing maintenance of reviewing, and potentially updating, 6000+ micro-documented UI tests on every release, and that will be a huge win.

People really should compare this sort of thing to code maintenance. We've learned so much there that applies here as well.
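To make "checklist-based" concrete, an entry looks more or less like this (invented example, not copied from our actual docs):

  Sign-in area:
    [ ] valid sign-in works
    [ ] bad password rejected, with a useful error message
    [ ] password rules enforced (digit, symbol, 8+ characters)
    [ ] "forgot password" round trip completes

Each line is a mission, not a script: the tester decides the exact clicks, and the list only changes when sign-in itself changes.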