Software testing, and why I'm unhappy about it

78 points · by drothlis · over 2 years ago

10 comments

13of40 · over 2 years ago

I have a lot of bitter things to say about automated testing, having spent 14 years of my life trying to knead it into a legitimate profession, but here's the most significant:

Your test case is more useless than a turd in the middle of the dining room table unless you put a comment in front of it that explains what it assumes, what it attempts, and what you expect to happen as a result.

Because if you just throw in some code, you're only giving the poor bastard investigating it two puzzles to debug instead of one.
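A minimal sketch of that convention in Python/pytest; the function under test, its name, and its behavior are hypothetical stand-ins:

```python
# Hypothetical function and test, illustrating the "assumes / attempts /
# expects" preamble the commenter is asking for.

def parse_price(s: str) -> float:
    """Stand-in implementation so the example runs."""
    return float(s.replace("$", "").replace(",", ""))

def test_parse_price_strips_currency_symbol():
    # ASSUMES:  prices arrive as en_US-formatted strings like "$1,234.56".
    # ATTEMPTS: parse a price containing a currency symbol and commas.
    # EXPECTS:  the numeric value 1234.56; symbol and commas are ignored.
    assert parse_price("$1,234.56") == 1234.56
```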
mixedCase · over 2 years ago

> During day-to-day development, the important bit isn't that there are no failures. The important bit is that there are no regressions.

And that's why we test, and why tests shouldn't be allowed to fail.

Just because the scenarios described make testing hard does not change the reality of what makes tests valuable.

If pre-existing failures are halting the production pipeline and you don't like it, switch off trunk-based development and see if you like the waits and constant rebasing in large projects/teams. But don't eff with the bloody tests!
drewcoo · over 2 years ago

Doctor, it hurts when I punch myself in the head!

If testing that way is painful (and it is), then work with people to remove the pain. Tests are supposed to help developers, not constrain or punish them.

Put tests in the same repo as the SUT. Do more testing closer to the code (more service and component tests) and do less end-to-end testing. Ban "flakey" tests: they burn engineering time for questionable payoff.

Test failures can be thought of as "things developers should investigate." Make sure the tests are focused on telling you about those things as fast as possible.

Also, take the human out of the "wait for green, then submit PR" steps. Open a PR but don't alert everyone else about it until you run green, maybe?
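One hedged way to script that last idea, assuming a GitHub-hosted repo: open the PR as a draft (so nobody is pinged), wait for checks, then flip it to ready. This is an illustration using the `gh` CLI via subprocess, not the commenter's actual tooling:

```python
# Sketch: "open the PR quietly, surface it only once CI is green."
# Assumes the GitHub CLI (`gh`) is installed and authenticated.
import subprocess

def open_quiet_pr(title: str, body: str) -> None:
    # Create the PR as a draft so reviewers aren't notified yet.
    subprocess.run(
        ["gh", "pr", "create", "--draft", "--title", title, "--body", body],
        check=True,
    )
    # Block until all checks finish; a failing check exits non-zero,
    # which raises CalledProcessError and leaves the PR in draft.
    subprocess.run(["gh", "pr", "checks", "--watch"], check=True)
    # Checks are green: mark the draft as ready for review.
    subprocess.run(["gh", "pr", "ready"], check=True)
```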
gampleman · over 2 years ago

Seems to me like you're underinvesting in tooling. It's a mistake a lot of development shops make: you focus on your product, so you can't spend time building something completely orthogonal, and in the process you end up wasting man-years on a broken PR process instead of spending a month early on to build the tooling that would have removed the pain in the first place.
andreareina · over 2 years ago

The continuous testing is something I've thought about and it's a tricky one. We use property tests[1], so here's a quick stab at how I'd like it to work:

Test starts failing: immediately send a report with the failing input, then continue with the test-case minimisation and send another report when that finishes.

Concurrently, start up another long-running process to look for other failures, *skipping the input that caused the previous failure*. We do want new inputs for the same failure though. This is the tricky one. We could probably make it work by having the prop-test framework not reuse previously failing inputs, but that's one of the big strategies it uses to catch regressions.

[1] specifically, Hypothesis on Python
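A rough sketch of the "keep searching, but skip the already-reported input" part with Hypothesis; the property, the function under test, and the `KNOWN_FAILING` set are all hypothetical:

```python
# Sketch: long-running property test that avoids rediscovering a
# failure that has already been reported.
from hypothesis import assume, given, settings, strategies as st

KNOWN_FAILING = {""}  # hypothetical: inputs whose failures were already reported

def normalize(s: str) -> str:
    """Stand-in for the real system under test."""
    return s.strip().lower()

@settings(max_examples=100_000)  # keep hunting for *other* failures
@given(st.text())
def test_normalize_is_idempotent(s):
    # assume() discards the example, so the reported input is skipped
    # while everything else keeps being explored.
    assume(s not in KNOWN_FAILING)
    assert normalize(normalize(s)) == normalize(s)
```

Note the trade-off the commenter points out: skipping exact inputs is easy, but Hypothesis deliberately replays previously failing examples to catch regressions, so a real implementation would need to distinguish "reported, still open" from "fixed, watch for regression".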
ranting-moth · over 2 years ago

> The above development practice works well when the SUT and TB are both defined by the same code repository and are developed together.

I once witnessed a team creating an app, specs, and tests in three respective repositories, for no other reason than "each project should be in its own repository".

The added work/maintenance around that is crazy, for absolutely no gain in that case.
nurettin · over 2 years ago

If I am given the time and resources, I do this:

Phase 1. Code and test basic functions concerning any kind of arithmetic, mathematical distribution, state machines, file operations, and datetimes. This documents any assumptions and makes a solid foundation.

Phase 2. Write a simulation for generating randomized inputs to test the whole system. Run it for hours. If I can't generate the inputs, find as big a variety of inputs as possible. Collect any bugs, fix, repeat. This reduces the chances of finding real-time bugs by three orders of magnitude.

This has worked really well in the past, whether I'm working on games, parsers, or financial software. I don't conform to corporate whatever-driven testing patterns, because they are usually missing the crucial Phase 2 and budget time for Phase 1 incorrectly.
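A minimal sketch of what Phase 2 can look like; `run_system` and all parameters here are hypothetical stand-ins for the real system and its input space:

```python
# Sketch: hammer the whole system with randomized inputs for a fixed
# time budget and collect every failure, each of which can later be
# turned into a deterministic regression test.
import random
import time
import traceback

def run_system(events):
    """Stand-in for the real system under test."""
    state = 0
    for e in events:
        state = (state + e) % 7
    return state

def simulate(seconds=60, seed=None):
    rng = random.Random(seed)  # seed it so a run can be reproduced
    failures = []
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        # Random length and contents; record the exact input on failure.
        events = [rng.randint(-10**6, 10**6) for _ in range(rng.randint(1, 100))]
        try:
            run_system(events)
        except Exception:
            failures.append((events, traceback.format_exc()))
    return failures

if __name__ == "__main__":
    print(f"{len(simulate(seconds=5))} failures found")
```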
theamk · over 2 years ago

The author's problem is pretty simple: the test repo is required for pre-merge tests to pass, but it can be updated independently, without having pre-merge tests pass.

And the answer is pretty simple: pin the specific test-repo version! Use lockfiles, or git submodules, or put "cd tests && git checkout 3e524575cc61" in your CI config file, _and keep it in the same repo as the source code_ (that part is very important!).

This solves all of the author's problems:

> new test case is added to the conformance test suite, but that test happens to fail. Suddenly nobody can submit any changes anymore.

The conformance test suite is pinned, so the new test is not used. A separate PR has to update the conformance-test-suite version/revision, and it must go through the regular driver PR process and therefore must pass. Practically, this is a PR with 2 changes: update the pin and disable the new test.

> are you going to remember to update that exclusion list?

That's why you use an "expect fail" list (not an exclusion list) and keep it in the driver's dir. As you submit your PR you might see a failure saying: "congrats, test X which was expect-fail is now passing! Please remove it from the list". You'll need to make one more PR revision, but then you get working tests.

> allowing tests to be marked as "expected to fail". But they typically also assume that the TB can be changed in lockstep with the SUT and fall on their face when that isn't the case.

And if your TB cannot be changed in lockstep with the SUT, you are going to have a truly miserable time. You cannot even reproduce the problems of the past! So make sure your kernel is known or at least recorded, and your repos are pinned. Ideally the whole machine image, packages and all, is archived somehow -- maybe via docker, or a raw disk image, or some sort of ostree system.

> Problem #2 is that good test coverage means that tests take a very long time to run.

The described system sounds very nice, and I would love to have something like this. I suspect it will be non-trivial to get working, however. But meanwhile, there is a manual solution: have more than one test suite. "Pre-merge" tests run before each merge and contain a small subset of testing. A bigger "continuous" test suite (if you use physical machines) or "every X hours" suite (if you use some sort of auto-scaling cloud) will run a bigger set of tests, and can be triggered manually on PRs if a developer suspects the PR is especially risky.

You can even have multiple levels (pre-merge, once per hour, 4 times per day), but this is often more trouble than it's worth.

And of course it is absolutely critical to have reproducible tests first -- if you come in to work and find a bunch of continuous failures, you want to be able to re-run with extra debugging or bisect what happened.
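The "expect fail" list maps naturally onto pytest's strict xfail, for projects in that ecosystem. A sketch with a hypothetical `expected-failures.txt` kept next to the driver code, so bumping the pinned test-suite revision and editing the list happen in the same PR:

```python
# conftest.py -- sketch of an "expect fail" list (file name hypothetical).
import pathlib
import pytest

EXPECTED_FAILURES = set(
    pathlib.Path(__file__).with_name("expected-failures.txt").read_text().split()
)

def pytest_collection_modifyitems(items):
    for item in items:
        if item.nodeid in EXPECTED_FAILURES:
            # strict=True makes an unexpectedly *passing* test fail the
            # run -- the "congrats, please remove it from the list" nudge.
            item.add_marker(pytest.mark.xfail(strict=True, reason="known failure"))
```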
drothlis · over 2 years ago

Some good ideas here for when your tests are in a separate repo from the system under test (GPUs/drivers/compilers in the case of the author, but it's applicable to a variety of industries).
t00 · over 2 years ago

Have I misunderstood the article, or is it just a matter of separating feature branches and putting the relevant tests in a feature branch while keeping the regression tests in the master branch?