
Why Rust nextest is process-per-test

114 points by jicea, 4 months ago

11 comments

OptionOfT, 4 months ago
I prefer per-process over the alternatives.

When you write code you have the choice to go per process, per thread, or sequential.

The problem is that running multiple tests in a shared space doesn't necessarily match the world in which this code is run.

Per-process testing allows you to design a test that matches the usage of your codebase. Per-thread already constrains you.

For example: we might elect to write a job as a process that runs on demand, and the library we use has a memory leak that can't be fixed in reasonable time. Since we write it as a process that gets restarted, we manage to constrain the behavior.

Running multiple tests in multiple threads might not work here, because there is a shared space that is retained and isn't representative of real-world usage.

Concurrency is a feature of your software that you need to code for. So if you have multiple things happening, that should be part of your test harness.

A test harness forcing you to think about it isn't always a desirable trait.

That said, I have worked on a codebase where we discovered bugs because the tests were run in parallel in a shared space.
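As a rough sketch of that leaky-job idea (my illustration, not the commenter's code; the leaky-job binary name and flag are hypothetical), the job is spawned as a fresh child process on every run, so anything its library leaks is reclaimed when the process exits:

    use std::process::Command;

    fn run_job_once() -> std::io::Result<bool> {
        // Each invocation gets its own process, so a leak inside the job is
        // bounded by that process's lifetime.
        let status = Command::new("leaky-job").arg("--run-once").status()?;
        Ok(status.success())
    }

    fn main() -> std::io::Result<()> {
        for _ in 0..3 {
            assert!(run_job_once()?, "job run failed");
        }
        Ok(())
    }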
o11c, 4 months ago
A much better model still is a mixture.

* Use multiple processes, but multiple tests per process as well.

* Randomly split and order the tests on every run, to encourage catching flakiness. Print the seed for this as part of the test results for reproducibility.

* Tag your tests a lot (this is one place where, as many languages provide, "test classes" or other grouping is very useful). Smoke tests should run before all other tests, and all in one process (though still in random order). Known long-running tests should be tagged to use a dedicated process and mostly start early (longest first), except that a few cores should be reserved to work through the fast tests so they can fail early.

* If you need to kill a timed-out test even though other tests are still running in the same process, just kill the process anyway and automatically run the other tests again.

* Have the harness provide fixtures like "this is a temporary directory, you don't have to worry about clearing it on failure", so tests don't have to worry about cleaning up if killed. Actually, why not just randomly kill a few tests regardless?

I wrote some more about tests here [1], but I'm not sure I'll update it any more because of GitHub's shitty 2FA-but-only-the-inconvenience-not-the-security requirement.

[1]: https://gist.github.com/o11c/ef8f0886d5967dfebc3d
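As a toy illustration of the seed-printing idea above (my sketch, not code from nextest or the linked gist; it assumes the rand crate, and run_test plus the test names are hypothetical placeholders):

    use rand::rngs::StdRng;
    use rand::seq::SliceRandom;
    use rand::SeedableRng;

    fn run_test(name: &str) -> bool {
        // Placeholder: dispatch to the real test function by name.
        println!("running {name}");
        true
    }

    fn main() {
        // Reuse a seed from the environment to reproduce a failing order;
        // otherwise derive a fresh one and print it with the results.
        let seed: u64 = std::env::var("TEST_SEED")
            .ok()
            .and_then(|s| s.parse().ok())
            .unwrap_or_else(|| {
                std::time::SystemTime::now()
                    .duration_since(std::time::UNIX_EPOCH)
                    .unwrap()
                    .as_nanos() as u64
            });
        println!("test order seed: {seed}");

        let mut tests = vec!["parses_empty_input", "handles_timeout", "writes_tempfile"];
        tests.shuffle(&mut StdRng::seed_from_u64(seed));

        for t in tests {
            assert!(run_test(t), "test {t} failed (reproduce with TEST_SEED={seed})");
        }
    }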
cortesi, 4 months ago
Nextest is one of the very small handful of tools I use dozens or hundreds of times a day. Parallelism can reduce test suite execution time significantly, depending on your project, and has saved me cumulative days of my life. The output is nicer, test filtering is nicer, leak detection is great, and the developer is friendly and responsive. Thanks sunshowers!

The one thing we've had to be aware of is that the execution model means there can sometimes be differences in behaviour between nextest and cargo test. Very occasionally there are tests that fail in cargo test but succeed in nextest due to better isolation. In practice this just means that we run cargo test in CI.
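A hypothetical illustration of that last point (my example, not from the comment): two tests that mutate the same process-global counter. Run in one shared process by cargo test, whichever test runs second sees a stale count and fails; run process-per-test by nextest, each test starts from a fresh process and both pass.

    use std::sync::atomic::{AtomicUsize, Ordering};

    // Process-global state shared by every test that runs in this process.
    static CALLS: AtomicUsize = AtomicUsize::new(0);

    fn record_call() -> usize {
        CALLS.fetch_add(1, Ordering::SeqCst) + 1
    }

    #[test]
    fn first_call_is_counted() {
        // Assumes no other test has touched the counter yet.
        assert_eq!(record_call(), 1);
    }

    #[test]
    fn counter_starts_untouched() {
        // Same assumption; in a shared process one of these two must fail.
        assert_eq!(record_call(), 1);
    }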
marky1991, 4 months ago
I don't understand why he jumps straight from 'one test per process' to 'one test per thread' as the alternative.

I'm not actually clear what he means by 'test', to be honest, but I assume he means 'a single test function that can either pass or fail'.

E.g. in Python (nose):

    class TestSomething:
        def test_A(): ...
        def test_B(): ...

I'm assuming he means test_A. But why not run all of TestSomething in a process?

Honestly, I think the idea of having tests share state is bad to begin with (for things that truly matter; e.g. if the outcome of your test depends on the state of sys.modules, something else is horribly wrong), so I would never make this tradeoff to benefit a scenario that I think should never be done.

Even if we were being absolute purists, this still hasn't solved the problem the second your process communicates with any other process (or server). And that problem seems largely unsolvable, short of mocking.

Basically, I'm not convinced this is a good tradeoff, because the idea of creating thousands and thousands of processes to run a test suite, even on Linux, sounds like a bad idea. (And at work, it would definitely be a bad idea, for performance reasons.)
Ericson2314, 4 months ago
This is good for an entirely different reason, which is running cross-compiled tests in an emulator.

That is especially good for bare metal. If you don't have a global allocator, have limited RAM, etc., you might not be able to write the test harness as part of the guest program at all! So you want to move as much logic to the host program as possible, and then run as little as a few instructions (!) in the guest program.

See https://github.com/gz/rust-x86 for an example of doing some of this.
pjc50, 4 months ago
This will be horrendously slow on Windows.
bfrog, 4 months ago
There's a similar test library for C that does this, and it's great. I love the concept; it works well most of the time.
sedatk, 4 months ago
> Memory corruption in one test doesn't cause others to behave erratically. One test segfaulting does not take down a bunch of other tests.

Is "memory corruption" an issue with Rust? Also, if one test segfaults, isn't that a reason to halt the run, because something got seriously broken?
amelius, 4 months ago
According to some of these reasons, every library should run in its own process too.
zbentley, 4 months ago
This article is a good primer on why process isolation is more robust/separated than threads/coroutines in general, though ironically I don't think it fully justifies why process isolation is better for tests as a specific use case benefiting from that isolation.

For tests specifically, some considerations I found to be missing:

- Given speed requirements for tests, and representativeness requirements, it's often beneficial to refrain from too much isolation so that multiple tests can use/exercise paths that rely on pre-primed in-memory state (caches, open sockets, etc.). It's odd that the article calls out global-ish state mutation as a specific benefit of process isolation, given that it's often substantially faster and more representative of real production environments to run tests in the presence of already-primed global state. Other commenters have pointed this out.

- I wish the article were clearer about threads as an alternative isolation mechanism for sequential tests versus threads as a means of parallelizing tests. If tests really do need to be run in parallel, processes are indeed the way to go in many cases, since thread-parallel tests often test a more stringent requirement than production would. Consider, for example, a global connection pool which is primed sequentially on webserver start, before the webserver begins (maybe parallel) request servicing. That setup code doesn't need to be thread-safe, so using threads to test it in parallel may surface concurrency issues that are not realistic.

- On the other hand, enough benefits are lost when running clean-slate test-per-process that it's sometimes more appropriate to have the test harness orchestrate a series of parallel executors and schedule multiple tests to each one. Many testing frameworks support this on other platforms; I'm not as sure about Rust. My testing needs tend to be very simple (and, shamefully, my coverage of fragile code lower than it should be), so take this with a grain of salt.

- Many testing scenarios want to abort testing on the first failure, in which case processes vs. threads is largely moot. If you run your tests with a thread or otherwise-backgrounded routine that can observe a timeout, it doesn't matter whether your test harness can reliably kill the test and keep going; aborting the entire test harness (including all processes/threads involved) is sufficient in those cases.

- Debugging tools are often friendlier to in-process test code. It's usually possible to get debuggers to understand process-based test harnesses, but this isn't usually set up by default. If you want to breakpoint/debug during testing, running your tests in-process and on the main thread (with a background thread aborting the harness or auto-starting a debugger on timeout) is generally the most debugger-friendly practice. This is true on most platforms, not just Rust.

- fork() is a middle ground here as well, which can be slow (though mitigations exist) but can also speed things up considerably by sharing, e.g., primed in-memory caches and socket state with tests when they run. Given fork()'s sharp edges around file-handle sharing, this, too, works best with sequential rather than parallel test execution. Depending on the libraries in use in code under test, though, this is often more trouble than it's worth. Dealing with a mixture of fork-aware and fork-unaware code is miserable; better to do as the article suggests if you find yourself in that situation. How to set up library/reusable code to hit the right balance between fork-awareness/fork-safety and environment-agnosticism is a big and complicated question with no easy answers (and also excludes the easy rejoinder of "fork is obsolete/bad/harmful; don't bother supporting it and don't use it, just read Baumann et al.!").

- In many ways, this article makes a good case for something it doesn't explicitly mention: a means of annotating/interrogating in-memory global state, like caches/lazy_static/connections, used by code under test. With such an annotation, it's relatively easy to let invocations of the test harness choose how they want to work: reuse a process for testing and reset global state before each test, have the harness itself (rather than tests by side effect) set up the global state, run each test with and/or without pre-primed global state and see if behavior differs, etc. Annotating such global state interactions isn't trivial, though, if third-party code is in the mix. A robust combination of annotations in first-party code and a clear place to manually observe/prime/reset-if-possible state that isn't annotated is a good harness feature to strive for. Even if you don't get 100% of the way there, incremental progress in this direction yields considerable rewards.
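Regarding the pre-primed global state point in the comment above, a minimal sketch (illustrative names, not from the article or the comment): with threads sharing one process, the expensive init below runs once and later tests reuse it; with one process per test, every test pays the priming cost again.

    use std::collections::HashMap;
    use std::sync::OnceLock;

    static CACHE: OnceLock<HashMap<String, String>> = OnceLock::new();

    fn primed_cache() -> &'static HashMap<String, String> {
        CACHE.get_or_init(|| {
            // Stand-in for expensive setup: loading fixtures, opening a pool, etc.
            let mut m = HashMap::new();
            m.insert("config".to_string(), "value".to_string());
            m
        })
    }

    #[test]
    fn lookup_hits_primed_cache() {
        assert_eq!(primed_cache().get("config").map(String::as_str), Some("value"));
    }

    #[test]
    fn second_lookup_reuses_priming() {
        // In one shared process this reuses the init above; per-process it re-runs it.
        assert!(primed_cache().contains_key("config"));
    }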
grayhatter, 4 months ago
Restating the exact same thing four different times in the first few paragraphs is an LLM feature, right?