Tests that sometimes fail

217 points by sams99, almost 6 years ago

35 comments

matharmin, almost 6 years ago
We've had a couple of cases of flaky tests failing builds over the last two years at my company. Most often it's browser / end-to-end type tests (e.g. selenium-style tests) that are the most flaky. Many of them only fail in 1-3% of cases, but if you have enough of them the chance of a failing build is significant.

If you have entire builds that are flaky, you end up training developers to just click "rebuild" the first one or two times a build fails, which can drastically increase the time before realizing the build is actually broken.

An important realization is that unit testing is not a good tool for testing flakiness of your main code - it is simply not a reliable indicator of failing code. Most of the time it's the test itself that is flaky, and it's not worth your time making every single test 100% reliable.

Some things we've implemented that help a lot:

1. Have a system to reproduce the random failures. It took about a day to build tooling that can run, say, 100 instances of any test suite in parallel in CircleCI, and record the failure rate of individual tests.

2. If a test has a failure rate of > 10%, it indicates an issue in that test that should be fixed. By fixing these tests, we've found a couple of techniques to increase the overall robustness of our tests.

3. If a test has a failure rate of < 3%, it is likely not worth your time fixing it. For these, we retry each failing test up to three times. Not all test frameworks support retrying out of the box, but you can usually find a workaround. The retries can be restricted to specific tests or classes of tests if needed (e.g. only retry browser-based tests).
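
A minimal sketch of the two ideas above (measuring a test's failure rate by repeated runs, and retrying low-rate flakes a bounded number of times), assuming the test is available as a plain Python callable; the helper names are illustrative rather than taken from any particular framework.

    import random

    def measure_failure_rate(test_fn, runs=100):
        """Run test_fn repeatedly and report the fraction of runs that raised."""
        failures = 0
        for _ in range(runs):
            try:
                test_fn()
            except AssertionError:
                failures += 1
        return failures / runs

    def run_with_retries(test_fn, retries=3):
        """Retry a known-flaky test a bounded number of times before failing the build."""
        for attempt in range(1, retries + 1):
            try:
                test_fn()
                return attempt        # passed on this attempt
            except AssertionError:
                if attempt == retries:
                    raise             # still failing after all retries: surface it

    # Example: a test that fails roughly 2% of the time.
    def sometimes_fails():
        assert random.random() > 0.02

    print(measure_failure_rate(sometimes_fails))   # e.g. 0.03
    print(run_with_retries(sometimes_fails))       # usually 1
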
mceachen, almost 6 years ago
Every company I've founded or worked for has struggled with flaky tests.

Twitter had a comprehensive browser and system test suite that took about an hour to run (and they had a large CI worker cluster). Flaky tests could and did scuttle deploys. It was a never-ending struggle to keep CI green, but most engineers saw de-flaking (not just deleting the test) as a critical task.

PhotoStructure has an 8-job GitLab CI pipeline that runs on macOS, Windows, and Linux. Keeping the ~3,000 (and growing) tests passing reliably has proven to be a non-trivial task, and researching why a given task is flaky on one OS versus another has almost invariably led to the discovery and hardening of edge and corner conditions.

It seems that TFA only touched on set ordering, incomplete db resets and time issues. There are *many* other spectres to fight as soon as you deal with multi-process systems on multiple OSes, including file system case sensitivity, incomplete file system resets, fork behavior and child process management, and network and stream management.

There are several aspects I added to stabilize CI, including robust shutdown and child process management systems. I can't say I would have prioritized those things if I didn't have tests, but now that I have them, I'm glad they're there.
joosters, almost 6 years ago
In an old job, we had a frustrating test that passed well over 99 times in 100. It was shrugged off for a very long time until a developer eventually tracked it down to code that was generating a random SSL key pair. If the first byte of the key was 0, faulty code elsewhere would mishandle the key and the test failed.

Keeping the randomness in the test was the key factor in tracking down this obscure bug. If the test had been made completely deterministic, the test harness would never have discovered the problem. So although repeatable tests are in most cases a good thing, non-determinism can unearth problems. The trick is how to do this without sucking up huge amounts of bug-tracking time...

(Much effort was spent in making the test repeatable during debugging, but of course the crypto code elsewhere was deliberately trying to get as much randomness as it could source...)
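
One common compromise between the two extremes described here is to keep the test random but log the seed, so any failing run can be replayed exactly. A minimal sketch, assuming Python's random module is the only source of randomness in play (real crypto code that pulls from the OS entropy pool, as in this story, cannot be pinned this way); handles_key is a hypothetical stand-in for the code under test:

    import os
    import random

    def test_with_logged_seed():
        # Pick a fresh seed per run, but print it so a failure can be reproduced
        # by re-running with FLAKY_SEED set in the environment.
        seed = int(os.environ.get("FLAKY_SEED", random.randrange(2**32)))
        print(f"random seed: {seed}")
        rng = random.Random(seed)

        key = bytes(rng.randrange(256) for _ in range(32))  # stand-in for a random key
        assert handles_key(key)                             # hypothetical code under test

    def handles_key(key: bytes) -> bool:
        # Illustrative bug of the kind described above: a leading zero byte is mishandled.
        return key[0] != 0
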
pytester, almost 6 years ago
What I found to be the major reasons for flaky tests:

* Non-determinism in the code - e.g. a select without an order by, random number generators, hashmaps turned into lists, etc. Fixed by turning non-deterministic code into deterministic code, testing for properties rather than exact outcomes, or isolating and mocking the non-deterministic code.

* Lack of control over the environment - e.g. calling a third-party service that goes down occasionally, or using a locally run database that gets periodically upgraded by the package manager. Fixed by gradually bringing everything required to run your software under control (e.g. installing specific versions without the package manager, mocking 3rd party services, intercepting syscalls that get the time and replacing them with consistent values).

* Race conditions - in this case the test should really repeat the same actions so that it consistently catches the flakiness.
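
A small sketch of the fixes in the first bullet above, assuming a query whose row order the database does not guarantee: either make the query deterministic with an ORDER BY, or assert a property (set membership) instead of an exact order. The table and column names are made up for illustration.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)", [(2, "bob"), (1, "alice")])

    # Flaky: without ORDER BY the database is free to return rows in any order.
    rows = conn.execute("SELECT name FROM users").fetchall()

    # Fix 1: make the code deterministic.
    ordered = conn.execute("SELECT name FROM users ORDER BY id").fetchall()
    assert ordered == [("alice",), ("bob",)]

    # Fix 2: test a property (set membership) rather than an exact ordering.
    assert {name for (name,) in rows} == {"alice", "bob"}
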
roland35, almost 6 years ago
There was one weird bug reported to me in a microcontroller-based project I was recently working on which shut off half the LCD screen. I wrote a test which blasted the LCD screen with random characters and commands and did not see the same error for a while... but it finally happened during a test! I was then able to see that when I was checking the LCD state between commands I would only toggle the chip select for the first half of the LCD (there were 2 driver chips built into the screen and you had to read each chip individually). There would be no way I could have recreated the bug without automated tests.

I have had to deal with non-deterministic tests in my embedded systems and robotic test suites and have found a few solutions for dealing with them:

- Do a full power reset between tests if possible, or do it between test suites when you can combine tests into suites that don't require a complete clean slate.

- Reset all settings and parameters between tests. A lot of embedded systems have settings saved in flash or EEPROM which can affect all sorts of behaviors, so make sure the device always starts at the default settings.

- Have test commands for all system inputs and initialize all inputs to known values.

- Have test modes for all system outputs, such as motors. If there is a motor with a speed encoder, you can make the test mode for the speed encoder input match the commanded motor value, or also be able to trigger error inputs such as a stalled motor.

- Use a user input/dialog option to get user feedback as part of the test (for things like the LCD bug).

Robot Framework is a great tool which can do all these things with a custom Python library! I think testing embedded systems is generally much harder, so people rarely do it, but it is a great tool which can often uncover these flaky errors.
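
For context, a Robot Framework custom library is just a Python class whose public methods become keywords. A minimal sketch of what reset-between-tests keywords along the lines above might look like; DeviceLink is a hypothetical stand-in for the real serial/JTAG transport to the board:

    class DeviceLink:
        # Hypothetical stand-in for the real transport to the hardware.
        def power_off(self): print("power off")
        def power_on(self): print("power on")
        def wait_for_boot(self, timeout): print(f"waiting for boot (<= {timeout}s)")
        def send_command(self, cmd): print(f"> {cmd}")

    class EmbeddedTestLibrary:
        """Robot Framework keywords: each public method becomes a keyword
        (e.g. `Power Cycle Device`) once the class is imported with `Library`."""

        def __init__(self, port="/dev/ttyUSB0"):
            self._dev = DeviceLink()          # would open `port` on real hardware

        def power_cycle_device(self):
            """Full power reset between suites (or between tests, if fast enough)."""
            self._dev.power_off()
            self._dev.power_on()
            self._dev.wait_for_boot(timeout=10)

        def restore_default_settings(self):
            """Clear flash/EEPROM-backed settings so each test starts from defaults."""
            self._dev.send_command("settings reset")

        def initialize_inputs_to_known_values(self):
            """Drive every simulated input to a fixed baseline before the test."""
            self._dev.send_command("testmode inputs default")

In a suite this would be pulled in with `Library    EmbeddedTestLibrary` and its methods called as keywords, e.g. `Power Cycle Device` in a suite or test setup.
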
darekkay, almost 6 years ago
Related stories: "unit tests fail when run in Australia" [1] and "the case of the 500-mile email" [2]. There is a whole GitHub repository dedicated to some very interesting debugging stories [3].

[1] https://github.com/angular/angular.js/issues/5017

[2] http://www.ibiblio.org/harris/500milemail.html

[3] https://github.com/danluu/debugging-stories
zubspace, almost 6 years ago
We call them Flip Floppers.

We do a lot of integration testing, more so than unit testing, and those tests, which randomly fail, are a real headache.

One thing I learned is that setting up tests correctly, independent of each other, is hard. It is even harder if databases or local and remote services are involved, or if your software communicates with other software. You need to start those dependencies and take care of resetting their state, but there's always something: services sometimes take longer to start, file handles don't close on time, code or applications keep running when another test fails... etc, etc...

There are obvious solutions: mocking everything, removing global state, writing more robust test setup code... But who has time for this? Fixing things correctly can take even more time and usually does not guarantee that some future change won't disregard your correct code...
lukego, almost 6 years ago
I have learned to love non-deterministic tests.

The world is non-deterministic. A test suite that can represent non-determinism is much more powerful than one that cannot. To paraphrase Dijkstra, "Determinism is just a special case of non-determinism, and not a very interesting one at that."

If a test is non-deterministic then the test framework needs to characterize the distribution of results for that test. For example: "Branch A fails 11% (+/- 2%) of the time and Branch B fails 64% (+/- 2%) of the time." Once you are able to measure non-determinism you can also effectively optimize it away, and you start looking for ways to introduce more of it into your test suites, e.g. running each test on a random CPU/distro/kernel.
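
A sketch of the kind of characterization described above, assuming the framework can re-run a test many times and count failures. The normal-approximation interval below is the simplest choice, not necessarily what any particular framework uses:

    import math

    def failure_rate_with_interval(failures: int, runs: int, z: float = 1.96):
        """Estimate a test's failure probability with a ~95% normal-approximation interval."""
        p = failures / runs
        half_width = z * math.sqrt(p * (1 - p) / runs)
        return p, half_width

    # "Branch A fails 11% (+/- 2%)"-style report from raw counts.
    p, hw = failure_rate_with_interval(failures=110, runs=1000)
    print(f"fails {p:.0%} (+/- {hw:.0%}) of the time")   # fails 11% (+/- 2%) of the time
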
throwaway5752, almost 6 years ago
Call it a pet peeve, but if we call it "chaos engineering" it costs a ton and gets people conference talks when a sporadic system integration issue is found. But if the same thing happens in a plain old CI run, half the time it will be ignored or flagged as flaky.
mekane8, almost 6 years ago
As soon as I saw that whole section on database-related flakiness my mind went from "flaky unit tests" to "tests called unit tests that are actually integration tests". I worked on a team where we labored under that misconception for a long, long time. By the time we finally realized that many of the tests in our suite were integration tests and not unit tests, it was too late to change (due to budget and timeline pressure).

I really like the different approaches to dealing with these flaky tests; that is a good list.
jonthepirate, almost 6 years ago
Hi - I'm Jon, creator of "Flaptastic" (https://www.flaptastic.com/) and a passionate advocate for unit test health.

Having coded at both Lyft and DoorDash, I noticed both companies had the exact same unit test health problems, and I was forced to manually come up with ways to make the CI/CD reliable in both settings.

In my experience, most people want a turnkey solution to get them to a healthier place with their unit testing. "Flaptastic" is a flaky unit test recognition engine written in a way that anybody can use it to clean up their flaky unit tests no matter what CI/CD or test suite you're already using.

Flaptastic is a test suite plugin that works with a SaaS backend able to differentiate between a unit test that failed due to broken application code *versus* tests that are failing with no merit, only because the tests are not written well. Our killer feature is a "kill switch" to instantly disable any unit test that you know is unhealthy, with an option to un-kill it later when you've fixed the problem. The reason this is so powerful is that when you kill an unhealthy test, you immediately unblock the whole team.

We're now working on a way to accept the junit.xml file from your test suite. We can run it through the flap recognition engine, allowing you to decide what to do next if you know that all of the tests that failed did so due to known flaky test patterns.

If Flaptastic seems interesting, contact us via the chat widget and we'll let you use it for free indefinitely (for trial purposes) to decide whether it makes your life easier.
andrey_utkin, almost 6 years ago
At Undo we develop "software flight recorder" technology - basically, think of the `rr` reversible debugger, which is our open source competitor.

One particular use case for Undo (besides recording software bugs per se) is recording the execution of tests. Huge time saver. We do this ourselves - when a test fails in CI, engineers can download a recording file of the failing test and investigate it with our reversible debugger.
bhaak, almost 6 years ago
At our place, we call them "peuteterli" (loosely translated: "could-be-ish", constructed from the French "peut être" with the local German diminutive -li slapped on).

For the ID issue I have a monkey patch for ActiveRecord:

    if ["test", "cucumber"].include? Rails.env
      class ActiveRecord::Base
        before_create :set_id

        def set_id
          self.id ||= SecureRandom.random_number(999_999_999)
        end
      end
    end

Unique IDs are also helpful when scanning for specific objects during test development. When all objects of different classes start with 1, it is hard to follow the connections.
notacoward, almost 6 years ago
I deal with this issue a lot in my current job, and did in my last job too. In my experience, timing issues are by far the most common culprit. Usually it's because a test has to guess how long a background repair or garbage-collection activity will take, when in fact that duration can be highly variable. Shorter timeouts mean tests are unreliable. Longer timeouts mean greater reliability but tests that sometimes take forever. Speeding up the background processes can create CPU contention if tests are being run in parallel, making *other* tests seem flaky. Various kinds of race conditions in tests are also a problem, but not one I personally encounter that often. That probably has to do with the type of software I work on (storage) and the type of developers I consequently work with.

No matter what, developers complain and try to avoid running the tests at all. I'd love to force their hand by making a successful test run an absolute requirement for committing code, but the very fact that tests have been slow and flaky since long before I got here means that would bring development to a standstill for weeks, and I lack the authority (real or moral) for something that drastic. Failing that, I lean toward re-running tests a few times for those that are merely flaky (especially because of timing issues), and quarantine for those that are fully broken. Then there's still a challenge getting people to fix their broken tests, but life is full of tradeoffs like that.
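
One common way to soften the short-vs-long timeout tradeoff described above is to poll for the condition with a generous deadline instead of sleeping for a fixed guess: the test finishes as soon as the background work is done, and only the worst case waits the full timeout. A minimal sketch; repair_is_complete is a hypothetical stand-in for whatever the test is waiting on:

    import time

    def wait_until(predicate, timeout=60.0, interval=0.25):
        """Poll predicate() until it returns True or the deadline passes."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if predicate():
                return True
            time.sleep(interval)
        raise TimeoutError(f"condition not met within {timeout}s")

    # In a test: fast when the background job is fast, tolerant when it is slow.
    # wait_until(lambda: repair_is_complete(volume_id), timeout=120)
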
Slartie, almost 6 years ago
We usually call them "blinker tests" in our integration test suite. The reasons for blinker tests vary, but most are in line with what others here have already stated: concurrency, especially correct synchronization of test execution with stuff happening in asynchronous parts of the (distributed) system under test, is by far the biggest cause of problematic tests. This is often exaggerated by the difference in concurrent execution between developer machines with maybe 4-6 cores and the CI server with 50-80, which often leads to "blinking" behavior that never happens locally, but shows up every few builds on the CI server.

Second biggest is database transaction management and incorrect assumptions about when database changes become visible to other processes (which are in some way also concurrency problems, so it basically comes down to that). Third biggest is unintentional nondeterminism in the software, like people assuming that a certain collection implementation has a deterministic order when it actually doesn't; someone was just lucky enough to get the same order every time while testing on the dev machine.
jonatron, almost 6 years ago
"Making bad assumptions about DB ordering" - that one has caught me out before. Postgres is just weird; I had to run the same test in a loop for an hour before it would randomly change the order.
adamb, almost 6 years ago
If anyone is looking for ideas on how to build tooling that fights flaky tests, I consolidated a number of lessons into a tool I open sourced a while ago.

https://github.com/ajbouh/qa

It will do things like separate out different kinds of test failures (by error message and stack trace) and then measure their individual rates of incidence.

You can also ask it to reproduce a specific failure in a tight loop, and once it succeeds it will drop you into a debugger session so you can explore what's going on.

There are demo videos in the project highlighting these techniques. Here's one: https://asciinema.org/a/dhdetw07drgyz78yr66bm57va
pjc50, almost 6 years ago
The two big problems seem to be concurrency (always a problem) and state, which immediately suggests that making things as functional as possible would help a lot.

Ideally all state that's used in a test would be reset to a known value at or before the start of the test, but this is quite hard for external non-mocked databases, clocks and so on.

For integration tests, do you run in a controllable "safe" environment and risk false passes, or in an environment as close as possible to production and risk intermittent failure?

A variant I've seen is "compiled languages may re-order floating point calculations between builds, resulting in different answers", which is extremely annoying to deal with, especially when you can't just epsilon it away.
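
For the floating-point variant, the usual first line of defense is comparing against a tolerance rather than exactly. A small sketch using Python's standard-library math.isclose; the tolerances shown are arbitrary and depend on how much re-association the build is allowed to do:

    import math

    def naive_sum(values):
        # Naive summation: a compiler/runtime that re-associates these adds
        # can legitimately produce a slightly different result.
        total = 0.0
        for v in values:
            total += v
        return total

    expected = 1.0
    result = naive_sum([0.1] * 10)      # 0.9999999999999999, not exactly 1.0

    assert result != expected           # exact comparison is the flaky/wrong check
    assert math.isclose(result, expected, rel_tol=1e-9, abs_tol=1e-12)
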
rrnewton, almost 6 years ago
Both this article and this comment thread include a number of different ideas for controlling (or randomizing) environmental factors: test ordering, system time, etc.

But why do all of this piecemeal? Our philosophy is to create a controlled test sandbox environment that makes all of these aspects (including concurrency) reproducible:

https://www.cloudseal.io/blog/2018-04-06-intro-to-fixing-flaky-tests

The idea is to guarantee that any flake is easy to reproduce. If people have objections to that approach, we'd love to hear them. Conversely, if you would be willing to test out our early prototype, get in touch.
invertednz, almost 6 years ago
I used to work at a company with over 10,000 tests where we weren't able to get more than an 80% pass rate due to flaky tests. This article is great and covers a lot of the options for handling flaky tests. I founded Appsurify to make it easy for companies to handle flaky tests with minimal effort.

First, don't delete them; flaky tests are still valuable and can still find bugs. We also had the challenge that a lot of the 'flakiness' was not the fault of the test or the application but was caused by 3rd party providers. Even at Google, "Almost 16% of our tests have some level of flakiness associated with them!" - John Micco. So just writing tests that aren't flaky isn't always possible.

Appsurify automatically raises defects when tests fail, and if the failure reason looks to be 'flakiness' (based on the failure type, when the failure occurred, the change being made, and previously known flaky failures) then we raise the defect as a "flaky" defect. Teams can then have the build fail based only on new defects and prevent it from failing when there are flaky test results.

We also prioritize the tests, running fewer of them (the ones most likely to fail due to a real defect), which further reduces the number of flaky test results.
pure-awesome, almost 6 years ago
> A few months back we introduced a game.

> We created a topic on our development Discourse instance. Each time the test suite failed due to a flaky test we would assign the topic to the developer who originally wrote the test. Once fixed, the developer who sorted it out would post a quick post mortem.

What's the game here? It just seems like a process. Useful, sure, but not particularly fun...
boothby, almost 6 years ago
I'm the primary developer for a heuristic, nondeterministic algorithm. It's both production software and a never-ending research project. Specifically, I can't guarantee that a particular random seed will always produce identical results, because that would hobble my ability to make future improvements to the heuristic. I've got reasonable coverage of my base classes and subroutines, but minor changes to the heuristic can have a significant impact on the "power" of the heuristic.

My solution was to add a calibrated set of benchmarks. For each problem in the test suite, I measure the probability of failure. From that probability, I can compute the probability of n repeated failures. Small regressions are ignored, but large regressions (p < .001) splat on CI. It's fast enough, accurate enough, and brings peace of mind.

I understand that, and why, engineers hate this. But it's greatly superior to nothing.
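
A sketch of the calibration arithmetic described above: if a benchmark problem is known to fail with probability p per attempt, then n independent attempts all failing has probability p^n, so the benchmark can retry until that joint probability drops below the chosen significance level (0.001 above). The numbers are illustrative:

    import math

    def attempts_needed(p_fail: float, alpha: float = 0.001) -> int:
        """Smallest n such that p_fail**n < alpha, i.e. n consecutive failures
        would be too unlikely to blame on ordinary nondeterminism."""
        return math.ceil(math.log(alpha) / math.log(p_fail))

    def run_calibrated(test_fn, p_fail: float, alpha: float = 0.001) -> bool:
        """Pass if any attempt succeeds; flag a regression only if all n fail."""
        for _ in range(attempts_needed(p_fail, alpha)):
            if test_fn():
                return True
        return False   # probability of this under the calibrated rate is < alpha

    # A problem calibrated at a 30% per-attempt failure rate needs 6 attempts,
    # since 0.3**6 ~= 0.0007 < 0.001.
    print(attempts_needed(0.3))   # 6
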
tom-jh, almost 6 years ago
We run in-browser end-to-end tests for our browser extension. There were several reasons for flakiness:

* Puppeteer (browser automation) bugs or improper use. Certain sequences of events could deadlock it, causing timeouts relatively rarely. The fix was sometimes upgrading Puppeteer, sometimes debugging and working around the issue.

* The vendor API, particularly their OAuth screen. When they smell automation, they will want to block the requests on security grounds. We have routed all requests through one IP address and reuse browser cookies to minimize this.

* The vendor API again, this time hitting limits in rare situations. We could run fewer tests in parallel, but then you waste more time waiting.

Eventually, we will have to mock up this (fairly complex) API to make progress. It has got to the point where I don't feel like adding more tests because they may cause further flakiness - not good.
mariefred, almost 6 years ago
Flaky tests are indeed a big issue, the main concern being loss of confidence in the results.

The otherwise good advice about randomization has its drawbacks:

- it complicates issue reproduction, especially if the test flow itself is randomized and not just the data

- the same way it catches more issues, it might as well skip some

Something else that was mentioned but not stressed enough is the importance of a clean environment as the basis for the test infrastructure.

A cleanup function is nice, but using a virtual environment, Docker or a clean VM will save you a lot of debugging time on environmental issues. The same goes for mocked or simplified elements if they contribute to the reproducibility of the system: a simpler in-memory database, for example, makes it possible to re-create a clean database for each test instead of reverting.
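
As an illustration of the last point, a pytest-style sketch that gives every test its own throwaway in-memory SQLite database instead of sharing and reverting a persistent one; the schema is a made-up example:

    import sqlite3
    import pytest

    @pytest.fixture
    def db():
        """A brand-new in-memory database per test: nothing to revert, nothing leaks."""
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
        yield conn
        conn.close()

    def test_insert_order(db):
        db.execute("INSERT INTO orders (total) VALUES (9.99)")
        (count,) = db.execute("SELECT COUNT(*) FROM orders").fetchone()
        assert count == 1
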
notacoward, almost 6 years ago
Here's a Google Testing Blog post about the same thing from 2016.

https://testing.googleblog.com/2016/05/flaky-tests-at-google-and-how-we.html
rellui, almost 6 years ago
Personally, I've always called them flaky tests. I agree with the article that flaky tests shouldn't be ignored completely. But the issue is that they take much more effort to debug than ordinary test failures. So it comes down to a balancing act between how much effort you're willing to spend debugging them and the chance that there is an actual issue.

In my few years of automation experience, I've only seen two instances where flaky tests pointed to actual issues, and one of them should have been found by performance testing. Almost all of the rest were environment-related issues. It's tough testing across all of the different platforms without running into some environment instability.
mannykannot, almost 6 years ago
Tests are part of the system too, and if you accept lower standards for your test suite than you think you hold the product to, you have actually lowered your standards for the product to those you accept for the tests.
ArturT, almost 6 years ago
For annoying flaky feature tests, I use the rspec-retry gem to repeat the test a few times before marking it as failed. It helped for integration tests against an external sandbox API.

I noticed Discourse had a lot of flaky tests while using their repo to test my knapsack_pro ruby gem, which runs the test suite with CI parallelisation. A few articles with CI examples of parallelisation can be found here: https://docs.knapsackpro.com

I need to try the latest version of the Discourse code; maybe now it will be more stable when running tests in parallel.
chippy, almost 6 years ago
One recent test that was sometimes failing was ordering a list. It was due to how I made a sequence of my fixtures, using numbers as a suffix to a string, so it ordered correctly until the sequence reached names like "string 8, string 9, string 10".

I fixed it by drawing a random selection from /usr/share/dict/words to make a large array of sorted words to choose from. This gave the fixtures better and more amusing names such as "string trapezoidal, string understudy".
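
The underlying gotcha is that lexicographic ordering puts "string 10" before "string 8". Besides the dictionary-words fix, zero-padding or a natural-sort key also works; a small sketch:

    import re

    names = ["string 8", "string 9", "string 10"]

    print(sorted(names))
    # ['string 10', 'string 8', 'string 9']  <- lexicographic, not what the test expected

    def natural_key(s):
        # Split into text and number chunks so numeric parts compare as integers.
        return [int(part) if part.isdigit() else part for part in re.split(r"(\d+)", s)]

    print(sorted(names, key=natural_key))
    # ['string 8', 'string 9', 'string 10']

    # Alternative: zero-pad when generating fixtures, e.g. "string 008", "string 010".
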
boyter, almost 6 years ago
These sorts of tests are perfect examples for me to add to https://boyter.org/posts/expert-excuses-for-not-writing-unit-tests/. Tongue in cheek as it is, I'm always on the lookout for additional examples to flesh it out.
pavel_lishin, almost 6 years ago
Flaky tests are one of the factors that led me to leave a previous job. Test coverage was already so bad (and honestly, so was the code) that it was difficult to do anything with confidence; add to this that tests only *sometimes* worked, and writing code was basically a dice roll. I got tired of the stress.
piokoch, almost 6 years ago
"Non-deterministic tests have two problems, firstly they are useless, secondly they are a virulent infection that can completely ruin your entire test suite."

"To this I would like to add that flaky tests are an incredible cost to businesses."

I think the misconception here is that "tests should not fail", because they are a "cost", "have to be analyzed and fixed", etc.

An integration or functional test that is guaranteed to never fail is kind of useless to me. A good test with a lot of assertions will fail occasionally, since things happen: unexpected data is provided, someone manually plays with the database, the ntp service is accidentally stopped so the date is inaccurate and filtering by date starts failing, someone plugs in some additional system that alters or locks data.

In the case of unit tests, well, if everything is mocked and isolated then yes, such a test probably should never fail, but unit tests are mostly useful only when there is some complicated logic involved.
rgoulter, almost 6 years ago
*"You won't have code like this obviously contrived example, but you might have code which is equivalent."*

Ha, yes! The problem sounds super dumb and obvious once you explain it, but it can be a PITA to track down or recognise in the code.
revskill, almost 6 years ago
To me, unit tests only make sense for pure code.

For impure code, it makes no sense to write a unit test.

The ability to separate pure from impure code determines your test suites: what belongs in a unit test and what belongs in an integration test.
jdlshore, almost 6 years ago
This is a great article. Grounded in experience, detailed, actionable. Nicely done.