
Ask HN: How do you manage automated test workflow and view test analytics?

18 points by jonstjohn over 10 years ago
During the past several years, my company has gone from running a handful of automated tests on a single server to running thousands of automated tests. These include unit tests, integration tests, and Selenium tests. The Selenium tests are particularly slow, and although we can parallelize them, the builds still take upwards of an hour to run. We typically don't run the full regression build on every push, but instead use them to validate code prior to merging a branch to master and tagging.

We've come up with our own workflow and hodge-podge of tools to generate test analytics, but we're looking for a tool or product that we could integrate into our build system to streamline the workflow and give us insight into identifying slow and intermittently failing tests (typically due to Selenium stability). We're currently using Jenkins and CircleCI (different projects).

What options are out there? Have other developers encountered challenges managing their automated test infrastructure? Have you developed in-house tools to address this? Or are there any software tools that you'd recommend?

3 comments

trcollinson over 10 years ago
It's great that you have embraced testing in your product development! And rest assured, it is not uncommon to run into issues when your test suite becomes large.

Unfortunately there is no silver bullet to make your tests run faster. I have used both Jenkins and CircleCI and they are great for most environments.

To combat this speed issue I have added a timer to all of my individual test runs. In Ruby with RSpec, for example, this is relatively simple: just add --profile to your .rspec file. Every time your test suite is run, this will show the top 10 slowest tests. Each testing framework has a similar mechanism or add-on that will give you a report. Next, start refactoring those long-running tests. At first it will seem like a slow process, but working on it a bit each day will quickly show significant improvement in your test suite's speed.

Next, watch out for tests, particularly integration tests and Selenium tests, which either test the framework rather than your code, or which are redundant. A number of times I have walked into clients' offices and been told that their test suite is as "tight as it can be" and yet still runs slow. I look through it and find many areas where engineers, in their desire to test thoroughly, were unfortunately testing areas they didn't need to. A bit of refactoring again paid massive dividends.

Finally, don't be afraid to segment your tests to run only the portions that are affected by the code that was changed. It's true that running the entire suite is important, but you're correct in only running the whole thing at certain times. My current project has segments for the API, various front-end resources, and service integrations. There are a number of sub-segments as well. Again, I think of this as a test refactoring exercise, but it will pay off if designed well.

So, unfortunately I don't have a software solution that will solve your problems. But I can say that a few refactoring activities can make all the difference.
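The per-test timing report described above can also be reconstructed outside any one framework from the junit.xml files most test runners emit. A minimal sketch, assuming the standard junit.xml layout where each `testcase` element carries `classname`, `name`, and `time` attributes:

```python
import xml.etree.ElementTree as ET

def slowest_tests(junit_xml: str, top: int = 10):
    """Return the `top` slowest (test name, seconds) pairs from a junit.xml report."""
    root = ET.fromstring(junit_xml)
    cases = [
        (f"{tc.get('classname')}.{tc.get('name')}", float(tc.get('time', 0)))
        for tc in root.iter('testcase')
    ]
    # Sort by duration, slowest first, and keep the worst offenders
    return sorted(cases, key=lambda c: c[1], reverse=True)[:top]
```

Running this against each build's report and diffing the output over time gives roughly the same signal as RSpec's --profile, but works for any framework that produces junit.xml.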
pbiggar over 10 years ago
I work at CircleCI, so I can take a stab at some of these questions.

Selenium tests are a pain, and we come across a lot of customers who have unstable Selenium tests. We're building a lot in the next few months to help with this.

Firstly, one feature that we've just put into beta is showing you failing tests. If you're using RSpec or Cucumber or another framework that can generate junit.xml files, we should be able to show you which tests failed, which is shown on the build page, in chat integrations, etc. (Also available via the API if you want to consume it via Hubot, at the command line, or similar.)

We're planning to use this to track flaky tests. Someone on our team has the objective to build it this year, so if things go as we optimistically guess they might, we could see it this year. Otherwise, it should be easy to use the API to build that yourselves.

It seems like that might help you - let me know!
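Flaky-test tracking of the kind described here can be approximated yourself by recording pass/fail outcomes per test across builds and flagging tests that flip. A hypothetical sketch; the input shape (one `{test_name: passed}` dict per build) is an assumption for illustration, not CircleCI's API:

```python
from collections import defaultdict

def flaky_tests(runs):
    """Given per-build results as a list of {test_name: passed} dicts,
    return the names of tests that both passed and failed at least once."""
    outcomes = defaultdict(set)
    for run in runs:
        for name, passed in run.items():
            outcomes[name].add(passed)
    # A test seen with both outcomes on the same code is a flakiness suspect
    return sorted(name for name, seen in outcomes.items() if seen == {True, False})
```

Feeding this the failing-test data from the build API (or parsed junit.xml files) over a window of builds on the same branch gives a ranked list of candidates to quarantine or fix.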
forgottenpass over 10 years ago
I have a similar problem, but unfortunately the conventional wisdom on long-duration tests is that we're just doing it wrong. I'd love to say there is a good system available for having a pile of infrastructure in a corner somewhere running tests, but I haven't found anything with out-of-the-box support for an execution model more complicated than running the latest tests on the latest build artifact.

The best I've been able to accomplish is a set of parametrized Jenkins jobs that default to running the latest tests on the latest code, and report results to a custom external application. The test monitor has a status console and can automatically schedule more Jenkins jobs, using build parameters to run the tests with specific versions of the application and tests to bisect failures. It's ugly, but it is tolerable for what we need.
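The failure bisection described here boils down to a binary search over an ordered list of build versions. A generic sketch, where the `fails` callback (standing in for triggering a parametrized Jenkins job against a specific build and reading its result) is hypothetical, and failures are assumed to persist once introduced:

```python
def first_failing_build(builds, fails):
    """Binary-search an ordered list of build ids for the first one where
    fails(build) is True. Returns None if the newest build passes.
    Assumes a monotonic history: once a failure appears, it stays."""
    lo, hi = 0, len(builds) - 1
    if not fails(builds[hi]):
        return None  # latest build is green; nothing to bisect
    while lo < hi:
        mid = (lo + hi) // 2
        if fails(builds[mid]):
            hi = mid      # failure already present; look earlier
        else:
            lo = mid + 1  # still green here; culprit is later
    return builds[lo]
```

This is the same idea as git bisect, but driven by CI build parameters; each probe costs one scheduled job run, so n builds need only about log2(n) runs to pin down the breaking change.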