
Ask HN: How Long Is Your CI Process?

94 points by chuckgreenman · about 4 years ago
I've seen a couple of projects using CI to build the project and run their test suite.

All of them have been interpreted languages like PHP, Python and Ruby. Their builds and tests took between 30-45 minutes. As far as project size and complexity, these were projects built and maintained by four-person teams over the course of 3-5 years, so it's not like they were massive services with hundreds of developers.

I'm still kind of new - I worked a couple of internships and I've been working full time for a year - so I might be totally wrong, but I feel like these CI pipelines could be optimized to run faster.

57 comments

jart · about 4 years ago
2 minutes and 7 seconds: https://github.com/jart/cosmopolitan/runs/2482398460

That's on Travis, for a repository that builds 14,479 objects, 67 libraries, and 456 static executables, 284 of which are test executables that are run too. If I want to run all the test binaries on freebsd, openbsd, netbsd, rhel7, rhel5, xnu, win7 and win10 too, then it takes 15 additional seconds. On a real PC, building and testing everything from scratch takes 34 seconds instead of two minutes.
jasonpeacock · about 4 years ago
The latency of your CI process doesn't matter [1].

What matters is the development process - local build & test should be fast.

Otherwise, with CI/CD, it's a continually-moving release train where changes get pushed, built, tested, and deployed non-stop and *automatically*, without human intervention. Once you remove humans from the process, and you have guard rails (quality) built into the process, it doesn't matter if your release process *for a single change* takes 1 min, 1 hour, or 1 day.

Even if it takes 1 day to release commit A, that's OK because 10 min later commit B has been released (since it was pushed 10 min after commit A).

I've seen pipelines that take 2 weeks to complete because they are deploying to regions all over the world - the first region deploys within an hour, and the next 2 weeks are spent serially (and automatically) rolling out to the remaining regions at a measured pace.

If any deployment fails (either directly, or indirectly as measured by metrics) then it's rolled back and the pipeline is stopped until the issue is fixed.

[1] Yes, even for fixing production issues. You should have a fast rollback process for fixing bad pushes, not rely on pushing new patches.
ilmiont · about 4 years ago
I don't see how anyone can give you useful information without knowing more about the pipeline and the projects, since everyone's pipelines/projects work differently. (I do web dev work, so my pipelines are relatively simple; I can imagine a game dev team creating Windows/Mac/Linux builds might have multi-hour pipelines, though.)

Anyway, as the question is "How Long Is Your CI Process", here we go!

I have two main types of pipelines, both running on a self-hosted GitLab instance on an 8th-gen i3 Intel NUC. No project is particularly massive.

1. PHP projects. Run PHPStan + unit tests on each branch. Most projects take 1-5 mins. On master, run PHPStan + unit tests, build a Docker image, and use Helm to deploy to managed Kubernetes on DigitalOcean. This takes 5-10 mins.

2. React projects, again not massively huge, but sizable. The biggest time sink is running ESLint on every branch. About 5 mins *(due to very poor caching which I keep meaning to fix)*. On master, run ESLint, create a Docker image, and deploy to managed Kubernetes. 5-10 mins.

There are opportunities to improve this by fixing/optimising caching. Overall I'm reasonably happy with the pipeline performance. I'm also sure that upgrading the hardware would make a big difference, probably more so than fixing the caching; an i3 isn't really ideal, but this machine does well overall for my small team.
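For reference, the master-branch half of a PHP pipeline like the one above might look roughly like this in GitLab CI. This is a minimal sketch, not the commenter's actual config; the image names, registry and chart path are hypothetical, and it assumes a runner with Docker and Helm available:

    # .gitlab-ci.yml - minimal sketch (hypothetical names throughout)
    stages:
      - test
      - deploy

    phpstan:
      stage: test
      image: php:8.2-cli
      script:
        - composer install --no-interaction
        - vendor/bin/phpstan analyse src

    phpunit:
      stage: test
      image: php:8.2-cli
      script:
        - composer install --no-interaction
        - vendor/bin/phpunit

    deploy:
      stage: deploy
      only:
        - master
      script:
        # build and push an image tagged with the commit, then roll it out via Helm
        - docker build -t registry.example.com/app:$CI_COMMIT_SHA .
        - docker push registry.example.com/app:$CI_COMMIT_SHA
        - helm upgrade --install app ./chart --set image.tag=$CI_COMMIT_SHA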
dmoy · about 4 years ago
Automated tests that my team runs vary from ~seconds to *multiple days*, depending on what's being tested. Some of the tests involve compiling a multi-billion-line repo spanning more than 30 languages, and doing some analysis on the resulting code graph. So that takes a while.

30-45 minutes just for a simple test suite, even if it's PHP, Python, and Ruby - that sounds long. But without any details on exactly what's being tested, it's hard to say.
wwwigham · about 4 years ago
TypeScript takes less than a minute to build, and basic PR validations (the simple regression, conformance, and unit test suites) add around 10 minutes of test running to that (to be fair, I can run those locally in just under two minutes; we just use slow CI boxes, and local incremental build and test can bring that loop down even more). The extended test suites that run on a rolling basis on `master` and on-demand on PRs can be much longer, taking up to two hours (the longest extra suite being the DefinitelyTyped suite, where the CI system runs all of DefinitelyTyped's tests on both nightly and your PR/master and reports any changes). Technically, there is also a GitHub crawler running periodically that rebuilds anything public and open source it finds with the latest TS and reports new crashes, and that's _constantly_ running, so I can't really say it has a fixed run time, per se. Turns out the closer you get to building the world with your (build tool) project, the longer it takes, but the more realistic your coverage becomes.
glacials · about 4 years ago
A lot of the drag on CI for complex projects is tests, which are hard to argue against. The relationship between complexity and the need for tests isn't linear -- once you hit some critical mass of complexity where one person can't know the whole application, the need for tests skyrockets.

I joined a company last year that's trying to solve this [1] by tracing tests so it can skip any whose dependencies (functions, environment variables, etc.) haven't changed. It's amazing what "what if we don't run tests we know will pass?" can do to a CI pipeline.

[1]: https://yourbase.io
nickjj · about 4 years ago
For Flask apps, a little over 2 minutes to git push code and then see passing tests in CI using GitHub Actions.

Most of that time is spent building the Docker image.

The CI pipeline does:

- Build Docker images for the project
- Run the project
- Run ShellCheck on any shell scripts
- Run flake8 to lint the code base
- Run black in check mode to ensure proper formatting
- Reset and initialize the DB
- Run the test suite

That's a baseline. At this point any increase to the ~2 min is a result of running more tests, but it's usually possible to run about 100 assorted tests in ~10 seconds (testing models, views, etc.).

An example of the above is here: https://github.com/nickjj/docker-flask-example

A similar pipeline with comparable tools for Rails takes ~4-5 minutes, and Phoenix takes ~4-5 minutes too. You can replace "flask" with "rails" or "phoenix" in the above URL to see those example apps too, complete with GH Action logs and CI scripts. These mainly take longer due to the build process for installing package dependencies, plus Phoenix has a compile phase too.
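The linked repo contains the real workflow; purely as an illustration, the steps listed above could map onto a GitHub Actions job shaped like the sketch below. The service name, script path, and DB-reset command are hypothetical, not taken from that repository:

    # .github/workflows/ci.yml - illustrative sketch of the steps above
    name: CI
    on: push

    jobs:
      test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Build Docker images
            run: docker compose build
          - name: Run the project
            run: docker compose up -d
          - name: ShellCheck
            run: shellcheck bin/*.sh   # hypothetical script location
          - name: Lint
            run: docker compose exec -T web flake8 .
          - name: Check formatting
            run: docker compose exec -T web black --check .
          - name: Reset and initialize the DB
            run: docker compose exec -T web flask db-reset   # hypothetical CLI command
          - name: Run test suite
            run: docker compose exec -T web pytest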
wyc · about 4 years ago
We have a CI pipeline for a cross-platform Rust library, and it currently takes an hour across C, Android, iOS, Java, WASM, etc., and different combinations of cryptographic libraries. This is probably something we'll tune over this or next quarter, such as by throwing some beefy hardware at it and parallelizing. We also seem to be hitting some GitHub Actions limits in terms of storage.

https://github.com/spruceid/didkit/runs/2468746631
pydry · about 4 years ago
Between 30 seconds and about 45 minutes.

The only times it was long enough to be painful were when there was a stage that couldn't be debugged without running the build. That's invariably what I actually preferred to fix, not the total lead time.

A 45-minute sanity check to verify nothing is fucked before releasing is fine. A 45-minute debugging feedback loop is a nightmare.

Faster CI builds are typically a nice-to-have rather than a critical improvement (& doing too many nice-to-haves has killed many a project).
pyrophane · about 4 years ago
We have a CI pipeline for a containerized Python app and a React app. We have a monorepo and only trigger certain jobs depending on code changes. Our CI runs through GitLab CI on a GKE cluster, which gives us a lot of control over the parallelism and the resources allocated.

Our pipeline typically takes 10-30 minutes, depending on what jobs run and where cache gets used.

The longest job, at a consistent 12 minutes, is our backend test job. There's not a lot we can do to speed this up any further because a lot of the tests run against a test DB, so we can't easily run them in parallel. Perhaps if we wanted to be really clever we could use multiple test DBs.

The build for our containers is usually very quick (a few minutes) unless we modify our package requirements.txt. That happens infrequently, but it triggers an install step that increases the overall time for the job to 10-12 minutes.

The deploy phase is very quick.

We spent a bit of time optimizing this and it came down mostly to:

1. Using cache where we can.

2. Ensuring we had enough resources allocated so that jobs were not waiting or getting slowed down by lack of available CPU.

3. Making sure that each command we run is executing optimally for performance. Some commands have flags that can speed things up, or there are alternate utilities that do the same thing faster. One example of the latter is that we were using pytype as our type checker, but it often took about 15 minutes to run. We swapped it out for pyright, which takes under 5.
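Two of the techniques mentioned here (path-triggered jobs and dependency caching) look roughly like this in GitLab CI. A sketch with hypothetical paths, not the actual config:

    # Sketch: only run the backend job when backend code changes, and
    # cache pip downloads between runs (paths are hypothetical)
    backend-tests:
      stage: test
      image: python:3.11
      rules:
        - changes:
            - backend/**/*
      variables:
        PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"
      cache:
        key: pip-$CI_COMMIT_REF_SLUG
        paths:
          - .cache/pip
      script:
        - pip install -r backend/requirements.txt
        - pytest backend/tests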
tomduncalf · about 4 years ago
Depends on the complexity of what you are doing. For web stuff with unit tests I've seen it run in a few minutes.

Our current CI takes an hour because it has to build quite a complex app on iOS and Android. This happens in parallel, but the Azure build nodes we use are pretty slow. Ideally it would be faster, but it's not too huge an issue in practice; we have the lint/unit tests etc. run first, so the build will fail early for any glaring errors.
evantahler · about 4 years ago
Hard to say without knowing /what/ you want to accomplish in your CI process, so maybe some open source examples will help:

* A "complex" library (node-resque). In CI (CircleCI) we install deps, compile TypeScript to JS, test on 3 versions of Node, and build docs. 4 min w/ some parallelization: https://app.circleci.com/pipelines/github/actionhero/node-resque/959/workflows/c982294e-3cb2-4fa6-a200-1853c291004d

* A web server framework (actionhero). In CI (GitHub Actions) we install deps, compile TypeScript to JS, test on 3 versions of Node, and build docs. 7 min w/ some parallelization: https://github.com/actionhero/actionhero/actions/runs/801273572

* A monorepo (Grouparoo). In CI (CircleCI) we install deps, compile TypeScript to JS, run migrations, check licenses, test UIs, CLI tools, plugins, and try out a few different databases. 5 minutes with rather extreme parallelization: https://app.circleci.com/pipelines/github/grouparoo/grouparoo/8417/workflows/dd8f7c8c-41b1-4aa3-a62c-ea283212f8bc

In my experience, the biggest wins in CI speed come from parallelization. You can parallelize by either running multiple processes/containers or by running tests in parallel on the same container (jest, parallel_tests, etc.)
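"Test on 3 versions of Node" is a natural fit for a CircleCI matrix, which fans the same job out in parallel. A minimal sketch; the job name and version numbers are illustrative, not taken from the linked pipelines:

    # Sketch: run the same test job across several Node versions in parallel
    version: 2.1

    jobs:
      test:
        parameters:
          node-version:
            type: string
        docker:
          - image: cimg/node:<< parameters.node-version >>
        steps:
          - checkout
          - run: npm ci
          - run: npm test

    workflows:
      ci:
        jobs:
          - test:
              matrix:
                parameters:
                  node-version: ["16.20", "18.17", "20.9"]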
adamcharnock · about 4 years ago
About 5-10 minutes from push to deploy. Python/Django monorepo; a 1-2 devs for several years kind of project size.

Build and test steps take about equal time. We build from a common Docker image which has most of the time-consuming work already done.

It can take longer if the Python deps have changed and therefore the 'poetry install' step cannot be pulled from the cache.

Also, we deploy multiple individual Django projects, rather than one huge monolithic project. That probably gives some speed-up. It means that changes to common code can trigger 5-15 pipelines, but they all take a similar amount of time.

30-45 minutes seems like a really long time to me. Maybe you have a lot of slow tests, but I'd also look at the build process. If you're doing Docker images you may find you can extract a lot of the time-consuming work into a common base image. You can also get plugins that help Docker pull already-built layers from a cache.

If it is the tests, then you could always try running them in parallel. One worker per CPU or some such.

FWIW - I find that these long feedback loops can really kill productivity and morale. 10 mins for a deploy is about my limit.
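The common-base-image trick amounts to baking the slow dependency install into an image that only changes when the dependencies do, then reusing its layers for every app build. A sketch of the idea as a CI script step; the registry and image names are hypothetical:

    # Sketch: reuse layers from a prebuilt base image so dependency
    # installation is only redone when the deps actually change
    build:
      script:
        - docker pull registry.example.com/django-base:latest || true
        - docker build --cache-from registry.example.com/django-base:latest -t registry.example.com/app:$CI_COMMIT_SHA .
        - docker push registry.example.com/app:$CI_COMMIT_SHA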
bastijn · about 4 years ago
This is such an open question it is hard to answer. You have to know what runs in the CI as well as the size of the project, languages, number of projects, quality steps executed in the build, etc. Anyway, to give it a shot:

* multi-million LoC
* number of projects > 50
* languages: C#, C, C++, TypeScript
* frameworks: .NET Framework, .NET Core, .NET Standard, Angular, React
* quality tools in build: TICS, Coverity, Roslyn, custom tools (>10)
* tests running in build: nunit, msvstestv2, jest, karma
* number of tests running in build > 5000
* package managers used: NuGet, npm
* number of packages (private and public) > 500

Still a lot I forgot now.

It all runs in approximately 45 mins for stage 1 builds; stages 2-4 run nightly and weekly and take much longer (>2 hours, up to >24 hours for the long-duration stage 4). Higher stages run longer test suites, up to approx 50k tests or so for stages 3 and 4, more quality checks, etc.

P.S. We spend countless hours reducing our build times. In addition we have setups to split build pipelines for those who do not need the entire archive built for their dev purposes, etc. Yet the CI server always runs single-core and cold builds.
john-tells-all · about 4 years ago
CI/CD systems deliver value to audiences. CI is mostly for the developer team, so you can check your changes don't break others' work, or vice versa. Often there's a CD to an internal system, so QA can take a look to see whether the new feature works according to the business expectations, and the business can play with it.

None of the above really matters; the important bit is that *USERS* actually see the work! Everything else is necessary, of course, but doesn't create value in itself.

So the question is: how does each system create VALUE for its audience, and what's the latency (LAG)? CI is often for 4-10 developers and takes ~10-20 minutes for smallish web shops. The value the business gets is that devs can check they didn't forget to "git add" a file :)

Devs and the business *always* complain about the slowness of CI/CD, but rarely invest the modest effort to make it faster. Here are some ways to improve the development cycle:

Speed up databases. Move from "install database and sample data interactively every time" to having a pre-baked Docker image with the database and seed data. Much faster: you get lower LAG and the same VALUE for the team.

Run fewer tests. Running tests creates business value -- confidence a deployment will give features to users -- but takes time (LAG). However, for 90% of cases devs get value by running a subset of the tests. Thus, much faster: less LAG, same VALUE. Run all the tests before a real deploy, or run the full suite nightly (see the sketch below). Devs get the value of a full test without having to wait for it.

Simplify. CI/CD should just run things devs can run locally. That is, devs can run fast local test subsets to get rapid feedback (low LAG) and focused VALUE. When CI/CD tests fail, it's very easy for devs to figure out what went wrong, because the CI/CD and local environments are nearly identical.

CI/CD creates a lot of value for several audiences. Plot out each one, and see what you, the business, want to improve upon!
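In GitHub Actions terms, the "fast subset on every push, full suite nightly" split might look like this sketch; the 'slow' pytest marker and the commands are hypothetical:

    # Sketch: run a fast subset on pushes, the full suite on a nightly schedule
    name: tests
    on:
      push:
      schedule:
        - cron: "0 3 * * *"   # nightly full run

    jobs:
      test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: pip install -r requirements.txt
          - name: Fast subset on pushes
            if: github.event_name == 'push'
            run: pytest -m "not slow"   # 'slow' is a hypothetical pytest marker
          - name: Full suite nightly
            if: github.event_name == 'schedule'
            run: pytest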
thinkafterbef · about 4 years ago
We had the same kind of problem, where we saw our tests and builds take 20-30 minutes. We also noticed that our own machines could run the tests significantly faster, mainly because a desktop CPU can easily boost its clock speed for intense workloads. By comparison, most CIs use cloud VMs which hardly go beyond 3 GHz. We found this quite strange.

After some talk we decided to build a CI service based on this premise, i.e. desktop CPUs outperform cloud CPUs for the CI use case. After some months we managed to create BuildJet.

I would say it at minimum cuts the build time in half, and the best part is that it plugs right into GitHub Actions; you just need to change one line in your GitHub Actions configuration.

If it sounds useful for you, check it out: https://buildjet.com
other_herbert · about 4 years ago
It's really down to goals... you should have something that triggers on open pull requests and does a sanity check, and ideally deploys to a test environment... that should be "quick", to give feedback in addition to reviews...

Then there's the main build, which hopefully deploys to a QA environment that can do more testing, bundles artifacts for whatever dependencies need them, all that kind of stuff...

That's how ours is set up... we use Jenkins with parallel parts where possible (like building the UI while tests that hit the DB are run). It's a process that takes time to get right and time to optimize...

We're at about 5 mins for the quick part and 8 or so for the slower part.

Both of those will probably get worse as we are planning to include full UI testing on the deployed environment too.
safeerm · about 4 years ago
It's going to depend on the size of your code base and tests, but when I was at AWS, by the time we built and ran our test suite it was close to that time as well (30-45 min).

It's really interesting how many companies these days have a primary pricing model of build minutes.

If you are looking for a DIY solution for your CI, check out https://tinystacks.com. We have the fastest way to launch and operate your Docker app on AWS. In one click, we set up infra and an automated pipeline on your AWS. It uses ECS with Fargate, all set up for you, with a control center for logs, env vars and scaling. No config nightmare.

Email me safeer at tinystacks.com and I can get you onboarded.
eqvinox · about 4 years ago
Several hours, which is far too long. It's a massive waste of developer time and money, but try explaining to diverse contributors ("features features features!") that some investment in the testing setup would save money in the long run...
erikpukinskis · about 4 years ago
My current job is a Node+React app that takes 8 minutes to go through CI. It feels subjectively a LOT better than my last job, which was Go+Node+React and took about 15 minutes, BUT...

The slowest part of the previous CI process was our integration tests on Selenium. And the new stack doesn't have any of those (it just does unit tests in Karma).

And frankly, I think I'd take the 15 minutes, with the extra security of knowing the whole stack is functioning together, over the speedup to my dev cycle.

But I feel a bit crazy saying that. In the end, the site doesn't seem to go down due to the lack of integration tests. Maybe because we complement with manual testing. I never deploy without opening up the site in a browser anymore.
lacker · about 4 years ago
Probably your CI could be faster. Just take a look at where the time is going and see if you can speed it up. Here are some tips that might be useful: https://charity.wtf/2020/12/31/why-are-my-tests-so-slow-a-list-of-likely-suspects-anti-patterns-and-unresolved-personal-trauma/
rubyn00bie · about 4 years ago
Oh yeah; this was the best part of moving to Erlang/OTP... test suites are absurdly fast (nearly) all the time. Most test suites I use take less than 10 seconds with anywhere from 100-1000 unit tests. The worst "monoliths" I've seen take at most ~90 seconds to run, and that is only ever the case if folks are creating insanely many objects or needlessly testing the internals of `gen_server`.
house9-2 · about 4 years ago
Large Ruby on Rails application. The entire test suite takes around 40 minutes to run; however, we use CircleCI and parallelize the build, so real time is around 10 minutes.

What makes the build so slow is that the database is involved. If you want fast builds, decouple your unit tests from the database. With Rails, including database access in tests makes everything easier and gets you closer to real-life execution, but it's slow...
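CircleCI's timing-based test splitting is the standard way to get that parallelization. Roughly, as a sketch assuming an RSpec suite:

    # Sketch: split a Rails test suite across containers by historical timings
    test:
      parallelism: 10
      steps:
        - checkout
        - run:
            name: Run the split RSpec suite
            command: |
              TESTFILES=$(circleci tests glob "spec/**/*_spec.rb" | circleci tests split --split-by=timings)
              bundle exec rspec $TESTFILES

Each of the 10 containers runs only its own slice of the spec files, and CircleCI rebalances the slices from stored timing data so they finish at roughly the same time.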
detaro · about 4 years ago
CI generally is a topic with lots of potential for optimization, but many things are not easily done with common tooling. Some large companies put serious R&D effort into improving CI with custom tooling.

What is applicable to a specific project depends, as does what is worth the effort. To a degree, of course, throwing more resources at the problem helps - faster build workers, parallelized tests, ... - but that isn't always easily implemented on a chosen platform, and it costs money, of course.

In projects I worked on, it varied greatly: from just a few minutes to cases where the full process took 6 hours (which then was only done as a nightly job, with individual merge requests only running a subset of steps). I really would want <15 mins as the normal case, but it's often difficult to get the ability to do so.
some_developer · about 4 years ago
We have multiple PHP repositories, but the longest one currently takes around 7 minutes wall time (that includes tests, static analysis, code style and a few other misc smaller things). Scaling the PHPUnit tests is actually easy in terms of throwing money at it, as the suite of 15k+ tests can be diced and sliced to run segments in parallel (a bit of scripting + a GitHub Actions matrix). Billing time is around 40 minutes, I'd say.

The frontend/TS stuff takes longer, usually 10-11 minutes, where it's "truly building", and we can hardly parallelize this one. Or we lack the expertise to fix it, probably.

At the moment this is a non-container environment; once we add building/deploying into the mix, I'd assume the time will go up a bit.
innocentoldguy · about 4 years ago
Around 3 minutes. It was longer, but getting rid of Kubernetes helped speed things up.
systematical · about 4 years ago
30 minutes? I don't have exact numbers, but I know ours is under 10 minutes, and IMO that is not optimized at all. Maybe 30 minutes is okay for a very large project, but for most applications that seems quite high to me.

Our pipeline runs in Jenkins and builds a Docker image that runs composer installs, application copy, and that sort of thing. We also run PHPUnit, PHPStan, PHPMD, and PHPCS in our pipeline. Finally, the image gets pushed up to ECR.

I think that's all pretty standard stuff. TBH I'd like us to move to GitHub Actions and optimize for more staged builds in our Docker images, but we have higher priorities at the moment.
carlmr · about 4 years ago
Usually it depends on what you need to run. Running some tests on an interpreted language should be done in a few minutes. With a compiled language it takes longer, maybe half an hour. If you have a compiled language that's slow to compile and needs additional checks because the language is more footgun than anything (yes, C++), then you might want a standard build, builds with various sanitizers, and some static analysis, and you end up with hours of time spent on building and analysis.

It really depends on what you're trying to achieve and how big the project is.
mhh__ · about 4 years ago
The D compiler takes about 25 minutes; GCC + D frontend tests take about an hour.

There is absolutely a huge amount of room for performance in areas like this. With Python especially, it's very common to think "Ah yes, but numpy" when it comes to performance, and that is true in the steady state where you are just number crunching, but there are mind-numbingly large amounts of performance left on the table vs. even a debug build from a compiler. Testing in particular is lots of new code running for a short amount of time, so it's slow when interpreted.
wiredfool · about 4 years ago
`make install && make test` is about 15-20 seconds on a decent machine.

A full GHA CI run is ~30 minutes, but that involves 3 platforms, a (short) ci-fuzz run, and running the full test suite through valgrind.
nojvek · about 4 years ago
2 minutes on GitHub Actions from commit -> yarn install (90% of the time we download from cache) -> webpack build with esbuild-loader -> Netlify draft deploy for an instant staging link -> smoke tests in parallel that hit the Netlify URL with a real Chrome browser (we use Browserless for that). ESLint, Prettier, unit tests and tsc typecheck run in parallel.

Basically: cache + parallelize.

Once a PR passes, merging to master deploys in a minute. If something is wrong we can revert within a minute.

It's joyful to build things when your tools are fast and reliable.
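The "cache + parallelize" recipe in GitHub Actions terms, as a rough sketch; the job names and scripts are hypothetical, not the commenter's config:

    # Sketch: cached yarn install plus independent checks running in parallel
    name: ci
    on: pull_request

    jobs:
      lint:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-node@v4
            with:
              cache: yarn    # restores the yarn cache keyed on yarn.lock
          - run: yarn install --frozen-lockfile
          - run: yarn eslint . && yarn prettier --check .

      typecheck:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-node@v4
            with:
              cache: yarn
          - run: yarn install --frozen-lockfile
          - run: yarn tsc --noEmit

      unit-tests:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-node@v4
            with:
              cache: yarn
          - run: yarn install --frozen-lockfile
          - run: yarn test

Because the three jobs have no dependencies on each other, GitHub Actions runs them concurrently, so the wall-clock time is the slowest job rather than the sum.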
maccard · about 4 years ago
Roughly 30-40 minutes before tests, and another 30 minutes of tests. I work in games, and compiling the game for one platform is ~5-10 minutes even for an incremental compile on "compute optimized" instances on Azure and AWS (compared to ~30s on my workstation). It takes 20 minutes (per platform) to generate the runtime texture/audio files, and ~10 minutes to upload them to a shared drive. We do 4 platforms right now; my last project was ~10x bigger and did 10 platforms to boot.
tpxl · about 4 years ago
Similar scope to yours, but Java and GitLab CI. CI to dev takes about 15 minutes or so. A shared lib is first built and tested, then several applications are built and tested in parallel. After everything is built, things are deployed serially. About 3 of those 15 minutes are startup times for runners (no clue what we use, but it's super slow), another 2-3 for deployment, about 1:30 for compiling, and the rest is E2E tests. The whole thing takes about 4 minutes on a ~2015-era Mac.
duped · about 4 years ago
At my current job, the full build/test/release cycle is about 45 minutes. There is an effort to begin optimizing it, but it is a high-risk endeavor that only became worth it once the costs started growing faster than our team size and became unsustainable.

CI tech debt is very difficult to pay down, and IMHO not worth it unless the dollar costs are becoming excessive and you have a dedicated release or DevOps engineer who can own it as an internal product.
gravypod · about 4 years ago
I've set up multiple CI systems, and it really depends on what you need to test. A long time ago I built a CI for a team that ran end-to-end integration tests and collected code coverage from each running service. This took between 4 and 10 minutes for our 3 to 6 services. At another job I set up a git repo + CI for a team of about 15ish people. In the beginning we had no CI; then I containerized everything and the CI took a long time (~20 minutes). Then I switched to a build/test system that was more in tune with what we needed, and I ultimately (through some hacks) got the entire CI time for ~20ish microservices down to <1 minute, since I was caching everything that wasn't changed with Bazel. After that I added a stage where I collected code coverage from all 20 of those services, which was much slower since Bazel had a hard time understanding how to cache that for some reason. This brought it back up to 4ish minutes.

The main blockers I've seen to CI performance are:

1. Caching: Most build systems are intended to run on a developer's laptop and do not cache things correctly. Because of this, most CIs completely chuck all of your state out of the window. The only CI I've found that lets you work around this is GitLab CI (this is my secret for getting a <1 min build/test CI pipeline; see the sketch after this list).

2. What you do in CI: If you want to run end-to-end integration tests, it's going to be slow. Any time you're accessing a disk, accessing the network, anything that doesn't touch memory, it's slow. Make sure your unit tests are written to use mocks/fakes/stubs instead of real implementations of DBs like SQLite or Postgres or something.

3. The usage pattern: If you don't have developers utilizing your CI machines 100% of the time, you are "wasting" those resources. People will often say "let's autoscale these nodes" and, when you do, you'll notice they scale down to 1 node when everyone is asleep; then everyone starts work and pushes code, and the CI grinds to a halt. You can make a very inefficient CI just by not having the correct number of runners available at the correct time.

Another thing to consider: anything you can make asynchronous doesn't need to be fast. If you set up a bot to automatically rebase and merge your code after code review, then you don't really need to think about how fast the CI is.
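The GitLab CI + Bazel combination mentioned in point 1 boils down to persisting Bazel's cache between jobs so unchanged targets are never rebuilt or retested. A sketch; --disk_cache is a standard Bazel flag, while the image and cache key are illustrative:

    # Sketch: keep Bazel's disk cache between GitLab CI runs
    build-and-test:
      image: gcr.io/bazel-public/bazel:latest
      cache:
        key: bazel-$CI_COMMIT_REF_SLUG
        paths:
          - .bazel-disk-cache/
      script:
        - bazel test //... --disk_cache=.bazel-disk-cache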
bluGill · about 4 years ago
Depends on what parts changed. 2 hours of tests for simple changes, 8 hours for the complex changed-everything stuff. We have broken up the system so the first is far more common.

Note that half of the tests on the fast build are regression tests that can't possibly fail based on my changes... we run them anyway because about once a month something has a completely unexpected interaction, and so a test fails that the developer didn't think to run.
sjburt · about 4 years ago
About 10 minutes. 1-2 minutes to refresh a Docker container, ~2 minutes to build a mostly-C codebase in the Docker container, ~5 minutes to build a bunch of Python environments and run unit tests in them. 3 or so minutes are due to bad design choices in GitLab CI. The project has around 100k LoC.

I think we could get it down to 3 minutes or so if we changed some things, but 10 minutes vs 3 minutes doesn't really change the workflow for us.
jareds · about 4 years ago
That seems reasonable to me if you require heavy integration test coverage. I work on several applications. The ones that are message-driven and don't require a database have test suites that run in a couple of minutes. The one that has a large database component takes about 30 minutes to run the tests. This is because we actually run the tests against a real database, which requires migrations, data loading, etc.
davewasthere · about 4 years ago
Most of our builds (.NET Core with React front-ends) take around five minutes from push to having a release ready. We haven't really needed to optimise them at all. Roughly a minute each for npm build, dotnet build, test and publish.

Deployment takes a little under a minute in total.

The worst one was probably a big SharePoint application at one client's site. But even that only took about 12 minutes in total.
thiht · about 4 years ago
From 2 minutes to 10 minutes. We have mostly Go microservices, so building is fast.

The pipeline is: build, unit test and lint in parallel, then package and save the relevant artifacts, then build a Docker image, then run the integration tests, and finally deploy (staging, dev or prod depending on the branch).

We also have end-to-end tests that run periodically and take a bit longer, but they're not on the path to prod.
megous · about 4 years ago
Yes, they probably could be optimized. Backend API tests are usually fast, or can be made so. WebDriver-based tests are annoying to write and usually slow, so I don't test frontend code automatically.

Kinda feels like a waste of time, especially if your code is well componentized and there are not many central points of failure (and those are pretty easy to see with cursory manual testing).
1_player · about 4 years ago
Always too long.

Between 5 and 10 minutes from push to staging deploy for our Elixir and Node apps. And most of it is spent compiling JavaScript assets.
fmiras · about 4 years ago
We need more CI processes using https://bazel.build
jdlshore · about 4 years ago
About 100 seconds. 15-20 seconds to validate dependencies, lint, and test (Node.js codebase, ~1400 tests). The rest of the time is deploying to Heroku.

It's fast because the code has very few end-to-end tests... only eight or so. They take six seconds. The rest of the tests average about 200/sec, including narrow integration tests.
rcxdude · about 4 years ago
Depending on the repo, between 3-4 minutes and 3 hours. The fast end is some quick checks that the repo builds; the slow end is an FPGA synthesis and place/route. None of them are particularly large in terms of LOC. Probably the slowest part of the process on the non-FPGA builds is installing Python packages.
nitwit005 · about 4 years ago
Think of build and test times as being determined by what people are willing to put up with. If people only start getting annoyed at the runtime once it's past 45 minutes, it'll probably take about 45 minutes. People will keep adding things that slow it down, such as new dependencies.
formerly_proven · about 4 years ago
For the kinds of projects you mention (scripting language, small-medium sized) I aim for 1-2 minutes max, which is usually not a problem. This precludes running a lot of integration tests requiring expensive setup/teardown, though the need for or value of those greatly depends on the project.
jiux · about 4 years ago
Does anyone here have any tips/tricks when it comes to iOS builds?

I'm currently experimenting with Travis CI, but man, it sure does take a while - roughly 45-60 minutes in my personal case. I've heard a dedicated Mac of some kind to leave at the office may help. Overall, I am all ears for any advice.
throwaway189262 · about 4 years ago
I'm of the opinion that any large project will eventually take as long as devs will tolerate. About a half hour.

We run mostly Java backend and JS frontend; same story.

Tons of opportunities for optimization, but the company doesn't want to spend the time, and devs appreciate the extra fuckoff time.
djxfade · about 4 years ago
We use CircleCI to deploy PHP apps. It takes about 1-2 minutes to fetch and build the JavaScript and SCSS assets, 1 minute to install dependencies from Composer, and about 1 minute to rsync the build artifact to the server.
Graffur · about 4 years ago
About 10 mins end to end. I am sure it could be way faster, but we work in an agile/SCRUM fashion, so there is absolutely no time dedicated to looking at things like this :(
nevinera · about 4 years ago
28-34 minutes. Massive, highly-tested Rails application, running on CircleCI. Only about 12 minutes of that is actually running tests; we parallelize them across roughly 60 containers, worked on by a team of ~100 engineers.

The truth is that most slow pipelines *could* be optimized to run wildly faster, but it is costly to do so. You may be able to find low-hanging fruit that affects the build time significantly, but most of the optimizations to be done are very large projects, like updating thousands of tests to be isolated from the database.
TeeMassive · about 4 years ago
10 minutes if no one modifies the "global library". However, 90% of builds modify it, so it takes 1h30.
thiago_fm · about 4 years ago
~30m: tests, security checks, linter, building images, k8s deploy, etc.
gardnr · about 4 years ago
I like how many people responded but didn’t answer your question.
sparker72678 · about 4 years ago
About 5 minutes from commit to deployed. Rails codebase.
jimmyvalmer · about 4 years ago
You get what you pay for? Cloud CI bottom-feeds on unused downmarket capacity like a magazine-stand calling card. And cloud CI is still "cloud", i.e., a mass-market, one-size-fits-all solution like the Department of Motor Vehicles.