A list of fun things we've done for CI runners to improve CI:<p>- Configured a block-level in-memory disk accelerator / cache (fs operations at the speed of RAM!)<p>- Benchmarked EC2 instance types (m7a is the best x86 today, m8g is the best arm64)<p>- "Warming" the root EBS volume by accessing a set of priority blocks before the job starts to give the job full disk performance [0]<p>- Launching each runner instance in a public subnet with a public IP - the runner gets full throughput from AWS to the public internet, and IP-based rate limits rarely apply (Docker Hub)<p>- Configuring Docker with containerd/estargz support<p>- Disabling kernel options and systemd units that aren't needed<p>[0] <a href="https://docs.aws.amazon.com/ebs/latest/userguide/ebs-initialize.html" rel="nofollow">https://docs.aws.amazon.com/ebs/latest/userguide/ebs-initial...</a>
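The EBS-warming step follows the initialization procedure in the AWS docs linked at [0]; a minimal sketch using fio (the device name is an assumption for illustration — check yours with `lsblk`):

```shell
# Read every block of the volume once so subsequent reads run at full
# EBS performance (blocks restored from a snapshot are slow on first
# touch until initialized). /dev/nvme0n1 is an assumed device name.
sudo fio --filename=/dev/nvme0n1 --rw=read --bs=1M --iodepth=32 \
         --ioengine=libaio --direct=1 --name=volume-initialize
```

The "priority blocks" trick described above would read only a subset of the device instead of the whole thing, trading completeness for a faster job start.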
`apt` installation can easily be sped up with `eatmydata`: `dpkg` calls `fsync()` on every unpacked file, which is very slow on HDDs, and `eatmydata` hacks it out.
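For example, on a Debian/Ubuntu runner you can either wrap the install in eatmydata (an LD_PRELOAD shim that turns `fsync()` into a no-op) or use dpkg's own switch for skipping the syncs — both are safe only on throwaway machines like CI runners:

```shell
# Option 1: run apt under eatmydata so dpkg's fsync() calls do nothing
sudo apt-get install -y eatmydata
sudo eatmydata apt-get install -y build-essential

# Option 2: dpkg's built-in force-unsafe-io option, persisted so every
# subsequent apt/dpkg run skips fsync on unpacked files
echo 'force-unsafe-io' | sudo tee /etc/dpkg/dpkg.cfg.d/unsafe-io
```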
This is exactly the kind of content marketing I want to see. The IO bottleneck data and the fio scripts are useful to all. Then at the end a link to their product which I’d never heard of, in case you’re dealing with the issue at hand.
TLDR: disk is often the bottleneck in builds. Use `fio` to measure your disk's performance.<p>If you want to truly speed up builds by optimizing disk performance, there is no shortcut to physically attaching NVMe storage with high throughput and high IOPS directly to your compute.<p>That's what we do at WarpBuild[0], and we outperform Depot runners handily. This is because we do not use network-attached disks, which come with relatively higher latency. Our runners are also coupled with faster processors.<p>I love the Depot content team though, it does a lot of heavy lifting.<p>[0] <a href="https://www.warpbuild.com">https://www.warpbuild.com</a>
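A quick way to see whether disk is your bottleneck is a mixed random read/write fio run on the runner (the file name, size, and read/write mix here are illustrative, not from the article):

```shell
# 75/25 random read/write at 4k — the access pattern that hurts most on
# network-attached volumes. Watch the IOPS and completion-latency
# percentiles in the output.
fio --name=ci-disk-check --filename=fio-testfile --size=1G \
    --rw=randrw --rwmixread=75 --bs=4k --iodepth=64 \
    --ioengine=libaio --direct=1 --runtime=30 --time_based \
    --group_reporting
rm -f fio-testfile
```

Comparing the same job across local NVMe and an EBS gp3 volume makes the latency gap the parent describes very visible.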
If you can afford it, upgrade your CI runners on GitHub to the paid offering. Highly recommend: less drinking coffee, more instant unit test results. Pay as you go.
I'm maintaining a benchmark of various GitHub Actions providers regarding I/O speed [1]. Depot is not present because my account was blocked but would love to compare! The disk accelerator looks like a nice feature.<p>[1]: <a href="https://runs-on.com/benchmarks/github-actions-disk-performance/" rel="nofollow">https://runs-on.com/benchmarks/github-actions-disk-performan...</a>
I just migrated multiple ARM64 GitHub Actions Docker builds from my self-hosted runner (Raspberry Pi in my homelab) to Blacksmith.io, and I'm really impressed with the performance so far. The only downside is there's no Docker layer or image cache like I had on my self-hosted runner, but I can't complain on the free tier.
Bummer there's no free tier. I've been bashing my head against an intermittent CI failure problem on Github runners for probably a couple years now. I think it's related to the networking stack in their runner image and the fact that I'm using docker in docker to unit test a docker firewall. While I do appreciate that someone at Github did actually look at my issue, they totally missed the point. <a href="https://github.com/actions/runner-images/issues/11786" rel="nofollow">https://github.com/actions/runner-images/issues/11786</a><p>Are there any reasonable alternatives for a really tiny FOSS project?
We are working on a platform that lets you measure this stuff in real time for free.<p>You can check us out at <a href="https://yeet.cx" rel="nofollow">https://yeet.cx</a><p>We also have an anonymous guest sandbox you can play with:<p><a href="https://yeet.cx/play" rel="nofollow">https://yeet.cx/play</a>