TechEcho

We managed to speed up our CI to save 168 days of execution time per month

23 points by pyprism, about 1 year ago

5 comments

crohr, about 1 year ago
Any time I see engineering time spent on splitting/sharding test suites, I can't help but wonder if access to a single beefier runner (e.g. 64+ cpus) would have alleviated all that work. Also always find the duplicated setup time a bit wasteful on resources.
Comment #40355093 not loaded
Comment #40354200 not loaded
xnx, about 1 year ago
Every time I see this type of story, I can only think about how people get rewarded for heroically/dramatically rescuing bad systems, while those who chose boring/proven technology at an earlier point don't get recognition (or don't even get hired, in many cases).
odie5533, about 1 year ago
They call pytest --collect-only and parse the output before distributing it to the GitHub Actions Python matrix. I'm a little surprised pytest doesn't offer the ability to cache collection. Though a slow collection time may be an issue worth addressing in its own right, since developers must suffer it locally too!
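A minimal sketch of the approach this comment describes: run pytest's collection pass once, parse the emitted test node IDs, and split them into shards that a GitHub Actions matrix could fan out to parallel jobs. Everything here is illustrative and not from the article; in particular, NUM_SHARDS and the round-robin split are assumed, not confirmed details of their setup.

```python
import json
import subprocess

NUM_SHARDS = 9  # illustrative number of parallel matrix jobs


def collect_test_ids() -> list[str]:
    """Run `pytest --collect-only -q` once and parse the node IDs it prints."""
    out = subprocess.run(
        ["pytest", "--collect-only", "-q"],
        capture_output=True, text=True, check=True,
    ).stdout
    # In -q mode pytest prints one node ID per line (e.g. "tests/test_a.py::test_one"),
    # followed by a summary; keep only the lines that look like node IDs.
    return [line for line in out.splitlines() if "::" in line]


def shard(ids: list[str], n: int) -> list[list[str]]:
    """Round-robin the node IDs into n roughly equal buckets."""
    buckets: list[list[str]] = [[] for _ in range(n)]
    for i, node_id in enumerate(ids):
        buckets[i % n].append(node_id)
    return buckets


if __name__ == "__main__":
    # Demo with a static list; in CI you would feed collect_test_ids() in
    # and pass the JSON to a GitHub Actions matrix.
    demo = [
        "tests/test_a.py::test_one",
        "tests/test_a.py::test_two",
        "tests/test_b.py::test_three",
    ]
    print(json.dumps(shard(demo, 2)))
```

Each matrix job would then invoke pytest with only its bucket of node IDs, so collection and execution both happen on a fraction of the suite.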
stuaxo, about 1 year ago
Could do with more detail: is this using pytest-xdist?
Comment #40355214 not loaded
Arcuru, about 1 year ago
I must be misunderstanding. Every test run was spending 3 _hours_ just discovering which tests to run, 9 times over?