
We managed to speed up our CI to save 168 days of execution time per month

23 points by pyprism, about 1 year ago

5 comments

crohr, about 1 year ago
Any time I see engineering time spent on splitting/sharding test suites, I can't help but wonder whether access to a single beefier runner (e.g. 64+ CPUs) would have made all that work unnecessary. I also always find the duplicated setup time a bit wasteful on resources.
xnx, about 1 year ago
Every time I see this type of story, I can only think of how people get rewarded for heroically/dramatically rescuing bad systems, while those who chose boring/proven technology at an earlier point get no recognition (and in many cases aren't even hired).
odie5533, about 1 year ago
They call `pytest --collect-only` and parse the output before distributing the tests to a GitHub Actions Python matrix. I'm a little surprised pytest doesn't offer the ability to cache collection. Then again, a slow collection time may itself be an issue worth addressing, since developers have to suffer it locally too!
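The approach described above can be sketched roughly as follows. This is a minimal, illustrative script, not the article's actual implementation: the helper names are hypothetical, and the shard count would normally come from the CI matrix configuration.

```python
import subprocess

def collect_tests():
    # Run pytest in collection-only mode; "-q" prints one node ID per line.
    out = subprocess.run(
        ["pytest", "--collect-only", "-q"],
        capture_output=True, text=True, check=False,
    ).stdout
    # Node IDs look like "tests/test_foo.py::test_bar"; skip summary lines.
    return [line for line in out.splitlines() if "::" in line]

def shard(tests, n_shards):
    # Round-robin the node IDs into n_shards buckets, one per matrix job.
    buckets = [[] for _ in range(n_shards)]
    for i, test in enumerate(tests):
        buckets[i % n_shards].append(test)
    return buckets
```

Each matrix job would then invoke pytest with only its bucket's node IDs as arguments, so every job runs a disjoint slice of the suite.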
stuaxo, about 1 year ago
This could do with more detail. Is it using pytest-xdist?
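For context, pytest-xdist is the common alternative to matrix sharding: it parallelizes within a single runner rather than across jobs. A minimal, illustrative GitHub Actions step (the step name is hypothetical, not from the article) might look like:

```yaml
- name: Run tests in parallel on one runner
  run: |
    pip install pytest-xdist
    pytest -n auto  # spawn one worker process per CPU core
```

Within-runner parallelism avoids the duplicated setup cost of many matrix jobs, at the price of needing a machine with enough cores.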
Arcuru, about 1 year ago
I must be misunderstanding. Was every test run spending 3 _hours_ just discovering which tests to run, nine times over?