Shipping a compiler every six weeks

184 points by pietroalbini over 5 years ago

7 comments

wwwigham over 5 years ago
Over at the TypeScript compiler we recently slowed our release cycle from 2 months to 3 months specifically because of an observation noted here: nobody used our beta (or RC) builds. On a 2-month schedule, we had 1-2 prerelease builds during any given 6-ish week period. The faster releases we had before were great for us - if something wasn't ready, we could just sit on it until the next release, since releases came so often. But because they came so often, we struggled to collect feedback on prereleases - we always, consistently, got most regressions reported only after the full release. We didn't really like this, so a few things were tried. First we added an earlier prerelease cut and feature freeze (the beta) - this made each release smaller (the earlier freeze meant more time focused on regressions or on the next release), but we still didn't get any feedback. Some of our users told us they'd test against our betas, but only if we had fewer of them... so we lengthened our release cycle to try that. We haven't done many releases on the longer 3-month cycle yet, so I can't say whether it has helped, but from the feature-development side I can definitely say that longer releases slow down how quickly things are built on top of one another.

In parallel, we've also been improving our test infrastructure in ways similar to Crater. In the past few cycles we've added a bunch of tests that exist only to test compatibility - all of DefinitelyTyped is now tested on our builds, and a number of community projects that we've produced build scripts for are built as well. I would _love_ to be able to crawl GitHub and just build arbitrary stuff, but TS/JS build chains are almost never a simple `npm run build`, so the best we can get is, approximately, loading a repo into an editor and checking for a crash (which we do have a tool doing). The Rust ecosystem's dogma around using `cargo build` and `cargo test` to handle _everything_ really does help make what they're doing possible.
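For readers unfamiliar with what a Crater-style pass looks like, here is a minimal sketch in Rust of the `cargo build` loop the comment alludes to. The crate paths and the loop are hypothetical stand-ins; the real tool fetches sources from crates.io and GitHub and sandboxes every build.

```rust
// Minimal sketch of a Crater-style compatibility pass over local crate
// checkouts. Paths are hypothetical; the real tool downloads crates.io and
// GitHub sources and isolates each build.
use std::path::Path;
use std::process::Command;

/// Returns true if `cargo build` succeeds in the given crate directory.
fn builds_cleanly(crate_dir: &Path) -> bool {
    Command::new("cargo")
        .arg("build")
        .current_dir(crate_dir)
        .status()
        .map(|status| status.success())
        .unwrap_or(false)
}

fn main() {
    // A real run enumerates thousands of crates; two stand-ins shown here.
    let checkouts = ["checkouts/serde", "checkouts/rand"];
    for dir in checkouts {
        let result = if builds_cleanly(Path::new(dir)) { "ok" } else { "regressed" };
        println!("{dir}: {result}");
    }
}
```

Because `cargo build` and `cargo test` are the whole interface, the same loop works for essentially every crate, which is exactly the uniformity the TS/JS ecosystem lacks.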
phillipcarter over 5 years ago
The author is a bit inaccurate, depending on how you count things. At Microsoft we ship the C#, F#, and VB compilers on a cadence faster than 6 weeks; probably every 2 weeks on average. New language features don't make it into each of these releases, but bug fixes and performance improvements certainly do. These releases are more driven by tooling evolution (VS updates, .NET SDK updates) than language evolution though.
dxf over 5 years ago
At Google, we strive to ship a new Clang/LLVM toolchain to our C++ developers every week.

Two of our toolchain engineers gave a talk on this at CppCon. Check it out if you're interested: https://www.youtube.com/watch?v=zh91if43QLM
whack over 5 years ago
It's pretty cool to see a demonstrable example of how a solid testing framework can significantly boost development times. Especially one that includes end-to-end tests as well.

I've been involved in a lot of projects that rely entirely on unit tests and skimp on integration tests. Invariably the devs make up for this by doing a lot of manual testing, even though it is very time- and labor-intensive. People significantly underestimate the safety and velocity benefits of a solid suite of integration tests.
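To illustrate the kind of check being praised here, below is a minimal end-to-end test sketch in Rust. The `mylang` binary, fixture path, and output path are all invented for the example; the point is that the test drives the real compiler and asserts on observed behaviour rather than on internal units.

```rust
// Sketch of an end-to-end test: invoke a hypothetical `mylang` compiler on a
// fixture program, run the produced binary, and assert on its output.
// Binary name, fixture path, and output path are invented for illustration.
use std::process::Command;

#[test]
fn compiles_and_runs_hello_world() {
    // Compile a small fixture program with the compiler under test.
    let compile = Command::new("mylang")
        .args(["tests/fixtures/hello.my", "-o", "target/hello"])
        .status()
        .expect("failed to invoke compiler");
    assert!(compile.success(), "compilation failed");

    // Run the produced binary and check its observable output.
    let run = Command::new("target/hello")
        .output()
        .expect("failed to run compiled program");
    assert!(run.status.success(), "compiled program exited with an error");
    assert_eq!(String::from_utf8_lossy(&run.stdout).trim(), "hello, world");
}
```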
surfsvammel over 5 years ago
This is the most inspiring thing I've read this week. I wish more projects would do this kind of thing.
CamJN over 5 years ago
The editions *were* a bad idea. And worse, they led to completely shutting down the idea of a Rust 2.0 which dumps all the flawed ideas for which better solutions have been found. So they have to work twice as hard to continue maintaining code to support features that people shouldn't be using anymore anyway. If dropping support for unmaintained crates led to an ecosystem where one could reasonably integrate the crates one wants to use together, that'd be way better than the situation we currently have, where old crates still compile but nothing agrees on the proper way to do anything, and a large number of crates can't update to the new idioms because they need to remain minimally usable in relation to slow-moving or abandoned crates.
mbrodersen over 5 years ago
I am the maintainer of a Haskell-like JIT compiler used for running complicated pairing/rostering optimizations for major airlines. It has never had a bug in production, thanks to 9000+ tests. We routinely add new features and push them into production with no fear because of the automated tests. This can be done in less than a day if urgent.