I think it's worth mentioning another alternative build system I came across recently, redo:

    https://github.com/apenwarr/redo#readme
Rather than inventing yet another custom syntax, redo's build scripts are ordinary shell scripts (or, at your option, scripts in any other language that can be invoked via a shebang line). Yet redo makes it much, much easier than make to record dependencies and track changes, and hence to rebuild the exact minimum number of files necessary.

(previously: http://news.ycombinator.com/item?id=2104803)
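For a flavor of what this looks like: a minimal default.o.do along the lines of the examples in the README (a sketch; the compiler invocation is illustrative):

    # default.o.do -- tells redo how to build any .o from the matching .c
    # redo passes: $1 = target, $2 = target minus extension, $3 = temp output
    redo-ifchange "$2.c"     # record a dependency on the source file
    gcc -c -o "$3" "$2.c"    # write to $3; redo renames it over $1 on success

Calling redo-ifchange inside the script is how dependencies get recorded, which is why no separate dependency syntax is needed.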
He should probably look at tup (http://gittup.org/tup/make_vs_tup.html), which, when using inotify (otherwise stat must be called O(n) times), pretty much always starts building (or reports nothing to do) within a few milliseconds.
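To get the inotify mode, the workflow is roughly this (commands from my memory of tup's docs, so double-check them):

    tup init       # create the .tup database at the project root
    tup monitor    # start the inotify-based file monitor (Linux only)
    tup upd        # update; with the monitor running there is no O(n) stat scan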
His point about scons is unfortunately well known. You can find an explanation from Steven Knight (head of the scons project) about why scons failed for Chromium on the scons mailing list (http://old.nabble.com/why-Chromium-stop-using-SCons-for-building--td29482303.html).

It would be interesting to see what would happen if they used waf instead of scons. Waf is also written in Python and started as a fork of scons (but is so different now that it can be considered a totally separate design and codebase). Waf is much faster than scons (easily an order of magnitude), to the point that I think it would be hard to get much faster without losing features and/or relying on system-specific mechanisms (file-change notification, checksumming filesystems, etc.).

Samba has been using waf for more than six months now, and they seem quite happy with it. As a former user/contributor of scons, I much prefer waf now, and anyone interested in complete build systems should look at it, IMO.
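For anyone who hasn't seen waf: a minimal wscript is ordinary Python, roughly like this (a sketch from memory; treat the details as approximate):

    #! /usr/bin/env python
    # wscript -- minimal waf build script for a single C program

    def options(opt):
        opt.load('compiler_c')

    def configure(conf):
        conf.load('compiler_c')

    def build(bld):
        bld.program(source='main.c', target='app')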
I'll second the suggestion to take a look at tup -- it is based on some really good, clear-headed foundational thinking about how to make incremental builds fast, and the implementation looks good too (though I have only tried it on experimental toy setups, and it is still pretty new, so who knows).

Regarding the specifically cited point of including dependencies on compilation flags, unless I am confused, I believe it can be done much more cheaply in standard make, in one of two ways:

First way: make the build path of the object file depend on the build flags. This has zero performance penalty, and also has the nice side effect that when changing flags (e.g., from release to debug and back again), you don't have to recompile everything, because the previous build is still sitting around.

Second way: store the build flags in a separate makefile snippet (which you can either include or read with $(shell)), and add that snippet as a dependency of the object files. This has minimal performance impact, since it's just another ordinary prerequisite of the object files. (This second trick comes from one of the redo articles posted a few days ago; sadly I don't recall exactly which.)
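To make both tricks concrete, a minimal sketch (flags.mk and FLAVOR are names I made up for illustration):

    # Second way: keep the flags in flags.mk and list it as a prerequisite,
    # so editing the flags dirties every object file.
    include flags.mk                 # defines CFLAGS

    %.o: %.c flags.mk
    	$(CC) $(CFLAGS) -c -o $@ $<

    # First way (alternative): encode the flavor in the object path, so
    # release and debug objects live side by side and switching costs nothing.
    build/$(FLAVOR)/%.o: %.c
    	mkdir -p $(dir $@)
    	$(CC) $(CFLAGS) -c -o $@ $<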
Direct link to the GitHub project: https://github.com/martine/ninja

I'm always interested in alternatives to Make because I just find it so painful. However, I'd say that only about half of Make-related pain comes from its dependency management. The other half, to me, is in using its language, and Ninja doesn't seem to do anything to ease that pain. Its manual says: "You should generate your ninja files using another program." That seems like a bad sign to me.

Tools like CMake can be helpful when there are lots of configurations available and dependencies to check, but on a small project I want to write a quick script that will just work. CMake and its ilk add another layer of complication that I don't want to have to deal with most of the time.
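For context, a hand-written build.ninja looks roughly like this (a toy sketch; per the manual, real ones are meant to be machine-generated):

    # build.ninja -- one rule, one target
    cflags = -O2

    rule cc
      command = gcc $cflags -c $in -o $out
      description = CC $out

    build foo.o: cc foo.c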
This is the exact reason why Chad Austin started working on ibb (I/O-Bound Build): http://chadaustin.me/2010/03/your-version-control-and-build-systems-dont-scale-introducing-ibb/
Wow, I haven't worked on projects of such scale, but I had supposed that the existing infrastructure (make, gcc, etc.) was good enough for large projects.

Does this mean there's something wrong with the current state of affairs, if you have to rebuild your infrastructure for a large project? Or does it mean that Google is so unbelievably great that nothing is good enough for them, so anything important has to be redone from scratch?
Why do new build systems have to use some clunky old make-style syntax? For me, speed is hardly the primary goal. A build system must be understandable, readable, and easy to debug. For starters, it should have an easy-to-read syntax.

If you have a build system that your users also have to deal with, readability and maintainability are a lot more important. SCons managed to achieve most of this by using Python syntax. But its behavior can be quite unpredictable at times.
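For comparison, a minimal SConstruct really is plain Python (a sketch; file names invented):

    # SConstruct -- SCons build scripts are ordinary Python
    env = Environment(CCFLAGS=['-O2'])
    env.Program(target='app', source=['main.c'])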
It's interesting, but probably not that surprising, that the Linux port has faster build times, given that building the Linux kernel is the primary metric kernel developers are interested in, and one they try to improve constantly.
Why not reuse redo/CMake/tup/Waf or the myriad of other build tools? Why does Google reinvent the wheel one more time?

Is it because of its Not-Invented-Here culture?