I'd not heard of tup, so I thought I'd try it out on Windows. Unfortunately I hit a bug straight away: Tup is not compatible with MSVC 2015 unless you disable VCToolsTelemetry.dat generation in the registry [0].<p>I don't fancy adopting a tool that forces me to opt out of sending compiler debug telemetry to Microsoft the next time I hit a compiler bug.<p>There is a nice (if a bit dated, from 2010) review here [1] which discusses some other features and shortcomings.<p>[0] <a href="https://github.com/gittup/tup/issues/182" rel="nofollow">https://github.com/gittup/tup/issues/182</a><p>[1] <a href="https://chadaustin.me/2010/06/scalable-build-systems-an-analysis-of-tup/" rel="nofollow">https://chadaustin.me/2010/06/scalable-build-systems-an-anal...</a>
I didn't see any examples of .PHONY-type rules. Can tup do this?<p>I've recently found myself returning to make for multi-build-system orchestration, e.g. Rust, C, and PHP libraries.<p>Does anyone have examples of using tup for this type of thing?<p>Also, I've found myself really enjoying declarative build systems more, e.g. Cargo or Maven. It seems like for C there could be a simple set of standard tup files that are run by a tool like Cargo over a standard tree layout. I didn't notice this in there, but I could see a simple wrapper around tup giving this experience to almost any language. In fact, maybe use Cargo as a base and add tup as a supported src type or something through a Cargo extension. It would probably need to be the default for the entire project, for sanity's sake.
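To illustrate the idea: the sort of "stock" per-directory Tupfile I'm imagining for a plain C layout might look something like this (rule syntax as in the tup examples; the ar step and the library name are just made up to illustrate the convention):<p><pre><code> # compile every .c file in this directory into an object file
 : foreach *.c |> gcc -Wall -c %f -o %o |> %B.o
 # archive the objects into a static library (hypothetical naming convention)
 : *.o |> ar crs %o %f |> libmodule.a
</code></pre>
The convention-driven tool would then just be responsible for dropping Tupfiles like these into each source directory and wiring up the final link step.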
We use Tup to build Flynn [1]; it's a pretty neat build system.
The only real drawback is that it's hard to get working on some operating systems because of its dependency on FUSE.
That said, it still beats the crap out of GNU Make, CMake, etc.<p>[1] <a href="https://github.com/flynn/flynn" rel="nofollow">https://github.com/flynn/flynn</a>
While I can believe the claim that it's better than make, it would be way more interesting to see how it compares to Bazel. After using it (or rather Blaze) at Google and now Bazel at Improbable, I consider it the gold standard in build tools.<p>If anything, I wish Google would open source the rest of the build "ecosystem" that, together with Blaze, lets you build the whole codebase in seconds. It was pretty amazing.
This reminds me of DJB's ideas for a build system, redo [1]. However, it never seemed to gain any traction. (or did it? [2])<p>[1] <a href="http://cr.yp.to/redo.html" rel="nofollow">http://cr.yp.to/redo.html</a><p>[2] <a href="http://apenwarr.ca/log/?m=201012#14" rel="nofollow">http://apenwarr.ca/log/?m=201012#14</a>
My $0.02 on Tup:<p>First of all, I cannot express how much more I like it than make. If Tup is an option, I will use it.<p>What it does well:<p>1) It prevents you from making dependency mistakes: it hooks into the FS layer using FUSE and tracks all input and output files that are inside your build directory. If you make any mistake that could cause a future incremental build to be incorrect, it errors out rather than continuing.<p>2) It is opinionated about how your project should be structured. This has some negatives if you are trying to duplicate a particular structure from Make, but all in all it does guide you in the right direction.<p>3) There isn't a lot of syntax to learn (see the example at the end of this comment). This is good because the syntax is very different from anything else I've used.<p>#1 is really the killer feature for me; the amount of time I want to spend debugging makefiles is just slightly less than zero.<p>What it doesn't do, but I'm not bothered by:<p>Tup literally only manages commands that have one or more inputs and one or more outputs, and which must be run if and only if the inputs have changed or the outputs do not exist; the outputs must be within the hierarchy of the project.<p>1) Configuration must be done before Tup is launched<p>2) Anything you might use a .PHONY for in make needs to be done outside of tup<p>3) Install commands must be done outside of tup.
So configuring, installing, &c. happen outside of tup. I find that a small makefile handling those three steps works fine; others using tup tend to use a shell script.<p>What it doesn't do that I wish it did:<p>1) No clean command; I currently work around this by having it generate a .gitignore file and running git clean -X; still, it's annoying that this isn't built in.<p>2) It does not handle paths with spaces. This is actually safe, since it enforces relative paths: if the build works on your system it should work everywhere, even if the project is unpacked to a path with spaces.
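For anyone curious about the syntax mentioned in point 3 under "what it does well": a Tupfile rule is basically ": inputs |> command |> outputs". From memory (so details may be slightly off), the hello-world example from the tup docs looks roughly like:<p><pre><code> # compile each .c file to an object file (%f = input, %o = output, %B = input basename)
 : foreach *.c |> gcc -Wall -c %f -o %o |> %B.o
 # link all the objects into one binary
 : *.o |> gcc %f -o %o |> hello
</code></pre>
There are a few more features (variables, macros), but rules like these are most of it.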
The FUSE dependency is pretty unfortunate. Is there any way to get rid of it?<p>Other than that, it's pretty cool. And the creator clearly has a sense of humor, something which is far rarer than it should be.
The related linux distribution, Gittup: <a href="http://gittup.org/gittup/" rel="nofollow">http://gittup.org/gittup/</a>
How does it handle building from LaTeX sources where you need to "rebuild" the document multiple times to get page numbers and references right?
Been using tup in production for about a year now and absolutely love it. The speed is nice, but compared to make projects, where you need to 'make clean' to be sure everything gets properly rebuilt, the confidence that tup will do the right thing every time is fantastic.
To the authors: It would be really great if you could compare the SCons build system to Tup (with some numbers), so that I can convince my managers to switch :).
I may have missed how, but...<p>CMake solves the problem of "locate the library FOO of version X.Y, add the compilation flags, link flags, include folder, link folder, static link options, dynamic link options" and all the other details needed to make use of another software component. Sometimes that component is found in my operating system's "default" spot and other times it's in an install directory that I explicitly input. How do I tell Tup to find these components/libraries and then have Tup also add in everything needed for all the commands that build things using that component?<p>Also, I often have very different components going into different build targets that my project makes. How do the rules chain and build? In other words, just because I link one of my libraries with libssl doesn't mean I want every single source file and library I create in my project to then be linked with ssl.
I can't recall exactly what, but I hit expressiveness problems with tup (after a proper RTFM). Probably some self-referential issue. For the use case listed it's indeed very nice and very fast.
We've been using it for a few years. It's great. The only issue we ever had was when we tried it inside a Docker container. It's related to FUSE.
<a href="https://github.com/docker/docker/issues/1916" rel="nofollow">https://github.com/docker/docker/issues/1916</a>
Is tup capable of building out of tree? By this I mean having:<p><pre><code> project_1/src/main.c
 project_1/src/something/a.c
 project_1/src/something/a.h
</code></pre>
Is there some way for tup to manage discovering the files to build and everything else needed, or will I need to add every file path manually, like with make?
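The examples I've seen use a "foreach" rule with a glob, something like:<p><pre><code> : foreach src/*.c |> gcc -c %f -o %o |> %B.o
</code></pre>
but I couldn't tell whether a glob like that reaches into subdirectories such as src/something/, or whether each directory is expected to have its own Tupfile.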
Previous HN discussion from Nov 1, 2014:<p><a href="https://news.ycombinator.com/item?id=8539564" rel="nofollow">https://news.ycombinator.com/item?id=8539564</a>
> tup, transitive verb: To have sex with.<p><a href="https://en.wiktionary.org/wiki/tup" rel="nofollow">https://en.wiktionary.org/wiki/tup</a>
Tup's main problem is that it's unusual and it doesn't have a library of build rules. But it's fast!<p>On a related note, I've always wondered if it was possible to have a build system based on dynamic library injection / strace.<p>The idea would be that you just write your build rules as a shell script. Then, you run it with a special shell that catches open(), etc. in child processes (via library injection, etc.). These system calls get tracked and stored in a special build table. One that you <i>don't</i> have to edit.<p>Then, when you want to run the build again, you just re-run the magic shell. It catches the various commands, checks their inputs / outputs, and then <i>skips running the command</i> if the targets are up to date.<p>e.g.<p><pre><code> $CC -c foo.c -o foo.o
</code></pre>
Hmm... "foo.o" is up to date with "foo.c", so I don't need to run the compiler. I just return "success!"<p>That would get rid of <i>all</i> magic build systems. All build syntax. All dependency ordering. The build system would just take care of it itself.<p>I've played with this before, enough to note that it's likely possible. But I haven't got far enough to publish it.