This is a good analysis. A potential option for his proposed build output format is Nix expressions (http://nixos.org/nix/).

Nix is already kind of a fusion of a package manager and a build system. It's rather mature, has a decent ecosystem, and does a lot of what he is looking for:

- Complete determinism, handling of multiple versions, total dependency graphs
- Parallel builds (using that dependency graph)
- Distribution

One major benefit of a solution like generating Nix files over compiler integration is that it works for cross-language dependencies. Integrated solutions often break down on cases like a C extension of a Ruby gem relying on the presence of imagemagick; Nix has no problem handling that kind of dependency.

Also, of course, it is a lot less work to generate Nix expressions than it is to write a package manager. There are already scripts like https://github.com/NixOS/cabal2nix which solve the problems of the packaging system they replace.
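To make the "generate Nix expressions" idea concrete, here is a rough sketch in Python of what such a generator could emit. The gem name, version, and the exact shape of the derivation are illustrative assumptions, not the output of cabal2nix or any real tool:

    # Rough sketch: emit a Nix expression for a hypothetical Ruby gem whose C
    # extension needs imagemagick. The names and derivation shape are made up;
    # a real cabal2nix-style tool for gems would read the gemspec instead.

    NIX_TEMPLATE = """{{ stdenv, ruby, imagemagick }}:

    stdenv.mkDerivation {{
      name = "{name}-{version}";
      src = ./.;
      # The cross-language dependency is just another build input.
      buildInputs = [ ruby {native_deps} ];
      buildPhase = "gem build {name}.gemspec";
    }}
    """

    def to_nix(name, version, native_deps):
        """Render a (very simplified) Nix expression for the given gem."""
        return NIX_TEMPLATE.format(name=name, version=version,
                                   native_deps=" ".join(native_deps))

    print(to_nix("rmagick", "2.15.4", ["imagemagick"]))

The point is that the generator only has to describe dependencies; the actual fetching, building, and caching are Nix's job.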
I'd prefer to see package managers be more separate. Downloading dependencies involves interacting with third parties and has security implications. That's fine when you're downloading software intentionally from a source you trust, but I've had the experience of downloading source code and noticing that the build system is downloading from sources I don't recognize or without https, and that's not so good.
There is a problem, but you're looking at it the wrong way.

What's in the compiler's machine-code generation phase that the build system needs to know about? If nothing, then making a monolithic system is only going to make your life miserable.

Well-designed compilers are already split into (at least) two subsystems: frontend and backend. The frontend takes the program and spits out an AST (very roughly speaking; there is also the semantic-analysis phase and the intermediate-representation-generation phase). The backend is concerned with code generation. What your build system needs is just the frontend part. Not only your build system, but also an IDE can benefit greatly from a frontend (which, as one of the commenters pointed out, results in wasteful duplication of effort when an IDE writer decides to roll his/her own language parser embedded in the tool).

I think the AST and semantic analyzer are going to play an increasing role in a variety of software development activities, and it's folly to keep them hidden inside the compiler like forbidden fruit.

And that's the exact opposite of a monolith. It's more fragmentation, splitting the compiler into useful and reusable pieces.

(At this point I have a tendency to gravitate towards recommending LLVM. Unfortunately I think it's a needlessly complicated project, not least because it's written in a needlessly complicated language. But if you're okay with that, it might be of assistance to your problems.)
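As a toy illustration (Python here only because its standard-library ast module happens to be an exposed frontend), a build system can ask the frontend for exactly the information it needs, such as a file's module dependencies, without the backend ever being involved:

    import ast

    def module_dependencies(source: str):
        """Walk the AST the frontend produces and collect imported module names.
        This is the kind of question a build system or IDE wants answered; no
        code generation (backend) is involved at all."""
        tree = ast.parse(source)          # frontend: source -> AST
        deps = set()
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                deps.update(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                deps.add(node.module)
        return sorted(deps)

    print(module_dependencies("import os\nfrom json import loads\n"))
    # ['json', 'os']

An IDE could sit on the same frontend instead of rolling its own parser.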
I think a good way to handle this convergence would be to move away from repeated invocations of the compiler. Instead, your build system/IDE could keep an active instance of the compiler running that keeps the last generated ASG/intermediate representation/machine code in memory, along with bookkeeping information that remembers the relations between these. When a file changes or a rebuild is requested, this lets the build system not only correctly identify dependencies, but also perform far more granular incremental compilation than is possible now. For example, if you really only change one function, then only that function's code (and possibly any code which relies on optimizations based on that function's form) needs to be regenerated. You could go further by tracking dependencies in unit tests and automatically re-running them when the result may have changed, or by using this to facilitate some sort of live coding.

This sounds like it would need a huge amount of memory, but IDEs already do this up to the ASG level, and much memory and computation is wasted on the compiler re-generating the ASG in parallel when the IDE has a similar one already analyzed. The main disadvantage is that it would restrict how build systems could be structured: to pull this off, the build system would need much more ability to reason about the build ("call command X if Y has changed after Z" won't cut it). Macro systems would also need to be more predictable.

As far as keeping things non-monolithic, you could still have plenty of separation between each phase of the compilation process; the only extra interface you would need between passes is the more granular dependency tracking.

edit: grammar
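Here is a rough sketch of that function-level granularity, assuming a resident process that fingerprints each top-level function and reports only the ones whose source changed; a real implementation would of course track the ASG and optimization dependencies too, which this toy ignores:

    import ast, hashlib

    class IncrementalCompiler:
        """Toy sketch: a resident 'compiler' that remembers a fingerprint per
        top-level function and only reports the functions whose source changed,
        i.e. the ones that would actually need recompiling."""

        def __init__(self):
            self.fingerprints = {}   # function name -> hash of its source

        def update(self, source: str):
            tree = ast.parse(source)
            changed = []
            for node in tree.body:
                if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                    body = ast.get_source_segment(source, node) or ""
                    digest = hashlib.sha1(body.encode()).hexdigest()
                    if self.fingerprints.get(node.name) != digest:
                        self.fingerprints[node.name] = digest
                        changed.append(node.name)
            return changed

    c = IncrementalCompiler()
    print(c.update("def f(): return 1\ndef g(): return 2\n"))  # ['f', 'g']
    print(c.update("def f(): return 1\ndef g(): return 3\n"))  # ['g']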
This is a great writeup of what I think is an unfortunate trend. When your compiler, build system and IDE are all tightly coupled, you end up locked into a single language.

It's hard to develop pieces of your code in multiple languages and have everything play well together. But for many projects that's a good way to do things. For example, in games programming, you might want to use an offline texture compression tool. Ideally that should be integrated into the overall build; but you shouldn't have to write your texture compressor in Haskell or C++ or whatever just because that's what the game is written in.

I think Xcode is what a lot of other IDEs and build systems are moving towards. Xcode is nice as long as you're working with normal code in permitted languages (C++, Obj-C, Swift) and permitted resource formats (NIBs). But if you need to do something slightly unusual, like calling a shell script to generate resources, it's horrible.

Oh, and I didn't even mention package managers! Having those tightly coupled to the other tools is horrible too.
Configuration management systems (chef, puppet, cfe2/3) are a superset of package management#, which, in turn, depends on frozen static artifacts of each project's build system.

# This is because configuration management installs files and packages, templates files, and runs commands pre/post, similar to how most package managers work, but at the fine-grained level of *user customized* as opposed to *maintainer customized*.

The meta point is that one could imagine a "system", or approach, where the project build and configuration management systems were seamless. One main challenge in doing so is that the staticness of artifacts allows for *reproducible* compatibility, whereas end-to-end configurability can easily become Gentoo.
Julia (http://julialang.org/) is an example of a system that combines the three. The JIT compiler is available at a REPL, pre-compiled packages available in v0.4 may be considered as coming from a build system, and the package manager comes with every Julia distribution.
In the days of Turbo Pascal even the editor was part of the whole. Compiler, build system, editor all tied together as a single executable. Package management wasn't on the horizon yet for that environment so it wasn't included. But there is definitely a precedent for this and this kind of convergence is hardly a new thing.

Personally I don't like it much. I prefer my tools to be separate and composable.
This is yet another problem caused by the fact that compilers have a 'file in, file out' interface (the other problem is the performance of the compilation/linking/packaging process). There's simply no reason why the input to a compiler should be a file, and no reason why the result should be one.
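As a small illustration (using Python only because its compiler is exposed as a builtin), compilation can go from an in-memory string to an in-memory code object, with files appearing nowhere in the interface:

    # Compilation without touching the filesystem: the built-in compile()
    # takes a source string and returns an in-memory code object, which exec()
    # can run directly. The "filename" argument is only used for tracebacks.
    source = "def square(x):\n    return x * x\n"
    code_obj = compile(source, "<in-memory>", "exec")

    namespace = {}
    exec(code_obj, namespace)
    print(namespace["square"](7))   # 49

The same could hold on the output side: a compiler could hand its result to a linker or packager as an in-memory object rather than a file on disk.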
Is sbt an early prototype of what the OP has in mind? I like sbt because it seems to me that by combining (at least parts of) each of compiling, building, and packaging, the entire workflow is streamlined. For instance, sbt has an incremental re-compiler which applies a suite of heuristics to minimize the amount of code that has to be recompiled, and it is triggered automatically by any change to the source. In practice this is a huge time saver, but it wouldn't work without relying on sbt's obviously detailed knowledge of the dependency graph.

Another example: sbt can also handle package management and deployment, largely via plugins (e.g., "native-packager", "sbt-ghpages", "assembly" (uber jar), "sbt-elasticbeanstalk").
This analysis misses one major part of the equation: configuring the build. Almost every non-trivial piece of software can be built in multiple configurations: Debug vs. Release, with/without feature X, using/not using library Y.

The configuration of the build can affect almost every aspect of the build: which tool/compiler is called, whether certain source files are included in the build or not, compiler flags (including what symbols are predefined), linker flags, etc. One tricky part about configuration is that it often needs a powerful (if not Turing-complete) language to fully express; for example, "feature X can only be enabled if feature Y is also enabled." If you use the autotools, you write these predicates in Bourne shell. The Linux kernel started with Bourne shell, then Eric Raymond tried to replace it with CML2 (http://www.catb.org/~esr/cml2/), until a different alternative called LinuxKernelConf won out in the end (http://zippel.home.xs4all.nl/lc/).

Another thing missing from the analysis is build-time abstractions over native OS facilities. The most notable example of this is libtool. The fundamental problem libtool solves is that building shared libraries is so far from standardized that it is not reasonable for individual projects that want to be widely portable to attempt to call native OS tools directly. They call libtool, which invokes the OS tools.

In the status quo, the separation between configuration and build system is somewhat delineated: ./configure spits out a Makefile. But this interface isn't ideal. "make" has way too many smarts in it for this to be a clean separation: it allows predicates, complex substitutions, implicit rules, it inherits the environment, etc. If "make" were dead simple and Makefiles were not allowed any logic, then you could feasibly write an interface between "make" and IDEs. The input to make would be the configured build, and it could vend information about specific inputs/outputs over a socket to an IDE. It could also do much more sophisticated change detection, based on file fingerprints instead of timestamps.

But to do that, you have to decide what format "simple make" consumes, and get build configuration systems to output their configured builds in this format.

I've been toying around with this problem for a while, and this is what I came up with for this configuration->builder interface, specified as a protobuf schema: https://github.com/haberman/taskforce/blob/master/taskforce.proto
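To make the "dead simple make" idea concrete, here is a minimal sketch assuming the configured build is pure data (a flat list of tasks with explicit commands, inputs, and outputs) and change detection is fingerprint-based. The task format below is invented for illustration and is not the taskforce.proto schema linked above:

    import hashlib, json, os, subprocess

    # Toy "dead simple make": the configured build is plain data, with no
    # predicates or implicit rules, and staleness is decided by content
    # fingerprints rather than timestamps.
    TASKS = [
        {"cmd": ["cc", "-c", "foo.c", "-o", "foo.o"], "inputs": ["foo.c"], "outputs": ["foo.o"]},
        {"cmd": ["cc", "foo.o", "-o", "foo"],         "inputs": ["foo.o"], "outputs": ["foo"]},
    ]

    def fingerprint(paths):
        """Hash the contents of the input files (not their timestamps)."""
        h = hashlib.sha1()
        for p in paths:
            with open(p, "rb") as f:
                h.update(f.read())
        return h.hexdigest()

    def build(tasks, statefile=".fingerprints.json"):
        state = {}
        if os.path.exists(statefile):
            with open(statefile) as f:
                state = json.load(f)
        for t in tasks:
            key = " ".join(t["cmd"])
            fp = fingerprint(t["inputs"])
            stale = state.get(key) != fp or not all(os.path.exists(o) for o in t["outputs"])
            if stale:
                subprocess.check_call(t["cmd"])   # run the task's command verbatim
                state[key] = fp
        with open(statefile, "w") as f:
            json.dump(state, f)

    build(TASKS)

Because the build description carries no logic, an IDE (or anything else) could consume the same task list and answer questions about inputs and outputs without re-implementing make.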
Thanks for this overview and analysis.
You inspired me to write a new blog post, which might be seen as an answer to yours: http://code.alaiwan.org/wp/?p=84