My favorite use was during my PhD. My thesis could be regenerated from the source data: plots were created with gnuplot/GRI, and the final PDF was assembled from the LaTeX and EPS files.<p>It was quite simple really, but very powerful to be able to tweak or replace a dataset, hit make, and have a fully updated version of my thesis ready to go.
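<p>A minimal sketch of such a pipeline (the file names are placeholders, not from the actual thesis):<p><pre><code> thesis.pdf: thesis.tex results.eps
	latex thesis.tex
	dvipdf thesis.dvi thesis.pdf

 # the gnuplot script is assumed to read results.dat and write results.eps
 results.eps: results.dat plot.gnuplot
	gnuplot plot.gnuplot
</code></pre>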
I use Makefiles as wrappers for build / test bash commands. For example I often define these targets:<p>- make test : runs the entire test suite on the local environment<p>- make ci : runs the whole test suite (using docker compose, so this can easily be executed by any CI server without having to install anything other than docker and docker-compose), generates a code coverage report, and runs linter tools to check code standards<p>- make install-deps : installs dependencies for the current project<p>- make update-deps : checks if there is a newer version of the dependencies available and installs it<p>- make fmt : formats the code (replaces spaces with tabs or vice versa, removes extra whitespace from the beginning/end of files, etc.)<p>- make build : compiles and builds a binary for the current platform; I also define platform-specific sub-commands like make build-linux or make build-windows. A sketch of the pattern is below.
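<p>The shape of the wrapper (the recipes here are placeholders for whatever your toolchain uses; a Go project is assumed for illustration):<p><pre><code> .PHONY: test ci install-deps update-deps fmt build

 test:
	go test ./...

 ci:
	docker-compose run --rm ci   # assumes a "ci" service in docker-compose.yml

 fmt:
	gofmt -w .

 build:
	go build -o bin/app .
</code></pre>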
Teradata contributes to the Facebook open-source project Presto. We use Docker to run tests against Presto. Since the tests require Hadoop to do much of anything useful, we install Hadoop in Docker containers.<p>And we run tests on 3 flavors of Hadoop (HDP, CDH, and IOP), each of which is broken down into a flavor-base image with most of the packages installed, and various other images derived from that, which means we have a dependency chain that looks like:<p>base-image -> base-image-with-java -> flavor-base -> several other images.<p>Enter make, to make sure that all of these get rebuilt in the correct order and that at the end, you have a consistent set of images.<p><a href="https://github.com/Teradata/docker-images" rel="nofollow">https://github.com/Teradata/docker-images</a><p>But wait, there's more. Docker LABEL information is contained in a layer. Our LABEL data currently includes the git hash of the repo, which means any time you commit, the LABEL data on base-image-with-java changes and invalidates everything downstream. This is terrible, because downloading the Hadoop packages can take a while. So I have a WIP branch that builds the images from an unlabelled layer.<p><a href="https://github.com/ebd2/docker-images/tree/from-unlabelled" rel="nofollow">https://github.com/ebd2/docker-images/tree/from-unlabelled</a><p>As an added bonus, there's a graph target that automatically creates an image of the dependency graph of the images using graphviz.<p>Arguably, all of the above is a pretty serious misuse of both Docker and make :-)<p>I can answer complaints about the sins I've committed with make, but the sins we've committed with Docker are (mostly) not my doing.
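<p>A generic sketch of how make can order image builds like that (the image names are from the chain above; the stamp-file scheme is my assumption, not necessarily how the linked repo does it):<p><pre><code> base-image.built: base-image/Dockerfile
	docker build -t base-image base-image/
	touch $@

 base-image-with-java.built: base-image.built base-image-with-java/Dockerfile
	docker build -t base-image-with-java base-image-with-java/
	touch $@
</code></pre>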
I wanted to download a few hundred files, but the server allowed only 4 simultaneous connections.<p>I made a Makefile like<p><pre><code> all: file1 file2 file3

 file1:
	wget http://example.com/file1

 file2:
	wget http://example.com/file2

 file3:
	wget http://example.com/file3
</code></pre>
And used make -j4 to download all of them, but with only 4 parallel tasks at once; make starts another download whenever one finishes.
I once implemented FizzBuzz in Make: <a href="https://www.reddit.com/r/programming/comments/412kqz/a_critique_of_how_to_c_in_2016/cyzxqlx/?context=2" rel="nofollow">https://www.reddit.com/r/programming/comments/412kqz/a_criti...</a><p>Even though Make does not have built-in support for arithmetic (as far as I know), it's possible to implement it by way of string manipulation.<p>I don't recommend ever doing this in production code, but it was a fun challenge!
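<p>The general trick (my sketch of the technique, not the linked FizzBuzz): represent a number as a list of words, so its value is the list's length.<p><pre><code> # unary arithmetic via string manipulation in GNU Make
 to_int = $(words $(1))
 inc    = $(1) x
 three  := x x x
 $(info 3 + 1 = $(call to_int,$(call inc,$(three))))
</code></pre>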
Not particularly creative, but I'm using it to generate this blog:<p><a href="http://www.oilshell.org/blog/" rel="nofollow">http://www.oilshell.org/blog/</a> (Makefile not available)<p>and build a Python program into a single file (stripped-down Python interpreter + embedded bytecode):<p><a href="https://github.com/oilshell/oil/blob/master/Makefile" rel="nofollow">https://github.com/oilshell/oil/blob/master/Makefile</a><p>Although generally I prefer shell to Make: I just use Make for the graph, while shell has most of the logic. Honestly, though, Make is pretty poor at specifying a build graph.
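<p>The split looks roughly like this (a sketch; the script and file names are placeholders, not from the oil repo):<p><pre><code> # Make only tracks the dependency edge; the shell script does the work
 _site/index.html: blog/index.md build.sh
	./build.sh index
</code></pre>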
I've used it when I was doing a pentest - searching a network for leaks of information. I wrote dozens of shell scripts that scanned the network for *.html files, then extracted URLs from them, downloaded all of the files referenced in them, and searched those files (*.doc, *.pdf, etc.) for metadata that contained sensitive information. This involved eliminating redundant URLs and files, using scripts to extract information which was piped into other scripts, and a dozen different ways of extracting metadata from various file types. I wrote a lot of scripts that were long, single-use, and complicated, and I used a Makefile to document and save them so I could re-run them if there was an update, or make variations of them if I had a new idea.
I use Makefiles for two components of my research:<p>- Compilation of papers I am writing (in LaTeX). The Makefile processes the .tex and .bib files, and produces a final PDF. A fairly simple Makefile.<p>- Creation of initial conditions for galaxy merger simulations. This I obtained from a collaborator. We do idealized galaxy merger simulations, and my collaborator has developed a scheme to create galaxies with multiple dynamical components (dark matter halos, stellar disks, stellar spheroids, etc.) very near equilibrium. We have makefiles that generate galaxy models, place those galaxies on initial orbits, and then numerically evolve the system.
To set up my dotfiles, although I'm not in enough of a routine for it to be truly useful.<p><pre><code> .PHONY: tmux reload-tmux gitconfig

 tmux:
	ln -s $(CURDIR)/.tmux.conf $(HOME)/.tmux.conf
	tmux source-file ~/.tmux.conf

 reload-tmux:
	tmux source-file ~/.tmux.conf

 gitconfig:
	ln -s $(CURDIR)/.gitconfig $(HOME)/.gitconfig
</code></pre>
cd ~/configs then make whatever. ~/configs itself is a git repository.
Not exactly creative, but KISS: I use only a Makefile for a C project that compiles on Linux, BSD, and macOS.<p>Point being that autoconf is often overkill for smaller C projects.
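<p>A minimal Makefile of the kind meant here (a sketch, not the commenter's project) that sticks to POSIX make features, so it works with GNU and BSD make alike:<p><pre><code> CC     = cc
 CFLAGS = -O2 -Wall
 OBJS   = main.o util.o

 # .o files build from .c via the built-in POSIX inference rule
 prog: $(OBJS)
	$(CC) $(CFLAGS) -o prog $(OBJS)

 clean:
	rm -f prog $(OBJS)
</code></pre>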
This is a bit late, but in the book <i>The Tao of tmux</i>, I delve into how I use Makefiles to create cross-platform file watchers that can trigger unit tests. <a href="https://leanpub.com/the-tao-of-tmux/read#file-watching" rel="nofollow">https://leanpub.com/the-tao-of-tmux/read#file-watching</a><p>I use Makefiles regularly in open source and personal projects (e.g. <a href="https://github.com/tony/tmuxp/blob/master/Makefile" rel="nofollow">https://github.com/tony/tmuxp/blob/master/Makefile</a>). Feel free to take and use that code; it's available under the BSD license.<p>The creativity comes in when dealing with cross-platform compatibility: not all file listing commands are implemented the same. ls(1) doesn't work the same across all shell systems, and find on BSD accepts different arguments than GNU's find. So to collect a list of files to watch, we use POSIX find and store the result in a Make variable.<p>Then there's the need for a cross-platform file watcher. This is tricky, since file events work differently across operating systems. So we bring in entr(1) (<a href="http://entrproject.org/" rel="nofollow">http://entrproject.org/</a>). It works across Linux, the BSDs, and macOS, and is packaged in Linux distros, ports, and homebrew.<p>Another random tip: for recursive Make calls, use $(MAKE). This ensures that non-GNU Make systems can work with your scripts. See here: <a href="https://github.com/liuxinyu95/AlgoXY/pull/16" rel="nofollow">https://github.com/liuxinyu95/AlgoXY/pull/16</a>
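<p>The find-plus-entr pattern looks roughly like this (a sketch with an assumed file glob, not the exact tmuxp recipe):<p><pre><code> WATCH_FILES = $(shell find . -type f -name '*.py' ! -path '*/.*')

 # entr reads the file list from stdin, one name per line
 watch_test:
	echo $(WATCH_FILES) | tr ' ' '\n' | entr -c $(MAKE) test
</code></pre>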
Not something I have personal experience with, but I have heard a story about a Makefile-operated tokamak at the local university. Apparently, the operator would do something like "make shot PARA=X PARB=Y ..." and it would control the tokamak and produce the output data using a bunch of shell scripts.
I once used make to jury-rig a fairly complex set of backup jobs for a customer on very short notice. Jobs were grouped, each group was allowed to run a certain number of jobs in parallel, and some jobs had a non-overlap constraint. The problem was well beyond regular time-based scheduling, so I made a script to generate recursive makefiles for each group that started backups via a command-line utility, and a master makefile to invoke them with group-specific parallelism via -j.<p>File outputs were progress logs of the backups that got renamed after the backup, so if any jobs failed in the backup window, you could easily inspect them and rerun the failed jobs just by rerunning the make command.<p>Fun times. Handling filenames with spaces was an absolute pain, though.
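<p>The structure was roughly like this (a sketch with made-up group names; the real makefiles were generated by a script):<p><pre><code> all: group1 group2

 group1:
	$(MAKE) -f group1.mk -j4

 # an example non-overlap constraint: group2 waits for group1
 group2: group1
	$(MAKE) -f group2.mk -j2

 .PHONY: all group1 group2
</code></pre>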
Miki: Makefile Wiki <a href="https://github.com/a3n/miki" rel="nofollow">https://github.com/a3n/miki</a><p>A personal wiki and resource catalog. The only thing delivered is the makefile, which uses existing tools, and a small convenience script to run it.
Until recently we used them at Snowplow for orchestrating data processing pipelines, per this blog post:<p><a href="https://snowplowanalytics.com/blog/2015/10/13/orchestrating-batch-processing-pipelines-with-cron-and-make/" rel="nofollow">https://snowplowanalytics.com/blog/2015/10/13/orchestrating-...</a><p>We gradually swapped them out in favour of our own DAG-runner written in Rust, called Factotum:<p><a href="https://github.com/snowplow/factotum" rel="nofollow">https://github.com/snowplow/factotum</a>
I use it to set up my programming environment. One Makefile per project, semi-transferable to other PCs. It contains targets for<p><pre><code> * downloading the source code,
 * copying IDE project files not included in the source,
 * creating build folders for multiple builds (debug/release/coverage/benchmark, clang & gcc),
 * building and installing a specific branch,
 * copying to a remote server for benchmark tests.</code></pre>
Lisp in make [0] is probably the most creative project I've seen. For myself, in some tightly controlled environments I've resorted to it to create a template language, as something like pandoc was forbidden. It was awful, but it worked.<p>[0] <a href="https://github.com/kanaka/mal/tree/master/make" rel="nofollow">https://github.com/kanaka/mal/tree/master/make</a>
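<p>The rough idea of Make-as-template-engine (a sketch of the general technique, not the system I built):<p><pre><code> NAME = World

 # render any *.txt.in template by substituting @NAME@ placeholders
 %.txt: %.txt.in
	sed 's/@NAME@/$(NAME)/g' $< > $@
</code></pre>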
I use a makefile as the library package dependency [1], a bit like what package.json is in the Node environment.<p>The idea is that if you want to use the library, you just include its makefile inside your project makefile, define a TARGET value, and you automatically get tasks for build, debug, etc.<p>The key is a hack on the .SECONDEXPANSION feature of GNU make, which means it only works with GNU Make.<p>[1] <a href="https://GitHub.com/shuLhan/libvos" rel="nofollow">https://GitHub.com/shuLhan/libvos</a><p>Edit: ah, it turns out I wrote some documentation about it here: <a href="http://kilabit.info/projects/libvos/doc/index.html" rel="nofollow">http://kilabit.info/projects/libvos/doc/index.html</a>
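<p>A toy illustration of .SECONDEXPANSION (my sketch, not libvos itself): prerequisite lists are expanded a second time, so they can use automatic variables like $$@.<p><pre><code> .SECONDEXPANSION:
 TARGET = app

 # at second expansion, $$@ becomes "app", so the wildcard
 # picks up app_src/*.c
 $(TARGET): $$(wildcard $$@_src/*.c)
	cc -o $@ $^
</code></pre>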
I don't use it, but your question made me think of one: I would like to see it (mis)used as a way to bring up an operating system.<p>It would probably require quite a few changes, but if the <i>/proc</i> file system exposed running processes by name, and contained a file for each port that something listened on, one _could_ run make on that 'directory' with a makefile that describes the dependencies between components of the system.<p>Useful? Unlikely, as the makefile would have to describe all hardware and their dependencies, and it is quite unlikely nowadays that that is even possible (although, come to think of it, a true hacker with too much time on their hands and a bit of a masochistic tendency could probably put autotools to creative use).
I'm developing flight software at work on various Linux PCs that have support drivers installed for some PCIe cards. If I want to code on these PCs, it's either sit inside a freezing clean room or "ssh -X" into a PC to bring up an editor. This sucks, so I have a makefile to bake in certain specifics of my flight software build, with additional compile-time switches for the flexibility to build natively on my own computer. This lets me essentially ignore the installed drivers/libs and work comfortably in my own environment until I need the actual PC in the cleanroom to run my build.
I'm using Ruby's Rake in almost every project, even when the project isn't otherwise Ruby.<p>It has much of the same functionality, but I already know (and love) Ruby, whereas make comes with its own syntax that isn't useful anywhere else.<p>You can easily create workflows, and you get parallelism and caching of intermediate results for free. Even if you're not using Ruby and/or Rails, it's almost no work to throw together the data model and use it for data administration as well (although the file-based semantics unfortunately don't extend to the database, something I've been meaning to try to implement).<p>Lately, I've been using it for machine learning data pipelines: spidering, image resizing, backups, data cleanup, etc.
Not mine but here's a Lisp interpreter written in Make: <a href="https://github.com/kanaka/mal/tree/master/make" rel="nofollow">https://github.com/kanaka/mal/tree/master/make</a>
I have a makefile I use for all of my AVR projects. It has targets to build, program, erase, and bring up a screen on ttyS0 and maybe more. I add targets whenever I realize I'm doing anything repetitive with the development workflow.
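<p>A hedged sketch of such an AVR Makefile (the MCU, programmer, port, and file names are assumptions, not the commenter's actual setup):<p><pre><code> MCU    = atmega328p
 PORT   = /dev/ttyS0
 TARGET = main

 $(TARGET).hex: $(TARGET).c
	avr-gcc -mmcu=$(MCU) -Os -o $(TARGET).elf $<
	avr-objcopy -O ihex $(TARGET).elf $@

 program: $(TARGET).hex
	avrdude -p $(MCU) -c arduino -P $(PORT) -U flash:w:$<

 erase:
	avrdude -p $(MCU) -c arduino -P $(PORT) -e

 screen:
	screen $(PORT) 9600

 .PHONY: program erase screen
</code></pre>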
I haven't, but one of the cool uses I've seen lately is how the OpenResty folks use it for their own website: they convert markdown -> html, then the metadata to TSV, finally loading it into a postgres db. They then use OpenResty to interface with the DB, etc. But all the documentation is originally authored in markdown files.<p>Makefile: <a href="https://github.com/openresty/openresty.org/blob/master/v2/Makefile" rel="nofollow">https://github.com/openresty/openresty.org/blob/master/v2/Ma...</a>
I use Ansible for deployment and Ansible Vault for storing encrypted config files in the repo. Of course, it's always a bit of a nightmare scenario that you accidentally commit unencrypted files, right?<p>Well, I have "make encrypt" and "make decrypt" commands that will iterate over the files in an ".encrypted-files" file. Decrypt will also add a pre-commit hook that will reject any commit with a warning.<p>This is tons easier than trying to remember the ansible-vault commands, and I never have to worry about trying to remember how to permanently delete a commit from GitHub.
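<p>A rough sketch of what those targets can look like (the .encrypted-files name is from above; the vault password file is my assumption):<p><pre><code> FILES = $(shell cat .encrypted-files)

 encrypt:
	ansible-vault encrypt --vault-password-file .vault-pass $(FILES)

 decrypt:
	ansible-vault decrypt --vault-password-file .vault-pass $(FILES)

 .PHONY: encrypt decrypt
</code></pre>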
To generate 100 terabytes of data in parallel ... on Hadoop<p><a href="https://github.com/hortonworks/hive-testbench/blob/hive14/tpcds-setup.sh#L116" rel="nofollow">https://github.com/hortonworks/hive-testbench/blob/hive14/tp...</a><p>The shell script generates a Makefile, and the Makefile runs the hadoop commands, so the parallel dependency handling is entirely handed off to Make.<p>This makes it super easy to run 2 parallel workloads at all times - unlike xargs -P 2, it is much friendlier towards complex before/after deps and failure handling.
I used a Makefile for managing a large number of SSL certificates, private keys, and trust stores. This was for an app that needed certs for IIS, Java, and Apache, which all expect certificates to be presented in different formats.<p>Using a Makefile allowed someone to quickly drop in new keys/certs and have all of the output formats built in a single command. Converting and packaging a single certificate requires one or more intermediate commands, and the Makefile is set up to directly handle this type of workflow.
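<p>A hedged sketch of the kind of pattern rules involved (file layout and passwords are placeholders): PEM -> PKCS#12 for IIS, then PKCS#12 -> JKS for Java.<p><pre><code> %.p12: %.crt %.key
	openssl pkcs12 -export -in $*.crt -inkey $*.key -out $@ -passout pass:changeit

 %.jks: %.p12
	keytool -importkeystore -srckeystore $< -srcstoretype PKCS12 \
	  -srcstorepass changeit -destkeystore $@ -deststorepass changeit
</code></pre>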
I guess it depends on what you consider creative?<p>I use one to build my company's Debian Vagrant boxes: <a href="https://app.vagrantup.com/koalephant" rel="nofollow">https://app.vagrantup.com/koalephant</a><p>I use one to build a PHP library into a .phar archive and upload it to BitBucket.<p>My static-ish site generator can create a self-updating Makefile: <a href="https://news.ycombinator.com/item?id=14836706" rel="nofollow">https://news.ycombinator.com/item?id=14836706</a><p>I use them as a standard part of most project setups.
I'm creating a config.inc makefile during make to store config settings, analogous to config.h:
<a href="https://github.com/perl11/potion/blob/master/config.mak#L275" rel="nofollow">https://github.com/perl11/potion/blob/master/config.mak#L275</a><p>Instead of bloated autotools, I also call a config.sh from make to fill in some config.inc or config.h values, which even works fine for cross-compiling.
We use Makefile "libraries" to reduce the amount of boilerplate each of our microservices has to contain. This then allows us to change our testing practices in bulk throughout all our repos.<p><a href="https://github.com/Clever/dev-handbook/tree/master/make" rel="nofollow">https://github.com/Clever/dev-handbook/tree/master/make</a>
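<p>A generic sketch of the include pattern (the file and target names are made up for illustration; see the linked repo for the actual API):<p><pre><code> # each service's Makefile stays tiny:
 include golang.mk        # shared rules, vendored or fetched

 test: golang-test        # delegate to the shared library's targets
</code></pre>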
The main question to ask is whether you really need to use make. If you do, there is practically no limit to what you can do with it these days, including deployment to different servers, starting containers/dedicated instances, etc.
But unless you are already using make or are forced to, it's better to check out one of the newer build systems. I personally like CMake (it actually generates Makefiles).
One "creative" use is project setup. Sometimes, less technical colleagues need to run our application, and explaining git and recursive submodules takes a lot of time, so I usually create a Makefile with a "setup" target that checks out submodules and generates some required files to run the project.
I use Makefiles that run "git push $branch" and then call a Jenkins API to start a build/deploy of that $branch. This way I never have to leave vim; I use the fugitive plugin for vim to "git add" and "git commit", then run ":make".
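<p>A hedged sketch of that workflow (the Jenkins URL, job name, and credentials are placeholders):<p><pre><code> BRANCH = $(shell git rev-parse --abbrev-ref HEAD)

 deploy:
	git push origin $(BRANCH)
	curl -X POST "https://jenkins.example.com/job/myapp/buildWithParameters?BRANCH=$(BRANCH)" \
	  --user user:api-token

 .PHONY: deploy
</code></pre>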
I use it to solve dependency graphs for me in my programming language of choice; at the moment this involves setting up containers and container networking, but I throw it at anything graph-based.<p>make seems to be easier to install/get running than the myriad of non-packaged, GitHub-only projects I have found.
I use it to generate my LaTeX CV.
In my case I have multiple target countries, so I have pseudo-i18n with pseudo-l10n and different values like page size, addresses, and phone numbers; then I just make for the target country, like make us or make ja.
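<p>One plausible way to wire that up (a guess at the shape, with assumed file names): pass the country into LaTeX via \def and give each build its own jobname.<p><pre><code> us ja:
	pdflatex -jobname=cv-$@ "\def\country{$@}\input{cv.tex}"
</code></pre>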
I used it to make a blog once.<p><a href="http://old.storytotell.org/blog/2009/07/13/how-to-manage-a-website-destructively.html" rel="nofollow">http://old.storytotell.org/blog/2009/07/13/how-to-manage-a-w...</a>
I've used Makefiles to determine what order to run batch jobs in so that dependencies can be met. Instead of describing what order to run things in, you describe what depends on what.<p>It's pretty cool, but not ideal.
Nowadays I mostly use Tup. If I use make, it is usually when I'm working with other people on LaTeX documents, and often it's enough to just call rubber from make x)