I think it's worth mentioning another alternative build system I came across recently, redo:
https://github.com/apenwarr/redo#readme
Rather than introducing yet another custom syntax, its build scripts are ordinary shell scripts (or, at your option, scripts written in any other language that can be invoked from a hashbang line). And yet redo makes it much, much easier than make to record dependencies and track changes, and hence to rebuild the exact minimum number of files necessary.
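To give a flavour of how that looks, here is a minimal sketch of a rule for building any .o from the matching .c, closely following the example in the redo documentation (the file names are illustrative; $1/$2/$3 are the arguments redo passes to every .do script):

    # default.o.do - tells redo how to build any foo.o from foo.c
    # $1 = target name, $2 = target minus its extension, $3 = temporary output file
    redo-ifchange "$2.c"
    # have gcc emit the header dependencies as a side effect of compiling
    gcc -MD -MF "$2.d" -c -o "$3" "$2.c"
    # feed those headers back to redo so it rebuilds when they change
    read DEPS <"$2.d"
    redo-ifchange ${DEPS#*:}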
"This approach is not portable: it'll only work on Unixy os."
I weep for the lost souls of those poor benighted developers still stuck on Windows. No, no, wait. Why should we care about Windows developers? If they're still stuck there, they are most likely deeply committed to the Microsoft Visual Studio toolset, in which case they don't (or can't) care about what goes on in the rest of the universe, and they really don't matter. (By which of course I mean that as fellow human beings they matter, and of course I have sympathy for their suffering, but by their own choice they are locked in such an impregnable ivory tower that it is neither practical nor economic to try to break in and rescue them.)
Other than that, who else uses Windows? Oh yes: corporate teams doing 'enterprise' Java. They'll typically be using some colossally bloated Ant build that takes 7-30 minutes to run (sadly, I'm not even exaggerating for dramatic effect). The problem with Ant is that it's so easy to bolt on 'one more thing' that it rapidly evolves into a horrible beast that lumbers around sucking everything into itself, Katamari Damacy style.

Really, there's no help for them either. You can't, for example, suggest that blowing away the entire database and recreating it from scratch in order to run the full set of automated unit tests is something best left to the continuous integration server, or done once a day, maybe out of hours or at lunchtime. No, no, it has to happen on every single build. And that doesn't even touch on the ginormous mess that is the enterprise components, where each application server has its own arcane and unholy rituals to create its beans from the blood of unicorns and the tears of virgin developers... try messing with that abomination and you're in for a world of pain.

There's just no helping them either, though in their case they usually want to be helped. They are captive to the primary problem of enterprise development: the people who choose technologies and mandate tools and processes are usually so far removed from the actual use of those tools that they are completely immune to the pain, and unable or unwilling to hear the wailing and gnashing of teeth of the programmers.
Don't even get me started on Checkstyle with rules like 'no line can be longer than 80 characters' (despite there being absolutely no good reason for this other than the horrible, horrible UI of Eclipse - without that you can easily fit far more than 80 characters on the screen).
The first line of my comment above, when viewed in the comments section of my profile, came to 216 characters. That is with two sidebars of whitespace, each taking up 10% of the horizontal width. I have neither an especially large monitor nor an especially small font; in fact, I have the font cranked up slightly from the norm in order to avoid eyestrain. (Ironically, when I copy and paste that first line into a word processor, at that font size it fits significantly fewer than 80 characters per line, which suggests that 216 is, if anything, an understatement of the horizontal screen real estate available to most people.)
In a corporate environment, it is not unreasonable to assume that the developer can have a second monitor. Sometimes you have to *gasp* be nice to someone in order to get it, but that is not too onerous. Hence I believe your view is outdated; in practice, horizontal space is much cheaper than vertical space.
When printing, one should probably do a few things to improve the appearance on paper anyway. For example, you might want to set the indentation depth a little shallower than normal, say two spaces. You should also twiddle the font until it looks good, or is fit for purpose (there are different reasons to print out code), and you might want to concatenate several smaller classes onto a single page, remove the 20-line header of corporate pseudo-legalese from the top of each class, or even drop the imports or skip the getters and setters. In other words, unless your purpose is to murder trees, you will likely hand-tune the printing to optimise it, at which time you might choose to set an 80-column width - if you felt that having each page consist mostly of a thin column of text pressed up against the right-hand side of the page was the most aesthetically pleasing thing.
----
Run-over lines do look a little ugly when printed, I admit; this is true. However, since code is viewed 10-100x more often on screen than on paper, I believe it is inefficient to prioritise printing (in a cart-before-the-horse sense). Moreover, since the choice is between a little ugliness when printed (infinitely many characters per line) and a lot of ugliness when viewed on screen (an 80-character limit causing frequent line breaks, which are themselves heavily indented, which makes the wrapped line all the more likely to spawn another wrap), I prefer the lesser of two evils.
I guess I see "fits in 80" as a virtue. If code is pushed up against the right-hand margin, I tend to refactor. Code that I can't make look nice in 80 columns is very often code I don't like maintaining over time.
He should probably look at tup (http://gittup.org/tup/make_vs_tup.html), which, when using inotify (otherwise stat must be called O(n) times), pretty much always starts building (or reports nothing to do) within a few milliseconds.
Thanks for reminding me of tup again. I read about it earlier, and had always meant to go back to it and give it a try.
And I wonder whether we can make a version of git that uses inotify.
If he's willing to endure a long-lived server process, he can probably have no-op builds with a tup-like system in less than a few milliseconds. (Basically as long as it takes to run through a single `if' and return to the shell; since no news is good news.)
See also http://gittup.org/tup/tup_vs_mordor.html which compares tup to "the All-Seeing Eye of Mordor". Not an actual build system, alas, but a hypothetical perfect one that Just Knows what files need rebuilding and rebuilds them with no overhead.
It turns out (not terribly surprisingly, given how it works) that tup's overhead is very small.
He decided that inotify wasn't worth it for his purposes.
> I had originally intended to make Ninja be memory-resident and to use inotify to keep the build state hot at all times. But upon writing the code I found it was fast enough to run from scratch each time. Perhaps a slower computer would still benefit from inotify; the data structures are set up such that using inotify shouldn’t be hard.
It would be interesting to see what would happen if they were using waf instead of scons. Waf is also in Python, and started as a fork of scons (but is so different now that it can be considered a totally separate design and codebase). Waf is much faster than scons (easily an order of magnitude), to the point that I think it would be hard to get much faster without dropping features or resorting to system-specific mechanisms (file-change notification, checksummed filesystems, etc.).
Samba has been using waf for > 6 months now, and they seem quite happy with it. As a former user/contributor of scons, I much prefer waf now, and anyone interested in complete build systems should look at it IMO.
I wish I had not used the WAF build system, it works - it’s okay, but it introduces more WTFs than necessary. I can perhaps dig out from under WAF at some point but it would be a monumental undertaking.
I cannot find a link, but the samba team seems happy with their choice of waf, and samba is closer to chromium in terms of what it needs from a build tool (a large, multiplatform, compiled codebase). Your link is not really informative - he does not like it, but we don't know why. I am not surprised it has WTFs for something like node.js (which, let's be honest, has rather simple needs for a build tool compared to samba or chromium). Waf is far from perfect, for sure.
I have experience with quite a few build tools, from autoconf/make to waf, including custom ones, and waf is by far the one with the fewest WTFs so far if you want to do something hard. It gives you the power of a real language, which is needed for complex builds IMHO. It looks like node.js is now using cmake, whose macro language is quite weak and error-prone IMO, although it definitely works for non-trivial projects. Waf is also fast, small enough that you can hack on it if you want (compare cmake: C++ plus an architecture based on autogenerated makefiles...), and has just enough adoption by non-trivial projects for a tool I might depend on (samba and ardour are two quite big, multi-year, >100 kloc, cross-platform, multi-language projects).
I'll second the suggestion to take a look at tup -- it is based on some really good, clear-headed foundational thinking about how to make incremental builds fast, plus the implementation looks good (though I have only tried it out on experimental toy setups, and it is still pretty new, so who knows).
Regarding the specifically cited point of including dependencies on compilation flags, unless I am confused, I believe it can be done much more quickly in standard make, in one of two ways:
First way: make the build path of the object file dependent on the build flags. This has zero performance penalty, and also has the nice side-effect that when changing flags (e.g., from release to debug build and back again), you don't have to recompile everything, because you still have the previous build sitting around.
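A minimal sketch of the first way, assuming a hypothetical debug/release split and a directory layout of my own choosing (GNU make; recipe lines must begin with a tab):

    # invoke as: make                (release)
    #        or: make FLAVOR=debug
    FLAVOR ?= release
    CFLAGS_release := -O2
    CFLAGS_debug   := -O0 -g
    OBJDIR := build/$(FLAVOR)

    # objects land in a per-flavour directory, so switching flags selects a
    # different set of targets instead of invalidating the previous build
    $(OBJDIR)/%.o: %.c | $(OBJDIR)
    	$(CC) $(CFLAGS_$(FLAVOR)) -c -o $@ $<

    $(OBJDIR):
    	mkdir -p $@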
Second way: store the build flags in a separate makefile snippet (which you can either include, or read the value of using $(shell)), and add that snippet as a prerequisite of the object files. This has minimal performance impact, since it's just another ordinary dependency of the object files. (This second trick is from one of the redo articles posted here a few days ago; sadly I don't recall exactly which.)
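And a sketch of the second way, again with placeholder file names and flags:

    # flags.mk contains just the flag definitions, e.g.:
    #   CFLAGS := -O2 -Wall
    include flags.mk

    # listing flags.mk as a prerequisite means editing the flags forces a
    # rebuild, exactly like touching any other dependency (recipe line starts with a tab)
    %.o: %.c flags.mk
    	$(CC) $(CFLAGS) -c -o $@ $<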
I'm always interested in alternatives to Make because I just find it so painful. However, I'd say that only about half of Make-related pain comes from its dependency management. The other half, to me, is in using its language, and Ninja doesn't seem to do anything to ease that pain. Its manual says: "You should generate your ninja files using another program." That seems like a bad sign to me.
Tools like CMake can be helpful when there are lots of configurations available and dependencies to check, but on a small project I want to write a quick script that will just work. CMake and its ilk add another layer of complication that I don't want to have to deal with most of the time.
It really would, but last I heard they'd decided on (and implemented) hg as their DVCS on the server side. Fortunately, "client-side" git integrates with more server-side VCSes than any other system I'm aware of :)
At IO 2009 there was a talk about integrating Mercurial into code.google.com project hosting. As I recall, it was referred to as the "first" DVCS to be integrated because its requirements lined up nicely with Google's existing infrastructure. It was implied that other DVCSes would be considered for integration in the future.
Wow, I haven't been working on projects of such scale, but I had supposed that the existing infrastructure (make, gcc, etc.) was good enough for large projects.
Does it mean that there's something wrong with the current state of affairs, in that you have to rebuild your infrastructure for a large project? Or does it mean that Google is so unbelievably great that nothing is good enough for them, so anything important has to be redone from scratch?
If you have a large project (speaking from experience with Mozilla here), you discover that gcc has issues (speed, generated code quality, etc.), make has issues (especially if you want to use the same makefiles on all platforms; things quickly get very painful), the linker has algorithms that are O(N^2) in the number of symbols, filesystems have issues, and your own code has issues. ;)
Chromium still uses a GNU make-based build system. He did this "for fun", to investigate a potentially faster approach. I am very envious; my "for fun" activities never result in something so cool.
Why do new build systems have to use some clunky old make-style syntax? For me, speed is hardly the primary goal. A build system must be understandable, readable, and easy to debug. For starters, it should have an easy-to-read syntax.
If you have a build system that your users also have to care about, readability and maintainability are a lot more important. SCons managed to achieve most of this by using Python syntax, but its behavior can be quite unpredictable at times.
It's interesting, but probably not that surprising, that the Linux port has faster build times, given that building the Linux kernel is a primary metric kernel developers are interested in, and one they constantly try to improve.
I don't think you can take the credit away from the build system's author so easily. As the article states and the manual mentions, a clever make-based build system, without recursive Makefiles, took 10 seconds to start compiling after a file was modified, while the new build system takes less than one second on the same Linux system. No doubt the OS has plenty to say about process start time, filesystem access, and so on, but the big difference comes from the build system itself, entirely on the userspace side.
I think that he meant that the Linux tools were faster because the Linux kernel devs also use them and contribute back to those projects in an effort to improve their own build times. (i.e. if make was slowing down the kernel build times, then some kernel devs would either build something else or improve make)
That's absolutely true, and I do not want to take anything away from him.
I was trying to make the point that among the operating systems doing builds under his new system, Linux was compiling the fastest. I was suggesting an interesting possible explanation for that.
I had a similar initial reaction, perhaps even stronger than yours (mine was of the 'good grief, does the world really need another build system?' variety), but the article is well written and interesting enough to justify itself, I think.
(previously: http://news.ycombinator.com/item?id=2104803)