
C++ Headers are Expensive

107 points by kbwt over 6 years ago

11 comments

AndyKelley over 6 years ago
In the Zig stage1 compiler (written in C++), I tried to limit all the C++ headers to as few files as possible. Not counting vendored dependencies, the compiler builds in 24 seconds using a single core on my laptop. It's because of tricks like this:

    /*
     * The point of this file is to contain all the LLVM C++ API interaction so that:
     * 1. The compile time of other files is kept under control.
     * 2. Provide a C interface to the LLVM functions we need for self-hosting purposes.
     * 3. Prevent C++ from infecting the rest of the project.
     */

    // copied from include/llvm/ADT/Triple.h
    enum ZigLLVM_ArchType {
        ZigLLVM_UnknownArch,
        ZigLLVM_arm,        // ARM (little endian): arm, armv.*, xscale
        ZigLLVM_armeb,      // ARM (big endian): armeb
        ZigLLVM_aarch64,    // AArch64 (little endian): aarch64
        ...

and then in the .cpp file:

    static_assert((Triple::ArchType)ZigLLVM_UnknownArch == Triple::UnknownArch, "");
    static_assert((Triple::ArchType)ZigLLVM_arm == Triple::arm, "");
    static_assert((Triple::ArchType)ZigLLVM_armeb == Triple::armeb, "");
    static_assert((Triple::ArchType)ZigLLVM_aarch64 == Triple::aarch64, "");
    static_assert((Triple::ArchType)ZigLLVM_aarch64_be == Triple::aarch64_be, "");
    static_assert((Triple::ArchType)ZigLLVM_arc == Triple::arc, "");
    ...

I found it more convenient to redefine the enum and then static-assert that all the values are the same (which has to be updated with every LLVM upgrade) than to use the actual enum, which would pull in a bunch of other C++ headers.

The file that has to use C++ headers takes about 3x as long to compile as Zig's ir.cpp file, which is nearing 30,000 lines of code but depends only on C-style header files.
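(For readers who want the shape of this trick without the LLVM specifics, here is a self-contained sketch under hypothetical names: a plain C header mirrors the C++ enum, and the one .cpp file that pays for the C++ header proves at compile time that the two stay in sync.)

    // bridge.h -- pure C, pulls in no C++ headers at all
    enum My_Color { My_Red, My_Green, My_Blue };

    #ifdef __cplusplus
    extern "C" {
    #endif
    const char *color_name(enum My_Color c);
    #ifdef __cplusplus
    }
    #endif

    // bridge.cpp -- the only file that includes the expensive C++ header
    #include "bridge.h"
    #include "expensive_lib.h"  // hypothetical; defines lib::Color {Red, Green, Blue}

    static_assert((lib::Color)My_Red   == lib::Red,   "");
    static_assert((lib::Color)My_Green == lib::Green, "");
    static_assert((lib::Color)My_Blue  == lib::Blue,  "");

    const char *color_name(enum My_Color c) {
        return lib::name((lib::Color)c);  // forward to the (hypothetical) C++ API
    }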
beached_whale over 6 years ago
You can find out where your time is going, at least with clang, by adding -ftime-report to your compiler command line. Headers often take a long time because the compiler can do a better job of optimizing and inlining when everything is visible. Just timing your compiles is like trying to find things in the dark: you know the wall is there, but what are you stepping on? :) It's good to know what is taking a long time, but the culprit may not be the header itself so much as the extra work the compiler can now do to (potentially) give better output.
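(A tiny demonstration of the flag, with the invocation as a comment; the exact breakdown in the report varies between clang versions:)

    // heavy.cpp -- a deliberately header-heavy translation unit to profile.
    // Compile with:  clang++ -c -ftime-report heavy.cpp
    // The report splits wall time by compiler phase, showing whether parsing
    // the headers or optimizing the now-visible code is what dominates.
    #include <iostream>
    #include <regex>

    int main() {
        std::regex re("head(er)+s?");
        std::cout << std::regex_search("headers", re) << '\n';
    }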
nanolith over 6 years ago
I recommend three things for wrangling compile times in C++: precompiled headers, forward-declaration headers where possible (e.g. <iosfwd> and friends), and an aggressive compiler-firewall strategy where not.

The compiler-firewall strategy works fairly well in C++11 and even better in C++14. Create a public interface with minimal dependencies, and encapsulate the details behind a pImpl (pointer to implementation). The latter can be defined in the implementation source files, and it can use unique_ptr for simple resource management. C++14 added the missing make_unique, which eases the pImpl pattern.

That being said, compile times in C++ are typically going to be terrible if you are used to compiling C, Go, or other languages known for fast compilation. A build system with accurate dependency tracking and on-demand compilation (e.g. a directory watcher or, if you prefer IDEs, continuous compilation in the background) will eliminate a lot of this pain.
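(A minimal sketch of that compiler firewall, using a hypothetical Widget class; clients include only widget.h, so the heavy includes stay confined to widget.cpp:)

    // widget.h -- public interface with minimal dependencies
    #include <memory>

    class Widget {
    public:
        Widget();
        ~Widget();              // declared here, defined where Impl is complete
        void frobnicate();
    private:
        struct Impl;            // forward declaration only
        std::unique_ptr<Impl> impl_;
    };

    // widget.cpp -- the heavy headers live here and nowhere else
    #include "widget.h"
    #include <vector>           // clients never pay for this include

    struct Widget::Impl {
        std::vector<int> data;
    };

    Widget::Widget() : impl_(std::make_unique<Impl>()) {}
    Widget::~Widget() = default;    // Impl is complete here, so unique_ptr can delete it
    void Widget::frobnicate() { impl_->data.push_back(42); }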
AdieuToLogic over 6 years ago
If C++ compile time is a concern and/or an impediment to productivity, I recommend the seminal work on this topic by Lakos:

Large-Scale C++ Software Design [0]

The techniques set forth therein are founded in real-world experience and can significantly reduce large-scale system build times. Granted, the book is dated and likely not entirely applicable to modern C++, yet IMHO it remains the best resource on insulating modules/subsystems and optimizing compilation times.

0 - https://www.pearson.com/us/higher-education/program/Lakos-Large-Scale-C-Software-Design/PGM136492.html
kazinator over 6 years ago
Speaking of GNU C++ (and C), the headers are getting cheaper all the time compared to the brutally slow compilation.

Recently, after a ten-year absence from ccache, I was playing with it again.

The speed-up you obtain from ccache today is quite a bit more than a decade ago; I was amazed.

ccache does not cache the result of preprocessing. Each time you build an object, ccache passes it through the preprocessor to obtain the token-level translation unit, which is then hashed to see if there is a hit (a ready-made .o file can be retrieved) or a miss (the preprocessed translation unit must be compiled).

There is now more than a 10-fold difference between preprocessing, hashing, and retrieving a .o file from the cache versus doing the compile job. I just timed one program: 750 milliseconds to rebuild with ccache (so everything is preprocessed and ready-made .o files are pulled out and linked). Without ccache: 18.2 seconds. A 24x difference! So, approximately speaking, preprocessing is less than 1/24th of the cost.

Ancient wisdom about C used to be that more than 50% of compilation time is spent on preprocessing. That's the environment that motivated devices like precompiled headers, #pragma once, and compilers recognizing the #ifndef HEADER_H trick to avoid re-reading files.

Nowadays, those things hardly matter.

Nowadays, when you're building code, the rate at which .o's "pop out" of the build subjectively appears no faster than two decades ago, even though memories, L1 and L2 cache sizes, CPU clock speeds, and disk space are vastly greater. Since not a lot of development has gone into preprocessing, it has more or less sped up with the hardware, but overall compilation hasn't.

Some of that compilation laggardness is probably due to the fact that some of the algorithms have tough asymptotic complexity; just extending their scope to do a slightly better job causes the time to rise dramatically. However, even compiling with -O0 (optimization off), though faster, is still shockingly slow given the hardware. If I build that 18.2-second program with -O0, it still takes 6 seconds: an 8x difference compared to preprocessing and linking cached .o files in 750 ms. A far cry from the ancient wisdom that character- and token-level processing of the source dominates compile time.
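(For reference, the two devices mentioned above look like this in a hypothetical header; both let the compiler skip re-reading the file on repeated inclusion:)

    // header.h -- the classic guard; compilers such as GCC recognize this
    // exact #ifndef pattern and avoid even reopening the file the next time
    // the same translation unit includes it.
    #ifndef HEADER_H
    #define HEADER_H

    // #pragma once    // the non-standard but widely supported one-line form

    int expensive_api(int x);

    #endif // HEADER_H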
RcouF1uZ4gsC over 6 years ago
> The test was done with the source code and includes on a regular hard drive, not an SSD.

In my opinion, this makes any conclusion dubious. If you really care about compile times in C++, step 0 is to make sure you have an adequate machine (at least a quad-core CPU, plenty of RAM, and an SSD). If the choice is between spending programmer time optimizing compile times and spending a couple hundred dollars on an SSD, then 99% of the time the SSD is the correct solution.
lbrandy over 6 years ago
All of msvc, gcc, clang, and the isocpp committee have active work ongoing for C++ modules.

We'll have them Soon™.
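(For context, that committee work eventually shipped in C++20; a minimal sketch of the syntax that landed, noting that the flags needed to actually build module units remain compiler-specific:)

    // math.cppm -- a C++20 module interface unit
    export module math;

    export int square(int x) { return x * x; }

    // main.cpp -- a consumer; no header, no textual inclusion
    import math;

    int main() { return square(7); }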
_0w8t over 6 years ago
Opera contributed the jumbo build feature to Chromium. The idea is to feed the compiler not individual sources but a file that includes many sources, so common headers are compiled only once. The compile-time saving can be a factor of 2 or more on a laptop.

The drawback is that sources within a jumbo unit cannot be compiled in parallel. So for anyone with access to an extremely parallel compilation farm, like developers at Google, it will slow things down.
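(A minimal sketch of the mechanism with hypothetical file names; the build compiles only the jumbo file, so shared headers are parsed once rather than once per source:)

    // a.cpp
    #include <string>
    std::string greet_a() { return "a"; }

    // b.cpp
    #include <string>   // redundant inside the jumbo unit: already parsed above
    std::string greet_b() { return "b"; }

    // jumbo_unit.cpp -- the only file handed to the compiler; a.cpp and b.cpp
    // are dropped from the regular build, and <string> is compiled once, not twice
    #include "a.cpp"
    #include "b.cpp"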
mcv over 6 years ago
This reminds me of my very first job after university. We used Visual C++ with some homebrew framework that had one gigantic header file tying everything together. That header file contained thousands, possibly tens of thousands, of const uints defining all sorts of labels, identifiers, and whatnot. And that header file was included absolutely everywhere, so every object file got those tens of thousands of const uints taking up space.

Compilation at the time took over 2 hours.

At some point I wrote a macro that replaced all those automatically generated const uints with #defines, and that cut compilation time to half an hour. It was quickly declared the biggest productivity boost by the project lead.
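(A sketch of the contrast being described, with hypothetical identifiers; under compilers of that era, each namespace-scope const object could have storage emitted into every object file that included the header, whereas a macro leaves nothing behind:)

    // ids.h -- the pattern described above, vastly abbreviated
    const unsigned int ID_WIDGET = 0x0001;  // a const object in every including TU
    const unsigned int ID_GADGET = 0x0002;

    // the replacement: plain macros define no objects at all
    #define ID_WIDGET_M 0x0001
    #define ID_GADGET_M 0x0002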
fizwhiz over 6 years ago
Isn't this the reason precompiled headers are a thing?
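(For anyone unfamiliar with the mechanism, a sketch of GCC-style precompiled headers; exact flags and file extensions vary by compiler and version:)

    // pch.h -- gather the expensive, rarely-changing includes in one place
    #include <map>
    #include <string>
    #include <vector>

    // Build it once:   g++ -x c++-header pch.h      (produces pch.h.gch)
    // Afterwards, any translation unit that starts with #include "pch.h"
    // picks up the .gch automatically instead of re-parsing the headers.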
timvisee over 6 years ago
I would love to see the times of this on a Linux system (preferably on the same hardware).