I concur.<p>All programming languages have their intrinsic features, but they also have a culture that surrounds them.<p>C++ has a rather odd culture... it's a very fundamentalist/extremist one, all about providing the optimal way to do things with little or no extra runtime overhead, but with no regard at all for compile times, readability, etc. That's not necessarily a bad thing, and it has its uses for games, ultra-low-latency software, all that stuff.<p>I am proficient in C++: I've worked in it for a bunch of years, and I can get stuff done in it, solve problems, and ship software. C++ job interviews tend not to pay much regard to that, and focus instead on the arcane finer points of move semantics, rvalue references, ranges, concepts, all this new stuff... which has its place, but there is a <i>lot</i> of it to learn, and in many cases it just doesn't matter. After all, the language existed without it for a bunch of years.<p>Compare this with Java, or Python, or whatever, where the culture and focus is very much "can you solve a problem / implement a solution using this language <or not>?" rather than "of the dozens of different ways to do this... what are the differences between each one?".<p>For me, I really don't mind using and programming in C++, and I would probably even choose it for many / most projects, but I can do without the C++ people and culture.
<p><pre><code> inline constexpr auto for_each =
[]<Range R,
Iterator I = iterator_t<R>,
IndirectUnaryInvocable<I> Fun>(R&& r, Fun fun)
requires Range<indirect_result_t<Fun, I>> {
return std::forward<R>(r)
| view::transform(std::move(fun))
| view::join;
};
</code></pre>
Unless it solves world hunger or COVID permanently, I am not typing this abomination, much less trying to understand it. Very seriously, whatever "team" is working on C++{20,21,..} should be disbanded and sent to someplace where they don't have access to computers. Nothing modern, including the Kubernetes ecosystem, induces this much rage in me.
I think people vastly undervalue static linking when it comes to compile times. If every file in a project changes every minute then yeah - you're hooped... But usually certain groups will work in certain areas, with those changes being synced only so often. If you examine the dependency tree and manage to split the codebase into modules that you compile into .a files - then have a final step where you whip together any overly broad files along with the .o files extracted from the archives[1] - you'll live a happy life.<p>This can fail terribly if you have header files that everything depends on and that constantly change - but honestly... that's a pretty bad code smell, so either make an archive that's a dependency of all those others (and occasionally feel real pain when the header file actually gets updated) or else refactor things for sanity's sake.<p>A year and a half into my first job I was tasked with revising the build system, and there are some really effective things you can do there without very much training. You want to lean on `make` a lot - no seriously, <i>a lot</i> - but if you do, you can make some amazing things happen. You've also got a lot of cross-platform tooling if you need to compile on different architectures - though scripting how those interact does get progressively more difficult in those cases.<p>1. Just in case you're unfamiliar - Linux static libraries are essentially just an archive of a bunch of object files... so you can freely use ar to manipulate them in all sorts of ways <a href="https://tldp.org/HOWTO/Program-Library-HOWTO/static-libraries.html" rel="nofollow">https://tldp.org/HOWTO/Program-Library-HOWTO/static-librarie...</a>
Eh, of all the things to criticize C++ over, compile times are somewhere near the bottom of the list. Compile times matter in extreme cases, but 3 seconds is not really something I would worry about. I have seen Lisp compilers take longer than that just to start the REPL.<p>The fact that we are <i>still</i> dealing with weird problems with pointers, the fact that C++ has lambdas without garbage collection (the explanation of the problem is too long for this comment), and the incoherent type system are much bigger issues than weird syntax or long compile times. The C++ feature set is a bunch of semi-compatible, sometimes outright incompatible (ahem, destructors vs. exceptions and coroutines) ideas that keep getting extended further from compatibility by the standards committee.
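<p>(One concrete way to read the lambda point - a minimal sketch, assuming the issue meant is captured lifetimes. This compiles cleanly, but with no GC nothing keeps the captured local alive, so calling the result is undefined behaviour:)<p><pre><code> #include <functional>

 // Returns a closure that captures a local by reference.
 std::function<int()> make_counter() {
     int count = 0;
     return [&count] { return ++count; };  // 'count' dies when we return
 }

 int main() {
     auto counter = make_counter();
     return counter();  // UB: reads through a dangling reference
 }
</code></pre>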
I genuinely think that the 'zero-cost abstraction' feature of C++ is ultimately a poisoned apple that has caused far-reaching damage to the entire programming experience.
The problem is that zero-cost in C++ means that - if all the cards line up - feature X will not cause <i>performance</i> overhead in a <i>Release build</i>.<p>In C, I think it's reasonable to assume that there is a roughly linear relationship between the number of expressions in one's code and the number of assembly instructions it generates.<p>No such relationship exists in C++ code.<p>The problem is that, due to this zero-cost mentality, brain-dead simple facilities such as iterators often have dozens of layers of abstraction, all of which show up in a Debug build.<p>This makes Debug builds harder to, well, debug, as you have to understand all these abstractions (and the reason why the designers thought they were a good idea, but that falls under the umbrella of psychiatry), as well as making the build unusably slow, often forcing C++ devs to debug Release builds, which means they are staring at 3 lines of x86 assembly with hundreds of lines of 'helpful' compiler-generated source code around it.
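<p>(A small illustration of the Debug-build point - nothing exotic, just summing a vector:)<p><pre><code> #include <vector>

 int sum(const std::vector<int>& v) {
     int total = 0;
     // Release: this optimizes down to a tight pointer loop.
     // Debug (no optimization): begin(), end(), the iterator's
     // operator!=, operator* and operator++ are each separate,
     // un-inlined calls, plus whatever checked-iterator machinery
     // the STL implementation adds.
     for (int x : v)
         total += x;
     return total;
 }
</code></pre>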
This is 95% criticism of C++20 ranges (specifically the range-v3 implementation), but spun as being about some more general trend. I think some of the criticism of ranges is fair - compile times <i>do</i> matter, and non-optimized performance can also matter.<p>Stuff like `iota` being obscure is less convincing. It wasn't invented here, and it's anyway something you learn once and then it's part of your vocabulary.
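<p>For anyone who hasn't met it, `iota` really is tiny once seen - a minimal C++20 example:<p><pre><code> #include <ranges>
 #include <iostream>

 int main() {
     // iota(1) is the unbounded sequence 1, 2, 3, ...; take(5) bounds it.
     for (int i : std::views::iota(1) | std::views::take(5))
         std::cout << i << ' ';  // prints: 1 2 3 4 5
 }
</code></pre>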
I feel this. On the subject of Boost Geometry I cut multiple minutes out of our build times by removing all instances of "#include <boost/geometry/geometry.hpp>". I hate that the example code seems to encourage this - <a href="https://www.boost.org/doc/libs/1_64_0/libs/geometry/doc/html/geometry/reference/algorithms/transform/transform_3_with_strategy.html" rel="nofollow">https://www.boost.org/doc/libs/1_64_0/libs/geometry/doc/html...</a> as it adds a few seconds to the compilation time for each TU that does this (which can be most of them if you have it in another header).
> other viable systems programming languages simply did not exist (now you at least have Rust as a possible contender).<p>(I have no horse in this race) But if a big part of their complaint is compile times, Rust may not be the best example of a contender.
> So this lazy evaluation LINQ style [in the C# example] creates additional 0.03 seconds work for the compiler to do. In comparison, the C++ case was creating an additional 3 seconds of work, or 100x more! This is what you get when “features” are part of the language, as opposed to “it comes as hundred thousand lines of code for the compiler to plow through”.<p>imo this is the big takeaway here
I have 2 personal projects: one in C++, about 90k lines, and the other in JavaScript (ReactJS). A full rebuild of the JS project takes longer than the C++ one.
Yes, the MSVC STL implementation is dog-slow in Debug. I solved the problem by creating a new build configuration called "RelNoOpt" that builds with the "Release" runtime libraries and STL, but turns off all optimizations. I get the debugging experience of a "Debug" build with none of its performance penalties. (Though the extra checks -- esp. iterator invalidation -- in the Debug STL have saved me tons of debugging time a couple of times on another project.)
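<p>(Roughly, the idea is flags along these lines - exact settings depend on your project, but /MD selects the release runtime, which also disables the checked-iterator machinery, while /Od and /Zi keep the code unoptimized and debuggable:)<p><pre><code> cl /MD /Od /Zi /EHsc main.cpp
</code></pre>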
It's interesting to note that the algorithm can be rewritten with 2 nested loops instead of 3; for each z and x, check that there exists an integer y. At 10,000 triples found, this is the difference between (on my system) 700ms and 42000ms, a factor of 60.
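<p>A minimal sketch of that version (plain loops; the only fiddly part is checking that y is an integer):<p><pre><code> #include <cmath>
 #include <cstdio>

 int main() {
     int found = 0;
     for (int z = 1; found < 10000; ++z) {
         for (int x = 1; x < z && found < 10000; ++x) {
             // Two loops: derive y from z and x instead of searching for it.
             long long yy = 1LL * z * z - 1LL * x * x;
             long long y  = std::llround(std::sqrt(double(yy)));
             if (y >= x && y * y == yy) {  // y >= x counts each triple once
                 std::printf("%d %lld %d\n", x, y, z);
                 ++found;
             }
         }
     }
 }
</code></pre>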
In my opinion, simpler is often better than complex. C++ gives control over memory layout and memory (de)allocations. Aside from the complexity of newer C++ template/STL features, they make it harder to stay in control of memory (and performance). Also, the code becomes harder to read. Hence in games (and often in embedded) the use of C-style C++ is popular.
Discussed at the time:<p><i>“Modern” C++ Lamentations</i> - <a href="https://news.ycombinator.com/item?id=18777735" rel="nofollow">https://news.ycombinator.com/item?id=18777735</a> - Dec 2018 (249 comments)
I played around with ranges in dlang. The idea of C++ ranges kind of originates from there. In D it's very easy to use and the standard library lets you write beautiful code with it.<p>C++ made it so ugly. Especially if you wanna implement a custom range.<p>I can recommend this article by Andrei Alexandrescu, if you are interested in the idea of ranges.<p><a href="https://www.informit.com/articles/printerfriendly/1407357" rel="nofollow">https://www.informit.com/articles/printerfriendly/1407357</a>
I also have to agree that the ranges functionality in C++20 is somewhat warty. I wonder, though, how much time the Pythagorean triples example takes to compile with C++20. The ranges library simulated functionality that would later go into the language, so the comparison is not entirely fair.
If I could have function overloading and operator overloading (particularly (), ++, --, *) in C, and could auto-convert my existing codebase, I'd move to C right now. I stay with C++03 and that's enough.
The key is "(2018)". Things are better now.<p>Use ccache, ninja, the mold linker. Lean into asysnc. Split off libraries. A little attention, once, saves time all day every day.