The problem isn't "intmax_t". The problem is "int".<p>If you have an ABI, you need to put an explicit size and signedness on every parameter and return value. Period. No excuses.<p>No "int". No "unsigned int". If I'm being really pedantic, don't even use "char".<p>It should be "int32_t", "uint32_t", and "uint8_t".<p>Every time I see objections, it's always someone who wants to use some weird 16-bit architecture. The problem is that those libraries <i>probably won't work anyhow</i> since nobody tests their libraries on anything other than x86 and maybe Arm. If your "int" is 16 bits, you're likely to have a broken library <i>anyway</i>.
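For illustration, a minimal sketch of an exported declaration done both ways (the function and parameter names here are hypothetical, not from any real library):<p><pre><code>    #include <stdint.h>

    /* ambiguous across ABIs: "int", "unsigned int", and "char"
       all vary in width and/or signedness by platform */
    int parse_record(char *buf, unsigned int len);

    /* every width and signedness is explicit */
    int32_t parse_record_v2(uint8_t *buf, uint32_t len);
</code></pre>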
As a developer I complain that passing unique_ptr is a couple of cycles slower than it needs to be because of the ABI, and I wish the committee/GCC were more aggressive about breaking the ABI.<p>As a user I complain that I can't run Linux games from 20 years ago because GCC broke the libstdc++ ABI 15 years ago, and that Win32 is the only stable Unix ABI.<p>Luckily I'm not a compiler developer.
> the vast majority of the shared ecosystem depends on shared libraries/dynamically linked libraries for the standard.<p>The more I use C and C++, the more I am convinced that shared libraries are the biggest technical debt for the whole ecosystem. It is these shared libraries that are the driving impetus behind “ABI stability”. It is because of shared libraries that we can’t have nice performance or safety enhancing features.<p>Right now the C and C++ ecosystem is groaning under the weight of shared library technical debt.
intmax_t should be kept out of stable ABI definitions, and out of APIs.<p>There has to be an ABI for it, because we have to pin down what it means to pass an intmax_t as an argument to a function, how it is aligned on the stack if passed that way, how it is placed into a structure, and so on.<p>However, there could be a provision that the ABI treatment of intmax_t is not guaranteed; it is subject to change if intmax_t is ever redefined.<p>And, for that reason, it should be kept out of APIs.<p>That leaves APIs that deal specifically with intmax_t itself, rather than using it to represent something. Those can use aliasing and versioning.<p>Say we had a function like this:<p><pre><code> struct intmax_quot_rem intmax_div(intmax_t, intmax_t);
</code></pre>
Today, intmax_t might be 64 bits, so the application should be compiled in such a way that the call to intmax_div goes to some __intmax_div_64, which looks like this:<p><pre><code> struct intmax_quot_rem __intmax_div_64(int64_t, int64_t);
</code></pre>
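One way to get that redirection is for the header to bind the public name to the width-specific symbol; a sketch using the GCC/Clang asm-label extension (a plain macro alias would also do):<p><pre><code>    /* hypothetical header excerpt, assuming a 64-bit intmax_t today:
       calls to intmax_div are emitted against __intmax_div_64 */
    struct intmax_quot_rem intmax_div(intmax_t, intmax_t)
        __asm__("__intmax_div_64");
</code></pre>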
Even when intmax_t changes to 128 bits, that compiled program continues to reference __intmax_div_64, which uses int64_t parameters and structure members. A newly compiled program calls __intmax_div_128.<p>A particular problem would be functions in the printf family. Say we have a conversion specifier that prints intmax_t, which is 64 bits today. Here, the solution is even simpler: the "PRI" macros introduced in C99 provide it. Given an intmax_t value x, we print it like this:<p><pre><code> printf("x = %" PRIdMAX "\n", x);
</code></pre>
So today that might expand to a conversion specifier identical to the one for PRId64. That compiled program will have it baked into its conversion string, so everything will continue to work even if the platform moves to a 128-bit intmax_t.<p>A newly compiled program on the 128-bit-intmax_t platform will get a different PRIdMAX string from the header file, one which expands to a conversion specifier matching int128_t.<p>Basically, all the issues are solvable, except the issue of application code carelessly using intmax_t in its APIs without any plan for versioning.
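Concretely, the two headers might differ only in what the macro expands to; a hypothetical sketch (the 128-bit specifier assumes something like C23's wN length modifiers):<p><pre><code>    /* inttypes.h on today's platform, with a 64-bit intmax_t */
    #define PRIdMAX "lld"      /* same conversion as PRId64 here */

    /* inttypes.h after a move to a 128-bit intmax_t */
    #define PRIdMAX "w128d"    /* e.g. a C23-style wN length modifier */
</code></pre>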
I think C# did it right back in 2000: regardless of CPU architecture, integer types like short, int, and ulong have fixed sizes of 16, 32, and 64 bits respectively.<p>There are a couple of special types with machine-dependent size, like IntPtr, but those are only used for opaque handles and C interop.