“Whenever a new major CUDA Compute Capability is released, the ABI is broken. A new NVIDIA C++ Standard Library ABI version is introduced and becomes the default and support for all older ABI versions is dropped.”

https://github.com/NVIDIA/libcudacxx/blob/main/docs/releases/versioning.md
> Promising long-term ABI stability would prevent us from fixing mistakes and providing best in class performance. So, we make no such promises.

Wait, NVIDIA actually gets it? Neat!
It really is a tiny subset of the C++ standard library, but I'm happy to see they're continuing to expand it: https://nvidia.github.io/libcudacxx/api.html
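For a taste of that subset, here's a minimal sketch using <cuda/std/type_traits>, which has been in libcu++ since the early releases:

```cuda
#include <cuda/std/type_traits>
#include <cstdio>

template <typename T>
__global__ void scale(T* data, T factor, int n) {
    // Device-usable mirror of std::is_arithmetic, checked at compile time.
    static_assert(cuda::std::is_arithmetic<T>::value,
                  "scale() only accepts arithmetic types");
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    float* d;
    cudaMallocManaged(&d, 4 * sizeof(float));
    for (int i = 0; i < 4; ++i) d[i] = float(i);
    scale<<<1, 4>>>(d, 2.0f, 4);
    cudaDeviceSynchronize();
    printf("%g %g %g %g\n", d[0], d[1], d[2], d[3]);  // 0 2 4 6
    cudaFree(d);
    return 0;
}
```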
For everyone wondering where all the data structures and algorithms are: vector and several algorithms are implemented in Thrust. https://docs.nvidia.com/cuda/thrust/index.html

Seems the big addition of libcu++ over Thrust is synchronization. A sketch of the Thrust side is below.
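A hedged sketch of what Thrust covers, using its long-standing APIs (thrust::device_vector, thrust::sort, thrust::reduce):

```cuda
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/reduce.h>
#include <vector>
#include <cstdio>

int main() {
    std::vector<int> h = {3, 1, 4, 1, 5, 9, 2, 6};

    // Copies to the GPU; device_vector manages device memory RAII-style.
    thrust::device_vector<int> d(h.begin(), h.end());

    thrust::sort(d.begin(), d.end());                 // parallel sort on the GPU
    int sum = thrust::reduce(d.begin(), d.end(), 0);  // parallel reduction

    printf("sum = %d\n", sum);  // 31
    return 0;
}
```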
Here's a somewhat related talk from CppCon '19:
"The One-Decade Task: Putting std::atomic in CUDA"<p><a href="https://www.youtube.com/watch?v=VogqOscJYvk" rel="nofollow">https://www.youtube.com/watch?v=VogqOscJYvk</a>
This is super-cool.

For those of us who can't adopt it right away, note that you can compile your CUDA code with `--expt-relaxed-constexpr` and call any constexpr function from device code. That includes all the constexpr functions in the standard library!

This gets you quite a bit, but not e.g. std::atomic, which is one of the big things in here.
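For example (a sketch; std::min has been constexpr since C++14, so under that flag nvcc accepts it in device code even though <algorithm> carries no __device__ annotations):

```cuda
// Compile with: nvcc --expt-relaxed-constexpr -std=c++14 example.cu
#include <algorithm>  // std::min is constexpr since C++14
#include <cstdio>

__global__ void kernel(int a, int b) {
    // Without --expt-relaxed-constexpr this is a compile error:
    // std::min is host-only code as far as nvcc is concerned.
    printf("min = %d\n", std::min(a, b));
}

int main() {
    kernel<<<1, 1>>>(3, 7);
    cudaDeviceSynchronize();
    return 0;
}
```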
Unfortunate name: "cu" is the most well-known slang for "anus" in Brazil (population: 200+ million). "Libcu++" is sure to cause snickering.
1. How do we know which parts of the library are usable on CUDA devices, and which are only usable in host-side code?

2. How compatible is this with libstdc++ and/or libc++, when used independently?

I'm somewhat suspicious of the presumption of us using NVIDIA's version of the standard library for our host-side work.

Finally, I'm not sure that, for device-side work, libc++ is a better base to start from than, say, EASTL (which I used for my tuple class: https://github.com/eyalroz/cuda-kat/blob/master/src/kat/tuple.hpp).

...

Partial self-answer to (1.): https://nvidia.github.io/libcudacxx/api.html suggests only a small bit of the library is actually implemented.
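Also on (1.): as I understand it, everything under cuda::std:: is annotated __host__ __device__, so the same call compiles on both sides. A sketch, assuming a libcu++ release that ships <cuda/std/tuple> (later releases do; the initial one may not):

```cuda
#include <cuda/std/tuple>
#include <cstdio>

// Callable from host and device alike: cuda::std:: facilities
// carry __host__ __device__ annotations.
__host__ __device__ int sum_pair(const cuda::std::tuple<int, int>& t) {
    return cuda::std::get<0>(t) + cuda::std::get<1>(t);
}

__global__ void kernel() {
    printf("device: %d\n", sum_pair(cuda::std::make_tuple(1, 2)));
}

int main() {
    printf("host: %d\n", sum_pair(cuda::std::make_tuple(1, 2)));
    kernel<<<1, 1>>>();
    cudaDeviceSynchronize();
    return 0;
}
```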
I really do not understand why a (very good) hardware provider wants to create and steer custom software for its users.

Isn't this exactly what GPU firmware is supposed to do? Why do they need to run software in the same memory space as my mail reader?