The two criticisms at the end are... odd.

First, there is the criticism that assigning to a shared_ptr is not synchronized, so it would be bad to share a single shared_ptr object between threads. True, but that is no different than literally every other non-atomic object in C++. It's not surprising in any way.

Second, there is the criticism that assigning to the object pointed at by the shared_ptr is not synchronized between threads. This is odd because it's not actually different from a single thread where there are two shared_ptrs pointing to the same object. That is, even with single threading you have a problem you must be careful about.
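To make the distinction concrete, here's a minimal sketch (names are illustrative, not from the article) of the two situations: sharing one shared_ptr object between threads is a data race on the pointer itself, while sharing copies only leaves the usual problem of synchronizing access to the pointee.

```cpp
#include <memory>
#include <thread>

int main() {
    auto p = std::make_shared<int>(0);

    // Criticism 1: assigning to the *same* shared_ptr object from two threads
    // is a data race on p itself, just like any other non-atomic object:
    //   thread A: p = std::make_shared<int>(1);
    //   thread B: p = std::make_shared<int>(2);   // undefined behavior

    // Criticism 2: copies of the shared_ptr are fine (the refcount is atomic),
    // but writes to the pointee still need synchronization, exactly as they
    // would with two shared_ptrs to the same object in a single thread.
    std::thread t([q = p] { /* *q = 42; would need a mutex or std::atomic */ });
    *p = 7;   // safe here only because the worker never actually touches *q
    t.join();
}
```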
See also:

https://www.boost.org/doc/libs/1_65_0/libs/smart_ptr/doc/html/smart_ptr.html#local_shared_ptr

i.e. a single-threaded, non-atomic shared_ptr.

Rust fans can gripe about "C++ has no central library system like crates" all they want, but there aren't many things you actually need when programming that don't exist for C++, even if you don't like that they don't come in a little box that looks like other little boxes.
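A quick sketch of what using it looks like, assuming Boost >= 1.65 with the headers named in the docs linked above:

```cpp
// Single-threaded reference counting: no atomic ops on the count.
// Header names per the Boost.SmartPtr documentation linked above.
#include <boost/smart_ptr/local_shared_ptr.hpp>
#include <boost/smart_ptr/make_local_shared.hpp>

int main() {
    boost::local_shared_ptr<int> p = boost::make_local_shared<int>(42);
    boost::local_shared_ptr<int> q = p;   // non-atomic refcount increment
    return *q;
}
```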
> Apparently, this is enough of an issue that C++20 added a partial template specialization to std::atomic&lt;std::shared_ptr&gt;. My advice, though, would be "don't do that!". Instead, keep your shared pointer in a single thread, and send copies to other threads as needed.

This is to support an atomic, lock-free shared_ptr. You can then use it as a building block for lock-free data structures.
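For reference, a minimal C++20 sketch of that specialization in use (illustrative only; whether it is actually lock-free is implementation-defined and can be queried with is_lock_free()):

```cpp
#include <atomic>
#include <memory>
#include <thread>

int main() {
    // C++20 partial specialization: std::atomic<std::shared_ptr<T>>.
    std::atomic<std::shared_ptr<int>> slot{std::make_shared<int>(1)};

    std::thread writer([&] {
        slot.store(std::make_shared<int>(2));          // atomic replacement
    });
    std::thread reader([&] {
        std::shared_ptr<int> snapshot = slot.load();   // atomic snapshot
        (void)*snapshot;                               // safe: snapshot co-owns it
    });

    writer.join();
    reader.join();
}
```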
Pretty clickbaitey title.

It's possible to implement in C++... so it's not "too dangerous" for C++. It's dangerous for people who don't know what they're doing in C++, same as in any programming language.
In my experience, the biggest footgun with shared_ptr and multithreading is actually destruction.

It is very hard to know which thread will call the destructor (which is by definition a non-thread-safe operation), and whether a lambda is currently holding a reference to the object or its members. Different runs result in different threads calling the destructor, which is very painful to predict and debug.

I think Rust suffers from the same issue, but maybe it is less relevant there, since it is a lot harder to cause thread-safety issues in Rust.
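A small sketch of the pattern being described (names are made up): whichever thread drops the last reference runs ~Widget, so depending on timing the destructor may fire on the main thread or on the worker.

```cpp
#include <cstdio>
#include <memory>
#include <thread>

struct Widget {
    ~Widget() {
        std::printf("destroyed on thread %zu\n",
                    std::hash<std::thread::id>{}(std::this_thread::get_id()));
    }
};

int main() {
    auto w = std::make_shared<Widget>();

    // The lambda captures a copy, so the worker co-owns the Widget.
    std::thread worker([w] {
        // ... do some work with *w ...
    });

    w.reset();       // main thread gives up its reference
    worker.join();   // if the worker released last, ~Widget ran on the worker
}
```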
From the Stack Overflow link within TFA:

> With GCC when your program doesn't use multiple threads shared_ptr doesn't use atomic ops for the refcount. This is done by updating the reference counts via wrapper functions that detect whether the program is multithreaded (on GNU/Linux this is done by checking a special variable in Glibc that says if the program is single-threaded[1]) and dispatch to atomic or non-atomic operations accordingly.

> I realised many years ago that because GCC's shared_ptr&lt;T&gt; is implemented in terms of a __shared_ptr&lt;T, _LockPolicy&gt; base class, it's possible to use the base class with the single-threaded locking policy even in multithreaded code, by explicitly using __shared_ptr&lt;T, __gnu_cxx::_S_single&gt;. You can use an alias template like this to define a shared pointer type that is not thread-safe, but is slightly faster[2]:
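The quote cuts off where the code block would have been; reconstructed from the class and policy names mentioned above (the alias name here is illustrative), it looks roughly like this. It relies on libstdc++ internals, so it is not portable and not guaranteed by any standard:

```cpp
// Relies on libstdc++ internals (double-underscore names): std::__shared_ptr
// and the __gnu_cxx::_Lock_policy enum, both pulled in by <memory> on GCC.
#include <memory>

template<typename T>
using shared_ptr_unsynchronized = std::__shared_ptr<T, __gnu_cxx::_S_single>;
```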
Refcounted memory management on a large scale is slow anyway, with or without atomic refcounting. The bigger problem is that Rc, Arc, or shared_ptr often manage only one small object, and that object lives in a separate tiny heap allocation. So you end up with many tiny heap allocations spread more or less randomly around in memory, and the likelihood of getting cache misses on access is much higher than if you tightly pack the underlying data into arrays and walk over the array items in order.

And if you only have a small number of refcounted references in your program, the small performance difference between atomic and non-atomic refcounting doesn't matter either.

Same problem with Box and unique_ptr, btw: a handful is OK, but once that number grows into the thousands all over the codebase it's hard to do any meaningful optimization (or even figure out how much performance you're actually losing to cache misses, because it's a death-by-a-thousand-cuts scenario).
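A toy illustration of the layout difference (types and function names are made up): the pointer-per-item version chases a separate heap allocation for every element, while the packed version walks contiguous memory.

```cpp
#include <memory>
#include <vector>

struct Item { int value; };

// Pointer per item: each Item is its own heap allocation, scattered in memory,
// so an in-order walk is a series of likely cache misses.
long sum_scattered(const std::vector<std::shared_ptr<Item>>& items) {
    long total = 0;
    for (const auto& p : items) total += p->value;
    return total;
}

// Packed: Items are contiguous, so the same walk is cache- and prefetch-friendly.
long sum_packed(const std::vector<Item>& items) {
    long total = 0;
    for (const auto& item : items) total += item.value;
    return total;
}
```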
I am not going to be surprised to be downvoted, but you don't need shared_ptr in C++; it is itself overkill.

The point of C++ is performance. If you don't need performance, why not just use Java or Python? Why use Rust?
There are valid selling points to Rust's safety features, but this just feels like "I use Rust because I need my compiler to be my training wheels". More of a self-own than anything.
> *the Rc type does not support being sent between threads*

So why even have such a thing in a language designed for concurrent programming from the ground up?

Arc should be called Rc, and that's it.