So this means modules like concurrent.futures can create a thread pool (as opposed to a process pool) and run in parallel?

Or is the functionality as of 3.13 still limited to low-level Python embedding applications?
> "No matter what happens there is going to be a lock occurring. No two PyThreadStates can execute Python bytecode at the same time. However, they can execute multiple C calls at the same time which is why for long running pure C operations extension and embedding developers are encouraged to release the GIL temporarily."<p>Very exciting! I wonder what the first set of motivating applications are and what kind of performance gains are they expecting.
> // 500 interpreter states

> static constexpr auto MAXIMUM_STATES = 463;

There is a joke here that I’m missing. Does anyone understand what it is?
Does this mean that even though the Python 3.13 release is around the corner, we should still expect some issues when running threads in a program?

Maybe I’m slow today, but is the author complaining (rightfully so) about the buggy no-GIL implementation?

Should we wait until the feature is more stable before building new libraries on it? I was thinking of making use of it in new libraries I plan to develop.

What are the most obvious use cases for testing this feature?

PS: So many questions, I know, sorry everybody!