Not really anything new in there. I've been dealing with Python concurrency a lot and I don't find it great compared to other languages (e.g. Kotlin).

One thing I am struggling with right now is how to handle a function that is both I/O intensive and CPU-bound. To give more context, I am processing data that on paper is easy to parallelise. Say for 1000 lines of data, I have to execute my function f for each line, in any order. However, f uses the CPU a lot but also makes up to 4 network requests.

My current approach is to divide 1000/n_cores, launch n_cores processes, and on each of them run f asynchronously over that process's chunk of inputs, with async handling the switching on I/O. I wonder if my approach could be improved.
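
Roughly what I mean, as a minimal sketch (aiohttp, the URL, and the helper names are just placeholders standing in for my actual f):

    import asyncio
    from concurrent.futures import ProcessPoolExecutor

    import aiohttp  # assumption: the network requests go through aiohttp


    def some_expensive_transform(data: str) -> str:
        # stand-in for the CPU-heavy part of f
        return data.upper()


    async def f(session: aiohttp.ClientSession, line: str) -> str:
        # the I/O-bound part: up to a few network requests per line
        async with session.get("https://example.com/api", params={"q": line}) as resp:
            data = await resp.text()
        # the CPU-bound part runs after the response arrives
        return some_expensive_transform(data)


    async def run_chunk_async(lines: list[str]) -> list[str]:
        # inside one process: run f concurrently so the event loop
        # can switch tasks while requests are in flight
        async with aiohttp.ClientSession() as session:
            return await asyncio.gather(*(f(session, line) for line in lines))


    def run_chunk(lines: list[str]) -> list[str]:
        # entry point for each worker process: each one gets its own event loop
        return asyncio.run(run_chunk_async(lines))


    def process_all(lines: list[str], n_cores: int = 4) -> list[str]:
        # split the inputs into n_cores chunks, one chunk per process
        chunk_size = -(-len(lines) // n_cores)  # ceiling division
        chunks = [lines[i:i + chunk_size] for i in range(0, len(lines), chunk_size)]
        results: list[str] = []
        with ProcessPoolExecutor(max_workers=n_cores) as pool:
            for chunk_result in pool.map(run_chunk, chunks):
                results.extend(chunk_result)
        return results

The main point is that each worker process starts its own event loop via asyncio.run, so the CPU work is spread across cores while the I/O within each process overlaps.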