On the main question, I don’t think Python will disappear for LLM inference *soon*. But as AI inference gets built into more things, I think two processes will determine the longer-term outcome:

(1) Python’s dominance in AI inference is driving, and will continue to drive, investment in improving Python at the many things people want to do alongside AI inference that it isn’t currently great at. We’ve already seen a lot of this over the last few years, with physics engines and robotics simulation platforms for Python. Some of these are Python bindings for existing libraries written in other languages, but some are built in Python itself, e.g. via Taichi or Numba: both can produce and execute GPU kernels from Python code, and Numba can also JIT-compile and parallelize (mostly numeric) Python code on CPU (see the sketch at the end of this comment). This will also include investment in Python’s core and standard library to address pain points.

(2) At the same time, the increasing importance of AI inference will drive more investment in AI inference libraries for non-Python platforms.

The relative balance between the progress of those two efforts will be a big factor in how much Python is used in inference going forward, for AI in general and for LLMs in particular.
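
As a quick illustration of the Numba point above, here’s a minimal sketch of JIT-compiling and parallelizing a numeric loop on CPU (the function and data are made up for the example; requires numpy and numba):

    import numpy as np
    from numba import njit, prange

    @njit(parallel=True)
    def pairwise_dist_sum(points):
        # Numba compiles this function to machine code on first call;
        # prange splits the outer loop across CPU threads, and the
        # scalar accumulation is handled as a parallel reduction.
        n = points.shape[0]
        total = 0.0
        for i in prange(n):
            for j in range(n):
                d = 0.0
                for k in range(points.shape[1]):
                    diff = points[i, k] - points[j, k]
                    d += diff * diff
                total += np.sqrt(d)
        return total

    points = np.random.rand(1000, 3)
    print(pairwise_dist_sum(points))  # orders of magnitude faster than pure Python

The same decorator-on-plain-Python pattern is what makes this ecosystem attractive: the numeric code stays in Python, and the compilation is opt-in.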