It was necessary to move away from Lua to stay relevant within the machine learning community. Python was a natural choice because Theano and TensorFlow were already there.

PyTorch could pick up the best API ideas from the other frameworks (including higher-level ones like Keras), and it was executed well. The core principle of easy debuggability really does matter for winning over developers. Clean code, understandable code, flexibility: these are all closely related to that, or mostly the same thing.

It's easy for a successful framework to become bloated, complex and complicated, though. I wonder how PyTorch will look in a few years. I also remember the first TensorFlow releases, where the whole source code was still quite easy to understand. Then TensorFlow added more and more things, many different types of APIs, started deprecating earlier things, etc. PyTorch's internal code is also already much more complex than it was initially.

One reason JAX is now popular is that it again started with a fresh API. Beyond that, it is built on the idea of code transformations, which seems nice and powerful (see the sketch below).

When looking at these developments, I really wonder what the future will look like. It's good to have new ideas and new or improved APIs. It's also good to adapt things for new kinds of hardware (GPUs, TPUs, maybe other neuromorphic hardware later).
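A minimal sketch of what those transformations look like in practice: jax.grad and jax.jit are ordinary functions that take a Python function and return a transformed one (gradient computation and XLA compilation, respectively). The loss function here is just an illustrative example, not from the article:

    import jax
    import jax.numpy as jnp

    # An ordinary Python function; JAX traces it, no separate graph API needed.
    def loss(w, x, y):
        pred = jnp.dot(x, w)
        return jnp.mean((pred - y) ** 2)

    # Transformations compose: grad builds the gradient function,
    # jit then compiles the result with XLA.
    grad_loss = jax.jit(jax.grad(loss))

    w = jnp.ones(3)
    x = jnp.array([[1.0, 2.0, 3.0]])
    y = jnp.array([10.0])
    print(grad_loss(w, x, y))  # gradient of the loss w.r.t. w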
> So, over the years, I absorbed and appreciated that Torch was a user-centric product, which stood for immediate-mode, easy-to-debug, stay-out-of-the-way explicitness. It was targeted at people somewhat familiar with programming matters, and who could reason about things like performance, and if needed, write a C function and bind it in quickly.

This paragraph sort of surprises me. In my experience, if you want to do anything other than calling out to numeric libraries, you can do it in Lua and it will work, or you can do it in Python and suddenly your machine learning pipeline will spend 95% of its time running Python while your GPU idles. So the need to be able to drop down to C is much more severe in Python, and the difficulty of calling out to C is much greater.
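A minimal sketch of that overhead (illustrative only; exact numbers will vary): looping over a tensor element by element keeps every iteration in the Python interpreter, while one vectorized call dispatches straight into compiled C/C++ (or CUDA) code.

    import time
    import torch

    x = torch.randn(100_000)

    # Per-element loop: every iteration pays Python interpreter overhead.
    start = time.perf_counter()
    s = 0.0
    for v in x:
        s += float(v) * 2.0
    loop_time = time.perf_counter() - start

    # One vectorized op: a single dispatch into compiled code.
    start = time.perf_counter()
    s2 = float((x * 2.0).sum())
    vec_time = time.perf_counter() - start

    print(f"loop: {loop_time:.3f}s  vectorized: {vec_time:.6f}s")

On a GPU the gap is even worse: the device sits idle between tiny kernel launches while Python drives the loop.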
This article does a good job explaining how PyTorch gained an advantage over TensorFlow. The 1.0 release of TensorFlow, with graphs and feed_dicts, was a little clunky but made sense. After 1.0, the second-system effect took hold quickly. Eager mode, Keras, TFX... it all started to look like a mess.
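For anyone who never used it, the graph-and-feed_dict style worked roughly like this (written against the tf.compat.v1 shim so it still runs on TensorFlow 2):

    import tensorflow.compat.v1 as tf
    tf.disable_eager_execution()

    # First build a static graph; nothing is computed yet.
    x = tf.placeholder(tf.float32, shape=[None, 3], name="x")
    w = tf.Variable(tf.ones([3, 1]), name="w")
    y = tf.matmul(x, w)

    # Then execute it in a session, feeding concrete values for the placeholder.
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))  # [[6.]]

Clunky, but at least the mental model was consistent: one way to define a graph, one way to run it.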