At work, we switched over from TensorFlow to PyTorch when 1.0 was released, both for R&D and production... and our productivity and <i>happiness</i> with PyTorch improved significantly.<p>Back when we were using TensorFlow, whenever we wanted to try something new that wasn't already provided out-of-the-box by existing APIs, sooner or later we would find ourselves <i>wrestling</i> with its machinery, especially for models with more complex control flow.<p>TensorFlow <i>feels</i> like it was built from the ground up to scale to billions of users and all kinds of devices, with developer productivity and happiness a secondary priority. PyTorch <i>feels</i> like it was built the other way around, with developer productivity and happiness first and other considerations secondary.<p>That said, we are keeping an eye on Swift + MLIR + TensorFlow. We think it could unseat PyTorch for R&D and eventually production, due to (a) the promise of automatic creation of high-performance GPU/TPU kernels without hassle, (b) Swift's easy learning curve, and (c) Swift's fast performance and type safety. Jeremy Howard has a good post about this: <a href="https://www.fast.ai/2019/03/06/fastai-swift/" rel="nofollow">https://www.fast.ai/2019/03/06/fastai-swift/</a>
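<p>To make the control-flow point concrete, here is a toy sketch (model name and shapes are made up for illustration) of the kind of data-dependent control flow that is just ordinary Python under PyTorch's eager execution, but required graph-building constructs like tf.while_loop in pre-2.0 TensorFlow:

```python
import torch
import torch.nn as nn

class DynamicDepthNet(nn.Module):
    """Toy model whose effective depth depends on the input at runtime."""

    def __init__(self, dim: int = 8):
        super().__init__()
        self.layer = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # A plain Python while-loop: keep applying the layer until the
        # activations shrink below a threshold, up to a small cap.
        # Eager execution runs this directly, no graph machinery needed.
        steps = 0
        while x.norm() > 1.0 and steps < 10:
            x = torch.tanh(self.layer(x))
            steps += 1
        return x

net = DynamicDepthNet()
out = net(torch.randn(8) * 5)  # shape stays (8,); depth varies per input
```

In graph-mode TensorFlow the same idea meant expressing the loop condition and body as graph ops, which is where much of the wrestling came from.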