I'm a professional scientist, so let me give my two cents on this matter. Being able to compare your work against SOTA (state of the art) is pretty critical in academic publications. If everyone else in your area uses framework X, it makes a lot of sense for you to use it too. For the last few years, PyTorch has been king for the topics I care about.<p>However, one area where TensorFlow shone was the static graph. As our models get more compute-intensive and need different parts to execute in parallel, we are seeing some challenges with PyTorch's eager execution model. For example:<p><a href="https://pytorch.org/docs/stable/notes/cuda.html#use-nn-parallel-distributeddataparallel-instead-of-multiprocessing-or-nn-dataparallel" rel="nofollow">https://pytorch.org/docs/stable/notes/cuda.html#use-nn-paral...</a><p>It appears to me that high-performance model execution is a bit tricky if you want to do many things in parallel. TorchServe also seems quite bare-bones compared to TensorFlow's serving offerings (e.g., TensorFlow Serving). So in summary, I think TensorFlow still has some features unmatched by others. It really depends on what you are doing.
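<p>For anyone who hasn't hit this yet: the pattern the docs push is one process per GPU wrapped in DistributedDataParallel, rather than nn.DataParallel or hand-rolled multiprocessing. A minimal sketch of that setup, assuming a single node with multiple GPUs and a throwaway MyModel (the names and hyperparameters are mine, not from the linked page):

    # One process per GPU, each wrapping the model in DistributedDataParallel.
    import os
    import torch
    import torch.distributed as dist
    import torch.multiprocessing as mp
    from torch.nn.parallel import DistributedDataParallel as DDP

    class MyModel(torch.nn.Module):  # placeholder model for illustration
        def __init__(self):
            super().__init__()
            self.net = torch.nn.Linear(10, 1)
        def forward(self, x):
            return self.net(x)

    def worker(rank, world_size):
        # Rendezvous settings for a single-node job (values are illustrative).
        os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
        os.environ.setdefault("MASTER_PORT", "29500")
        dist.init_process_group("nccl", rank=rank, world_size=world_size)
        torch.cuda.set_device(rank)

        model = DDP(MyModel().to(rank), device_ids=[rank])
        opt = torch.optim.SGD(model.parameters(), lr=0.01)

        # Dummy step: gradients are all-reduced across ranks during backward().
        x = torch.randn(32, 10, device=rank)
        y = torch.randn(32, 1, device=rank)
        loss = torch.nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        world_size = torch.cuda.device_count()
        mp.spawn(worker, args=(world_size,), nprocs=world_size)

It works, but compared to handing TensorFlow a static graph and letting the runtime schedule it, you end up managing process groups, ranks, and device placement yourself, which is the kind of friction I was getting at.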