I'm a professional scientist, so let me give my two cents on this matter. Being able to compare your work against SOTA (state of the art) is pretty critical in academic publications. If everyone else in your area uses framework X, it makes a lot of sense for you to do it too. For the last few years, PyTorch has been king for the topics I care about.<p>However, one area where Tensorflow shined was the static graph. As our models get even more intensive and need different parts to execute in parallel, we are seeing some challenges in PyTorch's execution model. For example:<p><a href="https://pytorch.org/docs/stable/notes/cuda.html#use-nn-parallel-distributeddataparallel-instead-of-multiprocessing-or-nn-dataparallel" rel="nofollow">https://pytorch.org/docs/stable/notes/cuda.html#use-nn-paral...</a><p>It appears to me that high-performance model execution is a bit tricky if you want to do lots of things in parallel. TorchServe also seems quite simple compared to offerings from Tensorflow. So in summary, I think Tensorflow still has some features unmatched by others. It really depends on what you are doing.
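To make the linked recommendation concrete: the docs steer you away from nn.DataParallel toward DistributedDataParallel (DDP). A minimal sketch of the DDP API shape, run here as a single CPU process with the "gloo" backend purely for illustration (real jobs spawn one process per GPU via torchrun or mp.spawn; the address/port values are arbitrary):

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Single-process setup just to show the API; normally rank/world_size
# come from the launcher and there is one process per GPU.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

model = torch.nn.Linear(8, 2)
ddp_model = DDP(model)  # gradients are all-reduced across ranks during backward()

opt = torch.optim.SGD(ddp_model.parameters(), lr=0.1)
x, y = torch.randn(4, 8), torch.randn(4, 2)
loss = torch.nn.functional.mse_loss(ddp_model(x), y)
loss.backward()
opt.step()
dist.destroy_process_group()
```

The point of DDP over DataParallel is one process per device rather than one Python process scattering/gathering across devices, which sidesteps the GIL and scales to multiple machines.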
Can someone please share the current state of deploying PyTorch models to production? TensorFlow has TF Serving, which is excellent and scalable. Last I checked there wasn't a PyTorch equivalent.
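Not a full answer, but as I understand it the usual PyTorch path is to export the model to TorchScript, which TorchServe and the C++ libtorch runtime can load without a Python dependency. A minimal sketch (toy model, filename is arbitrary):

```python
import torch

model = torch.nn.Sequential(torch.nn.Linear(4, 3), torch.nn.ReLU())
model.eval()

# Trace with a sample input to freeze the model into TorchScript.
scripted = torch.jit.trace(model, torch.randn(1, 4))
scripted.save("model.pt")  # this .pt is what torch-model-archiver packages for TorchServe

# Anything that can load TorchScript can now run it, no Python model code needed.
loaded = torch.jit.load("model.pt")
out = loaded(torch.randn(2, 4))
```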
I'm curious how these charts look for companies that are serving ML in production, not just research. Research is biased towards flexibility and ease of use, not necessarily scalability or having a production ecosystem.
If I may, there's no real reason to break out ACL vs. NAACL vs. EMNLP, since they're all run by the ACL and one would be hard-pressed to say how the EMNLP community might differ from the ACL community at this point. And if you're doing NAACL you might want to do EACL and IJCNLP too.
The graph here seems similar to what I've noticed. My lab uses Tensorflow, mainly because that's what I know. And the only reason I learned Tensorflow in the first place was that PyTorch was just getting started when I was choosing a framework, and its documentation wasn't as established. However, when a student recently asked me which framework to pick, I recommended PyTorch for its comparative ease of implementation.
A big mistake on Tensorflow's side was trying to copy Theano, including those dreadful functional loops, whereas in PyTorch for loops are not a pain to use and are very well integrated with the language.
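For anyone who hasn't felt the difference: in PyTorch the loop is just a Python for loop, and autograd records each iteration dynamically, so there's no need for a theano.scan / tf.while_loop construct. A toy example (a running sum over a sequence, differentiable end to end):

```python
import torch

def running_sum(xs):
    # xs: (T, d) sequence; accumulate step by step with a plain Python loop.
    acc = torch.zeros(xs.shape[1])
    outs = []
    for t in range(xs.shape[0]):
        acc = acc + xs[t]
        outs.append(acc)
    return torch.stack(outs)

xs = torch.ones(3, 2, requires_grad=True)
ys = running_sum(xs)        # ys[t] = xs[0] + ... + xs[t]
ys.sum().backward()         # gradients flow through every loop iteration
```

You can put prints, breakpoints, or data-dependent branches inside that loop, which is exactly what the static-graph loop constructs made awkward.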
I've mostly used Tensorflow, and I'm curious: what makes PyTorch models easier to implement?
With Tensorflow 2 you get the Keras API which is really easy to use.
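For illustration, a minimal Keras model in TF2 (toy layer sizes, assuming tensorflow is installed; Sequential builds itself lazily on first call):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Shapes are inferred on the first call.
out = model(tf.random.normal((2, 4)))
```

With compile() done, training is a one-liner via model.fit(x, y), which is a big part of why the Keras API is considered easy to use.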