Only a few months ago, people were saying that the deep learning library ecosystem was starting to stabilize. I never saw that as the case. The latest frontier for deep learning libraries is efficient support for dynamic computation graphs.

Dynamic computation graphs arise whenever the amount of work to be done is variable. This may be when we're processing text, where one example is a few words and another is several paragraphs, or when we're performing operations over a tree structure of variable size (see the sketch below the references). The problem is especially prominent in certain subfields, such as natural language processing, where I spend most of my time.

PyTorch tackles this very well, as do Chainer[1] and DyNet[2]. Indeed, PyTorch's construction was directly informed by Chainer[3], though re-architected and designed to be faster still. I have seen all of these receive renewed interest in recent months, particularly amongst researchers doing cutting-edge work in the area. When you're working with new architectures, you want the most flexibility possible, and these frameworks allow for that.

As a counterpoint, TensorFlow does not handle these dynamic graph cases well at all. There are some primitive dynamic constructs, but they're inflexible and often quite limiting. There are plans to make TensorFlow more dynamic in the near future, but retrofitting that is going to be a challenge, especially to do efficiently.

Disclosure: My team at Salesforce Research uses Chainer extensively, and my colleague James Bradbury was a contributor to PyTorch while it was in stealth mode. We're planning to transition from Chainer to PyTorch for future work.

[1]: http://chainer.org/

[2]: https://github.com/clab/dynet

[3]: https://twitter.com/jekbradbury/status/821786330459836416
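To make the dynamic graph point concrete, here is a minimal sketch of the variable-length text case in PyTorch. The encode helper, the embedding and hidden sizes, and the random inputs are illustrative choices of mine, not anything prescribed by PyTorch; the point is only that an ordinary Python loop builds a graph whose depth matches each input's length.

    # Minimal sketch: the computation graph's depth depends on each input's
    # length, decided by a plain Python loop at runtime.
    # Sizes and data here are hypothetical, chosen only for illustration.
    import torch
    import torch.nn as nn

    embed = nn.Embedding(num_embeddings=1000, embedding_dim=32)
    cell = nn.RNNCell(input_size=32, hidden_size=64)

    def encode(token_ids):
        # token_ids: 1-D LongTensor of arbitrary length (a few words or a paragraph)
        h = torch.zeros(1, 64)
        for idx in token_ids:
            x = embed(idx.view(1))   # embed one token -> shape (1, 32)
            h = cell(x, h)           # the graph grows by one step per token
        return h

    short_seq = encode(torch.randint(0, 1000, (4,)))    # builds a 4-step graph
    long_seq = encode(torch.randint(0, 1000, (120,)))   # builds a 120-step graph, same code

Expressing the same thing in a static-graph framework typically means padding and bucketing inputs, or reaching for constructs like TensorFlow's tf.while_loop and tf.cond, which is roughly the kind of primitive dynamic construct I mean above.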