What surprised me most is the elegance of the C++ API. Compared to its Python equivalent, the C++ version is almost the same if we discard the "auto" keyword [0]. As mentioned in the docs, they put user-friendliness over micro-optimizations, which also shows the expressiveness of modern C++ (at least when library authors want to prioritize user-friendliness!)
[0]: <a href="https://pytorch.org/cppdocs/frontend.html#end-to-end-example" rel="nofollow">https://pytorch.org/cppdocs/frontend.html#end-to-end-example</a>
Really grateful to the FAIR team for PyTorch. I use deep learning for computational biology. PyTorch lets me focus on the problem rather than fighting the framework (looking at you, TensorFlow) to make something work.
In case anybody else was wondering, since this isn't in the fine article:<p>"PyTorch is a Python package that provides two high-level features:<p>* Tensor computation (like NumPy) with strong GPU acceleration<p>* Deep neural networks built on a tape-based autograd system<p>You can reuse your favorite Python packages such as NumPy, SciPy and Cython to extend PyTorch when needed."
Link to blog post: <a href="https://code.fb.com/ai-research/pytorch-developer-ecosystem-expands-1-0-stable-release/" rel="nofollow">https://code.fb.com/ai-research/pytorch-developer-ecosystem-...</a>
<i>TL;DR</i><p>- New JIT feature that lets you run your model without Python. It now seems trivial to load a PyTorch model in C++<p>- New distributed computation package. Major redesign.<p>- C++ frontend<p>- New Torch Hub feature to load models from GitHub easily
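For a sense of what "load a PyTorch model in C++" looks like in practice, here is a minimal sketch of the Python side, assuming the `torch.jit` tracing API (the module and file name are mine, not from the release notes):

```python
import torch

# A hypothetical two-layer network, just to have something to trace.
class TwoLayerNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(4, 8)
        self.fc2 = torch.nn.Linear(8, 1)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

model = TwoLayerNet()

# Tracing records the ops executed on an example input and produces a
# TorchScript module that no longer depends on the Python interpreter.
traced = torch.jit.trace(model, torch.randn(1, 4))
traced.save("model.pt")
# On the C++ side, torch::jit::load("model.pt") gives you the same module.
```

The `.pt` file is a self-contained serialized program, which is what makes the C++ loading story so simple.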
I've been using PyTorch, and the PyTorch 1.0 pre-release for a while now. I adore it but don't really want to write C++ backends in production.<p>Anyone want to start working on Golang bindings for C++ PyTorch?
Hm, the Mac version of LibTorch is suddenly unavailable!? [0] I swear it was available for download until a few days ago...<p>[0] <a href="https://pytorch.org/get-started/locally/" rel="nofollow">https://pytorch.org/get-started/locally/</a>
> The JIT is a set of compiler tools for bridging the gap between research in PyTorch and production. It allows for the creation of models that can run without a dependency on the Python interpreter and which can be optimized more aggressively. Using program annotations existing models can be transformed into Torch Script, a subset of Python that PyTorch can run directly.<p>Isn't Python bytecode simple enough that it can be run anywhere?
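For concreteness, the "program annotations" the quote mentions look roughly like this, assuming the `@torch.jit.script` decorator (the function itself is my own illustrative example):

```python
import torch

# The @torch.jit.script annotation compiles this function's Python
# subset (Torch Script) ahead of time, so it can run without the
# Python interpreter.
@torch.jit.script
def positive_sum(x: torch.Tensor) -> torch.Tensor:
    total = torch.zeros(1)
    # Loops and conditionals are part of the supported subset.
    for i in range(x.size(0)):
        if bool(x[i] > 0):
            total = total + x[i]
    return total
```

Unlike CPython bytecode, which still needs the interpreter and its runtime semantics, Torch Script is restricted enough to be compiled and optimized as a standalone program.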