Notable co-releases along with PyTorch 1.5:

- TorchServe: model serving infrastructure for scalable model deployment

- TorchElastic w/ Kubernetes: fault-tolerant "elastic" neural network training, allowing nodes to join and leave (e.g. to leverage spot pricing)

- Torch_XLA: updates for PyTorch TPU support

- New releases of torchvision, torchaudio and torchtext

Summary blog post at https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/
I'm curious about the NHWC layout they mentioned.

AFAIK cuDNN always had optimizations for NCHW, and that was one of TensorFlow's speed issues when it chose to default to NHWC, plus the related issues with writing transformation pipelines.

So what does NHWC enable that is new?

Relevant in-depth discussion, including the cuDNN team lead, Julien Demouth, and Scott Gray, who implemented Winograd convolution for Nervana Neon (which, interestingly, was CHWN, so batch last): https://github.com/soumith/convnet-benchmarks/issues/93#issuecomment-192621350
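For context, the NHWC support in 1.5 surfaces as an opt-in "channels_last" memory format: the tensor still reports NCHW sizes, only the strides change, so existing indexing code is untouched. A minimal sketch with the C++ frontend follows (the shapes, channel counts and kernel size are arbitrary, and the actual speedups need CUDA + cuDNN with Tensor Core hardware, which this toy example doesn't exercise; the Python spelling is roughly x.to(memory_format=torch.channels_last)):

    #include <torch/torch.h>
    #include <iostream>

    int main() {
      // Plain conv layer and input batch in the default contiguous (NCHW) layout.
      torch::nn::Conv2d conv(torch::nn::Conv2dOptions(3, 64, /*kernel_size=*/3).padding(1));
      torch::Tensor x = torch::randn({8, 3, 224, 224});

      // Ask for channels-last (NHWC) storage: reported sizes stay N,C,H,W,
      // only the strides change, so downstream code keeps indexing as before.
      x = x.contiguous(at::MemoryFormat::ChannelsLast);
      std::cout << x.is_contiguous(at::MemoryFormat::ChannelsLast) << "\n";  // 1

      // Convolution accepts the channels-last input; with a cuDNN backend the
      // NHWC kernels can then be selected, which is where the speedup comes from.
      torch::Tensor y = conv->forward(x);
      std::cout << y.sizes() << "\n";  // [8, 64, 224, 224]
    }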
I like the fact that the C++ API now has the same features as the Python one. It was hard to find good C++-based NN libraries up until 2-4 years ago. The likes of TensorFlow had a C++ API, but the documentation was odd. Now that Facebook and Google (with TensorFlow) appear to be committed to maintaining well-documented C++ APIs for their ML projects, perhaps it might draw a few people away from using Python for this work.

While it's a notoriously verbose language, your deployment options do increase with C++, and you also get type safety, which seems like a good thing for ML work.
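To give a flavour of what that parity looks like, here's a minimal sketch with the C++ frontend (libtorch), following the usual torch.nn / torch.optim idioms; the layer sizes, batch size and learning rate are arbitrary:

    #include <torch/torch.h>

    // A small feed-forward net, defined much as you would in Python with
    // torch.nn.Module, but with compile-time types throughout.
    struct Net : torch::nn::Module {
      Net() {
        fc1 = register_module("fc1", torch::nn::Linear(784, 64));
        fc2 = register_module("fc2", torch::nn::Linear(64, 10));
      }
      torch::Tensor forward(torch::Tensor x) {
        x = torch::relu(fc1->forward(x));
        return torch::log_softmax(fc2->forward(x), /*dim=*/1);
      }
      torch::nn::Linear fc1{nullptr}, fc2{nullptr};
    };

    int main() {
      auto net = std::make_shared<Net>();
      torch::optim::SGD opt(net->parameters(), /*lr=*/0.01);

      // One optimizer step on random data, just to show the training-loop shape.
      auto x = torch::randn({32, 784});
      auto target = torch::randint(/*low=*/0, /*high=*/10, {32}, torch::kLong);

      opt.zero_grad();
      auto loss = torch::nll_loss(net->forward(x), target);
      loss.backward();
      opt.step();
    }

The shared_ptr-based module ownership is the main idiom that differs from Python; the rest maps almost one-to-one onto the familiar API.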
I'm not one to normally criticise some minor layout choices on a blog, but this font is really difficult to read for me.

On Firefox: https://i.imgur.com/zIHis3x.png

It's so wavy.