PyTorch v0.2.0 released

150 points · by evc123 · almost 8 years ago

5 comments

cs702 · almost 8 years ago
I LOVE PyTorch for experimenting with *dynamic* deep neural nets (DNNs) -- that is, DNNs that can have different graphs for different input samples. I find it much, MUCH easier to create and tinker with dynamic DNNs using PyTorch than, say, TensorFlow Fold. PyTorch is great for R&D experimentation.

For example, here's how easy it is to construct a fully-connected neural net with a *dynamically random* number of recurrent hidden layers in PyTorch. Yes, it's a silly example, but it shows how easy it is to construct dynamic DNNs with PyTorch:

```python
import random
import torch

class MySillyDNN(torch.nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim):
        super(MySillyDNN, self).__init__()
        self.input_layer = torch.nn.Linear(input_dim, hidden_dim)
        self.hidden_layer = torch.nn.Linear(hidden_dim, hidden_dim)
        self.output_layer = torch.nn.Linear(hidden_dim, output_dim)

    def forward(self, x, max_recurrences=3):
        hidden_relu = self.input_layer(x).clamp(min=0)
        for r in range(random.randint(0, max_recurrences)):
            hidden_relu = self.hidden_layer(hidden_relu).clamp(min=0)
        y_pred = self.output_layer(hidden_relu)
        return y_pred
```

It would be a hassle to do something like this with other frameworks like TensorFlow or Theano, which require you to specify the computational graph (including conditionals, if any) before you can run the graph.

PyTorch's define-the-graph-by-running-it approach is sooo nice for quick-n'-dirty experimentation with dynamic graphs.

You can even create and tinker with dynamic graphs *interactively* on a Python REPL :-)
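The define-by-run idea cs702 describes can be illustrated even without PyTorch: the "graph" is simply whatever operations actually execute for a given input. Here is a toy, tensor-free sketch -- the layer names mirror the MySillyDNN example, but the depth is derived deterministically from the input rather than drawn at random, purely for demonstration:

```python
# Toy illustration of define-by-run (no PyTorch involved):
# the "graph" is just a record of the ops that actually execute,
# so it can differ from one input sample to the next.

def forward(x, max_recurrences=3):
    graph = ["input_layer"]
    # Ordinary Python control flow decides the depth per input;
    # here it depends deterministically on x for demonstration.
    for _ in range(x % (max_recurrences + 1)):
        graph.append("hidden_layer")
    graph.append("output_layer")
    return graph

print(forward(0))  # ['input_layer', 'output_layer']
print(forward(2))  # two 'hidden_layer' entries in between
```

A static-graph framework would force you to express that loop as graph-level control-flow ops declared ahead of time; here it is just Python.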
senko · almost 8 years ago
For those of us who don't know what this is:

"PyTorch is a Python package that provides two high-level features:

- Tensor computation (like NumPy) with strong GPU acceleration
- Deep neural networks built on a tape-based autograd system

You can reuse your favorite Python packages such as NumPy, SciPy and Cython to extend PyTorch when needed.

We are in an early-release beta. Expect some adventures and rough edges."
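The "tape-based autograd system" mentioned in the quote can be sketched in a few lines of plain Python. This is a toy illustration of the concept, not PyTorch's actual machinery: each arithmetic op appends a closure to a tape, and backward() replays the tape in reverse, accumulating gradients via the chain rule.

```python
# Toy tape-based autograd sketch (illustrative only, not PyTorch's
# implementation). Every op records how to push gradients back to
# its inputs; backward() replays those records in reverse order.

class Var:
    _tape = []  # shared tape recording one forward pass

    def __init__(self, value):
        self.value = value
        self.grad = 0.0

    def __add__(self, other):
        out = Var(self.value + other.value)
        def back(a=self, b=other, o=out):
            a.grad += o.grad            # d(a+b)/da = 1
            b.grad += o.grad            # d(a+b)/db = 1
        Var._tape.append(back)
        return out

    def __mul__(self, other):
        out = Var(self.value * other.value)
        def back(a=self, b=other, o=out):
            a.grad += o.grad * b.value  # d(a*b)/da = b
            b.grad += o.grad * a.value  # d(a*b)/db = a
        Var._tape.append(back)
        return out

    def backward(self):
        self.grad = 1.0
        for back in reversed(Var._tape):
            back()
        Var._tape.clear()  # reset the tape for the next forward pass

x = Var(2.0)
y = x * x + x * Var(3.0)  # y = x^2 + 3x
y.backward()
print(y.value, x.grad)    # 10.0 7.0  (dy/dx = 2x + 3 = 7 at x = 2)
```

Because the tape is built as the forward pass runs, this scheme handles arbitrary Python control flow for free, which is exactly what makes the dynamic-graph examples in this thread work.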
alexcnwy · almost 8 years ago
There was a great podcast with Soumith Chintala on the O'Reilly Data Show a couple of days back with more info on PyTorch and how it differs from Theano and TensorFlow:

https://www.oreilly.com/ideas/why-ai-and-machine-learning-researchers-are-beginning-to-embrace-pytorch
kbullaughey · almost 8 years ago
Also of note is that the Lua version of Torch seems to be in maintenance mode now.

https://twitter.com/soumithchintala/status/894173538008956928
toisanji · almost 8 years ago
It is my favorite neural network library as well. TensorFlow is very heavy for testing.