
Distributing a Fully Connected Neural Network Across a Cluster

30 points by iamtrask over 10 years ago

2 comments

ajtulloch over 10 years ago
How is this on the front page? This is completely incoherent.

For anyone actually interested in some interesting techniques for multi-GPU DNN training, http://arxiv.org/pdf/1404.5997v2.pdf and references therein are probably a good start.
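(A minimal sketch, not code from the linked paper: the simplest multi-GPU technique in this family is data parallelism, where each worker computes a gradient on its own minibatch shard and the gradients are averaged before the shared weights are updated. Names and sizes here are illustrative assumptions.)

```python
import numpy as np

rng = np.random.default_rng(1)
n_features, n_workers = 4, 2
w = np.zeros(n_features)                 # shared model weights (start at zero)

def local_gradient(w, X, y):
    # least-squares gradient on this worker's shard: X^T (Xw - y) / n
    return X.T @ (X @ w - y) / len(y)

X = rng.standard_normal((8, n_features))
y = rng.standard_normal(8)
shards = zip(np.array_split(X, n_workers), np.array_split(y, n_workers))

# each worker computes its local gradient; averaging plays the role of
# an all-reduce, after which every worker applies the same update
grads = [local_gradient(w, Xk, yk) for Xk, yk in shards]
g_avg = np.mean(grads, axis=0)
w -= 0.1 * g_avg
```

With equal-size shards the averaged gradient equals the full-batch gradient, which is why this scheme leaves the math unchanged and only changes where it is computed.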
dhaivatpandya over 10 years ago
The exposition is not very clear. What exactly do you mean when you say "No edges will be communicated over the network, only half of the nodes."? I'm puzzled, because a few sentences later, you claim "The only network IO that would be required would be sending each edge value to its respective node in Q."; so the edge values are actually communicated?

From what I've understood, what you're suggesting is that for every node in a layer, you colocate the edge on the same machine?
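(A hedged sketch of the colocation scheme being asked about, not the article's actual code: a fully connected layer's output neurons are split across K workers, and each worker stores only the weight rows, i.e. edges, feeding its own neurons. Under this partitioning the weights never cross the network; only node activations are sent. All names and sizes are illustrative.)

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, K = 8, 6, 3                 # layer sizes and worker count

W = rng.standard_normal((n_out, n_in))   # full weight matrix, for reference
x = rng.standard_normal(n_in)            # input-layer activations

# partition: worker k holds the weight rows (edges) for its slice of
# output neurons, so edges stay resident on one machine
shards = np.array_split(W, K, axis=0)

# "communication": broadcast x (node values only) to every worker;
# each worker multiplies locally against its resident edges
partials = [Wk @ x for Wk in shards]

y_distributed = np.concatenate(partials)
y_reference = W @ x
assert np.allclose(y_distributed, y_reference)
```

The sketch matches one reading of the post: nodes (activations) travel over the network while edges (weights) are pinned to the machine that owns the corresponding output neurons, which is why the two quoted sentences can both be true if "edge value" in the second one actually refers to an edge's *output contribution* rather than its weight.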