
Distributing a Fully Connected Neural Network Across a Cluster

30 points by iamtrask over 10 years ago

2 comments

ajtulloch over 10 years ago
How is this on the front page? This is completely incoherent.

For anyone actually interested in some interesting techniques for multi-GPU DNN training, http://arxiv.org/pdf/1404.5997v2.pdf and the references therein are probably a good start.
Comment #8656322 not loaded
Comment #8656361 not loaded
Comment #8656305 not loaded
dhaivatpandya over 10 years ago
The exposition is not very clear. What exactly do you mean when you say "No edges will be communicated over the network, only half of the nodes."? I'm puzzled, because a few sentences later, you claim "The only network IO that would be required would be sending each edge value to its respective node in Q."; so the edge values are actually communicated?

From what I've understood, what you're suggesting is that for every node in a layer, you colocate the edges on the same machine?
Comment #8656393 not loaded
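For readers puzzling over the same question, the scheme dhaivatpandya seems to be describing can be sketched in a few lines. This is my reading, not the article's code: shard a fully connected layer's output nodes across workers and colocate each node's incoming edges (weight rows) with it, so only the previous layer's activations cross the network while the weights stay put. The array sizes and worker count below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out, n_workers = 8, 6, 3
W = rng.standard_normal((n_out, n_in))  # full weight matrix, shown for reference
x = rng.standard_normal(n_in)           # previous layer's activations

# Partition: worker k holds the weight rows (incoming edges) for its
# slice of the output nodes. The weights never leave their machine.
shards = np.array_split(W, n_workers, axis=0)

# The only "network IO": broadcast x to every worker; each one then
# computes its slice of the next layer's activations locally.
partial = [shard @ x for shard in shards]
y = np.concatenate(partial)

assert np.allclose(y, W @ x)  # matches the single-machine result
```

Under this reading, what travels over the network per layer is one activation vector per worker, not the (much larger) edge values, which may be what the article's "only half of the nodes" phrasing was gesturing at.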