
Ask HN: Building a Deep Learning Workstation: Intel Phi and New Pascal Titan X?

4 points | by ActsJuvenile | almost 9 years ago
I was running experiments on Torch and TensorFlow, and noticed that my old CPU was bottlenecking the GPU. One NVidia advisor I talked to corroborated that GPU acceleration mostly helps the convolution layers, while the rest of the layers are CPU-bound. That's why NVidia uses dual 20-core Xeon CPUs to feed the Tesla GP100s in the DGX-1.

With this nugget of information in hand, I started researching massively multi-core CPUs and came across the Developer Preview of an Intel Xeon Phi machine:

Specs and price: http://dap.xeonphi.com/#platformspecs
Video preview: https://www.youtube.com/watch?v=s2Z3O32am9I

It looks like a solid workstation with a 64-core Xeon Phi, which is fully binary-compatible with a normal Xeon. I was wondering: if I buy their liquid-cooled workstation for $5K and stick two NVidia Pascal Titan X cards in the PCIe x16 slots, would it be a good price-performance combo?

I understand AVX-512 will be wasted, but my purpose is to use the 64 cores for normal x86 compute threads that feed the Titans. It seems like this system could match the NVidia DIGITS workstation at half the price, which is my main objective.

Thoughts? Critiques? Suggestions?

Thanks!

1 comment

wandering_logic | almost 9 years ago
Bottlenecked on the CPU, yes, but probably only on a single thread (or a small number of threads) on the CPU. Xeon Phi will give you lots of slow threads. For the CPU you'd be better off getting a low-core-count processor with the best clock rate you can find/afford. The gamer-oriented i7 Extremes are usually good.
Comment #12166156 not loaded
Comment #12175514 not loaded