Ask HN: Building a Deep Learning Workstation: Intel Phi and New Pascal Titan X?

4 points by ActsJuvenile almost 9 years ago
I was running experiments on Torch and TensorFlow and noticed that my old CPU was bottlenecking the GPU. An NVidia advisor I talked to corroborated that GPU acceleration mostly helps the convolution layers, and the rest of the layers are CPU-restricted. That's why NVidia uses dual 20-core Xeon CPUs to feed the Tesla GP100s in the DGX-1.

With this nugget of information in hand, I started researching massive multi-core CPUs and came across the Developer Preview of an Intel Xeon Phi machine:

Specs and price: http://dap.xeonphi.com/#platformspecs
Video preview: https://www.youtube.com/watch?v=s2Z3O32am9I

It looks like a solid workstation with a 64-core Xeon Phi, which is fully binary-compatible with a normal Xeon. I was wondering: if I buy their liquid-cooled workstation for $5K and stick two NVidia Pascal Titan X cards in the PCIe x16 slots, would that be a good price-performance combo?

I understand AVX-512 will be wasted, but my purpose is to use the 64 cores for normal x86 compute threads that feed the Titans. It seems like this system can match the NVidia Digits workstation for half the price, which is my main objective.

Thoughts? Critiques? Suggestions?

Thanks!
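The "CPU threads feeding the GPUs" pattern the post describes is what a parallel input pipeline automates. A minimal sketch, assuming PyTorch (the successor to the Lua Torch mentioned above); the random tensors and tiny model are placeholders, not anything from the original thread:

    # Minimal sketch of spreading batch preparation across CPU workers so the
    # GPU is not starved. Assumes PyTorch; data and model are placeholders.
    import torch
    from torch.utils.data import DataLoader, TensorDataset

    def main():
        # Placeholder data standing in for a real image dataset.
        dataset = TensorDataset(torch.randn(512, 3, 64, 64),
                                torch.randint(0, 10, (512,)))

        # num_workers is the knob that fans preprocessing out across CPU cores.
        loader = DataLoader(dataset, batch_size=64, num_workers=8, pin_memory=True)

        device = "cuda" if torch.cuda.is_available() else "cpu"
        model = torch.nn.Conv2d(3, 32, kernel_size=3).to(device)

        for batch, _ in loader:
            model(batch.to(device, non_blocking=True))

    if __name__ == "__main__":
        main()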

1 comment

wandering_logic almost 9 years ago
Bottlenecked on the CPU, yes, but probably only on a single thread (or a small number of threads) on the CPU. Xeon Phi will give you lots of slow threads. For the CPU you'd be better off getting a low-core-count processor with the best clock rate you can find/afford. The gamer-oriented i7 Extremes are usually good.
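One way to check that claim on an existing box before spending $5K: time how long each iteration waits for the next batch versus how long the model step itself takes. A rough sketch, again assuming PyTorch with placeholder data (nothing here comes from the original thread):

    # If batch-fetch time dominates, the CPU-side pipeline is the bottleneck and
    # faster (or more) cores help; if the model step dominates, extra CPU cores
    # are wasted. Assumes PyTorch; data and model are placeholders.
    import time
    import torch
    from torch.utils.data import DataLoader, TensorDataset

    def main():
        device = "cuda" if torch.cuda.is_available() else "cpu"
        model = torch.nn.Conv2d(3, 32, kernel_size=3).to(device)
        dataset = TensorDataset(torch.randn(2048, 3, 64, 64),
                                torch.randint(0, 10, (2048,)))
        loader = DataLoader(dataset, batch_size=64, num_workers=4)

        fetch_time = compute_time = 0.0
        it = iter(loader)
        while True:
            t0 = time.perf_counter()
            try:
                batch, _ = next(it)          # time spent waiting on CPU workers
            except StopIteration:
                break
            t1 = time.perf_counter()
            model(batch.to(device))
            if device == "cuda":
                torch.cuda.synchronize()     # make GPU work visible to the clock
            t2 = time.perf_counter()
            fetch_time += t1 - t0
            compute_time += t2 - t1

        print(f"data feeding: {fetch_time:.2f}s, model compute: {compute_time:.2f}s")

    if __name__ == "__main__":
        main()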