I was running experiments on Torch and TensorFlow, and noticed that my old CPU was bottlenecking the GPU. One NVIDIA advisor I talked to corroborated that GPU acceleration mostly helps the convolution layers, while the rest of the layers are CPU-bound. That's why NVIDIA uses dual 20-core Xeon CPUs to feed the Tesla GP100s in the DGX-1.

With this nugget of information in hand, I started researching massive many-core CPUs and came across the Developer Preview of an Intel Xeon Phi machine:

Specs and price: http://dap.xeonphi.com/#platformspecs
Video preview: https://www.youtube.com/watch?v=s2Z3O32am9I

It looks like a solid workstation with a 64-core Xeon Phi, which is fully binary-compatible with a normal Xeon. I was wondering: if I buy their liquid-cooled workstation for $5K and stick two NVIDIA Pascal Titan X cards in the PCIe x16 slots, would that be a good price-performance combo?

I understand the AVX-512 units will be wasted, but my purpose is to use the 64 cores for normal x86 compute threads that feed the Titans. It seems like this system could match the NVIDIA DIGITS workstation for half the price, which is my main objective.

Thoughts? Critiques? Suggestions?

Thanks!
Bottlenecked on the CPU, yes, but probably only by a single thread (or a small number of threads). Xeon Phi will give you lots of slow threads. For the CPU you'd be better off getting a low-core-count processor with the best clock rate you can find/afford. The gamer-oriented i7 Extreme parts are usually good.
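Before committing to either box, it's worth measuring whether your feeding workload actually scales with more (slower) cores or is serialized on one fast thread. A minimal sketch of that check, where `preprocess` is a hypothetical stand-in for your real per-sample CPU work (decoding, augmentation, batch assembly):

```python
import time
from multiprocessing import Pool

def preprocess(seed):
    # Hypothetical stand-in for per-sample CPU work; a pure-Python
    # LCG loop so the benchmark is self-contained and CPU-bound.
    x = seed
    for _ in range(100_000):
        x = (x * 1103515245 + 12345) % (2 ** 31)
    return x

def throughput(n_samples, n_workers):
    """Samples per second processed with the given worker count."""
    start = time.perf_counter()
    if n_workers == 1:
        for i in range(n_samples):
            preprocess(i)
    else:
        with Pool(n_workers) as pool:
            pool.map(preprocess, range(n_samples))
    elapsed = time.perf_counter() - start
    return n_samples / elapsed

if __name__ == "__main__":
    # If throughput roughly doubles from 1 -> 2 -> 4 workers, many slow
    # cores help; if it plateaus early, fewer fast cores win.
    for workers in (1, 2, 4):
        print(f"{workers} worker(s): {throughput(64, workers):.1f} samples/s")
```

If the scaling curve flattens out at a handful of workers, a high-clock i7 will feed the GPUs better than 64 slow Xeon Phi cores, regardless of the core count on the spec sheet.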