OK, it's officially the new new thing when corporate communications types are sniping at each other :-)<p>Once we start seeing press releases claiming that products not yet available will kill the existing competition, we'll know the hype train has officially gone super-product-sonic (that is, a hype wave travelling faster than the product releases can support).
> Titan uses four-year-old GPUs<p>... as does nearly every public cloud provider. I agree with most of the article, but you can't fault Intel for benchmarking the hardware that cloud providers are actually offering.<p>I'm not sure what exactly NVIDIA is doing with their Tesla product line but whatever it is, it's really restricting the availability of recent GPU hardware. Even Azure's GPU instances released this month are using the Kepler architecture from 2012. It's fully two generations out of date now, and that's sad.
@imaleppert Agreed, getting the details on the testing would be helpful. I think NV was more pointing out that Intel was putting its latest against NV's oldest. It'd be like testing an RX 480 against a GTX 660. What's the use of that?<p>@modeless the new Azure instances have M60s, or you can purchase a 1080 or new Titan X, which are both available (although stock has been tight).<p><a href="https://azure.microsoft.com/en-us/blog/azure-n-series-preview-availability/" rel="nofollow">https://azure.microsoft.com/en-us/blog/azure-n-series-previe...</a>
Why don't they provide a link to their testing methodology? Both sides need to back up their claims with the actual configuration, all software versions, and sample datasets so people can independently verify them.<p>A docker container that runs their performance suite would be ideal.
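A rough sketch of what that could look like; the image name, Dockerfile, and model path here are illustrative, not from either vendor's actual methodology (`caffe time` is Caffe's built-in benchmarking tool):

```shell
# Hypothetical reproducible-benchmark harness. The Dockerfile would pin the
# exact Caffe fork, commit, and compiler flags so anyone can rerun it.
docker build -t dl-bench:caffe-pinned .

# Run the same timing workload inside the pinned environment.
docker run --rm dl-bench:caffe-pinned \
    caffe time -model models/bvlc_alexnet/deploy.prototxt -iterations 50
```

Publishing the Dockerfile alongside the numbers would settle most of the "which version did you test?" arguments up front.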
I don't usually believe any vendor-provided performance benchmark unless I fabricated it myself. On a more serious note, benchmarking is pretty hard, and people tend to discount factors that seem minor until one of them turns out to be the bottleneck, like the interconnect in this case. Another problem with synthetic benchmarks is that you can always optimize for your exact use case, and that usually yields pretty good improvements, comparable to buying faster equipment. The ultimate question is which is more cost-efficient: buying faster CPUs/GPUs or hiring a performance expert.
They do not mention the version of Caffe used to test the Intel systems; Intel bases its numbers on an optimized branch of Caffe, not the public (BVLC) version.
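This is exactly the kind of detail a published methodology should pin down. For anyone trying to reproduce the numbers, recording which fork and commit was actually built is cheap; a hedged sketch, assuming Caffe was built from a git checkout (the path is illustrative):

```shell
# Record the exact Caffe fork and commit behind a benchmark run.
cd ~/caffe                        # illustrative checkout path
git remote -v                     # BVLC/caffe vs. an Intel-optimized fork
git log -1 --format='%H %ci'      # exact commit hash and date
git status --short                # any local patches applied on top
```

Without at least this much, "Caffe" in a benchmark table could mean almost anything.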