The lowest-priced Xeon Phi in this generation is $2,348 (1.3 GHz, 64 cores). I can't help but feel Intel would do well to introduce an enthusiast product into the lineup, even 1.0 GHz and 48 cores for $1,000.

They're Tesla-priced without an equivalent desktop gamer graphics card, which means you can't just dip your toe into the water; you have to buy the canoe up front.

Programming on a normal x86 doesn't really count, because there's no way to get a feel for what is fast and slow when you're using a monster of a core capable of running your poor code more quickly than it deserves.
I don't know if using Xeon Phi for rendering makes that much sense. Rendering is roughly the problem where it's least competitive, whether you measure raw performance, performance per watt, or development cost.

> However, ‘smaller’ is a relative term as current visualizations can occur on a machine that contains less than a terabyte of RAM. Traditional raster-based rendering would have greatly increased the memory consumption as the convoluted shape of each neuron would require a mesh containing approximately 100,000 triangles per neuron.

That sounds like a poor approach to this problem. You could write a shader that renders thick lines for the dendrites, and the rest of the geometry can be conventional meshes. The same shader could have a pass specially designed for lines and depth-of-field rendering. That's the one unusual shader. It's hard, but not super hard, to write. [0] (There's a rough sketch of the screen-space math at the end of this comment.)

Besides, unless you need this to run in real time (which the Xeon Phi doesn't do anyway), you could just raster-render and page in the mesh data from wherever. So what if it's slow?

I think highly technical platform decisions like Xeon Phi versus NVIDIA CUDA really come down to the details. You have to educate the reader both on the differences that matter and on why they should choose one over the other. The comment in the article, "no GPU dependencies," is a very PR-esque, don't-mention-your-competitor dance around what they're actually trying to say: the CUDA ecosystem can be a pain, since you can't easily buy the MacBook Pro with the GTX 750M, installing all its drivers is error-prone, SIP gets in the way of everything, Xcode and CUDA updates tend to break each other, etc., etc.

I sound like I know what I'm talking about, right? Intel's just not getting it. Show a detailed application where Xeon Phi really excels. NVIDIA's accelerated-science examples go back a decade, and some, like the GPU-accelerated grid-based Navier-Stokes fluid solvers, are still state of the art.

The competition in rendering is intense. A number of production-ready renderers, such as Arion, Octane, and mental ray (specifically Iray, NVIDIA's GPU-accelerated renderer), perform best on, or are exclusive to, the CUDA platform. Conversely, you probably get the most flexibility from a platform like V-Ray or RenderMan, whose support for GPU acceleration is limited. Intel Embree has a great presence today in baked lighting for game engines, but I think NVIDIA's OptiX is a lot faster.

[0] https://mattdesl.svbtle.com/drawing-lines-is-hard
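To make the thick-line idea above concrete, here's a minimal CPU-side sketch in C++ of the screen-space expansion trick the linked post describes. The names and numbers are mine, not from the article; in a real renderer this math would live in a vertex or geometry shader, with each dendrite segment expanded into a camera-facing quad whose half-width is given in pixels.

    // Sketch of screen-space thick-line expansion, the core trick behind
    // rendering dendrites as "fat" lines instead of full tube meshes.
    // Names are illustrative only; the real version lives in a shader.
    #include <cmath>
    #include <cstdio>

    struct Vec2 { float x, y; };

    // Given two segment endpoints already projected to screen space,
    // return the perpendicular offset that pushes a vertex sideways by
    // half the desired line width (in pixels).
    Vec2 lineOffset(Vec2 a, Vec2 b, float widthPx) {
        float dx = b.x - a.x, dy = b.y - a.y;
        float len = std::sqrt(dx * dx + dy * dy);
        if (len == 0.0f) return {0.0f, 0.0f};
        // Normal of the segment direction, scaled to half the line width.
        return { -dy / len * 0.5f * widthPx, dx / len * 0.5f * widthPx };
    }

    int main() {
        Vec2 a{100.0f, 100.0f}, b{300.0f, 180.0f};
        Vec2 off = lineOffset(a, b, 4.0f);  // a 4-pixel-wide line
        // The four corners of the quad that replaces the segment:
        std::printf("(%.1f,%.1f) (%.1f,%.1f) (%.1f,%.1f) (%.1f,%.1f)\n",
                    a.x + off.x, a.y + off.y, a.x - off.x, a.y - off.y,
                    b.x + off.x, b.y + off.y, b.x - off.x, b.y - off.y);
    }

The only genuinely fiddly parts on top of this are joins, caps, and the depth-of-field pass, which is what the linked article is about.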
One of the interesting things to keep in mind is that these new Xeon Phi cards can be used as standalone CPUs, not just as PCIe cards like a GPU. This is the "self-hosted mode" the article talks about. So one can now think about comparing a lone Xeon Phi doing both the host and accelerator jobs versus a CPU plus an NVIDIA GPU.
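As a hypothetical illustration of what self-hosted mode means in practice (my sketch, not from the article): a plain OpenMP loop like the one below runs unchanged on a self-booting Knights Landing part, with no offload pragmas or device memory management, whereas the GPU route means wrapping the kernel in CUDA and copying data over PCIe.

    // Minimal sketch: a standard OpenMP SAXPY loop that runs unchanged on a
    // self-hosted Xeon Phi. No offload directives, no device buffers; you
    // just compile for the target, e.g. with Intel's compiler:
    //   icc -qopenmp -xMIC-AVX512 saxpy.cpp
    #include <vector>
    #include <cstdio>

    int main() {
        const long n = 1 << 24;
        std::vector<float> x(n, 1.0f), y(n, 2.0f);
        const float a = 3.0f;

        // The compiler vectorizes this for AVX-512 and OpenMP spreads it
        // across the Phi's cores, exactly as it would on a regular Xeon.
        #pragma omp parallel for
        for (long i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];

        std::printf("y[0] = %f\n", y[0]);  // expect 5.0
    }

Whether that convenience outweighs the raw throughput of a discrete GPU is exactly the comparison the parent comments are arguing about.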
This article is too fluffy; it sounds like it had help from Intel's PR dept.

I certainly hope Phi has more advantages than the write-once-run-anywhere / portability angle they kept pushing.

Has anyone chosen Phi for a real project that was in no way funded or subsidized by Intel?
I'm excited about Xeon Phi even at this expense, but Intel needs to realize that even though they dominate in x86, they still need to price competitively.