
Using Intel’s Xeon Phi for Brain Research Visualization

90 points | by cvursache | almost 9 years ago

6 comments

reitzensteinm | almost 9 years ago
The lowest-priced Xeon Phi in this generation is $2,348 (1.3 GHz, 64 cores). I can't help but feel Intel would do well to introduce an enthusiast product into the lineup, even 1.0 GHz and 48 cores for $1,000.

They're Tesla-priced without an equivalent desktop gamer graphics card, and that means you can't just dip your toe into the water; you've got to buy the canoe up front.

Programming on a normal x86 doesn't really count, because there's no way to get a feel for what is fast and slow when you're using a monster of a core capable of running your poor code more quickly than it deserves.
yolesaber | almost 9 years ago
"Figure 1: Even first in-silico models show the complexity and beauty of the brain"

Man the human brain is such a narcissist
doctorpangloss | almost 9 years ago
I don't know if using Xeon Phi for rendering makes that much sense. It's the sort of problem it's least competitive at solving on a raw-performance, performance-per-watt, or development-cost basis.

> However, 'smaller' is a relative term as current visualizations can occur on a machine that contains less than a terabyte of RAM. Traditional raster-based rendering would have greatly increased the memory consumption as the convoluted shape of each neuron would require a mesh containing approximately 100,000 triangles per neuron.

That sounds like a poor approach to this problem. You could write a shader that renders thick lines for the dendrites, and the rest of the geometry can be conventional meshes. The same shader could have a pass specially designed for lines and depth-of-field rendering. That's the one unusual shader. It's hard, but not super hard, to write. [0]

Besides, unless you need this to run in real time (which the Xeon Phi can't do anyway), you could just raster-render and page in the mesh data from wherever. So what if it's slow?

I think highly technical platform decisions like Xeon Phi versus NVIDIA CUDA are really about the details. You have to educate the reader both on the differences that matter and on why they should choose one over the other. The comment in the article, "no GPU dependencies," is a very PR-esque, don't-mention-your-competitor dance around what they're actually trying to say: the CUDA ecosystem can be a pain, since you can't buy the MacBook Pro with the GTX 750M easily, installing all its drivers is error-prone, SIP gets in the way of everything, Xcode and CUDA updates tend to break each other, etc. etc.

I sound like I know what I'm talking about, right? Intel's just not getting it. Show a detailed application where Xeon Phi really excels. NVIDIA's accelerated science examples go back a decade, and some, like the GPU-accelerated grid-solver Navier-Stokes fluid examples, are still state of the art.

The competition in rendering is intense. A number of production-ready renderers, like Arion, Octane, and mental ray (specifically iray, NVIDIA's GPU-accelerated renderer), perform best on or are exclusive to the CUDA platform. Conversely, you probably get the most flexibility from a platform like V-Ray or RenderMan, whose support for GPU acceleration is limited. Intel Embree has a strong presence today in baked lighting for game engines, but I think NVIDIA's OptiX is a lot faster.

[0] https://mattdesl.svbtle.com/drawing-lines-is-hard
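[Editor's note: a quick back-of-envelope check on the quoted memory concern: at three vertices of three 4-byte floats per triangle (ignoring normals, indices, and vertex sharing), 100,000 triangles is roughly 3.6 MB of mesh per neuron, so a terabyte of RAM would cap a naive per-neuron mesh at a few hundred thousand neurons. To make the thick-line suggestion concrete, below is a minimal CPU-side sketch of the screen-space expansion trick described in the linked article [0]: extrude each segment of a dendrite polyline into a quad by offsetting its endpoints along the segment's normal. The function and parameter names are illustrative assumptions, not code from any renderer mentioned in the thread; a real implementation would do this per-vertex in a shader.

    import numpy as np

    def expand_segment(p0, p1, width):
        """Expand one 2D screen-space line segment into a 4-vertex quad.

        Mirrors the vertex-shader trick for thick lines: offset each
        endpoint by half the line width along the segment's normal,
        then draw the four corners as a triangle strip.
        """
        p0 = np.asarray(p0, dtype=float)
        p1 = np.asarray(p1, dtype=float)
        d = p1 - p0
        d = d / np.linalg.norm(d)      # unit direction along the segment
        n = np.array([-d[1], d[0]])    # perpendicular: the screen-space normal
        h = 0.5 * width * n
        # Triangle-strip order: two triangles covering the widened segment.
        return np.vstack([p0 - h, p0 + h, p1 - h, p1 + h])

    # Example: a 4-pixel-wide dendrite segment from (10, 10) to (200, 120).
    print(expand_segment((10, 10), (200, 120), width=4.0))
]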
dibanez | almost 9 years ago
One of the interesting things to keep in mind is that these new Xeon Phi cards can be used as standalone CPUs, not just as PCIe cards like a GPU. This is the "self-hosted mode" the article talks about. So one can now think about comparing a lone Xeon Phi doing both jobs versus a CPU plus an NVIDIA GPU.
WhitneyLand | almost 9 years ago
This article is too fluffy; it sounds like it had help from Intel's PR dept.

I certainly hope Phi has more advantages than the write-once-run-anywhere / portability angle they kept pushing.

Has anyone chosen Phi for a real project that was in no way funded or subsidized by Intel?
drwdal | almost 9 years ago
I'm excited for Xeon Phi even with the expense of it, but Intel needs to realize that even though they dominate in x86, they need to price competitively.