High-Performance GPU Computing in the Julia Programming Language

139 points | by ceyhunkazel | over 7 years ago

4 comments

jlebar · over 7 years ago
> This is in part because of the work by Google on the NVPTX LLVM back-end.

I'm one of the maintainers at Google of the LLVM NVPTX backend. Happy to answer questions about it.

As background, Nvidia's CUDA ("CUDA C++?") compiler, nvcc, uses a fork of LLVM as its backend. Clang can also compile CUDA code, using regular upstream LLVM as its backend. The relevant backend in LLVM was originally contributed by Nvidia, but these days the team I'm on at Google is the main contributor.

I don't know much (okay, anything) about Julia except what I read in this blog post, but the dynamic specialization looks a lot like XLA, a JIT backend for TensorFlow that I work on. So that's cool; I'm happy to see this work.

> Full debug information is not supported by the LLVM NVPTX back-end yet, so cuda-gdb will not work yet.

We'd love help with this. :)

> Bounds-checked arrays are not supported yet, due to a bug [1] in the NVIDIA PTX compiler. [0]

We ran into what appears to be the same issue [2] about a year and a half ago. Nvidia is well aware of the issue, but I don't expect a fix except by upgrading to Volta hardware.

[0] https://julialang.org/blog/2017/03/cudanative
[1] https://github.com/JuliaGPU/CUDAnative.jl/issues/4
[2] https://bugs.llvm.org/show_bug.cgi?id=27738
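For readers who want to see the pipeline this comment refers to, here is a minimal sketch of a Julia kernel compiled through CUDAnative.jl, which lowers the function to LLVM IR and hands it to the same NVPTX back-end. The package names and the `@cuda (blocks, threads)` launch form follow the 2017-era JuliaGPU API from the linked blog post and are assumptions, not something stated in this thread.

    using CUDAdrv, CUDAnative

    # Element-wise vector add; CUDAnative compiles this Julia function to PTX
    # via the LLVM NVPTX back-end discussed above.
    function kernel_vadd(a, b, c)
        i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
        c[i] = a[i] + b[i]
        return nothing
    end

    a = rand(Float32, 1024)
    b = rand(Float32, 1024)
    d_a, d_b, d_c = CuArray(a), CuArray(b), CuArray(zeros(Float32, 1024))

    @cuda (4, 256) kernel_vadd(d_a, d_b, d_c)   # 4 blocks of 256 threads
    @assert Array(d_c) ≈ a .+ b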
dragontamer · over 7 years ago
In my experience, CUDA / OpenCL are actually rather easy to use.

The hard part is optimization, because the GPU architecture (SIMD / SIMT) is so alien compared to normal CPUs.

Here's a step-by-step example of one guy optimizing a Matrix Multiplication scheme in OpenCL (specifically for NVidia GPUs): https://cnugteren.github.io/tutorial/pages/page1.html

Just like how high-performance CPU computing requires a deep understanding of cache and stuff... high-performance GPU computing requires a deep understanding of the various memory-spaces on the GPU.

------------

Now granted: deep optimization of routines on CPUs is similarly challenging, and actually undergoes a very similar process in how to partition your work problem into L1-sized blocks. But high-performance GPUs not only have to consider their L1 Cache... but also "Shared" (or OpenCL __local) memory and "Register" (or OpenCL __private) memory as well. Furthermore, GPUs in my experience have way less memory than CPUs per thread/shader. IE: Intel "Sandy Bridge" CPU has 64kb L1 cache per core, which can be used as 2 threads if hyperthreading is enabled. A "Pascal" GPU has 64kb of "Shared" memory, which is extremely fast like L1 cache. But this 64kb is shared between 64 FP32 cores!

Furthermore, not all algorithms run faster on GPGPUs either. For example:

https://askeplaat.files.wordpress.com/2013/01/ispa2015.pdf

This paper claims that their GPGPU implementation (Xeon Phi) was slower than the CPU implementation! Apparently, the game of "Hex" is hard to parallelize / vectorize.

---------------

Now don't get me wrong, this is all very cool and stuff. Making various programming tasks easier is always welcome. Just be aware that GPUs are no silver bullet for performance. It takes a lot of work to get "high-performance code", regardless of your platform.

And sometimes, CPUs are faster.
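As an illustration of the memory-space point above, here is a hedged sketch of a block-level sum reduction written against Julia's CUDAnative.jl that stages data through on-chip shared memory (CUDA "shared", OpenCL __local) rather than reading global memory repeatedly. The @cuStaticSharedMem macro and the fixed 256-thread block size are assumptions based on that era's JuliaGPU API, not details from this comment.

    using CUDAdrv, CUDAnative

    # Each block sums 256 elements of `input` into one entry of `partial`,
    # staging its tile in fast on-chip shared memory.
    function block_sum(input, partial, n)
        tid = threadIdx().x
        i   = (blockIdx().x - 1) * blockDim().x + tid

        tile = @cuStaticSharedMem(Float32, 256)
        tile[tid] = i <= n ? input[i] : 0f0
        sync_threads()

        # Tree reduction: each step halves the number of active threads.
        stride = blockDim().x ÷ 2
        while stride >= 1
            if tid <= stride
                tile[tid] += tile[tid + stride]
            end
            sync_threads()
            stride ÷= 2
        end

        if tid == 1
            partial[blockIdx().x] = tile[1]
        end
        return nothing
    end

    # Example launch: @cuda (cld(n, 256), 256) block_sum(d_x, d_partial, n)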
gravypod · over 7 years ago
> Julia has recently gained support for syntactic loop fusion, where chained vector operations are fused into a single broadcast

Wow. That's very impressive.

I hope one day we get this sort of tooling with AMD GPUs.
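To make the fusion feature concrete: in Julia, a chain of dotted operations is lowered into a single broadcast call, so a GPU array type can turn the whole expression into one kernel launch instead of one per operation. The sketch below assumes the CuArrays.jl package from the JuliaGPU ecosystem; it is an illustration, not something claimed in this comment.

    using CuArrays   # GPU array type from the JuliaGPU ecosystem (assumed)

    a = CuArray(rand(Float32, 1_000_000))
    b = CuArray(rand(Float32, 1_000_000))
    c = similar(a)

    # The dotted calls below are syntactically fused into a single broadcast,
    # so the backend can emit one GPU kernel with no intermediate temporaries.
    c .= sin.(2f0 .* a .+ b)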
jernfrost · over 7 years ago
How does the Julia approach compare to the alternatives in performance and ease of use? Can e.g. Python or R do this in any way?