
Ask HN: Will CUDA continue to dominate?

2 points by heynk, about 7 years ago
Most of us who have played around with deep learning frameworks know that you need a CUDA-enabled GPU to get proper hardware acceleration. And Nvidia is the only one who produces chips with that proprietary technology.

My question is: why haven't we seen a shift away from this? It seems very limiting and closed for an ecosystem that is otherwise rather open and progressive. Will TensorFlow et al. eventually adopt OpenCL / Metal / etc., or is there some reason that we'll still be stuck with CUDA in the near term?

I am just a hobbyist, so my assumptions may be completely off, and maybe these things are already happening. This has just been a question on my mind lately, and I felt that this is a good place to ask.

Thanks!

2 comments

lkurusa, about 7 years ago
I am not an ML person by any means, but when I played with GPUs, I found that the time it takes to get something done using CUDA is substantially lower than with OpenCL (nota bene: I have not yet tried Metal).

The barrier to entry also seems to be lower for CUDA, so this might be something the TensorFlow people consider important.
billconan, about 7 years ago
I don't have experience with OpenCL or compute shaders. My experience with CUDA taught me that the language design is tied closely to the hardware, and that to write efficient CUDA code you have to understand the hardware architecture well. It's hard for me to imagine a generic GPU language that works on all hardware, unless GPU hardware is standardized the way x86 or ARM is.
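To illustrate the point above (this example is mine, not from the thread): even a short CUDA kernel directly exposes hardware concepts such as thread blocks, barriers, and on-chip shared memory, which is part of what makes a portable abstraction hard. A minimal sketch, assuming a CUDA-capable GPU and a fixed block size of 256:

```cuda
#include <cstdio>

// Block-wise sum reduction: each block of 256 threads reduces its
// slice of the input using shared memory -- a physical, per-SM
// resource that CUDA exposes directly to the programmer.
__global__ void blockSum(const float* in, float* out, int n) {
    __shared__ float tile[256];              // on-chip shared memory
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                         // barrier across the block

    // Tree reduction within the block, halving the stride each step.
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s)
            tile[threadIdx.x] += tile[threadIdx.x + s];
        __syncthreads();
    }
    if (threadIdx.x == 0)
        out[blockIdx.x] = tile[0];           // one partial sum per block
}
```

Choosing the block size, laying out shared memory to avoid bank conflicts, and arranging memory accesses so they coalesce are all decisions made against a specific GPU generation; that is exactly the hardware knowledge the comment is describing.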