Most of us who have played around with deep learning frameworks know that you need a CUDA-enabled GPU to get proper hardware acceleration. And Nvidia is the only one that produces chips with that proprietary technology.<p>My question is: why haven't we seen a shift away from this? It seems very limiting and closed for an ecosystem that is otherwise rather open and progressive. Will TensorFlow et al. eventually adopt OpenCL / Metal / etc., or is there some reason that we'll still be stuck with CUDA in the near term?<p>I am just a hobbyist, so my assumptions may be completely off, and maybe these things are already happening. This has just been a question on my mind lately, and I felt that this is a good place to ask.<p>Thanks!
I am not an ML person by any means, but when I played with GPUs, I found that the time it takes to get something done using CUDA is substantially lower than with OpenCL (nota bene, I have not yet tried Metal).<p>The barrier to entry also seems to be lower for CUDA, so this might be something the TensorFlow people consider important.
I don't have experience with OpenCL or compute shaders. My experience with CUDA told me that the language design is tied closely to the hardware, and to write efficient CUDA code, you have to understand the hardware architecture well. It's hard for me to imagine a general-purpose GPU language that works on all hardware, unless GPU hardware becomes standardized the way x86 or ARM is.
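To illustrate the parent's point about hardware awareness, here is a minimal sketch (kernel names and the `chunk` parameter are made up for illustration) of one such detail: memory coalescing. Both kernels below do the same arithmetic, but adjacent threads in a warp must touch adjacent addresses for the hardware to merge their loads into a few wide transactions.

```cuda
// Both kernels compute y[i] = 2*x[i] over n floats; only the access
// pattern differs.

// Coalesced: thread i reads x[i], so adjacent threads in a warp hit
// adjacent addresses and the loads merge into wide transactions.
__global__ void scale_coalesced(const float *x, float *y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = 2.0f * x[i];
}

// Strided: each thread walks its own contiguous chunk, so adjacent
// threads hit addresses `chunk` floats apart. Each warp-wide load
// splinters into many transactions -- often several times slower,
// despite identical arithmetic.
__global__ void scale_strided(const float *x, float *y, int n, int chunk) {
    int start = (blockIdx.x * blockDim.x + threadIdx.x) * chunk;
    for (int j = start; j < start + chunk && j < n; ++j)
        y[j] = 2.0f * x[j];
}
```

Getting this kind of thing right depends on knowing how warps and the memory subsystem work on Nvidia's parts specifically, which is part of why a single portable GPU language is hard.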