I wish more attention would go towards open source alternatives to CUDA, such as AMD's ROCm[1][2] and the Julia framework that targets it, AMDGPU.jl[3]. It's sad to see so many people praise NVIDIA, which is hostile to open source, openly opposed to anything except its oversized proprietary binary blobs.<p>[1] <a href="https://rocmdocs.amd.com/en/latest/index.html" rel="nofollow">https://rocmdocs.amd.com/en/latest/index.html</a><p>[2] <a href="https://github.com/RadeonOpenCompute/ROCm" rel="nofollow">https://github.com/RadeonOpenCompute/ROCm</a><p>[3] <a href="https://github.com/JuliaGPU/AMDGPU.jl" rel="nofollow">https://github.com/JuliaGPU/AMDGPU.jl</a>
A fun fact is that GPUCompiler.jl, which compiles Julia code to run on GPUs, is currently the way to generate standalone binaries without bundling the whole ~200 MB Julia runtime into the binary.<p><a href="https://github.com/JuliaGPU/GPUCompiler.jl/" rel="nofollow">https://github.com/JuliaGPU/GPUCompiler.jl/</a> <a href="https://github.com/tshort/StaticCompiler.jl/" rel="nofollow">https://github.com/tshort/StaticCompiler.jl/</a>
Are there similar things for other types of GPUs?<p>Edit: the site has one project per GPU type; it's a shame there isn't a single interface that works with every GPU type instead.