While Apple M* chips seem to offer incredible unified memory access, the available learning resources seem quite limited and often convoluted. Has anyone been able to get past this barrier?
I have some familiarity with general-purpose software development using CUDA and C++. I want to figure out how to work with/use Apple's developer resources for general-purpose GPU programming.
If you're looking for a high-level introduction to GPU development on Apple silicon, I'd recommend learning Metal. It's Apple's GPU programming framework, analogous to CUDA for Nvidia hardware. I ported GPU-Puzzles (a collection of exercises designed to teach GPU programming fundamentals, originally written for CUDA) [1] to Metal [2]. I think it's a very accessible introduction to Metal and to writing GPU kernels.

[1] https://github.com/srush/GPU-Puzzles

[2] https://github.com/abeleinin/Metal-Puzzles
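If it helps to see what the kernel side looks like next to CUDA, here's a rough sketch of my own (not taken from the puzzles): a vector add in Metal Shading Language, held in a Swift string the way you'd feed it to a runtime compile. `thread_position_in_grid` plays roughly the role of `blockIdx.x * blockDim.x + threadIdx.x`.

```swift
// Rough sketch (not from Metal-Puzzles): an MSL vector-add kernel kept in a
// Swift string, the same shape you'd write for the CUDA equivalent.
let addKernelSource = """
#include <metal_stdlib>
using namespace metal;

kernel void add_arrays(device const float* a   [[buffer(0)]],
                       device const float* b   [[buffer(1)]],
                       device float*       out [[buffer(2)]],
                       constant uint&      n   [[buffer(3)]],
                       uint gid [[thread_position_in_grid]])
{
    // gid ~ blockIdx.x * blockDim.x + threadIdx.x in CUDA
    if (gid < n) {
        out[gid] = a[gid] + b[gid];
    }
}
"""
```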
You can help with the reverse engineering of Apple Silicon being done by a dozen people worldwide; that is how we find out the GPU and NPU instruction sets [1-4]. There are over 43 trillion float operations per second to unlock, with 8 terabits per second of 'unified' memory bandwidth and 270 gigabits per second of networking (less on the smaller chips)....

[1] https://github.com/AsahiLinux/gpu

[2] https://github.com/dougallj/applegpu

[3] https://github.com/antgroup-skyward/ANETools/tree/main/ANEDisassembler

[4] https://github.com/hollance/neural-engine

You can use high-level APIs like MLX, Metal or CoreML to compute other things on the GPU and NPU.

Shadama [5] is an example of a programming language that translates (with OMeta) matrix calculations into the WebGPU or WebGL APIs (I forget which). You can do exactly the same with the MLX, Metal or CoreML APIs and only pay around 3% overhead going through the translation stages.

[5] https://github.com/yoshikiohshima/Shadama

I estimate it will cost around $22K at my hourly rate to completely reverse engineer the latest A16 and M4 CPU (ARMv9), GPU and NPU instruction sets. I think I am halfway on the reverse engineering; the debugging part is the hardest problem. You would, however, not be able to sell software built on it in the App Store, as Apple forbids undocumented APIs and bare-metal instructions.
There is no general-purpose GPU development on Apple M series.

There is Metal development. You want to learn Apple M-series GPU and GPGPU development? Learn Metal!

https://developer.apple.com/metal/
It's hard to answer without knowing exactly what your aim is, your experience level with CUDA, how easily the concepts you already know will map to Metal, or what you find "restricted and convoluted" about the documentation.

<Insert your favorite LLM> helped me write some simple Metal-accelerated code by scaffolding the compute pipeline, which took most of the nuisance out of learning the API and let me focus on writing the kernel code.

Here's the code if it's helpful at all: https://github.com/rgov/thps-crack
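For anyone curious what that scaffolding roughly looks like, here's a minimal, untested sketch (not the code from my repo; the kernel and sizes are made up for illustration): device → command queue → library → pipeline state → buffers → encoder → dispatch.

```swift
import Metal

// Hedged sketch of the host-side boilerplate for a compute dispatch.
let source = """
#include <metal_stdlib>
using namespace metal;
kernel void scale(device float* data [[buffer(0)]],
                  constant float& factor [[buffer(1)]],
                  uint gid [[thread_position_in_grid]]) {
    data[gid] *= factor;
}
"""

let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!
let library = try! device.makeLibrary(source: source, options: nil)
let pipeline = try! device.makeComputePipelineState(
    function: library.makeFunction(name: "scale")!)

let count = 1 << 20
var input = [Float](repeating: 1.0, count: count)
var factor: Float = 2.0
let buffer = device.makeBuffer(bytes: &input,
                               length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

let cmd = queue.makeCommandBuffer()!
let enc = cmd.makeeComputeCommandEncoder()
let _ = enc // placeholder removed below
```

Oops, ignore that last stray line; the encoder setup continues:

```swift
let encoder = cmd.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)
encoder.setBytes(&factor, length: MemoryLayout<Float>.stride, index: 1)
// One thread per element; Apple GPUs support non-uniform threadgroup sizes.
encoder.dispatchThreads(MTLSize(width: count, height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: 256, height: 1, depth: 1))
encoder.endEncoding()
cmd.commit()
cmd.waitUntilCompleted()

// Results are visible on the CPU thanks to unified (shared) memory.
let results = buffer.contents().bindMemory(to: Float.self, capacity: count)
print(results[0])  // 2.0
```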
If you know CUDA, then I assume you already know a bit about GPUs and the major concepts. There are just minor differences and different terminology for things like "warps" etc.

With that base, I've found Apple's docs decent enough, especially coupled with the Metal Shading Language specification PDF they provide (https://developer.apple.com/metal/Metal-Shading-Language-Specification.pdf), and there are quite a few code samples you can download from the docs site (e.g. https://developer.apple.com/documentation/metal/performing_calculations_on_a_gpu).

I'd note a lot of their stuff is still written in Objective-C, which I'm not that familiar with. But most of that is boilerplate, and the rest is largely C/C++ based (including the Metal Shading Language).

I just ported some CPU/SIMD number crunching (complex matrices) to Metal, and the speed-up has been staggering. What used to take days now takes minutes. It is the hottest my M3 MacBook has ever been, though! (See https://x.com/billticehurst/status/1871375773413876089 :-)
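To make the terminology mapping concrete, here's a rough, untested sketch of my own (not from the code I ported): a CUDA block maps to a Metal threadgroup, a warp to a SIMD-group (32 threads wide on Apple GPUs, with warp-level functions like simd_sum and simd_shuffle_down), and __shared__ memory to threadgroup memory. The kernel below computes one partial sum per threadgroup; it's held in a Swift string so it could be compiled at runtime.

```swift
// Rough terminology sketch (untested): CUDA block ~ Metal threadgroup,
// warp ~ simdgroup, __shared__ ~ threadgroup memory,
// __syncthreads() ~ threadgroup_barrier().
let reduceSource = """
#include <metal_stdlib>
using namespace metal;

kernel void partial_sums(device const float* input   [[buffer(0)]],
                         device float*       partial [[buffer(1)]],
                         uint gid  [[thread_position_in_grid]],      // global thread id
                         uint lid  [[thread_index_in_threadgroup]],  // ~ threadIdx.x
                         uint tgid [[threadgroup_position_in_grid]], // ~ blockIdx.x
                         uint tgsz [[threads_per_threadgroup]])      // ~ blockDim.x
{
    threadgroup float scratch[256];                     // ~ CUDA __shared__ memory
    scratch[lid] = input[gid];
    threadgroup_barrier(mem_flags::mem_threadgroup);    // ~ __syncthreads()

    // Tree reduction across the threadgroup (assumes tgsz is a power of two <= 256)
    for (uint stride = tgsz / 2; stride > 0; stride /= 2) {
        if (lid < stride) {
            scratch[lid] += scratch[lid + stride];
        }
        threadgroup_barrier(mem_flags::mem_threadgroup);
    }
    if (lid == 0) {
        partial[tgid] = scratch[0];                     // one result per threadgroup
    }
}
"""
```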
Check out MLX [1]. It's a bit like PyTorch/TensorFlow, with the added benefit of being built for Apple silicon.

[1] https://ml-explore.github.io/mlx/build/html/index.html
I’ve had a good time dabbling with Metal.jl: https://github.com/JuliaGPU/Metal.jl
People have already mentioned Metal, but if you want cross-platform, https://github.com/gfx-rs/wgpu has a Vulkan-like API and cross-compiles to all the various GPU frameworks. On Macs I believe it runs on its native Metal backend by default (rather than going through Vulkan via https://github.com/KhronosGroup/MoltenVK), and you can also see the Metal shader transpilation results for debugging.
I'd recommend checking out the CUDA mode Discord server! They also have a channel for Metal: https://discord.gg/ZqckTYcv