> (For context, Hotz raised $5M to improve RX 7900 XTX support and sell a $15K prebuilt consumer computer that runs 65B-parameter LLMs. A plethora of driver crashes later, he almost gave up on AMD.)

Again, I wish Hotz and TinyGrad the best, especially for training/experimentation on AMD, but I feel like Apache TVM and the various MLIR efforts (like PyTorch MLIR, SHARK, Mojo) are much more promising for ML inference. Even Triton in PyTorch is very promising, with an endorsement from AMD.