These ML compilers are being overhyped. It's the same trade-off as with a traditional compiler: you get far more throughput than you would hiring a specialist performance programmer, but the specialist will typically outperform the compiler, possibly by orders of magnitude.

These things are inferior at several levels:
- Algorithmic: beyond some very basic signals, these compilers don't feed back to their human masters tips and tricks on how to modify the network to run faster.
- Loss of intent: ML network designers specify the architecture in Python, and by the time it's gone through many layers of lowering, you can get complete garbage. Highly efficient garbage, but still garbage. (Recent example: we caught one of these compilers implementing a slice-update operation by first forming the range of all possible indices into the array, slicing that to get the indices to update, and then doing a scatter; we replaced the whole thing with a single memcpy call.)
- Inefficient kernels: every time we see the output of these compilers go head-to-head with an expert assembly programmer, the compiler loses, often by 30% or more. This always seems like the sort of thing that should be easy to solve, but given that no one has cracked it in the past 50 years, it's obviously not as simple as it sounds.
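To make the slice-update anecdote concrete, here's a minimal NumPy sketch of the two lowerings. The variable names and the specific shapes are mine, purely for illustration; the point is that the iota/slice/scatter chain computes the same result as one contiguous copy (the memcpy-equivalent), just with far more work:

```python
import numpy as np

# Goal: write `update` into a[start:start+len(update)].
a = np.arange(10, dtype=np.float64)
update = np.array([100.0, 200.0, 300.0])
start = 4

# Roughly what the compiler emitted: materialize every possible index,
# slice that to get the target indices, then scatter through them.
all_indices = np.arange(a.shape[0])                # iota over the whole array
idx = all_indices[start:start + update.shape[0]]   # slice out the target indices
scattered = a.copy()
scattered[idx] = update                            # gather/scatter-style store

# The human replacement: one contiguous copy into the destination slice.
direct = a.copy()
direct[start:start + update.shape[0]] = update

assert np.array_equal(scattered, direct)
```

The scatter path touches an index array the size of the whole tensor and does an indirect store per element; the direct slice assignment is a single bounds check plus a block copy.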