Apple is still behind the rest of FAANG in terms of publishing ML research, but they've taken the lead in real-world impact of ML. If you go back a couple of years, the general consensus was that Apple was very far behind and lacked any credible machine learning groups. As with most of their products, they waited for the technology to mature (a bit) and then made magical experiences from it.

Some examples:

- On-device semantic image search

- Text highlighting (images and videos)

- FaceID

- Computational photography, depth mapping (sensor fusion)

- Background removal (iOS 16)

- Activity detection (Apple Watch)

Siri is still far behind Alexa and Google Assistant, but for every other ML/AI application I'd argue that Apple has the smoothest experience. This should be a lesson for others building ML-powered products: you don't need the best, state-of-the-art models to compete. You can compete on the overall experience.

Sharing details and publications doesn't seem to be a core part of Apple's culture, so I'm glad they're going out of their way to publish more details like this article. Their push to optimize ML inference for on-device execution rather than the cloud is going to have the biggest impact on consumer experience. I'm fairly sure this is part of their AR/VR strategy too: low-latency local machine learning that powers magical experiences.
Recently I bought a Mac Studio and wanted to experiment with Apple's GPU-accelerated ML API under the Metal Performance Shaders framework.

I downloaded a sample code project from WWDC 2019:
https://developer.apple.com/documentation/metalperformanceshaders/training_a_neural_network_with_metal_performance_shaders

It didn't build on the latest Xcode on the Mac Studio. I'm experienced with Cocoa and Apple's APIs, but I couldn't fix the problem in 30 minutes of poking around.

Then I found another sample code project, from WWDC 2020, which apparently uses a similarly named but different API for the same purpose:
https://developer.apple.com/documentation/metalperformanceshadersgraph/training_a_neural_network_using_mps_graph

This one looked promising, but it failed with a runtime assertion that I was unable to figure out.

At this point I just wish Apple would spend more effort on making their existing frameworks usable. If the only available sample code doesn't even work on a brand-new Mac, the API isn't going to be used by third parties.
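For anyone who wants to poke at MPSGraph without fighting the sample projects, here's roughly what the API looks like in the small: a tiny graph computing reLU(x · W) on the default Metal device. The shapes and weight values are made up for illustration; this is my own sketch of the API surface, not code from the WWDC sample.

    import Metal
    import MetalPerformanceShadersGraph

    // Minimal MPSGraph sketch: run y = reLU(x · W) on the default Metal device.
    // Shapes and values are illustrative, not taken from the WWDC sample.
    let mtlDevice = MTLCreateSystemDefaultDevice()!
    let device = MPSGraphDevice(mtlDevice: mtlDevice)
    let graph = MPSGraph()

    // A 1x4 input placeholder and a constant 4x4 weight matrix.
    let x = graph.placeholder(shape: [1, 4], dataType: .float32, name: "x")
    let wValues: [Float] = [ 0.5, -0.5, 0.25, 1.0,
                             1.0,  0.5, -1.0, 0.0,
                             0.0,  1.0,  0.5, 0.5,
                            -0.5,  0.0,  1.0, 0.25]
    let w = graph.constant(wValues.withUnsafeBufferPointer { Data(buffer: $0) },
                           shape: [4, 4], dataType: .float32)

    let y = graph.reLU(with: graph.matrixMultiplication(primary: x,
                                                        secondary: w,
                                                        name: nil),
                       name: nil)

    // Feed a concrete input and read the result back to the CPU.
    let xValues: [Float] = [1, 2, 3, 4]
    let xData = xValues.withUnsafeBufferPointer { Data(buffer: $0) }
    let feed = MPSGraphTensorData(device: device, data: xData,
                                  shape: [1, 4], dataType: .float32)

    let results = graph.run(feeds: [x: feed], targetTensors: [y],
                            targetOperations: nil)
    var out = [Float](repeating: 0, count: 4)
    results[y]!.mpsndarray().readBytes(&out, strideBytes: nil)
    print(out)

Getting this far is straightforward; it's the training-loop plumbing in the samples that seems to have rotted.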
I wonder how approachable it would be to optimise a custom model for the ANE. Judging by the code examples at the bottom, the current implementation is a hand-tuned custom model, so there's no generic solution yet (see the Core ML sketch below for how you'd at least request ANE execution).

Anyway, it seems we're at the dawn of deploying more cool models that formerly required cloud computation into the hands of users. Really cool!

Are we going to see more federated learning pushed to user devices, or is it a dead branch only useful for a few use cases?
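As far as I know you can't target the ANE directly; you go through Core ML and state a preference via MLModelConfiguration, and Core ML decides op by op what actually lands on the ANE. A rough sketch, where the model path, input name, and shape are hypothetical placeholders:

    import CoreML

    let config = MLModelConfiguration()
    // Restrict execution to CPU + Neural Engine (macOS 13+ / iOS 16+).
    // Use .all to also allow the GPU as a fallback.
    config.computeUnits = .cpuAndNeuralEngine

    // "MyModel.mlmodelc" is a placeholder path to a compiled Core ML model.
    let url = URL(fileURLWithPath: "MyModel.mlmodelc")
    let model = try MLModel(contentsOf: url, configuration: config)

    // Inputs depend on the model; the generic route is a feature provider.
    // "input" and the [1, 128] shape are made up for this example.
    let input = try MLMultiArray(shape: [1, 128], dataType: .float32)
    let features = try MLDictionaryFeatureProvider(dictionary: ["input": input])
    let output = try model.prediction(from: features)
    print(output.featureNames)

Even with that setting there's no guarantee a given layer runs on the ANE, which is presumably why the article goes to such lengths to pick ANE-friendly ops and tensor layouts in the first place.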