One key difference here is that Apple was apparently actively trying to minimize any personal or sensitive information that leaked through to the “graders”, whereas I don’t think any other company gave a damn.

I certainly understand why ML systems need to be trained, and why, once in production, they need ongoing training and tweaking at all levels.

On the whole, I don’t think Apple did anything wrong here, with the exception of running this service without telling their users it was happening. They should have been more open about the need for ongoing training, and about the extent to which they would go to anonymize the information being gathered.

I still would have opted out, just like I’ve opted out of all voice recognition/assistant systems from all other sources. But at least then Apple would have had a decent chance of keeping this service in operation.