This could improve privacy if all the processing remains internal. However, there are several ways it could reduce privacy and create problems, depending on what you do with the AI output. For example, the AI output could be used to deny access to buildings if it detects that a person is a child (good for industrial settings, not so good if you're a short adult), or to deny access if it detects that a person is white or female (bad for society). The video data might all be processed internally, but if the chip also has a video output, the AI output could simply be used to decide which video to save. Or the AI output could trigger recording from an external video camera. So overall, the privacy aspects of this sensor seem pretty weak; it would have to be used in a properly designed system, or the privacy protection could be easily bypassed.

I've worked with similar systems that used local AI to process speech (to determine things like turn-taking in conversations). The claim was that the system enhanced privacy because no speech was ever recorded, but in truth that protection would have been easy to compromise. If the ability to record or export video is not part of the sensor design, then it would be difficult for anyone to alter the chip to record video, but how do you verify that? The chip could have a secret "test" mode where it exports video and AI parameters for troubleshooting. You'd have to trust Sony, which might be reasonable in some circumstances but not in others. The same is currently true for a variety of phones with "smart" features associated with the camera. It may seem like the phone will only unlock for you when it sees you, but what if it also secretly unlocks when presented with a specific QR code? How would you ever know?
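
To make the bypass in the first paragraph concrete: even if the sensor never exports a single frame, a few lines of glue code that consume only its metadata output are enough to control an ordinary camera sitting next to it. The names here (read_detection, record_clip) are made up for illustration; the point is just that AI metadata is all an operator needs.

    import time

    # Hypothetical sketch: the "privacy-preserving" sensor exports only AI
    # detection metadata, but that metadata decides when a separate, ordinary
    # camera records raw video. sensor and camera are placeholder objects.
    def record_when_person_detected(sensor, camera):
        while True:
            detection = sensor.read_detection()  # e.g. {"person": True, "age_estimate": 9}
            if detection.get("person"):
                camera.record_clip(seconds=30)   # raw video comes from the external camera
            time.sleep(0.1)

So the "no video leaves the chip" property only means something if the surrounding system is designed and audited to prevent exactly this kind of pairing.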