A long time ago, when I was working on the initial implementation of Android Wear at Google, I actually worked with the code that does stuff like this, so I might be able to answer somewhat usefully. At least there, the way it worked is that every 30 seconds the device would wake up the accelerometer and collect 2 seconds of data at 120(?) Hz. After that, a relatively large decision tree ran over the values, their derivatives, etc. This decision tree was the output of a large trained model but was itself pretty small: a few thousand values. It could only classify things it was trained on; the output was an activity index. At Google at the time, the supported activities were: walking, biking, running, driving, sitting, and unknown. The model could not output anything other than an activity index.

The practical upshot: could one detect such activities based on accelerometer data? Surely yes. However, unless somebody trained it on masturbation, that is unlikely to be an actual possible output.

Details: the model format was more or less this:

<pre><code> node {
   int activity;      // positive: this is a terminal node and this
                      // value is the answer (the activity index);
                      // otherwise the node is not terminal and this
                      // value, times minus one, is the index of the
                      // input sample to read and compare against
                      // compareWith
   float compareWith;
   unsigned gotoNodeIdxIfLessThan;
   unsigned gotoNodeIdxIfGreaterOrEq;
 }
 model {
   node nodes[];
 }
</code></pre>
You’d start at nodes[0] and walk the tree per the comparison instructions (the index of the input sample to read and the float to compare it against) until you reached a terminal node.
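For concreteness, here is a minimal sketch in C of what that walk might look like, assuming the node layout described above and the "times minus one" index convention from the comments. The classify() name, the feature-vector argument, and how features are laid out are my own illustration, not the actual Android Wear code.

<pre><code> /* Sketch only: mirrors the format above, not the real implementation. */
 struct node {
     int      activity;              /* > 0: terminal node, this is the answer     */
                                     /* <= 0: -activity is the input sample index  */
     float    compareWith;
     unsigned gotoNodeIdxIfLessThan;
     unsigned gotoNodeIdxIfGreaterOrEq;
 };

 /* Walk from nodes[0] until a terminal node is reached. 'samples' is the
    feature vector derived from the 2-second accelerometer burst
    (values, derivatives, etc.). */
 static int classify(const struct node *nodes, const float *samples)
 {
     const struct node *n = &nodes[0];

     while (n->activity <= 0) {
         float v = samples[-n->activity];   /* index was stored times minus one */
         n = (v < n->compareWith) ? &nodes[n->gotoNodeIdxIfLessThan]
                                  : &nodes[n->gotoNodeIdxIfGreaterOrEq];
     }
     return n->activity;   /* activity index: walking, biking, running, ... */
 }
</code></pre>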
Truthfully... Yes. <i>Many</i> other apps and services do as well. Be careful what you give camera and mic permissions to, and <i>always</i> read the privacy agreements, especially for <i>FAA</i>N<i>G</i>. If you don't think Amazon, Facebook, Apple, Google, and Microsoft are watching/listening (with and without the light indicators), think again.