Looks like a joke, but seems to be real. From the draft:

<pre><code>Use cases for EmotionML can be grouped into three broad types:
1. Manual annotation of material involving emotionality, such as
   annotation of videos, of speech recordings, of faces, of texts, etc.;

2. Automatic recognition of emotions from sensors, including
   physiological sensors, speech recordings, facial expressions, etc.,
   as well as from multi-modal combinations of sensors;

3. Generation of emotion-related system responses, which may involve
   reasoning about the emotional implications of events, emotional
   prosody in synthetic speech, facial expressions and gestures of
   embodied agents or robots, the choice of music and colors of
   lighting in a room, etc.</code></pre>
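For the curious, the first use case (manual annotation) looks roughly like this. A minimal sketch assuming the EmotionML 1.0 syntax; the namespace, the "big6" vocabulary URI, and the value attribute are from my reading of the spec and may differ in the exact draft version quoted above:

<pre><code><emotionml version="1.0"
    xmlns="http://www.w3.org/2009/10/emotionml"
    category-set="http://www.w3.org/TR/emotion-voc/xml#big6">
  <!-- hypothetical annotation: a clip labeled as mostly happy,
       using the "big6" category vocabulary declared above -->
  <emotion>
    <category name="happiness" value="0.8"/>
  </emotion>
</emotionml></code></pre>

If I recall the draft correctly, this markup isn't meant to stand alone: it's designed to plug into host languages, e.g. EMMA output for the recognition case or SSML input for the generation case.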