FTA: "When Picard and el Kaliouby were calibrating their prototype, they were surprised to find that the average person only managed to interpret, correctly, 54 per cent of Baron-Cohen's expressions on real, non-acted faces. This suggested to them that most people - not just those with autism - could use some help sensing the mood of people they are talking to. "People are just not that good at it," says Picard. The software, by contrast, correctly identifies 64 per cent of the expressions."<p>I wonder how they did this testing. Much of the information we get about a person's mood comes from context, far more than from what's on their face. Some people cry out of joy, but if I saw a photo of one of them, I'm sure it would read as sadness to me.<p>Further, I don't like the sound of a future where people stop talking to each other because a light starts blinking.<p>For the purposes of autism research and development, however, this is good. Better than good - this is excellent. I really hope there will be more research in this area, aimed at helping those who can't communicate well. (I suppose that contradicts what I said above. Perhaps there is a scale on which to rank how much help someone needs with communication?)