Key sentence - "the correct choice of face space axes is critical for achieving a simple explanation of face cells’ responses."<p>They did PCA over two sets of metrics, taking the top 25 components from each set and then combining them into a 50-d space. Using these dimensions and the measured responses to fit a model resulted in explaining 57% of the variance in real cell firing rates. (Much better than other models, including a 5-layer CNN.)<p>This is pretty cool. I'd like to see a follow-up where the chosen dimensions were further refined using something a bit more iterative than an arbitrary PCA cutoff.<p>Also I really want to know what eye motion was present during each trial. This paper presents a very "instantaneous" recognition perspective and doesn't talk about integration over time or the impact of sequential perception of face components on recognition. (E.g. an upside-down face is hard to recognize because your gaze has to move up from the eyes to see the mouth, which is a sequence rarely encountered in the real world.)
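The pipeline described above can be sketched roughly like this (the random arrays stand in for the paper's landmark/shape and appearance feature sets and for a recorded cell's firing rates; the feature counts other than the 25 + 25 components are made up for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-ins for the two feature sets extracted from 2000 face images:
# "shape" (landmark) descriptors and "appearance" (texture) descriptors.
shape_feats = rng.normal(size=(2000, 160))
appear_feats = rng.normal(size=(2000, 300))

# Top 25 principal components from each set, concatenated into a 50-d code.
shape_pcs = PCA(n_components=25).fit_transform(shape_feats)
appear_pcs = PCA(n_components=25).fit_transform(appear_feats)
face_code = np.hstack([shape_pcs, appear_pcs])  # shape (2000, 50)

# Linear model from the 50-d face code to one cell's firing rate;
# held-out R^2 is the "variance explained" figure (57% in the paper,
# near zero here because the placeholder data is random).
rates = rng.normal(size=2000)
r2 = cross_val_score(LinearRegression(), face_code, rates,
                     cv=5, scoring="r2").mean()
```

The striking part is that a plain linear readout from this hand-built 50-d space beat the deep-net baselines.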
"It is a remarkable advance to have identified the dimensions used by the primate brain to decode faces, he added — and impressive that the researchers were able to reconstruct from neural signals the face a monkey is looking at."<p>"These dimensions create a mental “face space” in which an infinite number of faces can be recognized. There is probably an average face, or something like it, at the origin, and the brain measures the deviation from this base."<p>"Dr. Tsao said she was particularly impressed to find she could design a whole series of faces that a given face cell would not respond to, because they lacked its preferred combination of dimensions. This ruled out a possible alternative method of face identification: that the face cells were comparing incoming images with a set of standard reference faces and looking for differences."<p>I'm surprised that they didn't attempt to generate a face with exactly 0 on all dimensions.<p>It would be fascinating to know what the most memorable face looks like - and whether it's different per-brain. (Presumably it is monkey-shaped!)
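The "series of faces a cell would not respond to" falls straight out of the axis model. If a cell's response is a linear projection of the 50-d face code onto its preferred axis, then the origin (the average face) and anything orthogonal to that axis should leave the cell at baseline. A minimal sketch, with a made-up random axis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Axis model: a cell's response is a projection of the 50-d face
# code onto its preferred axis (plus some baseline rate).
preferred_axis = rng.normal(size=50)

def cell_response(face_code, axis, baseline=0.0):
    return baseline + axis @ face_code

# The origin of face space (the "average face") projects to zero
# on every axis, so every cell sits at its baseline.
average_face = np.zeros(50)
assert cell_response(average_face, preferred_axis) == 0.0

# A whole family of non-average faces the cell still ignores:
# any face with its component along the preferred axis removed.
face = rng.normal(size=50)
proj = (face @ preferred_axis) / (preferred_axis @ preferred_axis)
orthogonal_face = face - proj * preferred_axis
assert abs(cell_response(orthogonal_face, preferred_axis)) < 1e-9
```

An exemplar-matching cell (comparing against stored reference faces) would have no reason to go silent on that whole orthogonal family, which is why the result discriminates between the two hypotheses.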
This paper is highly exciting for anybody working on the neural code and so-called encoding models.<p>Only a few days ago there was a similar study reading faces from human brains by trying to construct a latent space:<p><a href="https://arxiv.org/abs/1705.07109" rel="nofollow">https://arxiv.org/abs/1705.07109</a><p><a href="https://twitter.com/ccnlab/status/866548346751725568" rel="nofollow">https://twitter.com/ccnlab/status/866548346751725568</a> (animation; over time, more dimensions from this latent space, afaik PCA components, are added)<p>Given the limitations of fMRI (we cannot do single-cell recordings in human brains), the results are not as accurate, but to my knowledge this is the best we can do in humans so far.
I remember reading about a study [1] that showed that humans recognize faces based on how similar they are to the faces of their parents. It's well known that humans are more easily able to differentiate faces within our own races. But what the study did was look at people who were adopted by parents of a different race. Those people were more easily able to differentiate faces of people of the same race as their adopted parents and had difficulty differentiating faces of people of their own race. The inference is that we actually store/recognize facial deltas, not full facial images.<p>I'm curious how this study would explain or contradict the results of that study. Also, were the monkeys raised by monkey parents or human scientists? Monkeys that were allowed to imprint on humans might be more similar to humans and, yet, unrepresentative of monkeys.<p>[1] I think it was <a href="https://www.ncbi.nlm.nih.gov/pubmed/15943669" rel="nofollow">https://www.ncbi.nlm.nih.gov/pubmed/15943669</a>
I'm skeptical. Like "faked results" skeptical. Crime-witness studies show that most humans can't reproduce another human's face all that accurately. So-so when the face is at least of the same race, but when it's of a different race it's a coin flip as to whether they can even recognize it. (That said, I've only heard this on various TV shows, never seen actual research, so the presumption could be wrong.) How can primates do so much better with an entirely different species? Or, not even primates, but some AI going through primate neural signals?
If I understand this correctly, this works similarly to an embedding in, e.g., deep learning: faces are represented by high-dimensional vectors.<p>Reading the Cell article on this [0], I couldn't help but see the similarities with OpenFace [1].<p>[0] <a href="http://www.cell.com/cell/fulltext/S0092-8674(17)30538-X" rel="nofollow">http://www.cell.com/cell/fulltext/S0092-8674(17)30538-X</a>
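The analogy is that both systems reduce identity to geometry: OpenFace judges whether two images show the same person by distance between its 128-d embeddings, much as the paper places faces as points in its 50-d space. A toy sketch (the 3-d vectors and the threshold here are illustrative, not OpenFace's actual values):

```python
import numpy as np

def same_identity(emb_a, emb_b, threshold=1.0):
    """Embedding-style comparison: two vectors encode the same
    face if they are close in the latent space."""
    return bool(np.linalg.norm(np.asarray(emb_a) - np.asarray(emb_b)) < threshold)

anchor = np.array([0.1, -0.3, 0.7])
near = anchor + 0.05   # small perturbation: same identity
far = anchor + 2.0     # large shift: different identity
```

The interesting difference is that OpenFace learns its space end-to-end, while the paper's axes were hand-constructed from shape and appearance statistics and still sufficed for a linear neural readout.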
[1] <a href="https://cmusatyalab.github.io/openface/#overview" rel="nofollow">https://cmusatyalab.github.io/openface/#overview</a>
Amazing results; it is incredible to see that our brains use the same process as CNNs, encoding information across multiple layers of neurons to extract features. This makes me think that consciousness could be just an extremely high-level temporal representation of our own senses.
Can this model be translated into computer vision code? I always wonder if it means there are new more efficient models still to be found to copy from nature, or if the model ends up not being the most efficient and just the result of evolution.
How are these macaques able to so finely differentiate faces of a different species?
I'm pretty sure I wouldn't be able to differentiate many macaque faces from each other.