Cool result, but where is a publication showing the faces that were actually tested and "reconstructed"? Many, many submissions to HN (like this one) are press releases, and press releases are well known for spinning preliminary research findings beyond all recognition. This has been commented on in the PhD comic "The Science News Cycle" [1], which exaggerates the process only a little. More serious commentary in the edited group blog post "Related by coincidence only? University and medical journal press releases versus journal articles" [2] points to the same danger of taking press releases (and news aggregator website articles based solely on press releases) too seriously. I look forward to seeing how this finding develops as it is commented on and reviewed by other researchers in peer-reviewed publications and attempts to replicate the finding.

The most sure and certain finding of any preliminary study will be that more research is needed. Disappointingly often, preliminary findings don't lead to further useful discoveries in science, because the preliminary findings are flawed. If the technique reported here can generalize at sufficiently low expense, it could lead to a lot of insight into the workings of the known-to-be complicated neural networks of the human brain used for recognizing faces.

A useful follow-up link for any discussion of a report on a research result like the one kindly submitted here is the article "Warning Signs in Experimental Design and Interpretation" [3] by Peter Norvig, director of research at Google, on how to interpret scientific research. Check each news story you read for how many of the important issues in interpreting research are NOT discussed in the story.

[1] http://www.phdcomics.com/comics.php?f=1174

[2] http://www.sciencebasedmedicine.org/index.php/related-by-coincidence-only-journal-press-releases-versus-journal-articles/

[3] http://norvig.com/experiment-design.html
Although it's hard to tell from the images presented with the article, the face generation looks like it could be similar to the techniques used in Nishimoto et al., 2011, which used a similar library of learned brain responses, though for movie trailers:

http://www.youtube.com/watch?v=nsjDnYxJ0bo

Their particular process is described in the YouTube caption:

The left clip is a segment of a Hollywood movie trailer that the subject viewed while in the magnet. The right clip shows the reconstruction of this segment from brain activity measured using fMRI. The procedure is as follows:

[1] Record brain activity while the subject watches several hours of movie trailers.

[2] Build dictionaries (i.e., regression models) that translate between the shapes, edges and motion in the movies and measured brain activity. A separate dictionary is constructed for each of several thousand points at which brain activity was measured.
(For experts: The real advance of this study was the construction of a movie-to-brain activity encoding model that accurately predicts brain activity evoked by arbitrary novel movies.)

[3] Record brain activity to a new set of movie trailers that will be used to test the quality of the dictionaries and reconstructions.

[4] Build a random library of ~18,000,000 seconds (5000 hours) of video downloaded at random from YouTube. (Note these videos have no overlap with the movies that subjects saw in the magnet). Put each of these clips through the dictionaries to generate predictions of brain activity. Select the 100 clips whose predicted activity is most similar to the observed brain activity. Average these clips together. This is the reconstruction.

With the actual paper here:

http://www.cell.com/current-biology/retrieve/pii/S0960982211009377
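If that caption is accurate, the core loop is surprisingly small. Here's a minimal sketch of that shape in Python, assuming ridge-regression encoding models and correlation-based matching; the function names and array shapes are my own illustration, not code from the paper:

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_encoding_models(train_features, train_bold, alpha=1.0):
    # Step [2]: one regression "dictionary" mapping movie features
    # (e.g., motion-energy filter outputs) to measured voxel responses.
    # train_features: (n_timepoints, n_features), train_bold: (n_timepoints, n_voxels)
    return Ridge(alpha=alpha).fit(train_features, train_bold)

def reconstruct(model, library_features, library_clips, observed_bold, k=100):
    # Step [4]: predict activity for every library clip, keep the k clips
    # whose predicted pattern best correlates with the observed pattern,
    # and average them into the reconstruction.
    predicted = model.predict(library_features)        # (n_clips, n_voxels)
    pred_z = (predicted - predicted.mean(1, keepdims=True)) / predicted.std(1, keepdims=True)
    obs_z = (observed_bold - observed_bold.mean()) / observed_bold.std()
    scores = pred_z @ obs_z / obs_z.size               # Pearson r per clip
    top_k = np.argsort(scores)[-k:]                    # 100 best-matching clips
    return library_clips[top_k].mean(axis=0)           # averaged reconstruction
```

In the real study the "observed activity" is a time series rather than one pattern per clip, but the select-and-average structure is the part that matters here.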
Too bad the reconstructed faces don't look anything like the presented faces, and I'm sure these two examples are some of the best results.

I suspect the algorithm always outputs *some* face generated by a parameterized face model (neural net based?). Therefore even random output would generate a face. Then with some "tuning" and a little wishful thinking you might convince yourself this works.

Am I being too skeptical?
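To make the worry concrete, here's a toy sketch assuming a PCA "eigenface" style model (my assumption, nothing in the article says they used one): whatever coefficients the decoder produces, even pure noise, the output is face-like by construction.

```python
import numpy as np

def fit_face_model(faces, n_components=50):
    # Toy parameterized face model: mean face plus principal components
    # learned from a face dataset. faces: (n_images, n_pixels).
    mean = faces.mean(axis=0)
    _, _, components = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, components[:n_components]

def render(mean, components, coeffs):
    # ANY coefficient vector maps back into face space, so the result
    # looks face-like even if the decoded coefficients are random noise.
    return mean + coeffs @ components

# Stand-in data just to make the sketch runnable.
mean, components = fit_face_model(np.random.rand(200, 64 * 64))
random_face = render(mean, components, np.random.randn(50))  # still "a face"
```

That's exactly why side-by-side examples without a quantitative identification test are hard to evaluate.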
Let's stop with the "mind reading" warnings before they get too far out of hand and consider what's really happening: six subjects were shown a "training" corpus of images first, then were shown new images. By comparing the subjects' responses to the new images, the software in the study presumably did its best to create composite images by pulling from the corpus.

So this raises many questions: How diverse were the faces in the training corpus? How close were the new images to those in the corpus? When you're looking at hundreds of images to train the machine, are you also unknowingly being trained to think about images in a certain way? What happens when you try to recreate faces based on the fMRI responses of subjects who didn't contribute to the initial training set?

The implications of the last question are pretty interesting. If different people have different brain responses to looking at the same image, does that help us begin to understand why you and I can be attracted to different types of people? Does it help begin to explain why two people can experience the same event but walk away with two completely different interpretations?
My first thought was its use in constructing an image of a wanted criminal, as a way to replace police sketch artists. The reconstructions look incredibly close, but I don't think they're quite there. I'm really looking forward to seeing this improve, as they've stated it will.

I thought the woman looked close enough to identify, but the man did not. Still, very impressive work.
Soon we will be able to finally do away with that hoary old libertarian canard that the state cannot judge you and punish you for what you think inside your own head. Just imagine how harmony and true equality will blossom then!
You know, maybe people wearing foil hats were on to something after all.

In all seriousness, this is a great advance in neuroscience that would help us understand many things about the brain. On the other hand, the potential for misuse is enormous. Can you even prove you had been interrogated if such a device is used on you?
This one is crazy: https://www.youtube.com/watch?v=SjbSEjOJL3U (Mary Lou Jepsen on basically recreating video of what a person is watching through fMRI)
I found it interesting that the researcher thought there was no possibility of receiving external funding for something like this. I would have thought the opposite. In fact, I can think of a bunch of companies who would be only too happy to throw money at something like this.
I wonder whether this will work with a different person, not the one who participated in the machine learning process. Someone whose brain activity differs when the same faces from the training set are shown. Is it possible that our brains analyze what we see differently?
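That's testable in principle: train the decoder on one subject and score identification accuracy on another. A sketch under my own assumptions about the data layout (stand-in arrays, not the study's data or code):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_trials, n_vox, n_feat = 30, 500, 40
face_features = rng.standard_normal((n_trials, n_feat))   # stand-in stimulus features
bold_subject_a = rng.standard_normal((n_trials, n_vox))   # stand-in fMRI responses
bold_subject_b = rng.standard_normal((n_trials, n_vox))

def identification_accuracy(decoder, bold, features):
    # Decode features from BOLD, then check whether each decoded vector is
    # closest (by correlation) to its own stimulus among all the stimuli.
    decoded = decoder.predict(bold)
    d = (decoded - decoded.mean(1, keepdims=True)) / decoded.std(1, keepdims=True)
    f = (features - features.mean(1, keepdims=True)) / features.std(1, keepdims=True)
    sim = d @ f.T / f.shape[1]                 # trial-by-stimulus correlations
    return (sim.argmax(axis=1) == np.arange(len(sim))).mean()

# Train a decoder (BOLD -> face features) on subject A only.
decoder = Ridge(alpha=1.0).fit(bold_subject_a, face_features)
within = identification_accuracy(decoder, bold_subject_a, face_features)
across = identification_accuracy(decoder, bold_subject_b, face_features)
# Naive voxel-wise transfer assumes the two subjects' voxels line up, which
# they generally don't; if "across" collapses to chance, the learned mapping
# is subject-specific and needs per-subject training or cross-subject alignment.
```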