
Yale researchers reconstruct facial images locked in a viewer’s mind

159 points by turing about 11 years ago

13 comments

tokenadult about 11 years ago
Cool result, but where is a publication showing the faces that were actually tested and "reconstructed"? Many, many submissions to HN (like this one) are press releases, and press releases are well known for spinning preliminary research findings beyond all recognition. This has been commented on in the PhD comic "The Science News Cycle,"[1] which only exaggerates the process a very little. More serious commentary in the edited group blog post "Related by coincidence only? University and medical journal press releases versus journal articles"[2] points to the same danger of taking press releases (and news aggregator website articles based solely on press releases) too seriously. I look forward to seeing how this finding develops as it is commented on and reviewed by other researchers in peer-reviewed publications and attempts to replicate the finding.

The most sure and certain finding of any preliminary study will be that more research is needed. Disappointingly often, preliminary findings don't lead to further useful discoveries in science, because the preliminary findings are flawed. If the technique reported here can generalize at sufficiently low expense, it could lead to a lot of insight into the workings of the known-to-be-complicated neural networks of the human brain used for recognizing faces.

A useful follow-up link for any discussion of a report on a research result like the one kindly submitted here is the article "Warning Signs in Experimental Design and Interpretation"[3] by Peter Norvig, director of research at Google, on how to interpret scientific research. Check each news story you read for how many of the important issues in interpreting research are NOT discussed in the story.

[1] http://www.phdcomics.com/comics.php?f=1174

[2] http://www.sciencebasedmedicine.org/index.php/related-by-coincidence-only-journal-press-releases-versus-journal-articles/

[3] http://norvig.com/experiment-design.html
chch about 11 years ago
Although it's hard to tell from the images presented with the article, the face generation looks like it could be similar to the techniques used in Nishimoto et al., 2011, which used a similar library of learned brain responses, though for movie trailers:

http://www.youtube.com/watch?v=nsjDnYxJ0bo

Their particular process is described in the YouTube caption:

The left clip is a segment of a Hollywood movie trailer that the subject viewed while in the magnet. The right clip shows the reconstruction of this segment from brain activity measured using fMRI. The procedure is as follows:

[1] Record brain activity while the subject watches several hours of movie trailers.

[2] Build dictionaries (i.e., regression models) that translate between the shapes, edges and motion in the movies and measured brain activity. A separate dictionary is constructed for each of several thousand points at which brain activity was measured. (For experts: the real advance of this study was the construction of a movie-to-brain-activity encoding model that accurately predicts brain activity evoked by arbitrary novel movies.)

[3] Record brain activity to a new set of movie trailers that will be used to test the quality of the dictionaries and reconstructions.

[4] Build a random library of ~18,000,000 seconds (5,000 hours) of video downloaded at random from YouTube. (Note these videos have no overlap with the movies that subjects saw in the magnet.) Put each of these clips through the dictionaries to generate predictions of brain activity. Select the 100 clips whose predicted activity is most similar to the observed brain activity. Average these clips together. This is the reconstruction.

With the actual paper here:

http://www.cell.com/current-biology/retrieve/pii/S0960982211009377
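For readers who want the gist of step [4], here is a minimal sketch of the "predict, rank, average" reconstruction, assuming brain activity is a plain feature vector and the encoding model is a simple linear map. The real Nishimoto et al. pipeline (motion-energy features, per-voxel regressions) is far richer; every name, shape, and number below is illustrative only.

```python
import numpy as np

def reconstruct(observed_activity, library_clips, encoding_model, k=100):
    """Average the k library clips whose predicted brain activity best
    matches the observed activity (cf. step [4] above)."""
    # Predict brain activity for every clip in the random library.
    predicted = np.stack([encoding_model(clip) for clip in library_clips])
    # Score each clip by correlation between predicted and observed activity.
    obs = observed_activity - observed_activity.mean()
    pred = predicted - predicted.mean(axis=1, keepdims=True)
    scores = (pred @ obs) / (np.linalg.norm(pred, axis=1) * np.linalg.norm(obs) + 1e-12)
    # Keep the best k matches and average them pixel-wise.
    best = np.argsort(scores)[-k:]
    return np.mean([library_clips[i] for i in best], axis=0)

# Toy usage: 64x64 grayscale "clips" and a random linear stand-in dictionary.
rng = np.random.default_rng(0)
library = [rng.random((64, 64)) for _ in range(1000)]
weights = rng.standard_normal((64 * 64, 200))   # stand-in voxel weights
model = lambda clip: clip.ravel() @ weights     # frame -> predicted activity
observed = model(library[42]) + 0.1 * rng.standard_normal(200)
print(reconstruct(observed, library, model).shape)  # (64, 64)
```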
spikels about 11 years ago
Too bad the reconstructed faces don't look anything like the presented faces, and I'm sure these two examples are some of the best results.

I suspect the algorithm always outputs *some* face generated by a parameterized face model (neural net based?). Therefore even random output would generate a face. Then with some "tuning" and a little wishful thinking you might convince yourself this works.

Am I being too skeptical?
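To make the concern concrete, here is a toy eigenface-style decoder: if the output is always a mean face plus a weighted sum of face components, then any coefficient vector, even pure noise, decodes to something face-like. All names and data below are made up for illustration, not taken from the study.

```python
import numpy as np

def decode_face(mean_face, components, coeffs):
    """Map a coefficient vector back to a (flattened) face image."""
    return mean_face + components.T @ coeffs

rng = np.random.default_rng(1)
n_pixels, n_components = 64 * 64, 20
mean_face = rng.random(n_pixels)                            # stand-in average face
components = rng.standard_normal((n_components, n_pixels))  # stand-in eigenfaces
# Random coefficients still produce a "face" as far as the model is concerned.
noise_face = decode_face(mean_face, components, rng.standard_normal(n_components))
print(noise_face.shape)  # (4096,)
```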
aasarava about 11 years ago
Let's stop with the "mind reading" warnings before they get too far out of hand and consider what's really happening: six subjects were shown a "training" corpus of images first, then shown new images. By comparing the subjects' responses to the new images, the software in the study presumably did its best to create composite images by pulling from the corpus.

So this raises many questions: How diverse were the faces in the training corpus? How close were the new images to those in the corpus? When you're looking at hundreds of images to train the machine, are you also unknowingly being trained to think about images in a certain way? What happens when you try to recreate faces based on the fMRI responses of subjects who didn't contribute to the initial training set?

The implications of the last question are pretty interesting. If different people have different brain responses to looking at the same image, does that help us begin to understand why you and I can be attracted to different types of people? Does it help begin to explain why two people can experience the same event but walk away with two completely different interpretations?
freehunter about 11 years ago
My first thought of this would be its use in constructing an image of a wanted criminal, as a way to replace police sketch artists. When I viewed their images, they're incredibly close, but I don't think they're quite there. I'm really looking forward to seeing this improve as they've stated it will.

I thought the woman looked close enough to be able to identify, but the man was not. Still, very impressive work.
WildUtah about 11 years ago
Soon we will be able to finally do away with that hoary old libertarian canard that the state cannot judge you and punish you for what you think inside your own head. Just imagine how harmony and true equality will blossom then!
TrainedMonkey about 11 years ago
You know, maybe people wearing foil hats were on to something after all.

In all seriousness, this is a great advance in neuroscience that would help us understand many things about the brain. On the other hand, the potential for misuse is enormous. Can you even prove you had been interrogated if such a device is used on you?
cma about 11 years ago
This one is crazy: https://www.youtube.com/watch?v=SjbSEjOJL3U (Mary Lou Jepsen on basically recreating video of what a person is watching through fMRI)
bttf about 11 years ago
We now have read access to a person's mind and its visual information when dealing with faces. Naturally, write access cannot be far away ...
electrichead about 11 years ago
I found it interesting that the researcher thought that there was no possibility of receiving external funding for something like this. I would have thought the opposite. In fact, I can think of a bunch of companies who would be only too happy to throw money at something like this.
spektom about 11 years ago
I wonder whether this will work with a different person, not the one who participated in the machine learning process: someone who has different brain activity when the same faces from the training set are shown. Is it possible that our brains analyze what we see differently?
notastartup about 11 years ago
So... we can eventually create games and movies just by imagining them, without the need to craft them by hand and input them into a computer.
systematical about 11 years ago
Great, now I can figure out who I slept with after blacking out.