
Computer can read letters directly from the brain

97 points, by turing, over 11 years ago

13 comments

dnautics, over 11 years ago

Should be clear - if my understanding of this is correct, this is the computer reading letters directly from the VISUAL CORTEX. So this isn't the computer reading a mind so much as a computer tapping the visual processing conduit. You probably couldn't "think of a letter" and have the machine figure it out. Something similar but cruder had been achieved in cats (horizontal lines vs. vertical lines, using direct electrode implantation) about a decade ago.

What is impressive (if this article is not fraudulent or over-interpreting) is that it's (a) done in humans, which realistically shouldn't be too much of a stretch from cats, and (b) done non-invasively using MRI. We're NOT entirely sure what we're measuring with fMRI - it's supposedly increased blood flow to the brain, but what that has to do with voltammetric activity is not 100% sussed out.

Aside: when I was in grad school there was this brilliant girl who somehow got sidetracked and burnt out in the lab she was in, and started dropping out for weeks to isolate psychogenic compounds from desert cacti. For her qualifying independent proposal, her presentation was basically two PowerPoint slides that said "test out LSD in cats". Naturally she failed, but she had this amazing hypothesis about how LSD works, and I understand why she wanted to do it in cats... and I'm 99% sure she failed to communicate this to her committee. She did, however, get a nice severance package and got to attend Albert Hofmann's 100th birthday party.
1qaz2wsx3edc, over 11 years ago

*tinfoil* 20 years from now the headline will be: "TSA brain scanners achieved by NSA, citizens shocked but docile."
abrichr, over 11 years ago

*The researchers 'taught' a model how small volumes of 2x2x2 mm from the brain scans - known as voxels - respond to individual pixels.*

There's got to be millions of neurons per 8 mm³ of brain matter. I'd be interested to see what the images looked like before the prior knowledge was introduced.
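That density claim is easy to sanity-check. The per-mm³ figure below is a commonly cited rough estimate for cortical tissue, not a number from the article:

```python
# Published rough estimates for cortical neuron density fall in the
# range of roughly 10^4 to 10^5 neurons per mm^3.
neurons_per_mm3 = 50_000           # assumed mid-range figure
voxel_volume_mm3 = 2 * 2 * 2       # the 2x2x2 mm voxels from the article

neurons_per_voxel = neurons_per_mm3 * voxel_volume_mm3
print(neurons_per_voxel)           # hundreds of thousands of neurons per voxel
```

So each voxel averages over on the order of 10⁵ neurons - hundreds of thousands rather than strictly millions, but either way far too many for anything like single-neuron resolution.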
lambdaloop, over 11 years ago

Link to original article: https://www.dropbox.com/s/x2z14gw8017ezbg/for%20reddit.pdf

(Via reddit: http://en.reddit.com/r/Scholar/comments/1kc8mw/request_linear_reconstruction_of_perceived_images/)

The Gallant lab at UC Berkeley did something somewhat similar about 2 years ago. See here: https://www.youtube.com/watch?v=KMA23JJ1M1o

From what I understand, both reconstructions involve setting up models of brain activity for vision, learning the parameters by machine learning from patients, and then using Bayesian inference to determine what is being seen.

While incredibly cool, we are still a long way from reading thoughts, and even longer if we're not allowed to learn the parameters for that subject first. Right now we can only *kinda* reconstruct what someone is seeing, but that's really not much better than a camera.
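The pipeline described above - learn how stimuli map to voxel responses, then invert that mapping to recover the stimulus - can be sketched as ridge-regression decoding on synthetic data. Every name, dimension, and noise level here is an illustrative assumption, not taken from either paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 200 training stimuli, 50 voxels, 16-pixel (4x4) images.
n_train, n_voxels, n_pixels = 200, 50, 16

# Hypothetical training data: pixel stimuli and the voxel responses
# they evoke under an unknown linear encoding model, plus noise.
stimuli = rng.standard_normal((n_train, n_pixels))
encoding = rng.standard_normal((n_pixels, n_voxels))
responses = stimuli @ encoding + 0.1 * rng.standard_normal((n_train, n_voxels))

# Learn a linear decoder (voxels -> pixels) with ridge regularization:
# W = (R^T R + lambda I)^-1 R^T S
lam = 1.0
W = np.linalg.solve(
    responses.T @ responses + lam * np.eye(n_voxels),
    responses.T @ stimuli,
)

# Reconstruct a held-out stimulus from its voxel response alone.
test_stim = rng.standard_normal(n_pixels)
test_resp = test_stim @ encoding
reconstruction = test_resp @ W

# With low noise the reconstruction correlates strongly with the truth.
corr = np.corrcoef(test_stim, reconstruction)[0, 1]
```

This is only the linear-regression half; the actual studies add a Bayesian prior over natural images on top of a decoder like this, which is what sharpens the blurry linear estimate into something recognizable.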
reustle, over 11 years ago

I'm really looking forward to my brain-powered keyboard. I was close to buying an Emotiv headset a few times to attempt a build, but I don't think the resolution was there, nor was I able to build the machine-learning end.
jacquesm, over 11 years ago

There was a video a while ago about a paralysed woman controlling a robotic arm:

http://www.theguardian.com/science/video/2012/dec/17/paralysed-woman-controls-robotic-arm-mind-video

Hard to pick which of the two - that one or this one - gives me more of a living-in-the-future feeling.

Very impressive.
Zenst, over 11 years ago

Not long ago lasers, phones, and computers were all very large. MRI machines today are very large, but one day... I'm not saying this approach is the best, or that there aren't alternatives that are easier to adapt into something consumer-ready.

One thing I do know: in the not-so-distant future HATS will come back into fashion, and with that I hope nobody is allowed to patent using hats to contain sensors of any kind. But I have hope that the whole patent area will be in a far better state of play by then.

I also suspect a whole new class of social issues will arise in the form of thought Tourette's - be it having Siri search for porn, or downloading the latest X-ray filter for Glass - interesting times ahead. Me, I'm still waiting for a grammar-nazi app that fixes the mistakes instead of complaining about them. We all have our dreams, and to think "beer" and have a robot fetch you a cold one is still a dream. But it's getting closer.
networked, over 11 years ago

How groundbreaking is this? On that note, what is the state of the art for brain-computer interfaces, invasive or non-invasive, with which the user can actually input data into a computer?

As far as I understand the method described in the article, it could eventually be employed as an alternative to eye tracking for computer input, i.e., instead of determining what letter the user's eyes are looking at by using cameras pointed at their face and computer vision, you would scan the user's visual cortex directly. One can immediately think of applications this would have even outside of the assistive technology market, e.g., for mobile input.
shitlord, over 11 years ago

It would be really cool if we could develop this to the point that it would work in humans and with a minimal amount of hardware. Imagine the possibilities, coupled with wearable computing: we could digitize SO much information, from landmarks to museums to captchas and more... all without an obnoxious camera.
Houshalter, over 11 years ago

This is a much cooler example: http://youtu.be/nsjDnYxJ0bo

If I'm understanding the description correctly, they are just training it to recognize what the image is closest to and taking a slice of a YouTube video that most closely matches it.

I imagine if they used a more efficient method or trained it more, they could do way better. It seems like most of the data needed to build an accurate picture of what they are seeing is already there.
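The matching scheme described above - pick the library clip whose features sit closest to the decoded brain pattern - is essentially nearest-neighbour retrieval. A toy sketch with made-up data (the feature dimensions and noise level are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical library of candidate video frames, each reduced to a
# flattened feature vector (stand-in for real frame features).
n_frames, n_features = 1000, 64
frame_library = rng.standard_normal((n_frames, n_features))

def nearest_frame(decoded: np.ndarray, library: np.ndarray) -> int:
    """Return the index of the library frame closest (in Euclidean
    distance) to the decoded activity pattern."""
    dists = np.linalg.norm(library - decoded, axis=1)
    return int(np.argmin(dists))

# Simulate a decoded brain pattern that is a noisy copy of frame 42;
# retrieval should recover that frame despite the noise.
target = frame_library[42] + 0.05 * rng.standard_normal(n_features)
best = nearest_frame(target, frame_library)
```

The real reconstructions average the top few matches rather than taking a single winner, which is why the output videos look like ghostly blends of several clips.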
andyidsinga, over 11 years ago
right now I&#x27;m thinking ... h e l l o n s a
tlrobinson, over 11 years ago
Get your tin foil hats ready.
mosselman, over 11 years ago
Hey, that is my University :), great surprise.