IBM: Mind reading is less than five years away. For real.

124 points by azazo over 13 years ago

31 comments

lars over 13 years ago
I've recently completed a master's thesis on EEG-based mind reading, and I think I have a fairly good grasp on the state of the art in this field. I also have a copy of Kurzweil's The Singularity is Near by my bed, and I'm usually strongly optimistic about technology. But if IBM are talking about EEG-based technology here, I would have to bet that they are flat out wrong on this one. I'll explain why.

Something like moving a cursor around by thinking about it, or thinking about making a call and having it happen, requires a hell of a lot of bits of information to be produced by the brain-computer interface. With the current state of the art we can distinguish between something like 2-6 classes of thoughts sort-of reliably, and even then it's typically about thinking of particular movements, not "call mom".

Importantly, what most people look for in the signal (the feature, in machine learning terms) are changes in signal variance. And there are methods to detect these changes that are in some sense mathematically optimal (which is to say they can still be improved a little bit, but there won't be any revolutionary new discoveries). There may be other features to look for, but we won't be getting much better at detecting changes in signal variance.

Some methods can report results like 94% accuracy on a binary classification problem. Such a result may seem "close to perfect", but it is averaged over several subjects and likely varies between, for example, 100% and 70%. For the people with 70% accuracy, the distinguishing features of their signals are hidden for various reasons. And this is for getting one bit of information out of the device. It seems like such a device would need to work for everyone to be commercially successful.

In computer vision we have our own brains as proof that the problems can be solved. For EEG-based brain-computer interfaces, no such proof exists. There are certain things you probably can't detect from an EEG signal, meaning the distinguishing information probably isn't there at all. I'm easily willing to bet IBM money that who I would like to call cannot be inferred from the electrical activity on my scalp. (Seriously IBM, let's go on longbets.org and do this.)
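To make the "changes in signal variance" point concrete, here is a minimal, hypothetical sketch of a binary EEG classifier built on per-channel log-variance features. The synthetic data and the scikit-learn pipeline are assumptions for illustration, not anything from the thesis described above.

```python
# Illustrative sketch only: a two-class, variance-feature EEG classifier
# on synthetic trials standing in for real recordings.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 200, 8, 256

# Synthetic trials: class 1 has slightly higher variance on half the channels,
# mimicking the variance changes the comment describes.
X = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 2, n_trials)
X[y == 1, :4, :] *= 1.3

# Feature per trial: log-variance of each channel.
features = np.log(X.var(axis=2))

# Accuracy for one simulated "subject"; published numbers average this over
# many subjects, which is where the 100%-vs-70% spread gets hidden.
scores = cross_val_score(LinearDiscriminantAnalysis(), features, y, cv=5)
print("binary accuracy:", scores.mean())
```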
brendn over 13 years ago
Can someone change this to link to the actual IBM blog entry [1] instead of the CNET fluff piece?

[1] http://asmarterplanet.com/blog/2011/12/the-next-5-in-5-our-forecast-of-five-innovations-that-will-alter-the-landscape-within-five-years.html
narrator over 13 years ago
The "No Passwords" prediction overlooks a big stumbling block: biometric data is not that secret and cannot be changed once intercepted. You might as well walk up to an ATM and speak your Social Security number. So the ATM is secure, but it's just another trusted client, with all the associated problems.

The only thing biometric data is really good for is keeping track of people when they don't want to be tracked or want to hide their identity. For example, it would be a useful means of tracking and identifying people in a prison or at a border checkpoint.
blhack over 13 years ago
Linkbaity headline, there.

"Mind reading" already exists, kind of, sort of, maybe well enough for CNET to write an article about.

This is at the top of my Christmas list: http://emotiv.com/

In fact, here is a comparison of consumer brain-computer interfaces: http://en.wikipedia.org/wiki/Comparison_of_consumer_brain%E2%80%93computer_interfaces
eykanal over 13 years ago
lars's comment (http://news.ycombinator.com/item?id=3371968) is right on target. I recently finished my PhD in biomedical engineering, and *the* hot field that everyone wants to go into is what we're calling BMI - Brain-Machine Interfaces. The trick is, there are very few types of signals that can be reliably determined from these brain-signal reading devices.

Broadly speaking, there are two kinds of tasks that can be easily accomplished: anything involving moving limbs, or simple, low-degree-of-freedom tasks (like moving a computer cursor). After months and months of training, a person can learn to manipulate several degrees of freedom with pretty good reliability (i.e., move a robotic arm AND control the mechanical pincer at the end), but this type of work doesn't generalize to other types of thought. We're nowhere near being able to extract sentences or words, or to determine what complex scene is being viewed, simply from brain activity patterns.
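As a rough illustration of what a "low degree of freedom" task looks like in code, here is a hypothetical linear decoder that turns band power over left/right motor-cortex channels into a 2-D cursor velocity. The feature choice, frequency band, and weights are assumptions for the sketch, not a real lab pipeline.

```python
# Hypothetical sketch of low-DOF cursor decoding from two EEG channels.
import numpy as np

def band_power(epoch, fs, lo, hi):
    """Mean power of one channel in a frequency band."""
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

def decode_velocity(left_chan, right_chan, fs=256):
    """Map mu-band (8-12 Hz) power over left/right motor cortex to (dx, dy)."""
    f_left = band_power(left_chan, fs, 8, 12)
    f_right = band_power(right_chan, fs, 8, 12)
    dx = f_right - f_left        # lateralized difference drives horizontal motion
    dy = -(f_left + f_right)     # total power drives vertical motion
    return dx, dy

# Usage with fake one-second epochs in place of real recordings:
rng = np.random.default_rng(1)
print(decode_velocity(rng.standard_normal(256), rng.standard_normal(256)))
```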
bgalbraith over 13 years ago
When talking about EEG-based "mind reading", there are three primary methods currently under study (when looking at locked-in patients, at least):

1) P300 - This refers to a predictable change in the EEG signal that happens around 300 milliseconds after something you were expecting happens. For example, if I am looking for a particular letter to flash amongst a grid of letters all randomly flashing, a P300 will be triggered when the letter I want flashes.

2) SSVEP - This stands for steady-state visually evoked potential. This approach uses EEG signals recorded over the visual cortex, which responds to constantly flickering stimuli. Given a few seconds, the power at the frequency of the attended stimulus increases in the EEG, which can then be detected and used to make a decision.

3) SMR - This stands for sensorimotor rhythms, and is an approach that looks for changes in EEG activity over the motor cortex. Successful approaches have been able to identify when you imagine clenching your left or right fist, or pushing down on your foot. Unlike the other two, this does not require external stimuli.

SMR is the most like what we consider mind reading, as the user initiates the signal, while the other two infer what a person is looking at. It is limited to only 2-3 degrees of freedom at the moment, however, and is the hardest signal to work with. It is susceptible to external factors such as the current environment and mental state, and not everyone seems to be able to generate the needed signals. SSVEP, while lacking the wow factor of SMR, is much easier to work with and is a much more stable signal.

Disclosure: I work in this area. Here's a flashy NSF video highlighting our lab: http://www.nsf.gov/news/special_reports/science_nation/brainmachine.jsp
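A minimal sketch of the SSVEP idea in (2), assuming made-up flicker frequencies and a synthetic occipital signal: compare power at each candidate frequency and pick the strongest.

```python
# Toy SSVEP detector: which flicker frequency dominates the occipital EEG?
import numpy as np

fs = 250                         # sample rate in Hz
stim_freqs = [8.0, 10.0, 12.0]   # hypothetical flicker rates of three targets
t = np.arange(0, 3, 1.0 / fs)    # a few seconds of signal, as the comment notes

# Fake occipital channel: the user attends the 10 Hz target, plus noise.
eeg = np.sin(2 * np.pi * 10.0 * t) + 0.8 * np.random.default_rng(2).standard_normal(t.size)

freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
psd = np.abs(np.fft.rfft(eeg)) ** 2

def power_at(f, bw=0.5):
    """Summed power within a narrow band around frequency f."""
    return psd[(freqs >= f - bw) & (freqs <= f + bw)].sum()

chosen = max(stim_freqs, key=power_at)
print("attended target flickers at", chosen, "Hz")
```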
moocow01 over 13 years ago
I would say, rather, that the capability may be 5 years away. Whether consumers want it - I'm skeptical. I knew someone who, for reasons I won't go into, had a computer they had to control with their eyes (basically a webcam that tracks the eyes, moves the cursor, and clicks when you wink). It made me realize that further integration of computer control with a human's anatomy/biology can create more problems, because there is no filtering mechanism. When you type on a computer you choose what your computer does by making deliberate actions, rather than your computer monitoring you and interpreting your actions. The problem with the latter is that there are many things you do that don't involve your computer... pick up the phone, throw a ball for your dog, talk to a coworker, etc. When your computer is monitoring you for input, it never knows when an action is meant for it and when it is not. So in the case of eye-controlled computers, the experience is very problematic when you have to look somewhere else for any reason.

Now, taking it a step further, I can't even imagine how out of control a computer would be if it were driven by someone's mind. Our minds fire off random thoughts non-stop - it's actually incredibly hard to concentrate on one deliberate thing for a long time (if you've ever tried meditation, you realize this very quickly). How a computer could separate actions meant for it from the randomness of the brain seems incredibly difficult, in that there really isn't a definitive line there at all.
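One common mitigation for this "is it meant for the computer?" problem in gaze interfaces is a dwell-time gate: only treat input as a command after it stays on one target long enough. A minimal sketch, where the Sample format and the 0.8-second threshold are invented for illustration:

```python
# Dwell-time gate for an always-on input stream (gaze, in this example).
from dataclasses import dataclass

@dataclass
class Sample:
    t: float        # seconds
    target: str     # UI element the gaze currently falls on, or "" for none

def dwell_clicks(samples, dwell=0.8):
    """Yield targets that were looked at continuously for `dwell` seconds."""
    current, since = None, None
    for s in samples:
        if s.target != current:
            current, since = s.target, s.t      # new fixation starts
        elif current and s.t - since >= dwell:
            yield current                       # held long enough: treat as a click
            current, since = None, None         # require a fresh fixation afterwards

# A glance at "phone" is ignored; a long look at "ok_button" becomes a click.
stream = [Sample(0.0, "phone"), Sample(0.2, "ok_button"),
          Sample(0.6, "ok_button"), Sample(1.1, "ok_button")]
print(list(dwell_clicks(stream)))   # -> ['ok_button']
```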
brianwillis over 13 years ago
Previous "five in five" predictions from IBM can be found here: http://www.ibm.com/smarterplanet/us/en/ibm_predictions_for_future/examples/index.html
kingkawn over 13 years ago
I am setting an alert in my calendar 5 years from now with the text of this article and the author's email address.
itmag over 13 years ago
Does anyone ever feel that neuroscience is getting more and more Lovecraftian and challenging basic assumptions of what it means to be human? It sometimes feels like we're at a point in history where all the basic tenets of existence are being torn down by science and replaced with... nothing. Am I the only one who gets existential crises from this kind of stuff? :p

It doesn't help, of course, that I'm currently reading this book: http://www.amazon.com/Conspiracy-Against-Human-Race-Contrivance/dp/098242969X

The luddite in me wishes that science will never be able to fully pick apart the human psyche. Here's to having an inscrutable ghost in the machine to keep us from being mere deterministic flesh-bots...
JoeAltmaier over 13 years ago
A little fanciful, I think. The stuff about generating your own energy through captured kinetic energy is silly. My house has a 20 kW feed - that's about 27 horsepower. On my bike I produce a tiny fraction of a horsepower. It's orders of magnitude off.
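The arithmetic behind this, with the cyclist's output being an assumed round number in the usual 100-200 W range:

```python
# Back-of-the-envelope numbers behind the comparison above.
HP_PER_KW = 1 / 0.7457                 # 1 hp is about 0.7457 kW

house_feed_kw = 20.0                   # the 20 kW residential feed
cyclist_w = 150.0                      # assumed sustained cyclist output

print(house_feed_kw * HP_PER_KW)             # ~26.8 hp for the house feed
print((cyclist_w / 1000) * HP_PER_KW)        # ~0.2 hp for the cyclist
print(house_feed_kw * 1000 / cyclist_w)      # the feed is roughly 130x a cyclist
```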
6ren over 13 years ago
In a sense, speech is mind-reading: you can have in your mind what the speaker had in theirs.

This isn't just sophistry; it shows there are two problems: 1. transmitting information into and out of a mind; 2. transforming the information into a form that can be understood by another. A common language, if you will.

This has analogues in relational databases, where the internal physical storage representation is transformed into a logical representation of relations, from which yet other relations may be derived; and in integrating heterogeneous web services, where the particular XML or JSON format is the common language and the classes of the programs at each end are the representation within each mind.

There's no reason to think that the internal representation within each of our minds is terribly similar. It will have some common characteristics, but will likely differ as much as different human languages - or as much as other parts of ourselves, such as our fingerprints. Otherwise, everyone would communicate with that instead of inventing common languages.
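A toy version of the "common language" idea for programs rather than minds: two invented classes with different internal representations exchange a shared JSON shape instead of exposing their internals (all names here are hypothetical).

```python
# Two programs with different internal models agree only on the wire format.
import json

class ContactA:                          # program A stores full names
    def __init__(self, full_name):
        self.full_name = full_name
    def to_wire(self):
        return json.dumps({"name": self.full_name})

class ContactB:                          # program B stores given/family separately
    def __init__(self, given, family):
        self.given, self.family = given, family
    @classmethod
    def from_wire(cls, payload):
        name = json.loads(payload)["name"]
        given, _, family = name.partition(" ")
        return cls(given, family)

b = ContactB.from_wire(ContactA("Ada Lovelace").to_wire())
print(b.given, b.family)                 # Ada Lovelace
```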
ratsbane over 13 years ago
I'm guessing that when mind reading comes it will be more of a machine learning exercise based on analysis of speech, vocal inflections, visible features, and previous actions than a portable EEG machine with wires on the scalp.

See Poe's detective Auguste Dupin in, for example, "The Murders in the Rue Morgue."
brown9-2 over 13 years ago
I think it says something about this "prediction" that most of the text on the IBM page about it (http://asmarterplanet.com/blog/2011/12/the-next-5-in-5-mind-reading-is-no-longer-science-fiction.html) is:

"Vote for this as the coolest IBM 5 in 5 prediction by clicking the “Like” button below.

Join the Twitter conversation at #IBM5in5"
figital over 13 years ago
"Neurofeedback" already exists it's just still under the radar (it's like teaching yourself to roll your tongue). I've been trying to pull some demos together to demonstrate that the web browser is the place this will take off: <a href="http://vimeo.com/32059038" rel="nofollow">http://vimeo.com/32059038</a> (sorry I haven't pushed more of this extra-rough demo code yet). Consider using something like the wireless PendantEEG if you're going to be doing your own development OR be prepared to pay excessive licensing fees required from a few of the vendors mentioned here. If you are interested in helping develop this stuff mentioned in that video (and don't mind springing for some reasonbly cheap hardware) please ping me. I'd also like to plan a MindHead hackathon/mini-conference this spring in Boston (my personal interests are improving attention and relaxation, peak perfomance, and BCI).
cjfont over 13 years ago
Going down the list of 5, for each one I was thinking to myself, "Yeah, right"; then, going through the explanations, I was thinking, "Oh, well if *that* is what you mean by that, sure, why not".
mw63214 over 13 years ago
Slightly off-topic, but I've always thought that the first wave of HCI to hit the market and gain traction would be the integration of affective-sensing tech products and APIs into popular areas like music, social networks, and health care. That would bring down the cost, increase investment in the HCI/BCI space, speed up adoption rates, and lead to much faster improvement of HCI technologies.
pmuhar over 13 years ago
I don't see this happening, or being very accurate if it does. I don't know about you guys, but my mind jumps to something new every few seconds, and one tiny piece of a thought will turn into a whole new thought. It's all very random, and for a computer to be able to understand and filter that seems a little too sci-fi.
roundsquare over 13 years ago
I was under the impression that we were very close to being able to move sensors with our minds.

http://www.ted.com/talks/tan_le_a_headset_that_reads_your_brainwaves.html
catshirt over 13 years ago
Probably depends on your definition of "mind reading", but it sounds like it warrants a long bet.
benaston over 13 years ago
IBM constantly seems to issue press releases about technology it hasn't yet developed to production quality. Said technology always vanishes without a trace (as far as I can recall). I'm not holding my breath on this one.
bdg over 13 years ago
Thanks for the awesome example of putting one of Paul Graham's essays into action.

http://www.paulgraham.com/submarine.html
tree_of_item over 13 years ago
"ATM machine" in an IBM video? I'm slightly disappointed.
mkramlich over 13 years ago
The only one who can tell me something is N years away is someone who just stepped out of a time machine. I see no time machine, so I pipe this to /dev/null.
farico over 13 years ago
"you can control the cursor on a computer screen just by thinking about where you want to move it."<p>Imagine writing code by thinking only?
overgard over 13 years ago
The great thing about bold predictions is nobody ever remembers them if you're wrong, but you look like a genius if you're right.
zyb09 over 13 years ago
Well, I guess you could link brain patterns to thoughts, but how are you going to read them without a 5-ton MRI machine?
silon3 over 13 years ago
How soon will it reach the quantum level of "you can't measure without changing"?
lurchpop over 13 years ago
Massively unsettling coming from the company that helped the Nazis streamline their attempts at genocide.
technology over 13 years ago
George Orwell's vision from the book 1984 is becoming true.
mjwalshe over 13 years ago
Seeing as at least 2 of the 5 are, to be blunt, crap, why are we even discussing this? This is about as realistic as the fusion "too cheap to meter" stories they ran in the '50s, FFS.