I've recently completed a master's thesis on EEG-based mind reading, and I think I have a fairly good grasp of the state of the art in this field. I also have a copy of Kurzweil's The Singularity is Near by my bed, and I'm usually strongly optimistic about technology. But if IBM is talking about EEG-based technology here, I would have to bet they are flat-out wrong on this one. I'll explain why.

Something like moving a cursor around by thinking about it, or thinking about making a call and having it happen, requires a hell of a lot of bits of information to be produced by the brain-computer interface. With the current state of the art we can distinguish between something like 2-6 classes of thoughts sort-of reliably, and even then it's typically about imagining particular movements, not "call mom".

Importantly, what most people look for in the signal (the feature, in machine learning terms) is a change in signal variance. And there are methods to detect these changes that are in some sense mathematically optimal (which is to say they can still be improved a little, but there won't be any revolutionary new discoveries). There may be other features to look for, but we won't get much better at detecting changes in signal variance.

Some methods can report results like 94% accuracy on a binary classification problem. Such a result may seem "close to perfect", but it is averaged over several subjects and likely varies between, say, 100% and 70%. For the people at 70% accuracy, the distinguishing features of their signals are hidden for various reasons. And this is for getting one bit of information out of the device. A device like this would need to work for everyone to be commercially successful.

In computer vision we have our own brains as proof that the problems can be solved. For EEG-based brain-computer interfaces, no such proof exists. There are certain things you probably can't detect from an EEG signal, meaning the distinguishing information probably isn't there at all. I'm easily willing to bet IBM money that who I would like to call cannot be inferred from the electrical activity on my scalp. (Seriously IBM, let's go on longbets.org and do this.)
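To make the "one bit per decision" point concrete, here is a rough sketch of the kind of pipeline this work typically uses: band-pass the EEG, take log-variance per channel as the feature, and feed a linear classifier. The data below is synthetic noise, and the sampling rate, band, and channel count are placeholder assumptions, not numbers from any real study.

```python
import numpy as np
from scipy.signal import butter, lfilter
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
fs = 250                                   # sampling rate in Hz (assumed)
n_trials, n_channels, n_samples = 100, 8, fs * 3

# Synthetic stand-in for two classes of motor-imagery trials.
X = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 2, n_trials)
X[y == 1, 0] *= 1.5                        # class 1 gets higher variance on one channel

# Band-pass to 8-30 Hz, the band where these variance changes show up.
b, a = butter(4, [8 / (fs / 2), 30 / (fs / 2)], btype="band")
X_filt = lfilter(b, a, X, axis=-1)

# Feature: log-variance per channel; classifier: plain LDA.
features = np.log(X_filt.var(axis=-1))
clf = LinearDiscriminantAnalysis().fit(features[:80], y[:80])
print("held-out accuracy:", clf.score(features[80:], y[80:]))
```

Even a perfect version of this yields one binary choice every few seconds, which is nowhere near "call mom".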
Can someone change this to link to the actual IBM blog entry [1] instead of the CNET fluff piece?

[1] http://asmarterplanet.com/blog/2011/12/the-next-5-in-5-our-forecast-of-five-innovations-that-will-alter-the-landscape-within-five-years.html
The "No Passwords" prediction is overlooking a big stumbling block: biometric data is not that secret and cannot be changed once intercepted. You might as well just walk up to an ATM, and speak your social security number. So the ATM is secure, but it's just another trusted client with all its associated problems.<p>The only thing biometric data is really good for is keeping track of people when they don't want to be tracked or want to hide their identity. For example, it would be a useful means of tracking and identifiying people in a prison or a border checkpoint.
Linkbaity headline, there.

"Mind reading" already exists, kind of, sort of; at least well enough for CNET to write an article about it.

This is at the top of my Christmas list: http://emotiv.com/

In fact, here is a comparison of consumer brain-computer interfaces: http://en.wikipedia.org/wiki/Comparison_of_consumer_brain%E2%80%93computer_interfaces
lars's comment (http://news.ycombinator.com/item?id=3371968) is right on target. I recently finished my PhD in biomedical engineering, and *the* hot field that everyone wants to go into is what we're calling BMI - Brain-Machine Interfaces. The trick is, there are very few types of signals that can be reliably determined from these brain-signal reading devices.

Broadly speaking, there are two kinds of tasks that can be easily accomplished: anything involving moving limbs, or simple, low-degree-of-freedom tasks (like moving a computer cursor). After months and months of training, a person can learn to manipulate numerous degrees of freedom with pretty good reliability (i.e., move a robotic arm AND control the mechanical pincer at the end), but this type of work doesn't generalize to other types of thought. We're nowhere near being able to extract sentences or words, or to determine what complex scene is being viewed, simply from brain activity patterns.
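To give a feel for what "low degree of freedom" means in practice, here's a sketch of the kind of linear decoder that drives a 2D cursor. The data is entirely synthetic, the channel count is made up, and ridge regression stands in for the fancier filters real labs use.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_samples, n_units = 2000, 64                    # number of recorded channels (assumed)

# Stand-in "firing rates" with a hidden linear relationship to 2D cursor velocity.
true_map = rng.standard_normal((n_units, 2))
rates = rng.poisson(5.0, (n_samples, n_units)).astype(float)
velocity = rates @ true_map + rng.standard_normal((n_samples, 2))

decoder = Ridge(alpha=1.0).fit(rates[:1500], velocity[:1500])
print("held-out R^2:", decoder.score(rates[1500:], velocity[1500:]))
# Two continuous outputs gets you a cursor; it does not get you sentences.
```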
When talking about EEG-based "mind reading", there are three primary methods currently under study (at least when looking at locked-in patients):

1) P300 - This refers to a predictable change in the EEG signal that happens around 300 milliseconds after something you were expecting occurs. For example, if I am looking for a particular letter to flash amongst a grid of letters all randomly flashing, a P300 will be triggered when the letter I want flashes.

2) SSVEP - This stands for steady-state visually evoked potential. This approach uses EEG signals recorded over the visual cortex, which responds to constantly flickering stimuli. Given a few seconds, the power at the frequency of the attended stimulus increases in the EEG, which can then be detected and used to make a decision (a toy version of this detection is sketched below).

3) SMR - This stands for sensorimotor rhythms, an approach that looks for changes in EEG activity over the motor cortex. Successful approaches have been able to identify when you imagine clenching your left or right fist, or pushing down with your foot. Unlike the other two, this does not require external stimuli.

SMR is the most like what we consider mind reading, as the user initiates the signal, while the other two infer what a person is looking at. It is limited to only 2-3 degrees of freedom at the moment, however, and is the hardest signal to work with. It is susceptible to external factors such as the current environment and mental state, and not everyone seems to be able to generate the needed signals. SSVEP, while lacking the wow factor of SMR, is much easier to work with and is a much more stable signal.

Disclosure: I work in this area. Here's a flashy NSF video highlighting our lab: http://www.nsf.gov/news/special_reports/science_nation/brainmachine.jsp
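Here's roughly what that SSVEP frequency detection amounts to: compare spectral power at each candidate flicker frequency and pick the strongest. This is a toy version with a simulated signal; the sampling rate, window length, and candidate frequencies are arbitrary assumptions, not our lab's parameters.

```python
import numpy as np

fs = 250                                   # sampling rate in Hz (assumed)
window_s = 4
stim_freqs = [8.0, 10.0, 12.0, 15.0]       # candidate flicker frequencies (assumed)

# Simulated occipital EEG: a 12 Hz SSVEP buried in noise.
t = np.arange(0, window_s, 1 / fs)
eeg = np.sin(2 * np.pi * 12.0 * t) + 2.0 * np.random.default_rng(2).standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(eeg.size, 1 / fs)

def power_at(f, bw=0.5):
    band = (freqs >= f - bw) & (freqs <= f + bw)
    return spectrum[band].sum()

print("attended stimulus estimated at", max(stim_freqs, key=power_at), "Hz")
```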
I would say, rather, that the capability may be 5 years away. Whether consumers want it - I'm skeptical. I knew someone who, for reasons I won't go into, had a computer they had to control with their eyes (basically a webcam that tracks the eyes and moves the cursor, then clicks when you wink). It made me realize that further integration of computing control with a human's anatomy/biology can create more problems, because there is no filtering mechanism. When you type on a computer you choose what your computer does by making deliberate actions, rather than your computer monitoring you and interpreting your actions. The problem with the latter is that there are many things you do that do not involve your computer: pick up the phone, throw a ball for your dog, talk to a coworker, etc. When your computer is monitoring you for input, it never knows when an action is meant for it and when it is not. So in the case of eye-controlled computers, the experience is very problematic when you have to look somewhere else for any reason.

Taking it a step further, I can't even imagine how out of control a computer would be if it were driven by someone's mind. Our minds randomly fire off thoughts non-stop - it's actually incredibly hard to concentrate on one deliberate thing for long (if you've ever tried meditation you realize this very quickly). How a computer could separate actions meant for it from the randomness of the brain seems incredibly difficult, in that there really isn't a definitive line there at all.
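For what it's worth, the usual band-aid for this (the "Midas touch" problem) is a dwell rule: input only counts as a command if it's sustained. A small sketch; the class name, frame rate, and thresholds are all invented for illustration.

```python
from collections import deque
from typing import Optional

class DwellFilter:
    """Only emit a command after the same target has been held for a full window."""
    def __init__(self, frames_required: int = 30):        # ~1 s at 30 Hz (assumed)
        self.history = deque(maxlen=frames_required)

    def update(self, target: Optional[str]) -> Optional[str]:
        self.history.append(target)
        if (len(self.history) == self.history.maxlen and target is not None
                and all(t == target for t in self.history)):
            self.history.clear()
            return target
        return None

f = DwellFilter(frames_required=3)
for frame in ["icon_a", "icon_a", None, "icon_b", "icon_b", "icon_b"]:
    if (cmd := f.update(frame)):
        print("click", cmd)        # only icon_b fires; the stray glances are ignored
```

That helps for gaze, but for free-running thoughts there's no obvious analogue of "dwelling" on a target.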
Previous "five in five" predictions from IBM can be found here: <a href="http://www.ibm.com/smarterplanet/us/en/ibm_predictions_for_future/examples/index.html" rel="nofollow">http://www.ibm.com/smarterplanet/us/en/ibm_predictions_for_f...</a>
Does anyone ever feel that neuroscience is getting more and more Lovecraftian, challenging basic assumptions of what it means to be human? It sometimes feels like we're at a point in history where all the basic tenets of existence are being torn down by science and replaced with... nothing. Am I the only one who gets existential crises from this kind of stuff? :p

It doesn't help, of course, that I'm currently reading this book: http://www.amazon.com/Conspiracy-Against-Human-Race-Contrivance/dp/098242969X

The Luddite in me hopes that science will never be able to fully pick apart the human psyche. Here's to having an inscrutable ghost in the machine to keep us from being mere deterministic flesh-bots...
A little fanciful, I think. The stuff about generating your own energy through captured kinetic energy is silly. My house has a 20 kW feed - that's about 27 horsepower. On my bike I produce a tiny fraction of a horsepower. It's about two orders of magnitude off.
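The back-of-the-envelope numbers (the cyclist figure is my own rough assumption):

```python
feed_watts = 20_000        # 20 kW household feed
cyclist_watts = 150        # sustained output of a casual cyclist, roughly
hp = 745.7                 # watts per horsepower

print(feed_watts / hp)             # ~26.8 hp of feed capacity
print(cyclist_watts / hp)          # ~0.2 hp from pedalling
print(feed_watts / cyclist_watts)  # ~130x short, i.e. about two orders of magnitude
```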
In a sense, speech is mind-reading: you can have in your mind what the speaker had in theirs.

This isn't just sophistry; it shows there are two problems: 1. transmitting information into and out of a mind; 2. transforming the information into a form that can be understood by another. A common language, if you will.

This has analogues in relational databases, where the internal physical storage representation is transformed into a logical representation of relations, from which yet other relations may be derived; and in integrating heterogeneous web services, where a particular XML or JSON format is the common language and the classes in the programs at each end are the representation within each mind.

There's no reason to think that the internal representation within each of our minds is terribly similar. It will have some common characteristics, but will likely differ as much as different human languages do - or as much as other parts of ourselves, such as our fingerprints. Otherwise, everyone would communicate with that representation directly, instead of inventing common languages.
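A toy sketch of that "common language" idea: two programs with completely different internal representations that agree only on the wire format. All the names here are invented for illustration.

```python
import json

class PersonRecordA:                              # one program's internal shape
    def __init__(self, full_name, birth_year):
        self.full_name, self.birth_year = full_name, birth_year

    def to_wire(self):                            # project internals onto the shared format
        return json.dumps({"name": self.full_name, "born": self.birth_year})

class PersonRecordB:                              # a very different internal shape
    def __init__(self, name_parts, age):
        self.name_parts, self.age = name_parts, age

    @classmethod
    def from_wire(cls, payload, current_year=2011):
        d = json.loads(payload)                   # only the wire format is shared knowledge
        return cls(d["name"].split(), current_year - d["born"])

b = PersonRecordB.from_wire(PersonRecordA("Ada Lovelace", 1815).to_wire())
print(b.name_parts, b.age)
```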
I'm guessing that when mind reading comes, it will be more of a machine-learning exercise based on analysis of speech, vocal inflections, visible features, and previous actions than a portable EEG machine with wires on the scalp.

See Poe's detective Auguste Dupin in, for example, "The Murders in the Rue Morgue."
I think it says something about this "prediction" that most of the text on the IBM page about it (http://asmarterplanet.com/blog/2011/12/the-next-5-in-5-mind-reading-is-no-longer-science-fiction.html) is:

*Vote for this as the coolest IBM 5 in 5 prediction by clicking the “Like” button below.

Join the Twitter conversation at #IBM5in5*
"Neurofeedback" already exists it's just still under the radar (it's like teaching yourself to roll your tongue). I've been trying to pull some demos together to demonstrate that the web browser is the place this will take off: <a href="http://vimeo.com/32059038" rel="nofollow">http://vimeo.com/32059038</a> (sorry I haven't pushed more of this extra-rough demo code yet). Consider using something like the wireless PendantEEG if you're going to be doing your own development OR be prepared to pay excessive licensing fees required from a few of the vendors mentioned here. If you are interested in helping develop this stuff mentioned in that video (and don't mind springing for some reasonbly cheap hardware) please ping me. I'd also like to plan a MindHead hackathon/mini-conference this spring in Boston (my personal interests are improving attention and relaxation, peak perfomance, and BCI).
Going down the list of 5, for each one I was thinking to myself, "Yeah, right" - then, going through the explanations, I was thinking, "Oh, well, if *that* is what you mean by it, sure, why not."
Slightly off-topic, but I've always thought that the first wave of HCI to hit the market and gain traction would be the integration of affective-sensing tech products and APIs into popular areas like music, social networks, and health care. That would bring down costs, increase investment in the HCI/BCI space, speed up adoption rates, and lead to much faster improvement of HCI technologies.
I don't see this happening, or being very accurate if it does. I don't know about you guys, but my mind jumps to something new every few seconds, and one tiny piece of a thought will turn into a whole new thought. It's all very random, and for a computer to be able to understand and filter that seems a little too sci-fi.
I was under the impression that we were very close to being able to move cursors with our minds.

http://www.ted.com/talks/tan_le_a_headset_that_reads_your_brainwaves.html
IBM constantly seems to put out press releases about technology it hasn't yet developed to production quality. Said technology always vanishes without a trace (as far as I can recall). I'm not holding my breath on this one.
Thanks for the awesome example of putting one of Paul Graham's essays into action.

http://www.paulgraham.com/submarine.html
Seeing as at least 2 of the 5 are, to be blunt, crap, why are we even discussing this? This is about as realistic as the fusion "too cheap to meter" stories they ran in the '50s, FFS.