> In tests, the man was able to achieve writing speeds of 90 characters per minute (about 18 words per minute), with approximately 94 percent accuracy (and up to 99 percent accuracy with autocorrect enabled).<p>I'd be interested in knowing how this metric changes over time as the user gains more experience with the BCI device. The article mentions that researchers recorded his neural activity while he was thinking about writing letters. Would the man eventually find that the system is more accurate or faster when he instead learns how to think "the thought that generates the letter A in my BCI device"? Fascinating stuff all around.
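For reference, the 90-characters-per-minute to 18-words-per-minute conversion uses the standard 5-characters-per-word convention for typing speed (an assumption on my part, but it's the usual one):

    chars_per_min = 90
    words_per_min = chars_per_min / 5  # 5 chars per "word" by convention
    print(words_per_min)               # -> 18.0, matching the reported figure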
System diagram (how it works):<p><a href="https://github.com/fwillett/handwritingBCI/raw/main/systemDiagram.png" rel="nofollow">https://github.com/fwillett/handwritingBCI/raw/main/systemDi...</a><p>--<p>Code and data (for replicating results offline):<p><a href="https://github.com/fwillett/handwritingBCI" rel="nofollow">https://github.com/fwillett/handwritingBCI</a><p>--<p>Published paper (you can find its full contents online):<p><a href="https://www.nature.com/articles/s41586-021-03506-2" rel="nofollow">https://www.nature.com/articles/s41586-021-03506-2</a>
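If you want to poke at the released data before committing to the full pipeline: as far as I can tell the datasets are distributed as MATLAB .mat files, so scipy can read them. A minimal sketch; the file name and variable key below are placeholders, check the repo's README for the real layout:

    import scipy.io as sio

    data = sio.loadmat("sentences.mat")  # placeholder file name
    # List the variables stored in the file (skipping .mat metadata keys).
    print(sorted(k for k in data if not k.startswith("__")))
    # neural = data["neuralActivity"]    # placeholder key; inspect shapes
    # print(neural.shape)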
Interesting that the user has to think of specific letters to spell out a word. I guess the 26 distinct English letters are much easier to parse and separate than untold thousands of words, especially when those words could be used in different contexts.<p>I bet the next step won't even be parsing words to make a sentence. It seems to me the next low-hanging fruit would be enabling this machine to parse common ideas. I wonder how complex it would be to translate full sentences like "Good morning", "I gotta take a dump", or "I'm hungry". It doesn't seem like that much of a leap, since the user already has to imagine the idea of different letters. Admittedly I have no idea how different those concepts are, or how they would express themselves in the brain for the machine to interpret.
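You can get a feel for why 26 classes are friendlier than thousands with a toy simulation. This has nothing to do with the paper's actual RNN decoder; it's just nearest-template classification on synthetic "neural" features, showing accuracy fall as the number of classes grows while the noise stays fixed:

    import numpy as np

    rng = np.random.default_rng(0)

    def decode_accuracy(n_classes, n_features=100, noise=2.0,
                        tested=20, trials=20):
        # One "template" neural pattern per class (letter, word, idea...).
        templates = rng.normal(size=(n_classes, n_features))
        sample = rng.choice(n_classes, size=min(tested, n_classes),
                            replace=False)
        correct = total = 0
        for c in sample:
            # Noisy observations of class c's template.
            obs = templates[c] + noise * rng.normal(size=(trials, n_features))
            # Nearest-template decode against all classes.
            d = ((obs[:, None, :] - templates[None, :, :]) ** 2).sum(axis=2)
            correct += (d.argmin(axis=1) == c).sum()
            total += trials
        return correct / total

    for k in (26, 500, 5000):
        print(f"{k:5d} classes: {decode_accuracy(k):.0%} correct")

With everything else held constant, the letter-sized problem stays near ceiling while the word-sized one degrades, because more classes means more near-neighbors to confuse.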
I've spoken with Abe Caplan, one of the BrainGate researchers, a few times online on Clubhouse. <a href="https://clubhousedb.com/user/abecaplan" rel="nofollow">https://clubhousedb.com/user/abecaplan</a> <a href="https://www.researchgate.net/scientific-contributions/Abraham-H-Caplan-33403213" rel="nofollow">https://www.researchgate.net/scientific-contributions/Abraha...</a> Off the top of my head, he said the system did not need high-resolution signals to work well: it did not have to be very accurate, there is no single method it must use, and there are many places it can be hooked in and still work (this refers to the signal processing on the BCI side). Companies like Neuralink often repackage these research projects with slick marketing as if they were original, but it was essentially a rebuild of this project with another interface (BrainGate used clunkier probes while Neuralink wants to do an implant, though that was before BrainGate filed a patent for an implant as well).<p>The article says this requires an implant, but I am not sure that's true. His contributions are much older and might not match the current BrainGate research, but that work was also about helping disabled people control prosthetics, along with signal processing and calibration to the user. They built text input before implants were required, so I don't see why one is strictly necessary; its benefit is being more convenient than setting up probes or a wearable. <a href="https://www.frontiersin.org/articles/10.3389/fnins.2012.00072/full" rel="nofollow">https://www.frontiersin.org/articles/10.3389/fnins.2012.0007...</a>
I'm curious how his imagined writing compares to the handwriting he had before he was paralyzed. Was it always that messy, or are the BCI controls difficult to use? For example, his comma seems exaggerated, as though he had to imagine grand gestures to get the software to recognize it as a valid character. But on the other hand, his "m" and "n" look fairly normal. It's possible that's just what his handwriting looks like.
Very cool. Giving people with physical impairments the opportunity to keep communicating, and to show that they still experience the world, could help them lead happier lives.
How do you measure accuracy? Is there another way the man can communicate?<p>edit: The researchers compared the BCI output to a prompt that T5 was supposed to restate. I had been thinking T5 was communicating freely, without prompts, in the experiment. Prompted copying isn't my idea of translation accuracy, but you've got to have some baseline to score against.
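For what it's worth, the usual way to score this kind of prompted copying is character error rate: Levenshtein edit distance between the decoded text and the prompt, divided by the prompt's length. A minimal sketch of that metric (this mirrors the general approach, not necessarily the paper's exact pipeline):

    def edit_distance(a: str, b: str) -> int:
        # Classic dynamic-programming Levenshtein distance.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,               # deletion
                               cur[j - 1] + 1,            # insertion
                               prev[j - 1] + (ca != cb))) # substitution
            prev = cur
        return prev[-1]

    prompt  = "the quick brown fox"
    decoded = "the quick brwn fox"
    cer = edit_distance(decoded, prompt) / len(prompt)
    print(f"accuracy = {1 - cer:.1%}")  # 1 error in 19 chars -> ~94.7%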
> In this case, the man – called T5 in the study,<p>i wonder if it's a coincidence?<p><a href="https://huggingface.co/transformers/model_doc/t5.html" rel="nofollow">https://huggingface.co/transformers/model_doc/t5.html</a>
Something even more impressive, IMHO, from the same source: <a href="https://www.sciencealert.com/a-brain-implant-has-allowed-a-blind-woman-to-see-simple-2d-shapes-and-letters" rel="nofollow">https://www.sciencealert.com/a-brain-implant-has-allowed-a-b...</a><p>It's Star Trek-like technology (think Geordi La Forge from the TNG series).
The title is clickbait. This isn't "reading thoughts"; it's reading attempted motor movements, which come from a different part of the brain than cognition. Most people reading the title will assume thoughts = cognition.
I wonder how something like this would work with chorded typing. A keyboard like this [1], with only one key per finger, seems like it would be relatively easy for a brain implant to register (easier than handwriting, even, since it isn't limited to your finger muscles: you could attach every easily controllable muscle in the body to a button on the 'keyboard', and each key is just a binary value, pressed or not, instead of a whole letter being written). And a lot faster than handwriting.<p>[1] - <a href="https://www.gboards.ca/product/ginni" rel="nofollow">https://www.gboards.ca/product/ginni</a>
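Decoding a chord really is as simple as this suggests: pack the per-finger booleans into a bitmask and look it up. A sketch; the chord-to-letter map here is made up, not the actual Ginni layout:

    # Hypothetical chord map: bit i means finger i is pressed.
    CHORD_MAP = {
        0b0000000001: "a",  # index finger alone
        0b0000000011: "b",  # index + middle together
        0b0000000010: "c",
        # ... up to 2**10 - 1 possible chords for 10 keys
    }

    def decode_chord(finger_states: list[bool]) -> str:
        # Pack per-finger booleans into a bitmask, then look up the letter.
        mask = 0
        for i, pressed in enumerate(finger_states):
            if pressed:
                mask |= 1 << i
        return CHORD_MAP.get(mask, "?")  # "?" for unmapped chords

    print(decode_chord([True, True] + [False] * 8))  # -> "b"

So the implant only has to produce ten binary signals per keystroke instead of a continuous pen trajectory, which is the appeal.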
Total dystopian thought, but I wonder if this could ever be used to extract information from people against their will, as in some interrogation scenario. I mean, you have a lot more control over what you say than over what you think.
This is freaking awesome. The mind-controlled drone piloting we have today means there's an exciting near future for mind-machine interfaces. It will be fantastic not just for people who are paralyzed; perhaps we will also have better prostheses, and maybe in the metaverse we will have additional appendages to do more tasks.<p>I am looking forward to our new world.
The next truly massive tech revolution will have to be something even we tech peeps reject. And I think bodily embedded stuff would do the trick.<p>It could be amazingly effective though if this is where we’re at already. Imagine the speed and enjoyment increase for anything from typing to gaming to driving a car. You’d get completely left behind if you rejected it.
> (Don't think this nurse is hot! Don't think this nurse is hot! Holy shit she's hot.)<p>> (You can do this)<p>> (OMG, is it on already?)<p>> (You can do this)<p>> (What a nice bottom)<p>> (Don't think bottom you idiot)<p>> (She's entering the room, think something else, now, fast, bunny bunny)<p>> (Bunny)<p>> (Bunny)<p>> (Bunny, you got this)
This too will be weaponized. How? Capture a North Korean official, put one of these in his brain, and siphon off all the juicy intel.<p>Capture a narco trafficker. Ditto.<p>Capture a terrorist. Ditto.<p>Have we made the world better?
It seems that whenever we develop ANY device for improved information processing, it disrupts the world we live in and displaces the uniquely human style of information processing, rendering humans less necessary. It is paradoxical: we make our lives 'easier', and human talent is lost.