I'm highly skeptical. I mean, a hash function that has four output states also maps <i>anything</i> to one of those four states. That doesn't mean it's some next-level classifier.<p>The problem here is EEG. EEG bandwidth is not enough to capture that much information. There is far too much noise introduced by the skull and muscles. It's most likely physically impossible to do something like this with EEG.<p>What's likely happening here is that there are some large-scale oscillations that are sufficiently distinct to tell the images apart. That does not mean the model is reproducing the images. I am highly skeptical of the methods used here -- they are almost certainly flawed.<p>I, too, once had dreams of conquering the planet with EEG when I was a grad student. I quickly learned that physics makes this infeasible. Anyone who is serious about BMIs is studying invasive BMIs and how to make them as safe as possible. Going inside the brain is unavoidable, I'm afraid.
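To put the hash analogy in code (a toy sketch, nothing to do with the paper's actual model):

```python
# Toy illustration: any function with four output states "classifies"
# arbitrary input into one of four buckets, whether or not it
# understands anything about that input.
import hashlib

def four_state_hash(data: bytes) -> int:
    """Map any input whatsoever to one of four 'classes'."""
    digest = hashlib.sha256(data).digest()
    return digest[0] % 4

# Noise, text, an empty string -- everything lands in a bucket,
# but that tells us nothing about the function "recognizing" the input.
for sample in [b"", b"random noise", b"a photo of a cat"]:
    print(four_state_hash(sample))
```

The point being: landing in one of four buckets is guaranteed by construction, so it's not evidence of decoding by itself.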
This model is incredibly overfit.<p>Video: <a href="https://youtu.be/nf-P3b2AnZw" rel="nofollow">https://youtu.be/nf-P3b2AnZw</a><p>Watch how it has preconceived notions of these scenes. It frequently fails to reconstruct the correct scene from video, and it also turns completely blank input into one of the scenes it was trained on.
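That blank-input behavior is exactly what you'd expect from a model that has memorized its training set. A hypothetical sketch of the failure mode (nearest-neighbor stand-in, not the paper's architecture):

```python
# Hypothetical sketch: a model that memorized its training scenes maps
# *any* input -- including all zeros -- to the nearest memorized scene.
# That looks like "reconstruction" but is really just retrieval.
import numpy as np

rng = np.random.default_rng(0)
training_scenes = rng.random((5, 64))   # 5 memorized "scenes" as feature vectors

def reconstruct(eeg_features: np.ndarray) -> int:
    """Return the index of the nearest memorized scene."""
    dists = np.linalg.norm(training_scenes - eeg_features, axis=1)
    return int(np.argmin(dists))

blank = np.zeros(64)                    # completely blank input
print(reconstruct(blank))               # still confidently picks a trained scene
```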
Imagine the shitshow this will cause once law enforcement adopts it.<p>Currently, eyewitness criminal sketches are still drawn by an artist, so they are naturally low-fidelity.<p>That will change once you can generate a photo of a face (like <a href="https://thispersondoesnotexist.com/" rel="nofollow">https://thispersondoesnotexist.com/</a>) from someone's brain waves.<p>This will be disastrous on so many levels. The eyewitness might not have a good mental sample of a face from a minority race. The GAN's dataset might also be trained only on celebrity faces, so it doesn't know how to generate anything else (e.g., a teen).<p>But the output will be deceptively high-resolution, so police will rely on it.<p>If you have a generic face, your life is fucked.
My research group is doing the same thing, but with music. Music may be more promising than images because of the Frequency Following Response -- a sort of direct resonance effect in the brain in response to sound.<p>We have 24 subjects listening to 12 songs in random order, with 128-channel EEG sampling at 1000 Hz. We can then label all these data points with the musical features present at the time the data is collected.<p>We don't have a public repo yet, but we are sharing data.
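A minimal sketch of what that labeling step looks like (made-up shapes and feature values, not our real pipeline): align each 1000 Hz EEG sample with whichever musical feature value was active at that moment.

```python
# Hypothetical labeling sketch: for each EEG sample, look up the most
# recent musical feature value (e.g. pitch) at that sample's timestamp.
# Channel count, sampling rate, and feature values are illustrative.
import numpy as np

FS = 1000                               # EEG sampling rate, Hz
N_CHANNELS = 128

def label_eeg(eeg: np.ndarray, feature_times: np.ndarray,
              feature_values: np.ndarray) -> np.ndarray:
    """Return, per EEG sample, the feature value active at that time."""
    n_samples = eeg.shape[1]
    sample_times = np.arange(n_samples) / FS          # seconds
    # index of the last feature change at or before each sample
    idx = np.searchsorted(feature_times, sample_times, side="right") - 1
    return feature_values[np.clip(idx, 0, None)]

eeg = np.zeros((N_CHANNELS, 3000))                    # 3 s of (fake) EEG
times = np.array([0.0, 1.0, 2.0])                     # feature change times (s)
values = np.array([60.0, 64.0, 67.0])                 # e.g. MIDI pitch
labels = label_eeg(eeg, times, values)                # one label per sample
```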
I don't think their model is working, and I'm not sure it ever will. Simply reading brainwaves -- a byproduct, as I understand it, of the actual neuron activity -- couldn't possibly give you an accurate result.
The end results are much, much better than I thought they would be. Luckily, I think it would be easy to fool the training by thinking about a totally different image from the baseline one. Idk if that would stand up to rubber-hose cryptanalysis, but there’s got to be a way that would.
I read an article some time ago about the use of SQUIDs (<a href="https://en.wikipedia.org/wiki/SQUID" rel="nofollow">https://en.wikipedia.org/wiki/SQUID</a>) to map the activity of a single neuron non-invasively. There was a lot of hype at the time for brain-computer interfaces based on that, but then, same as with many technologies that were "just 5 years away", those 5 years came and went with nothing delivered.
There’s a great movie called <i>Until The End Of The World</i> that centers on this kind of technology. Once the scientists get it to work, they realize that they can record and play back their dreams, and they become addicted to watching them.
The Lena source image resulting in a "reconstruction" of some other random woman = model overfit AF. Feed it a dead fish and it will keep generating "reconstructions".