“We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that,” Tang said. “We want to make sure people only use these types of technologies when they want to and that it helps them.”

This will obviously be abused decades from now. Why wouldn't authoritarians listen in on brains that have trained transformer models through years of forced textual consumption?
For now the system has to be trained per person. Is the difference in brain activity across individuals significant enough to prevent a universal model from being built?

It's not clear whether this is language-dependent. If it is, I hope being bilingual or multilingual is a viable defense against this technology.
If this is trained to detect language in your brain, I wonder how this fares with people like me who lack a clear "inner voice" (or "inner monologue"). I don't doubt that it can be trained on a brain to detect that brain's patterns but given how different human brains can be in many ways, I'm inclined to believe this won't be turned into a "universal brain reader" so easily.
So not even your mind would be safe.

If true, everyone who took part in it should be tried as a terrorist for crimes against humanity and never see the light of day again.
“For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences,” Huth said. “We’re getting the model to decode continuous language for extended periods of time with complicated ideas.”

That's pretty scary. Who would work on this?

Refine this, and in 10 years put it into wifi base stations that scan everything, and you can read the thoughts of anyone in a room. Auto-scan for "dangerous" keywords and voilà: an automated police state!