In a similar vein, Usenix Security 2012 had a session called "The Brain" with these two papers:
<a href="https://www.usenix.org/conference/usenixsecurity12/neuroscience-meets-cryptography-designing-crypto-primitives-secure" rel="nofollow">https://www.usenix.org/conference/usenixsecurity12/neuroscie...</a>
<a href="https://www.usenix.org/conference/usenixsecurity12/feasibility-side-channel-attacks-brain-computer-interfaces" rel="nofollow">https://www.usenix.org/conference/usenixsecurity12/feasibili...</a><p>The first is only slightly related to this article; it uses implicit learning to train users to authenticate with secrets that they cannot recall consciously (and therefore can't be coerced into revealing).<p>The second is about recovering secret information from brain-computer interfaces, and though this seems very relevant to the proposal of authenticating via "passthoughts", neither of these papers seem to cite each other.<p>(The Berkeley paper is at <a href="http://www.kisc.meiji.ac.jp/~ethicj/USEC13/submissions/usec13_submission_06.pdf." rel="nofollow">http://www.kisc.meiji.ac.jp/~ethicj/USEC13/submissions/usec1...</a>)
Summary of the actual paper: they take a single-sensor EEG sample of your brain doing some simple task and compare it both to a set of samples of your own brain doing the task (this comparison yields the selfSim value) and to samples of a bunch of other people doing the task (resulting in the crossSim score). "if the percent difference between selfSim and crossSim is greater than or equal to T, we accept the authentication attempt. If not, we reject it." (A rough sketch of this decision rule is below.)<p>Of course, this says nothing about the feasibility of emulating someone else's signal (which may get much easier if it's a single sensor).<p>I'm skeptical both that this will hold up against an adversarial attacker and that it's actually right. Declaring something a unique identifier based on a small sample size reminds me of some of the really bad forensic techniques people have used (e.g. [0]).<p>[0] <a href="http://www.washingtonpost.com/wp-dyn/content/article/2007/11/17/AR2007111701681.html" rel="nofollow">http://www.washingtonpost.com/wp-dyn/content/article/2007/11...</a>
Has anyone used the MindSet mentioned in the article? I remember a similar technology coming out, but I didn't see anything happen with it. Has anyone used any of these for gaming or UI control?
I wonder how this would interact with duress: sometimes it's especially important to log on under duress, and sometimes it's especially important to NOT log on.
That reminds me of a little poem I wrote five years ago: <a href="http://information-man.com/googles-personal_healthcare_gmail_brainwave_id-generation_2b/" rel="nofollow">http://information-man.com/googles-personal_healthcare_gmail...</a>