<i>Last year I was attacked by three policemen at a demonstration who thought I was filming them. I told them I was listening to colours, but they thought I was mocking them and tried to pull the camera off my head.</i><p>For some reason, this strikes me as particularly awful. Not that police don't want to be filmed; that's predictably repugnant for its own clear reasons. It's that they had no problem ripping a prosthesis off someone, before even bothering to try to understand it, because it looked different. What's next, wrestling old folks to the ground and ripping out their hearing aids because they might be recording devices? Will there be an unwritten "normalcy code" that disabled people will have to follow to avoid assault?
I expected yet another “synesthesia—isn’t it interesting!” article, and was pleasantly surprised. As a synesthete (grapheme→colour and sound→colour), I find that the topic has been done to death, and non-synesthetic writers tend to romanticise it to the point of outright misrepresentation. Anyway, the brain’s peculiar propensity for conflating senses seems to have proved useful for once. Props to this guy for hacking his brain to get around a stroke of bad genetic luck.
Very cool!<p>Somewhat related: my main research area is actually in sonification (representing data through non-speech sound) - imagine listening to changes in the stock market through changes in pitch, or loudness, or tempo. We can use sonification to help the visually impaired, communicating data and patterns in new ways, as this guy has done. But we can also use it to revolutionize how we interact with computers - we can be mobile, multitasking, visually overloaded, and still process data through sonification. IMO a potentially revolutionary technology! (Rough sketch of the idea below.)
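To make the stock-market example concrete, here's a minimal toy sketch of that kind of pitch mapping - my own illustration, not any real sonification toolkit, and the value range, frequency range, and note length are all arbitrary choices of mine:

```python
# Toy sonification sketch: map a series of prices to pitch and write
# the result to a WAV file, using only the Python standard library.
import math
import struct
import wave

RATE = 44100          # samples per second
NOTE_SECONDS = 0.25   # duration of the tone for each data point

def value_to_freq(value, lo, hi, f_lo=220.0, f_hi=880.0):
    """Linearly map a data value onto a frequency range (A3..A5 here)."""
    t = (value - lo) / (hi - lo) if hi != lo else 0.5
    return f_lo + t * (f_hi - f_lo)

def sonify(values, path="sonification.wav"):
    lo, hi = min(values), max(values)
    frames = bytearray()
    for v in values:
        freq = value_to_freq(v, lo, hi)
        for i in range(int(RATE * NOTE_SECONDS)):
            sample = math.sin(2 * math.pi * freq * i / RATE)
            frames += struct.pack("<h", int(sample * 32767 * 0.5))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(RATE)
        w.writeframes(bytes(frames))

# Rising prices come out as rising pitch:
sonify([101.2, 102.8, 101.9, 104.5, 107.1, 106.3, 109.8])
```

The same idea extends to loudness or tempo as additional channels, so several data streams can be monitored at once without looking at a screen.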
Some context: there is an entire neuroscientific field of study devoted to substituting one sensory modality with another: <a href="http://en.wikipedia.org/wiki/Sensory_substitution" rel="nofollow">http://en.wikipedia.org/wiki/Sensory_substitution</a><p>The field was pioneered by Paul Bach-y-Rita (<a href="http://en.wikipedia.org/wiki/Paul_Bach-y-Rita" rel="nofollow">http://en.wikipedia.org/wiki/Paul_Bach-y-Rita</a>) who most notably invented a setup that allowed blind people to "see" via a camera connected to a vibrating grid attached to their backs, effectively substituting haptic for visual input.<p>In a nutshell, there is nothing intrinsically "visual" about neurons in the visual cortex, nor are neurons in, e.g., the auditory cortex exclusively tuned towards sound - the brain is plastic enough to "make sense" of a new type of input signal, which typically takes a couple of weeks.<p>My co-founder Peter König at EyeQuant.com - a neuroscience professor at the University of Osnabrueck - is working on similar projects with his feelspace group, where they created a compass belt that vibrates wherever north is, taking sensory substitution a step further by effectively creating a <i>new</i> sensory modality of direction (Wired article: <a href="http://www.wired.com/wired/archive/15.04/esp.html" rel="nofollow">http://www.wired.com/wired/archive/15.04/esp.html</a>)<p>As an excellent philosophical take on this I would recommend Alva Noe's "Action in Perception":
<a href="http://www.amazon.com/dp/0262140888/" rel="nofollow">http://www.amazon.com/dp/0262140888/</a>
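For anyone curious how a compass belt like that works in principle, here's a rough sketch of the core logic - purely my guess at the approach, with a hypothetical motor layout, not the feelspace group's actual firmware:

```python
# Hypothetical compass-belt logic: given the wearer's current heading,
# buzz whichever of N motors around the waist currently points north.
# The motor count and indexing are illustrative stand-ins.

N_MOTORS = 12  # motors spaced evenly around the belt, motor 0 at the front,
               # indices increasing clockwise when viewed from above

def motor_for_north(heading_deg, n_motors=N_MOTORS):
    """heading_deg: compass heading the wearer faces (0 = north).
    Returns the index of the motor that currently points north."""
    # North lies at (360 - heading) degrees clockwise from the wearer's front.
    relative = (360.0 - heading_deg) % 360.0
    return round(relative / (360.0 / n_motors)) % n_motors

assert motor_for_north(0) == 0     # facing north: front motor buzzes
assert motor_for_north(90) == 9    # facing east: north is to the wearer's left
assert motor_for_north(180) == 6   # facing south: back motor buzzes
```

The interesting part, per the Wired article, is that after a few weeks wearers stop consciously decoding the buzzing and just "feel" which way north is.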
This is some really cool tech. But while light frequencies can obviously be translated into other media we can perceive, "color" as such always comes attached with a ton of spatial information, so I wonder how well the eyeborg conveys that? It seems like this would feel like being <i>extremely</i> nearsighted, which is suggested when he mentions getting close to peoples' faces when doing portraits. I also wonder how much this is a constraint of the technology, and how much of it reflects the limits of our sense perception? For example, if the device were able to encode arbitrarily specific spatial information, could one train oneself to instantly distinguish among 100s of unique, simultaneous sounds (like we do with sight), or would the experience always be a din?
I have fully working eyes, but the concept of self-induced synesthesia is interesting, especially for the purposes of getting extra-human senses. (Even if they're not that useful in practice.) Of course, if your eyes can already see color, glasses with a screen filter might be more efficient.
This does look really cool; however, surely his resolution is only 1 pixel?<p>He is limited to hearing one note at a time, so he can only perceive one color at a time?<p>Am I missing the point?
Is the color->note map arbitrary or is there some logic behind it? I imagine it greatly affects the associations (including emotions) he has built up over the years.
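One logic-based scheme would be to treat colour as a light frequency and transpose it down into the audible range, so that higher light frequencies map monotonically to higher pitches. To be clear, this is my speculation about how such a mapping <i>could</i> work, not a claim about the eyeborg's actual scale:

```python
# Speculative colour->note mapping (not necessarily the eyeborg's):
# transpose the light frequency down 40 octaves into audible range.

OCTAVES_DOWN = 40  # 2**40 ~ 1.1e12, so ~430 THz (red) lands near 391 Hz

def light_to_sound_hz(light_thz):
    """Transpose a light frequency (in THz) down into audible Hz."""
    return light_thz * 1e12 / (2 ** OCTAVES_DOWN)

for colour, thz in [("red", 430), ("green", 560), ("violet", 750)]:
    print(f"{colour:>6}: {light_to_sound_hz(thz):6.1f} Hz")
# red: ~391 Hz, green: ~509 Hz, violet: ~682 Hz - the mapping is
# monotone in light frequency rather than an arbitrary lookup table.
```

Either way, I'd agree the choice of map would shape the associations he builds: a monotone map makes "warmer" colours literally lower-pitched, which is a very different experience from a random assignment.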
Wow, that's really cool tech.<p>Not gonna lie though, I thought this was going to be about <a href="http://en.wikipedia.org/wiki/Aphex_Twin" rel="nofollow">http://en.wikipedia.org/wiki/Aphex_Twin</a>