They've got excellent PR, but they're not doing anything particularly novel. They're advertising a solution to a long-solved part of the puzzle, while saying nothing about the really difficult bit of electronic music control - interoperability.

A quick bit of synthesiser history:

Traditional electronic keyboards sense how hard you hit a key with two switches, set a fixed distance apart vertically. Measure the time between the two switches closing and, since the distance is known, you can solve v = d/t. Simple, cheap, and all you need for controlling an emulation of a piano.

That was improved by adding aftertouch, a simple pressure strip underneath the keybed which provided an additional axis of control. The approach evolved quite quickly, peaking in 1976 with Yamaha's CS-80 synthesiser, which could sense both vertical and horizontal pressure on each key polyphonically. It was a hugely elaborate and expensive instrument, but extremely expressive - as best heard on Vangelis' soundtrack to Blade Runner.

Progress in this field came to a grinding halt in 1983 with the release of MIDI, an interoperable standard for electronic musical instruments that became completely ubiquitous. MIDI was a tremendous breakthrough and made all sorts of previously very difficult things quite easy, but it has all of the usual failings of a successful standard.

MIDI has left us stuck with design decisions from 1983, the worst of these being the incredibly low data rate. MIDI is an 8-bit serial protocol running at 31.25 kbaud - with start and stop bits, that works out to about 3,000 bytes per second, or roughly a thousand three-byte channel messages. Beyond velocity, the standard gives you essentially no per-note expression (polyphonic aftertouch is in the spec but almost never implemented). You can bend the pitch of all the notes currently sounding, but not one note out of a chord. And if you try to send too much controller information over a channel, the timing goes to pot as you run out of bandwidth.

There's an obvious problem of platform lock-in, which nobody seems able to break: a controller of this type can't usefully control any existing sound source. Numerous controllers of similar sophistication to the Seaboard already exist, and they've all failed commercially for exactly that reason.

The hoped-for solution is the Open Sound Control protocol, but it faces obstacles of its own. The most obvious is that MIDI is deeply entrenched, to the point that its shortcomings have become invisible to most musicians - our sense of the boundaries of electronic music is often inseparable from the limits of MIDI, so we rarely think about the sounds we're unable to make with current technology. The other big issue is the one that afflicts every reform of an old standard: an inability to manage scope and complexity.

The developers of OSC are obviously fearful of repeating the mistakes of MIDI, so they've gone in completely the opposite direction and designed a totally open-ended protocol. This has massively limited its adoption by musicians, because it's extremely difficult to understand. A MIDI channel message is just two or three bytes, and it's not hard to memorise the entire protocol - until graphical computer-based sequencers arrived, it was quite common to tidy up a recorded sequence by editing the bytes directly in a hex editor.
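To make the comparison concrete, here's a minimal sketch - Python, purely illustrative, nothing to do with Roli's hardware - of what a couple of common MIDI channel messages look like on the wire:

    # A minimal sketch: the raw bytes of two common MIDI 1.0 channel
    # messages, to show how little there is to learn.

    NOTE_ON    = 0x90  # status byte: note-on, channel 1
    PITCH_BEND = 0xE0  # status byte: pitch bend, channel 1

    # Note-on is three bytes: status, key number, velocity.
    middle_c_on = bytes([NOTE_ON, 60, 100])

    # Pitch bend is also three bytes: status plus a 14-bit value split
    # across two data bytes (LSB first). There's no key number in the
    # message - it bends every note sounding on the channel, which is
    # exactly the per-note limitation described above.
    bend_up = bytes([PITCH_BEND, 0x00, 0x50])

    print(middle_c_on.hex(' '))  # 90 3c 64
    print(bend_up.hex(' '))      # e0 00 50

An equivalent OSC message carries a null-padded address string (something like /synth/1/pitch), a type-tag string and 32-bit-aligned arguments - vastly more flexible, and vastly more to hold in your head.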
OSC is designed to deal with every possible edge case, which of course makes it needlessly complex for the most common use-cases.

Controllers like these are doomed to niche appeal unless the manufacturers focus on the real problem: how to use the control data they generate in a manner that is both musically useful and comprehensible to the musician.

The last major attempt was the Eigenharp, which shipped with a bespoke software suite and a number of software instruments designed specifically for it. It garnered a great deal of attention in the popular press, but nobody has made any worthwhile music with it yet.