This is an excellent project, congrats.

However, it is in no sense "new or unique" as the authors suggest. There is an extensive (20+ years) research literature on data sonification:

http://www.icad.org/knowledgebase

Note also the very many art-led sonification projects, turning everything from live IP traffic to gene-sequence and x-ray astronomy datasets into sound, carried out since the early 90s. The Prix Ars Electronica archives may be a good place to look for these.

My summary of the field in general, FWIW, is this: it's trivial to turn a realtime data stream into sound. It's slightly harder to turn the stream into either a) music or b) a non-dissonant sound stream (see the sketch at the end of this comment), and it's very hard indeed to create a *legible* (i.e. useful, reversible) general-purpose sonification framework, because auditory discrimination abilities vary so widely from individual to individual and are highly context-dependent.

And because sound exists in time rather than space, there is no simple way to compare the data against itself, as there is when one looks at a visual graph. Listeners rely on shaky old human memory: did I hear that before? Was it lower, louder? And so on.

That said, I remain fascinated by the area, and I'd propose that a sonic markup language for the web would be interesting.

Sneaky plug: my current project (http://chirp.io) began by looking at 'ambient alerts' until we reached the point above and decided to put machine-readable data into sound, instead of attaching sound to data for humans.

Good luck, and I very much look forward to hearing more!
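
To make the "trivial vs. slightly harder" distinction concrete, here is a minimal parameter-mapping sketch in Python (standard library only). All names, frequency ranges, and the choice of a pentatonic scale are my own illustrative assumptions, not taken from this project or any particular framework: the naive mapping pushes values onto a continuous frequency range, while the "musical" mapping quantizes each value to the nearest scale note, which is one common way to keep the output from sounding dissonant.

    # Hypothetical sketch: map a numeric stream to pitches and write WAV files.
    # "naive" = continuous frequency mapping; "musical" = quantized to a
    # pentatonic scale to avoid dissonance. Illustrative only.
    import math
    import struct
    import wave

    SAMPLE_RATE = 44100
    NOTE_SECONDS = 0.2

    def tone(freq, seconds, amplitude=0.4):
        """Render one sine tone as a list of 16-bit samples."""
        n = int(SAMPLE_RATE * seconds)
        return [int(amplitude * 32767 * math.sin(2 * math.pi * freq * i / SAMPLE_RATE))
                for i in range(n)]

    def naive_map(value, lo, hi, f_lo=220.0, f_hi=880.0):
        """Linear map of a value onto a continuous frequency range."""
        t = (value - lo) / (hi - lo)
        return f_lo + t * (f_hi - f_lo)

    PENTATONIC = [0, 2, 4, 7, 9]  # semitone offsets of a major pentatonic scale

    def musical_map(value, lo, hi, base_freq=220.0, octaves=2):
        """Map a value onto the nearest note of a pentatonic scale."""
        t = (value - lo) / (hi - lo)
        steps = [o * 12 + s for o in range(octaves) for s in PENTATONIC]
        semitone = steps[min(int(t * len(steps)), len(steps) - 1)]
        return base_freq * (2 ** (semitone / 12.0))

    def write_wav(filename, samples):
        with wave.open(filename, "w") as w:
            w.setnchannels(1)
            w.setsampwidth(2)
            w.setframerate(SAMPLE_RATE)
            w.writeframes(struct.pack("<%dh" % len(samples), *samples))

    if __name__ == "__main__":
        data = [3.1, 4.7, 2.2, 8.9, 6.0, 7.5, 1.4, 5.3]  # stand-in for a realtime stream
        lo, hi = min(data), max(data)
        naive = [s for v in data for s in tone(naive_map(v, lo, hi), NOTE_SECONDS)]
        musical = [s for v in data for s in tone(musical_map(v, lo, hi), NOTE_SECONDS)]
        write_wav("naive.wav", naive)
        write_wav("musical.wav", musical)

Even the quantized version only buys you "non-dissonant"; it says nothing about legibility, which is where the genuinely hard problems start.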