This announces a new dataset where recorded performances are precisely synchronized to MIDI transcriptions. The article doesn't quite get the implications right, though: it's very useful for performance-related research, not so much for AI composition.

As a composer, the most exciting potential I see here is training a model to create realistic mockups from MIDI compositions. For that purpose, though, it would be better to start with a fully monophonic/solo-instrument dataset, which would simplify the learning problem. Also, MIDI data alone is not sufficient: annotations on dynamics and playing technique would be necessary to make a good mockup tool, since this is exactly the kind of information one would give even to human performers (see the sketch at the end of this comment).

Anyway, it would be tough for such a tool to catch up with current state-of-the-art, sample-based mockup tools, which are already astonishing in their realism, although they usually require a lot of work to get good results. But one can always dream of a "Stokowski" or "Karajan" neural network that interprets your MIDI composition with emotion and sensitivity!
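
To make the annotation point concrete, here is a minimal Python sketch of what an annotated note event could look like as training data. The field names and the dynamic/technique vocabularies are my own invention for illustration, not part of this dataset or any existing tool:

    from dataclasses import dataclass
    from enum import Enum


    class Dynamic(Enum):
        # Notated dynamic markings, pianissimo through fortissimo.
        PP = "pp"
        P = "p"
        MP = "mp"
        MF = "mf"
        F = "f"
        FF = "ff"


    class Technique(Enum):
        # A tiny, hypothetical vocabulary of playing techniques.
        LEGATO = "legato"
        STACCATO = "staccato"
        PIZZICATO = "pizzicato"
        TREMOLO = "tremolo"


    @dataclass
    class AnnotatedNote:
        pitch: int        # MIDI note number, 0-127
        onset: float      # start time in seconds
        duration: float   # length in seconds
        velocity: int     # raw MIDI velocity, 0-127
        dynamic: Dynamic  # notated dynamic, distinct from velocity
        technique: Technique | None = None  # playing technique, if marked


    # Example: a forte A4 on a violin, played pizzicato.
    note = AnnotatedNote(pitch=69, onset=1.5, duration=0.4,
                         velocity=96, dynamic=Dynamic.F,
                         technique=Technique.PIZZICATO)

The key design point is that the notated dynamic is kept separate from the MIDI velocity: velocity records what a performance captured, while the dynamic marking records the composer's intent, and a mockup model would need both to learn the mapping between them.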