Steve Engels at the University of Toronto has done some work on exactly this. You can read a bit about the work and listen to some samples here: http://www.magazine.utoronto.ca/leading-edge/computer-music-composition-steve-engels-daniel-eisner/

His system used a similar technique: a note-by-note Markov chain over MIDI data, generating music that resembles an initial piece of training data. The difference in his model is that it's trained on only a single piece at a time. This produces significantly more coherent music, but at the cost of making the output effectively a variation on the original piece.

The biggest challenge in this kind of work is getting an overall structure for the entire song. In talks at the university, Engels has described the output of his model as that of a "distracted jazz pianist": the moment-to-moment melodies are coherent, but the song lacks overall form and direction.
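To make the note-by-note idea concrete, here's a minimal sketch of that kind of Markov chain: it records which note follows which in a single training piece, then random-walks that table to generate new material. This is illustrative only, not Engels's actual system; the `piece` data and function names are made up, and a real version would extract note numbers from a MIDI file (e.g. with a library like mido) rather than hard-coding them.

    import random
    from collections import defaultdict

    def build_chain(notes):
        """Map each note to the list of notes that follow it in the piece."""
        chain = defaultdict(list)
        for cur, nxt in zip(notes, notes[1:]):
            chain[cur].append(nxt)
        return chain

    def generate(chain, start, length=32):
        """Random-walk the chain to produce a new note sequence."""
        out = [start]
        for _ in range(length - 1):
            followers = chain.get(out[-1])
            if not followers:              # dead end: restart from the seed note
                followers = chain[start]
            out.append(random.choice(followers))
        return out

    # Hypothetical training data: MIDI note numbers from a single piece.
    piece = [60, 62, 64, 65, 67, 65, 64, 62, 60, 64, 67, 72, 67, 64, 60]
    chain = build_chain(piece)
    print(generate(chain, start=piece[0]))

Training on a single piece means the transition table is sparse, so the walk keeps landing on note-to-note moves from the original; that's why the output stays locally coherent but ends up sounding like a variation on the source, with nothing enforcing large-scale form.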