Author (http://research.google.com/pubs/author39086.html) of the paper here. I'm amused this is on Hacker News. The goal was to learn very long-timescale limit-cycle behavior in a recurrent neural network: the chord changes are separated by many intervening melodic events (notes). As it turns out, even LSTM is pretty fragile when it comes to this. One problem is stability: if the network gets too perturbed, it can drift into a region of state space from which it never recovers. I'm not all that proud of the specific improvisations from that network, but I did enjoy learning what's possible and impossible in the space. I think now, with new ways to train larger networks on more data, it's time to revisit this challenge.

Edit: Formatting. I clearly don't post much on HN.
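To make the long-timescale problem concrete, here is a minimal PyTorch sketch (not the code from the paper; the vocabulary sizes, the per-step chord label, and the joint note/chord loss are all assumptions for illustration). The point is just that the chord target changes only every few bars, so its learning signal has to survive many intervening note steps:

```python
# Minimal sketch (not the paper's code): an LSTM that must carry chord
# context across many intervening note events. Vocabulary sizes, layer
# widths, and the joint note/chord loss are illustrative assumptions.
import torch
import torch.nn as nn

NOTE_VOCAB = 128   # e.g. MIDI pitches (assumption)
CHORD_VOCAB = 24   # e.g. 12 roots x major/minor (assumption)

class MelodyChordLSTM(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.note_emb = nn.Embedding(NOTE_VOCAB, 64)
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.note_head = nn.Linear(hidden, NOTE_VOCAB)    # next-note prediction
        self.chord_head = nn.Linear(hidden, CHORD_VOCAB)  # long-timescale target

    def forward(self, notes, state=None):
        h, state = self.lstm(self.note_emb(notes), state)
        return self.note_head(h), self.chord_head(h), state

# One training step: the chord label changes only every few bars, so the
# gradient for the chord head spans many note steps -- exactly the kind of
# long-range dependency that makes plain recurrent nets fragile here.
model = MelodyChordLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
notes = torch.randint(0, NOTE_VOCAB, (8, 64))    # batch of note sequences (dummy data)
chords = torch.randint(0, CHORD_VOCAB, (8, 64))  # chord label per note step (dummy data)

opt.zero_grad()
note_logits, chord_logits, _ = model(notes[:, :-1])
loss = (nn.functional.cross_entropy(note_logits.reshape(-1, NOTE_VOCAB),
                                    notes[:, 1:].reshape(-1))
        + nn.functional.cross_entropy(chord_logits.reshape(-1, CHORD_VOCAB),
                                      chords[:, 1:].reshape(-1)))
loss.backward()
opt.step()
```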
Really fascinating. I wonder if distinguishing between a motif and a random set of notes would help provide structure here. The model would decide "I'm going to build a motif and save it for variation later" for four bars, then play more freely through the turnaround. Then, on the next pass through, it applies variations to the pre-established motif? Something like the toy sketch below.
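A purely illustrative sketch of that motif-and-variation idea (the function names and the transposition-based "variation" are my own assumptions, not anything from the paper):

```python
# Toy sketch of the motif-and-variation idea: establish a motif, play
# freer material in the turnaround, then vary the saved motif.
import random

def build_motif(length=8, low=60, high=72):
    """Generate and save a short motif (MIDI-ish pitch numbers)."""
    return [random.randint(low, high) for _ in range(length)]

def vary_motif(motif, max_shift=2, mutate_prob=0.25):
    """Return a variation: small transpositions of a few notes."""
    return [p + random.randint(-max_shift, max_shift) if random.random() < mutate_prob else p
            for p in motif]

def turnaround(length=4, low=55, high=79):
    """Freer, 'random' material for the turnaround bars."""
    return [random.randint(low, high) for _ in range(length)]

motif = build_motif()             # bars 1-4: establish and save the motif
phrase = motif + turnaround()     # turnaround: looser material
phrase += vary_motif(motif)       # next pass: vary the pre-established motif
print(phrase)
```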
Interesting, but it feels like the music is forever stuck in an intro of some kind. I never quite get the feeling that it's building towards something.