If you think this is impressive, consider that Mozart himself wrote an algorithm to randomly generate music... and it still sounds like Mozart: http://www.rationalargumentator.com/index/blog/2015/06/variations-minuet-mozart/
Noob question: It intuitively seems to me that by feeding in raw text of a structured format (such as the music notation in the article) we're making the algorithm unnecessarily learn the syntax in addition to the interesting part, namely the high-level musical patterns. What kind of results would you expect from running the same experiment, but with an input encoding more specialized to the problem domain? Would the performance benefits be significant?
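To make the contrast concrete, here is a minimal sketch of what "a more specialized encoding" could mean. This is not how the article actually encodes its input; the tune and the hand-parsed events below are made up for illustration.

```python
# Hypothetical sketch: two ways of presenting the same tune to a sequence model.

# 1) Raw text: the model must learn ABC syntax *and* musical structure.
abc_tune = "X:1\nT:Example\nK:D\n|:d2 fd e2 ge|f2 af e2 dB:|"
char_vocab = sorted(set(abc_tune))
char_ids = [char_vocab.index(c) for c in abc_tune]    # one token per character

# 2) Domain-specific: the first bar pre-parsed into (pitch, duration) events,
#    so the model only has to learn musical patterns, not notation.
events = [("d", 2), ("f", 1), ("d", 1), ("e", 2), ("g", 1), ("e", 1)]
event_vocab = sorted(set(events))
event_ids = [event_vocab.index(ev) for ev in events]  # one token per note

print(len(char_ids), "character tokens vs", len(event_ids), "note tokens")
```

The trade-off is that the second representation removes the burden of learning ABC syntax and shortens the sequences, at the cost of writing a parser and throwing away whatever the model might have picked up from the raw notation (repeats, key signatures, ornaments, and so on).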
A while back I created a program which uses genetic algorithms to generate melodies. I used the generated melodies as inspiration for music composition. The idea (never implemented, though) was to add a fitness function based on a neural network trained on other melodies or on user input. More information can be found here: http://jcraane.blogspot.nl/2009/06/melody-composition-using-genetic.html

The source code is available here: https://github.com/jcraane/melodycomposition_genetic

Some sample melodies are in the docs/samples folder.

It may take some time to get it working again, but it shouldn't be that hard.
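For readers who don't want to dig through the repo, here is a rough, self-contained sketch of that idea. The fitness function here is a stand-in (it just rewards stepwise motion), not what the linked project actually uses; a trained network or user ratings could be plugged in instead.

```python
import random

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # C major, MIDI note numbers
LENGTH = 16

def random_melody():
    return [random.choice(SCALE) for _ in range(LENGTH)]

def fitness(melody):
    # Placeholder: prefer small melodic intervals. Swap in a neural network
    # or user feedback here, as suggested in the post above.
    return -sum(abs(a - b) for a, b in zip(melody, melody[1:]))

def crossover(a, b):
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

def mutate(melody, rate=0.1):
    return [random.choice(SCALE) if random.random() < rate else n for n in melody]

population = [random_melody() for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                      # keep the fittest melodies
    population = parents + [mutate(crossover(*random.sample(parents, 2)))
                            for _ in range(40)]    # breed the rest

print(max(population, key=fitness))
```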
Another (hilarious) tool which demonstrates the difference in quality between lightly-trained and strongly-trained RNNs is RoboRosewater, which generates Magic: the Gathering cards using networks of varying quality/sanity, indicated by the card art: https://twitter.com/RoboRosewater
The authors of these music generators should submit some of the compositions to online music libraries, song competitions, etc., and see if they get accepted! À la what happened back in the day with peer-reviewed journals and: http://www.elsewhere.org/journal/pomo/
I am a neural network noob and only know the basic feedforward network.

So the training set is just text files containing songs? How does it test if the output is correct or not? If I understand correctly, the goal here was just to produce outputs in the correct format. If one wanted to train for quality as well, would one need to grade every output the network produces by hand?
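For what it's worth, the usual answer for character-level models like the one in the article is that there is no hand-grading during training: the "correct" output at each step is simply the next character of a real tune, and the loss measures how much probability the model assigned to it. A toy sketch (the uniform "model" here is just a placeholder for a real RNN):

```python
import math

text = "|:d2 fd e2 ge|f2 af e2 dB:|"
vocab = sorted(set(text))

def model_predict(context):
    # Placeholder model: a uniform distribution over the vocabulary. A real RNN
    # would return probabilities conditioned on the context seen so far.
    return {ch: 1.0 / len(vocab) for ch in vocab}

# Cross-entropy loss: how "surprised" the model is by each actual next character.
loss = 0.0
for i in range(len(text) - 1):
    probs = model_predict(text[: i + 1])
    loss -= math.log(probs[text[i + 1]])

print("average loss per character:", loss / (len(text) - 1))
```

Musical quality is only optimized indirectly: a model that predicts real tunes well tends to generate plausible-sounding ones, but nothing in this loss rewards "good music" as such. For that you would indeed need human ratings or some other learned critic.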
Isn't this the third project like this on HN?

These things generate tunes which sound OK for a few seconds, but after tens of seconds you realize there's no higher-level structure at all. It's just random.
The music came out sounding quite fun. I could almost imagine hearing at least some passages from it in video games. Perhaps some old-school Zelda/JRPG game that would suit the folky quality of the music.

The bass line was generally quite simplistic; I wonder what would happen if you codified Gradus ad Parnassum and taught the RNN counterpoint [0].

[0] https://en.m.wikipedia.org/wiki/Johann_Joseph_Fux
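To give a flavour of what "codifying Gradus ad Parnassum" might look like, here is a tiny, deliberately incomplete rule checker for first-species counterpoint (consonant intervals only, and a crude check against parallel perfect fifths/octaves). One could imagine using something like this to score or filter generated output; it is only an illustration, not a faithful encoding of Fux's rules.

```python
# Interval classes (in semitones, mod 12) treated as consonant in first species:
# unison/octave, 3rds, 5th, 6ths.
CONSONANT = {0, 3, 4, 7, 8, 9}

def first_species_violations(cantus, counterpoint):
    violations = []
    for i, (cf, cp) in enumerate(zip(cantus, counterpoint)):
        interval = abs(cp - cf) % 12
        if interval not in CONSONANT:
            violations.append((i, "dissonant interval"))
        if i > 0:
            prev = abs(counterpoint[i - 1] - cantus[i - 1]) % 12
            # Crude approximation of the parallel-perfects rule: flag moving
            # into the same perfect interval class (unison/octave or fifth).
            if prev == interval and interval in (0, 7) and cp != counterpoint[i - 1]:
                violations.append((i, "parallel perfect interval"))
    return violations

# Example: a cantus firmus and a candidate counterpoint line (MIDI note numbers).
cantus = [60, 62, 64, 62, 60]
counterpoint = [67, 69, 71, 65, 64]
print(first_species_violations(cantus, counterpoint))
```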
Is there a better music database to work from for music generation? I'm surprised there isn't a massive db of 19th century sheet music or player piano rolls somewhere.