Raymond Scott[0] (1908-1984) comes to mind.<p>He designed rhythm generators and automatic bassline sequencers. He died before finishing a monster electronic instrument called the "Electronium", of which only one incomplete and partially functional example exists, now owned by Mark Mothersbaugh.<p>I only wish his devices were documented or understood well enough that simulators or reproductions could be built.<p>[0]: <a href="https://en.m.wikipedia.org/wiki/Raymond_Scott" rel="nofollow">https://en.m.wikipedia.org/wiki/Raymond_Scott</a><p>[1]: <a href="https://youtu.be/0V2TZKcWnXE" rel="nofollow">https://youtu.be/0V2TZKcWnXE</a><p>[2]: <a href="https://youtu.be/o6VsZiNjjZE" rel="nofollow">https://youtu.be/o6VsZiNjjZE</a>
It's interesting: some of the earliest European polyphony could be described as "procedurally generated"; the cantus firmus was assigned liturgically, and the descant could be determined by what we would today call an algorithm (sacred music was at this point considered science rather than a creative art). Plus ça change...
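For the curious, here is a minimal sketch of the simplest such rule set, strict parallel organum, where the added voice simply shadows the cantus firmus at a fixed interval. This is my own illustration (the function name and the D-dorian phrase are invented); real organum treatises add further rules about openings, cadences, and forbidden intervals.

    # Sketch: strict parallel organum, arguably the simplest medieval
    # "algorithm" for deriving a second voice from a cantus firmus.
    # Pitches are MIDI note numbers; illustrative only.

    def parallel_organum(cantus_firmus, interval=-7):
        """Return a vox organalis a fixed interval from the cantus firmus.
        -7 semitones = a perfect fifth below."""
        return [note + interval for note in cantus_firmus]

    cantus = [62, 64, 65, 67, 65, 64, 62]   # a short D-dorian phrase
    organal = parallel_organum(cantus)       # follows at the fifth below
    for cf, vo in zip(cantus, organal):
        print(cf, vo)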
Academia is significantly behind the times in this area.<p>Artists like Autechre (and others linked in this thread) have been doing this for years using Max/MSP, to much greater effect (albeit apparently as a hybrid, where the humans sit at the "control panel" of the generative engine and guide it in the direction they want).<p><a href="https://www.youtube.com/watch?v=wdKIJHXzPkk" rel="nofollow">https://www.youtube.com/watch?v=wdKIJHXzPkk</a>
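The "control panel" idea is easy to sketch even outside Max/MSP. Here is a toy Python loop of my own (nothing to do with Autechre's actual patches; the parameter names are made up) where a handful of human-set values steer an otherwise random note stream:

    import random

    # Toy "control panel": the human sets these, the generator does the rest.
    # Purely illustrative; real patches are far richer.
    controls = {"density": 0.6, "scale": [0, 2, 3, 5, 7, 10], "octave": 4, "steps": 16}

    def generate_bar(controls, rng=random):
        root = 12 * (controls["octave"] + 1)      # MIDI octave offset
        bar = []
        for _ in range(controls["steps"]):
            if rng.random() < controls["density"]:
                bar.append(root + rng.choice(controls["scale"]))
            else:
                bar.append(None)                   # rest
        return bar

    print(generate_bar(controls))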
Years ago I remember listening to algomusic on the Amiga. Every so often I do a search to see whether anyone has made a version that works on Linux, but so far no luck. This is getting there, but it still feels a bit rough around the edges.
Oh this is very cool. The structure reminds me of Matthew Brown's "Music for Shuffle" project: <a href="http://musicforshuffle.com/2014/04/06/session1/" rel="nofollow">http://musicforshuffle.com/2014/04/06/session1/</a>
Who owns the copyright on procedurally generated music?<p>Mostly curious since some of these sound better than the ambient tracks you get in games. Even better, you could constrain the parameters of the generation to have each instance of a game generate different, but thematically similar, procedural music.
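That last idea is essentially seeded, constrained randomness. A rough sketch of what I mean (the theme parameters and names are hypothetical, not taken from any particular engine):

    import random

    # Each game instance gets its own seed, but every parameter is drawn from
    # a range chosen to keep the result "thematically similar".
    THEME = {"tempo": (60, 80), "scale": [0, 2, 4, 7, 9], "voices": (2, 4)}

    def instance_params(seed, theme=THEME):
        rng = random.Random(seed)
        return {
            "tempo": rng.randint(*theme["tempo"]),
            "scale": theme["scale"],               # fixed: keeps the mood
            "voices": rng.randint(*theme["voices"]),
        }

    print(instance_params(seed=1))   # one player's soundtrack
    print(instance_params(seed=2))   # another player's: different, same theme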
This was actually much better than I expected. Wondering what kind of neural net can do this, and what its outputs are. It sounds a bit "phasey", as if there is an IFFT step.
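For anyone wondering what that "phasey" quality might be: magnitude-only spectral resynthesis (an IFFT with discarded or randomised phase) keeps the spectral colour but smears transients in time. A crude numpy sketch of the effect; this is my guess at the cause, not the actual model:

    import numpy as np

    # Take a short signal, keep only its magnitude spectrum, replace the phase
    # with random values, and resynthesize via IFFT. The result keeps the
    # spectral colour but smears the transient: the classic "phasey" artefact.
    sr = 8000
    t = np.arange(sr) / sr
    x = np.sin(2 * np.pi * 440 * t) * (t < 0.05)    # a short 440 Hz blip

    mag = np.abs(np.fft.rfft(x))
    random_phase = np.exp(1j * np.random.uniform(-np.pi, np.pi, mag.shape))
    y = np.fft.irfft(mag * random_phase, n=len(x))   # phase info thrown away

    print("original peak:", x.max(), "resynthesized peak:", y.max())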