In every neural-network music-generation experiment I've come across, the network seems to receive its input only as notes and timing (via MIDI scores, I presume). From my limited understanding, these models base their generations on their "understanding" of the notation layout.

Has anyone heard of a NN experiment that uses pitch (frequency) instead, perhaps with natural harmonics included?

From my point of view, since perceived harmony and intervals depend on the ratio between two frequencies, if we never tell the machine how superposed notes actually sound, it can't generate harmony deliberately; it just ends up stacking thirds by accident, as these models seem to do.
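
To make the ratio point concrete, here's a minimal Python sketch (assuming standard 12-tone equal temperament with A4 = 440 Hz; midi_to_hz is just an illustrative helper, not from any of those experiments):

    def midi_to_hz(note):
        # Equal-temperament mapping, assuming A4 (MIDI 69) = 440 Hz
        return 440.0 * 2 ** ((note - 69) / 12)

    # C4 (MIDI 60) and E4 (MIDI 64) are a major third apart.
    # Symbolically a model sees "4 semitones"; acoustically the two
    # notes stand in a frequency ratio close to the just 5:4.
    c4, e4 = midi_to_hz(60), midi_to_hz(64)
    print(c4, e4, e4 / c4)  # ~261.63 Hz, ~329.63 Hz, ratio ~1.2599
    print(5 / 4)            # just-intonation major third = 1.25

A model fed raw MIDI numbers only ever sees the integer distance 4; the roughly 5:4 ratio that makes the interval sound consonant never appears anywhere in its input.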