To the author: Have you tried using a logarithmic frequency scale in the spectrogram? [1] That representation is closer to the way humans perceive sound and gives you finer resolution in the lower frequencies. [2] If you want to move even closer to human perception, take a look at Google's CARFAC research. [3] Basically, they model the ear. I've prepared a Python utility for converting sound to a Neural Activity Pattern (it resembles a spectrogram when you plot it) here: <a href="https://github.com/iver56/carfac/tree/master/util" rel="nofollow">https://github.com/iver56/carfac/tree/master/util</a><p>[1] <a href="https://sourceforge.net/p/sox/feature-requests/176/" rel="nofollow">https://sourceforge.net/p/sox/feature-requests/176/</a><p>[2] <a href="https://en.wikipedia.org/wiki/Mel_scale" rel="nofollow">https://en.wikipedia.org/wiki/Mel_scale</a><p>[3] <a href="http://research.google.com/pubs/pub37215.html" rel="nofollow">http://research.google.com/pubs/pub37215.html</a>
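For reference, here is a minimal sketch of a mel-scaled (log-like) spectrogram using librosa; the library, file name and parameter values are my own choices, not anything from the article:

```python
# A minimal sketch of a mel-scaled spectrogram with librosa.
# Assumptions: librosa is installed and "track.mp3" is a stand-in file name;
# this is not the author's pipeline.
import librosa
import numpy as np

y, sr = librosa.load("track.mp3", sr=22050, duration=2.5)
# 441 samples per hop at 22050 Hz is roughly 20 ms per column, 128 mel bands per row
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128, hop_length=441)
mel_db = librosa.power_to_db(mel, ref=np.max)  # log-compress before plotting or feeding a net
print(mel_db.shape)  # (128, number_of_frames)
```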
Wow, I find it incredible that this works. As I understand it, the approach is to do a Fourier transform on a couple of seconds of the song to create a 128x128-pixel spectrogram. Each horizontal pixel represents a 20 ms slice in time, and each vertical pixel represents 1/128 of the frequency range.<p>Then, treating these spectrograms as images, train a neural net to classify them using pre-labelled samples, and finally run samples from the unknown songs through the trained net. I find it incredible that 2.5 seconds of sound represented as a tiny picture captures enough information for reliable classification, but apparently it does!
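If my reading is right, the spectrogram step looks roughly like this; the window length, overlap and log compression are my guesses, not the author's actual settings:

```python
# Rough reconstruction of the spectrogram step as described above.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, samples = wavfile.read("clip.wav")      # hypothetical 2.5 s mono clip
window = int(0.02 * fs)                     # one 20 ms slice per column
f, t, Sxx = spectrogram(samples, fs=fs, nperseg=window, noverlap=0)
log_Sxx = np.log1p(Sxx)                     # compress dynamic range
print(log_Sxx.shape)                        # (window//2 + 1) freq bins x ~125 slices, before resizing to 128x128
```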
1. I wonder how the continuous wavelet transform would compare to the windowed Fourier transform used here. See [1] for a Python implementation, for example (a rough sketch follows below the links).<p>2. The size of the frequency-analysis blocks seems arbitrary. I wonder if there is a "natural" block size based on a song's tempo, say 1 bar. This would of course require a priori tempo knowledge or a run-time estimate.<p>[1]: <a href="https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.signal.cwt.html" rel="nofollow">https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/...</a>
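Here is what I mean in point 1, using scipy.signal.cwt as in the linked docs (note it has since been deprecated in recent SciPy releases). The Ricker wavelet, sample rate and widths are arbitrary choices for illustration:

```python
# Sketch of a continuous wavelet transform over a short clip.
import numpy as np
from scipy import signal

fs = 8000                                   # kept low so the transform stays cheap
t = np.arange(0, 2.5, 1.0 / fs)
x = np.sin(2 * np.pi * 440 * t)             # stand-in for a 2.5 s audio clip
widths = np.arange(1, 129)                  # 128 scales, loosely mirroring 128 frequency rows
cwt_matrix = signal.cwt(x, signal.ricker, widths)
print(cwt_matrix.shape)                     # (128, len(x))
```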
See also <a href="http://everynoise.com/" rel="nofollow">http://everynoise.com/</a> which is a view into how Spotify classifies music.<p>The creator wrote about it here:<p><a href="http://blog.echonest.com/post/52385283599/how-we-understand-music-genres" rel="nofollow">http://blog.echonest.com/post/52385283599/how-we-understand-...</a><p>and writes a lot about it on their blog:<p><a href="http://www.furia.com/page.cgi?terms=noise&type=search" rel="nofollow">http://www.furia.com/page.cgi?terms=noise&type=search</a><p>Of course, those go in the other direction, since they aren't generating the classification from the audio data, but it's probably one of the best data sets as far as classifying existing music goes.
Unless I'm misunderstanding the validation set, I'm skeptical of this classifier's ability to tag unlabeled tracks, given that it is only trained and tested on tracks already known to belong to one of the few trained genres. I'd be curious to see the performance if you additionally tested on tracks that belong to none of the trained genres (Hardcore, Dubstep, Electro, Classical, Soundtrack, and Rap), with the correct prediction being no tag, along the lines of the sketch below.
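One simple way to run that test would be to reject low-confidence predictions; the threshold value, genre order and function here are placeholders of mine, not anything from the article:

```python
# Placeholder sketch of prediction-with-rejection: only tag a track when the
# softmax output is confident enough; otherwise return no tag.
import numpy as np

GENRES = ["Hardcore", "Dubstep", "Electro", "Classical", "Soundtrack", "Rap"]
THRESHOLD = 0.6  # would need tuning on held-out, out-of-genre tracks

def predict_with_rejection(probabilities):
    """probabilities: softmax output of the trained classifier over the six genres."""
    best = int(np.argmax(probabilities))
    if probabilities[best] < THRESHOLD:
        return None  # "no tag"
    return GENRES[best]

print(predict_with_rejection(np.array([0.2, 0.15, 0.2, 0.15, 0.15, 0.15])))  # None
```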
Nice approach, and well explained! By the way, Niland is a startup that also does music labeling with the help of deep learning.<p>Demo available here: <a href="http://demo.niland.io/" rel="nofollow">http://demo.niland.io/</a><p>For example, it can output Drum Machine: 87%, House: 88%, Female Voice: 55%, Groovy: 93%
See also Bob Sturm's work on genre classification: <a href="http://link.springer.com/article/10.1007/s10844-013-0250-y" rel="nofollow">http://link.springer.com/article/10.1007/s10844-013-0250-y</a>
That's pretty cool. I'd like to use something like this to tell me what genre my own songs are; it's annoying to write a song, upload it to some service or another, and have no idea what genre to pick. :-) My stuff is somewhere in the jazz-influenced, singer-songwriter, American piano-pop realm, which is a combination that works for me, but it generally feels like I'm selling the song short if I have to pick only one.
Hmm, convolution is a perfectly good operation to run on waveforms as well. In fact, the Wikipedia article (<a href="https://en.wikipedia.org/wiki/Convolution" rel="nofollow">https://en.wikipedia.org/wiki/Convolution</a>) shows the operation on 1D functions, which would correspond to time-domain waveforms. What is the point of converting everything to pictures and then using 2D convolutions when that step could have been skipped entirely?<p>Converting to pictures is unnecessary, and it makes the processing harder. The pooling should just happen on segments of the waveform instead of on the Fourier-transformed (frequency-domain) spectrogram images.
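To make the suggestion concrete, here is a rough sketch of the 1D alternative. Keras is my choice of framework and the layer sizes are arbitrary; the article itself uses a 2D CNN on images:

```python
# Convolve and pool directly on the raw waveform instead of on spectrogram images.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2 * 44100, 1)),              # ~2 s of raw mono samples
    tf.keras.layers.Conv1D(32, 64, strides=2, activation="relu"),
    tf.keras.layers.MaxPooling1D(8),                           # pooling over waveform segments
    tf.keras.layers.Conv1D(64, 32, strides=2, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(6, activation="softmax"),            # six genre classes
])
model.summary()
```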
I'm not super familiar with deep learning, so forgive me if I'm missing some nuance, but what's the purpose of writing/reading to/from images? It seems like it would add a ton of processing time. Couldn't the CNN just read from a 50-item array of tuples representing the data from the 20 ms slice?
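For what it's worth, frameworks do accept raw arrays directly. A hedged Keras sketch with placeholder shapes and layers (not the article's actual pipeline) that skips the image round-trip entirely:

```python
# A CNN consuming the spectrogram as a plain numpy array, with no image files in between.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 1)),     # 128 frequency bins x 128 time slices
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(6, activation="softmax"),
])

spectrogram = np.random.rand(1, 128, 128, 1).astype("float32")  # placeholder batch of one
print(model.predict(spectrogram).shape)  # (1, 6)
```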