It's not "convincing" and there is no sense of "style change" at all - it's very clearly just awkwardly re-orchestrating the melody with some timbral changes. The tonality is often entirely lost (the Rihanna to Mozart example is the most egregious).<p>I really hate that we have to criticize these awful articles and press releases all the time, because this work is <i>actually really cool</i>, but because the researchers and article writers constantly go overboard with their claims and rhetoric, the level-headed people have to come along and say: "look, this is cool, but it's nowhere near the claims you're making". I'm really sick of this happening with <i>every</i> DeepMind/FAIR/Microsoft/IBM result that gets published. It's so tiring.
> Our results present abilities that are, as far as we know, unheard of.<p>Look, I gotta say, I'm pretty disappointed with the ridiculous level of salesmanship that authors now feel is necessary to get a paper into NIPS. I can only hope the reviewers might ask the researchers to tone down the crazy hype pitch, but I know better than to expect that.<p>Presenting things that were previously unheard of... well, isn't that what scientific papers are for?
I mean... this mostly just feels like the originals were fed through some sort of vocoder... None of this is impressive at all to me. Translate orchestral output to hip hop or something and then I'll be impressed. At this point, you've fed one song in and output some kind of grainy sounding similar sound that is basically just triggering notes in a vocoder...
That Indiana Jones theme whistling to organ was hilariously awful -> <a href="https://youtu.be/vdxCqNWTpUs?t=124" rel="nofollow">https://youtu.be/vdxCqNWTpUs?t=124</a> . Though I guess you can blame a lot of that on how bad the whistling was in the first place.<p>But I would totally play with this if given the chance. I don't foresee this tech replacing human composition for pop music, as one comment here expects, but it will be a tool for musicians in the same way that auto-tune became its own sort of sound/style. I can definitely see it being useful for prototyping arrangements for instruments that you can't play (or don't keep around).
It's convincing in that they labeled each style "as such", but it's not convincing in a musical sense. Music is playful and joyous and inspirational, these clips are academic exercises. Let's see what can be done with the tech in years to come, but I won't put down the virtuosi for this anytime soon.
It is my firm belief that within at most 10 years we will have number one hit songs produced entirely end-to-end by a neural network - instrumental, lyrics, vocals, mixing.
Interesting to consider its apparent strengths vs weaknesses, and what that may mean about each musical style (or at least the training sets). It seems to completely suck at generating listenable Beethoven, and its Bach is better but still not great, yet it creates quite passable Mozart.
I'm conflicted: on one hand I'm all for a company like Facebook throwing its money at AI vanity projects; on the other, these researchers are working on vanity projects rather than something else.<p>Hopefully the patents don't get walled off.