
Music Generation AI Models

39 points by peab 3 months ago

9 comments

ipsum2 3 months ago

I wonder if this article is AI generated.

> Vocal Synthesis: This allows one to generate new audio that sounds like someone singing. One can write lyrics, as well as melody, and have the AI generate an audio that can match it. You could even specify how you want the voice to sound like. Google has also presented models capable of vocal synthesis, such as googlesingsong.

Google's SingSong paper does the exact opposite: given human vocals, it produces a musical accompaniment.
chaosprint 3 months ago

I got into AI music back in 2017, kind of sparked by AlphaGo. Started by looking at machine listening stuff, like Nick Collins' work. Always been really curious about AI doing music live coding.

In 2019, I built this thing called RaveForce [github.com/chaosprint/RaveForce]. It was a fun project.

Back then, GANSynth was a big deal, looked amazing. But the sound quality… felt a bit lossy, you know? And MIDI generation, well, didn't really feel like "music generation" to me.

Now, I'm thinking about these things differently. Maybe the sound quality thing is like MP3 at first, then it becomes "good enough" – like a "retina moment" for audio? Diffusion models seem to be pushing this idea too. And MIDI, if used the right way, could be a really powerful tool.

Vocal synthesis and conversion are super cool. Feels like plugins, but next level. Really useful.

But what I really want to see is AI understanding music from the ground up. Like, a robot learning how synth parameters work. Then we could do 8-bit music like the DRL breakthrough. Not just training on tons of copyrighted music, making variations, and selling it, which is very cheap.
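chaosprint's last point, an AI that learns how synth parameters work, is roughly what RaveForce frames as a reinforcement-learning environment over a synthesizer. As a toy illustration of the idea only (random search standing in for an RL agent, a one-oscillator synth standing in for a real one, and nothing here taken from RaveForce's actual setup), a sketch like this recovers the parameters that reproduce a target sound:

```python
# Toy illustration: search over the parameters of a tiny synthesizer to
# match a target sound, scoring candidates by waveform error.
import numpy as np

SR = 16000  # sample rate, Hz

def synth(freq, decay, n=SR):
    """One-oscillator synth: a sine wave with exponential decay."""
    t = np.arange(n) / SR
    return np.sin(2 * np.pi * freq * t) * np.exp(-decay * t)

# Pretend this is the sound we want the "agent" to learn to imitate.
target = synth(440.0, 3.0)

best, best_err = None, np.inf
rng = np.random.default_rng(0)
for _ in range(2000):
    params = (rng.uniform(100, 1000), rng.uniform(0.5, 10))
    err = np.mean((synth(*params) - target) ** 2)
    if err < best_err:
        best, best_err = params, err

print(f"recovered freq={best[0]:.1f} Hz, decay={best[1]:.2f}, mse={best_err:.2e}")
```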
pier25 3 months ago

Are there models that generate MIDI instead of audio?

IMO this would be much more useful.
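On pier25's question: symbolic models (e.g. much of Google's Magenta work) emit note events rather than waveforms, and the payoff is exactly what the comment suggests: the output stays editable. A minimal sketch of that output path, where `sample_notes` is a hypothetical stand-in for whatever a model actually predicts and only the pretty_midi usage is real:

```python
# A model that predicts note events (rather than raw audio) can be
# serialized straight to a .mid file and opened in any DAW.
import pretty_midi

# Hypothetical model output: (pitch, start_sec, duration_sec, velocity)
sample_notes = [
    (60, 0.0, 0.5, 90),   # C4
    (64, 0.5, 0.5, 90),   # E4
    (67, 1.0, 1.0, 100),  # G4
]

pm = pretty_midi.PrettyMIDI()
piano = pretty_midi.Instrument(program=0)  # General MIDI acoustic grand
for pitch, start, dur, vel in sample_notes:
    piano.notes.append(
        pretty_midi.Note(velocity=vel, pitch=pitch, start=start, end=start + dur)
    )
pm.instruments.append(piano)
pm.write("generated.mid")  # editable in notation software or a DAW
```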
TheAceOfHearts 3 months ago

One obvious area of improvement will be allowing you to tweak specific sections of an AI-generated song. I was recently playing around with Suno, and while the results with their latest models are really impressive, sometimes you just want a little bit more control over specific sections of a track. To give a concrete example: I used deepseek-r1 to generate lyrics for a song about assabiyyah, and then used Suno to generate the track [0]. The result was mostly fine, but it pronounced assabiyyah as ah-sa-BI-yah instead of ah-sah-BEE-yah. A relatively minor nitpick.

[0] https://suno.com/song/0caf26e0-073e-4480-91c4-71ae79ec0497
vunderba 3 months ago

From the article:

> Stem Splitting: This allows one to take an existing song, and split the audio into distinct tracks, such as vocals, guitar, drums and bass. Demucs by Meta is an AI model for stem splitting.

+1 for Demucs (free and open source).

Our band went back and used Demucs-GUI on a bunch of our really old pre-DAW stuff - all we had was the final WAVs and it did a really good job splitting out drums, piano, bass, vocals, etc. with the htdemucs_6s model. There was some slight bleed between some of the stems, but other than that it was seamless.

https://github.com/CarlGao4/Demucs-Gui
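For anyone who prefers the command line to Demucs-GUI, a minimal sketch of the equivalent workflow, assuming `pip install demucs` and an input path of your own:

```python
# Run Demucs with the same 6-stem model vunderba mentions.
import subprocess

track = "old_song.wav"  # hypothetical path to one of the final WAVs
subprocess.run(
    [
        "demucs",
        "-n", "htdemucs_6s",  # 6 stems: drums, bass, vocals, guitar, piano, other
        track,
    ],
    check=True,
)
# Stems are written under ./separated/htdemucs_6s/<track name>/ as WAV files.
```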
xvector 3 months ago

In the future we may have music gen models that dynamically generate a soundtrack to our lives, based on ongoing events, emotions, etc., as well as our preferences.

If this happens, main character syndrome may get a bit worse :)
echelon 3 months ago

> code is now being written with the help of LLMs, and almost all graphic design uses Photoshop.

AI models are tools, and engineers and artists should use them to do more per unit time.

Text-prompted final results are lame and boring, but complex workflows orchestrated by domain practitioners are incredible.

We're entering an era where small teams will have big reach. Small studio movies will rival Pixar, electronic musicians will be able to conquer any genre, and indie game studios will take on AAA game releases.

The problem will be discovery. There will be a long tail of content that caters to diverse audiences, but not everyone will make it.
intalentive 3 months ago
AI tools can also emulate analog signal processors like guitar amps (e.g. NeuralDSP). I made an emulation of a popular studio EQ that sounds great.
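A common black-box recipe for this kind of emulation (not necessarily what intalentive built) is to record paired dry/processed audio through the hardware, then fit a small causal network to map one stream to the other. A rough PyTorch sketch, with shapes, model size, and training details all assumptions:

```python
# Fit a tiny LSTM to imitate an analog processor from paired recordings.
import torch
import torch.nn as nn

class AmpModel(nn.Module):
    """Maps a dry audio stream to the processed stream, sample by sample."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):          # x: (batch, samples, 1)
        h, _ = self.lstm(x)
        return self.head(h)        # (batch, samples, 1)

model = AmpModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# dry/wet: paired recordings through the real unit (placeholder tensors here).
dry = torch.randn(8, 4096, 1)
wet = torch.randn(8, 4096, 1)

for step in range(100):            # toy training loop
    opt.zero_grad()
    loss = loss_fn(model(dry), wet)
    loss.backward()
    opt.step()
```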
r33b33 3 months ago
Are there any music generation models that work with sheet music or produce sheet music outputs that are actually good?