Oh this is very nice, I hadn't seen it before. A few random thoughts:

- The Vamp Plugin Pack for Mac *finally* got an ARM/Intel universal build in its 2.0 release last year, so the caveat mentioned about the M1 Mac should hopefully no longer apply.

- Most of the Vamp plugins in the Pack pre-date the pervasive use of deep learning in academia, and use classic AI or machine-learning methods with custom feature design and filtering/clustering/state models etc. (The associated papers can be an interesting read, because the methods are so explicitly tailored to the domain.)

- Audacity as host only supports plugins that emit time labels as output. This obviously includes beats and chords, but plugins can produce other forms of analysis if the host (e.g. Sonic Visualiser) supports them.

- Besides the simple host in the Vamp SDK, there is another command-line Vamp host called Sonic Annotator (https://vamp-plugins.org/sonic-annotator/) which is even harder to use, equally poorly documented, and even more poorly maintained, but capable of some quite powerful batch analysis and of handling a wider range of audio file formats. Worth checking out if you're curious.

(I'm the main author of the Vamp SDK and wrote bits of some of the plugins, so if you have other questions I may be able to help.)
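If you'd rather drive the plugins from Python than fight either CLI host, there is also a Python host module, installable with "pip install vamp". A minimal sketch, assuming the Chordino plugin (from the nnls-chroma library) is installed; the plugin key and output name below are Chordino's, and librosa is just one convenient way to decode the audio:

    import vamp
    import librosa

    # Decode to mono floats at the file's native sample rate
    data, rate = librosa.load("song.mp3", sr=None, mono=True)

    # Run Chordino's chord-estimate output over the whole file;
    # sparse labelled outputs come back under the "list" key
    result = vamp.collect(data, rate, "nnls-chroma:chordino", "simplechord")
    for event in result["list"]:
        print(event["timestamp"], event["label"])

Each event carries a RealTime timestamp and a chord name as its label, which is already enough to dump a rough chord sheet to stdout.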
Any Dylan Beattie post must be accompanied by a recommendation for his hit single, You Give REST a Bad Name:

https://www.youtube.com/watch?v=nSKp2StlS6s
> I’ve created 5-channel mixes of all the backing tracks so we can fade out specific instruments if somebody wants to play them live

How was this done? This seems like an even more difficult task to do well than what’s described in the article.
Excellent! Also check out the https://alphatab.net library, which would let you render Guitar Pro tracks for the video.

Around 2013 I built a proof-of-concept that synced guitar tabs to YouTube videos, and promptly let it rot. Should have done more with it!
Not sure if the author will eventually show up here, but I'm curious whether they managed to get it working at scale and what other challenges they ran into.

One 'feature' that immediately came to mind for me is automatic transposition for use with a capo. Many hobby guitarists cannot play barre chords for an entire track, especially if they don't know the song already. Transposition is already a thing for vocal karaoke and quite common. Some players may be skilled enough to transpose in their head to take advantage of the capo, but juggling the lyrics, the instrument, and transposition at once is mentally taxing.

Cool project!
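To make the capo idea concrete, here is a minimal sketch of the transposition arithmetic (my own illustration, nothing from the article; it assumes chord symbols whose root is a letter optionally followed by # or b, like "Bm" or "F#m7"):

    NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    FLATS = {"Db": "C#", "Eb": "D#", "Gb": "F#", "Ab": "G#", "Bb": "A#"}

    def shape_for_capo(chord, capo):
        # Split the symbol into root and quality, e.g. "F#m7" -> "F#", "m7"
        root = chord[:2] if len(chord) > 1 and chord[1] in "#b" else chord[:1]
        quality = chord[len(root):]
        root = FLATS.get(root, root)  # normalise flats to sharps
        # The capo raises pitch, so the fingered shape moves DOWN by `capo` semitones
        return NOTES[(NOTES.index(root) - capo) % 12] + quality

    print(shape_for_capo("Bm", 2))  # a sounding B minor is an Am shape with capo 2

The interesting product-side question is then choosing the capo position that minimises barre chords across the whole song, which is just a search over the twelve possible offsets.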
What a cool thread! I like how you laid out the specifics of your workflow, and especially the details of the commands you used, particularly the vamp commands, because, as you say, they are somewhat inscrutably named and documented.

I started dabbling with vamp a couple of years ago as well, but lost track of the project as my goals started ballooning. The code is still sitting (somewhere), waiting to be resuscitated.

For many years I've had an idea about building chord analysis out further, to the point where a functional chart can be made from it. With vamp, most or all of the ingredients are there. I think that's probably what chordify.com does, but they clearly haven't solved segmentation or mapping clock time to musical time, as their charts are terrible. I don't think they are using Chordino, and whatever they do use is actually worse.

I got as far as writing a Python script that converts the audio files in a directory into MIDI files, to start collecting the data needed to construct a chart.

For your use case, you'd probably just need to quantize the chords to the nearest beat, so you could maybe use:

- vamp-aubio_aubiotempo_beats, or
- vamp-plugins_qm-barbeattracker_bars

and then combine those values with the actual time values you are getting from Chordino (see the sketch below).

I'd love to talk about this more, as it's a seemingly niche area; I've rarely heard it discussed, so I was happy to read this!
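Something like this is probably all the quantization step needs (a sketch of my own, assuming you've already collected the beat times from one of those plugins and the (time, label) pairs from Chordino, both in seconds):

    import bisect

    def quantize_to_beats(chords, beats):
        snapped = []
        for t, label in chords:
            i = bisect.bisect_left(beats, t)
            # whichever neighbouring beat is closer wins
            candidates = beats[max(i - 1, 0):i + 1]
            nearest = min(candidates, key=lambda b: abs(b - t))
            snapped.append((nearest, label))
        return snapped

    beats = [0.0, 0.5, 1.0, 1.5, 2.0]
    chords = [(0.48, "Am"), (1.07, "F"), (1.93, "C")]
    print(quantize_to_beats(chords, beats))
    # [(0.5, 'Am'), (1.0, 'F'), (2.0, 'C')]

If you use the bar tracker instead, the same snapping gives you one chord per bar, which is basically the functional-chart segmentation problem in miniature.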
Hey, I built this *exact* concept months ago: beat detection, video generation, automated video creation. Check out the videos I uploaded at https://youtube.com/@nevertwenty
Really good work. I like the way the author breaks down the procedure using open-source tools. Nice, and thanks for sharing.
May I add Chord ai for anyone who wants to see similar projects? It's a paid, AI-powered app. Personally, it helped me when I could not figure out the chord progressions myself.
I don't know if this is off-topic, but a while back I searched for a tool to summarize research papers. I found a few, and I was really flabbergasted by the progress of these AI tools. When I graduated from uni 22 years ago, these things existed only in science-fiction movies. Oh well...
Very cool indeed! Does anyone know how it's possible for Vamp to extract guitar chords from audio? What if there are multiple guitars, like lead and bass, or lead and rhythm?
Pretty off-topic for the article, but somewhat related: does anybody know of open-source, or at least not subscription-based, alternatives to Songsterr [1]? Its mobile app is really nice for learning to play a song on an instrument.

[1] https://www.songsterr.com/
Very cool! I think an interesting variation would be to generate ASS (Advanced SubStation Alpha) subtitles for the video with the chords and/or karaoke highlighting. Some very cool transitions and vector-graphics effects are possible in that format.
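To make that concrete, here's a minimal sketch of emitting one ASS Dialogue event per chord (my own illustration, not from the article; it omits the [V4+ Styles] section, whose full format line carries some twenty fields, and lenient renderers will substitute a default style):

    def ass_time(seconds):
        # ASS timestamps are H:MM:SS.cc (centiseconds)
        cs = round(seconds * 100)
        h, rem = divmod(cs, 360000)
        m, rem = divmod(rem, 6000)
        s, cs = divmod(rem, 100)
        return "%d:%02d:%02d.%02d" % (h, m, s, cs)

    HEADER = (
        "[Script Info]\n"
        "ScriptType: v4.00+\n\n"
        "[Events]\n"
        "Format: Layer, Start, End, Style, Name, "
        "MarginL, MarginR, MarginV, Effect, Text\n"
    )

    def write_ass(chords, path):
        # chords: list of (start_sec, end_sec, label) tuples
        with open(path, "w") as f:
            f.write(HEADER)
            for start, end, label in chords:
                f.write("Dialogue: 0,%s,%s,Default,,0,0,0,,%s\n"
                        % (ass_time(start), ass_time(end), label))

    write_ass([(0.5, 2.0, "Am"), (2.0, 3.5, "F")], "chords.ass")

For the karaoke sweep you'd add {\kNN} tags (durations in centiseconds) inside the Text field, and renderers like libass animate the highlight for you.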
I still remember his “The Art of Code” talk, highly recommended: https://www.youtube.com/watch?v=6avJHaC3C2U