That's impressive! The video got my attention immediately:<p><a href="https://www.youtube.com/watch?v=FjkpVbbDtMY" rel="nofollow">https://www.youtube.com/watch?v=FjkpVbbDtMY</a><p>> Grab a track from spotify, split into stem files, and then chop it up.<p>I'm guessing the "AI" plays a role in determining the tempo, separating the vocal/instrument layers, and deciding where to chop up the beats.<p>The first few minutes of the video demonstrate how practical the application is: drag in a file/URL(?) and start looping, mixing and matching.<p>It reminds me a bit of Ableton, with its automatic detection and marking of "beats" (amplitude peaks?).<p>I'd be curious to learn more about the technical aspects. For example, the readme says both Java and Python are required, but I didn't see any Python code in there. I suppose it's bundling an external library, probably for machine learning. I do see Java functions for training models; I wonder what datasets were used and what musical aspects the pattern recognition works on.<p>The feature set is extensive, with synthesizers (like the TB-303/808/909), a sampler, live sessions, and record/export. I can see it's a long-term project built up over years. Nice work!
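For what it's worth, here's a toy sketch of what "beat marking via amplitude peaks" could look like in its simplest form. This is purely my guess at the idea, not code from this project or from Ableton; the function name, frame size, and threshold are all made up:

```python
# Hypothetical sketch: mark "beat" positions where the amplitude envelope
# jumps sharply between coarse frames. Assumes mono float samples in [-1, 1].
# All names and parameters here are invented for illustration.

def onset_markers(samples, frame_size=1024, threshold=1.5):
    """Return sample offsets where the amplitude envelope jumps sharply."""
    # Coarse amplitude envelope: peak absolute value per frame.
    envelope = [
        max(abs(s) for s in samples[i:i + frame_size])
        for i in range(0, len(samples) - frame_size, frame_size)
    ]
    markers = []
    for i in range(1, len(envelope)):
        prev = envelope[i - 1] or 1e-9  # avoid division by zero on silence
        # Mark frames whose peak amplitude jumps past the threshold ratio.
        if envelope[i] / prev > threshold:
            markers.append(i * frame_size)  # marker granularity is one frame
    return markers

# A synthetic "click track": one second of silence with two short bursts.
rate = 8000
clip = [0.0] * rate
for beat in (rate // 4, 3 * rate // 4):  # bursts near samples 2000 and 6000
    for j in range(200):
        clip[beat + j] = 0.9

print(onset_markers(clip))  # prints [1024, 5120]
```

Real tools presumably do something far more robust (spectral flux, onset strength curves, tempo tracking), but even this crude ratio test finds the two bursts, quantized to frame boundaries.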