I have been doing lots of experiments with visuals too, for example this: https://github.com/m-onz/artifice. I made an installation at Corsica Studios (https://fakedac.net) with genAI content running all night: drum and bass, techno and many other genres.

That 2-hour mix was created in an extremely short amount of time, limited by the fact that Suno and Udio don't have APIs. I've done tonnes of work using replicate.com's MusicGen AI that I threw in the bin once I heard Udio and Suno... once we get API access, there will be new genres of music and hypermedia beyond what we can consider normal music now.

There is some scope for prompt engineering with text-to-* media generators, combined with automation, machine listening and categorization: for example, a network of virtual listeners able to refine and generate new work before a human hears it... think music agent systems by Nick Collins on steroids.
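A minimal sketch of that "network of virtual listeners" idea, assuming purely hypothetical generate_candidate and virtual_listener functions (nothing here calls a real generation API such as Suno, Udio or Replicate):

```python
import random

# Hypothetical generate-and-filter pipeline: create many candidate clips,
# score each with a panel of automated "virtual listeners", and only
# surface the top few to a human. All functions are placeholders.

def generate_candidate(prompt: str) -> dict:
    """Stand-in for a text-to-audio model call; returns fake metadata
    instead of real audio."""
    return {"prompt": prompt, "seed": random.randint(0, 2**32 - 1)}

def virtual_listener(clip: dict, taste: float) -> float:
    """Placeholder critic: a real one might run genre classification,
    novelty detection or audio-quality metrics on the rendered clip."""
    random.seed(clip["seed"] + int(taste * 1000))
    return random.random() * taste

def curate(prompt: str, n_candidates: int = 20, keep: int = 3) -> list[dict]:
    """Generate candidates, average the virtual listeners' scores,
    and keep only the best few for human review."""
    listeners = [0.7, 0.9, 1.0]  # each listener has a different "taste" weight
    candidates = [generate_candidate(prompt) for _ in range(n_candidates)]
    scored = [
        (sum(virtual_listener(c, t) for t in listeners) / len(listeners), c)
        for c in candidates
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in scored[:keep]]

if __name__ == "__main__":
    for clip in curate("dark halftime drum and bass, dubbed vocals"):
        print(clip)
```

In a real system the scoring step is where the interesting work lives; the loop structure itself stays this simple.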
I'm less impressed with the "quality" of AI music but amazed by what AI-savvy artists use it for. A generative artist called m-onz took a blog post of mine from yesterday and in a few hours created a... well, I don't even know what to call it - kinda experimental music/noise prompted by the blog, so that words are sung, scratched and dubbed in a hundred different styles. As a creative lever this is surely impressive even if it ain't 'beautiful'.

[0] https://soundcloud.com/m-onz/carnival-of-clowns
It is naive to assume that all the things AI might produce are equal. Some things are meant to communicate at a human-to-human level, and those turn out to be much, much harder. When AI can absorb the experience of being human, it will be able to do that too.