Hey, one of the Suno founders/creators of Bark here. Thanks for all the comments; we love seeing how we can improve things in the future. At Suno we work on audio foundation models, creating speech, music, sound effects, etc.<p>Text-to-speech was a natural playground for us to share with the community and get some feedback. Given that this model is a full GPT model, the text input is merely guidance, and the model can technically create any audio from scratch even without input text, aka hallucinations or audio continuation.<p>When used as a TTS model, it's very different from the awesome high-quality TTS models already available. It produces a wider range of audio: that could be a high-quality studio recording of an actor, or the same text leading to two people shouting in an argument at a noisy bar. Excited to see what the community can build and what we can learn for future products.<p>Please let us know if you have any feedback, or if you're interested in working on this: bark@suno.ai
Very cool. Side note: bark-gpt.com is already taken for a dog translator: "The world’s first AI powered, real-time communications tool between humans and their furry best friends."[0] I only know this because my law firm partner's name is Bark, and I wanted to automate some legal work and name the software "Bark GPT" after him.<p>[0] <a href="https://www.bark-gpt.com/" rel="nofollow">https://www.bark-gpt.com/</a>
Am I hallucinating, or did several of the examples have background audio artifacts? It sounds like it was trained on speech with noisy backgrounds (I'm guessing audio from movies paired with subtitles?). Having random background audio can make it quite hard to use in production.
Man, I know this is HN, and I know we have a certain decorum we should be maintaining, but with the recent activity in this field the most appropriate response to these posts is "4-bit when?" or "f16 when?". Not sure which one is applicable. I am having no luck running it on a 6 GB VRAM GPU, so I guess it's the 16-bit floating point one.
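For what it's worth, the usual first trick for squeezing a PyTorch model into less VRAM is casting the weights to half precision. A generic sketch of the idea (a toy module standing in for the real model, since I haven't dug into Bark's internals, and no guarantee fp16 alone gets it under 6 GB):

    import torch
    import torch.nn as nn

    # Toy stand-in for whatever model you're trying to fit on the GPU.
    model = nn.TransformerEncoderLayer(d_model=1024, nhead=16, dim_feedforward=4096)

    fp32_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
    model = model.half().to("cuda")   # fp32 -> fp16 halves the weight memory
    fp16_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
    print(f"{fp32_bytes / 1e6:.1f} MB -> {fp16_bytes / 1e6:.1f} MB")

    # Inputs have to match the weight dtype.
    x = torch.randn(8, 16, 1024, dtype=torch.float16, device="cuda")
    with torch.no_grad():
        y = model(x)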
A few years back someone had the genius idea of making a robo-answering phone program that would give vague but encouraging replies when it received an unsolicited sales pitch. Although it was a fixed sequence of responses, it fooled some callers for a surprisingly long time.<p><a href="https://www.youtube.com/watch?v=XSoOrlh5i1k">https://www.youtube.com/watch?v=XSoOrlh5i1k</a><p>Someone needs to create the plumbing to capture speech-to-text, feed it to a GPT script that has been told how to reply to such call-center calls, then send that back through a TTS generator like this one.<p>To overcome any latency issues, it could build in a ploy to buy time like the old script did, e.g., make the robo-answerer sound like a somewhat addled old man who has to think before each reply, perhaps prefixing responses with "hmm, ahh, ..." to buy time to generate the response.
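The plumbing is mostly just gluing three models together. A rough sketch of one turn of such a call, assuming Whisper for transcription, the OpenAI chat API for the replies, and Bark for the voice (the actual telephony side is left out entirely, and the persona prompt is made up):

    import openai
    import whisper
    from bark import SAMPLE_RATE, generate_audio, preload_models
    from scipy.io.wavfile import write as write_wav

    SYSTEM_PROMPT = (
        "You are a rambling, easily distracted old man answering a sales call. "
        "Be polite, vague, and encouraging, and never commit to anything."
    )

    stt = whisper.load_model("base")  # speech-to-text
    preload_models()                  # Bark's text-to-audio models

    def handle_turn(caller_wav: str, history: list) -> str:
        # 1. Transcribe whatever the telemarketer just said.
        heard = stt.transcribe(caller_wav)["text"]
        history.append({"role": "user", "content": heard})

        # 2. Ask the chat model for an addled-old-man reply.
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "system", "content": SYSTEM_PROMPT}] + history,
        )
        reply = resp["choices"][0]["message"]["content"]
        history.append({"role": "assistant", "content": reply})

        # 3. Prefix a filler to buy time, then synthesize the reply with Bark.
        audio = generate_audio("Hmm, ahh... " + reply)
        write_wav("reply.wav", SAMPLE_RATE, audio)
        return "reply.wav"

The latency would still be brutal with all three models in the loop, which is exactly why the addled-old-man pauses would help.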
It seems like a lot of the entries in TTS are either closed-source SaaS apps or something like this with limitations on customizing it. It seems clearly inevitable, and likely only months away, that a high-quality, unrestricted open source option for things like voice cloning will emerge, so I'm not sure why these projects are even bothering trying to stop it. I think for TTS to have its Stable Diffusion moment, it will just take an unrestricted, easily trainable open source model.
Is there likely to be a way to stream this audio in the future? As in, here's an incoming stream of text, generate the audio on-the-fly instead of all at once.
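Not natively as far as I can tell, but you could approximate it by chunking: split the incoming text at sentence boundaries and generate each piece as soon as it's complete, playing it while the next one is produced. A rough sketch, assuming the generate_audio API from the README and sounddevice for playback:

    import re
    import sounddevice as sd
    from bark import SAMPLE_RATE, generate_audio, preload_models

    preload_models()

    def speak_stream(text_stream):
        """Generate and play audio sentence by sentence as text arrives."""
        buffer = ""
        for fragment in text_stream:      # e.g. tokens streaming from an LLM
            buffer += fragment
            parts = re.split(r"(?<=[.!?])\s+", buffer)  # naive sentence split
            buffer = parts.pop()          # keep the unfinished tail
            for sentence in parts:
                audio = generate_audio(sentence)
                sd.play(audio, SAMPLE_RATE)
                sd.wait()                 # block until this chunk finishes
        if buffer.strip():
            audio = generate_audio(buffer)
            sd.play(audio, SAMPLE_RATE)
            sd.wait()

It's not true streaming (each chunk is still generated in full), but it should cut the time to first audio considerably.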
Great news! It's astounding how quickly technology is advancing. Only yesterday, I was wondering about when a new model for text-to-speech would be developed, and today a game-changing model has been released! This new model is simply incredible!
Any idea what the training data for this is? Looking at the model code, it looks like it is literally copy-pasted from Karpathy's nanoGPT, so the training data is what's most interesting. Pretty amazing anyway.
Imagine Torvalds saying the same in the context of Linux: 'to mitigate misuse of this technology, we limit the audio history prompts to a limited set of Suno-provided...'
The fact that this is open source and can generate more than just speech is really nice, but for speech itself, it's much lower quality than what Eleven Labs provides.<p>All the open source models I've seen so far have this weird kind of neural fuzziness to them. I don't know what Eleven does better, but there's definitely a big difference.
"However, to mitigate misuse of this technology, we limit the audio history prompts to a limited set of Suno-provided, fully synthetic options to choose from for each language."<p>Isn't this open source and can be easily removed or am I missing something?
Some of it is very impressive, although some of it seems about equal to the TTS built into my phone. How long until someone can package this up and make a program that takes in epubs and spits out mp3s?
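The pieces mostly exist already. A rough sketch of an epub-to-audio script, assuming ebooklib plus BeautifulSoup for text extraction and Bark for the synthesis (it would be painfully slow for a whole book, and you'd still convert the WAV to mp3 with ffmpeg afterwards):

    import re
    import numpy as np
    import ebooklib
    from ebooklib import epub
    from bs4 import BeautifulSoup
    from bark import SAMPLE_RATE, generate_audio, preload_models
    from scipy.io.wavfile import write as write_wav

    preload_models()

    def epub_to_wav(epub_path: str, out_path: str) -> None:
        # Pull plain text out of every (X)HTML chapter in the epub.
        book = epub.read_epub(epub_path)
        text = " ".join(
            BeautifulSoup(item.get_content(), "html.parser").get_text()
            for item in book.get_items()
            if item.get_type() == ebooklib.ITEM_DOCUMENT
        )

        # Bark works best on short prompts, so synthesize sentence by sentence.
        sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
        pieces = [generate_audio(s) for s in sentences]
        write_wav(out_path, SAMPLE_RATE, np.concatenate(pieces))

    epub_to_wav("book.epub", "book.wav")  # then e.g. ffmpeg -i book.wav book.mp3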
I tried it. It seems to hallucinate easily; the generated audio doesn't match the text I provided.<p>It seems to be easily reproducible if I specify a non-existent speaker:<p>from bark import generate_audio<p>audio_array = generate_audio(text_prompt, history_prompt='en_speaker_3')
“Last night's Phish performance is proof that God loves us.” But in Portuguese.<p><a href="https://suno-ai.notion.site/Bark-Examples-5edae8b02a604b54a42244ba45ebc2e2" rel="nofollow">https://suno-ai.notion.site/Bark-Examples-5edae8b02a604b54a4...</a>
Does it sound fairly robotic/static-y to anyone else, or is it just me? It doesn't sound any better than any other TTS software I've tried, and in fact sounds a bit worse, like it's noisy.