Great idea.
I wonder how long until we see a lot of "autogenerated" podcasts, with syndicated advertising inside, spamming the podcast space.<p>Like the robovoiced videos on YT reading scraped content.
Very clever use case. I'm presuming the setup here is as follows:<p>- LLM-driven back and forth with the paper as context<p>- Text-to-speech<p>Pricing for high-quality text-to-speech with Google's studio voices runs at $160 per 1M characters. A 10-minute recording at the average 130 WPM is 1,300 words, or about 6,500 characters at 5 characters per word, so we can estimate an audio cost of roughly $1. LLM cost is probably about the same, given the research paper processing and the conversation.<p>So it only costs about $2-3 per 10-minute recording. Wild.
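A quick sanity check of that back-of-the-envelope math (the pricing and speaking rate are my assumptions, not published numbers for Illuminate):

```python
# Rough cost sketch for a 10-minute generated recording, assuming
# studio-voice TTS pricing of $160 per 1M characters.
WPM = 130                 # assumed average speaking rate, words per minute
MINUTES = 10
CHARS_PER_WORD = 5        # rough English average
TTS_PRICE_PER_CHAR = 160 / 1_000_000

words = WPM * MINUTES              # total words spoken
chars = words * CHARS_PER_WORD     # characters billed by the TTS API
tts_cost = chars * TTS_PRICE_PER_CHAR

print(f"{words} words, {chars} chars, TTS cost ≈ ${tts_cost:.2f}")
# → 1300 words, 6500 chars, TTS cost ≈ $1.04
```

Double that for the LLM side and you land in the $2-3 range.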
One problem I see with this is legitimizing LLM-extracted content as canon. The realistic human speech masks the fact that the LLM might be hallucinating or highlighting the wrong parts of a book/paper as important.
This is really cool, and it got me thinking - is there any missing piece to creating a full AI lecturer based on this?<p>What I'm thinking of is that I'd input a pdf, and the AI will do a bit of preprocessing leading to the creation of learning outcomes, talking points, visual aids and comprehension questions for me; and then once it's ready, will begin to lecture to me about the topic, allowing me to interrupt it at any point with my questions, after which it'll resume the lecture while adapting to any new context from my interruptions.<p>Are we there yet?
Listening to an AI generated discussion-based podcast on the topic of anticipating the scraping of deceased people's digital footprint to create an AI copy of your loved one makes the cells that make up my body want to give up on fighting entropy.
A related experiment from Google: NotebookLM (notebooklm.google.com), which takes a group of documents and provides a RAG Gemini chatbot in return.<p>I wish Google would make these experiments more well-known!
I’ve been using the ElevenLabs Reader app to read articles during my drive and it’s been amazing. It’s great to be able to listen to Money Stuff whenever I want to. The audio quality is about 90% there. Occasionally the tone of a sentence is wrong (like surprised when it should be sad), or the enunciation is wrong (bow, as in bowing down, vs. bow, as in tying a bow), but it's still very listenable.
I made something like this for my kids:<p>1. Take a science book. I used one Einstein loved as a kid, in German. But I can also use Asimov in English. Or anything else. We’ll handle language and outdated information on the LLM level.<p>2. Extract the core ideas and narrative with an LLM and rewrite it into a conversation, say, between a curious 7 year old girl and her dad. We can take into account what my kids are interested in, what they already know, facts from their own life, comparisons with their surroundings etc. to make it more engaging.<p>3. Turn it into audio using Text-to-Speech (multiple voices).
While this is very nice, what I need is for my computer to take voice commands, read content in various formats and structures, and take dictation for all of my apps. I need this on my phone too. I can do this now, but I have to use a bunch of different tools that don't work seamlessly together. I need a voice and conversational user interface that is built into the operating system.
I like how it generates a conversation, rather than just "reading out" or simplifying the content. You could extend this idea to enhance the dynamics of agent interactions.
One useful application would be making academic papers more accessible. It would let people listen to arXiv papers that seem interesting, which would be a useful tool in the academic world, and would give students a more accessible form of learning.<p>I already have a project idea: use the arXiv RSS API to fetch interesting papers based on keywords (or an LLM summary), pass them to something like Illuminate, and you have a listening queue for following the latest in the field. There will be some problems with formatting, but then you could just open the PDF to see the plots and equations.
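The fetching step could be sketched roughly like this (the feed URL follows arXiv's public RSS scheme; the category and keywords are just illustrative placeholders):

```python
# Minimal sketch of the listening-queue idea: pull an arXiv RSS feed
# for a category and keep the papers whose titles match keywords.
import urllib.request
import xml.etree.ElementTree as ET

FEED = "https://rss.arxiv.org/rss/cs.CL"   # example category feed
KEYWORDS = {"speech", "audio", "tts"}       # illustrative interests

def interesting_papers(xml_text: str) -> list[str]:
    """Return titles of feed items whose title matches any keyword."""
    root = ET.fromstring(xml_text)
    titles = [item.findtext("title", "") for item in root.iter("item")]
    return [t for t in titles if any(k in t.lower() for k in KEYWORDS)]

# Usage (network call, so left commented out):
# with urllib.request.urlopen(FEED) as resp:
#     queue = interesting_papers(resp.read().decode())
```

Each matching paper's PDF could then be handed to the TTS pipeline and queued for listening.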
I can see this working reasonably for text that you can understand without referring to figures, and for texts for which there is external content available that such a conversation could be based on. For a new, say, math paper, without prose interspersed, I’d be surprised if the generated conversation will be worth much. On the other hand, that is a corner case and, personally, I suspect I will be using this for the many texts where all I need is a presentation of the material that is easy to listen to.
Occasionally there's a podcast or video I'd like to listen to, but one of the voices is either difficult to understand, or in some way awful to listen to, or maybe the sound quality is really bad. It would be nice to have an option for automatically redubbed audio.
I listened to 5 minutes of this and all I can feel is sadness at how cringe it is.<p>Please do not replace humanity with a faint imitation of what makes us human, actual spontaneity.<p>If you produce AI content, don't emulate small talk and quirky side jabs. It's pathetic.<p>This is just more hot garbage on top of a pile of junk.<p>I imagine a brighter future where we can choose to turn that off and remove it from search, like the low-quality content it is. I would rather read imperfect content from human beings, coming from the source, than perfectly redigested AI clown vomit.<p>Note: I use AI tools every day. I have nothing against AI-generated content; I have everything against AI advancements in human replacement, the "pretend" part. Classifying and returning knowledge is great. But I really dislike the trend of making AI more "human-like" to the point of deceiving, such as pretending small talk and perfect human voice synthesis.
Google launched similar functionality in NotebookLM today. You can generate podcasts from a wide range of sources: <a href="https://blog.google/technology/ai/notebooklm-audio-overviews/" rel="nofollow">https://blog.google/technology/ai/notebooklm-audio-overviews...</a><p>Looks like you can generate from Website URLs if you add them as sources to your notebook, as well as Slides, Docs, PDFs etc. Anything NotebookLM supports.
What a fantastic idea!
Great way to learn about those pesky research papers I keep downloading (but never get around to reading).
I tried a few, e.g. Attention is All You Need, etc. The summary was fantastic, and the discussion was, well, informative.<p>Does anyone know how the summary was generated? (text summarization, I suppose?) Is there a bias towards "podcast-style discussion"? Not that I'm complaining about it - just that I found it helpful.
AI voices sound particularly good at higher playback rates with silence removal. Which, granted, is an acquired taste, but it's a common feature in podcast players, so there's an audience for it. Fast talkers feel more competent, and one kind of stops interrogating the quality of the speech.
What does this accomplish? Who does this help? How does this make the world a better place?<p>This only seems like it would be useful for spammers trying to game platforms, which is silly because spam is probably the number one thing bringing down the quality of Google's own products and services.
How about making the program work in the other direction. It could take one of those 30 minute youtube tutorial videos that is full of fluff and music, and turn it into an instructables-like text article with a few still pictures.
This is as impressive as it is scary and creepy.<p>It also tells us something about humans, because it really does feel more engaging having two voices discussing a subject than simple text-to-speech, even though the information density is smaller.
The choice of intonation even mimics creatives, which I'm sure they'll love. The vocal fry, talking through a forced smile, bumbling host: it's so typical. Only, no one minds demanding better from a robot, so it's even more excruciating fluff with no possible parasocial angle.<p>Limiting the choice to frivolous voices is really testing the waters for how people will respond to fully acted voice gen from them; they want that trust from the creative guild first. But for users who run into this rigid stuff, it's going to be like fake generated grandma pics in your Google recipe modals.
Books I can understand, but I'm genuinely curious: would anyone here find it useful to hear scientific papers as narrated audio? Maybe it depends on the field, but when I read e.g. an ML paper, I almost always have to go through it line-by-line with a pen and scratchpad, jumping back and forth and taking notes, to be sure I've actually "got it". Sometimes I might read a paragraph a dozen times. I can't see myself getting any value out of this, but I'm interested if others would find it useful.
Maybe I'm the odd one out but "That's interesting. Can you elaborate more?", "Good question", "That sounds like a clever way" etc were annoying filler.
Synthesized voices are legitimately a great way to read more and give your eyes a break. I personally prefer just converting a page or book to an audiobook myself locally.
The new piper TTS models are easy to run locally and work very well. I made a simple CLI application and some other folks here liked it so figured I post it.<p><a href="https://github.com/C-Loftus/QuickPiperAudiobook">https://github.com/C-Loftus/QuickPiperAudiobook</a>
I'm fairly excited for this use case. I recently made the switch from Audible to Libby for my audiobook needs. Overall it's been good/fine, but I get disappointed when the library only has text copies of a book I want to listen to. Often they aren't especially popular books, so it seems unlikely they'll get a voiceover anytime soon. Using AI to narrate these books will solve a real problem I experience currently :)
So podcasts are now automated, anything with a speaker or a screen is now assumed to be not human.<p>Is this supposed to be a good thing that we want to accelerate (e/acc) towards?
Works surprisingly well. I actually bothered to listen to "discussions" about these boring-looking papers.<p>English is particularly bad to read aloud because, like the programming language Fortran, it is based on immutable tokens. If you want tonal variety, you have to understand the content.<p>Some other languages modify the tokens themselves, so a single word can be pompous, comical, uneducated, etc.
Interesting - listening to the first example (Attention is all you need)[1] - I wonder what illuminate would make of Fielding's REST thesis?<p>[1] <a href="https://illuminate.google.com/home?pli=1&play=SKUdNc_PPLL8" rel="nofollow">https://illuminate.google.com/home?pli=1&play=SKUdNc_PPLL8</a>
I'm bullish on podcasts as a Passive learning counterpart to the Active learning style in traditional educational instruction. Will be releasing a general purpose podcast generator for educational purposes in reasonote.com within the next few days, along with the rest of the core featureset.
This is really cool. Although I wouldn't put money on a Google project sticking around even if it was a full fledged product!<p>More of a tech demo than anything else.<p>What's wild about this is that the voices seem way better than GCP's TTS that I've seen. Any way to get those voices as an API?
We are working on something content-driven (for an ad or subscription model) with a lot of effort and time, and I am concerned about how this technology will affect all that effort and, eventually, our monetization ideas. But I can see how helpful this tool can be for learning new stuff.
Why not, if you could also interject with questions, remarks, or "cut to the chase"-like remarks.<p>Also, it's weird that they focus only on AI papers in the demo, and not more interesting social topics like environmental protection, climate change, etc.
This is a good idea and well executed. I think the hard part now is pointing it in an appropriate direction.<p>If it's just used for generating low quality robo content like we see on TikTok and YouTube then it's not so interesting.
I've been meaning to read the "Attention Is All You Need" paper for years and never have. And I finally listened to that little generated interview, their first example. I think this is going to be very, very useful to me!
I got in the beta a couple weeks ago and tried it out on some papers [0]<p>[0] <a href="https://news.ycombinator.com/item?id=41020635">https://news.ycombinator.com/item?id=41020635</a>
Founder of podera.ai here. We're building this right now (turn anything into a podcast) with custom voices, customization, and more. Would love some HN feedback!
Amazing. I see a great future ahead. We are already able to turn audiobooks into eBooks, and Illuminate finally completes the circle of content regurgitation.
Why is this appealing?<p>Why would one prefer this AI conversation to the actual source?<p>Can these be agents and allow the listener to ask questions / interact?
By now, we can find thousands of hours of discussions online about popular papers such as "Attention is All You Need". It should be possible to generate something similar without using the paper as a source -- and I suspect that's what the AI does.<p>In other words: I suspect that the output is heavily derivative from online discussions, and not based on the papers.<p>Of course, the real proof would be to see the output for entirely new papers.
This is insane! To be able to listen to a conversation to learn about any topic is amazing. Maybe it's just me because I listen to so many podcasts but this is Planet Money or The Indicator from NPR about anything.<p>Definitely one of the coolest things I have seen an LLM do.
I wonder how soon until this waitlisted service eventually gets thrown on the trash heap that Google Reader is on.<p>Building trust with your users is important, Google.
I guess I am in my grouchy old person phase, but all I could think of was the Gilfoyle quote from Silicon Valley when he's presented with a talking refrigerator.<p>> "Bad enough it has to talk, does it need fake vocal tics...?" - Gilfoyle<p>Found it: <a href="https://youtu.be/APlmfdbjmUY?si=b4-rgkxeXigU_un_&t=179" rel="nofollow">https://youtu.be/APlmfdbjmUY?si=b4-rgkxeXigU_un_&t=179</a>
This is something I don't get about Google.<p>I saw they launched NotebookLM Audio Overview today: <a href="https://blog.google/technology/ai/notebooklm-audio-overviews/" rel="nofollow">https://blog.google/technology/ai/notebooklm-audio-overviews...</a><p>So what the heck is illuminate and why would they simultaneously launch a competing product?
I think I just discovered a new emotion: simultaneous excitement and disappointment.<p>No matter how great the idea, it's hard to stay excited for more than a few microseconds at the sight of the word "Google". I can already hear the gravediggers' shovels preparing a plot in the Google graveyard, and hear the sobs of the people who built their lives, workflows, even jobs and businesses around something that will be tossed aside as soon as it stops being someone's pet plaything at Google.<p>A strange, ambivalent feeling of hope already tarnished with tragedy.