I'm a CS/AI teacher in an engineering school. A few days ago, towards the end of my course on convolutional neural networks, I asked my students to explain why the first linear layer of the example PyTorch network had a specific number of neurons. This is a non-trivial question whose answer isn't directly available online (it depends on the input dimensions and the nature of all the previous layers).<p>They struggled for a while, and the first student who gave the right answer explained how he did it. All morning, he had interacted with ChatGPT while following my course, asking questions each time my own explanations weren't sufficient for him to understand. He managed to give the LLM enough context and information for it to spit out not only the right answer, but also the whole underlying process for obtaining it. In French, no less ;)<p>This was an eye-opening, but also somewhat unsettling, experience for me. I don't use ChatGPT & co. much for now, so this might seem pretty mundane to some of you. Anyway, I realized that during any lecture or lab, teachers will soon face (or are already facing) augmented students able to check and consolidate their understanding in real time. This is great news for education as a whole, but it certainly calls our current teaching model into question.
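(To give a concrete idea of the question, here is a hypothetical PyTorch sketch, not the exact network from my course: the input size of the first linear layer follows from the input resolution and from every conv/pool layer that precedes it.)

    import torch
    import torch.nn as nn

    # Hypothetical example (not the course's actual network): two conv/pool
    # blocks applied to 1x28x28 inputs.
    features = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3),   # 28x28 -> 26x26
        nn.ReLU(),
        nn.MaxPool2d(2),                   # 26x26 -> 13x13
        nn.Conv2d(16, 32, kernel_size=3),  # 13x13 -> 11x11
        nn.ReLU(),
        nn.MaxPool2d(2),                   # 11x11 -> 5x5
    )

    # The first linear layer must therefore take 32 * 5 * 5 = 800 input
    # features; a dummy forward pass confirms the hand computation.
    with torch.no_grad():
        n_features = features(torch.zeros(1, 1, 28, 28)).flatten(1).shape[1]
    print(n_features)  # 800
    classifier = nn.Linear(n_features, 10)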
The important bit here is that you spent time with your kid and engaged with her, not the tech that you put in front of her.<p>Neal Stephenson, when asked how to make the Young Lady’s Illustrated Primer real:
“Kids need to get answers from humans who love them”
The NotebookLM podcast feature is so unbelievably good! I've been feeding it random papers I've always wanted to read, and it's giving me these entertaining high-level summaries. The banter is almost addictive. Also, reading the papers in detail becomes much easier once you understand the topics.
I am really impressed by the podcast it generates, and I hope they will introduce other voices/styles because this style can be too much for some people (including me). Not so impressed by the chat experience, though. Sometimes it sounds like the bot is annoyed at my questions :) Which is a refreshing change from the pathological politeness of ChatGPT.
I'm looking forward to this expanding to other languages. There is so much great information out there, but unfortunately mostly in English. LLMs are really great at translating, and together with recent text-to-speech advances, it seems straightforward for the tool to consume inputs in English and generate the podcast in another language.<p>Would be a great step in spreading knowledge across cultures and backgrounds.
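A rough sketch of that pipeline with tools I know (this is not how NotebookLM works internally; the OpenAI model and voice names are just examples): translate the English source with an LLM, then synthesize speech in the target language.

    from openai import OpenAI

    client = OpenAI()

    with open("paper_summary_en.txt") as f:
        english_text = f.read()

    # Step 1: translate with an LLM (German chosen arbitrarily).
    translation = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": "Translate this into German:\n\n" + english_text}],
    ).choices[0].message.content

    # Step 2: text-to-speech on the translated script (short inputs only;
    # a real tool would chunk the text and handle multiple speakers).
    speech = client.audio.speech.create(model="tts-1", voice="alloy",
                                        input=translation)
    speech.stream_to_file("podcast_de.mp3")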
It’s crazy how good NotebookLM is at summarizing information. I've been using it non-stop to quickly get through a pile of articles I usually put off, and it’s honestly freeing up so much time.<p>It's not just the time savings; it feels like I'm actually getting more out of what I read. Makes me wonder what we'll be able to do with this kind of technology in the next few years!
Tectonic, not "techtonic."<p>And poor kid, being fed this AI sludge. If you want to teach an 8-year-old about subduction, use the Magic School Bus!
I've been trying to add high-quality elevation data to the maps on my blog. I'm way out of my depth trying to convert things from one format to another. There are seemingly endless ways to store the data and a bunch of acronyms (DEM, DTM, DSM, SRTM, etc.)<p>Anyway, I resorted to ChatGPT to help. It gave me a complete end-to-end process, including how to install tools, that was almost entirely wrong at every step. But, in one step it pointed me towards a tool I didn't know about, and that tool finally plugged the gap I needed to fill.<p>I was able to do this because I treat ChatGPT like any other source of knowledge: with a healthy dose of scepticism. Almost everything I do is piecing together knowledge from different sources, written at different levels for different audiences, etc. What worries me is whether kids are going to miss out on this. Seeing a fully typeset document or listening to pristine audio of someone who intentionally sounds so confidently correct seems like it will really fuck with their ability to discern the truth. They'll either believe everything, or they'll believe nothing.<p>What worries me even more is that these models are controlled by an oligopoly who twist and bend them to fit their own beliefs and narratives. Instead of going to the public library, children will be getting knowledge filtered through layers of "AI" that will never tell them anything it's not supposed to tell them.
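For anyone wading through the same format soup, a minimal sketch assuming GDAL's Python bindings (not necessarily the tool ChatGPT pointed me to): convert a raw SRTM tile into a GeoTIFF DEM and derive a hillshade from it. File names are examples.

    from osgeo import gdal

    gdal.UseExceptions()

    # Convert an SRTM .hgt tile (one of the many DEM formats) into a GeoTIFF.
    gdal.Translate("n47_e008_dem.tif", "N47E008.hgt", format="GTiff")

    # Derive a hillshade raster from the DEM, ready to overlay on a map.
    gdal.DEMProcessing("n47_e008_hillshade.tif", "n47_e008_dem.tif", "hillshade")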
Thank you, Stranger!<p>I have been seeing lots of articles about NotebookLM, but had never tried it. Today I am going to try this (with a different article) for my 6-year-old daughter. This looks super amazing for teaching tough concepts to kids.<p>Thank you!
On a parallel note: I came across this damn useful use case of NotebookLM, atomicideas.ai, which creates engaging audiobooks from book summaries.<p>Is NotebookLM the most innovative AI product from Google in the GenAI space?
Impressive! I'm wondering if something like this is possible with other models, since Gemini's extremely large context length certainly helps <i>a lot</i> with parsing such large documents.
Wow, I honestly was not intending to click through and pay much attention to TFA, but I pressed "play" just to see, and I was immediately hooked.<p>Maybe because I'm saturated with LLM stories, and yet another summarization use case, blah blah blah.<p>There is something fascinating to me about two (synthetic) people talking to each other engaging my attention, and then amazement that they were turning this esoteric paper into a radio show at the popular level (or doing so for any paper whose content would never actually warrant two real people making a radio show about it, even if you paid them to). Maybe an aspect of it is that if two people are talking about something, it has inherently passed a filter that "it might be interesting" to know about.<p>Surprisingly eye-opening about the technology here.
[originally posted to a previous submission: <a href="https://news.ycombinator.com/item?id=41696561">https://news.ycombinator.com/item?id=41696561</a>]<p>Honestly, the resulting "podcast" is kind of underwhelming as an actual artifact. It's basically "plate tectonics for kids" mixed with some gobbledygook from the input paper. Sure, it's kind of neat as a tech demo, but actually trying to use it just shows the mediocritizing-to-enshittifying influence of LLMs.<p>Honestly, I think the guy would have gotten an even better response from his kid from any competently done "plate tectonics for kids" video (e.g. from Khan Academy: <a href="https://www.khanacademy.org/science/middle-school-earth-and-space-science/x87d03b443efbea0a:the-geosphere/x87d03b443efbea0a:plate-tectonics/v/introduction-to-plate-tectonics" rel="nofollow">https://www.khanacademy.org/science/middle-school-earth-and-...</a>).<p>Curation and digital distribution basically solved this problem earlier, with better results. LLMs are just a shittier private-label article factory version (<a href="https://blog.smashwords.com/2010/01/scam-of-private-label-articles.html" rel="nofollow">https://blog.smashwords.com/2010/01/scam-of-private-label-ar...</a>).