Open-Source LLMs are currently one of the biggest trends in AI to be preparing for! Vinod has a remarkable history with open-source software, having famously authored the "Halloween Documents", a set of internal Microsoft strategy memos analyzing the oncoming wave of open-source. Vinod shares some fascinating insights on how building a business around open-source has evolved, and then Bob sparks a debate around "stateless" models.

Bob has been intensely exploring the new business philosophy around AI models, drawing connections to traditional ways of thinking about stateless and stateful software. For example, an mp3 file of a song is stateless, whereas your playlist, and the application that serves it, are stateful. AI models sit at an interesting intersection of the two: the model itself is stateless, but once you connect it to your data, for example with RAG (Retrieval-Augmented Generation), you make it stateful. The podcast digs into this viewpoint and its nuances (a toy sketch of the distinction follows the links at the end of this post). Vinod pushes back on the analogy, preferring to describe the models as "pre-baked". I think this is such an interesting topic for the future of value capture in AI models; curious what people think!

The podcast then moves on to the future of RAG. Vinod outlines several directions in which the technology may evolve: joint end-to-end training of the retriever and the generator, training one of the two models while holding the other fixed, models tailor-made for attending to retrieved context, task-specific models, and a resurgence of knowledge graphs and explicit representations of knowledge. This sparks further debate around implicit vs. explicit representations of knowledge in LLMs and RAG systems, and the exciting but counter-intuitive research on knowledge editing, such as ROME, MEMIT, GRACE, ... (a second toy sketch below illustrates the rank-one editing idea).

I hope this quick synopsis inspires your interest in the podcast! There are many other topics covered, such as MemGPT and Gorilla, DSPy, and Generative Feedback Loops!

YouTube: https://www.youtube.com/watch?v=ySNX2cPh5Tk

Spotify: https://spotifyanchor-web.app.link/e/v4FIPyf5AGb
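
To make the stateless/stateful distinction concrete, here is a minimal hypothetical sketch (not from the podcast): the model call itself carries no memory of your data, while the retrieval step injects it at query time. The `generate` stub stands in for a real LLM API, and the keyword-overlap retriever stands in for a vector database.

    # Toy sketch: a "stateless" model call vs. a "stateful" RAG call.

    def generate(prompt: str) -> str:
        # Placeholder for an LLM call; a real system would hit an API here.
        return f"[model answer conditioned only on: {prompt!r}]"

    DOCUMENTS = [
        "Our Q3 launch date moved to November 12.",
        "The playlist service stores user state in Postgres.",
    ]

    def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
        # Toy relevance score: count shared lowercase words.
        score = lambda d: len(set(query.lower().split()) & set(d.lower().split()))
        return sorted(docs, key=score, reverse=True)[:k]

    # Stateless: the model answers from its frozen ("pre-baked") weights alone.
    print(generate("When is the Q3 launch?"))

    # Stateful: retrieved context makes the answer depend on *your* data.
    context = "\n".join(retrieve("When is the Q3 launch?", DOCUMENTS))
    print(generate(f"Context:\n{context}\n\nQuestion: When is the Q3 launch?"))

The point of the analogy debate is exactly this seam: the weights never change, but the system's answers now track your data.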
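And for the knowledge-editing thread, a toy illustration of the rank-one update idea behind methods like ROME. This is just the core linear algebra, not the actual algorithm (ROME locates the layer to edit via causal tracing and uses a covariance-weighted update): given a weight matrix W, a "key" vector k (roughly, the subject's hidden representation) and a desired "value" v_new (roughly, the new fact), a rank-one update forces W @ k == v_new while perturbing the rest of the map as little as possible.

    # Toy sketch of a rank-one "knowledge edit" in the spirit of ROME.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(8, 8))   # stand-in for one MLP projection matrix
    k = rng.normal(size=8)        # "key": hidden state encoding the subject
    v_new = rng.normal(size=8)    # "value": activation encoding the new fact

    # Minimum-norm rank-one correction so the edited matrix maps k to v_new.
    W_edited = W + np.outer(v_new - W @ k, k) / (k @ k)

    print(np.allclose(W_edited @ k, v_new))  # True: the "fact" is rewritten

The counter-intuitive part the podcast touches on is that individual facts can be surgically rewritten in weights at all, which blurs the implicit/explicit knowledge line that RAG and knowledge graphs sit on either side of.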