No subtitles because it is a recorded demonstration (of a chat).

The description is:

> Retrieval-augmented generation is a good solution to avoid hallucinations, but directionally, context memory is getting much better at storing data accurately while retaining all the flexibility of LLM interaction.
>
> All operations shown in the video were done directly in context memory. NO EXTERNAL AGENTS OR EXTERNAL SYSTEMS were used. Results were checked against a real Postgres database for accuracy. Some minor early stumbles, but by the time we get to over 100 rows across multiple tables, it still joins correctly. From my research, it appears to be mostly confused by ASCII collation between uppercase and lowercase. The 3-hour cap with GPT-4 required multiple sessions to be stitched together.
>
> Andrej Karpathy described the Transformer as a "general purpose differentiable computer". But what does that mean? Contextual embeddings are somewhat analogous to a differentiable type system (think classifier meets Voevodsky higher-dimensional types), and Transformers enrich this type system with generative dependency rules that compose types (think calculus of constructions, or Minecraft-style crafting rules).
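
For reference, a minimal Python sketch (not from the video, example values are mine) of the collation difference the description points to: under byte/ASCII ("C"-style) ordering every uppercase letter sorts before every lowercase one, whereas a typical Postgres locale collation sorts roughly case-insensitively, so a model reasoning in ASCII terms can disagree with the database on ordered results.

    # Byte-wise (ASCII / "C" collation) ordering: uppercase letters (65-90)
    # sort before lowercase letters (97-122).
    names = ["alice", "Bob", "carol", "Dave"]
    print(sorted(names))
    # -> ['Bob', 'Dave', 'alice', 'carol']

    # Case-folded ordering, roughly what an en_US-style locale collation
    # in Postgres would produce for these values.
    print(sorted(names, key=str.casefold))
    # -> ['alice', 'Bob', 'carol', 'Dave']

The same divergence shows up in SQL: ORDER BY on a text column gives the first ordering under the "C" collation and the second under a locale collation, which would be an easy place for the model to slip.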