
Hallucinations are all you need – context memory may replace RAG

3 points by rainy59 about 2 years ago

1 comment

mdp2021 about 2 years ago

No subtitles because it is a recorded demonstration (of a chat).

The description is:

> Retrieval-augmented generation is a good solution to avoid hallucinations, but directionally, context memory is getting much better at storing data accurately while retaining all the flexibility of LLM interaction // All operations shown in the video were done directly in context memory. NO EXTERNAL AGENTS OR EXTERNAL SYSTEMS were used. Results were checked against a real Postgres database for accuracy. Some minor early stumbles, but by the time we get to over 100 rows across multiple tables, it still joins correctly. From my research, it appears to be mostly confused by ASCII collation between uppercase and lowercase. The 3-hour cap with GPT-4 required multiple sessions to be stitched together // Andrej Karpathy described the Transformer as a "general purpose differentiable computer". But what does that mean? Contextual embeddings are somewhat analogous to a differentiable type system (think classifier meets Voevodsky higher-dimensional types) and Transformers enrich this type system with generative dependency rules that compose types (think calculus of constructions or Minecraft-style crafting rules)