Hey HN fam,<p>We’ve seen developers spend a lot of time implementing advanced RAG techniques from scratch.<p>While these techniques are essential for improving performance, their implementation requires a lot of effort and testing!<p>To help with this process, our team (Athina AI) has released Open-Source Advanced RAG Cookbooks.<p>This is a collection of ready-to-run Google Colab notebooks featuring the most commonly implemented techniques.<p>Please show us some love by starring the repo if you find this useful!
One of the challenges I have with RAG is excluding table of contents, headers/footers and appendices from PDFs.<p>Is there a tool/technique to achieve this? I’m aware that I can use LLMs to do so, or read all pages and find identical text (header/footer), but I want to keep the page number as part of the metadata to ensure better citation on retrieval.
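Not a full solution, but the identical-text heuristic you mention can be made page-number-aware: treat any line that repeats on most pages as boilerplate, and carry the page index alongside the cleaned text so it survives into your citation metadata. A rough sketch (the `strip_boilerplate` helper and sample pages are mine, not from any library; with a real PDF you'd get the page strings via something like pypdf's `extract_text`):

```python
from collections import Counter

def strip_boilerplate(pages, threshold=0.5):
    """Drop lines (headers/footers) that appear on more than `threshold`
    of pages, keeping the page number as metadata for citations."""
    split_pages = [p.splitlines() for p in pages]
    line_counts = Counter()
    for lines in split_pages:
        # Count each distinct line once per page.
        line_counts.update(set(lines))
    n = len(split_pages)
    boilerplate = {line for line, c in line_counts.items() if c / n > threshold}
    return [
        {"page": i + 1,
         "text": "\n".join(l for l in lines if l not in boilerplate)}
        for i, lines in enumerate(split_pages)
    ]

# With a real PDF you would first extract page texts, e.g.:
#   from pypdf import PdfReader
#   pages = [p.extract_text() for p in PdfReader("doc.pdf").pages]
pages = [
    "ACME Corp Annual Report\nRevenue grew 12% this year.\nConfidential",
    "ACME Corp Annual Report\nCosts were flat.\nConfidential",
    "ACME Corp Annual Report\nOutlook remains positive.\nConfidential",
]
chunks = strip_boilerplate(pages)
```

This won't catch footers that vary per page (e.g. "Page 1", "Page 2"); for those you'd compare lines after stripping digits, which is a small extension of the same idea.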
I would strongly advise against learning RAG through LangChain.<p>It is abstraction hell, and will set you back thousands of engineer-hours the moment you want to do something differently.<p>RAG is actually a very simple thing to do; there's just too much VC money in the space & too many complexity merchants.<p>The best way to learn is outside of notebooks (the hard parts of RAG are all around the actual product), using as few frameworks as possible.<p>My preferred stack is FastAPI/numpy/redis. Simple as pie. You can swap redis for pgvector/Postgres when you're ready for the next complexity step.
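To show how small the retrieval core really is, here it is in plain numpy. The bag-of-words `embed` below is a toy stand-in of my own; a real stack would swap in an actual embedding model, and swap the in-memory matrix for redis or pgvector:

```python
import numpy as np

def embed(text, vocab):
    # Toy bag-of-words "embedding", normalized to unit length.
    # A real system would call an embedding model here instead.
    v = np.array([text.lower().split().count(w) for w in vocab], dtype=float)
    norm = np.linalg.norm(v)
    return v / norm if norm else v

docs = [
    "redis is an in-memory data store",
    "postgres supports the pgvector extension",
    "fastapi is a python web framework",
]
vocab = sorted({w for d in docs for w in d.lower().split()})
doc_matrix = np.stack([embed(d, vocab) for d in docs])

def retrieve(query, k=2):
    # Cosine similarity is just a dot product of unit vectors.
    scores = doc_matrix @ embed(query, vocab)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]
```

That's the whole "vector database": a matrix and a matmul. Everything else is product work.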
Interesting discussion! While RAG is powerful for document retrieval, applying it to code repositories presents unique challenges that go beyond traditional RAG implementations. I've been working on a universal repository knowledge graph system, and found that the real complexity lies in handling cross-language semantic understanding and maintaining relationship context across different repo structures (mono/poly).<p>Has anyone successfully implemented a language-agnostic approach that can:
1. Capture implicit code relationships without heavy LLM dependency?
2. Scale efficiently for large monorepos while preserving fine-grained semantic links?
3. Handle cross-module dependencies and version evolution?<p>Current solutions like AST-based analysis + traditional embeddings seem to miss crucial semantic context. Curious about others' experiences with hybrid approaches combining static analysis and lightweight ML models.
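On the static-analysis side, the per-language extractors are the cheap part. A minimal sketch with Python's stdlib `ast` module (the `extract_edges` helper and its edge schema are illustrative, not from any existing tool); a language-agnostic system would need one such extractor per language, all emitting the same edge format:

```python
import ast

def extract_edges(source, module_name):
    """Pull lightweight relationship edges (imports, direct calls) out of
    one Python module -- the kind of node a repo knowledge graph ingests."""
    tree = ast.parse(source)
    edges = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                edges.append((module_name, "imports", alias.name))
        elif isinstance(node, ast.ImportFrom):
            edges.append((module_name, "imports", node.module or ""))
        elif isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            # Only bare-name calls; attribute calls (obj.method) would need
            # type inference to resolve, which is where the hard part starts.
            edges.append((module_name, "calls", node.func.id))
    return edges

src = "import json\nfrom os import path\nresult = parse(json.dumps({}))\n"
edges = extract_edges(src, "mymod")
```

The comment in the call branch is exactly where I find AST-only approaches run out of road: implicit relationships (duck typing, dynamic dispatch, cross-language FFI) don't show up as syntax, which is presumably where the lightweight-ML layer has to take over.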
Thanks for sharing.<p>If you want notebooks that do some of this with local open models: <a href="https://github.com/neuml/txtai/tree/master/examples">https://github.com/neuml/txtai/tree/master/examples</a> and here: <a href="https://gist.github.com/davidmezzetti" rel="nofollow">https://gist.github.com/davidmezzetti</a>