A Comprehensive Guide for Building RAG-Based LLM Applications

184 points by robertnishihara over 1 year ago

10 comments

version_five over 1 year ago
FWIW, having written a simple RAG system from "scratch" (meaning not using frameworks or api calls), it's not more complicated than doing it this way with langchain etc.

This post is mostly about plumbing. It's probably the right way to do it if it needs to be scaled. But for learning, it obscures what is essentially simple stuff going on behind the scenes.
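For a sense of what that "simple stuff" looks like, here is a minimal from-scratch sketch of the retrieve-then-prompt loop; the embedding model and toy corpus are placeholders, not anything from the article:

```python
# Minimal from-scratch RAG: embed a corpus, rank by cosine similarity,
# and stuff the top hits into a prompt. Model name and corpus are toys.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Ray Tune is a library for distributed hyperparameter tuning.",
    "Ray Serve is a scalable library for model serving.",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

def retrieve(query: str, k: int = 1) -> list[str]:
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # cosine similarity, since vectors are unit-normalized
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

query = "What does Ray Tune do?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # this string would go to whatever LLM you call next
```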
bguberfain over 1 year ago
What caught my attention in this article is the section named "Cold Start", where it generates questions based on a provided context. I think it is a good way to cheaply generate a Q&A dataset that can later be used to fine-tune a model. But the problem is that it generates some questions and answers of bad quality. All the generated examples have issues:

- "What is the context discussing about?" - which context?
- "The context does not provide information on what Ray Tune is." - not an answer
- "The context does not provide information on what external library integrations are." - same as before

I could only think of manual review to remove these noisy questions. Any ideas on how to improve this Q&A generation? I've tried it before, but with paltry results.
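A cheap first pass before manual review is a heuristic filter that drops pairs whose answer admits it has no information, or whose question only refers to "the context". A sketch (the phrase list is a guess at common failure modes, not from the article):

```python
# Heuristic filter for synthetic Q&A pairs: drop non-answers and
# questions that mention only "the context". Phrases are illustrative.
BAD_ANSWER_PHRASES = (
    "does not provide information",
    "context does not",
    "cannot be determined",
)

def keep_pair(question: str, answer: str) -> bool:
    if any(p in answer.lower() for p in BAD_ANSWER_PHRASES):
        return False
    # Questions about "the context" itself are useless once detached
    # from the source passage they were generated from.
    if "the context" in question.lower():
        return False
    return True

pairs = [
    ("What is the context discussing about?", "It discusses Ray."),
    ("What is Ray Tune?", "The context does not provide information on what Ray Tune is."),
    ("What is Ray Tune?", "Ray Tune is a hyperparameter tuning library."),
]
cleaned = [p for p in pairs if keep_pair(*p)]
print(cleaned)  # only the last pair survives
```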
pplonski86 over 1 year ago
Can it be easier to do RAG? Do we always need to have a vector DB? Why can't the LLM search through the context by itself?
ajhai over 1 year ago
Kudos to the team for a very detailed notebook going into things like pipeline evaluation with respect to performance, costs, etc. Even if we ignore the framework-specific bits, it is a great guide to follow when building RAG systems in production.

We have been building RAG systems in production for a few months and have been tinkering with different strategies to get the most performance out of these pipelines. As others have pointed out, a vector database may not be the right strategy for every problem. Similarly, there are things like the "lost in the middle" problem (https://arxiv.org/abs/2307.03172) that one may have to deal with. We put together our learnings building and optimizing these pipelines in a post at https://llmstack.ai/blog/retrieval-augmented-generation.

https://github.com/trypromptly/LLMStack is a low-code platform we open-sourced recently that ships these RAG pipelines out of the box with some app templates, if anyone wants to try them out.
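One common mitigation for "lost in the middle" is to reorder retrieved chunks so the strongest results sit at the edges of the prompt, where models attend best. A minimal sketch of that heuristic (the interleaving scheme is one common choice, not something the paper prescribes):

```python
# Reorder chunks ranked best-first so the strongest results sit at the
# start and end of the prompt, and the weakest land in the middle.
def reorder_for_long_context(chunks_best_first: list[str]) -> list[str]:
    front, back = [], []
    for i, chunk in enumerate(chunks_best_first):
        (front if i % 2 == 0 else back).append(chunk)
    return front + back[::-1]

ranked = ["chunk1", "chunk2", "chunk3", "chunk4", "chunk5"]
print(reorder_for_long_context(ranked))
# ['chunk1', 'chunk3', 'chunk5', 'chunk4', 'chunk2']
```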
zackproser over 1 year ago
While you don't strictly "need" a vector DB to do RAG, as others have pointed out, vector databases excel when you're dealing with natural language, which is ambiguous.

This will be the case when you're exposing an interface that end users can submit arbitrary queries to, such as "how do I turn off reverse braking".

By converting the user's query to vectors before sending it to your vector store, you're getting at the user's actual intent behind their words, which can help you retrieve more accurate context to feed to your LLM when asking it to perform a chat completion, for example.

This is also important if you're dealing with proprietary or non-public data that a search engine can't see. Context-specific natural language queries are well suited to vector databases.

We wrote up a guide with examples here: https://www.pinecone.io/learn/retrieval-augmented-generation/

And we've got several example notebooks you can run end to end using our free tier here: https://docs.pinecone.io/page/examples
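Concretely, the embed-query-then-search flow looks something like the sketch below. It assumes the current OpenAI and Pinecone Python clients; the index name, the "text" metadata field, the model names, and the API key are all placeholders, not details from the comment:

```python
# Sketch of the query path described above: embed the user's query, search
# the vector store, then feed the matches to the LLM as context.
from openai import OpenAI
from pinecone import Pinecone

oai = OpenAI()                                        # reads OPENAI_API_KEY from the env
index = Pinecone(api_key="YOUR_KEY").Index("docs-index")  # placeholder key and index

query = "how do I turn off reverse braking"
vec = oai.embeddings.create(
    model="text-embedding-3-small", input=query
).data[0].embedding

res = index.query(vector=vec, top_k=3, include_metadata=True)
context = "\n".join(m.metadata["text"] for m in res.matches)

chat = oai.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": f"Context:\n{context}\n\nQuestion: {query}"}],
)
print(chat.choices[0].message.content)
```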
deanmoriarty over 1 year ago
My question is: if I want to use an LLM to help me sift through a large amount of structured data, say for example all the logs for a bunch of different applications from a certain cloud environment, each with their own idiosyncrasies and specific formats (many GBs of data), can the RAG pattern be useful here?

Some of my concerns:

1) Is sentence embedding using an off-the-shelf embedding model going to capture the "meaning" of my logs? My answer is "probably not". For example, if a portion of my logs is in this format:

    timestamp_start,ClassName,FunctionName,timestamp_end

will I be able to get meaningful embeddings that satisfy a query such as "what components in my system exhibited an anomalously high latency lately?" (this is just one example among many different queries I'd have)? Based on the little I know, it seems to me off-the-shelf embeddings wouldn't be able to match the embedding of my query with the embeddings for the relevant log lines, given the complexity of this task.

2) Is it going to be even feasible (cost/performance-wise) to use embeddings when one has a firehose of data coming through, or is it better suited for a mostly-static corpus of data (e.g. your typical corporate documentation or product catalog)?

I know that I can achieve something similar with a Code Interpreter-like approach, so in theory I could build a multi-step reasoning agent that, starting from my query and the data, would try to (1) discover the schema and then (2) crunch the data to try to get to my answer, but I don't know how scalable this approach would effectively be.
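For logs in exactly that shape, the "crunch the data" step can be plain aggregation with no embeddings involved; a toy sketch (column names taken from the format above, data and the 3x threshold invented):

```python
# Toy version of the "discover schema, then crunch" step: compute
# per-component latency and flag anomalies directly, no embeddings.
import io
import pandas as pd

raw = """timestamp_start,ClassName,FunctionName,timestamp_end
1000.0,Checkout,charge_card,1000.9
1001.0,Checkout,charge_card,1001.8
1002.0,Search,run_query,1002.1
1003.0,Search,run_query,1003.1
1004.0,Search,run_query,1004.1
1005.0,Search,run_query,1014.0
"""
df = pd.read_csv(io.StringIO(raw))
df["latency"] = df["timestamp_end"] - df["timestamp_start"]

stats = df.groupby(["ClassName", "FunctionName"])["latency"].agg(["mean", "max"])
# Flag components whose worst call is far above their typical latency
# (the 3x multiplier is an arbitrary illustration).
anomalous = stats[stats["max"] > 3 * stats["mean"]]
print(anomalous)  # flags Search.run_query, whose worst call took 9s
```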
gsuuon over 1 year ago
Wow, this was indeed super comprehensive. A few things I noticed:

- In the cold start section, a couple of the synthetic_data responses say "context does not provide info..."
- It's strange that retrieval_score would decrease while quality_score increases at the higher chunk sizes. Could this just be that the retrieved chunk is starting to be larger than the reference?
- GPT-3.5 pricing looks out of date; it's currently $0.0015 for input for the 4k model.
- Interesting that pricing needs to be shown on a log scale. GPT-4 is 46x more expensive than Llama 2 70B for a ~0.3 score increase. Training a simple classifier seems like a great way to handle this (a sketch follows below).
- I wonder how stable the quality_score assessment is given the exact same configuration. I guess the score differences between falcon-180b, llama-2-70b and gpt-3.5 are insignificant?

Is there a similarly comprehensive deep dive into chunking methods anywhere? Especially for queries that require multiple chunks to answer at all; producing more relevant chunks would have a massive impact on response quality, I imagine.
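The classifier-router idea in the fourth bullet could be as small as this sketch; the training queries, labels, and model names are all invented for illustration:

```python
# Sketch of a router classifier: send easy queries to a cheap model and
# hard ones to an expensive one. Training data and labels are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

queries = [
    "what is ray",
    "how do I install ray",
    "compare tradeoffs of tuning schedulers under fractional GPUs",
    "why does my placement group deadlock with nested remote tasks",
]
labels = ["cheap", "cheap", "expensive", "expensive"]  # invented labels

router = make_pipeline(TfidfVectorizer(), LogisticRegression())
router.fit(queries, labels)

tier = router.predict(["what does ray serve do"])[0]
model = {"cheap": "llama-2-70b", "expensive": "gpt-4"}[tier]
print(model)  # route this query's completion to the chosen model
```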
yujian over 1 year ago
Anyscale consistently posts great projects. Very cool to see the cost comparison and quality comparison. Not surprising to see that OSS is less expensive, but also rated as slightly lower quality than gpt-3.5-turbo.

I do wonder, is there some bias in the quality measures? Using GPT-4 to evaluate GPT-4's output? https://www.linkedin.com/feed/update/urn:li:activity:7103398601090863104/
robertnishihara over 1 year ago
Here is the blog post accompanying the notebook:

https://www.anyscale.com/blog/a-comprehensive-guide-for-building-rag-based-llm-applications-part-1
tshrjn007 over 1 year ago
What do you use to generate the diagrams in the post? Super neat.