From some experience I've had with this:

* Is that the right chunk size? How much of a chunk actually contains the relevant information? Is it better for your use case to chunk by sentence? I've done RAG with document chunks, sentences, and triplets (source -> relation -> target), and how you chunk can have a big impact (first sketch below).

* One approach I've seen work very well is to (1) use keyword or entity search to narrow the results, then (2) use semantic similarity to the query to rank what's left (second sketch below). This is how, for example, LitSense does it for sentences from scientific papers: https://www.ncbi.nlm.nih.gov/research/litsense/. Paper here: https://academic.oup.com/nar/article/47/W1/W594/5479473.

* You still need metadata. If a user asks for something like "show me new information about X," the concept of "new" won't be captured by embedding the text; you have to convert it into some kind of date filter. This is where doing RAG with something like OpenAI function calling can be great: the model sees "new" and passes a date to a date filter (third sketch below).

* Some embeddings can be frustrating because they conflate terms that are near-opposites. For example, "increase" and "decrease" may show up as similar because both get mapped into the region for "direction." This is probably less of an issue with better (I assume higher-dimensional) embeddings, but it can bite with some models (fourth sketch below).

* You might need specialized embeddings for a very specific domain: law, finance, biology, and so forth. Words or concepts that are very specific to a domain might not be properly captured in a general embedding space. A "knockout" means something very different in sports, when describing an attractive person, or in biology, where it refers to genetic manipulation.
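To make the chunking point concrete, here's a minimal Python sketch of two granularities. The sentence splitter is a naive regex stand-in (a real pipeline would use something like nltk or spacy), and the size/overlap numbers are arbitrary placeholders:

    import re

    def chunk_by_size(text, size=500, overlap=50):
        # Fixed-size character windows with overlap, so a fact that
        # straddles a boundary still appears whole in at least one chunk.
        step = size - overlap
        return [text[i:i + size] for i in range(0, len(text), step)]

    def chunk_by_sentence(text):
        # Naive split on terminal punctuation; fine for a demo,
        # too brittle for production text.
        return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]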
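For the filter-then-rank approach, here's a rough sketch. It assumes each chunk is a dict with precomputed "text" and "embedding" fields, and embed() stands in for whatever embedding model you're using; both names are made up for illustration:

    import math

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

    def hybrid_search(query, chunks, embed, top_k=10):
        # Stage 1: cheap lexical filter -- keep only chunks that share
        # at least one term with the query.
        terms = set(query.lower().split())
        candidates = [c for c in chunks if terms & set(c["text"].lower().split())]

        # Stage 2: semantic rerank of the survivors only.
        q = embed(query)
        candidates.sort(key=lambda c: cosine(q, c["embedding"]), reverse=True)
        return candidates[:top_k]

The keyword pass keeps the vector comparisons down to a small candidate set, and it stops the semantic stage from drifting toward results that are merely related rather than on-topic.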
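And a sketch of the date-filter idea using OpenAI's function-calling (tools) interface. The exact client syntax has changed across SDK versions, and the tool name, schema, and model here are all placeholders, so treat this as the shape of the pattern rather than copy-paste code:

    import json
    from openai import OpenAI

    client = OpenAI()

    tools = [{
        "type": "function",
        "function": {
            "name": "search_documents",  # hypothetical search tool
            "description": "Search the document store, optionally filtered by date.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string"},
                    "published_after": {
                        "type": "string",
                        "description": "ISO date; set when the user asks for new/recent results",
                    },
                },
                "required": ["query"],
            },
        },
    }]

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "show me new information about X"}],
        tools=tools,
    )

    message = response.choices[0].message
    if message.tool_calls:
        # The model turns "new" into a concrete parameter, e.g. it might
        # emit {"query": "X", "published_after": "2024-06-01"}.
        args = json.loads(message.tool_calls[0].function.arguments)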
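If you're worried about the opposites problem, a quick sanity check is to measure similarity on antonym pairs from your domain before committing to a model. embed() is again a placeholder for your embedding function:

    import numpy as np

    def conflation_check(embed, pairs=(("increase", "decrease"), ("buy", "sell"))):
        # High cosine similarity on these pairs is a red flag that the
        # model is lumping opposites into the same neighborhood.
        for a, b in pairs:
            va, vb = np.asarray(embed(a)), np.asarray(embed(b))
            sim = float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))
            print(f"{a!r} vs {b!r}: cosine similarity {sim:.3f}")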