> Limit additional context in retrieval-augmented generation (RAG): When providing additional context or documents, include only the most relevant information to prevent the model from overcomplicating its response.<p>This Databricks blog post seemingly contradicts that advice:<p>> OpenAI o1 models show a consistent improvement over Anthropic and Google models on our long context RAG Benchmark up to 128k tokens.<p><a href="https://www.databricks.com/blog/long-context-rag-capabilities-openai-o1-and-google-gemini" rel="nofollow">https://www.databricks.com/blog/long-context-rag-capabilitie...</a>
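<p>For what it's worth, the quoted advice amounts to a relevance filter over retrieved chunks before they hit the prompt. A minimal sketch (the `k` and `min_score` values and the chunk dict shape are hypothetical, not from either source):

```python
def filter_context(chunks, k=3, min_score=0.5):
    """Keep only the top-k retrieved chunks whose relevance score clears
    a threshold, so the prompt carries minimal additional context."""
    relevant = [c for c in chunks if c["score"] >= min_score]
    relevant.sort(key=lambda c: c["score"], reverse=True)
    return relevant[:k]

# Example: retrieved chunks with hypothetical similarity scores.
retrieved = [
    {"text": "doc A", "score": 0.91},
    {"text": "doc B", "score": 0.42},
    {"text": "doc C", "score": 0.77},
    {"text": "doc D", "score": 0.63},
]
context = filter_context(retrieved)  # doc B is dropped as low-relevance
```

The Databricks result doesn't necessarily invalidate this kind of filtering; it shows some models degrade less when you skip it and stuff more context in.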