LangChain? Cohere? LlamaIndex? DIY?<p>Are you finding specific pros/cons for the ones that try to be a platform? As an example, we've found LangSmith's integration with LangChain super useful, even though LangChain itself has its pros and cons.
I'm mainly hacking around with my LLM CLI tool, experimenting with different combinations of embedding models and LLMs: <a href="https://til.simonwillison.net/llms/embed-paragraphs#user-content-answering-a-question" rel="nofollow">https://til.simonwillison.net/llms/embed-paragraphs#user-con...</a><p>I really need to add a web interface to that so it's a bit more accessible to people who don't live in the terminal!
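For anyone curious what that embed-and-retrieve loop looks like with no framework at all, here's a minimal sketch in plain Python. The three-dimensional vectors and the hard-coded paragraphs are toy stand-ins for a real embedding model's output, which would be a few hundred floats per paragraph:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Pretend these vectors came from an embedding model; in reality you
# would embed every paragraph of your corpus once and store the results.
paragraphs = {
    "Cats sleep up to 16 hours a day.": [0.9, 0.1, 0.0],
    "Python uses indentation for blocks.": [0.1, 0.9, 0.2],
    "The Eiffel Tower is in Paris.": [0.0, 0.2, 0.9],
}

def best_context(question_vector, top_k=1):
    # Rank stored paragraphs by similarity to the question embedding;
    # the winners get pasted into the prompt sent to the LLM.
    ranked = sorted(
        paragraphs,
        key=lambda p: cosine(paragraphs[p], question_vector),
        reverse=True,
    )
    return ranked[:top_k]

# A question vector that lands "near" the Python paragraph:
print(best_context([0.2, 0.8, 0.1]))
```

The whole trick is just "embed question, rank paragraphs by cosine similarity, stuff the winners into the prompt" — everything else the frameworks add is plumbing around that.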
I'm taking a DIY approach to RAG/function calling for a work tool. We're looking for data sovereignty, so we're probably going to self-host. To that end, I'm using Ollama to serve some models. If you want to go DIY, I would highly recommend NexusRaven as your function calling model.<p>No promises, but I'm hopeful we can open-source our work eventually.
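The DIY function-calling loop itself is small: the model emits a structured call, and your code parses it and dispatches to a real Python function. A minimal sketch, assuming the model returns a JSON call object (NexusRaven's actual output format is its own call syntax, so a real parser for it would differ — the dispatch idea is the same):

```python
import json

# Toy "tool" the model is allowed to call; in a real setup the tool
# signatures/schemas are described to the model in the prompt.
def get_weather(city):
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def dispatch(model_output):
    # Assumed model output shape (hypothetical, for illustration):
    #   {"name": "get_weather", "arguments": {"city": "Berlin"}}
    call = json.loads(model_output)
    fn = TOOLS[call["name"]]  # only allow registered tools
    return fn(**call["arguments"])

print(dispatch('{"name": "get_weather", "arguments": {"city": "Berlin"}}'))
```

Restricting dispatch to an explicit `TOOLS` registry (rather than `eval`-ing whatever the model emits) is the part worth keeping even when the parsing gets fancier.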
I used LangChain and models hosted on Ollama for my latest project [1]. Now that I have a GPU and Ollama is available for Windows, I can build LLM-based applications quickly with local debugging.<p>[1] <a href="https://github.com/bovem/chat-with-doc">https://github.com/bovem/chat-with-doc</a>
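Even without LangChain, talking to a local Ollama instance is just HTTP: the server listens on localhost:11434 and its /api/chat endpoint takes a JSON body. A stdlib-only sketch of building such a request (the model name, document text, and question here are made up; the network call itself is left commented out so this runs without a server):

```python
import json
import urllib.request

def build_chat_request(model, question, context):
    # Ollama's local server listens on http://localhost:11434 by default.
    payload = {
        "model": model,
        "stream": False,  # ask for one complete response, not chunks
        "messages": [
            {"role": "system", "content": f"Answer using this document:\n{context}"},
            {"role": "user", "content": question},
        ],
    }
    return urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("llama2", "What is the refund policy?",
                         "Refunds are accepted within 30 days.")
# With an Ollama server running, you would send it like this:
#   resp = json.load(urllib.request.urlopen(req))
#   print(resp["message"]["content"])
```

LangChain's Ollama integration wraps roughly this under a nicer interface, which is handy once you start chaining it with loaders and retrievers for the chat-with-your-document case.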