A combination of RAG and fine-tuning will be much more useful IMHO.

Think of LLMs as generic models that can answer almost anything, but with lower accuracy.

You fine-tune them to learn the specifics of a particular domain, so they can give more factual answers to domain-specific questions.

Finally, you add RAG on top of the fine-tuned model to get answers in the context of your organisation or specific documents.

How all this will pan out remains to be seen, but there are surely interesting applications to come out of these technologies.
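
To make the "RAG on top of a fine-tuned model" idea concrete, here's a minimal sketch using the OpenAI Python client: rank a handful of organisation documents by embedding similarity, then pass the best match as context to a fine-tuned model. The fine-tuned model id, the embedding model choice, and the document snippets are all placeholder assumptions, not anything from a real deployment.

```python
# Minimal sketch: retrieve relevant org documents, then ask a fine-tuned model.
# Model ids and documents below are hypothetical placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

FINE_TUNED_MODEL = "ft:gpt-4o-mini-2024-07-18:my-org::abc123"  # hypothetical id
EMBEDDING_MODEL = "text-embedding-3-small"

documents = [
    "Refund requests must be filed within 30 days of purchase.",
    "Support tickets are triaged within one business day.",
]

def embed(texts):
    resp = client.embeddings.create(model=EMBEDDING_MODEL, input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(documents)

def answer(question, top_k=1):
    # Rank documents by cosine similarity to the question.
    q = embed([question])[0]
    scores = doc_vectors @ q / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q)
    )
    context = "\n".join(documents[i] for i in scores.argsort()[::-1][:top_k])

    # The fine-tuned model supplies domain knowledge and phrasing;
    # the retrieved context supplies org-specific facts.
    resp = client.chat.completions.create(
        model=FINE_TUNED_MODEL,
        messages=[
            {"role": "system", "content": f"Answer using this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("How long do customers have to request a refund?"))
```

In practice the document store and retriever would be far more elaborate, but the split is the same: fine-tuning shapes the model's domain behaviour once, while retrieval injects whatever organisation-specific facts are relevant at query time.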