This is an example of how to build a RAG app on FastAPI with vector embeddings and LLM inference broken out as separate services. Using Runhouse, those services can be hosted on your own infra (an A10 GPU in your own AWS account, for example).

Hoping this is helpful for anyone considering ways to scale out the components of a more complex RAG application.
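For anyone curious what that looks like in code, here's a rough sketch of the pattern. The function names, cluster config, and toy vector store are my own illustrations rather than the repo's actual code, and the Runhouse calls reflect my understanding of its API at the time of writing, so check the docs for current usage:

```python
# Sketch: FastAPI stays a thin web layer while embedding and LLM inference
# run as Runhouse services on a GPU box in your own cloud account.
import runhouse as rh
from fastapi import FastAPI

def embed(texts: list[str]) -> list[list[float]]:
    # Stub: swap in a real embedding model (e.g. a sentence-transformer).
    return [[0.0, 0.0, 0.0] for _ in texts]

def generate(prompt: str) -> str:
    # Stub: swap in real LLM inference (e.g. vLLM or transformers).
    return f"(answer generated from prompt of {len(prompt)} chars)"

# On-demand A10 GPU provisioned in your own AWS account via Runhouse.
gpu = rh.cluster(name="rag-gpu", instance_type="A10G:1", provider="aws").up_if_not()

# Sending the functions to the cluster turns local calls into remote calls,
# so each piece can be scaled or swapped independently of the web app.
remote_embed = rh.function(embed).to(gpu)
remote_generate = rh.function(generate).to(gpu)

class ToyVectorStore:
    # Placeholder for whatever vector DB you use; retrieval stays local here.
    def search(self, vector: list[float], k: int = 3) -> list[str]:
        return ["(top-k retrieved passages would go here)"]

store = ToyVectorStore()
app = FastAPI()

@app.get("/ask")
def ask(q: str) -> dict:
    query_vec = remote_embed([q])[0]              # embed the query on the GPU service
    context = "\n".join(store.search(query_vec))  # retrieve supporting passages
    prompt = f"Context:\n{context}\n\nQuestion: {q}"
    return {"answer": remote_generate(prompt)}    # generate on the GPU service
```

The point of the split is that the web server never needs a GPU: the embedding and generation services can be scaled, upgraded, or relocated without touching the FastAPI app.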