I was trying Langflow recently for some experiments in our open source project - https://github.com/middlewarehq/middleware to build a RAG over DORA metrics.<p>On my machine, Langflow makes everything super slow, so testing each model's output is painful. Is there a way I can get parallel output from different models to compare?
Running models locally, on your development machine, will be slow. You need beefy GPUs to get good token/sec speeds.<p>Run the models in the cloud, each one on a separate machine, and then invoke them remotely. Or skip the setup time and cost entirely by using third-party APIs directly.
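A minimal sketch of the fan-out-and-compare idea: send one prompt to several remote models in parallel and collect the outputs side by side. The model names and the `call_model` stub are placeholders, not a real SDK; swap in actual API calls (OpenAI, Anthropic, a cloud-hosted Ollama, etc.) for your providers.

```python
# Fan one prompt out to several hosted models concurrently and
# collect the results keyed by model name.
from concurrent.futures import ThreadPoolExecutor

def call_model(model: str, prompt: str) -> str:
    # Placeholder: replace with a real remote API call per provider.
    return f"[{model}] response to: {prompt}"

def compare_models(models: list[str], prompt: str) -> dict[str, str]:
    # Each call runs in its own thread, so with remote APIs the total
    # latency is roughly that of the slowest model, not the sum.
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = {m: pool.submit(call_model, m, prompt) for m in models}
        return {m: f.result() for m, f in futures.items()}

results = compare_models(
    ["llama3", "mistral", "phi3"],  # hypothetical model list
    "Summarize our DORA metrics for the last sprint.",
)
for model, output in results.items():
    print(model, "->", output)
```

Since the calls are I/O-bound, threads are enough here; no need for multiprocessing.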