Hi everyone,<p>I’m learning about LLMs and AI, and I’m building a multi-modal, full-stack LLM chat agent. [0]<p>It uses semantic-router for dynamic conversation routing and LiteLLM for model providers.<p>It was a lot of fun to learn and build.<p>Here is the full list of supported large language models; I will add more in the future. [1]<p>And, of course, you can run Llama 3 locally via Ollama!<p>In the future I will add function-calling (tool use) support to make the models more capable, like an agent.<p>I hope this project helps everyone try out a multi-modal LLM agent across providers!<p>[0] GitHub: <a href="https://github.com/vinhnx/VT.ai">https://github.com/vinhnx/VT.ai</a>
[1] List of LLM models currently supported: <a href="https://github.com/vinhnx/VT.ai/blob/main/src/vtai/llms_config.py">https://github.com/vinhnx/VT.ai/blob/main/src/vtai/llms_conf...</a>
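For anyone curious how the routing layer fits together: semantic-router matches an incoming message against a set of named routes (it does this by embedding similarity), and the chosen route decides which model a LiteLLM-style call should use. The route names, example utterances, and model IDs below are hypothetical, and the keyword-overlap scorer is a toy stand-in for the real embedding match — a minimal self-contained sketch of the idea, not VT.ai's actual implementation:

```python
# Toy sketch: route a message to a route, then pick that route's model.
# Real semantic-router scores routes by embedding similarity; a keyword
# overlap stands in here so the example runs without any API keys.
from dataclasses import dataclass, field

@dataclass
class Route:
    name: str
    model: str                 # LiteLLM-style model id (hypothetical)
    utterances: list = field(default_factory=list)

ROUTES = [
    Route("vision", "gpt-4o", ["image", "photo", "picture"]),
    Route("chat", "ollama/llama3", ["hello", "chat", "talk"]),
]

def pick_route(message: str, routes=ROUTES) -> Route:
    words = set(message.lower().split())
    # Highest keyword overlap wins; ties fall back to the first route.
    return max(routes, key=lambda r: len(words & set(r.utterances)))

def dispatch(message: str) -> str:
    route = pick_route(message)
    # In the real app this is where litellm.completion(model=route.model, ...)
    # would be called; here we just return the chosen model id.
    return route.model

print(dispatch("describe this photo please"))  # vision route → "gpt-4o"
```

In the real library each `Route` carries example utterances that get embedded once up front, so routing a message is a single nearest-neighbor lookup rather than a keyword match.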
I am really interested in an LLM router that can track usage and tokens to measure cost and enforce rate limits. That would help keep costs down even while using multiple LLMs.<p>I think this project would be a great starting point.
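The idea above can be sketched as a small wrapper the router consults before each call: count tokens in and out, price them per model, and refuse once a spending budget or a per-minute call limit is hit. The per-1K-token prices and limits below are made-up placeholders (real prices vary by provider, and LiteLLM ships its own cost helpers) — this only shows the shape of the idea:

```python
# Sketch of per-model usage tracking with a cost ceiling and a
# simple sliding-window rate limit. Prices are placeholders.
import time
from collections import defaultdict, deque

PRICE_PER_1K = {"gpt-4o": 0.005, "ollama/llama3": 0.0}  # USD, hypothetical

class UsageTracker:
    def __init__(self, budget_usd=1.0, calls_per_minute=10):
        self.budget = budget_usd
        self.calls_per_minute = calls_per_minute
        self.spent = 0.0
        self.tokens = defaultdict(int)   # per-model token totals
        self.call_times = deque()        # timestamps of recent calls

    def allow(self, now=None) -> bool:
        """Check both limits before the router makes a call."""
        now = time.monotonic() if now is None else now
        # Drop calls older than 60 seconds from the window.
        while self.call_times and now - self.call_times[0] > 60:
            self.call_times.popleft()
        return (self.spent < self.budget
                and len(self.call_times) < self.calls_per_minute)

    def record(self, model, prompt_tokens, completion_tokens, now=None):
        """Record one completed call's token usage and cost."""
        total = prompt_tokens + completion_tokens
        self.tokens[model] += total
        self.spent += total / 1000 * PRICE_PER_1K.get(model, 0.0)
        self.call_times.append(time.monotonic() if now is None else now)

tracker = UsageTracker(budget_usd=0.01, calls_per_minute=2)
tracker.record("gpt-4o", 800, 200)   # 1000 tokens → $0.005 at the toy price
print(tracker.allow(), round(tracker.spent, 4))  # True 0.005
```

A real version would pull `prompt_tokens` and `completion_tokens` from the provider's response usage field rather than counting them itself.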