VT.ai – Multi-Modal LLM Chat Application

2 points by vinhnx, about 1 year ago
Hi everyone,

I'm learning LLMs and AI, and I'm building a multi-modal, full-stack LLM chat application. [0]

It uses semantic-router for dynamic conversation routing and LiteLLM for the model providers.

It was lots of fun to learn and build.

Here is the full list of supported large language models; I will add more models in the future. [1]

And, of course, you can run Llama 3 locally via Ollama!

In the future I will be adding function-calling (tool use) support so the models can act more like agents.

I hope this project helps everyone try out multi-modal LLM providers!

[0] GitHub: https://github.com/vinhnx/VT.ai
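For anyone curious how these two libraries can fit together, here is a minimal sketch, not taken from VT.ai itself: it assumes semantic-router's Route/RouteLayer API and LiteLLM's completion() call, and the route names, utterances, and route-to-model mapping are invented for illustration. It also assumes an OpenAI API key for the encoder and a local Ollama server for the Llama 3 model.

```python
# Sketch: classify a message with semantic-router, then dispatch it to a
# provider through LiteLLM. Routes, utterances, and model names are examples.
from semantic_router import Route, RouteLayer
from semantic_router.encoders import OpenAIEncoder  # needs OPENAI_API_KEY
from litellm import completion

# Example routes; the router picks one by semantic similarity to the utterances.
vision_route = Route(
    name="vision",
    utterances=["what is in this image", "describe this picture"],
)
chat_route = Route(
    name="chat",
    utterances=["hello", "tell me a joke", "explain transformers"],
)

router = RouteLayer(encoder=OpenAIEncoder(), routes=[vision_route, chat_route])

# Hypothetical mapping from route name to a LiteLLM model string.
MODELS = {
    "vision": "gpt-4o",        # a vision-capable hosted model (image handling omitted here)
    "chat": "ollama/llama3",   # local Llama 3 served by a running Ollama instance
}

def reply(user_message: str) -> str:
    choice = router(user_message)                     # semantic classification
    model = MODELS.get(choice.name, "ollama/llama3")  # fall back to the local model
    response = completion(
        model=model,
        messages=[{"role": "user", "content": user_message}],
    )
    return response.choices[0].message.content

print(reply("hello, can you explain what a multi-modal LLM is?"))
```

The point of the routing layer is that the model choice happens per message rather than per session, so cheap local models can handle chit-chat while multi-modal requests go to a provider that supports them.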

1 comment

cranberryturkey, about 1 year ago
Vision-aware looks cool -- I wish Ollama had that.