TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


© 2025 TechEcho. All rights reserved.

Show HN: Manage LLM providers while factoring in Cost and Speed

3 points by veryrealsid, 10 months ago
Hey y'all!

We built this framework for ourselves internally and decided to open source it.

It's really similar to OpenRouter but allows you to do a couple more things:

1. Allows you to factor in 'speed'. If you want your API call to go as fast as possible regardless of cost, you can specify that. Otherwise it'll default to the cheapest available provider.

2. Allows you to pass in a 'validator' function and 'fallback' models. This way you can ensure the response you get is valid according to your own internal logic.

3. Uses the OpenAI content/role format to interact with models, so you can quickly swap between them while you test.

We've found a bunch of use cases for this (https://yc-bot.lytix.co/, https://notes.lytix.co) and it's allowed us to move much faster since we don't have to think about credentials between all our providers.

Hope you enjoy it and would love to get any feedback on the project.
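The validator/fallback idea described in point 2 can be sketched roughly as below. This is a hypothetical illustration of the pattern, not the project's actual API: `call_provider` is a stand-in stub, and the model names are made up.

```python
# Hypothetical sketch of the validator/fallback pattern described in the post.
# call_provider and the model names are illustrative stand-ins, not the real API.

def call_provider(model: str, messages: list[dict]) -> str:
    """Stand-in for a real provider call using the OpenAI content/role format."""
    # A real implementation would route to the cheapest (or fastest) provider.
    return f"response from {model}"

def complete_with_fallback(messages, models, validator):
    """Try each model in order until the validator accepts a response."""
    for model in models:
        response = call_provider(model, messages)
        if validator(response):
            return response
    raise RuntimeError("no model produced a valid response")

messages = [{"role": "user", "content": "Reply with a single word."}]
result = complete_with_fallback(
    messages,
    models=["cheap-model", "fallback-model"],  # hypothetical model names
    validator=lambda r: len(r) > 0,            # your own internal logic
)
```

The key design point is that validation is caller-defined: the framework only retries down the fallback list, while what counts as "valid" stays in your own code.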

no comments