TechEcho
A tech news platform built with Next.js, providing global tech news and discussions.


Most ML applications are just request routers

10 points by rossamurphy · 12 months ago

4 comments

rossamurphy · 12 months ago
Moving an internal ML project from "a quick demo on localhost" to "deployed in production" is hard. We think latency is one of the biggest problems. We built OneContext to solve that problem. We launched today. Would love your feedback + feature requests!
(Reply #40516114 not loaded)
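OneContext's internals aren't shown in the thread, but the OP's latency argument — that per-step network hops, not compute, dominate pipeline time — can be illustrated with a toy simulation. The hop and compute timings below are made-up assumptions, not measurements of any real system:

```python
# Toy model: a pipeline of n steps, run either as separate remote
# services (one network hop per step) or co-located in one process
# (a single hop in, then all steps run locally).
import time

NETWORK_HOP_MS = 40   # assumed round-trip latency per remote call
STEP_COMPUTE_MS = 10  # assumed compute time per pipeline step

def run_step():
    time.sleep(STEP_COMPUTE_MS / 1000)

def pipeline_remote(n_steps: int) -> float:
    """Each step is a separate service call: pay a hop per step."""
    start = time.perf_counter()
    for _ in range(n_steps):
        time.sleep(NETWORK_HOP_MS / 1000)  # network hop to the step
        run_step()
    return (time.perf_counter() - start) * 1000

def pipeline_colocated(n_steps: int) -> float:
    """All steps run in one process: pay one hop, then pure compute."""
    start = time.perf_counter()
    time.sleep(NETWORK_HOP_MS / 1000)  # single hop into the service
    for _ in range(n_steps):
        run_step()
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    print(f"remote:     {pipeline_remote(4):.0f} ms")
    print(f"co-located: {pipeline_colocated(4):.0f} ms")
```

Under these assumptions a 4-step pipeline drops from roughly 200 ms to roughly 120 ms, which is the shape of the 57% reduction claimed — the exact figure depends entirely on the hop/compute ratio.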
cwmdo · 12 months ago
"simply by cutting out the network latency between the steps, OneContext reduces the pipeline execution time by 57%"

How does this fit in with a barebones langchain/bedrock setup?
georgespencer · 12 months ago
Amazing! Congrats on launching. Company motto: "dumb enough to actually have attempted this already".
the_async · 12 months ago
Seems like a great product!