Show HN: LLMem, a read through cache for OpenAI chat completions

1 point by c0g about 1 year ago
When building a system around OAI, I found myself sending the same request multiple times as part of developing/testing some other part of the system. On top of wasting money this way, I was also throwing away data that could later be useful for specializing a smaller LLM to my use case.

I'm hosting an open server at the moment since I hit it from various networks for my projects, but you can easily enough run it as a local service.
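
To make the read-through idea concrete, here is a minimal sketch (not LLMem's actual code): hash the full request body, replay a stored response on a hit, and only forward misses to OpenAI. The sqlite file name, the helper name, the model string, and the use of the openai>=1.0 Python SDK are all assumptions for illustration.

    # Minimal sketch of a read-through cache keyed on the full request body.
    # Not LLMem's implementation; just illustrates the pattern the post describes.
    import hashlib
    import json
    import sqlite3

    from openai import OpenAI  # assumes the official openai>=1.0 Python SDK

    client = OpenAI()
    db = sqlite3.connect("llm_cache.db")
    db.execute("CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, response TEXT)")


    def cached_chat_completion(**request):
        # Key the cache on a canonical serialization of the request,
        # so byte-identical requests always map to the same row.
        key = hashlib.sha256(json.dumps(request, sort_keys=True).encode()).hexdigest()

        row = db.execute("SELECT response FROM cache WHERE key = ?", (key,)).fetchone()
        if row is not None:
            return json.loads(row[0])  # cache hit: no API call, no cost

        # Cache miss: forward to OpenAI, then store the response for replay
        # (and as raw material for a later fine-tuning dataset).
        response = client.chat.completions.create(**request)
        db.execute(
            "INSERT INTO cache (key, response) VALUES (?, ?)",
            (key, response.model_dump_json()),
        )
        db.commit()
        return json.loads(response.model_dump_json())


    if __name__ == "__main__":
        out = cached_chat_completion(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": "Say hello."}],
        )
        print(out["choices"][0]["message"]["content"])

Keying on the exact request body means any change to the prompt, temperature, or model is a miss, which is usually what you want while iterating on the rest of the system.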

no comments