
Show HN: LLMem, a read-through cache for OpenAI chat completions

1 point by c0g about 1 year ago
When building a system around OAI, I found myself sending the same request multiple times while developing/testing some other part of the system. On top of wasting money this way, I was also throwing away request/response data that could later be useful for specializing a smaller LLM for my use case.

I'm hosting an open server at the moment, since I hit it from various networks for my projects, but you can easily enough run it as a local service.
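For readers unfamiliar with the pattern: a read-through cache keys on the request payload, returns the stored response for an identical repeat request instead of re-billing the API, and persists every miss as a request/response pair that can double as fine-tuning data later. The sketch below is a minimal illustration of that idea, not LLMem's actual code; the SQLite file name, the model name, and the helper function are assumptions, and it uses the official `openai` v1 SDK.

```python
import hashlib
import json
import sqlite3

from openai import OpenAI  # assumes the official openai>=1.0 SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment
db = sqlite3.connect("llm_cache.db")  # hypothetical local store
db.execute("CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, response TEXT)")


def cached_chat_completion(**params):
    """Read-through cache: serve a stored response for identical params,
    otherwise call the API and persist the request/response pair."""
    # Canonical (sorted-keys) JSON of the request is the cache key,
    # so identical requests hash identically.
    key = hashlib.sha256(json.dumps(params, sort_keys=True).encode()).hexdigest()
    row = db.execute("SELECT response FROM cache WHERE key = ?", (key,)).fetchone()
    if row:
        return json.loads(row[0])  # cache hit: no API call, no cost
    response = client.chat.completions.create(**params)
    # Persist the pair; this is the data that can later serve as training examples.
    db.execute(
        "INSERT INTO cache (key, response) VALUES (?, ?)",
        (key, response.model_dump_json()),
    )
    db.commit()
    # Return a plain dict on both paths so hits and misses look the same to callers.
    return json.loads(response.model_dump_json())


if __name__ == "__main__":
    out = cached_chat_completion(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": "Say hello."}],
    )
    print(out["choices"][0]["message"]["content"])
```

Keying on a canonical serialization of the full parameter set means any change to the model, messages, or sampling parameters is a miss, which gives the deterministic replay you want while developing and testing against the API.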

No comments yet
