
TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


Show HN: Remembrall – Long-term memory proxy for LLMs

4 points by raunakchowdhuri, over 1 year ago
Hi HN,

I built Remembrall, a proxy on top of your OpenAI queries that gives your chat system long-term memory.

How it works: just add an extra user ID to your OpenAI call. When a user stops chatting actively, it triggers an "autosave" and uses GPT to save/update important details about the conversation in a vector DB. When the user continues the conversation, we query the DB for relevant info and prepend it to the system prompt.

All of this happens in under 100 ms of latency on the edge, with only two lines of code needed for integration. You also get observability (i.e. a log of all LLM requests) for free in a simple dashboard.
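The autosave/retrieve flow described above can be sketched in a few lines. This is an illustrative toy, not Remembrall's real API: the function names, the in-memory store, and the bag-of-words similarity (standing in for real embeddings and a vector DB) are all assumptions made for the example.

```python
# Toy sketch of the flow the post describes: on "autosave", store key
# conversation facts per user; on the next request, rank stored facts by
# similarity to the query and prepend the best ones to the system prompt.
from collections import Counter
import math

memory_db: dict[str, list[str]] = {}  # user_id -> saved facts

def _similarity(a: str, b: str) -> float:
    """Cosine similarity over bag-of-words vectors (stand-in for embeddings)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

def autosave(user_id: str, fact: str) -> None:
    """Triggered when a user goes inactive: persist important details."""
    memory_db.setdefault(user_id, []).append(fact)

def build_prompt(user_id: str, system_prompt: str, query: str, top_k: int = 2) -> str:
    """Prepend the most query-relevant saved facts to the system prompt."""
    facts = memory_db.get(user_id, [])
    ranked = sorted(facts, key=lambda f: _similarity(f, query), reverse=True)[:top_k]
    memory = "\n".join(ranked)
    return f"{memory}\n\n{system_prompt}" if memory else system_prompt

autosave("user-42", "User is named Ada and is building a Rust CLI.")
autosave("user-42", "User prefers concise answers.")
print(build_prompt("user-42", "You are a helpful assistant.",
                   "How do I parse args in my Rust CLI?"))
```

A real deployment would replace `_similarity` with embedding lookups against a vector database and trigger `autosave` from an inactivity timer, but the prompt-assembly step is the same shape.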

2 comments

pplonski86, over 1 year ago
Congratulations on the launch! I've seen your tweet about unlimited context for LLMs :)

Do you have an option to manually provide text that will be used as LLM memory?
(Reply #37525483 not loaded)
QuantumCodester, over 1 year ago
Really interesting — can you explain a bit more about how the long-term memory works?
(Reply #37516762 not loaded)