TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.

The New Moat: Memory

17 points by jeffmorrisjr, about 2 months ago

6 comments

bentt, about 2 months ago

This is a great reason to learn from our mistakes of the 2010s and not give ourselves away to OpenAI and other cloud AI providers.

I would like to see a memory provider/system that allows us to own this data and put OpenAI et al on the customer end. They should be paying US for that.
xnx, about 2 months ago

Can't speak for anyone else, but my own AI chat history has low/no relevance to the quality of response to the next question I ask. This is not a moat any more than search history is.

My email and work documents are obviously important if I'm querying for information about them, but that is self-evident and also not a moat (I could grant another tool access to these things).

Computational efficiency is a moat. If Google can provide an AI response for $0.05 of infrastructure and electricity, but it takes OpenAI $0.57, that's bad news for OpenAI.
natrius, about 2 months ago

I haven't been able to figure out how there's a moat for AI products that, if they work as advertised, can build a bridge over any moat with near zero user effort.
cs702, about 2 months ago

Sorry, but the OP is all fluffy hype, zero substance. There are no explanations, no links to research, and no links to code.

When the author mentions "memory," what does *that* mean? Is this about RAG-style memory? I'm not sure that's a "moat."
cadamsdotcom, about 2 months ago

Solid prediction.

You can see this in the reddit memes that say things like "open chatgpt and ask it for your 5 biggest blind spots right now. Mind. Blown."

Those who know it's a tool call - plus some clever algorithms governing what the tool returns - could not be rolling their eyes harder. People who know what's up will keep pasting things into new chats, and keep using delete and "forget memories" buttons. Maybe even multiple accounts.

But increasingly that'll be "the old slow way". You can see it in the comments here - people are grateful not to have to explain the stack again. They don't want a blank unprimed conversation - and rather than copy-pasting a priming prompt (or having the model write a Cursor rule) they'd rather abdicate control over the AI's behavior to an opaque priming process and a tool with unknown recall.

But everyone else is doing it, so a great many eye-rollers will give up and be swept up too.

AI memory has already captured the type of person who obeys instructions in reddit memes. Next is normies (your parents) who will find it pleasant that the AI seems to know them well. They won't understand how creepy it is, nor how much power is in the hands of someone who can train an AI on their chats. And experts will do their best to make the AI forget with delete buttons and the like; but even they will need to let the tools remember their patterns just to keep up with society.

Ergo, lock-in & network effects.

So yes, it's a pretty reasonable prediction.
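The "tool call plus some clever algorithms governing what the tool returns" mechanism can be sketched roughly as follows. This is a toy illustration, not any provider's actual implementation: the class names and stored snippets are made up, and a bag-of-words cosine similarity stands in for real embedding-based retrieval.

```python
# Hypothetical sketch of "memory as a tool call": recall is just retrieval
# over snippets the provider kept from past chats, ranked by similarity to
# the current query. All names and data here are illustrative.
import math
from collections import Counter

def _vec(text):
    # Toy stand-in for an embedding: a bag-of-words term-count vector.
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class MemoryStore:
    def __init__(self):
        self.snippets = []  # fragments retained from earlier conversations

    def remember(self, text):
        self.snippets.append(text)

    def recall(self, query, k=2):
        """The 'tool call': return the k stored snippets most similar to the query."""
        q = _vec(query)
        ranked = sorted(self.snippets,
                        key=lambda s: _cosine(q, _vec(s)),
                        reverse=True)
        return ranked[:k]

store = MemoryStore()
store.remember("User's stack: Next.js frontend, Postgres, deployed on Vercel")
store.remember("User prefers TypeScript with strict mode enabled")
store.remember("User asked about sourdough starters last month")

# The retrieved memories get silently prepended to the prompt - which is why
# a fresh chat "already knows" your stack without you explaining it again.
context = store.recall("help me debug my Next.js deployment")
```

The opacity the comment describes lives in the two pieces users can't see: what gets written into `remember`, and how `recall` ranks what comes back.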
etaioinshrdlu, about 2 months ago

Does anyone really like and enjoy LLM products with memory at this point? To me this seems to be a case where the technical ability to do memory vastly exceeds its actual usefulness (for most people).