科技回声 — a tech news platform built with Next.js, serving global technology news and discussion.


The New Moat: Memory

17 points · by jeffmorrisjr · about 1 month ago

6 comments

bentt · about 1 month ago
This is a great reason to learn from our mistakes of the 2010s and not give ourselves away to OpenAI and other cloud AI providers.

I would like to see a memory provider/system that allows us to own this data and put OpenAI et al on the customer end. They should be paying US for that.
xnx · about 1 month ago
Can't speak for anyone else, but my own AI chat history has low/no relevance to the quality of response to the next question I ask. This is not a moat any more than search history is.

My email and work documents are obviously important if I'm querying for information about them, but that is self-evident and also not a moat (I could grant another tool access to these things).

Computational efficiency is a moat. If Google can provide an AI response for $0.05 of infrastructure and electricity, but it takes OpenAI $0.57, that's bad news for OpenAI.
natrius · about 1 month ago
I haven't been able to figure out how there's a moat for AI products that, if they work as advertised, can build a bridge over any moat with near zero user effort.
cs702 · about 1 month ago
Sorry, but the OP is all fluffy hype, zero substance. There are no explanations, no links to research, and no links to code.

When the author mentions "memory," what does *that* mean? Is this about RAG-style memory? I'm not sure that's a "moat."
cadamsdotcom · 30 days ago
Solid prediction.

You can see this in the reddit memes that say things like "open chatgpt and ask it for your 5 biggest blind spots right now. Mind. Blown."

Those who know it's a tool call - plus some clever algorithms governing what the tool returns - could not be rolling their eyes harder. People who know what's up will keep pasting things into new chats, and keep using delete and "forget memories" buttons. Maybe even multiple accounts.

But increasingly that'll be "the old slow way". You can see it in the comments here - people are grateful not to have to explain the stack again. They don't want a blank unprimed conversation - and rather than copy-pasting a priming prompt (or having the model write a Cursor rule) they'd rather abdicate control over the AI's behavior to an opaque priming process and a tool with unknown recall.

But everyone else is doing it, so a great many eye-rollers will give up and be swept up too.

AI memory has already captured the type of person who obeys instructions in reddit memes. Next is normies (your parents) who will find it pleasant the AI seems to know them well. They won't understand how creepy it is, nor how much power is in the hands of someone who can train an AI on their chats. And experts will do their best to make the AI forget with delete buttons and the like; but even they will need to let the tools remember their patterns just to keep up with society.

Ergo, lock-in & network effects.

So yes, it's a pretty reasonable prediction.
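The "it's a tool call plus some clever algorithms governing what the tool returns" point can be made concrete with a toy sketch. This is a hypothetical illustration, not any vendor's actual API: apparent "memory" is just a retrieval step whose ranked results are silently prepended to the prompt, and the user never sees the ranking.

```python
# Hypothetical sketch: LLM "memory" as an opaque retrieval tool.
# All names (MemoryStore, build_prompt) are invented for illustration.

class MemoryStore:
    def __init__(self):
        self.facts: list[str] = []

    def remember(self, fact: str) -> None:
        self.facts.append(fact)

    def forget_all(self) -> None:
        # The "forget memories" button the commenter mentions.
        self.facts.clear()

    def recall(self, query: str, k: int = 3) -> list[str]:
        # Toy relevance: count overlapping words. Real systems use
        # embeddings, but the opacity is the same either way: the
        # user never sees how results were chosen.
        q = set(query.lower().split())
        scored = sorted(
            self.facts,
            key=lambda f: len(q & set(f.lower().split())),
            reverse=True,
        )
        return scored[:k]


def build_prompt(store: MemoryStore, user_msg: str) -> str:
    # The "priming" step: recalled facts are injected before the
    # user's message without being shown to the user.
    recalled = store.recall(user_msg)
    context = "\n".join(f"- {m}" for m in recalled)
    return f"Known about user:\n{context}\n\nUser: {user_msg}"


if __name__ == "__main__":
    store = MemoryStore()
    store.remember("user's stack is Next.js and Postgres")
    store.remember("user prefers TypeScript")
    print(build_prompt(store, "help me debug my Next.js build"))
```

This is why "not having to explain the stack again" feels magical yet is trivially mechanical: the stack fact simply ranks highest for the query and gets spliced into the context window.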
etaioinshrdlu · about 1 month ago
Does anyone really like and enjoy LLM products with memory at this point? To me this seems to be a case where the technical ability to do memory vastly exceeds its actual usefulness (for most people).