This is a great reason to learn from our mistakes of the 2010s and not give ourselves away to OpenAI and other cloud AI providers.<p>I would like to see a memory provider/system that allows us to own this data and put OpenAI et al on the customer end. They should be paying US for that.
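To make that concrete: a minimal sketch of what a user-owned memory provider could look like, with the memories on your own box and cloud AI clients having to authenticate (and pay) to read them. Everything here - the endpoint names, the API-key scheme - is invented for illustration, not an existing product:

    # Hypothetical user-owned memory provider: memories live on your machine,
    # and any AI client (OpenAI or otherwise) must ask your server for them.
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    MEMORIES: list[str] = []  # in reality: your own encrypted database

    @app.post("/memories")
    def add_memory():
        # Only you (or your local tools) write memories.
        MEMORIES.append(request.get_json()["text"])
        return jsonify(ok=True)

    @app.get("/memories")
    def list_memories():
        # This is where billing/consent would be enforced before a cloud
        # provider gets to read anything. 402 = Payment Required.
        if request.headers.get("X-API-Key") != "paying-customer-key":
            return jsonify(error="payment required"), 402
        return jsonify(memories=MEMORIES)

    if __name__ == "__main__":
        app.run(port=8080)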
Can't speak for anyone else, but my own AI chat history has little to no relevance to the quality of the response to the next question I ask. This is not a moat any more than search history is.<p>My email and work documents are obviously important if I'm querying for information about them, but that is self-evident and also not a moat (I could grant another tool access to these things).<p>Computational efficiency is a moat. If Google can provide an AI response for $0.05 of infrastructure and electricity, but it costs OpenAI $0.57, that's bad news for OpenAI.
I haven't been able to figure out how there's a moat for AI products that, if they work as advertised, can build a bridge over any moat with near zero user effort.
Sorry, but the OP is all fluffy hype, zero substance. There are no explanations, no links to research, and no links to code.<p>When the author mentions "memory," what does <i>that</i> mean? Is this about RAG-style memory? I'm not sure that's a "moat."
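For reference, "RAG-style memory" usually means something like this: embed stored snippets, retrieve the ones nearest to the new question, and prepend them to the prompt. A minimal sketch, assuming the sentence-transformers package; the stored facts are made up:

    # RAG-style "memory": embed past snippets, recall the nearest ones,
    # and stuff them into the next prompt. Assumes sentence-transformers.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")
    memories = ["User prefers Rust over Go", "User is building a CLI tool"]
    mem_vecs = model.encode(memories)  # one embedding per stored snippet

    def recall(query: str, k: int = 2) -> list[str]:
        q = model.encode([query])[0]
        # cosine similarity between the query and every stored memory
        sims = mem_vecs @ q / (np.linalg.norm(mem_vecs, axis=1) * np.linalg.norm(q))
        return [memories[i] for i in np.argsort(-sims)[:k]]

    context = "\n".join(recall("what language should I use?"))
    prompt = f"Context:\n{context}\n\nQuestion: what language should I use?"

Whether that constitutes a moat is exactly the question the OP never answers.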
Solid prediction.<p>You can see this in the reddit memes that say things like “open chatgpt and ask it for your 5 biggest blind spots right now. Mind. Blown.”<p>Those who know it’s a tool call - plus some clever algorithms governing what the tool returns (sketch below) - could not be rolling their eyes harder. People who know what’s up will keep pasting things into new chats, and keep using delete and “forget memories” buttons. Maybe even multiple accounts.<p>But increasingly that’ll be “the old slow way”. You can see it in the comments here - people are grateful not to have to explain the stack again. They don’t want a blank unprimed conversation - and rather than copy-pasting a priming prompt (or having the model write a Cursor rule) they’d rather abdicate control over the AI’s behavior to an opaque priming process and a tool with unknown recall.<p>But everyone else is doing it, so a great many eye-rollers will give up and be swept up too.<p>AI memory has already captured the type of person who obeys instructions in reddit memes. Next are normies (your parents), who will find it pleasant that the AI seems to know them well. They won’t understand how creepy it is, nor how much power is in the hands of someone who can train an AI on their chats. And experts will do their best to make the AI forget with delete buttons and the like; but even they will need to let the tools remember their patterns just to keep up with society.<p>Ergo, lock-in & network effects.<p>So yes, it’s a pretty reasonable prediction.
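For anyone who hasn't looked under the hood, the sketch promised above: the "memory" feature is roughly a tool the model can call, shown here in the shape of the OpenAI Chat Completions tools schema. The save_memory tool and the dict store are stand-ins of my own; the real storage and recall ranking is the opaque part being complained about:

    import json

    user_store: dict[str, list[str]] = {}  # stand-in for the provider's store

    # Tool definition handed to the model alongside your messages, in the
    # OpenAI-style tools format. The model decides when to call it.
    tools = [{
        "type": "function",
        "function": {
            "name": "save_memory",
            "description": "Persist a fact about the user for future chats.",
            "parameters": {
                "type": "object",
                "properties": {"fact": {"type": "string"}},
                "required": ["fact"],
            },
        },
    }]

    def handle_tool_call(user_id: str, tool_call) -> str:
        # The provider, not the user, governs what gets stored here and
        # what the "clever algorithms" later inject back into your chats.
        args = json.loads(tool_call.function.arguments)
        user_store.setdefault(user_id, []).append(args["fact"])
        return "saved"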
Does anyone really enjoy LLM products with memory at this point? To me this seems to be a case where the technical ability to implement memory vastly exceeds its actual usefulness (for most people).