So just yesterday I asked ChatGPT 4o something in a new chat, and after answering the question, it referred to a plan of mine from a chat multiple weeks old that was definitely not in the current context.

This struck me as incredibly scary, since so far I've been treating chats like disposable browser tabs, and my intuition was that GPT was limited by its context window anyway. It looks to me like there's been a silent change behind the scenes that has started aggregating a profile of me to inject into context.

From interviews, I know they're aiming to build a helpful assistant that "knows everything about you", but this still struck me as very surprising, and I'm trying to understand why.

I think it's the combination of:

- silently adding this feature without notice or opt-in

- retroactively applying it to existing chats

- assuming all previous chats and questions are from the same person

- apparently assuming nothing sensitive has been discussed so far

Not sure if others think this is okay, and especially since I've opted out of training on my data, I'm even wondering whether this is legal under GDPR Art. 19 (I'm an EU "data subject").

Thoughts and experiences appreciated.
I kind of like that I can make a new chat and it remembers an old one. Because it spits out so much repeated stuff, especially with code (re-iterating a whole source section each time, or repeating tons of docs explanations), the chat becomes really slow! A new chat is fast again. If it retains context, that's great :D
The memory feature was announced at https://openai.com/index/memory-and-new-controls-for-chatgpt/ and includes controls for it. It was also described in a popup when entering the chat interface.