Amazing post, thank you.<p>I really can't see how security can be solved <i>within</i> a probabilistic model, which is what we'd need to happen here, and that in turn effectively puts a huge limit on the scale at which we can use LLMs.<p>Lots of food for thought.
So… expect your personal GPT to be persistently compromised, remote-controlled, and used to exfiltrate all your data. LLM security is in a bad state right now.