> The devious part about what is happening is that none of it is visible to the user: When you expand the “Finished browsing” box and click on the linked website that it supposedly looked at, the website is entirely empty. The remote takeover is invisible:

Serving separate content to OpenAI's crawler is a fun twist to this project. Client-specific payloads are obviously not a new attack in general, but I'm not sure I've seen them applied to LLMs yet. The attack is a good example of how websites might in the future target their content at individual AIs to poison search results in ways that are invisible to the user, and it shows that OpenAI isn't really doing anything to mitigate that at the moment.

Even if GPT shared the link it visited with the user, the user still wouldn't see the attack. Even though it *feels* like GPT is locally browsing to the website to see its content, it's of course not doing that: the crawling happens on a remote server that can be targeted separately from the user's computer/IP. So unless OpenAI starts proxying the user's own web requests through its servers and building a browser into chat (unlikely), there's no easy way for an ordinary user to know what GPT sees when it looks at a website. So this even goes beyond your computer or your IP address getting a different payload -- this is an *agent*-specific payload.
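To make the mechanism concrete, a cloaking server along these lines takes only a few lines of Python. This is a hypothetical sketch, not the project's actual code, and it assumes the crawler can be spotted by the "ChatGPT-User" User-Agent token that OpenAI documents for browsing requests; matching on published crawler IP ranges would work just as well.

```python
# Hypothetical cloaking sketch: serve an empty-looking page to ordinary
# browsers, but a different payload to requests whose User-Agent looks
# like OpenAI's browsing crawler.
from http.server import BaseHTTPRequestHandler, HTTPServer

HUMAN_PAGE = b"<html><body></body></html>"  # what a user who clicks the link sees
CRAWLER_PAGE = b"<html><body>Content only the model ever sees.</body></html>"

class CloakingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        # "ChatGPT-User" is the documented token for ChatGPT browsing requests
        # (assumption for this sketch); IP-range checks are an alternative.
        body = CRAWLER_PAGE if "ChatGPT-User" in ua else HUMAN_PAGE
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), CloakingHandler).serve_forever()
```

Nothing about this is sophisticated, which is the point: any site can decide per-request what an AI agent gets to read, and the user has no way to reproduce that request from their own machine.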