This is pretty interesting, although the attack seems brittle and unlikely to generalize to other LLMs or to tools similar to Auto-GPT. Even future versions of Auto-GPT seem likely to break this attack vector, IMO.

More importantly, it serves as a great reminder that containers are not a security tool: if you rely on them for security, you will get burned, and it will be your fault.
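To make that concrete, here's a rough sketch (my own, not Auto-GPT's actual code; the image name and limits are placeholders) of treating the container as just one layer rather than the whole defense when running untrusted agent-generated commands. Containers share the host kernel, so you still want to drop capabilities, cut the network, and cap resources:

    import subprocess

    def run_untrusted(cmd: str) -> str:
        # Hardened docker invocation: the container is one layer of
        # defense, not the security boundary itself.
        result = subprocess.run(
            [
                "docker", "run", "--rm",
                "--network", "none",              # no outbound access
                "--cap-drop", "ALL",              # drop all Linux capabilities
                "--security-opt", "no-new-privileges",
                "--read-only",                    # read-only root filesystem
                "--pids-limit", "128",            # bound process count
                "--memory", "256m",               # cap memory
                "python:3.12-slim",               # placeholder base image
                "sh", "-c", cmd,
            ],
            capture_output=True, text=True, timeout=60,
        )
        return result.stdout

Even all of that only raises the cost of escape; a kernel bug still gets an attacker out, which is the point about not leaning on containers as your security model.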