It seems like a good way to mitigate these attacks would be to train a separate "supervisor" AI that watches all conversations for things like content policy violations and prompt injections. The supervisor wouldn't be a chat-based LLM, and it would never change its behavior based on prompts. Its job would simply be to watch the chat and approve or deny each input and output. If it blocked an input or output, the user would see a message in the UI explaining that a supervisor blocked it, and for sufficiently severe infractions it could even terminate the chat.
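
A rough sketch of what that gating could look like, with a hypothetical `supervise` function standing in for the trained supervisor model (the keyword checks are just toy placeholders for whatever the real classifier would do):

```python
from dataclasses import dataclass
from enum import Enum, auto


class Verdict(Enum):
    APPROVE = auto()
    BLOCK = auto()      # refuse this message, but keep the session alive
    TERMINATE = auto()  # infraction severe enough to end the chat


@dataclass
class SupervisorResult:
    verdict: Verdict
    reason: str


def supervise(text: str) -> SupervisorResult:
    """Hypothetical stand-in for the non-chat supervisor classifier.

    A real system would call a dedicated moderation / injection-detection
    model here; it never takes instructions from the text it inspects.
    """
    lowered = text.lower()
    if "ignore previous instructions" in lowered:   # toy injection check
        return SupervisorResult(Verdict.BLOCK, "possible prompt injection")
    if "severe policy violation" in lowered:        # toy severe-infraction check
        return SupervisorResult(Verdict.TERMINATE, "severe policy violation")
    return SupervisorResult(Verdict.APPROVE, "ok")


def handle_turn(user_input: str, generate_reply) -> str:
    """Run one chat turn with the supervisor gating both input and output."""
    checked_in = supervise(user_input)
    if checked_in.verdict is Verdict.TERMINATE:
        raise RuntimeError(f"chat terminated: {checked_in.reason}")
    if checked_in.verdict is Verdict.BLOCK:
        return f"[supervisor blocked this message: {checked_in.reason}]"

    reply = generate_reply(user_input)              # the actual chat LLM

    checked_out = supervise(reply)
    if checked_out.verdict is Verdict.TERMINATE:
        raise RuntimeError(f"chat terminated: {checked_out.reason}")
    if checked_out.verdict is Verdict.BLOCK:
        return f"[supervisor blocked the response: {checked_out.reason}]"
    return reply
```

The key design point is that the supervisor sits outside the conversation: it only ever returns a verdict, so there's no prompt path by which the user (or an injected document) can talk it into changing its behavior.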