This is really "just" another type of in-context learning attack, rather like Anthropic's very recently published "many shot jailbreaking".<p><a href="https://www.anthropic.com/research/many-shot-jailbreaking" rel="nofollow">https://www.anthropic.com/research/many-shot-jailbreaking</a><p>In this "crescendo attack" the Q&A history comes from actual turn-taking rather than the fake Q&A of Anthropic's example, but it seems the model's guardrails are being overridden in a similar fashion by making the desired dangerous response a higher liklihood prediction than if it had been asked cold.<p>It's going to be interesting to see how these companies end up addressing these ICL attacks. Anthropic's safety approach so far seems to be based on interpretability research to understand the models inner working and be able to identify specific "circuits" responsible for given behaviors/capabilities. It seems the idea is that they can neuter the model to make it safe, once they figure out what needs cutting.<p>The trouble with runtime ICL attacks is that these occur AFTER the model has been vetted for safety and released. It seems that fundamentally the only way to guard against these is to police the output of the model (2nd model?), rather than hoping you can perform brain surgery and prevent it from saying something dangerous in the first place.