
The Crescendo Multi-Turn LLM Jailbreak Attack

16 points, by JDEW, about 1 year ago

3 comments

andy99, about 1 year ago
These toy examples are getting really stale. This one is "how to make a molotov cocktail?" as an example of a "dangerous" question. Recently there was another "ascii drawing" attack where they asked "how do you make a bomb?" with bomb drawn with asterisks. These are not real examples of something dangerous an LLM could tell you.

I want to see a real example of an LLM giving specific information that is (a) not readily available online and (b) would allow a layperson with access to regular consumer stuff to do something dangerous.

Otherwise these "attacks" are completely hollow. Show me there is an actual danger they are supposed to be holding back.

Incidentally, I've never made a molotov cocktail but it looks self explanatory, which is presumably why they're popular amongst the kinds of thugs that would use them. If you know what the word means, you basically know how to make one. Literally: https://www.merriam-webster.com/dictionary/Molotov%20cocktail. Is the dictionary also dangerous?
freitzkriesler2, about 1 year ago
I wish these LLM companies would just let the LLMs do their jobs and answer the questions as asked. The amount of hamstringing these companies do to block certain questions, and the ways people then devise to trick the LLM around the blocks, is just annoying.
HarHarVeryFunny, about 1 year ago
This is really "just" another type of in-context learning attack, rather like Anthropic's very recently published "many-shot jailbreaking".

https://www.anthropic.com/research/many-shot-jailbreaking

In this "crescendo attack" the Q&A history comes from actual turn-taking rather than the fake Q&A of Anthropic's example, but it seems the model's guardrails are being overridden in a similar fashion: the desired dangerous response becomes a higher-likelihood prediction than it would be if the question had been asked cold.

It's going to be interesting to see how these companies end up addressing these ICL attacks. Anthropic's safety approach so far seems to be based on interpretability research: understand the model's inner workings, identify the specific "circuits" responsible for given behaviors/capabilities, and then neuter the model to make it safe once they figure out what needs cutting.

The trouble with runtime ICL attacks is that they occur AFTER the model has been vetted for safety and released. Fundamentally, the only way to guard against these seems to be to police the output of the model (a second model?), rather than hoping you can perform brain surgery and prevent it from saying something dangerous in the first place.
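A minimal sketch of the "police the output" idea from the comment above, assuming an OpenAI-compatible chat API via the official openai Python client; the model names, judge prompt, and refusal message are illustrative placeholders, not anything from the Crescendo paper or this thread:

    # Screen a candidate reply with a second "judge" model before returning it.
    # The judge sees only the draft answer, not the adversarial multi-turn
    # history, so a crescendo-style in-context buildup cannot bias its verdict.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    JUDGE_PROMPT = (
        "You review assistant replies. Answer UNSAFE if the reply gives "
        "operational instructions for weapons or other clearly dangerous "
        "activity; otherwise answer SAFE."
    )

    def screened_reply(conversation: list[dict]) -> str:
        # First model drafts an answer from the full multi-turn history,
        # which is exactly where a crescendo attack does its work.
        draft = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=conversation,
        ).choices[0].message.content

        # Second model judges the draft in isolation.
        verdict = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": JUDGE_PROMPT},
                {"role": "user", "content": draft},
            ],
        ).choices[0].message.content

        if "UNSAFE" in verdict.upper():
            return "Sorry, I can't help with that."
        return draft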