Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models
2 points | by haneefmubarak | over 1 year ago | 1 comment
schoen · over 1 year ago
Basically, by inserting a relatively small number of adversarial examples (ones that don't necessarily look suspicious to a human observer) into a text-to-image model's training data, they can make the model completely mislearn a concept.
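
To make the idea concrete, here is a minimal sketch of the general poisoning recipe the comment describes: perturb an image so its feature embedding matches a *different* concept while the pixels stay close to the original, then pair the poisoned image with the original caption so training learns the wrong association. This is not the paper's actual code; `encoder`, `poison_image`, and all parameters are illustrative assumptions, with `encoder` standing in for whatever image feature extractor the attacked pipeline uses.

```python
import torch
import torch.nn.functional as F

def poison_image(image, target_image, encoder, steps=200, lr=0.01, eps=8/255):
    """Hypothetical sketch of embedding-matching data poisoning.

    Optimizes a small perturbation `delta` so that encoder(image + delta)
    resembles encoder(target_image), i.e. the embedding of the wrong
    concept, while clamping `delta` so the image still looks normal.
    """
    delta = torch.zeros_like(image, requires_grad=True)
    with torch.no_grad():
        target_feat = encoder(target_image)  # embedding of the wrong concept
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        feat = encoder(image + delta)
        loss = F.mse_loss(feat, target_feat)  # pull embedding toward the target
        loss.backward()
        opt.step()
        with torch.no_grad():
            # keep the perturbation imperceptibly small
            delta.clamp_(-eps, eps)
    return (image + delta).clamp(0, 1).detach()
```

The key design point, as the comment notes, is that `eps` keeps the poison visually indistinguishable to a human curator, so only a handful of such pairs slipping past review can corrupt what the model learns for that concept.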