Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models
2 points by haneefmubarak over 1 year ago | 1 comment
schoen · over 1 year ago
Basically, by inserting a relatively small number of adversarial examples (ones that don't necessarily look suspicious to a human observer) into the training data of a text-to-image model, they can make it completely mislearn a concept.
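
For intuition, here's a minimal sketch of the naive mislabeling version of such poisoning (all names and the data layout here are hypothetical, not the paper's code; the paper's attack goes further and perturbs the images imperceptibly so poisoned samples still look correct to a human):

    import random

    def make_poison_pairs(anchor_captions, target_images, n_poison):
        """Build n_poison mismatched (caption, image) training pairs.

        anchor_captions: captions for the concept to corrupt, e.g. "a photo of a dog"
        target_images:   images of a different concept, e.g. cats
        """
        pairs = []
        for _ in range(n_poison):
            caption = random.choice(anchor_captions)  # text says "dog"
            image = random.choice(target_images)      # picture shows a cat
            pairs.append((caption, image))
        return pairs

    def poison_dataset(clean_pairs, poison_pairs):
        """Mix a small number of poisoned pairs into the clean training set."""
        data = list(clean_pairs) + list(poison_pairs)
        random.shuffle(data)
        return data

The point of the prompt-specific framing is that the poison only has to win the association fight for one concept's prompts, not degrade the whole model, which is why a relatively small number of poisoned pairs among millions of clean ones can be enough.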