TechEcho

Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models

2 points by haneefmubarak, over 1 year ago

1 comment

schoen, over 1 year ago
Basically, by inserting a relatively small number of adversarial examples (which don't necessarily look suspicious to a human observer) into the training data of a text-to-image model, they can make it completely mislearn a concept.
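
A rough sketch of how a poison sample like this could be crafted, assuming a PGD-style feature-matching attack: the image is kept within a small perturbation budget of an "anchor" photo (so it still looks benign to a human), while its features are pulled toward a "target" concept. Everything here is illustrative; the feature extractor is a tiny placeholder for the victim model's image encoder, and craft_poison, the loss, and the budget are assumptions, not the paper's actual method.

import torch
import torch.nn as nn

# Placeholder stand-in for the victim model's image encoder (assumption).
feature_extractor = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

def craft_poison(anchor_img, target_img, eps=8 / 255, steps=200, lr=1e-2):
    """Optimize a small perturbation so the image still resembles the
    anchor to a human (L-inf budget eps) but its features resemble the
    target concept."""
    delta = torch.zeros_like(anchor_img, requires_grad=True)
    target_feat = feature_extractor(target_img).detach()
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        poisoned = (anchor_img + delta).clamp(0, 1)
        loss = nn.functional.mse_loss(feature_extractor(poisoned), target_feat)
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the change visually small
    return (anchor_img + delta).detach().clamp(0, 1)

# Usage: pair a few such poisoned images with anchor-concept captions and
# slip them into a scraped training set; the model's association for the
# anchor prompt drifts toward the target concept.
anchor = torch.rand(1, 3, 64, 64)   # e.g. a photo captioned "dog"
target = torch.rand(1, 3, 64, 64)   # e.g. a photo of a cat
poison = craft_poison(anchor, target)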