Setting aside the efficacy of this tool, I would be very interested in the legal implications of putting designs in your art that could corrupt ML models.

For instance, if I set traps in my home that hurt an intruder, we are both guilty of crimes (setting traps is illegal and never considered self-defense; breaking and entering is illegal).

Would I be responsible for corrupting the AI operator's data if I intentionally include adversarial artifacts to corrupt models, or is that just DRM that legally protects my art from infringement?

edit:

I replied to someone else, but this is probably good context:

DRM is legally allowed to disable or even corrupt the software or media it is protecting if it detects misuse.

If an adversarial-AI tool attacks the model, it becomes a question of whether the model, having now incorporated my protected art, is "mine" to disable or corrupt, or whether it is out of bounds for DRM.

So, for instance, a court could say that adversarial-AI methods may actively prevent the training software from incorporating the protected media into a model, but may not corrupt the model itself.
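
For concreteness, by "adversarial artifacts" I mean something like the following minimal sketch of a one-step fast gradient sign method (FGSM) perturbation. This is not how any particular cloaking tool works (real tools use more sophisticated objectives); it assumes PyTorch/torchvision, and the filenames and model are placeholders, just to show the kind of near-invisible pixel change being litigated over:

```python
# Minimal FGSM-style sketch of an "adversarial artifact".
# Assumptions: PyTorch + torchvision installed; "artwork.png" is a
# placeholder input. Real cloaking tools use different, stronger
# objectives; this only illustrates a small pixel-level change
# crafted to mislead a model that ingests the image.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
normalize = T.Normalize(mean=[0.485, 0.456, 0.406],
                        std=[0.229, 0.224, 0.225])

# Load the artwork as a [0, 1] tensor and track gradients w.r.t. pixels.
to_tensor = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
img = to_tensor(Image.open("artwork.png").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

# Ascend the loss on the model's own prediction, pushing the image's
# features away from whatever the model currently "sees" in it.
logits = model(normalize(img))
loss = F.cross_entropy(logits, logits.argmax(dim=1))
loss.backward()

eps = 4 / 255  # perturbation budget: small enough to be near-invisible
cloaked = (img + eps * img.grad.sign()).clamp(0, 1).detach()
T.ToPILImage()(cloaked.squeeze(0)).save("artwork_cloaked.png")
```

The `eps` budget is the whole trade-off: larger values degrade a model's view of the image more reliably, but also become visible to humans, which is why these tools aim to stay just below perceptibility.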