Of course, this all raises the question of whether bots pretending to be human have only unethical use cases, or whether a law like this could hit benign uses as well.<p>For instance, ARGs could have bot accounts for fictional characters on social media sites. These accounts could give pre-recorded messages that hint that the user should visit some third-party site for more clues or information. Is that legally dubious? I can see it being so under this law, but I don't think it's comparable to a business running, say, an automated chat support system and pretending its bots are human.<p>Same goes for roleplaying bots on online community sites. These aren't a huge thing right now, but they could be in the future, with accounts that act like NPCs do in video games or interact with a player's account in side quests or whatnot. These don't seem like they'd be morally 'wrong' things to have on a site, but they'd probably get hit by this law regardless.<p>Point is, these types of bots don't necessarily have only dodgy use cases.