So? When I built a system circa 2010 that could collect and classify millions of images with close-to-perfect accuracy (with some tricks), I found that unacceptable images (dead Nazis, micropenises, etc.) occurring at a rate of 1 in 10,000 would get my advertising turned off.<p>Circa 2017, YouTube had a crisis over beheading videos and the like that caused it to demonetize almost all user-contributed content until advertisers felt safe.<p>In the advertising-based economy, social acceptability (a non-functional requirement if I ever saw one) matters more than anything else, including the quality of results.<p>The problem with ChatGPT recommending people for targeted assassination is not that it does it but that the story gets covered in the media. When there are 100 chatbots like that, it won't be news that one of them will write you a story where your next-door neighbor gets raped, as awful as that is. For the first one or two or three, it's a showstopper.<p>The PRC would not share some of our moral scruples, but it would feel similarly prudish about a chatbot that talks about what happened on June 4, 1989, or that explains why Taiwan should be an independent country, etc. All of these values are relative, and you'd better believe that Israel already uses network analysis to target Palestinians for assassination (and really assassinates them), and it does so with real network analysis, not just a bullshitting machine that will bullshit about death and destruction as readily as about <i>Ren and Stimpy</i>.<p>Part of it is that ChatGPT is cosplaying as a person and appears to have moral agency. Nobody complains that the Mengele Cartel uses pocket calculators and software like <i>Quickbooks</i> to help manage international sales of cocaine and all the mayhem that entails.