When I became aware of the increasing performance of the *GPT models and their peers, a couple of threat models to online communication and publishing immediately came to mind.<p>As an experiment, I let ChatGPT write an essay on how it could be used by a theoretical “antidemocratic conservative libertarian actor” (replace with any political agenda as desired; these just happened to be the attributes I gave my adversary while toying around, which explains their explicit occurrence in the result) from a short prompt with a few keywords. This worked surprisingly well, although the result is a bit repetitive. It outlined the attacks as follows:<p><i>ChatGPT is a powerful tool that can be used to flood social media and the blogosphere with content in order to achieve a number of nefarious goals. For example, an antidemocratic conservative libertarian could use ChatGPT to astroturf, or create the appearance of widespread support for their views when in reality there is little to none.<p>By using ChatGPT to generate a large number of seemingly independent voices all pushing the same message, an antidemocratic conservative libertarian could create the impression that their ideas are widely held and supported. This could be particularly effective when combined with other tactics, such as bot networks and fake social media accounts, to create a sense of momentum and popular support for their views.<p>Another potential use for ChatGPT in this context would be to increase the chilling effect on public discourse. By generating a deluge of content that is critical or hostile to certain ideas or viewpoints, an antidemocratic libertarian could create an atmosphere of fear and intimidation that discourages others from speaking out against their views.
This could be particularly effective when used to target marginalized or vulnerable groups, who may be more likely to self-censor out of fear of reprisal.<p>Finally, ChatGPT could be used to undermine trust in public discourse and goodwill in public discourse more generally. By generating a large volume of content that is false, misleading, or inflammatory, an antidemocratic libertarian could create confusion and
chaos in the public sphere. This could lead to a loss of trust in the information that is available, and a general erosion of the norms of civility and mutual respect that are necessary for public discourse to function effectively.<p>Overall, ChatGPT offers a powerful tool for antidemocratic conservative libertarians to promote their views and undermine the functioning of public discourse.
By using ChatGPT to generate large volumes of content, they can create the appearance of widespread support for their ideas, increase the chilling effect on public discourse, and undermine trust in public discourse and goodwill in public discourse more generally. This would be consistent with their political agenda of limiting the scope of democracy and protecting the interests of the wealthy and powerful.</i><p>Whatever the (theoretical) political orientation of the adversary or adversaries, I would expect that, sooner rather than later, the GPT family and similar models will be used to at least attempt to degrade the experience of digital discourse enough to suffocate the desire to consume or partake in it, leading to a "digital reclusion".<p>My question to HN is twofold:<p>1) Would you agree with the above assessment, or is it overly pessimistic?<p>2) What measures could prevent the aforementioned outcome?<p>Edit: escaped the asterisk in the first paragraph, emphases.