ChatGPT, write a poem about "considered harmful"

"Considered Harmful"

A label too often placed,
On thoughts and ideas unbraced,
By those who fear the change it brings,
And cling to what familiarity brings.

But progress waits for none,
And stagnation must be undone,
For growth and evolution's sake,
We must make the considered harmful, break.

For every idea met with disdain,
May hold the key to progress, attain,
So let us not be quick to judge,
And keep an open mind, a mind that's not sludge.

For what is harmful in one's eyes,
May lead to breakthroughs, skies to skies.
So let us not be afraid,
To challenge the status quo, and let the considered harmful fade.
I agree that ChatGPT's ability to generate convincing arguments for complete BS is concerning. But I'm not convinced it's a clear negative in the long run. Humans have been concocting convincing arguments for complete BS for millennia. Perhaps having non-humans generate such arguments will make humans less inclined to believe BS based on such arguments. Eventually AIs will be able to think logically, barring any catastrophes that prevent further improvements. And eventually, the AI alignment problem will either be solved or human civilization will fail in one way or another. In the meantime, things will be very strange, but not clearly better or worse.
There's an alarmist aspect to this article, but there is no doubt that things like ChatGPT will be used to produce an enormous amount of garbage text that will flood into and destroy the value of content-driven sites like news, recipes, social media, how-to/tutorial, education, etc.

A higher value will be put on socially administered and validated content providers, where reputation will be king.
Oh wow! ChatGPT is bad because we (humans) can use it to do bad things? Is that what the author is trying to say, or did I completely miss the point here?

And if I didn't... seriously?
> Even the danger of nuclear energy is less harmful compared to ChatGPT because you still need many experts to build an atomic bomb or a very expensive power plant that potentially turns a large area unusable for thousands of years. It's not that we haven't had multiple incidents proving that point.

I'm sure the victims of nuclear weapons and accidents would very much like to disagree. Misinformation (which seems to be the most dangerous harm mentioned) is hardly comparable to these things, even taking into account the expertise needed to create a nuclear bomb or power plant.

Also, the author complains that in the past you could tell what was misinformation by spotting typos. Plenty of misinformation lacked misspellings and grammar mistakes before ChatGPT, and this very article contains a few of its own. If anything, this will stop people from relying on writing style as a way of verifying the validity of a statement (which is silly) and push them toward using critical thinking instead. In the end, that is the only way to ever find "the truth", especially in areas where you have no prior knowledge, as the author worries. People aren't helpless in the face of good writing.