While it may be fine for a non-technical essay, on technical matters it has a bad habit of spewing nonsense, both in its answers and in its citations. Professor Alex Wellerstein (of Nukemap fame) gives two anecdotes highlighting its issues.<p>1. <a href="https://old.reddit.com/r/AskHistorians/comments/11u21ie/the_consensus_from_a_brief_search_of_previous/jcn3aee/" rel="nofollow">https://old.reddit.com/r/AskHistorians/comments/11u21ie/the_...</a><p><pre><code> An anecdote, but I recently was asked to review the essay of a student who I had not taught. I became highly suspicious it had been generated by ChatGPT, because it had the "feel" of its output. The clincher was that it had an entire page of references... all of which were fake. They all looked plausible, and even had URLs. But not one of them was accurate, all of the URLs were dead, and all investigation made it clear the references had never existed. I was somewhat amazed, both at the gall of a chatbot inventing fake references, and at the student who clearly did not click on even one of the generated links, yet had still asked for an essay re-grade!!
</code></pre>
2. <a href="https://old.reddit.com/r/AskHistorians/comments/11u21ie/the_consensus_from_a_brief_search_of_previous/jcn3w2q/" rel="nofollow">https://old.reddit.com/r/AskHistorians/comments/11u21ie/the_...</a><p><pre><code> One experiment I ran with it recently was to ask it about the RIPPLE, which is a nuclear weapon design that was tested in the 1960s. The details of the RIPPLE are not public, but the fact of its existence, who invented it, and its testing are, as well as some very broad pieces of information about it. Anyway, I repeatedly asked ChatGPT how the RIPPLE worked, and why it was called the RIPPLE, and every time it gave me a totally new and contradictory answer, freely making it up each time. After giving me maybe 6 different answers in a row it then noticed it was giving me contradictions, and from that point onward claimed that the most recent answer was correct. I was impressed at how inconsistent it was: you could just ask it the same thing over and over again and it would make new things up each time. The only consistency it gave me was wrong: it repeatedly emphasized that the design was entirely hypothetical and never tested, which is false (it was tested at least four times).
In a separate exchange, I asked it to ask me a question, and when (for whatever reason) I told it I was interested in nuclear weapons, it began to lecture me on how this was a topic that should be left to experts. I then told it I was an expert, and it then started lecturing me on how an expert on this topic ought to behave and think. It almost seemed defensive. I thought it was pretty rich — an impressive mansplaining simulator, indeed.
</code></pre>
3. The full discussion from which (2) is excerpted: <a href="https://old.reddit.com/r/nuclearweapons/comments/117hssn/chatgpt_makes_up_shit_about_ripple/" rel="nofollow">https://old.reddit.com/r/nuclearweapons/comments/117hssn/cha...</a>