That article links to the actual paper, the abstract of which is itself quite readable: <a href="https://dl.acm.org/doi/pdf/10.1145/3613904.3642596" rel="nofollow">https://dl.acm.org/doi/pdf/10.1145/3613904.3642596</a><p>> Q&A platforms have been crucial for the online help-seeking behavior of programmers. However, the recent popularity of ChatGPT is altering this trend. Despite this popularity, no comprehensive study has been conducted to evaluate the characteristics of ChatGPT's answers to programming questions. To bridge the gap, we conducted the first in-depth analysis of ChatGPT answers to 517 programming questions on Stack Overflow and examined the correctness, consistency, comprehensiveness, and conciseness of ChatGPT answers. Furthermore, we conducted a large-scale linguistic analysis, as well as a user study, to understand the characteristics of ChatGPT answers from linguistic and human aspects. Our analysis shows that 52% of ChatGPT answers contain incorrect information and 77% are verbose. Nonetheless, our user study participants still preferred ChatGPT answers 35% of the time due to their comprehensiveness and well-articulated language style. However, they also overlooked the misinformation in the ChatGPT answers 39% of the time. This implies the need to counter misinformation in ChatGPT answers to programming questions and raise awareness of the risks associated with seemingly correct answers.