Per the revision history, there's no new information since the original story last month.<p><a href="https://news.ycombinator.com/item?id=33882298" rel="nofollow">https://news.ycombinator.com/item?id=33882298</a>
I think this is mostly fair, but it could also be helpful to have AI-generated "auto answers" that come with a warning. When you're typing a question, SE already tries to figure out whether it has been asked before, so there isn't a duplicate post. What might also be nice is a generated answer that says something like "We have generated a possible answer, but beware that it may not be accurate or up to our community standards. If the answer is acceptable, please accept it; if not, reject it and we will post your question." I'm sure someone could come up with better phrasing. This would also disincentivize people from posting AI-generated answers, and it would let SE adapt to the changing environment rather than reject it outright (we do expect LLMs to get better, even if they remain stochastic parrots).<p>Would this not be a better middle ground? Thoughts?
One wonders what the overall impact of such systems will be on the internet as a whole. Most (all?) people don't view things like movie/game/product reviews the same way they did not so long ago, because there's a greater perception that those reviews are less than authentic. So they have less value. What happens when internet dialogue itself is seen as unreliable even as an indicator of "authentic" opinion?<p>The obvious prediction is that the vast majority of sites will try to ban software-generated text (while probably covertly allowing some, for motivations ranging from increasing user engagement to fulfilling government "requests"), but another equally obvious prediction is that such software will gradually become much more accessible, including being able to compile/tweak it at home, which means any effort to put the genie back in the bottle is certainly doomed to failure.<p>We may be living through the end of the era of being able to believe that a blurb, like this one, was actually written by some human somewhere. The implications of this seem nearly as impossible to imagine as it would have been, 30 years ago, to imagine the implications of being able to post and exchange text/data with each other on a global network.<p>Interesting times we are living through, seemingly as always.
I wonder if this kind of ban will become common in most forums.<p>I like to farm karma as a hobby, and part of my work involves harvesting a lot of the most upvoted comments in discussions and using them as training data to generate new comments that have high potential for upvotes. Eventually this AI can be deployed to build up new accounts that have high karma.
This is a good thing. StackExchange no doubt provides a very large and useful resource for AI training on programming knowledge. Putting AI-generated answers back into StackExchange would degrade that training data and be quite useless. A separate resource would be required for something like that.
>While the AI that generated it could be attributed, the nature of AI is that it breaks down existing writing and reconstructs it, so it's not that simple to name a source.<p>This is a very weird and muddled description of what a language model does, let alone "AI" in general.
I can see their plagiarism angle. After a lifetime of training not to pass someone else's work off as my own, it feels strange copying the output of ChatGPT without sourcing it. Though I know ghostwriting and email templates are a thing, so I should just get over it.
It's a silly rule. As someone mentioned in an answer there, what happens if someone copy-pastes an answer from a site they found on the Internet, and that site happens to have its content generated by AI?
It's always a shame when content is banned based on who wrote it (or didn't write it), rather than the actual content.<p>I have a sneaking suspicion that in a few years time, sites that explicitly ban AI content will either reverse their decision or become a thing of the past. AI tools are very quickly becoming accessible to the masses and that lets the masses create more and/or higher-quality content -- and, IMO, that's a very good thing.<p>But, obviously, established sites always struggle when they suddenly receive a large influx of new users/content, especially when they're at odds with (or completely oblivious to) the societal "norms" already established on those sites.
This is sort of bullshit. It's like saying "Witches are banned!" Now for my next trick, purchase my 100% Guaranteed Witch Detector! All Manner of Witches, Covens and Goblins Detected At <i>No Extra Charge</i> - Dial 555-Dewitch Now for a Limited Time Complimentary Offer. Hurry, Only While Stocks Last.<p>It's just another "StackExchange Mod Tool/Policy" with which to oppress the curious innocent masses. YoUr QuEsTiON is UnCLeAR--said the SPhynX. Then they press the button and you fall through the trap door into Jabba-the-Hutt's underthrone dungeon of "closed as poorly worded / likely witchery" questions. Ugh...<p>Will the evil domino of "No Bots Allowed" fall at HN next? <i>"You sound like a bot. Off with yer head!"</i>