I know there's going to be a lot of pushback on this because HN is sensitive to censorship, but let's try to look at it a little more objectively than that. I'd like to draw on one example, one that is near and dear to many hearts in the US and abroad: the US election.

Throughout the course of the election, opinions and comments were being shared everywhere: Twitter, Facebook, here on HN, bathroom stalls, news broadcasts and websites, comments on blogs and videos. There was no shortage of opinions. This is great, and it showcases the power of the internet to transmit and receive all kinds of information. But doesn't it matter how an opinion is formed? Surely you wouldn't enjoy, or find valuable, a blog post that was sparse on details, evidence, or a coherent line of thinking. And yet, there it was: in every corner of the internet, anyone who could operate an internet-connected device could share their opinion on the matter. It didn't matter whether they spent one second or one hour on their response; most comments received the same amount of attention and weight.

The question is: should all thoughts and opinions be valued the same when information is in such abundant supply? Most of us don't think so, and we've shown that by creating voting systems that let humans filter out the things we find unconstructive. But we don't really stop there, do we? Humans are also, on average, incredibly biased: you see it here, and you see it a lot on Reddit. People vote things down not on the merit of the attention the commenter gave to their response, but generally on whether or not they agree with the sentiment the commenter expressed.

How many arguments has this bias fuelled? I wonder how many people have been pushed further away from a centrist perspective by the shaming and bashing that goes on in online threads.

I think Hacker News is a great example of humans doing much better than average at filtering out strictly toxic comments (and the mods certainly deserve at least part of the thanks). We're really lucky to be able to have people with opposing views engage in conversation here, and to see many different perspectives treated with the same level of respect. But even here, we're quite often prevented from having discussions that are truly political because of the toxicity that arises. And I have to say I think I've noticed an increase in that toxicity over the past couple of years.

There aren't many immediately obvious solutions to this problem, but I'd argue that AI intervention isn't the worst one, and it may be the best, even compared to human moderation. I'm going to give Google the recognition they deserve for this service. I think wider adoption of this approach to online conversation could dramatically change the way we choose to engage each other, and would generally lead to more positive perspectives of one another -- something we could all use a little help with.

Edit: I will say, however, that this needs to work. If it's not doing its job correctly, or well enough, it could lead to problems that I don't need to address here.