
Ex-Reddit CEO on Twitter moderation

1175 points | by kenferry | over 2 years ago

139 comments

dang over 2 years ago
All: this is an interesting submission—it contains some of the most interesting writing about moderation that I've seen in a long time*. If you're going to comment, please make sure you've read and understood his argument and are engaging with it.

If you dislike long-form Twitter, here you go: https://threadreaderapp.com/thread/1586955288061452289.html - and please *don't* comment about that here. I know it can be annoying, but so is having the same offtopic complaints upvoted to the top of every such thread. This is why we added the site guideline: "*Please don't complain about tangential annoyances—e.g. article or website formats*" (and yes, this comment is also doing this. Sorry.)

Similarly, please resist being baited by the sales interludes in the OP. They're also offtopic and, yes, annoying, but this is why we added the site guideline "*Please don't pick the most provocative thing in an article to complain about—find something interesting to respond to instead.*"

https://news.ycombinator.com/newsguidelines.html

* even more so than https://news.ycombinator.com/item?id=33446064, which was also above the median for this topic.
ufo over 2 years ago
In the US, where Twitter & Facebook are dominant, the current consensus in the public mind is that political polarization and radicalization are driven by the social media algorithms. However, I have always felt that this explanation was lacking. Here in Brazil we have many of the same problems, but the dominant social media are Whatsapp group chats, which have no algorithms whatsoever (other than invisible spam filters). I think Yishan is hitting the nail on the head by focusing the discussion on user behavior instead of on the content itself.
jameskilton over 2 years ago
Every single social media platform that has ever existed makes the same fundamental mistake. They believe that they just have to remove or block the bad actors and bad content, and that will make the platform good.

The reality is that *everyone*, myself included, can be and will be a bad actor.

How do you build and run a "social media" product when the very act of letting anyone respond to anyone with anything is itself the fundamental problem?
motohagiography over 2 years ago
I've had to give this some thought for other reasons, and after a couple decades solving problems in security analogous to moderation, I agree with yishan about signal-to-noise over the specific content. But what I have effectively spent a career studying and detecting with data is a single factor: malice.

It's something every person is capable of, and it takes a lot of exercise and practice with higher values to reach for something else when your expectations are challenged; often it's an active choice to recognize the urge and act differently. If there were a rule or razor I would make on a forum or platform, it's that all content has to pass the bar of being without malice. It's not "assume good intent"; it's recognizing that there are ways of holding very difficult opinions without malice, and that one can have conventional views that are malicious and unconventional ones that are not. If you have ever dealt with a prosecutor or been on the wrong side of a legal dispute, these are people fundamentally actuated by malice, and the similar prosecution of ideas and opinions (and ultimately people) is what wrecks a forum.

It's not about being polite or civil, avoiding conflict, or even avoiding mockery and some very funny and unexpected smackdowns. It's a quality that, since we are all universally capable of it, I think we're also able to know when we see it. "Hate" is a weak substitute because it is so vague we can apply it to anything, but malice is ancient and essential. Of course someone malicious can just redefine malice the way they have done other things and use it as an accusation, because words have no meaning other than as a means in struggle; but really, you can see when someone is actuated by it.

I think there is a point where a person decides, consciously or not, that they will relate to the world around them with malice, and the first casualty of that is an alignment to honesty and truth. What makes it useful is that you can address malice directly and restore an equilibrium in the discourse, whereas accusations of hate and the like are irrevocable judgments. Given its applicability, I wonder if this may be the tool.
kmeisthax over 2 years ago
This is a very good way to pitch your afforestation startup accelerator in the guise of a talk on platform moderation. /s

I'm pretty sure I've got some bones to pick with yishan from his tenure at Reddit, but everything he's said here is pretty understandable.

Actually, I would like to develop his point about "censoring spam" a bit further. It's often said that the Internet "detects censorship as damage and routes around it". This is propaganda, of course; a fully censorship-resistant Internet is entirely unusable. In fact, the easiest way to censor someone online is through harassment or DDoS attacks - i.e. have a bunch of people shout at you until you shut up. Second easiest is through doxing - i.e. make the user feel unsafe until they jump off the platform and stop speaking. Neither of these requires content removal capability, but they still achieve the goal of censorship.

The point about old media demonizing moderation is something I didn't expect, but it makes sense. This *is* the same old media that gave us cable news, after all. Their goal is not to inform, but to allure. In fact, I kinda wish we had a platform that explicitly refused to give them the time of day, but I'm pretty sure it's illegal to do that now[0], and even a decade ago it would have been financial suicide to make a platform catering only to individual creators.

[0] For various reasons:

- The EU Copyright Directive imposes an upload filtering requirement on video platforms that needs cooperation with old media companies in order to implement. The US is also threatening similar requirements.

- Canada Bill C-11 makes Canadian content (CanCon) must-carry for all Internet platforms, including ones that take user-generated content. In practice, it is easier for old media to qualify as CanCon than for actual Canadian individuals.
digitalsushi over 2 years ago
I can speak only at a Star Trek technobabble level on this, but I'd like it if I could mark other random accounts as "friends" or "trusted". Anything they upvote or downvote becomes a factor in whether I see a post or not. I'd also be upvoting/downvoting things, and being a possible friend/trusted account for others.

I'd like a little metadata with my posts, such as how controversially my network voted on them. The ones that are out of calibration, I can view, see the responses, and then see whether my network has changed. It would be nice to click on a friend and get a report across months of how similarly we vote. If we started to drift, I could easily cull them and get my feed cleaned up.
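The trust-network feed digitalsushi describes can be sketched in a few lines. This is a hypothetical illustration, not any platform's actual ranking code: all names (`Post`, `feed_score`, `vote_similarity`) are invented, and the scoring is deliberately naive.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    votes: dict = field(default_factory=dict)  # voter name -> +1 or -1

def feed_score(post: Post, trusted: set) -> int:
    """Score a post using only the votes cast by accounts the viewer trusts."""
    return sum(v for voter, v in post.votes.items() if voter in trusted)

def vote_similarity(votes_a: dict, votes_b: dict) -> float:
    """Fraction of commonly-voted posts on which two users agreed.
    This is the 'drift report' idea: a falling score suggests culling."""
    common = votes_a.keys() & votes_b.keys()
    if not common:
        return 0.0
    return sum(votes_a[p] == votes_b[p] for p in common) / len(common)
```

A feed built this way would simply hide posts whose `feed_score` falls below some viewer-chosen threshold, so moderation becomes a per-user property of the trust graph rather than a global decision.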
jacobsenscott over 2 years ago
The solution is simple: only show users tweets from people they follow. People may say twitter can't make money this way, but with this model you don't need much money. You don't need moderation, or AI, or a massive infrastructure, or tracking, etc. You don't need managers or KPIs or HR, or anything beyond an engineer or two and a server or two. Musk could pay for this forever and it would never be more than a rounding error in his budget.

But this isn't what twitter is for. Twitter is for advertising.
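The follow-only feed jacobsenscott proposes really is algorithmically trivial: a reverse-chronological merge of the followed accounts' post lists. A minimal sketch (function and data shapes invented for illustration):

```python
import heapq

def timeline(following: set, posts_by_user: dict, limit: int = 50) -> list:
    """posts_by_user maps author -> [(timestamp, text), ...], newest-first.
    Returns the newest `limit` posts across everyone the viewer follows."""
    merged = heapq.merge(*(posts_by_user.get(u, []) for u in following),
                         key=lambda p: p[0], reverse=True)
    return [post for _, post in zip(range(limit), merged)]
```

Because each per-author list is already sorted, the merge is lazy and needs no ranking model, engagement signals, or per-user state beyond the follow list, which is the commenter's point about how little infrastructure this model requires.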
MichaelZuo over 2 years ago
There are some neat ideas raised by Yishan.

One is "put up or shut up" for appeals of moderator decisions: anyone who wishes to appeal must also consent to having all their activities on the platform relevant to the decision revealed publicly.

It definitely could prevent later accusations of secretiveness or arbitrariness. And it would probably also make users think harder in marginal cases before submitting.
ItsBob over 2 years ago
Here's a radical idea: let me moderate my own shit!

Twitter is a subscription-based system (by this I mean that I have to subscribe to someone's content), so if I subscribe to someone and don't like what they say, then buh-bye!

Let me right-click on a comment/tweet (I don't use social media, so I'm not sure of the exact terminology the kids use these days) with the options of:

- Hide this comment

- Hide all comments in this thread from <name>

- Block all comments in future from <name> (you can undo this in settings).

That would work for me.
RockyMcNuts over 2 years ago
see also:

Hey Elon: Let Me Help You Speed Run The Content Moderation Learning Curve

https://www.techdirt.com/2022/11/02/hey-elon-let-me-help-you-speed-run-the-content-moderation-learning-curve/
paradite over 2 years ago
I recently started my own Discord server and had my first experience of content moderation. The demographic is mostly teenagers. Some have mental health issues.

It was the hardest thing ever.

In the first incident, I chose to ignore a certain user being targeted by others for posting repeated messages. The person left a very angry message and left.

Come the second incident, I thought I had learned my lesson. Once a user was targeted, I tried to stop others from targeting the person. But this time the people doing the targeting wrote angry messages and left.

Someone asked a dumb question, and I replied in good faith. The conversation went on and on and became weirder and weirder, until the person said "You shouldn't have replied to me," and left.

Honestly, at this point I am just counting on luck to keep it running.
blfr over 2 years ago
> Because it is not TOPICS that are censored. It is BEHAVIOR.

> (This is why people on the left and people on the right both think they are being targeted)

An enticing idea, but simply not the case for any popular existing social network. And it's triply untrue on yishan's reddit, which, through both administrative measures and moderation culture, targets any and all communities that do not share the favoured new-left politics.
dbrueck over 2 years ago
At least one missing element is that of *reputation*. I don't think it should work exactly like it does in the real world, but its absence seems to always lead to major problems.

The cost of being a jerk online is too low - it's almost entirely free of consequences.

Put another way, not everyone deserves a megaphone. Not everyone deserves to chime in on any conversation they want. The promise of online discussion is that everyone should have the *potential* to rise to that, but granting the privilege from the outset and hardly ever revoking it doesn't work.

Rather than having an overt moderation system, I'd much rather see the reach/visibility/weight of your messages driven by things like your time in the given community, your track record of insightful, levelheaded conversation, etc.
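dbrueck's reputation-weighted reach can be made concrete with a toy scoring function. This is an invented illustration, not a real platform's formula: the inputs (tenure, helpful flags, abuse flags) follow the comment, while the weighting itself is an assumption.

```python
import math

def reach_multiplier(days_in_community: int,
                     helpful_flags: int,
                     abuse_flags: int) -> float:
    """Multiplier in (0, 1] applied to how widely a post is distributed.

    Tenure grows logarithmically and saturates around a year, so longevity
    helps but can't be farmed quickly; the track-record term is a smoothed
    ratio of helpful to total flags (Laplace-style +1/+2 smoothing), so a
    new account starts at half reach rather than zero or full.
    """
    tenure = min(1.0, math.log1p(days_in_community) / math.log1p(365))
    track_record = (helpful_flags + 1) / (helpful_flags + abuse_flags + 2)
    return tenure * track_record
```

The design point matches the comment: instead of a binary ban/allow decision, bad behavior quietly shrinks the megaphone, and sustained good behavior in one community is what grows it back.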
kalekold over 2 years ago
I wish we could all go back to phpBB forums. Small, dedicated online communities were great. I can't remember massive problems like this back then.
ptero over 2 years ago
This topic was adjacent to the sugar and L-isomer comments, which probably influenced my viewpoint:

Yishan is saying that Twitter (and other social networks) moderate bad behavior, not bad content. They just strive for a higher SNR. It is just that specific types of content seem to be disproportionately responsible for starting bad behavior in discussions, and thus get banned. Sounds rational and, while potentially slightly unfair, looks totally reasonable for a private company.

But what I think is happening is that this specific moderation, on social networks in general and Twitter in particular, has pushed them along the R- (or L-) isomer path to the extent that a lot of content, however well presented and rationally argued, just cannot be digested. Not because it is objectively worse or leads to a nastier state, but simply because deep inside, some structure is pointing in the wrong direction.

Which, to me, is very bad. Once you reach this state of mental R- and L- incompatibility, no middle ground is possible and the outcome is decided by outright war. Which is not fun and brings a lot of casualties. My 2c.
hunglee2 over 2 years ago
"there will be NO relation between the topic of the content and whether you moderate it, because it's the specific posting behavior that's a problem"

Some interesting thoughts from Yishan; a novel way to look at the problem.
ilyt over 2 years ago
It's kinda funny that many of the problems he's mentioning are exactly how moderation on reddit currently works.

Hell, the newly revamped "block user" mode got extra gaslighting as a feature: a blocked person now can't reply to *anyone* under the comments of the person who blocked them, not just to that person. So anyone who doesn't like people discussing how they are wrong can simply block those who disagree, and they will be unable to answer any of their comments.
csours over 2 years ago
Is there a better name than "rational jail" for the following phenomenon?

We are having a rational, non-controversial, shared-fact-based discussion. Suddenly the first party in the conversation goes off on a tangent and starts making values- or emotion-based statements instead of factual ones. The other party then gets angry and/or confused. The first party then gets angry and/or confused.

The first party did not realize they had broken out of the rational jail the conversation was taking place in; they thought they were still being rational. The second party detected an idea that did not fit their rational dataset, detected a jailbreak, and this upset them.
im-a-baby over 2 years ago
A few thoughts:

1) Everyone agrees that spam should be "censored" because (nearly) everyone agrees on what spam is. I'm sure (nearly) everyone would also like to censor "fake news", but not everyone agrees on the definition of fake news, which is why that topic is more contentious than spam.

2) Having a "1A mode", where you view an unmoderated feed, would be interesting, if only to shut up people who claim that social media companies are supposed to be an idealistic bastion of "free speech." I'm sure most would realize the utility is diminished without some form of moderation.
karaterobot over 2 years ago
There were indeed some intelligent, thoughtful, novel insights about moderation in that thread. There were also... two commercial breaks to discuss his new venture? Eww. While discussing how spam is the least controversial type of noise you want to filter out? I appreciate the good content; I'm just not used to seeing product placement wedged in like that.
monksy over 2 years ago
> No, what's really going to happen is that everyone on the council of wise elders will get tons of death threats, eventually quit...

Yep. If you can't stand being called an n* (or other racial slurs), don't be a reddit moderator. I've also been called a Hillary bootlicker and a Trump one.

Being a reddit moderator isn't for the thin-skinned. I hosted social meetups, so this could have spilled into the real world. Luckily I had strong social support in the group, where that would have been taken care of real quick. I've only had one guy who threatened to come and be disruptive at one of the meetups. He did come out. He did meet me.

----

> even outright flamewars are typically beneficial for a small social network:

He's absolutely correct. It also helps to define community boundaries and avoid extremism. A lot of this "don't be mean" culture only endorses moderators stepping in, dictating how a community talks, and officially bullying people who disagree.
spaceman_2020 over 2 years ago
How many tweets into the thread do you get before you just go, "maybe I should write this as a blog post"?
incomingpain over 2 years ago
This CEO did the same thread 6 months ago and was blasted off the internet. You can see his thread here: https://twitter.com/yishan/status/1514938507407421440

edit/ Guess it is working now?

The most important post in his older thread: https://twitter.com/yishan/status/1514939100444311560

He never justifies this point. The world absolutely has not changed in the context of censorship. Censorship apologetics notwithstanding.

The realization that "the world changed" is a reveal: as CEO, he learned where the censorship was coming from.
cwkoss over 2 years ago
Yishan could really benefit from some self-editing. There are like 5 tweets' worth of interesting content in this hundred-tweet meandering thread.
ojosilva over 2 years ago
There are so many tangible vectors in content! It makes me feel like moderation is a doable, albeit hard to automate, task:

- substantiated / unsubstantiated
- extreme / moderate
- controversial / anodyne
- fact / fun / fiction
- legal / unlawful
- mainstream / niche
- commercial / free
- individual / collective
- safe / unsafe
- science / belief
- vicious / humane
- blunt / tactful
- etc. etc.

Maybe I'm too techno-utopic, but can't we model AI to detect how these vectors combine to configure moderation?

Ex: Ten years ago masks were niche, so unsubstantiated news on the drawbacks of wearing masks was still considered safe, because very few people were paying attention and/or could harm themselves; it was not controversial and did not require moderation. Post-covid, the vector values changed: questionable content about masks could be flagged for moderation with some intensity indexes, user-discretion-advised messages, and/or links to rebuttals if applicable.

Let the model and results be transparent, reviewable, and, most important, editorial. I think the greatest mistake of moderated social networks is that many people (and the networks themselves) think these internet businesses are not "editorial", but they are not very different from regular news sources when it comes to editorial lines.
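A toy version of ojosilva's vector idea: score content on a few of the listed axes (hand-assigned here in [0, 1]; in practice a classifier would supply them) and combine the scores into a moderation action. The axis subset, weights, and thresholds are all invented for illustration.

```python
AXES = ("unsubstantiated", "extreme", "controversial", "unsafe")
WEIGHTS = {"unsubstantiated": 0.2, "extreme": 0.25,
           "controversial": 0.25, "unsafe": 0.3}

def moderation_action(scores: dict) -> str:
    """Combine per-axis scores into one of three graduated actions."""
    risk = sum(WEIGHTS[axis] * scores.get(axis, 0.0) for axis in AXES)
    if risk >= 0.7:
        return "flag-for-review"
    if risk >= 0.4:
        return "attach-advisory"  # user-discretion note / link to rebuttals
    return "allow"
```

The mask example maps naturally onto this: the same text drifts from "allow" toward "attach-advisory" as its "controversial" and "unsafe" scores rise over time, without any rule about the topic itself changing.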
whatshisface over 2 years ago
He says there is no principled reason to ban spam, but there's an obvious one: it isn't really speech. The same goes for someone who posts the same opinion everywhere with no sense of contextual relevance. That's not real speech, it's just posting.
pphysch over 2 years ago
Musk is betting on the $8 membership being a big hit, which would immediately address a lot of the moderation issues.

It's gonna be a completely different paradigm than reddit: herding cats into a box painted on the ground vs. herding cats into an 8-foot-high cage.
jmyeet over 2 years ago
This is a good post.

I'm one of those who likes to bring out "fire in a theater" or doxxing as the counterexample to show that literally nobody is a free speech absolutist. This is on top of it not being a 1A issue anyway, because the first five words are "Congress shall make no law".

But spam is a better way to approach this and show it really isn't a content problem but a user behaviour problem. Because that's really it.

Another way to put this is that the *total experience* matters, meaning the experience of all users: content creators, lurkers *and advertisers*. If someone goes into an AA meeting and won't shut up about scientology or coal power, they'll get kicked out. Not because their free speech is being violated, but because they're annoying and worsening the experience of everyone else they come in contact with.

Let me put it another way: just because you have a "right" to say something doesn't mean other people should be forced to hear it. The platform has a greater responsibility than your personal interests, and that responsibility is about behaviour (as the article notes), not content.

As this thread notes, this is results-oriented.
rootusrootus over 2 years ago
Having read everything he wrote, it's interesting to see how well the discussion on HN matches it.
klyrs over 2 years ago
We've seen some laws passed recently which attempt to prevent social media companies from moderating effectively. Yishan repeatedly makes the point here that most forms of spam are not illegal. Rather recent case law[1, 2] has confirmed that even panhandling is protected speech. Prior to that, we saw Lloyd v. Tanner[3], which ruled that private property could function as a "town square" and censorship runs afoul of the first amendment. Section 230 of the Communications Decency Act carves out a special exemption for websites that host user-generated content, and politicians on both sides of the aisle have set their sights on remodeling that law.

I'm really curious to see how this plays out. As far as I see it, a well-lawyered bot operator could completely undermine the ability of websites to moderate their content, and as Yishan aptly points out, they wouldn't stop at inflammatory content. Their goal would be to open the floodgates for commercial communications. It could completely ruin the open internet as we know it. Or, perhaps, it would merely limit the size of social media companies: once their user base crosses whatever "town square" threshold is decided on, spammers have free rein.

Interesting times we live in.

[1] https://www.npr.org/2019/04/02/709251256/judge-throws-out-panhandling-law-says-physical-interaction-is-free-speech

[2] https://media.arkansasonline.com/news/documents/2019/04/01/order_2019-4-1.pdf [pdf]

[3] https://www.mtsu.edu/first-amendment/article/582/lloyd-corporation-ltd-v-tanner
throwzway7524s over 2 years ago
While the points made were interesting, I had to stop reading almost halfway through because I found this post insincere and way too manipulative. And unlike most people on HN, I am very tolerant of marketing and enjoy receiving unsolicited commercial offers via email. This is the first time in many years that someone has put me off like this author did, despite the fact that the points made are quite interesting.

His content marketing scheme just felt way too inauthentic and made me feel that this guy isn't here to educate me, doesn't have my best interests in mind, and does not give a crap about me.

I posted this because many people on HN "cargo-cult" (as people say here) tech figures, so I wanted to advise people not to imitate this kind of marketing.

I guess my real problem here is that his product plugs are way too intellectually dishonest.
thrwaway349213 over 2 years ago
What yishan is missing is that the point of a council of experts isn't to effectively moderate a product. The purpose is to deflect blame from the company.

It's also hilarious that he says "you can't solve it by making them anonymous", because a horde of anonymous mods is precisely how subreddits are moderated.
anonymid over 2 years ago
Isn't it inconsistent to say both "moderation decisions are about behavior, not content" and "platforms can't justify moderation decisions for privacy reasons"?

It seems like you wouldn't need to reveal any details about the content of the behavior; you could just say "look, this person posted X times, or was reported Y times", etc. I find the author really hand-wavy about why this part is difficult.

I work with confidential data, and we track personal information through our system and scrub it at the boundaries (say, when porting it from our primary DB to our systems for monitoring or analysis). I know many other industries (healthcare, education, government, payments) face very similar issues.

So why don't any social network companies already do this?
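The boundary-scrubbing pattern anonymid describes is a standard one: keep personal fields in the primary record, and strip or redact them whenever a record crosses into monitoring or analysis systems. A minimal sketch, with all field names and the PII set invented for illustration:

```python
import re

# Fields treated as PII in this toy schema (assumption, not a real policy).
PII_FIELDS = {"email", "ip_address", "real_name"}
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def scrub_for_export(record: dict) -> dict:
    """Return a copy safe to ship to analytics: drop known PII fields
    and redact email addresses embedded in free-text fields."""
    clean = {k: v for k, v in record.items() if k not in PII_FIELDS}
    if isinstance(clean.get("report_text"), str):
        clean["report_text"] = EMAIL_RE.sub("[redacted]", clean["report_text"])
    return clean
```

The point of the comment survives the sketch: a ban notice built only from the scrubbed record ("posted X times, reported Y times") reveals behavior counts without exposing anyone's content or identity.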
invalidusernam3 over 2 years ago
Just add a dislike button and collapse controversial tweets at the bottom. It works well for reddit. Let the community moderate itself.
excite1997 over 2 years ago
He frames this as a behavior problem, not a content problem. The claim is that your objective as a moderator should be to get rid of users or behaviors that are bad for your platform, in the sense that they may drive users away or make them less happy. And that if you do that, you supposedly end up with a fundamentally robust and apolitical approach to moderation. He then proceeds to blame others for misunderstanding this model when the outcomes appear politicized.

I think there is a gaping flaw in this reasoning. Sometimes, what drives your users away or makes them less happy *is* challenging the cultural dogma of a particular community, and at that point the utilitarian argument breaks down. If you're on Reddit, go to /r/communism and post a good-faith critique of communism... or go to /r/gunsarecool and ask a pro-gun-tinged question about self-defense. You will get banned without any warning. But that ban passes the test outlined by the OP: the community does not want to talk about it precisely because it would anger and frustrate people, and they have no way of telling you apart from the dozens of concern trolls who show up every week. So they proactively suppress dissent because they can predict the ultimate outcome. They're not wrong.

And that happens everywhere; Twitter has scientific-sounding and seemingly objective moderation criteria, but they don't lead to uniform political outcomes.

Once you move past the basics - getting rid of patently malicious / inauthentic engagement - moderation becomes politics. There's no point in pretending otherwise. And if you run a platform like Twitter, you will be asked to do that kind of moderation - by your advertisers, by your users, by your employees.
asddubs over 2 years ago
I love that this fucking twitter thread has a commercial break in the middle of it.

edit: it has multiple commercial breaks!
belorn over 2 years ago
> Moderating spam is very interesting: it is almost universally regarded as okay to ban (i.e. CENSORSHIP) but spam is in no way illegal.

Interesting: in my country spam is very much illegal, and I would hazard a guess that it is also illegal in the US, similar to how littering, putting up posters on people's buildings/cars/walls, graffiti (a form of spam), and so on are also illegal. If I received the amount of spam I get in email as phone calls, I would go as far as calling it harassment, and of course robot phone calls are also illegal. Unsolicited email spam is also against the law.

And if spam is against the service agreement on twitter, then it could be a computer crime. If the advertisement is fraudulent (as most spam is), it is fraud. Countries also have laws about advertisement, which most spam is unlikely to honor.

So I would make the claim that there are plenty of principled reasons for banning spam, all backed up by the laws of the countries in which the users and the operators live.
thomastjeffery over 2 years ago
The biggest problems with Twitter's moderation are what the OP explicitly didn't talk about.

1. There isn't enough communication from moderators about *why* tweets are removed and users are banned. There is a missed learning opportunity when users don't get to hear why they are being moderated.

2. Bans are probably too harsh. If you can't come back having learned from your mistakes, why learn at all?

Most of it is a scaling issue, which is the same reason popular subreddits are a predictably negative experience while niche subreddits tend to be well regarded.
fastball over 2 years ago
Honestly, this comes across as a fairly disingenuous take from yishan, given how moderation has actually played out on Reddit.

Reddit was able to scale by handing off moderation to the communities themselves and to the unpaid volunteers who *wanted* to moderate them. In general, I think it is obvious to any casual observer that those volunteers don't see moderation the same way (or with the same goals) as the platform. For example, many (most?) moderators on Reddit absolutely *do* ban people not because they are starting flame wars or spamming, but because those users aren't toeing the party line. A huge number of subreddits are created specifically for that purpose: "this community has X opinion about Y, and if you don't like that you can GTFO".

However, even if you ignore the unpaid volunteers moderating subreddits and focus only on the "Admins" specifically chosen by Reddit, you can see that the only priority was not increasing the signal-to-noise ratio, including during yishan's tenure. In most cases when a community is banned, it is not because its signal-to-noise ratio is too low, but because that community has received too much of the negative PR in the press that yishan referred to. Sure, the claim is still "we're trying to maintain the integrity of the platform as a whole and are banning communities for brigading, etc.", but you can see from which communities are banned that this is clearly not the whole story.
dimva, over 2 years ago

His argument makes no sense. If this is indeed why they are banning people, why keep the reasoning a secret? Honestly, every ban should come with a public explanation from the network, in order to deter similar behavior. The way things are right now, it's unclear if, when, and for what reason someone will be banned. People get banned all the time with little explanation, or with explanations that make no sense or are inconsistent. There is no guidance from Twitter on what behavior or content or whatever will get you banned. Why is some rando who never worked at Twitter explaining why Twitter bans users?

And how does Yishan know why Twitter bans people? And why should we trust that he knows? As far as I can tell, bans are almost completely random because they are enacted by random low-wage contract workers in a foreign country with a weak grasp of English and a poor understanding of Twitter's content policy (if there even is one).

Unlike what Yishan claims, it doesn't seem to me like Twitter cares at all about how pleasant an experience using Twitter is, only that its users remain addicted to outrage and calling out others, which is why most Twitter power users refer to it as a "hellsite".
e40, over 2 years ago

Easier to read here:

https://threadreaderapp.com/thread/1586955288061452289.html
quadcore, over 2 years ago

I think TikTok is doing incredibly well in this regard, and in almost every social-network aspect. Call me crazy, but I now prefer the discussions there to HN's most of the time. I find high-quality comments (and there are still good jokes in the middle). The other day I stumbled upon a video about physics which had the most incredibly deep and knowledgeable comments I've ever seen (edit: found the video; it is not as good as I remembered, but still close to HN level imo). It's jaw-dropping how well it works.

There is classical content moderation (the platform follows local laws), but mostly it understands you so well that it puts you right in the middle of like-minded people. At least it feels that way.

I don't have insider insight into how it truly works, I can only guess, but the algorithm feels like a league or two above everything I have seen so far. It feels like it understands people so well that it prompted deep thought experiments on my end. Say I want to get to know someone: I could simply ask "show me your TikTok." It's just a thought experiment, but it feels like TikTok could tell how good a person you are, or more precisely what your level of personal development is. Namely, it could tell if you're racist, if you're a bully, a manipulator or easily manipulated, if you're smart (in the sense of high IQ), if you have fine taste, if you are a leader or a loner... and on and on.

Anyway, this is the ultimate moderation: follow the law and direct the user to like-minded people.
tianshuo, over 2 years ago

Interesting points about SNR, social networks, and moderation. Ten years ago, when I interviewed at the Chinese clone of Facebook (Renren, 人人网), I said that one of the major problems of social media is the SNR, and my interviewer insisted it was a feature problem. I didn't get the job, but within 3-4 years Renren went downhill, and today nobody uses it anymore. Increasing SNR is actually a difficult problem, because user behavior doesn't straightforwardly indicate signal: something eye-catching and click-baity looks, behaviorally, like a good signal for the user. Something interesting about TikTok is that it was designed for optimizing SNR, especially in having only one item (a video) per screen instead of a list of content. This, and autoplaying videos, broke all the rules of app design guidelines; now it is copied everywhere. So how does TikTok improve SNR, if user behavior does not completely correlate with what the real signal is? The secret recipe for TikTok is human moderation: not volunteers, but tens of thousands of people curating and moderating to complement its real-time recommendation system.
antod, over 2 years ago

For some reason, this makes me wonder how Slashdot's moderation would work in the current age. Too nerdy? Would it get overwhelmed by today's shitposters?
mcguire, over 2 years ago

Correct me if I'm wrong, but this sounds very much like what dang does here.
thoughtstheseus, over 2 years ago

The idea of a single point of moderation will not work, imo. We need to empower users (both individuals and groups) to moderate and curate their own information feeds. Create a market for moderation and curation!

Verified accounts will be instrumental as well. It's important to understand who or what you are having a conversation with.
gist, over 2 years ago

> No, you can't solve it by making them anonymous, because then you will be accused of having an unaccountable Star Chamber of secret elites (especially if, I dunno, you just took the company private too). No, no, they have to be public and "accountable!"

This is bulls... Sorry.

Who cares what you are accused of doing? Why does it matter if people perceive that there is a star chamber? Even that reference. Sure, the press cares and will make it an issue, and tech types will care because, well, they have to make a fuss about everything and anything to remain relevant.

After all, what are grand juries? (They are secret.) Does the fact that people might think they are star chambers matter at all?

You see, this is exactly the problem. Nobody wants to take any heat. Sometimes you just have to do what you need to do and let the chips fall where they fall.

The number of people who use Twitter, or might want to use Twitter, who would think anything at all about this issue is infinitesimal.
TheCapeGreek, over 2 years ago

I like yishan's content and his climate focus, but this "we interrupt your tweet thread for sponsored content" style tangent is a bit annoying: not directly for doing it or its content, but because I can see other thread writers picking this up, and we end up the same as YouTube, with sponsored sections of content that you can't ad-block.*

* FWIW, with YT you can block them with SponsorBlock, which works with user-submitted timestamps of sponsored sections in videos. If this tweet technique takes off I'd imagine a similar idea for tweets.
phendrenad2, over 2 years ago

At a deeper level, content moderation isn't about stopping hate speech and harmful speech. That's just chasing the symptom, not the cause. The cause is a certain type of mentality that becomes obsessed with the idea of beating its thoughts into the fabric of the universe, no matter what it takes. These are the people who stoop to spamming, flaming, mocking meme GIFs, hate speech, death threats, etc. (and generally spend all day online posting such things).

This is why Reddit has been so successful. Community moderation is much more effective than top-down moderation at combating people with this mentality, because it discourages them as soon as they show their hostility, not once they have passed some threshold of badness.
cco, over 2 years ago

Many comments here are talking about the substance, so I'll tackle the irony.

Didn't Yishan sorta show how easy it is to bypass "spam detection" by embedding multiple instances of spam within his otherwise "clean" thread?

The asides to pitch his current work undercut his point to some degree.
numlock86, over 2 years ago

Reddit has terrible moderation. So bad that it's a literal joke/meme at this point, down to a personal level in some cases even. Why would anyone ask for moderation advice from that general direction? To get a script of what not to do?
carapace, over 2 years ago

> working on climate: removing CO2 from the atmosphere is critical to overcoming the climate crisis, and the restoration of forests is one of the BEST ways to do that.

As a tangent, Akira Miyawaki developed a method for the 'reconstitution of "indigenous forests by indigenous trees"' which "produces rich, dense and efficient protective pioneer forests in 20 to 30 years":

https://en.wikipedia.org/wiki/Akira_Miyawaki#Method_and_conditions_for_success

It's worth quoting in full:

> Rigorous initial site survey and research of potential natural vegetation

> Identification and collection of a large number of various native seeds, locally or nearby and in a comparable geo-climatic context

> Germination in a nursery (which requires additional maintenance for some species; for example, those that germinate only after passing through the digestive tract of a certain animal, need a particular symbiotic fungus, or a cold-induced dorming phase)

> Preparation of the substrate if it is very degraded, such as the addition of organic matter or mulch, and, in areas with heavy or torrential rainfall, planting mounds for taproot species that require a well-drained soil surface. Hill slopes can be planted with more ubiquitous surface-root species, such as cedar, Japanese cypress, and pine.

> Plantations respecting biodiversity inspired by the model of the natural forest. A dense plantation of very young seedlings (but with an already mature root system: with symbiotic bacteria and fungi present) is recommended. Density aims at stirring competition between species and the onset of phytosociological relations close to what would happen in nature (three to five plants per square metre in the temperate zone, up to five or ten seedlings per square metre in Borneo).

> Plantations randomly distributed in space in the way plants are distributed in a clearing or at the edge of the natural forest, not in rows or staggered.
greendestiny_re, over 2 years ago

@yishan doesn't mention the most obvious solution: let the public vote on content. Wait, Twitter and Reddit already have voting mechanisms in place? What's wrong with them? Oh, they semi-secretly sell access to their voting mechanisms and allow unscrupulous entities to manipulate vote counts to astroturf? Oh...

The real problem is the lack of transparency, as platforms fight tooth and nail to retain total control over content while appearing to foster freedom of speech.
socceroos, over 2 years ago

The guy is literally describing how to shut down discussion of topics by escalating behaviours around them.

The great problem with this approach is that there are very many groups happy to see discussion of diverse topics quashed, and they're already familiar with how to get it done on platforms like Twitter.
Izkata, over 2 years ago

> Spam actually passes the test of "allow any legal speech" with flying colors. Hell, the US Postal Service delivers spam to your mailbox.

We're not yet in *I, Robot* territory; bots don't get freedom of speech.
modeless, over 2 years ago

It seems like he's arguing that people claiming moderation is censoring them are wrong, because moderation of large platforms is dispassionate and focused on limiting behavior no one likes rather than on specific topics.

I have no problem believing this is true for the vast majority of moderation decisions. But I think the argument fails because it only takes a few exceptions, or a little bit of bias in this process, to have a large effect.

On a huge platform it can simultaneously be true that platform moderation is *almost* always focused on behavior instead of content, and that a subset of people and topics *are* being censored.
ece, over 2 years ago

Having read Yishan's older threads, the point he makes about spam is important: it's about increasing people's comfort level with platforms. I think chat is maybe the easiest medium to feel comfortable on, then forums, then social media platforms.

Every platform is different, and moderation isn't going to be to everyone's liking, but as long as it's encouraging respectful engagement and rejecting the trolls who show no such interest, there is enough social media for everyone who keeps an open mind on all sorts of subjects.
ecommerceguy, over 2 years ago

Reddit and Twitter are two of the top reasons why this country, and the world for that matter, is so divided politically. Keep in mind that if there weren't division, politicians would have less influence.
hourago, over 2 years ago

> Our current climate of political polarization makes it easy to think it's about the content of the speech, or hate speech, or misinformation, or censorship, or etc etc.

Are we sure that it is not the other way around? Didn't social platforms create or increase polarization?

I always see comments from social platforms that take it as fact that society is polarized and that they work hard to fix it, when I believe it is the other way around. Social media has created the opportunity to increase polarization, and the platforms are not able to stop it, for technical, social, or economic reasons.
urbandw311er, over 2 years ago

https://threadreaderapp.com/thread/1586955288061452289.html
lightedman, over 2 years ago

"The first thing most people get wrong is not realizing that moderation is a SIGNAL-TO-NOISE management problem"

Which your entire staff ignored when one user destroyed several LED businesses by vote-manipulating everything, despite every one of those people coming to you with verifiable proof of the vote manipulation.

This ex-CEO has zero room to be speaking about anything like this when they never fixed the problems their ignorance directly caused.
LegitShady, over 2 years ago

He was CEO of a company that has volunteer moderators; what he knows about handling moderation is tainted by the way Reddit is structured. Also, Reddit's moderation is either heavy-handed or totally ineffective depending on the case, so I'm not sure he's the right person to ask.

Also, I stopped reading when he did an ad break in a Twitter thread. Who needs ads in Twitter threads? It makes him seem desperate and out of touch. Nobody needs his opinion, and they need his opinion with ad breaks even less.
largbae, over 2 years ago

This thread was a great read, even the tree parts, but it fails to address the line which I feel social media crossed in 2020. I was inattentively OK with the behavioral spam filtering; I didn't notice it, but very likely would have appreciated it if I had.

The line was crossed by fact-checks and user bans that were clearly all about content and not about machine-detectable behavior patterns. This thread seems to avoid or ignore that category of moderation.

And I hope the new Twitter does too.
StanislavPetrov, over 2 years ago

> Why is this? Because it has no value? Because it's sometimes false? Certainly it's not causing offline harm.

> No, no, and no.

I fundamentally disagree with his take on spam. Not only does spam have no value, it has negative value. The content of the spam itself is irrelevant when the same message is being pushed out a million times and obscuring all other messages. Reducing spam through rate-limiting is certainly the easiest and most impactful form of moderation.
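The rate-limiting idea in the comment above can be sketched in a few lines. This is a hypothetical sliding-window limiter, not anything Twitter or Reddit actually runs; the threshold constants and the `RateLimiter` API are invented for illustration.

```python
from collections import deque

class RateLimiter:
    """Allow at most `max_posts` per `window` seconds per user (sliding window)."""

    def __init__(self, max_posts=5, window=60.0):
        self.max_posts = max_posts
        self.window = window
        self.history = {}  # user_id -> deque of recent post timestamps

    def allow(self, user_id, now):
        q = self.history.setdefault(user_id, deque())
        while q and now - q[0] >= self.window:
            q.popleft()            # drop timestamps that fell out of the window
        if len(q) >= self.max_posts:
            return False           # rate exceeded: hold or drop the post
        q.append(now)
        return True
```

Note that this moderates purely on *behavior* (posting frequency), never on what the message says, which is the point the thread keeps returning to.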
lawrenceyan, over 2 years ago

You can tell this guy is a genius at marketing.

Smart to comment on his current pursuits in environmental terraforming, knowing he's going to get eyeballs on any thread he writes.
bravura, over 2 years ago

Yishan's points are great, but there is a more general and fundamental question to discuss...

Moderation is the act of removing content, i.e. of assigning a score of 1 or 0 to content.

If we generalize, we can assign a score from 0 to 1 to all content. Perhaps this score is personalized. Now we have a user's priority feed.

How should Twitter score content using personalization? Filter bubble? Expose people to a diversity of opinions? etc. Moderation is just a special case of this.
mmastrac, over 2 years ago

Unrolled thread: https://mem.ai/p/D0AfFRGYoKkyW5aQQ1En
Havoc, over 2 years ago

> Machine learning algorithms are able to accurately identify spam, and it's not because they are able to tell it's about Viagra or mortgage refinancing; it's because spam has unique posting behavior and patterns in the content

I'm amazed that this is still true (assuming Yishan is right). I would have thought GPT-3 spam would be the norm already, and it becomes a cat-and-mouse game from there.
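To make the quoted claim concrete, here is a toy sketch of purely behavioral spam features: posting rate and duplicate-content ratio, with no reference to the topic of the text. The feature names and cutoffs are made up for illustration; a real system would feed such features into a trained classifier.

```python
from collections import Counter

def behavior_features(posts):
    """posts: list of (timestamp_seconds, text).
    Returns behavioral features only -- nothing about what the text is about."""
    if len(posts) < 2:
        return {"rate": 0.0, "dup_ratio": 0.0}
    times = sorted(t for t, _ in posts)
    span = max(times[-1] - times[0], 1e-9)
    counts = Counter(text for _, text in posts)
    # fraction of posts that are repeats of an earlier post
    dup = sum(c - 1 for c in counts.values()) / (len(posts) - 1)
    return {"rate": len(posts) / span, "dup_ratio": dup}

def looks_like_spam(posts, rate_cutoff=0.5, dup_cutoff=0.8):
    f = behavior_features(posts)
    return f["rate"] > rate_cutoff and f["dup_ratio"] > dup_cutoff
```

A burst of identical messages trips both features; a human posting varied messages every few minutes trips neither, regardless of subject matter.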
DelightOne, over 2 years ago

Could there be a moderation bot that detects flame wars and steps in? It could enforce civility by limiting discussion to go only through the bot, and by employing protocols like "each side, summarize the issues", "is this really important here", or "are you enjoying this".

Engaging with the bot is supposed to be a rational barrier, a tool to put unproductive discussions back on track.
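As a rough illustration of how such a bot might *detect* a flame war before stepping in, here is an invented heuristic: flag any pair of users trading rapid back-and-forth replies. The thresholds and data shape are assumptions, not a real product's logic.

```python
def detect_flamewar(replies, min_exchanges=4, max_gap=120):
    """replies: time-ordered list of (timestamp, author, replied_to_author).
    Flags pairs of users exchanging replies faster than `max_gap` seconds
    for at least `min_exchanges` consecutive rounds."""
    exchanges = {}   # pair -> length of current rapid exchange streak
    last_time = {}   # pair -> timestamp of their previous exchange
    for t, a, b in replies:
        pair = frozenset((a, b))
        if pair in last_time and t - last_time[pair] <= max_gap:
            exchanges[pair] = exchanges.get(pair, 1) + 1
        else:
            exchanges[pair] = 1          # gap too long: streak resets
        last_time[pair] = t
    return {pair for pair, n in exchanges.items() if n >= min_exchanges}
```

A flagged pair is where the bot would interject with its "summarize your issues" protocol; note the detector, again, looks only at behavior, not at what is being said.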
WeylandYutani, over 2 years ago

https://nos.nl/artikel/2451021-organisatie-protestbijeenkomst-wil-kort-geding-om-weigering-david-icke

Stop insane people at the airport. We are going to have to act if we want to live in a different world, or be doomed to become America.
jchw, over 2 years ago

The commentary is interesting, but it unfortunately glosses over the very real issue of actually controversial topics. Most platforms don't typically set out to ban controversial stuff, from what I can tell, but the powers that be (advertisers, government regulators, payment processors, service providers, etc.) tend to be quite a bit more invested in such topics. Naughty language on YouTube and porn on Twitter are decent examples; these are *not*, and never have been, signal-to-noise problems. While the media may be primarily interested in the problem of content moderation as it impacts political speech, I'd literally filter out all vaguely politically charged speech (even at the cost of missing plenty of stuff I'd rather see) if given the option.

I think the viewpoints re: moderation are very accurate and insightful, but I have always felt they are more of a red herring for the actual scary censorship creep happening in the background. Go find the forum threads and IRC logs you have from the 2000s and think about them for a little while. There are many ways in which I'd happily admit the internet has improved, but looking back, a lot of what was discussed, and how it was discussed, would not be tolerated on many of the most popular avenues for discourse today, even though there's really nothing particularly egregious about it.

I think this is the PoV one has as a platform owner, but unfortunately it's not the part that I find interesting. The really interesting parts are always off on the fringes.
MrPatan, over 2 years ago

A bunch of things that make sense about banning spam and spammy behaviour, and then the payload: how banning discussion of the lab-leak hypothesis back then made sense and wasn't politically motivated.

Of course it wasn't about the content, of course. Neither was the Hunter laptop story ban about the content, no, of course not.

Give me a break.
saurik, over 2 years ago

> there will be NO relation between the topic of the content and whether you moderate it, because it's the specific posting behavior that's a problem

I get why Yishan wants to believe this, but I also feel like the entire premise of this argument is then in some way against a straw-man version of the problem people are trying to point to when they claim moderation is content-aware.

The issue, truly, isn't what the platform moderates so much as the bias between when it bothers to moderate and when it doesn't.

If you have a platform that bothers to filter messages that "hate on" famous people but doesn't even notice messages that "hate on" normal people--even if the reason is just that almost no one sees the latter messages, so they don't have much impact and your filters don't catch them--you have a (brutal) class bias.

If you have a platform that bothers to filter people who are "repetitively" against large classic tech companies for the evil things they do trying to amass money, and yet doesn't filter people who are "repetitively" against crypto companies for the evil things *they* do trying to amass money--even if it feels to you as the moderator that the person seems to have a point ;P--that is another bias.

The problem you see in moderation--and I've spent a LONG time both being a moderator myself and working with people who have spent their lives being moderators, both for forums and for live chat--is that moderation and verification of everything not only feels awkward but simply *doesn't scale*, and so you try to build mechanisms to moderate *enough* that the forum seems to have a high-enough signal-to-noise ratio that people are happy and generally stay.

But the way you get that scale is by automating and triaging: you build mechanisms involving keyword filters and AI that attempt to find and flag low-signal comments, and you rely on reports from users to direct later attention. The problem, though, is that these mechanisms inherently have biases, and those biases absolutely end up including biases related to the content.

Yishan seems to be arguing that perfectly unbiased moderation might seem biased to some people, but he isn't bothering to look at where or why moderation often isn't perfect, to ensure that moderation actually works the way he claims it should, and I'm telling you: it never does, because moderation isn't omnipresent and cannot be equally applied to all relevant circumstances. He pays lip service to this in one place (throwing Facebook under the bus near the end of the thread), and yet fails to realize that *this is the argument*.

At the end of the day, real-world moderation is certainly biased. *And maybe that's OK!* But we shouldn't pretend it isn't biased (as Yishan does here) or that the bias is always in the public interest (as many others do). That bias may, in fact, be an important part of moderating... and yet, it can also be extremely evil and difficult to discern from "I was busy" or "we all make mistakes", as it is often subconscious or comes with the best of intentions.
pfoof, over 2 years ago

When problem #1 is spam, problem #0 is bots and paid trolls.

Whenever there is a profile with a handle in a format like @jondoe123456, emojis in the name, and emojis and hashtags in the bio, especially related to political/religious topics, there is a 99% chance that this is a bot or a troll with multiple accounts.
fulafel, over 2 years ago

There seems to be no mention of (de)centralization or the use of reputation in the comments here or in the Twitter thread.

Everyone is discussing a failure mode of a centralized and centrally moderated system without questioning those properties, but they really run counter to traditional internet communication platforms like email, Usenet, IRC, etc.
ethotool, over 2 years ago

Nobody has the answers. Social media is an experiment gone wrong, just like dating apps and other pieces of software that try to replace normal human interaction. These first-generation prototypes have a basic level of complexity, and I expect that by 2030 technology should evolve to the point where better solutions exist.
alldayeveryday, over 2 years ago

"The fallacy is that it is very easy to think it's about WHAT is said, but I'll show you why it's not…"

Let's test this theory. Create a post across 10 social media platforms disparaging the white race. Now do the same about Jews. See which set of posts gets taken down at a higher rate.
jryhjythtr, over 2 years ago

I think this is a limitation of faceless communication, and it boils down to the respect that users of a platform have for the other users and for the platform itself. I.e., there isn't enough. And that's OK, because we should spend more time talking in real life.
anigbrowl, over 2 years ago

Reposting this paper yet again, to rub in the point that social media platforms play host to *communities*, and communities are often very good at detecting interlopers and saboteurs and pushing them back out. And it turns out the most effective approach is to let people give bad actors a hard time. Moderation policies that require everyone to adhere to high standards of politeness in all circumstances are trying to reproduce the dynamics of kindergartens, and are not effective because the moderators are easily gamed.

https://arxiv.org/pdf/1803.03697.pdf

Also, if you're running or working for a platform and dealing with insurgencies, you will lose if you try to build any kind of policy around content analysis. Automated content analysis is generally crap because of semantic overloading (irony, satire, contextual humor), and manual content analysis is labor-intensive and immiserating, to the point that larger platforms like Facebook are legitimately accused of abusing their moderation staff by paying them peanuts to wade through toxic sludge and then dumping them as soon as they complain or ask for any kind of support from HR.

To get anywhere you need to look at patterns of behavior, and to scale you need to do feature/motif detection on dynamic systems rather than on static relationships like friend/follower selections. However, this kind of approach is fundamentally at odds with many platforms' goal of maximizing engagement as a means to the end of selling ad space.
danuker, over 2 years ago

> Spam is typically easily identified due to the repetitious nature of the posting frequency, and simplistic nature of the content (low symbol pattern complexity).

Now that we have cheap language models, you could create endless variations of the same idea. It's an arms race.
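The quoted "low symbol pattern complexity" can be approximated with a standard trick: measure how well the text compresses. This is a rough proxy of my own choosing (zlib compression ratio), not the method the thread describes; the cutoff in the usage note is illustrative.

```python
import zlib

def pattern_complexity(text):
    """Crude proxy for symbol-pattern complexity: the zlib compression ratio.
    Highly repetitive (spammy) text compresses to a small fraction of its size;
    varied text stays close to 1.0."""
    raw = text.encode("utf-8")
    if not raw:
        return 0.0
    return len(zlib.compress(raw)) / len(raw)
```

This also hints at danuker's arms-race point: a language model emitting fresh paraphrases of the same pitch would score as "complex" here, defeating the metric even though the campaign is still spam.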
goatcode, over 2 years ago

> you'll end up with a council of third-rate minds and politically-motivated hacks, and the situation will be worse than how you started.

Wow, surprising honesty from someone affiliated with Reddit. I'm sad that I wasn't on the site during the time of the old guard.
UI_at_80x24, over 2 years ago

I've always thought that Slashdot handled comment moderation best. (And even that still had problems.)

In addition, these tools would help:

(1) Client-side: being able to block all content from specific users, and the replies to specific users.

(2) Server-side: if userA always upvotes comments from userB, apply a negative weighting to that upvote (so it only counts as 0.01 of a vote). Likewise with group voting: if userA, userB, and userC always vote identically, down-weight those votes. (This will slow the echo-chamber effect.)

(3) Account age/contribution scale: if userZ has been a member of the site since its inception, AND has a majority of their posts upvoted, AND contributes regularly, then give their votes a higher weighted value.

Of course these wouldn't solve everything, as nothing will ever address every scenario; but I've often thought that these things, combined with how Slashdot let you score from -1 to 5 AND let you set the post-value threshold to 2+, 3+, or 4+, would help eliminate most of the bad actors.

Side note: bad actors and "folks you don't agree with" should not be confused with each other.
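Rules (2) and (3) above can be sketched as a single vote-weighting function. All constants and the `vote_weight` signature are invented for illustration; a real system would derive the loyalty penalty from pairwise vote correlation rather than a simple per-author fraction.

```python
def vote_weight(voter_history, author, account_age_days,
                loyalty_cap=0.5, age_bonus_days=365):
    """voter_history: authors this voter has previously voted on.
    Rule (2): down-weight voters who mostly vote for one author.
    Rule (3): mild bonus for long-standing accounts."""
    weight = 1.0
    if voter_history:
        loyalty = voter_history.count(author) / len(voter_history)
        if loyalty > loyalty_cap:
            # the more this voter fixates on one author, the less the vote counts
            weight *= (1.0 - loyalty) / (1.0 - loyalty_cap)
    if account_age_days >= age_bonus_days:
        weight *= 1.25
    return round(weight, 3)
```

So a voter whose history is 90% upvotes of the same author contributes only a fifth of a vote, while an ordinary vote from a year-old account counts slightly more than one.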
nxmnxm99, over 2 years ago

"Death threats" isn't a point of failure. If it were, the entire judicial system in the US, Supreme Court included, wouldn't exist.

Thus, the entire premise of this opinion falls apart.
teddyh, over 2 years ago

So what I'm hearing is that ads are moderated spam. Yeah, I can see that.
tinglymintyfrsh, over 2 years ago

The last time I used it, Twitter had devolved into ridiculousness.

The phrase "tar and feather" was deemed instaban-worthy.

Apparently, there is an entire secret list of forbidden phrases.

That's not moderation, it's laziness.
rongopo, over 2 years ago

Imagine if there were many shades of up- and down-voting in HN, scaled according to your earned karma points and to your interactions outside of your regular opinion echo chambers.
swarnie, over 2 years ago

Twitter has to be the worst possible medium for reading an essay.
shadowgovt, over 2 years ago

> (Everyone from Pittsburgh who is reading this has now been convinced of the veracity and utter reasonableness of my thinking on this topic)

He's right, you know.
deckard1, over 2 years ago

I did not see any mention of structure.

Reddit has a different structure than Twitter. In fact, go back to before Slashdot and Digg, and the common (HN, Reddit) format of drive-by commenting was simply not a thing. Usenet conversations would take place over the course of days, weeks, or even months.

Business rules. Twitter is driven by engagement. Twitter is practically the birthplace of the "hot take". It's what drives a lot of users to the site and keeps them there. How do you control the temper of a site when your *goal* is inflammatory to begin with?

Trust and good faith. When you enter into a legal contract, both you and the party you are forming a contract with are expected to operate in *good faith*. You are signaling that your intent is to be fair and honest and to uphold the terms of the contract. Now, the elephant in the room here is what happens when the CEO, Elon Musk, arguably (as Matt Levine has argued, wonderfully) could not even demonstrate good faith during the purchase of Twitter itself. Or has been a known bully to Bill Gates regarding his weight or sex appeal, or simply enjoys trolling with conspiracy theories. What does a moderation system even mean in the context of a private corporation owned by such a person? Will moderation apply to Elon? If not, then how is trust established?

There is a lot to talk about on that last point. In the late '90s a site called Advogato[1] was created to explore trust metrics. It was not terribly successful, but it was an interesting time in moderation. Slashdot was also doing what it could. But then it all stopped with the rise of corporate forums. Corporate forums, such as Reddit, Twitter, or Facebook, seem to have no interest in these sorts of things. Their interest is in conflict: they need to onboard as many eyeballs as possible, as quickly as possible, and with as little user friction as possible. They also serve advertisers, who, you could argue, are the *real* arbiters of what can be said on a site.

[1] https://en.wikipedia.org/wiki/Advogato
onetimeusename超过 2 年前
free speech might be self regulating. A place that gets excessive spam attracts no one and then there wouldn&#x27;t be much motivation to spam it anymore.<p>I don&#x27;t recall spam restrictions on old IRC. A moderator could boot you off. My own theory is having an exponential cool off timer on posts could be the only thing needed that still is technically 100% free speech.
whiddershins over 2 years ago
This is a great article. Thoughtful and substantive.<p>Nothing in this article has anything to do with why all the platforms banned Alex Jones.<p>Which is the part no one seems to be addressing.<p>Once we accepted banning Alex Jones, which was relatively easy to accept because he is so hated, we opened the door to deplatforming as something distinct from moderation.<p>But the distinction isn’t made, and it all gets conflated instead.<p>That is how the platforms lost all of our trust. This must be directly addressed.
novon over 2 years ago
We&#x27;re working on some solutions to this problem: a browser-level filter on toxic comments &#x2F; blatant misinformation found on ad-supported platforms, helpful context as a layer on top of content you&#x27;re reading, and moderated community debate around current events, with enforced norms. Still early, if anyone wants to join what&#x27;s likely to be a non-profit:<p><a href="https:&#x2F;&#x2F;brightgood.com" rel="nofollow">https:&#x2F;&#x2F;brightgood.com</a>
datan3rd over 2 years ago
I think email might be a good system to model this on. In addition to an inbox, almost all providers offer a Spam folder, and others like Gmail separate items into &#x27;Promotions&#x27; and &#x27;Social&#x27; folders&#x2F;labels. I imagine almost nobody objects to this.<p>Why can&#x27;t social media follow a similar methodology? There is no requirement that FB&#x2F;Twitter&#x2F;Insta&#x2F;etc feeds be a single &quot;unit&quot;. The primary experience would be a main feed (uncontroversial), but additional feeds&#x2F;labels would be available to view platform-labeled content. A &quot;Spam Feed&quot; and a &quot;Controversial Feed&quot; and a &quot;This Might Be Misinformation Feed&quot;.<p>Rather than censoring content, it segregates it. Users are free to seek&#x2F;view that content, but must implicitly acknowledge the platform&#x27;s opinion by clicking into that content. Just like you know you are looking at &quot;something else&quot; when you go to your email Spam folder, you would be aware that you are venturing off the beaten path when going to the &quot;Potential State-Sponsored Propaganda Feed&quot;. There must be some implicit trust in a singular feed, which is why current removal&#x2F;censorship schemas cause such &quot;passionate&quot; responses.
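The "segregate, don't censor" mechanic described above can be sketched in a few lines; the label names and routing table here are hypothetical stand-ins for whatever a real platform's classifiers would emit. Nothing is deleted, labeled posts are just routed into their own feed:

```python
def route_posts(posts):
    """Route posts into labeled feeds instead of removing them.
    Each post dict may carry a platform-assigned "label"; unlabeled
    posts (and posts with unknown labels) land in the main feed."""
    feeds = {
        "main": [],
        "spam": [],
        "controversial": [],
        "possible-misinformation": [],
    }
    for post in posts:
        label = post.get("label", "main")
        feed = feeds.get(label, feeds["main"])  # unknown labels fall back to main
        feed.append(post["text"])
    return feeds
```

A reader opening any feed other than "main" implicitly acknowledges the platform's label, much as you do when you open your email Spam folder.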
fsckboy over 2 years ago
&gt; <i>Because it is not TOPICS that are censored. It is BEHAVIOR.</i><p>If only this were true. In the past few years the major platforms, including HN, started censoring TOPICS in addition to behavior.<p>Also, sites have moderator moderation and community moderation, and very little is done to rein the community in from squelching unpopular opinions.
pyinstallwoes over 2 years ago
I think the most interesting thing resulting from that post is the realization that, given an intelligence tasked with reducing the most harm for humanity as a whole, it will identify behaviors that lead to physical confrontation and censor&#x2F;out-gas&#x2F;diminish&#x2F;remove interactions&#x2F;prevent paths to that behavior interacting with the network for as long as it sees the predicted behaviors leading to harm.<p>tl;dr: AI bans any content that is likely to lead to physical confrontation. It&#x27;s not the content that sucks, it&#x27;s that people suck. It&#x27;s not that people suck, it&#x27;s that people are easily influenced into a state of mind and behavior that leads to harming other humans.<p>The bigger question, which is also the oldest question: is human nature so lukewarm? Can we aspire to be &#x27;wise elders&#x27; throughout the entire species while still retaining child-like curiosity and wonder?<p>Funny how it circles back around.
wormslayer666 over 2 years ago
I got my first experience in running a small-medium sized (~1000 user) game community over the past couple years. This is mostly commentary on running such a community in general.<p>Top-level moderation of any sufficiently cliquey group (i.e. all large groups) devolves into something resembling feudalism. As the king of the land, you&#x27;re in charge of being just and meting out appropriate punishment&#x2F;censorship&#x2F;other enforcement of rules, as well as updating those rules themselves. Your goal at the end of the day is continuing to provide support for your product, administration&#x2F;upkeep for your gaming community, or whatever else it was that you wanted to do when you created the platform in question. However, the cliques (whether they be friend groups, opinionated but honest users, actual political camps, or any other tribal construct) will always view your actions through a cliquey lens. This will happen no matter how clear or consistent your reasoning is, unless you fully automate moderation (which never works and would probably be accused of bias by design anyways).<p>The reason why this looks feudal is because you still must curry favor with those cliques, lest the greater userbase eventually buys into their reasoning about favoritism, ideological bias, or whatever else we choose to call it. At the end of the day, the dedicated users have <i>much</i> more time and energy to argue, or propagandize, or skirt rules than any moderation team has to counteract it. If you&#x27;re moderating users of a commercial product, it hurts your public image (with some nebulous impact on sales&#x2F;marketing). 
If you&#x27;re moderating a community for a game or software project, it hurts the reputation of the community and makes your moderators&#x2F;developers&#x2F;donators uneasy.<p>The only approach I&#x27;ve decided unambiguously works is one that doesn&#x27;t scale well at all, and that&#x27;s the veil of secrecy or &quot;council of elders&quot; approach which Yishan discusses. The king stays behind the veil, and makes as few public statements as possible. Reasoning is only given insofar as is needed to explain decisions, only responding directly to criticism as needed to justify actions taken anyways. Trusted elites from the userbase are taken into confidence, and the assumption is that they give a marginally more transparent look into how decisions are made, and that they pacify their cliques.<p>Above all, the most important fact I&#x27;ve had to keep in mind is that the outspoken users, both those legitimately passionate as well as those simply trying to start shit, are a tiny minority of users. Most people are rational and recognize that platforms&#x2F;communities exist for a reason, and they&#x27;re fine with respecting that since it&#x27;s what they&#x27;re there for. When moderating, the outspoken group is nearly all you&#x27;ll ever see. Catering to passionate, involved users is justifiable, but must still be balanced with what the majority wants, or is at least able to tolerate (the &quot;silent majority&quot; which every demagogue claims to represent). That catering must also be done carefully, because &quot;bad actors&quot; who seek action&#x2F;change&#x2F;debate for the sake of stoking conflict or their own benefit will do their best to appear legitimate.<p>For some of this (e.g. spam), you can filter it comfortably as Yishan discusses without interacting with the content. However, more developed bad actor behavior is really quite good at blending in with legitimate discussion. 
If you as king recognize that there&#x27;s an inorganic flamewar, or abuse directed at a user, or spam, or complaint about a previous decision, you have no choice but to choose a cudgel (bans, filters, changes to rules, etc) and use it decisively. It is only when the king appears weak or indecisive (or worse, absent) that a platform goes off the rails, and at that point it takes immense effort to recover it (e.g. your C-level being cleared as part of a takeover, or a seemingly universally unpopular crackdown by moderation). As a lazy comparison, Hacker News is about as old as Twitter, and any daily user can see the intensive moderation which keeps it going despite the obvious interest groups at play. This is in spite of the fact that HN has <i>less</i> overhead to make an account and begin posting, and seemingly <i>more</i> ROI on influencing discussion (lots of rich&#x2F;smart&#x2F;fancy people <i>post</i> here regularly, let alone read).<p>Due to the need for privacy, moderation fundamentally cannot be democratic or open. Pretty much anyone contending otherwise is just upset at a recent decision or is trying to cause trouble for administration. Aspirationally, we would like the general <i>direction</i> of the platform to be determined democratically, but the line between these two is frequently blurry at best. To avoid extra drama, I usually aim to do as much discussion with users as possible, but ultimately perform all decisionmaking behind closed doors -- this is more or less the &quot;giant faceless corporation&quot; approach. Nobody knows how much I (or Elon, or Zuck, or the guys running the infinitely many medium-large discord servers) actually take into account user feedback.<p>I started writing this as a reply to paradite, but decided against that after going far out of scope.
fleddr over 2 years ago
In the real world, when you&#x27;re unhinged, annoying, intrusive...you face almost immediate negative consequences. On social media, you&#x27;re rewarded with engagement. Social media owners &quot;moderate&quot; behavior that maximizes the engagement they depend on, which makes it somewhat of a paradox.<p>It would be similar to a newspaper &quot;moderating&quot; their journalists to bring news that is balanced, accurate, fact-checked, as neutral as possible, with no bias to the positive or negative. This wouldn&#x27;t sell any actual news papers.<p>Similarly, nobody would watch a movie where the characters are perfectly happy. Even cartoons need villains.<p>All these types of media have exploited our psychological draw to the unusual, which is typically the negative. This attention hack is a skill evolved to survive, but now triggered all day long for clicks.<p>Can&#x27;t be solved? More like unwilling to solve. Allow me to clean up Twitter:<p>- Close the API for posting replies. You can have your weather bot post updates to your weather account, but you shouldn&#x27;t be able to instant-post a reply to another account&#x27;s tweet.<p>- Remove the retweet and quote tweet buttons. This is how things escalate. If you think that&#x27;s too radical, there&#x27;s plenty of variations: a cap on retweets per day. A dampening of how often a tweet can be retweeted in a period of time to slow the network effect.<p>- Put a cap on max tweets per day.<p>- When you go into a polarized thread and rapidly like a hundred replies that are on your &quot;side&quot;, you are part of the problem and don&#x27;t know how to use the like button. Hence, a cap on max likes per day or max likes per thread. So that they become quality likes that require thought. Alternatively, make shadow-likes. Likes that don&#x27;t do anything.<p>- When you&#x27;re a small account spamming low effort replies and the same damn memes on big accounts, you&#x27;re hitchhiking. 
You should be shadow-banned for that specific big account only. People would stop seeing your replies only in that context.<p>- Mob culling. When an account or tweet is mass-reported in a short time frame and it turns out that it was well within guidelines, punish every single user making those reports. Strong warning; after repeated abuse, a full ban or taking away the ability to report.<p>- DM culling. It&#x27;s not normal for an account to suddenly receive hundreds or thousands of DMs. Where a pile-on in replies can be cruel, a pile-on in DMs is almost always harassment. Quite a few people are OK with it if only the target is your (political) enemy, but we should reject it on principle. People joining such campaigns aren&#x27;t good people; they are sadists. Hence they should be flagged as potentially harmful. The moderation action here is not straightforward, but surely something can be done.<p>- Influencer moderation. Every time period, comb through new influencers manually, for example those breaking 100K followers. For each, inspect how they came to power. Valuable, widely loved content? Or toxic engagement games? If it&#x27;s the latter, dampen the effect, tune the algorithm, etc.<p>- Topic spam. Twitter has &quot;topics&quot;, a great way to engage in a niche. But they&#x27;re all engagement-farmed. Go through these topics manually every once in a while and use human judgement to tackle the worst offenders (and behaviors).<p>- Allow for negative feedback (dislike) but with a cap. In case of a dislike mob, take away their ability to dislike or cap it.<p>Note how none of these potential measures address what it is that you said; they address behavior: the very obvious misuse&#x2F;abuse of the system. In that sense I agree with the author. Also, it doesn&#x27;t require AI. The patterns are incredibly obvious.<p>All of this said, the above would probably make Twitter quite an empty place. Because escalated outrage is the product.
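Several of the caps proposed above (max tweets, retweets, and likes per day) come down to the same bookkeeping. A minimal sketch under stated assumptions: the `DailyActionCaps` class, the action names, and the limit numbers are all invented placeholders, not Twitter's actual values.

```python
from collections import defaultdict

# Hypothetical per-day caps; the specific numbers are placeholders.
DAILY_CAPS = {"tweet": 50, "retweet": 20, "like": 100}


class DailyActionCaps:
    """Count each user's actions per day and reject any action once its
    cap is hit. Counters implicitly reset when the day key changes."""

    def __init__(self, caps=DAILY_CAPS):
        self.caps = caps
        self.counts = defaultdict(int)  # (user, action, day) -> count

    def allow(self, user, action, day):
        key = (user, action, day)
        if self.counts[key] >= self.caps.get(action, float("inf")):
            return False  # cap reached for this action today
        self.counts[key] += 1
        return True
```

Uncapped actions pass through freely, so the behavioral throttle never needs to read the content of a tweet, only count it.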
shkkmo over 2 years ago
Let&#x27;s take the core points at the end in reverse order:<p>&gt; 3: Could you still moderate if you canʻt read the language?<p>Except moderators do read the language. I think it is pretty self-serving to say that users&#x27; views of moderation decisions are biased by content but moderators&#x27; views are not.<p>&gt; 2: Freedom of speech was NEVER the issue (c.f. spam)<p>Spam isn&#x27;t considered a free speech issue because we generally accept that spam moderation is done based on behavior in a content-blind way.<p>This doesn&#x27;t magically mean that any given moderation team isn&#x27;t impinging on free speech. Especially when there are misinformation policies in place which are explicitly content-based.<p>&gt; 1: It is a signal-to-noise management issue<p>Signal-to-noise management is part of why moderation can be good, but it doesn&#x27;t even justify the examples from the Twitter thread. Moderation is about creating positive experiences on the platform, and signal-to-noise is just part of that.
Fervicus over 2 years ago
&gt; Our current climate of political polarization makes it easy to think...<p>Stopped reading there. I think Reddit is one of the biggest offenders in purposely cultivating a climate of political polarization.
pluc over 2 years ago
Reddit uses an army of free labour to moderate.
fragmede over 2 years ago
One category that yishan doesn&#x27;t bring up in his content ladder of spam|non-controversial|controversial is copyright-infringing content like the latest Disney movie. It fits along with spam as obviously okay to moderate. But take a moment and self-reflect on why that&#x27;s the case, and how much you&#x27;ve bought into capitalism as a solution for distributing scarce goods when some goods aren&#x27;t scarce.
ekianjo over 2 years ago
Spam is moderated because it&#x27;s not real users doing it. You don&#x27;t want bots among humans. His take is weird on that end.
moron123 over 2 years ago
Meh. This theory doesn&#x27;t fit the reality of clearly politically motivated moderation on Reddit, Twitter, and elsewhere. Banning Jordan Peterson for calling a trans person by their old name is not a &quot;pattern of misbehavior&quot;, and Jordan Peterson is not known for causing any offline violence. Heck, Reddit banned one of the largest subreddits because they supported Trump.
armchairhacker over 2 years ago
I wonder if the problems the author describes can be solved by artificially downvoting and not showing spam and flamewar content, rather than banning people.<p>- Spam: don&#x27;t show it to anyone, since nobody wants to see it. Repeatedly saying the same thing will get your posts heavily downvoted or just coalesced into a single post.<p>- Flamewars: again, artificially downvote them so that your average viewer doesn&#x27;t even see them (if they aren&#x27;t naturally downvoted). And also discourage people from participating, maybe by explicitly adding the text &quot;this seems like a stupid thing to argue about&quot; onto the thread and next to the reply button. As for the users who persist in flaming each other and then get upset: at that point you don&#x27;t really want them on your platform anyway.<p>- Insults, threats, etc.: again, hide and reword them. If the system detects someone is sending an insult or threat, collapse it into &quot;&lt;insult&gt;&quot; or &quot;&lt;threat&gt;&quot; so that people know the content of what&#x27;s being sent but not the emotion (though honestly, you probably should ban threats altogether). You can actually do this for all kinds of vitriolic, provocative language. If someone wants to hear it, they can expand the &quot;&lt;insult&gt;&quot; bubble; the point is that most people probably don&#x27;t.<p>It&#x27;s an interesting idea for a social network. Essentially, instead of banning people and posts outright, down-regulate them and collapse what they are saying while retaining the content. So their &quot;free speech&quot; is preserved, but they are not bothering anyone. 
If they complain about &quot;censorship&quot;, you can point out that the First Amendment doesn&#x27;t require anyone to hear you, and people <i>can</i> hear you if they want to, but the people have specified, and the algorithm detects, that they don&#x27;t.<p>EDIT: I should also add that Reddit actually used to be like this, where subreddits had moderators but admins were very hands-off (I actually just read about this yesterday). And it resulted in jailbait and hate subs (and, though this didn&#x27;t happen, could have resulted in dangerous subs like KiwiFarms). I want to make clear that I still think that content should be banned. But that content isn&#x27;t what the author is discussing here: he is discussing situations where &quot;behavior&quot; gets people banned and then they complain that their (tame) content is being censored. Those are the people who should be down-regulated and have their text collapsed instead of being banned.
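The collapse-instead-of-ban idea above can be sketched with a toy keyword matcher standing in for a real classifier; the category names and word lists here are invented purely for illustration. A tripped comment renders as a collapsed placeholder unless the reader opts to expand it:

```python
# Hypothetical keyword lists standing in for a real toxicity classifier.
CATEGORIES = {
    "insult": {"idiot", "moron"},
    "threat": {"kill", "hurt"},
}


def render_comment(text, expanded=False):
    """Down-regulate rather than ban: if a comment trips a category,
    show a collapsed placeholder unless the reader chooses to expand."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    for category, markers in CATEGORIES.items():
        if words & markers:
            return text if expanded else f"<{category}>"
    return text
```

The content is never deleted; the default rendering just strips the emotional payload, and expanding the bubble is an explicit reader choice.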
puffoflogic over 2 years ago
TL;DR: Run your platform to conform to the desires of the loudest users. Declare anything your loudest users don&#x27;t want to see to be &quot;flamewar&quot; content and remove it.<p>My take: &quot;Flamebait&quot; <i>is</i> a completely accurate label for the content your loudest users don&#x27;t want to see, but it&#x27;s by definition your loudest users who are actually doing the flaming, and by definition they disagree with the things they&#x27;re flaming. So all this does is reward people for flamewars, while the moderators effectively crusade on behalf of the flamers. But it&#x27;s &quot;okay&quot; because, by definition, the moderators are going to be people who agree with the political views of the loudest viewers (if they weren&#x27;t, they&#x27;d get heckled off), so the mods you actually get will be perfectly happy with this situation. Neither the mods nor the loudest users have any reason to dislike or see any problem with this arrangement. So why is it a problem? Because it leads to what I&#x27;ll call a flameocracy: whoever flames loudest gets their way, as the platform will align with their desires (in order to reduce how often they flame). The mods and the platform are held hostage by these users but are suffering literal Stockholm Syndrome as they fear setting off their abusers (the flamers).
RickJWagner over 2 years ago
Reddit is a sewer. I don&#x27;t think the Ex-CEO has demonstrated any moderation skills.
zcombynator over 2 years ago
Spam is unwelcome for a simple reason: there is no real person behind it.
jamisteven over 2 years ago
How about: don&#x27;t moderate it? Just let it be.
wcerfgba over 2 years ago
I like Yishan&#x27;s reframing of content moderation as a &#x27;signal-to-noise ratio problem&#x27; instead of a &#x27;content problem&#x27;, but there is another reframing which follows from that: moderation is also an <i>outsourcing problem</i>, in that moderation is about users outsourcing the filtering of content to moderators (be they all other users through voting mechanisms, a subset of privileged users through mod powers, or an algorithm).<p>Yishan doesn&#x27;t define what the &#x27;signal&#x27; is, or what &#x27;spam&#x27; is, and there will probably be an element of subjectivity to these which varies between each platform and each user on each platform. Thus successful moderation happens when moderators know what users want, i.e. what the users consider to be &#x27;good content&#x27; or &#x27;signal&#x27;. This reveals a couple of things about why moderation is so hard.<p>First, this means that moderation actually <i>is</i> a content problem. For example, posts about political news are regularly removed from Hacker News because they are off-topic for the community, i.e. we don&#x27;t consider that content to be the &#x27;signal&#x27; that we go to HN for.<p>Second, moderation can only be successful when there is a shared understanding between users and moderators about what &#x27;signal&#x27; is. It&#x27;s when this agreement breaks down that moderation becomes difficult or fails.<p>Others have posted about the need to provide users with the tools to do their own moderation in a decentralised way. Since the &#x27;traditional&#x27;&#x2F;centralised approach creates a fragile power dynamic which requires this shared understanding of signal, I completely understand and agree with this: as users we should have the power to filter out content we don&#x27;t like to see.<p>However, we have to distinguish between general and topical spaces, and to determine which communities live in a given space and what binds different individuals into collectives. 
Is there a need for a collective understanding of what&#x27;s on-topic? HN is not Twitter, it&#x27;s designed as a space for particular types of people to share particular types of content. Replacing &#x27;traditional&#x27; or centralised moderation with fully decentralised moderation risks disrupting the topicality of the space and the communities which inhabit it.<p>I think what we want instead is a &#x27;democratised&#x27; moderation, some way of moderating that removes a reliance on a &#x27;chosen few&#x27;, is more deliberate about what kinds of moderation need to be &#x27;outsourced&#x27;, and which allows users to participate in a shared construction of what they mean by &#x27;signal&#x27; or &#x27;on-topic&#x27; for their community. Perhaps the humble upvote is a good example and starting point for this?<p>Finally in the interest of technocratic solutions, particularly around spam (which I would define as repetitive content), has anyone thought about rate limits? Like, yeah if each person can only post 5 comments&#x2F;tweets&#x2F;whatever a day then you put a cap on how much total content can be created, and incentivise users to produce more meaningful content. But I guess that wouldn&#x27;t allow for all the <i>sick massive engagement</i> that these attention economy walled garden platforms need for selling ads...
protoman3000 over 2 years ago
I like the idea that you don&#x27;t want to moderate content, but behavior. And it led me to these thoughts. I&#x27;m curious about your additions to these thoughts.<p>Supply moderation of psychoactive agents never worked. People have a demand to alter the state of their consciousness, and we should try to moderate demand in effective ways. The problem is not the use of psychoactive agents, it is the abuse. And the same applies to social media interaction, which is a very strong psychoactive agent [1]. Nevertheless it can be useful. Therefore we want to fight abuse, not use.<p>I would like to put up for discussion the usage and extension of demand-moderation techniques in the context of social media interactions, techniques which we already know to somewhat work for other psychoactive agents. Think something like drug education in schools, fasting rituals, warning labels on cigarettes, limited selling hours for alcohol, trading food stamps for drug addicts, etc.<p>For example, assuming the platform could somehow identify abusive patterns in the user, it could<p>- show warning labels indicating that their behavior might be abusive in the sense of social media interaction abuse<p>- give them mandatory cool-down periods<p>- trick the allostasis principle of their dopamine reward system by doing things intermittently, e.g. by only randomly letting their posts go through to other users, or only randomly allowing them to continue reading the conversation (maybe only for some time), or only randomly shadow-banning some posts<p>- make them read documents about harmful social media interaction abuse<p>- hint to them what abusive patterns in other people look like<p>- give limited reading or posting credits (e.g. 
&quot;Should I continue posting in this flamewar thread and then not post somewhere else where I find it more meaningful at another time?&quot;)<p>- etc.<p>I would like to hear your opinions about this in a sensible discussion.<p>_________<p>[1] Yes, social media interaction is a psychoactive and addictive agent, just like any other drug or your common addiction like overworking yourself, but I digress. People use social media interactions to, among other things, raise their anger, to feed their addiction to complaining, to feel a high of &quot;being right&quot;&#x2F;owning it up to the libs&#x2F;nazis&#x2F;bigots&#x2F;idiots etc., to feel like they learned something useful, to entertain themselves, to escape from reality, etc. Many people suffer from compulsive or at least habitual abuse of social media interactions, which has been shown by numerous studies (sorry, too lazy to find a paper to cite right now). Moreover, the societal effects of abuse of social media interactions and their dynamics and influence on democratic politics are obviously detrimental.
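The "only randomly letting posts go through" idea above can be sketched as a probabilistic gate. This is a hypothetical illustration: the abuse score, its 0..1 scale, and the function name are all invented for the example.

```python
import random


def gate_post(abuse_score, rng=None):
    """Publish a post with probability (1 - abuse_score).
    abuse_score is a hypothetical 0..1 rating of how abusive the user's
    recent behavior looks; the higher it is, the fewer posts get through."""
    rng = rng or random.Random()
    return rng.random() >= abuse_score
```

Because the outcome is random rather than a hard block, the user never gets a clean signal that they are being throttled, which is exactly the intermittent-reinforcement effect the comment describes.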
PathOfEclipse over 2 years ago
Re: &quot;Hereʻs the answer everyone knows: there IS no principled reason for banning spam.&quot;<p>The author is making the mistake of thinking that &quot;free speech&quot; has ever been about saying whatever you want, whenever you want. This was never the case, including at the time of the <i>founding</i> of the U.S. constitution. There has always been a tolerance window which defines what you can say and what you can&#x27;t say without repercussions, usually enforced by society and societal norms.<p>The 1st amendment was always about limiting what the government can do to curtail speech, but, as we know, there are plenty of other actors in the system that have moderated and continue to moderate communications. The problem with society today is that those in power have gotten really bad at defining a reasonable tolerance window, and in fact, political actors have worked hard to <i>shift</i> the tolerance window to benefit them and harm their opponents.<p>So, he makes this mistake and then builds on it by claiming that censoring spam violates free speech principles, but that&#x27;s not really true. And then he tries to equate controversy with spam, saying it&#x27;s not so much about the content itself but how it affects users. And that, I think, leads into another major problem in society.<p>There has always been a tension between someone getting reasonably versus unreasonably offended by something. However, in today&#x27;s society, thanks in part to certain identitarian ideologies, along with a culture shift towards the worship or idolization of victimhood, we&#x27;ve given <i>tremendous</i> power to a few people to shut down speech by being offended, and vastly broadened what we consider reasonable offense versus unreasonable offense.<p>Both of these issues are ultimately cultural, but, at the same time, social media platforms have enough power to influence culture. 
If the new Twitter can define a less insane tolerance window and give more leeway for people to speak even if a small but loud or politically motivated minority of people get offended, then they will have succeeded in improving the culture and in improving content moderation.<p>And, of course, there is a third, and major elephant in the room. The government has been caught collaborating with tech companies to censor speech indirectly. This is a concrete violation of the first amendment, and, assuming Republicans gain power this election cycle, I hope we see government officials prosecuted in court over it.
jbirer over 2 years ago
If you take a look and analyze the people that were fired, you will find developers who cannot code, people who run Bitcoin nodes on company electricity, people with no skills or qualification for what they do. Elon Musk is trying to implement a meritocracy, it remains to be seen if he will do it right or botch it.
atchoo over 2 years ago
I think you have to be quite credulous to engage with this topic of &quot;Twitter moderation&quot; as if it&#x27;s in good faith. It&#x27;s not about creating a good experience for users, constructive debate, or even money. It&#x27;s ALL about political influence.<p>&gt; Iʻm heartened to know that @DavidSacks is involved.<p>I&#x27;m not. I doubt he is there because Twitter is like Zenefits; it&#x27;s because his preoccupation over the last few years has been politics as part of the &quot;New Right&quot; of Thiel, Masters, Vance, etc., running fundraisers for DeSantis and endorsing Musk&#x27;s pro-Russian nonsense on Ukraine.<p><a href="https:&#x2F;&#x2F;newrepublic.com&#x2F;article&#x2F;168125&#x2F;david-sacks-elon-musk-peter-thiel" rel="nofollow">https:&#x2F;&#x2F;newrepublic.com&#x2F;article&#x2F;168125&#x2F;david-sacks-elon-musk...</a>
nkotov over 2 years ago
Is anyone else having a hard time following along? Can someone provide a tl;dr?
matchagaucho over 2 years ago
tl;dr: Many posts on social media are &quot;spam&quot;. Nobody objects to spam filters.<p>Therefore, treat certain types of content as spam (based on metadata, not moderators).
cansirin over 2 years ago
trying out.
matai_kolila over 2 years ago
Yeah well, Yishan failed miserably at topic moderation on Reddit, and generally speaking Reddit has notoriously awful moderation policies that end up allowing users to run their own little fiefdoms just because they name-squatted earliest on a given topic. Additionally, Reddit (also notoriously) allowed some horrendously toxic behavior to continue on its site (jailbait, fatpeoplehate, the_donald, conservative currently) for literal years before taking action, so even when it comes to basic admin activity I doubt he&#x27;s the guy we should all be listening to.<p>I think the fact that this is absurdly long and wanders at least twice into environmental stuff (which <i>is</i> super interesting btw, definitely read those tangents) kind of illustrates just how not-the-best Yishan is as a source of wisdom on this topic.<p><i>Very</i> steeped in typical SV &quot;this problem is super hard so you&#x27;re not allowed to judge failure or try anything simple&quot; talk. Also it&#x27;s basically an ad for Block Party by the end (if you make it that far), so... yeah.
wackget over 2 years ago
Anyone got a TL;DR? I don&#x27;t feel like trudging through 100 sentences of verbal diarrhea.
lm28469 over 2 years ago
Reading these threads on Twitter is like listening to a friend having a bad MDMA trip, replaying his whole emotional life to you in a semi-incoherent, diarrhea-like stream of thoughts.<p>Please write a book, or at the very least an article... posting on Twitter is like writing something on a piece of paper, showing it to your best friend and worst enemy, and then throwing it in the trash.
aksjdhmkjasdof over 2 years ago
I have actually worked in this area. I like a lot of Yishan's other writing but I find this thread mostly a jumbled mess without much insight. Here are a couple assorted points:

> In fact, once again, I challenge you to think about it this way: could you make your content moderation decisions even if you didnʻt understand the language they were being spoken in?

I'm not sure what the big point is here but there are a couple parts to how this works in the real world:

1) Some types of content removal do not need you to understand the language: visual content (images/videos), legal takedowns (DMCA).

2) Big social platforms contract with people around the world in order to get coverage of various popular languages.

3) You can use Google Translate (or other machine translation) to review content in some languages that nobody working in content moderation understands.

But some content that violates the site's policies can easily slip through the cracks if it's in the right less-spoken language. That's just a cost of doing business. The fact that the language is less popular will limit the potential harm but it's certainly not perfect.

> Hereʻs the answer everyone knows: there IS no principled reason for banning spam. We ban spam for purely outcome-based reasons:
>
> It affects the quality of experience for users we care about, and users having a good time on the platform makes it successful.

Well, that's the same principle that underlies all content moderation: "allowing this content is more harmful to the platform than banning it". You can go into all the different reasons why it might be harmful but that's the basic idea and it's not unprincipled at all. And not all spam is banned from all platforms--it could just have its distribution killed or even be left totally alone, depending on the specific cost/benefit analysis at play.

You can apply the same reasoning to every other moderation decision or policy.

The main thrust of the thread seems to be that content moderation is broadly intended to ban negative behavior (abusive language and so on) rather than to censor particular political topics. To that I say, yeah, of course.

FWIW I do think that the big platforms have taken a totally wrong turn in the last few years by expanding into trying to fight "disinformation" and that's led to some specific policies that are easily seen as political (eg policies about election fraud claims or covid denialism). If we're just talking about staying out of this business then sure, give it a go. High-level blabbering about "muh censorship!!!" without discussion of specific policies is what you get from people like Musk or Sacks, though, and that's best met with an eye roll.
fazfq, over 2 years ago
When people ask you why you hate twitter threads, show them this hodgepodge of short sentences with sandwiched CO2 removal advertisements.
Waterluvian, over 2 years ago
If I wanted quality content, I would just do the Something Awful approach and charge $x per account.

If I wanted lots of eyeballs (whether real or fake) to sell ads, I would just pay lip service to moderation issues, while focusing on only moderating anything that affects my ability to attract advertisers.

But what I want, above all, because I think it would be hilarious to watch, is for Elon to activate Robot9000 on all of Twitter.
blantonl, over 2 years ago
This really was an outstanding read and take on Elon, Twitter, and what's coming up.

But it literally could not have been posted in a worse medium for communicating this message. I felt like I had to pat my head and rub my tummy at the same time reading through all this, and to share it succinctly with colleagues resulted in me spending a good 15 minutes cutting and pasting the content.

I've never understood people posting entire blog type posts to.... Twitter.
ramblerman, over 2 years ago
Did he begin answering the question, drop some big philosophical terms, and then just drift off into "here is what I think we should do about climate change, in 4 steps"...?
billiam, over 2 years ago
The best part of his engrossing Twitter thread is that he inserts a multitweet interstitial "ad" for his passion project promoting reforestation right in the middle of his spiel.
sweetheart, over 2 years ago
I'm amazed at the number of people in this thread who are annoyed that someone would insert mention of a carbon capture initiative into an unrelated discussion. The author is clearly tired of answering the same question, as stated in the first tweet, and is desperately trying to get people to think more critically about the climate crisis that is currently causing the sixth mass extinction event in the history of the planet.

Being annoyed that someone "duped" you into reading about the climate crisis is incredibly frustrating to activists because it's SO important to be thinking about and working on, and yet getting folks to put energy into even considering the climate crisis is like pulling teeth.

I wonder if any of the folks complaining about the structure of the tweets have stopped to think about why the author feels compelled to "trick" us into reading about carbon capture.
gryBrd1987, over 2 years ago
Twitter is text based. Video games have had text based profanity filters for online games for years.

Make it easy for users to define a regex list saved locally. On the backend, train a model that filters images of gore and genitals. Aim users who opt in to that experience at that filtered stream.

This problem does not require a long winded thesis.
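The user-defined regex list this comment describes could be sketched as a purely client-side filter. This is only an illustration of the idea, not any real Twitter feature; the pattern list and function names are made up:

```python
import re

# Hypothetical user-defined blocklist: patterns a user might save
# locally and apply to their own feed, as the comment suggests.
USER_PATTERNS = [
    r"\bcrypto\s+giveaway\b",
    r"\bbuy\s+followers\b",
]

def compile_filters(patterns):
    # Compile once, case-insensitively, so checking each tweet is cheap.
    return [re.compile(p, re.IGNORECASE) for p in patterns]

def is_hidden(text, filters):
    # Hide a tweet if any user pattern matches anywhere in its text.
    return any(f.search(text) for f in filters)

filters = compile_filters(USER_PATTERNS)
print(is_hidden("Huge CRYPTO GIVEAWAY, click here!", filters))  # True
print(is_hidden("Interesting thread on moderation", filters))   # False
```

Because the list lives on the user's machine, this sidesteps centralized moderation entirely for text: each reader decides what their own stream hides.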
wasmitnetzen, over 2 years ago
As once famously said by Mark Twain: "I didn't have time to write a short Twitter thread, so I wrote a long one instead."
dariusj18, over 2 years ago
Does anyone else think it's brilliant that he put advertisements inside his own thread?
greenie_beans, over 2 years ago
that digression into plugging his start-up was gross!
fuckHNtho, over 2 years ago
tldr tangential babbling that HN protects and wants us to admire...because reddit YC darlings. it almost makes me feel nostalgic.

Why are we to take yishan as an authority on content moderation, have you BEEN to reddit?! the kind of moderation of repetitive content he's referring to is clearly not done AT ALL.

He does not put forth any constructive advice. be "operationally excellent". ok, thanks. you're wrong about spam. you're wrong about content moderation. ok, thanks. who is his audience? he's condescending the people who are dialed into online discourse inbetween finding new fun ways to plant trees and design an indulgent hawaiian palace. i expected more insight, to be honest. but time and time again we find the people at the top of internet companies are disappointingly common in their perspective on the world. they just happened to build something great once and it earned them a lifetime soapbox ticket.

ok, thanks.
0xbadcafebee, over 2 years ago
It seemed interesting but after the 400th tweet I lost interest and went to do something productive
rhaksw, over 2 years ago
I didn't see any discussion of shadow moderation, so here's my 2c. It's wrong, let's get rid of it:

https://cantsayanything.win/2022-10-transparent-moderation/
linuxftw, over 2 years ago
What a bunch of long-winded babble. Incredulously, he's shilling an app at the end of this.

I don't agree that this is an interesting submission, and IMO there's no new information here.
tjoff, over 2 years ago
> Machine learning algorithms are able to accurate identify spam

Nope. Not even close.

> and itʻs not because they are able to tell itʻs about Viagra or mortgage refinancing

Funny, because they can't even tell that.

Which is why mail is being ruined by google and microsoft. Yes you could argue that they have incentives to do just that. But that doesn't change the fact that they can't identify spam.
P_I_Staker, over 2 years ago
Key word here: ex (joking)... but seriously I'm absolutely baffled why someone would look to a former reddit exec for advice on moderation.

I guess you could say that they have experience, having made all the mistakes, and figured it out through trial and error! This seems to be his angle.

What I got from the whole reddit saga is how horrible the decision making was, and won't be looking to them for sage advice. These people are an absolute joke.
aerovistae, over 2 years ago
These random detours into climate-related topics are insanely disruptive of an otherwise interesting essay. I absolutely hate this pattern. I see what he's trying to do - you don't want to read about climate change but you want to read this other thing, so I'm going to mix them together so you can't avoid the one if you want the other - but it's an awful dark pattern and makes for a frustrating and confusing reading experience. I kept thinking he was making an analogy before realizing he was just changing topics at random again. It certainly isn't making me more interested in his trees project. If anything I'm less interested now.
gambler, over 2 years ago
> No one argues that speech must have value to be allowed (c.f. shitposting).

> Hereʻs the answer everyone knows: there IS no principled reason for banning spam.

The whole thread seems like it revolves around this line of reasoning, which strawmans what free speech advocates are actually arguing for. I've never heard of any of them, no matter how principled, fighting for the "right" of spammers to spam.

There is an obvious difference between spam moderation and content suppression. No recipient of spam wants to receive spam. On the other hand, labels like "harmful content" are most often used to stop communication between willing participants by a 3rd party who doesn't like the conversation. They are fundamentally different scenarios, regardless of how much you agree or disagree with specific moderation decisions.

By ignoring the fact that communication always has two parties you construct a broken mental model of the whole problem space. The model will then lead you astray in analyzing a variety of scenarios.

In fact, this is a very old trick of pro-censorship activists. Focus on the speaker, ignore the listeners. This way when you ban, say, someone with millions of subscribers on YouTube you can disingenuously pretend that it's an action affecting only one person. You can then draw a false equivalency between someone who actually has a million subscribers and a spammer who sent a message to a million email addresses.
rglover, over 2 years ago
A fun idea that I'm certain no one has considered with any level of seriousness: don't moderate anything.

Build the features to allow readers to self-moderate and make it "expensive" to create or run bots (e.g., make it so API access is limited without an excessive fee, limit screen scrapers, etc). The "pay to play" idea will eliminate an insane amount of the junk, too. Any free network is inherently going to have problems of chaos. Make it so you can only follow X people with a free account, but upgrade to follow more. Limit tweets/replies/etc based on this. Not only will it work, but it will remove the need for all of the moderation and arguments around bias.

As for advertisers (why any moderation is necessary in the first place beyond totalitarian thought control): have different tiers of quality. If you want a higher quality audience, pay more. If you're more concerned about broad reach (even if that means getting junk users), pay less. Beyond that, advertisers/brands should set their expectations closer to reality: randomly appearing alongside some tasteless stuff on Twitter does not mean you're *vouching* for those ideas.
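The "free account can only follow X people" mechanic above reduces to a per-tier cap check. A minimal sketch, with tier names and numbers invented purely for illustration:

```python
# Illustrative tier limits for the "pay to play" idea; the tiers,
# caps, and pricing implied here are hypothetical, not real products.
TIER_LIMITS = {
    "free": {"follows": 200, "posts_per_day": 50},
    "paid": {"follows": 5000, "posts_per_day": 1000},
}

def can_follow(tier, current_follow_count):
    # Allow a new follow only while the account is under its tier's cap.
    return current_follow_count < TIER_LIMITS[tier]["follows"]

print(can_follow("free", 199))  # True: still under the free cap
print(can_follow("free", 200))  # False: must upgrade to follow more
print(can_follow("paid", 200))  # True: paid cap is much higher
```

The same lookup-and-compare shape extends to posts, replies, and API calls; the real cost to a bot farm comes from the upgrade fee attached to crossing each cap.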