
Catastrophic effects of working as a Facebook moderator

113 points by seek3r00 over 5 years ago

17 comments

_Microft over 5 years ago
The situation might not be much better on other services of that scale (YouTube, image hosts, ...), but what really makes the difference with FB for me is that it reminded me of a tweet from a FB employee who claimed that the amount of good that Facebook does (might have been 'can do'?) is *unlimited*. Unlimited, ffs? I can only wonder how self-perception and reality can diverge so enormously.

(My eternal gratitude to whoever can dig up the tweet. All my efforts have been futile so far.)
TheOperator over 5 years ago
> They also said others were pushed towards the far right by the amount of hate speech and fake news they read every day.

There's something wrong with mainstream reporting if mere exposure to social media turns people far right. It really strikes me that people are trapped in some pretty strong filter bubbles, to the point that mere exposure is enough to change political belief.

Spend a week in a far-right community and you'll be shown more stats that point to a far-right conclusion than you can critically evaluate. In any internet discussion of police racism, for instance, FBI crime stats will be mentioned in a heartbeat, but I don't think I've seen a mainstream journo bring them up once. Social media and mainstream media follow fundamentally different schemas of information, simply because even bringing up certain data can cause a mainstream journo reputational damage.

This is also causing an inverse filter bubble, where hateful ideas which actually have refutations don't get refuted, because people refuse to discuss the ideas on principle. Much of the cited data is crap and much of the interpretation is crap, but they're not meaningfully contested.
koevet over 5 years ago
I recently met a guy in Berlin who is employed by a company that does content screening as an FB sub-contractor. When I met him, he had been on sick leave for 2 weeks because of the awful working conditions and the stuff he had to go through daily. He was obviously looking for a new position and, I can attest, he looked "damaged".
MrGilbert over 5 years ago
One idea: to cope with all this, reduce the flagged posts presented by the algorithm to maybe 2-3 hours a day. For the remaining 5 hours, the algorithm could show them chats, pictures and videos that are harmless and uplifting. This might create a counterbalance, showing that there is still something good in social media.

I think that's how real life works in most countries: you don't walk through the streets getting slapped with crime and the like all the time.

Of course, this also means you need more people to deal with the same amount of content as you do today.
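For illustration, a minimal sketch of the mixing policy this comment suggests, assuming a review tool with separate flagged and benign queues. Everything here is hypothetical; the ratio is the "3 hours out of 8" from the comment:

```python
# Hypothetical queue-mixing policy: cap the running share of flagged items
# a reviewer sees, and pad the rest of the shift with benign content.
FLAGGED_SHARE = 3 / 8  # e.g. ~3 hours of flagged content in an 8-hour shift

def next_item(flagged_queue, benign_queue, seen_flagged, seen_total):
    """Pick the next item to show, keeping flagged exposure under the cap."""
    under_cap = seen_total == 0 or seen_flagged / seen_total < FLAGGED_SHARE
    if flagged_queue and under_cap:
        return flagged_queue.pop(0), True   # flagged item, counts toward cap
    if benign_queue:
        return benign_queue.pop(0), False   # harmless "counterbalance" item
    return None, False                      # both queues empty
```

As the comment notes, the trade-off is throughput: at this ratio you need roughly 8/3 as many reviewers to clear the same flagged volume.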
cmg over 5 years ago
I spent most of last Friday keeping an eye on 4chan's /pol/ after finding out that morning that users were planning an attack on my job.

Even just looking for one day, it took a serious emotional toll on me. I've definitely seen some awful things on the Internet, but the constant bombardment of hate speech, racism, anti-Semitism, and all sorts of disturbing images and text over the course of 6 hours made me feel physically sick a number of times, and I had to take extra care to rest the next day.

This is anecdotal of course, but I can't imagine what the Facebook moderators go through having to process at least one ticket a minute.
moofight over 5 years ago
There are many aspects to this, but it seems that the most lasting and strongest effects come from visual content (especially raw content, violence...) rather than text (even though text can be violent).

Which is why automated image/video moderation solutions (such as Vision, Rekog, Sightengine.com, Hive) will continue to grow. Not only because they are cheaper and faster, but because they are becoming a necessity. Or at least as a first filter to weed out the "worst" content.
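As a concrete example of the first-filter idea, here is a minimal sketch using Amazon Rekognition's image moderation API via boto3. It assumes AWS credentials are configured in the environment, and the confidence threshold is an arbitrary choice, not a recommended value:

```python
import boto3

rekognition = boto3.client("rekognition")

def moderation_labels(image_bytes, min_confidence=50.0):
    """Return Rekognition's moderation labels for an image.

    An empty list means the image passed the automated first filter;
    anything else could be auto-removed or routed to human review.
    """
    resp = rekognition.detect_moderation_labels(
        Image={"Bytes": image_bytes},
        MinConfidence=min_confidence,
    )
    return [(label["Name"], label["Confidence"])
            for label in resp["ModerationLabels"]]
```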
forinti over 5 years ago
They could use some image processing to make the videos look cartoonish, at least for the first analysis.

That might soften the blow to the screeners.
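A rough sketch of what that could look like with OpenCV's classic cartooning recipe, applied per frame: flatten colours with a bilateral filter and overlay bold adaptive-threshold edges. The parameters here are guesses, not tuned values:

```python
import cv2

def cartoonify(frame):
    """Flatten colours and overlay bold edges to abstract away graphic detail."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.adaptiveThreshold(
        cv2.medianBlur(gray, 5), 255,
        cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 9, 2,
    )
    color = cv2.bilateralFilter(frame, 9, 250, 250)  # smooth regions, keep edges
    return cv2.bitwise_and(color, color, mask=edges)
```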
duxup over 5 years ago
I wonder what the solution is here.

To expose people to this stuff continuously seems wrong.

Then again, so does exposing everyone to it, and it would probably kill the service if it wasn't dealt with by someone.

Another concern is finding people who can handle these situations in a healthy way. They might be few and far between, and generally the folks exposed to it are hired into low-pay, outsourced, warm-bodies-in-chairs kinds of situations.
bedhead over 5 years ago
Yet another reason to stop trying to have a central authority, in this case Facebook moderators, police speech. It can't be done effectively or without major side effects. Let people filter on their own; we all do it every day in the real world and it works just fine.
raslah over 5 years ago
What strikes me about this issue as a whole is what it says about the true state of "AI". This is a perfect job for such technologies. How is it that we're already making 'deep fake' videos and audio, but can't feed a video stream, which is just a stream of images, to an algorithm that can determine whether it's inappropriate? I recognize that some such tech is being used on the front end here, and that the problem is non-trivial, but I see this as FB saying 'good enough' and not pushing as hard as they could to improve the tech to where it can be trusted to make the decision. I sense that they may be telling themselves they're doing social good by 'creating jobs'.

Why must humans be subjected to this torture? What happened to "move fast and break things"? Why not put the algorithms out front, let them have the final say, and let them learn and improve quickly? I suppose just because meat is cheaper than chips.
评论 #20998173 未加载
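As a sketch of the pipeline this comment imagines: sample frames from the video and hand each one to an image classifier. `classify_frame` here is a hypothetical model returning the probability that a frame is inappropriate; the sampling rate and threshold are arbitrary:

```python
import cv2

def screen_video(path, classify_frame, every_n=30, threshold=0.9):
    """Flag a video if any sampled frame scores above the threshold."""
    cap = cv2.VideoCapture(path)
    idx = 0
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % every_n == 0 and classify_frame(frame) >= threshold:
                return True  # "let the algorithm have the final say"
            idx += 1
    finally:
        cap.release()
    return False
```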
thatguyagain over 5 years ago
If you think about it, you have a large group in society who spend 8 hours a day watching content so controversial it won't even reach the rest of us. Just like at any other company, people eventually quit and new people join. Now, if you were a bad actor wanting to nudge a % of society in a specific political direction, wouldn't this moderator group be a perfect target? Just bombard [insert social media platform] with propaganda content and you _will_ reach this group of people, even if the content never appears on the platform. What a messed-up situation.
post_break over 5 years ago
Maybe you need people who are just numb to it? It seems like the kind of gig that would attract the kind of people who like seeing it. I know that's morbid to think about, but look at subreddits like /r/watchpeopledie (RIP), /r/enoughinternet, and /r/morbidreality (still around, luckily). You get people who don't mind the gore and terrible things, pay them to sift through it, and see what the outcome is there?
apolymath over 5 years ago
Back in 1999, when I had AOL 4.0 at the age of 16, I would frequent www.goregallery.com and various other gore-related websites that showcased real crime & accident scenes from around the world. Still to this day, I am fascinated by that kind of content. I don't really seek it out like I did as a teenager, but it excites me nonetheless.
teachrdan over 5 years ago
The solution here may be for all of us to flag posts that we know are benign, like puppies or birthday greetings. This would reduce the percentage of disturbing content moderators are forced to see, and costs us nothing at all.
stickfigure over 5 years ago
Just to be clear: there are no statistics in this article, just some anecdotes selected to push a particular narrative.

I don't claim to know whether working as a FB moderator is 'catastrophic' or not, but this article makes an emotional case rather than a rational one. It reminds me of the reporting around the Foxconn suicides - sure, each one is a tragedy, but it turns out the Foxconn suicide rate matches the national rate.
0xADADA over 5 years ago
The very fact that Facebook outsources its censorship to contractors underlines the precarity of a workforce that is considered both disposable in the short term and irrelevant in the long term.

Ultimately, Facebook wishes to replace this workforce with AI automation where possible, and then heteromate the remaining human work by enticing users of the platform to inform on each other over posts that don't abide by the platform's implicit social norms or opaque moderation rules.
J-dawg over 5 years ago
There seems to be a fundamental hypocrisy in a legal system that considers some material so fundamentally *corrupting* that it must be illegal for people to possess or see it.

If that were truly the case, it should also be illegal to compel someone to look at this same material as part of their employment.