科技回声

A tech news platform built with Next.js, providing global tech news and discussion content.
© 2025 科技回声. All rights reserved.

More than 140 Kenya Facebook moderators sue after diagnoses of PTSD

380 points by uxhacker, 5 months ago

28 comments

nappy-doo, 5 months ago
I worked at FB for almost 2 years. (I left as soon as I could; I knew it wasn't a good fit for me.)

I had an Uber from the campus one day, and my driver, a twenty-something girl, was asking how to become a moderator. I told her, "no amount of money would be enough for me to do that job. Don't do it."

I don't know if she eventually got the job, but I hope she didn't.
1vuio0pswjnm7, 5 months ago
Perhaps this is what happens when someone creates a mega-sized website comprising hundreds of millions of pages of other people's submitted material, effectively creating a website that is too large to "moderate". By letting the public publish their material on someone else's mega-sized website instead of hosting their own, perhaps it concentrates the web audience to make it more suitable for advertising. Perhaps if the PTSD-causing material were published by its authors on the authors' own websites, the audience would be small and not suitable for advertising. A return to less centralised web publishing would perhaps be bad for the so-called "ad ecosystem" created by so-called "tech" company intermediaries. To be sure, it would also mean no one in Kenya would be intentionally subjected to PTSD-causing material in the name of fulfilling the so-called "tech" industry's only viable "business model": surveillance, data collection and online ad services.
yodsanklai, 5 months ago
I'm wondering if there are precedents in other domains. There are other jobs where you see disturbing things as part of your duty, e.g. doctors, cops, first responders, prison guards and so on...

What makes moderation different? And how should it be handled so that it reduces harm and risks? Surely banning social media or not moderating content aren't options. AI helps to some extent but doesn't solve the issue entirely.
quesomaster9000, 5 months ago
The Kenyan moderators' PTSD reveals the fundamental paradox of content moderation: we've created an enterprise-grade trauma processing system that requires concentrated psychological harm to function, then act surprised when it causes trauma. The knee-jerk reaction of suggesting AI as the solution is, IMO, just wishful thinking; it's trying to technologically optimize away the inherent contradiction of bureaucratized thought control. The human cost isn't a bug that better process or technology can fix; it's the inevitable result of trying to impose pre-internet regulatory frameworks on post-internet human communication, frameworks that large segments of the population may simply be incompatible with.
pluc, 5 months ago
I worked at PornHub's parent company for a bit, and the moderation floor had a noticeable depressive vibe. Huge turnover. Can't imagine what these people were subjected to.
azinman2, 5 months ago
> The moderators from Kenya and other African countries were tasked from 2019 to 2023 with checking posts emanating from Africa and in their own languages but were paid eight times less than their counterparts in the US, according to the claim documents

Why would pay in different countries be equivalent? Pretty sure FB doesn't even pay the same to their engineers depending on where in the US they are, let alone which country. Cost of living dramatically differs.
oefrha, 5 months ago
They should probably hire more part-time people working one hour a day?

Btw, it's probably a different team handling copyright claims, but my run-in with Meta's moderation gives me the impression that they're probably horrifically understaffed. I was helping a Chinese content creator friend take down Instagram, YouTube and TikTok accounts re-uploading her content and/or impersonating her (she doesn't have any presence on these platforms and doesn't intend to). Reported to TikTok twice: got it done once within a few hours (I was impressed) and once within three days. Reported to YouTube once, and it was handled five or six days later. No further action was needed from me after submitting the initial form in either case. Instagram was something else entirely; they used Facebook's reporting system, and the reporting form was the worst: it asked for very little information upfront but kept sending me emails afterwards asking for more information, then eventually radio silence. I sent follow-ups asking about progress; again, radio silence. The impersonation account with outright stolen content is still up to this day.
fouronnes3, 5 months ago
Absolutely grim. I wouldn't wish that job on my worst enemy. The article reminded me of a Radiolab episode from 2018: https://radiolab.org/podcast/post-no-evil
throwaway48476, 5 months ago
When people are protected from the horrors of the world they tend to develop luxury beliefs, which lead them to create more suffering in the world.
para_parolu, 5 months ago
One of the few fields where AI is very welcome.
jkestner, 5 months ago
Borrowing the thought from Ed Zitron, but when you think about it, most of us are exposing ourselves to low-grade trauma when we step onto the internet now.
omoikane, 5 months ago
Possibly related, here is an article from 2023-06-29:

https://apnews.com/article/kenya-facebook-content-moderation-lawsuit-8215445b191fce9df4ebe35183d8b322 - Facebook content moderators in Kenya call the work 'torture.' Their lawsuit may ripple worldwide

I found this one while looking for salary information on these Kenyan moderators. This article mentioned that they are being paid $429 per month.
shadowgovt, 5 months ago
Good! I hope they get every penny owed. It's an awful job, and outsourcing it to jurisdictions without protection was naked harm maximization.
blueflow, 5 months ago
I'm curious about the content that these people moderated. What is it that fucks people up when they see it?
pllbnk, 5 months ago
There have been multiple instances where I would receive invites or messages from obvious bots - users with no history, a generic name, a sexualised profile photo. I would always report them to Facebook, just to receive a reply an hour or a day later that no action had been taken. This means there is no human in the pipeline, and probably only the stuff that doesn't pass their abysmal ML filter goes to the actual people.

I also have a relative whose profile is stuck: they cannot change any contact details, neither email nor password, because the FB account center doesn't open for them. Again, there is no human support.

BigTech companies must be mandated by law to have a number of live support people working and reachable that is a fixed fraction of their user count. Then they would have no incentive to inflate their user numbers artificially. As for the moderators, there should also be a strict upper limit on the amount of content (content tokens, if you will) they view during their work day. Then the companies would also be more willing to limit the amount of content on their systems.

Yeah, it's bad business for them, but it's a win for the people.
wkat4242, 5 months ago
I have several friends who do this work for various platforms.

The problem is, someone has to do it. These platforms are mandated by law to moderate content, or else they're responsible for what their users post. And the companies cannot shield their employees from it, because the work simply needs doing. I don't think we can really blame the platforms (though I think the remuneration could be higher for this tough work).

The work tends to suit some people better than others, the same way some people would not be able to work as a forensic doctor doing autopsies. Some have better detachment skills.

All the people I know who do this work have 24/7 psychologists on site (most of them can't work remotely due to the private content they work with). I do notice, though, that most of them have an "Achilles heel". They tend to shrug most things off without a second thought, but there's always one or two specific things or topics that haunt them.

Hopefully AI will eventually be good enough to deal with this. It sucks for their jobs, of course, but it's not the kind of job anyone really does with pleasure.
Eumenes, 5 months ago
What do you call ambulance chasers who go after tech companies? Because this is that.
CuriousRose, 5 months ago
I wonder if using AI to turn images and video into a less realistic style before they go to the moderators, while preserving the content, would reduce trauma by creating an artificial barrier to seeing human torture. We used to watch cartoons as kids with people being blown to pieces.
atleastoptimal, 5 months ago
An obvious job that it would benefit everyone for AI to do instead of humans.
kittikitti, 5 months ago
Reddit mods could learn a thing or two from these people.
xvector, 5 months ago
This is the one job we can probably automate now.
bookofjoe, 5 months ago
https://news.ycombinator.com/item?id=42465459
toomanyrichies, 5 months ago
One terrible aspect of online content moderation is that, no matter how good AI gets and no matter how much of this work we can dump in its lap, to a certain extent there will always need to be a "human in the loop".

The sociopaths of the world will forever be coming up with new and god-awful types of content to post online, which current AI moderators haven't encountered before and therefore won't know how to classify. It will be up to humans to label that content in order to train the models to handle it, meaning humans will have to view it (and suffer the consequences, such as PTSD). The alternative, where AI labels these new images and then uses those AI-generated labels to update the model, famously leads to "model collapse" [1].

Short of banning social media at a societal level, or abstaining from it at an individual level, I don't know that there's any good solution to this problem. These poor souls are taking a bullet for the rest of us. God help them.

1. https://en.wikipedia.org/wiki/Model_collapse
percentcer, 5 months ago
It's kinda crazy that they have normies doing this job.
efitz, 5 months ago
I have a lot of questions.

The nature of the job really sucks. This is not unusual; there are lots of sucky jobs. So my concern is really whether the employees were informed about what they would be exposed to.

Also, I'm wondering why they didn't just quit. Of course the answer is money, but if they knew what they were getting into (or what they were already into), and chose to continue, why should they be awarded more money?

Finally, if they can't count on employees in poor countries to self-select out when the job becomes life-impacting, maybe they should make it a temporary gig, e.g. only allow people to do it for short periods of time.

My out-of-the-box idea is: maybe companies that need this function could interview with an eye towards selecting psychopaths. This is not a joke; why not select people who are less likely to be emotionally affected? I'm not sure anyone has ever done this before, and I also don't know if such people would be likely to be inspired by the images, which would make this idea a terrible one. My point is: find ways to limit the harm the job causes to people, perhaps by changing how people interact with the job, since the nature of the job doesn't seem likely to change.
bdangubic, 5 months ago
I wish they'd get a trillion dollars, but I am sure they signed their lives away via waivers and whatnot when they got the job :(
neilv, 5 months ago
If I were a tech billionaire, and there was so much uploading of stuff so bad that it was giving my employees/contractors PTSD, I think I'd find a way to stop the perpetrators.

(I'm not saying that I'd assemble a high-speed yacht full of commandos who travel around the world, righting wrongs when no one else can. Though that would be more compelling content than most streaming video episodes right now. So you could offset the operational costs a bit.)
sneak, 5 months ago
Perhaps if looking at pictures of disturbing things on the internet gives you PTSD, then this isn't the kind of job for you?

Not everyone can be a forensic investigator or coroner, either.

I know lots of people who can and do look at horrible pictures on the internet and have been doing so for 20+ years with no ill effects.