I found this juxtaposition unsettling:<p>"I was watching the content of deranged psychos in the woods somewhere who don't have a conscience for the texture or feel of human connection."<p>"...If the managers noticed a few minutes of inactivity, they would ping him on workplace messaging tool Slack to ask why he wasn't working."<p>The texture of human connection is severely diminished when you are managing what is essentially a drip-fed trauma survivor remotely, using a metric of trauma exposure per minute.
The real problem is not that such content exists; the problem is that Facebook makes it horrifically easy to distribute. Back when Facebook was more about interacting with people, rather than liking and subscribing to whatever these content and joke pages shoved at you, these concerns were not nearly as present. Clickbait and the like is just a natural progression of like-and-subscribe culture.<p>My Facebook feed about a year ago, when I last opened Facebook, was filled with the same memes and "inspirational messages" pushed out by a handful of pages and shared around by dumb idiots. I had to search and go directly to a person's profile to see what they were up to.<p>The real problem is that Facebook started behaving like a TV channel rather than a social network.<p>If Facebook went back to only keeping profiles of real people and removed all of these pages, blogs, news agencies and the lot, a lot of its woes with regard to content would be solved.<p>These pages give a sense of anonymity to the people behind them. Take that anonymity away. Once you know that your personal image will be directly tied to whatever you post, and that everyone on your friend list will hold you responsible for it, you will begin to curb your tendencies in public.
This sort of stuff was at one point handled by hosting providers. It now seems that the internet is "Facebook/Google", so it is no wonder they are getting the brunt of this sort of work.<p>I can tell you, back in the day when I worked for Rackshack, I never envied the abuse department. They always looked stressed out.<p>8k posts in a day is just too many for one person. It's not about how much work they are doing; it's about the content they are subjected to. To review it you have to actually think about it and make a decision -- and that takes a toll on people -- you don't get to forget.<p>I am not one for regulation, but if there is one place in tech that should be considered for it, I think this is a good place to start.<p>These workers need to be paid more, given access to therapy, and given much more time off. I also think there are technical solutions that could ease the work needed, but those cost money, and nobody seems to want to pay, so human workers are left holding the bag. This would also include stricter rules to make filtering easier.<p>Good job, you can put a dancing hotdog on the screen -- why not use that talent to build back-end systems that automate this sort of horrid work away? I know it will be hard, but if NNs and deep learning are all they keep being preached to be, then it should be within the realm of the possible.<p>Also, when it comes to the law, I am for the notion that it is okay if 10 bad people get away if it means not falsely convicting 1 good person. However, on the internet, with regard to what normally amounts to pointless shit that people post, I am okay with 100 good posts being automatically removed if it means 1 bad post is also removed.
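To make that last tradeoff concrete, here is a rough sketch (my own illustration in Python; score_toxicity is a made-up stand-in for whatever classifier a platform would actually train, not any real Facebook system) of what biasing an automated filter hard toward recall looks like:

    # Sketch: bias an automated filter toward recall over precision.
    # score_toxicity is a hypothetical model returning P(post is bad) in [0, 1].
    def score_toxicity(post_text: str) -> float:
        """Placeholder for a trained classifier."""
        raise NotImplementedError

    # A precision-first filter only removes posts the model is very sure about.
    PRECISION_FIRST_THRESHOLD = 0.95

    # The "remove 100 good posts to catch 1 bad one" stance means a much lower
    # threshold: auto-remove anything even faintly suspicious and let users
    # appeal the false positives.
    RECALL_FIRST_THRESHOLD = 0.05

    def moderate(post_text: str, threshold: float = RECALL_FIRST_THRESHOLD) -> str:
        if score_toxicity(post_text) >= threshold:
            return "remove"    # err on the side of taking it down
        return "publish"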
The internet really is an incredible cesspool. It would be interesting to see how the public reacted if YouTube and Facebook turned off their content moderation for a week. It would make the goatse meme look like a Sunday school picnic.
Why are we surprised or shocked? Hasn't society always used servants, police, soldiers, miners, loggers, garbage men, wardens, and such to do tasks that the rest of us are loath to do, and to keep certain stuff away from 'civilised' society?<p>Why would online be different suddenly? Analogues of all the above are needed online too. And somebody who has no other option will be unfortunate enough to fill these roles.
I would expect that social media websites whose content is largely decided democratically (via votes, shares, or the like) would relegate the majority of this content to a place where few people ever see it. I would argue that the best way to handle this issue is to let the sites' own mechanisms deal with the content accordingly, and to focus efforts on developing processes that can detect and remove it automatically.<p>The article implies that they are forcing moderators to view the content at a high clip. Why, so as to get false positives back online as quickly as possible? Maybe moderators should only review content that reaches a certain threshold of complaints, and other content should be left as is?
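As a rough illustration of that last idea (my own sketch, not anything the article describes the platforms as doing), a moderation queue could leave content alone until enough reports come in, and only then hand it to a human:

    from collections import Counter

    # Sketch of a complaint-threshold queue: a post is left alone until enough
    # users report it, and only then goes to a human moderator.
    REPORT_THRESHOLD = 10

    report_counts = Counter()
    review_queue = []

    def report(post_id: str) -> None:
        report_counts[post_id] += 1
        if report_counts[post_id] == REPORT_THRESHOLD:
            review_queue.append(post_id)   # first time it crosses the line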
Can't help but be reminded of this Silicon Valley episode: <a href="https://youtu.be/dvn-hpZdElo" rel="nofollow">https://youtu.be/dvn-hpZdElo</a>
In the very early days of the internet being used at a very large corporation, I had the task of reviewing proxy logs to monitor what was euphemistically called "non-business use of the internet". I started by scanning for URLs with "XXX" in them, then pivoted to a more extensive list.<p>I never looked at the content itself. Just seeing the URLs was corrosive enough.
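For anyone curious, that kind of scan amounts to something like the following (the log format and keyword list here are purely illustrative assumptions, not what we actually ran):

    import re

    # Flag proxy-log lines whose URLs match a keyword list, without ever
    # fetching or viewing the content itself.
    KEYWORDS = re.compile(r"xxx|adult|porn", re.IGNORECASE)

    with open("proxy_access.log") as log:
        for line in log:
            # assume whitespace-separated fields with the URL in one of them
            for field in line.split():
                if field.startswith("http") and KEYWORDS.search(field):
                    print(line.rstrip())
                    break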
A Kaggle competition started a week ago, hosted by Jigsaw (part of Alphabet), for classifying toxic content (insults, threats, vulgarity, etc.) in online comments. There's a $35,000 prize pool.
<a href="https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge" rel="nofollow">https://www.kaggle.com/c/jigsaw-toxic-comment-classification...</a><p>Also interesting, they already have an API for doing this sort of classification:
<a href="https://perspectiveapi.com/" rel="nofollow">https://perspectiveapi.com/</a>
One of our team members used to do this job; luckily, she managed to do it with deep learning, so she didn't have to spend too much time looking at unpleasant images.<p>This experience is one of the main drivers pushing our team to develop an open, nonprofit conversation platform on which harassment is difficult by design.<p>www.hellolyra.com/introduction
South Park did an episode on this, where everyone was so afraid of "reality" that they elected someone to censor it all and keep their tweets only positive.<p>Butters had to see every depraved thing, and he ended up trying to kill himself.<p>And in the end, when he almost died, they blamed <i>him</i> for failing to be the perfect filter.
It starts with censoring topics that are illegal, then topics that aren't family friendly, and finally it ends with censoring points of view and free speech.<p>It's 1984 all over again.<p>I'm not an anarchist; if something is illegal, then go ahead and cut it, but censoring without a legal cause is a crime.