>Anyhow, I truly believe humanity has to rollback to operating at a human scale.<p>>Using algorithms to flag content is totally fine… problem is when humans cannot interact with humans anymore and AI gets to choose what is right and wrong.<p>Couldn't agree more.
YouTube's content policy is bizarre. Children in my family seem to watch videos of Spiderman dry humping Elsa or grannies kicking people in the nuts, but then they remove stuff like this.<p>I guess this is what happens when advertisers are the main priority and the moderators have all been replaced with robots. Content that may actually harm the development of children is considered fine, while someone posting a slightly edgy political take or a research paper about something that triggers a bot is removed.
We should start letting "AI" decide the outcome in any of Google's pending court cases. We could set up a Twitter account and direct them to complain, I mean appeal, by @-ing the Twitter account, and hope the bot gods decide their tweet is worthy enough for human review.
I think the "oh no human moderation would cost us too much" defense is a distraction. It's a good one, because it is working, but it is still a distraction.<p>Let us suppose we have a program which checks for orthodoxy according to the YouTube Terms of Service against each video submitted.<p>Nothing but a lack of having done the work prevents YouTube from adding to its "rejected" flag the following:<p>1) A list of timestarts and durations of the portions of the video violating the ToS.
2) For each item in that list, make it a tuple and add a <i>reason</i> -- which clause was violated?<p>Then output that to the owner of the rejected video.<p>Nothing stops them from doing this. <i>Some</i> set of words or images occurred somewhere in the timestream -- the orthodoxy program has that timestamp as it churns through the video. And a specific clause was violated -- a particular word in a wordlist, for example -- because there's no simple, non-composite function that just says "good or not good."<p>This is really not a particularly high bar as an ask (see the sketch below).
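To make the ask concrete, here is a minimal sketch of what such a rejection report could look like. The field names, clause text, and flagged items are all hypothetical; nothing here reflects YouTube's actual internals.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Violation:
        start_seconds: float      # where in the video the flagged content begins
        duration_seconds: float   # how long the flagged portion lasts
        clause: str               # which ToS clause was matched
        detail: str               # e.g. the specific wordlist entry that triggered the match

    @dataclass
    class RejectionReport:
        video_id: str
        violations: List[Violation]

    # Example of what could be sent back to the uploader instead of a bare "rejected" flag.
    report = RejectionReport(
        video_id="example-video-id",
        violations=[
            Violation(start_seconds=312.0, duration_seconds=8.5,
                      clause="Harmful or dangerous content",
                      detail="matched wordlist entry 'exploit'"),
        ],
    )

    for v in report.violations:
        print(f"{v.start_seconds:.0f}s (+{v.duration_seconds}s): {v.clause} -- {v.detail}")

The point is only that the classifier already has the timestamp and the matched rule in hand at rejection time; surfacing them is a reporting problem, not a research problem.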
Recently I also discovered that YouTube shadow bans comments for using banned words (e.g. "kill", "coronavirus", etc.) or external links (two of my comments were shadow banned). I have seen people asking for sources in YouTube comments, and it turns out you literally can't provide them. Maybe that's why only spam comments rise to the top: any long or thoughtful comment simply gets shadow banned.<p>0. <a href="https://support.google.com/youtube/thread/6273409?hl=en" rel="nofollow">https://support.google.com/youtube/thread/6273409?hl=en</a><p>1. <a href="https://support.google.com/youtube/forum/AAAAiuErobU70d28s1NNC0/?hl=en&gpf=%23!topic%2Fyoutube%2F70d28s1NNC0" rel="nofollow">https://support.google.com/youtube/forum/AAAAiuErobU70d28s1N...</a>
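Purely for illustration, here is a naive sketch of the kind of keyword/link filter that behavior suggests; the wordlist and rules are made up, and YouTube's actual system is of course unknown.

    import re

    # Hypothetical wordlist and link rule -- illustrative only, not YouTube's actual filter.
    BANNED_WORDS = {"kill", "coronavirus"}
    LINK_PATTERN = re.compile(r"https?://\S+")

    def would_shadow_ban(comment: str) -> bool:
        """Return True if this naive keyword/link filter would hide the comment."""
        words = {w.strip(".,!?\"'()").lower() for w in comment.split()}
        if words & BANNED_WORDS:
            return True
        if LINK_PATTERN.search(comment):
            return True
        return False

    print(would_shadow_ban("Source? https://example.com/paper.pdf"))  # True (external link)
    print(would_shadow_ban("Great video, thanks!"))                   # False

A filter this blunt would explain why a comment citing a paper disappears while a vague one survives.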
Stop buying into the "the AI did it" whitewashing. YouTube is selectively targeting more and more channels and videos for daring to be any sort of contrarian. It's psyops under any other name.
FYI, this is the video: <a href="https://peertube.sunknudsen.com/videos/watch/182e7a03-729c-4d32-88b8-3de0dae58855" rel="nofollow">https://peertube.sunknudsen.com/videos/watch/182e7a03-729c-4...</a> ; the links in the description:<p>KRACK Attacks: Breaking WPA2 <a href="https://www.krackattacks.com/" rel="nofollow">https://www.krackattacks.com/</a><p>KRACK - Key Reinstallation Attacks: Forcing Nonce Reuse in WPA2 <a href="https://www.youtube.com/watch?v=fOgJswt7nAc" rel="nofollow">https://www.youtube.com/watch?v=fOgJswt7nAc</a>
The hypocrisy from Google here is palpable. First this content creator got a strike for showing users how GPG works, and now for linking to academic research. On the other hand, you have Google Project Zero, who openly publish vulnerability research to the detriment of some software vendors.<p>You can’t have it both ways, and as someone who works in this space, I’m pretty upset by this ham-fisted approach to censoring content.<p>I look forward to Google becoming the next AOL.
Maybe it's time for a public, company-independent complaints platform where all companies are obliged by law to respond. Each complaint would of course be hidden behind some anti-bot captcha/protection so that the companies themselves can't use AI to answer your complaints.
AI is a great excuse for censorship. But many economic actors, including advertisers, want censorship. So more competition for YouTube seems to be the only answer, but I don’t see any credible competitor on the horizon for a while.<p>The ultimate ironic censorship on the part of YouTube:<p><a href="https://www.mintpressnews.com/media-censorship-conference-censored-youtube/274918/" rel="nofollow">https://www.mintpressnews.com/media-censorship-conference-ce...</a><p>And the discussion thread:<p><a href="https://news.ycombinator.com/item?id=26008217" rel="nofollow">https://news.ycombinator.com/item?id=26008217</a>
Time to install GPT-3 on your own server and unleash it onto Google-Support, YouTube-Support, Alpha-Support, etc. to complain about your situation ;-)
You just need to answer the captchas to keep it going.
YouTube, like most of the "Internet" these days, is just a shopping center. It appears to be the Universe because these sites/companies are so massive, but at the end of the day it's their turf and they do what they want.<p>Now, once they reach a certain scale, why are they allowed to keep operating like a "normal small company"?
I don't see any mention of the other thing that would concern me: don't ever be in a situation where any Google product is <i>critical</i> to you, because the behemoth may casually destroy you as a side effect.<p>What if, instead of locking him out for a week, they'd canceled his account (and related accounts, e.g. Gmail, plus linked accounts because of 'bypassing ban')?<p>And you don't have to be doing something like discussing security - maybe you're talking about Pokémon Go and they ban you for "CP videos", or you posted a bunch of red or green emoji in livestream comments for a gamer. Goodbye email, sites logged into with a Google account, Android phones (because they KNOW that phone number is linked to your identity), etc.<p>And what are you going to do about it? Call customer service? <i>snrk</i>
I remember a survey by Google about security research, and one of the questions was something like "What hinders learning/education in this topic?" I remember answering with exactly <i>this</i> kind of behavior.
FWIW the video in question seems to be this one from ~1y ago:<p><a href="https://peertube.sunknudsen.com/videos/watch/182e7a03-729c-4d32-88b8-3de0dae58855" rel="nofollow">https://peertube.sunknudsen.com/videos/watch/182e7a03-729c-4...</a>
It would be fun to do a quick survey of books from 20 years ago breathlessly laying out the future of the internet.<p>Looking back, the evolution of social media/YouTube was pretty obvious. Back then, not so much.<p>1) Begin with anything goes, including the illegal. Run at a loss. Grow, baby!<p>2) Ads.<p>3) Bitching from some important people causes removal of the more flagrantly illegal stuff.<p>4) Employees and PR departments apply a POV to what is now close to a monopoly. Large-scale censorship.<p>What makes the modern era interesting is the POV angle. I can't imagine Rockefeller's Standard Oil restricting sales to people with conflicting politics. Of course, there are people spending a lot of time each day on r/politics and what used to be r/thedonald shouting joyous insanity at each other... that's the modern pool of workers.
Whenever I hear "algorithm" anymore, I see it as the "statistics" of the past. This entire thing is a product of math washing[1].<p>[1]<a href="https://www.mathwashing.com/" rel="nofollow">https://www.mathwashing.com/</a>
It's disheartening. In the 2000s, people used to regularly talk about creating decentralized systems to prevent megacorporations and governments from being able to censor the internet. Now the tech community that used to deplore such tactics actively supports mass AI censorship of a whole host of topics, where the only version of the truth allowed to stand is #thenarrative.
Are we certain this was not because the vulnerability is called "KRACK", which sounds like the drug "crack"? YouTube has been very strict about such phrasing mishaps.
This is what people choose when they angrily tweet at advertisers to pull their ads whenever YouTube doesn't take down bad videos quickly enough. It tells YouTube to tune its moderation to be overly aggressive. This is true regardless of how much human moderation they use.
One of the largest and most profitable companies in the world can't hire enough low-paid humans to do content review. They already have some of the highest margins in history and they still can't figure out that sacrificing less than 1% of profits could solve this problem?
Odd, all the more so given that there's a whole video about this on YT from 2017.<p><a href="https://youtu.be/Oh4WURZoR98" rel="nofollow">https://youtu.be/Oh4WURZoR98</a>
I think Google should hide from its search results all sites that display ads... because, after all, that goes against AMP's principles of responsive websites.<p>They should definitely also block Gmail... because it has become incredibly slow to load.<p>I bet the CEO laughs a lot when he sees all of this interacting together.
Google has long been manipulating content; look up "blue anon". Compare searches against Bing and you'll find lots of content that's been "moderated".
Not a popular opinion here, but selectively covering up research is a play straight out of the Communist Manifesto. This one sounds more like concern over the spread of hacking intel, but it's worth noting as yet another example, given big tech's selective censorship over the past year.
> Anyhow, I truly <i>believe</i> humanity has to rollback to operating at a human scale.<p>BYOReligion... and Google believes that they need to increase 'engagement', increase profit, and decrease headcount... so, there you have it, dear Sun Knudsen.<p>Edit: I don't care for the 'virtual currency' of 'karma'. I wonder, though: the sky is blue, I point at the sky, I say "the sky is blue", and people frown (?) upon me for stating an obvious fact of life. Or is it just Google trolls/fanboys/fangirls that don't like the truth being called out? My above comment applies to any entity that automates "as much as possible" and maintains a small team to manage operations. Something tells me their internal Finance team has "enough" staff to monitor the General Ledger, because the General Ledger is critical to their $$$$$. A 'content creator', though, is not as critical, thus gets far less attention, and <insert FAANG> will only be bothered to fix this if said person yells loud enough. How is <i>this</i> fact of life being downvoted? (Not me... I don't grow taller/shorter based on karma points.) :)
I am sure it is OK to use AI to cheapen initial screening. But FAANG and similar companies are becoming so vital to people that losing access can sometimes mean losing their income. I see nothing wrong with those that serve as a source of income being declared utility-type services with a mandatory, staged conflict-resolution process. At the last stage (well before an actual lawsuit), that process would include being able to present evidence to an actual human being who has the power to reverse the decision and who is paid for, but not employed, by said utility. If the utility fails to comply, a watchdog should be able to levy fines. This should also help prevent cases where the utility's "moderation" is based on the political (or similar) opinions of the owners.<p>All this "I am a private company and do as I please" is baloney when it comes to very big companies. They have way too much power and should be held responsible for how that power is used. Preventing them from lobbying governments should be a first priority as well.<p>Since companies of this type are international, it is of course up to individual governments to implement whatever measures, if any, they see fit.
Google is a private company. It's fully within their rights to remove any video they don't like, for whatever reason.<p>It comes with the territory if we want Google to be able to censor hate speech and misinformation.<p>You can always use a competing site, or build your own.