Tech firms must tame toxic algorithms to protect children online

52 points by w14 about 1 year ago

16 comments

Aurornis about 1 year ago
People love these proposals until they read the details and think of the consequences. Anything that requires "robust age-checks" means that everyone using the site must go through an ID check and validation process. No more viewing anything without first logging in via your ID-checked account.

> 1. Carry out robust age-checks to stop children accessing harmful content

> Our draft Codes expect much greater use of highly-effective age-assurance[2] so that services know which of their users are children in order to keep them safe.

> In practice, this means that all services which do not ban harmful content, and those at higher risk of it being shared on their service, will be expected to implement highly effective age-checks to prevent children from seeing it. In some cases, this will mean preventing children from accessing the entire site or app. In others it might mean age-restricting parts of their site or app for adults-only access, or restricting children's access to identified harmful content.

Before people try to brush aside these regulations as only applying to sites you don't think you use: the proposal is vague about what the guidelines include. They cover things like "harmful substances", meaning any discussion of drugs or mushrooms could be included, for example.

Think twice before encouraging regulations that would bring ID-checking requirements to large parts of the internet. If you enjoy viewing sites like Reddit or Hacker News or Twitter without logging in or handing over your ID, these proposals are not good for you at all.
Replies not loaded: #40300560, #40305263, #40300529, #40300738, #40300827, #40300501, #40300914, #40301089
kwhitefoot about 1 year ago
The thing that always strikes me in all the reporting and discussion of the problems that Ofcom is trying to solve is that no one seems to ask whether the problems are equally bad in other countries, especially non-English-speaking ones. And if they aren't, can, and should, whatever helps there be implemented in the UK?

I live in Norway and the problem doesn't seem so severe here. Or is it simply that English-speaking media are more willing to latch on to extreme events and make out that they are the norm?
Replies not loaded: #40300717, #40300331, #40300371, #40300430, #40300572, #40300370, #40300381, #40300307
causal about 1 year ago
Something important to keep in mind: most people never experience just how twisted these recommendation algorithms can get, because each of us gets an experience tailored to our developed tastes.

But these algorithms will totally curate wildly disturbing playlists of content, because they have learned that this can be incredibly addictive to minds unprepared for it.

And what's most sinister is how opaque the process is, to the degree that a parent can't track what is happening without basically watching their kid's activity full-time.

I don't know whether Ofcom is implementing this right or not, but I think there would be a much greater outcry if more people saw the breadth of these algorithms' toxicity.
Reply not loaded: #40300584
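A toy sketch of the per-user drift causal describes: a greedy engagement maximiser that recommends whatever category this one user has lingered on longest, with no notion of whether that content is good for them. All names here are illustrative, not any platform's real system.

```typescript
type Category = string;

class GreedyRecommender {
  // Per-user state: accumulated watch time per content category.
  private stats = new Map<Category, { totalSeconds: number; views: number }>();

  // Record how long the user dwelt on an item from this category.
  recordWatch(category: Category, seconds: number): void {
    const s = this.stats.get(category) ?? { totalSeconds: 0, views: 0 };
    this.stats.set(category, {
      totalSeconds: s.totalSeconds + seconds,
      views: s.views + 1,
    });
  }

  // Pick the category with the highest mean watch time so far. Disturbing
  // but gripping content wins exactly as easily as anything wholesome, and
  // each user's feed drifts toward their own history.
  next(categories: Category[]): Category {
    let best = categories[0];
    let bestMean = -Infinity;
    for (const c of categories) {
      const s = this.stats.get(c);
      const mean = s ? s.totalSeconds / s.views : 0;
      if (mean > bestMean) {
        bestMean = mean;
        best = c;
      }
    }
    return best;
  }
}
```

Nothing in this loop is visible from the outside; the "algorithm" is just accumulated per-user state, which is why a parent can't audit it without watching the activity itself.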
ricardo81 about 1 year ago
It's quite obvious that Twitter/Google/Facebook/whoever do not have algorithms that scale to the point where they can genuinely curate their content. That has seemed obvious ever since Google bought YouTube.

Isn't it equally obvious that it has never been their prerogative? Nor has protecting copyright.
Reply not loaded: #40300907
cynicalsecurity about 1 year ago
Isn't it the parents' job? Why introduce authoritarianism under the guise of caring for children?
Replies not loaded: #40300316, #40300348
sudofail about 1 year ago
In my view, we need legislation to step in and enforce some level of algorithmic tuning. Modern algorithms drive engagement at all costs, regardless of whether it's healthy for the individual. I want to be able to tune the algorithm: to use a chronological timeline feed instead, or to limit content to topics I subscribe to, and so on. We probably need parental controls that let parents enforce algorithm tuning as well.

A recent example of an algorithm going wrong is Reddit. Home used to show you strictly a timeline feed of the subreddits you subscribed to. The most recent changes not only removed the timeline approach; the feed now injects subreddits you don't subscribe to and asks if you're interested in them.
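A minimal sketch of the kind of user-controlled tuning described above, in TypeScript; the Post shape, field names, and scoring signal are all assumptions for illustration, not any platform's actual API.

```typescript
interface Post {
  id: string;
  topic: string;
  postedAt: Date;
  engagementScore: number; // platform-computed signal, e.g. clicks or dwell time
}

type FeedMode = "chronological" | "engagement";

// Build a feed the user controls: optionally restrict to subscribed topics
// (no injected content) and sort by the mode the user picked, rather than a
// fixed engagement objective.
function buildFeed(
  posts: Post[],
  subscribedTopics: Set<string>,
  mode: FeedMode,
  subscriptionsOnly: boolean,
): Post[] {
  const candidates = subscriptionsOnly
    ? posts.filter((p) => subscribedTopics.has(p.topic))
    : posts;

  return [...candidates].sort((a, b) =>
    mode === "chronological"
      ? b.postedAt.getTime() - a.postedAt.getTime()
      : b.engagementScore - a.engagementScore,
  );
}
```

A parental control could then be as simple as pinning `mode` to "chronological" and `subscriptionsOnly` to true for a child's account.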
fidotron about 1 year ago
It's curious how aligned this is with similar moves in Canada discussed here: https://news.ycombinator.com/item?id=40298552

For those unfamiliar, Ofcom is basically the UK telecoms regulator.
az09mugen about 1 year ago
I have a better solution: tech firms must stop using toxic algorithms for everyone, not just children. Why are they allowed to use these practices in the first place? Why do we have to endure/tolerate this stuff that makes the internet a worse place?
johnea about 1 year ago
No intention of protecting adults?
Replies not loaded: #40300249, #40300185
yawboakye about 1 year ago
next week: music studios must tame toxic lyrics to protect children.
surfingdino about 1 year ago
Interesting... if you sob and moan to YT or Instagram about not having enough followers or views, they'll tell you to replace the word "algorithm" with "audience", i.e. people. It makes sense: if your content is not popular with people, no algorithm will surface it (recent tweaking of the Instagram algo notwithstanding). But if we follow that interpretation, we have to admit that it's not the algorithms that are toxic, but people. So what Ofcom is asking tech companies to do is "tame" toxic people. Good luck with that. Parents have to realize that computers, phones, and tablets help sometimes-unsavoury characters get in touch with their children. We do not allow strangers into daycare centres, schools, or children's hospitals, so why do we allow strangers unrestricted access to our children via the devices we give them? Parents need to be told to take responsibility for who has access to their children.
gedy about 1 year ago
> We want children to enjoy life online. But for too long, their experiences have been blighted by seriously harmful content which they can't avoid or control. Many parents share feelings of frustration and worry about how to keep their children safe. That must change.

Yes, stop letting kids stare at screens all day. Yes, you are a bad/lazy parent if you let the firehose of the Internet pipe into their heads.
Replies not loaded: #40300715, #40300234, #40300160
eddof13 about 1 year ago
No. The parents are responsible for their children, not tech firms or anyone else.
w14 about 1 year ago
- Ofcom sets out more than 40 practical steps that services must take to keep children safer
- Sites and apps must introduce robust age-checks to prevent children seeing harmful content such as suicide, self-harm and pornography
- Harmful material must be filtered out or downranked in recommended content (see the sketch below)
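A minimal sketch of that last "filter out or downrank" step, with assumed types and an arbitrary penalty factor; the Ofcom codes don't prescribe any of these specifics, and real systems would be far more involved.

```typescript
interface RankedItem {
  id: string;
  score: number;
  flaggedHarmful: boolean; // set upstream by some content-classification step
}

// Illustrative downranking factor; not taken from the Ofcom codes.
const HARMFUL_PENALTY = 0.1;

function applySafetyPolicy(
  items: RankedItem[],
  viewerIsChild: boolean,
): RankedItem[] {
  return items
    // Filter: viewers known to be children never see flagged material at all.
    .filter((item) => !(viewerIsChild && item.flaggedHarmful))
    // Downrank: for everyone else, flagged material sinks in the ranking.
    .map((item) =>
      item.flaggedHarmful
        ? { ...item, score: item.score * HARMFUL_PENALTY }
        : item,
    )
    .sort((a, b) => b.score - a.score);
}
```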
deadbabe about 1 year ago
What if we just ban all recommendation systems?
Reply not loaded: #40300656
Digit-Al about 1 year ago
Personally, I am against the idea of adults having to prove their age before being able to access certain types of content, particularly if that means giving up their identity. I am not, however, averse to the idea that big tech companies should be more responsible for what they are serving to youngsters.

Yes, I know there are plenty of tools that let parents restrict what sites their children visit, etc., but not all parents are tech-savvy enough to set this stuff up. Besides, you could still allow a child to access YouTube, for example, and then find they are getting unsavoury recommendations from the algorithm.

This made me think about the fact that the major platforms (Alphabet, Amazon, Apple, Meta, and Microsoft) gather enough data on their users that they almost certainly know roughly how old someone is, even if no age has been provided to them. They can use all the signals they have available to produce a score for how certain they are that an individual is, or is not, legally an adult.

(As an example, if you have a credit or debit card in your Google or Apple wallet then you are almost certainly an adult, because the security procedures in place would make it very difficult for a child to obtain a card and get it into a digital wallet.)

Given that, if these companies get forced to discern whether users are adults in order to serve appropriate content, it seems a no-brainer for them to provide free age verification as well.

My vision would be for the UK government to provide an anonymised age-verification router service. When a website requires you to verify your age in order to access some particular content, it could ask you which age-verification service you wish to use. It then sends a request to the government "middleman" that includes only the URL of the verification service. The router forwards the request anonymously to the specified server (no IP address logs are stored). If you are already logged in to the account, it will immediately return true or false to indicate whether you are an adult. If you are not logged in, you will be prompted to log in to your account with the service, and then it will return the answer. The government server then returns the answer to the original website.

That way, we can get free, anonymous verification.

I'm sure people will have issues with this idea, such as "do you trust the government server not to log details of your request instead of being anonymous?", to which I do not have a definitive answer, but I feel it is potentially a little better than having Google or Facebook knowing what sites I am visiting that need verification.

Anyone out there have any thoughts on this? I have only just had the idea pop into my head, so no serious thought has gone into it. There are probably issues that I have not thought about.
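A rough sketch of that router flow in TypeScript. Everything here is hypothetical (the endpoint shape, the single-bit response, the nonce scheme); it only illustrates the idea that the originating site learns one bit and the verification provider never sees who asked.

```typescript
interface VerificationRequest {
  providerUrl: string; // the age-verification service the user chose
  nonce: string;       // one-time token tying the answer back to this check
}

interface VerificationResult {
  nonce: string;
  isAdult: boolean; // the only fact that ever leaves the provider
}

// Government-run middleman: forwards only the nonce to the chosen provider
// and, by design, keeps no record of who asked or from where.
async function routeVerification(
  req: VerificationRequest,
): Promise<VerificationResult> {
  const res = await fetch(req.providerUrl, {
    method: "POST",
    headers: { "content-type": "application/json" },
    // Only the nonce is forwarded; no IP, cookies, or referrer.
    body: JSON.stringify({ nonce: req.nonce }),
  });
  const { isAdult } = (await res.json()) as { isAdult: boolean };
  return { nonce: req.nonce, isAdult };
}

// The originating website: it learns a single yes/no, never the user's identity.
async function checkAccess(providerUrl: string): Promise<boolean> {
  const nonce = crypto.randomUUID();
  const result = await routeVerification({ providerUrl, nonce });
  return result.nonce === nonce && result.isAdult;
}
```

The open question the commenter raises, trusting the router not to log, is exactly the part code cannot solve; it depends on governance and auditing of the middleman.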