People love these proposals until they read the details and think of the consequences. Anything that requires "robust age-checks" means that everyone using the site must go through an ID check and validation process. No more viewing anything without first logging in via your ID-checked account.

> 1. Carry out robust age-checks to stop children accessing harmful content

> Our draft Codes expect much greater use of highly-effective age-assurance so that services know which of their users are children in order to keep them safe.

> In practice, this means that all services which do not ban harmful content, and those at higher risk of it being shared on their service, will be expected to implement highly effective age-checks to prevent children from seeing it. In some cases, this will mean preventing children from accessing the entire site or app. In others it might mean age-restricting parts of their site or app for adults-only access, or restricting children’s access to identified harmful content.

Before people try to brush aside these regulations as only applying to sites you don't think you use, note that the proposal is vague about what falls under the guidelines. It includes things like "harmful substances", meaning any discussion of drugs or mushrooms could be included, for example.

Think twice before encouraging regulations that would bring ID-checking requirements to large parts of the internet. If you enjoy viewing sites like Reddit or Hacker News or Twitter without logging in or handing over your ID, these proposals are not good for you at all.
The thing that always strikes me in all the reporting and discussion of the problems Ofcom is trying to solve is that no one seems to ask whether the problems are equally bad in other countries, especially non-English-speaking ones. And if they aren't, can and should whatever helps there be implemented in the UK?

I live in Norway and the problem doesn't seem so severe here. Or is it simply that English-speaking media is more willing to latch on to extreme events and make out that they are the norm?
Something important to keep in mind: most people never experience just how twisted these recommendation algorithms can get, because each of us gets an experience tailored to our developed tastes.

But these algorithms will totally curate wildly disturbing playlists of content, because they have learned that this can be incredibly addictive to minds unprepared for it.

And what's most sinister is how opaque the process is, to the degree that a parent can't track what is happening without basically watching their kids' activity full-time.

Idk if Ofcom is implementing this right or not, but I think there would be a much greater outcry if more people saw the breadth of these algorithms' toxicity.
It's quite obvious that Twitter/Google/Facebook/whoever do not have algorithms that scale to the point where they can genuinely curate their content. That has seemed obvious ever since Google bought YouTube.

Isn't it equally obvious that this has never been their priority? Nor has protecting copyright.
In my view, we need legislation to step in and enforce some level of algorithmic tuning. Modern algorithms drive engagement at all costs, regardless of whether it's healthy for the individual. I want to be able to tune the algorithm: to use a timeline feed instead, or to limit content to only the topics I subscribe to, and so on. We probably need parental controls that allow parents to enforce algorithm tuning as well.

A recent example of an algorithm going wrong is Reddit. Home used to show strictly a timeline feed of the subreddits you subscribed to. The most recent changes not only removed the timeline ordering of the feed, it now also injects subreddits you don't subscribe to and asks if you're interested in them.
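To make "algorithmic tuning" concrete, here's a minimal sketch of a user-controllable feed. All the names (Post, engagement_score, the mode flags) are made up for illustration; no platform exposes this today, which is rather the point:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    topic: str                # e.g. a subreddit name
    created_at: datetime
    engagement_score: float   # whatever the platform's ranker outputs

def build_feed(posts, subscriptions, mode="chronological"):
    """Return a feed the user controls, not the engagement ranker.

    mode="chronological": subscribed topics only, newest first.
    mode="engagement":    the platform's default, for comparison.
    """
    if mode == "chronological":
        subscribed = [p for p in posts if p.topic in subscriptions]
        return sorted(subscribed, key=lambda p: p.created_at, reverse=True)
    # Default behaviour: rank everything by predicted engagement,
    # including topics the user never subscribed to.
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)
```

A parental-control version would simply pin `mode` (and the subscription list) so the child's account can't switch back to the engagement ranker.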
It's curious how aligned this is with similar moves in Canada discussed here: https://news.ycombinator.com/item?id=40298552

For those unfamiliar, Ofcom is basically the UK telecoms regulator.
I have a better solution: tech firms must stop using toxic algorithms for everyone, not just children. Why are they allowed to use these practices in the first place? Why do we have to endure/tolerate this stuff that makes the internet a worse place?
Interesting... if you sob and moan to YT or Instagram about not having enough followers or views, they'll tell you to replace the word "algorithm" with "audience", i.e. people. It makes sense: if your content is not popular with people, no algorithm will surface it (recent tweaking of Instagram's algo notwithstanding). But if we follow that interpretation, we have to admit that it's not the algorithms that are toxic, but people. So what Ofcom is asking tech companies to do is "tame" toxic people. Good luck with that.

Parents have to realize that computers, phones, and tablets let sometimes unsavoury characters get in touch with their children. We do not allow strangers into daycare centres, schools, or children's hospitals, so why do we allow strangers unrestricted access to our children via the devices we give them? Parents need to be told to take responsibility for who has access to their children.
> We want children to enjoy life online. But for too long, their experiences have been blighted by seriously harmful content which they can’t avoid or control. Many parents share feelings of frustration and worry about how to keep their children safe. That must change.

Yes, stop letting kids stare at screens all day. Yes, you are a bad/lazy parent if you let the firehose of the Internet pipe straight into their heads.
- Ofcom sets out more than 40 practical steps that services must take to keep children safer
- Sites and apps must introduce robust age-checks to prevent children seeing harmful content such as suicide, self-harm and pornography
- Harmful material must be filtered out or downranked in recommended content (a rough sketch of what that could mean follows this list)
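A rough sketch of what filtering and downranking could look like inside a recommender, assuming each item already carries a classifier's harm score. The field names and thresholds here are illustrative, not anything from Ofcom's codes:

```python
def rerank(items, block_at=0.9, downrank_at=0.5, penalty=0.01):
    """Drop items a harm classifier flags with high confidence, and
    heavily downrank borderline ones instead of surfacing them at
    full weight. Each item is a dict with 'relevance' and 'harm_score'.
    """
    kept = []
    for item in items:
        if item["harm_score"] >= block_at:
            continue  # filtered out of recommendations entirely
        score = item["relevance"]
        if item["harm_score"] >= downrank_at:
            score *= penalty  # still reachable, but rarely recommended
        kept.append((score, item))
    return [item for _, item in sorted(kept, key=lambda t: t[0], reverse=True)]
```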
Personally, I am against the idea of adults having to prove their age before being able to access certain types of content, particularly if that means giving up their identity. I am not, however, averse to the idea that big tech companies should be more responsible for what they are serving to youngsters.

Yes, I know there are plenty of tools to allow parents to restrict what sites their children visit, etc., but not all parents are tech-savvy enough to set this stuff up. Plus, you could still allow a child to access YouTube, for example, but then find they are getting unsavoury recommendations from the algorithm.

This made me think about the fact that the major platforms (Alphabet, Amazon, Apple, Meta, and Microsoft) gather enough data on their users that they almost certainly know roughly how old someone is, even if no age has been provided to them. They can use all the signals they have available to produce a score for how certain they are that an individual is, or is not, legally an adult.

(As an example, if you have a credit or debit card in your Google or Apple wallet then you are almost certainly an adult, because it would be very difficult for a child to obtain a card and get it into a digital wallet given the security procedures that are in place.)

Given that, if these companies get forced to discern whether users are adults in order to serve appropriate content, then it seems a no-brainer for them to provide free age verification as well.

My vision would be for the UK government to provide an anonymised age-verification router service. When a website requires you to verify your age in order to access some particular content, it could ask you which age-verification service you wish to use. It then sends a request to the government "middleman" that includes only the URL of the verification service. The router forwards the request anonymously to the specified server (no IP address logs are stored). If you are already logged in to the account, it will immediately return true or false to indicate whether you are an adult. If you are not logged in, you will be prompted to log in to your account with the service, and then it will return the answer. The government server then returns the answer to the original website.

That way, we can get free, anonymous verification.

I'm sure people will have issues with this idea, such as "do you trust the government server not to log details of your request instead of being anonymous?", to which I do not have a definitive answer, but I feel it is potentially a little better than having Google or Facebook know which sites I am visiting that need verification.

Anyone out there have any thoughts on this? I have only just had the idea pop into my head, so no serious thought has gone into it. There are probably issues that I have not thought about.
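For what it's worth, here is a minimal sketch of the message flow I have in mind. Every name in it is made up for illustration (there is no such government API); the point is just that the router relays a bare yes/no and deliberately learns nothing else:

```python
from dataclasses import dataclass

@dataclass
class Verifier:
    """Stand-in for a big platform's age-assurance endpoint."""
    name: str

    def is_account_adult(self, session: str) -> bool:
        # The platform checks its own signals (e.g. a card in the
        # user's wallet). Hard-coded True for the sketch.
        return True

class AnonymisingRouter:
    """Government-run middleman. It learns which verifier was chosen,
    but it stores no IP logs and is never told what content the
    originating website is gating."""

    def __init__(self, trusted_verifiers: dict[str, Verifier]):
        self.trusted_verifiers = trusted_verifiers

    def verify_age(self, verifier_name: str, user_session: str) -> bool:
        verifier = self.trusted_verifiers[verifier_name]  # unknown -> KeyError
        # Forward the request; relay back only a bare adult/not-adult answer.
        return verifier.is_account_adult(user_session)

# The gated website receives only this boolean, never an identity:
router = AnonymisingRouter({"bigco": Verifier("bigco")})
print(router.verify_age("bigco", "opaque-session-token"))  # -> True
```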