Do grocery stores “capitalize on vulnerability” when they place name-brand products at eye level?<p>Do carmakers “capitalize on vulnerability” when they advertise pickup trucks as big tough vehicles for tough, outdoorsy men?<p>Do providers of health insurance for pets “capitalize on vulnerability” when they say you need to buy their product if you love your pet?<p>At some point people need to be responsible for their own decisions. And I can’t get that worked up about Meta’s free product.
The dilemma parents are grappling with is this: tablets and smartphones, while beneficial for children's learning and socializing, also expose them to constant marketing and propaganda, even within the confines of their bedrooms, as they attempt to connect with peers or complete tasks.<p>Previously, children's exposure to marketing and propaganda was mostly confined to their entertainment hours, during which they watched television or read magazines. There was at least some hope for moderation. However, "apps" have blurred these boundaries, as the same devices used for education and social interaction are also channels for persistent advertising and messaging, making it harder to limit exposure to just "entertainment" time.
There is science driving the design of products to make them addictive.<p>For teen girls, the apps are designed to play on fears of being socially excluded. For teen boys, the apps are designed to fill their need to master skills.<p>The issue the government has to deal with regarding app addictions is self-harm attempts by girls (e.g. emergency room visits) and underperformance of boys in the real world (e.g. low college enrollment).<p>If you are trying to make an addictive app, this is a good reference to understand the science: <a href="https://www.amazon.com/Hooked-How-Build-Habit-Forming-Products/dp/1591847788" rel="nofollow noreferrer">https://www.amazon.com/Hooked-How-Build-Habit-Forming-Produc...</a><p>BJ Fogg is a good reference too: <a href="https://www.bjfogg.com" rel="nofollow noreferrer">https://www.bjfogg.com</a>
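To make the "variable reward" step of the Hooked loop (trigger, action, variable reward, investment) concrete, here's a toy sketch. The probabilities and reward names are invented for illustration and are obviously not anyone's real code; the point is only that an unpredictable payoff schedule is what keeps people pulling the refresh lever.

```python
# Toy illustration of the Hooked loop's variable-reward step.
# A fixed schedule ("every 5th refresh shows something good") is easy to
# satiate on; a random one is the slot-machine dynamic the book describes.
import random

def refresh_feed():
    """Simulate one feed refresh on a variable-ratio reward schedule."""
    if random.random() < 0.3:  # ~30% of refreshes pay off, unpredictably
        return random.choice(["new like", "new follower", "friend posted"])
    return None  # most refreshes show nothing, which is part of the design

for i in range(10):
    reward = refresh_feed()
    print(f"refresh {i}: {reward or 'nothing new'}")
```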
Please correct me if I'm wrong, but it's my understanding that Meta (and most other big tech companies) have long been in the business of hiring a large number of recent social science Ph.D. graduates from top U.S. universities. People with a lot of knowledge of statistics and some domain-specific knowledge in their fields that could possibly be applicable to their job. The whole purpose of doing this is to create teams of marketing people doing in-house research to figure out how to best manipulate others by maximizing "engagement" or whatever other metric.<p>Isn't this just how all big tech companies operate as a normal business practice? Certainly Youtube is no better when it comes to targeted content and advertisements to children to their detriment.<p>My main point is that I don't think it makes any difference whether Meta has some internal document proving that they specifically target children with these practices. The problem is so much bigger than a single policy or company, and legislatures need to figure out a better way to address the overarching problems. I don't have much faith that these one-off lawsuits will make that much of an impact given that they almost always lead to some fine or settlement that is an acceptable business loss for the company.<p>I'm all for Meta being decimated by a thousand cuts in the form of lawsuits from various levels of government, but at best it would just be replaced with something else unless more regulation exists at the top levels (US / EU / etc).
It seems like basically all marketing and advertising is human pen-testing. Thought is serialized into video or audio and then deserialized back into thought, which is evaluated. Sometimes this evaluation causes downstream thoughts and actions (including propagating the vulnerability). The question is whether the resulting action is 'organic' or an RCE that overrides the agency of the actor.<p>I think a core class that should be taught is how to safely deserialize sensory input so as to avoid RCEs. Or, basically, 'patching' these known vulnerabilities.
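To make the analogy concrete, here is a minimal Python sketch of the "patch" idea applied to data rather than minds (the field names are made up for illustration): parse untrusted input with a dumb format and check it against explicit expectations before acting on it, instead of handing it something that can execute on load.

```python
# Minimal sketch of the deserialization analogy. pickle.loads on untrusted
# bytes is a literal RCE: the payload can execute arbitrary code on load.
# The "patched" path parses a dumb format (JSON) and validates the result
# against an explicit expectation before acting on it.
import json
import pickle  # shown only to illustrate the unsafe path; don't do this


def unsafe_load(untrusted_bytes: bytes):
    # Equivalent to accepting a message and letting it run your thoughts for you.
    return pickle.loads(untrusted_bytes)  # arbitrary code execution risk


ALLOWED_FIELDS = {"headline", "body"}  # what we expect a message to contain


def safe_load(untrusted_text: str) -> dict:
    """Parse, then check the input against expectations before trusting it."""
    data = json.loads(untrusted_text)  # parsing alone isn't trust
    if not isinstance(data, dict):
        raise ValueError("unexpected shape")
    unexpected = set(data) - ALLOWED_FIELDS
    if unexpected:
        raise ValueError(f"unexpected fields: {unexpected}")
    return data  # only now is it evaluated as "thought"


print(safe_load('{"headline": "Buy now!", "body": "limited time offer"}'))
```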
The nuance with social media ('digital' media generally, I guess) is how hard it is for third parties to verify/audit/understand wtf is going on, to be able to prove whether anything negative is happening.<p>With broadcast media like TV, I can see what the programming is, and I can watch the same ads that every other house is getting broadcast to know what's being shown to kids (and research companies do this). Similarly for retail media, I can go to a store and see what a retailer is doing.<p>For Meta, with AI newsfeeds and targeted ads, it's impossible to know exactly what any one person's experience is. I don't know the veracity of this specific case, but at a minimum I think there should be legislation that forces these companies to be auditable in some way...
Should come as no surprise, honestly.<p>Above all else, since going public, Meta is in the business of making money. It's not illegal to target users' vulnerabilities in order to get them to spend more time or money on the platform. It's unethical as hell, but it's business 101: the shareholders would revolt if Zuck came out and said "here's this opportunity to make you all a ton of money, but we're placing our personal ethics above doing this, so we're not". He'd get sued for breach of fiduciary duty.<p>Now, are Meta's product strategies unethical (or questionably ethical), harmful to society, and setting a bad precedent? Yeah, I'd agree with that. But the market and shareholders like money.
I for one hope this case reaches the Supreme Court and is struck down as egregious government overreach. This case has no proof of net harm to teenagers from social media.<p>This case is basically projecting everyone's misplaced hatred of social media without a proper controlled experiment on its benefits and harms to society.<p>You can't do controlled experiments on humans, and hence the states have no case except overreach. If they really want to cater to their constituents, they should pass specific laws.
Here is the now unredacted complaint:<p><a href="https://ia800508.us.archive.org/12/items/gov.uscourts.cand.419868/gov.uscourts.cand.419868.73.2.pdf" rel="nofollow noreferrer">https://ia800508.us.archive.org/12/items/gov.uscourts.cand.4...</a><p>Employee names are still redacted. Given Zuckerberg's views on privacy, one wonders why they should remain "anonymous".
The problems of social media go far beyond exploiting teens. It used to be you only saw mob behavior when a large group got together. With social media, you can have a virtual mob going all the time, and ready to materialize IRL for kinetic impact. We now just shrug when things like this happen (Jewish high school teacher forced to go into hiding because of students rampaging).<p><a href="https://nypost.com/2023/11/25/metro/jewish-teacher-hides-in-queens-high-school-as-students-riot/" rel="nofollow noreferrer">https://nypost.com/2023/11/25/metro/jewish-teacher-hides-in-...</a>
Disclaimer: I worked at Meta for a time (not in areas relevant to this lawsuit). My experience was that many people working there also have children, cared deeply about this issue, and wanted to find ways to solve it.
The only enforceable claim is regarding usage by minors under the age of 13. All other claims are "soft" violations: legal but unethical.<p>How do you regulate legal but unethical? You can't. So let's make it illegal. But how?<p>Maximum notifications per day? Deep introspection of the actual content? Good and bad influencers? Curfews? It's impossible to codify this into law, unless you're China.
Boy, this is so different from the Media Matters accusations against X. The WSJ fully reported on their methodology, gave Meta months to change, and partnered with a third party to verify the results. The advertiser responses here, to their ads showing next to arguably much more objectionable content, are quite different.
I would pay for an ad-free version of Facebook, Insta, and Twitter/X with control over the algorithm.<p>With Twitter, even if I pay, I still get the same number of ads.<p>I want to customize what is shown in my feed.
I'd argue that Facebook itself is protected 1A speech (as are the recommendations of the YouTube algorithm). It's not a consumer product, and it's not a defective one. Parents have parental controls, and they should educate themselves on how to effectively use them.<p>I suppose that the broader concern is over precisely what duties a company has to its customers. They obviously have the duty to be truthful when making offers, but every customer relationship will have an adversarial component where each party benefits at the other's expense (or at the expense of third parties). In cases like a bar serving alcohol to customers, there's usually some responsibility to prevent patrons from getting extremely intoxicated and getting in a car. But that case involves a clear signal that someone is dangerous. Facebook doesn't know if someone's grades are suffering or if they're having mental health issues. It doesn't know if it should tell the user to "touch grass".
Marxists have long argued that the problem with capitalism is <i>not</i> that it's the <i>cause</i> of humanity's social problems, but that it systemically <i>exploits</i> humanity's social problems.
Yet another hack job about the dangers of algorithms. Do these journalists not understand that Meta is worth at least several billion dollars? Do they not get how much value Meta and all its products have delivered to people all over the world? Just terrible reporting all around.