
Judges rule Big Tech's free ride on Section 230 is over

468 points by eatonphil 9 months ago

57 comments

nsagent 9 months ago
The current comments seem to say this rings the death knell of social media and that this just leads to government censorship. I'm not so sure.

I think the ultimate problem is that social media is not unbiased — it curates what people are shown. In that role they are no longer an impartial party merely hosting content. It seems this ruling is saying that the curation being algorithmic does not absolve the companies from liability.

In a very general sense, this ruling could be seen as a form of net neutrality. Currently social media platforms favor certain content while down-weighting other content. Sure, it might be at a different level than peer agreements between ISPs and websites, but it amounts to a similar phenomenon when most people interact on social media through the feed.

Honestly, I think I'd love to see what changes this ruling brings about. HN is quite literally the only social media site (loosely interpreted) I even have an account on anymore, mainly because of how truly awful all the sites have become. Maybe this will make social media more palatable again? Maybe not, but I'm inclined to see what shakes out.
Animats 9 months ago
This turns on what TikTok "knew":

> "But by the time Nylah viewed these videos, TikTok knew that: 1) 'the deadly Blackout Challenge was spreading through its app,' 2) 'its algorithm was specifically feeding the Blackout Challenge to children,' and 3) several children had died while attempting the Blackout Challenge after viewing videos of the Challenge on their For You Pages. App. 31–32. Yet TikTok 'took no and/or completely inadequate action to extinguish and prevent the spread of the Blackout Challenge and specifically to prevent the Blackout Challenge from being shown to children on their [For You Pages].' App. 32–33. Instead, TikTok continued to recommend these videos to children like Nylah."

We need to see another document, "App. 31–32", to see what TikTok "knew". Could someone find that, please? A PACER account may be required. Did they ignore an abuse report?

See also Gonzalez v. Google (2023), where a similar issue reached the U.S. Supreme Court. [1] That was about whether recommending videos which encouraged the viewer to support the Islamic State's jihad led someone to go fight in it, where they were killed. The Court rejected the terrorism claim and declined to address the Section 230 claim.

[1] https://en.wikipedia.org/wiki/Gonzalez_v._Google_LLC
delichon 9 months ago
> TikTok, Inc., via its algorithm, recommended and promoted videos posted by third parties to ten-year-old Nylah Anderson on her uniquely curated "For You Page." One video depicted the "Blackout Challenge," which encourages viewers to record themselves engaging in acts of self-asphyxiation. After watching the video, Nylah attempted the conduct depicted in the challenge and unintentionally hanged herself. — https://cases.justia.com/federal/appellate-courts/ca3/22-3061/22-3061-2024-08-27.pdf?ts=1724792413

An algorithm accidentally enticed a child to hang herself. I've got code running on dozens of websites that recommends articles to read based on user demographics. There's nothing in that code that would or could prevent an article about self-asphyxiation being recommended to a child. It just depends on the clients that use the software not posting that kind of content, on people with similar demographics to the child not reading it, and on a child who gets the recommendation not reading it and acting it out. If those assumptions fail, should I or my employer be liable?
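delichon's point — that a demographics-and-interests recommender has no inherent notion of content safety — can be made concrete. Below is a minimal, hypothetical sketch (invented names, not delichon's actual code): the ranking function scores candidates only on interest overlap, so nothing in the function itself could stop a harmful article from being recommended to a child whose interest profile happens to match it.

```python
from dataclasses import dataclass, field

@dataclass
class User:
    age: int
    interests: set[str] = field(default_factory=set)

@dataclass
class Article:
    title: str
    tags: set[str]

def recommend(user: User, articles: list[Article], k: int = 3) -> list[Article]:
    """Rank articles purely by interest-tag overlap.

    Nothing here inspects what the content *is*: no safety gate,
    no age check, no topic blocklist. A harmful article ranks exactly
    as high as a benign one whenever its tags match the user.
    """
    return sorted(articles,
                  key=lambda a: len(a.tags & user.interests),
                  reverse=True)[:k]
```

Any safety property has to come from outside a function like this — from the content the clients post and the filters layered on top — which is exactly the gap the ruling puts in question.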
mjevans 9 months ago
> The Court held that a platform's algorithm that reflects "editorial judgments" about "compiling the third-party speech it wants in the way it wants" is the platform's own "expressive product" and is therefore protected by the First Amendment.
>
> Given the Supreme Court's observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others' content via their expressive algorithms, it follows that doing so amounts to first-party speech under Section 230, too.

I've agreed for years. It's a choice in selection rather than a 'natural consequence' such as a chronological, threaded, or even '__end user__ upvoted/moderated' (outside the site's control) weighted sort.
hn_acker 9 months ago
For anyone making claims about what the authors of Section 230 intended or the extent to which Section 230 applies to targeted recommendations by algorithms: the authors of Section 230 (Ron Wyden and Chris Cox) wrote an amicus brief [1] for Gonzalez v. Google (2023). Here is an excerpt from the corresponding press release [2] by Wyden:

> "Section 230 protects targeted recommendations to the same extent that it protects other forms of content presentation," the members wrote. "That interpretation enables Section 230 to fulfill Congress's purpose of encouraging innovation in content presentation and moderation. The real-time transmission of user-generated content that Section 230 fosters has become a backbone of online activity, relied upon by innumerable Internet users and platforms alike. Section 230's protection remains as essential today as it was when the provision was enacted."

[1] [PDF] https://www.wyden.senate.gov/download/wyden-cox-amicus-brief-section-230

[2] https://www.wyden.senate.gov/news/press-releases/sen-wyden-and-former-rep-cox-urge-supreme-court-to-uphold-precedent-on-section-230
Xcelerate 9 months ago
I'm not at all opposed to implementing *new* laws that society believes will reduce harm to online users (particularly children).

However, if Section 230 is on its way out, won't this just benefit the largest tech companies that already have massive legal resources and the ability to afford ML-based or manual content moderation? The barriers to entry into the market for startups will become insurmountable. Perhaps I'm missing something here, but it sounds like the existing companies essentially got a free pass with regard to liability of user-provided content and had plenty of time to grow, and now the government is pulling the ladder up after them.
octopoc 9 months ago
> In other words, the fundamental issue here is not really whether big tech platforms should be regulated as speakers, as that's a misconception of what they do. They don't speak, they are middlemen. And hopefully, we will follow the logic of Matey's opinion, and start to see the policy problem as what to do about that.

This is a pretty good take, and it relies on pre-Internet legal concepts like distributor and producer. There's this idea that our legal / governmental structures are not designed to handle the Internet age and therefore need to be revamped, but this is a counterexample that is both relevant and significant.
tboyd47 9 months ago
Fantastic write-up. The author appears to be making more than a few assumptions about how this will play out, but I share his enthusiasm for the end of the "lawless no-man's-land" (as he put it) era of the internet. It comes at a great time too, as we're all eagerly awaiting the AI-generated content apocalypse. Just switch one apocalypse for a kinder, more human-friendly one.

> So what happens going forward? Well we're going to have to start thinking about what a world without this expansive reading of Section 230 looks like.

There was an internet before the CDA. From what I remember, it was actually pretty rad. There can be an internet after, too. Who knows what it would look like. Maybe it will be a lot less crowded, less toxic, less triggering, and less addictive without these gigantic megacorps spending beaucoup dollars to light up our amygdalas with nonsense all day.
seydor 9 months ago
The ruling itself says that this is not about 230; it's about TikTok's curation and collation of the specific videos. TikTok is not held liable for the user content, but for the part that they do: their 'For You' section. I guess it makes sense — manipulating people is not OK, whether it's for political purposes, as Facebook and Twitter do, or whatever. So 230 is not over.

It would be nice to see those 'For You' and YouTube recommendations gone. Chronological timelines are the best, and will bring back some sanity. Don't like it? Don't follow it.

> Accordingly, TikTok's algorithm, which recommended the Blackout Challenge to Nylah on her FYP, was TikTok's own "expressive activity," id., and thus its first-party speech.
>
> Section 230 immunizes only information "provided by another[,]" 47 U.S.C. § 230(c)(1), and here, because the information that forms the basis of Anderson's lawsuit — i.e., TikTok's recommendations via its FYP algorithm — is TikTok's own expressive activity, § 230 does not bar Anderson's claims.
chucke1992 9 months ago
So basically closer and closer to governmental control over social networks. Seems like a global trend everywhere. Governments will define the rules by which communication services (and social networks) should operate.
skeltoac 9 months ago
Disclosures: I read the ruling before reading Matt Stoller's article. I am a subscriber of his. I have written content recommendation algorithms for large audiences. I recommend doing one of these three things.

Section 230 is not canceled. This is a significant but fairly narrow refinement of what constitutes original content, and Stoller's take ("The business model of big tech is over") is vastly overstating it.

Some kinds of recommendation algorithms produce original content (speech) by selecting and arranging feeds of other user-generated content, and the creators of the algorithms can be sued for harms caused by those recommendations. This correctly attaches liability to risky business.

The businesses using this model need to exercise a duty of care toward the public. It's about time they start.
ssalka 9 months ago
> There is no way to run a targeted ad social media company with 40% margins if you have to make sure children aren't harmed by your product.

More specific than being harmed by your product, Section 230 cares about *content you publish* and whether you are acting as a publisher (liable for content) or a platform (not liable for content). This quote is supposing what would happen if Section 230 were overturned. But in fact, there is a way that companies would protect themselves: simply don't moderate content at all. Then you act purely as a platform, and don't have to ever worry about being treated as a publisher. Of course, this would turn the whole internet into 4chan, which nobody wants. IMO, this is one of the main reasons Section 230 continues to be used in this way.
hnburnsy 9 months ago
To me this decision doesn't feel like it is demolishing 230, but reducing its scope — a scope that was expanded by other court decisions. Per the article, 230 said not liable for user content and not liable for restricting content. This case is about liability for reinforcing content.

Would love to have a timeline-only, non-reinforcing content feed.
blueflow 9 months ago
Might be a cultural difference (I'm not from the US), but leaving a 10-year-old unsupervised with content from (potentially malicious) strangers really throws me off.

Wouldn't this be the perfect precedent case for why minors should not be allowed on social media?
Smithalicious 9 months ago
Hurting kids, hurting kids, hurting kids -- but, of course, there is zero chance any of this makes it to the top 30 causes of child mortality. Much to complain about with big tech, but children hanging themselves is just an outlier.
janalsncm 9 months ago
Part of the reason social media has grown so big and been so profitable is that these platforms have scaled past their own abilities to do what normal companies are required to do.

Facebook has a "marketplace" but no customer support line. Google is serving people scam ads for months, leading to millions in losses. (Imagine if a newspaper did that.) And feeds are allowed to recommend content that would be beyond the pale if a human were curating it. But because "it's just an algorithm bro" we give them a pass because they can claim plausible deniability.

If fixing this means certain companies can't scale to a trillion dollars with no customer support, too bad. Google can't vet every ad? They could, but choose not to. Figure it out.

And content for children should have an even higher bar than that. Kids should not be dying from watching videos.
ang_cire 9 months ago
This is wonderful news.

The key thing people are missing is that TikTok is not being held responsible for *the video content itself*; they are being held responsible for their own code's actions. The video creator didn't share (or even attempt to share) the video with the victim. TikTok did.

If adults want to subscribe themselves to that content, that is their choice. Hell, if kids actively seek out that content themselves, I don't think companies should be responsible if they find it.

But if the company itself is the one proactively choosing to show that content to kids, that is 100% on them.

This narrative of being blind to the vagaries of their own code is playing dumb at best: we all know what the code we write does, and so do they. They just don't want to admit that it's *impossible* to moderate that much content themselves with *automatic recommendation algorithms*.

They could avoid this particular issue entirely by just showing people content they choose to subscribe to, but that doesn't allow them to inject content-based ads to a much broader audience by showing that content to people who have not expressed interest / subscribed to that content. And that puts this on them as a business.
WCSTombs 9 months ago
From the article:

> Because TikTok's "algorithm curates and recommends a tailored compilation of videos for a user's FYP based on a variety of factors, including the user's age and other demographics, online interactions, and other metadata," it becomes TikTok's own speech. And now TikTok has to answer for it in court. Basically, the court ruled that when a company is choosing what to show kids and elderly parents, and seeks to keep them addicted to sell more ads, they can't pretend it's everyone else's fault when the inevitable horrible thing happens.

If that reading is correct, then Section 230 isn't nullified, but there's something that isn't shielded from liability any more, which IIUC is basically the "Recommended For You"-type content feed curation algorithms. But I haven't read the ruling itself, so it could potentially be more expansive than that.

But assuming Matt Stoller's analysis there is accurate: frankly, I avoid those recommendation systems like the plague anyway, so if the platforms have to roll them back or at least be a little more thoughtful about how they're implemented, it's not necessarily a bad thing. There's no new liability for what users post (which is good overall IMO), but there can be liability *for the platform implementation itself* in some cases. But I think we'll have to see how this plays out.
kevwil 9 months ago
Whatever this means, I hope it means less censorship. That's all my feeble brain can focus on here: free speech good, censorship bad. :)
2OEH8eoCRo0 9 months ago
I love this.

Court: Social Media algos are protected speech

Social Media: Yes! Protect us

Court: Since you're speech you must be liable for harmful speech as anyone else would be

Social Media: No!!
renewiltord 9 months ago
If I spam filter comments am I subject to this? That is, the remaining comments are effectively like I was saying them?
dwallin 9 months ago
The link to the actual decision: https://cases.justia.com/federal/appellate-courts/ca3/22-3061/22-3061-2024-08-27.pdf?ts=1724792413
deafpolygon 9 months ago
Section 230 is alive and well, and this ruling won&#x27;t impact it. What will change is that US social media firms will move away from certain types of algorithmic recommendations. Tiktok is owned by Bytedance which is a Chinese firm, so in the long run - no real impact.
telotortium 9 months ago
Anyone know what the reputation of the Third Circuit is? I want to know if this ruling is likely to hold up in the inevitable Supreme Court appeal.

The Ninth Circuit has a reputation as flamingly progressive (see "Grants Pass v. Johnson", where SCOTUS overruled the Ninth Circuit, which had ruled that cities couldn't prevent homeless people from sleeping outside in public parks and sidewalks). The Fifth Circuit has a reactionary reputation (see "Food and Drug Administration v. Alliance for Hippocratic Medicine", which overruled a Fifth Circuit ruling that effectively revoked the FDA approval of the abortion drug mifepristone).
intended 9 months ago
Hoo boy.

So: platforms aren't publishers, they are distributors (like news stands or pharmacies).

So they are responsible for the goods they sell.

They aren't responsible for user content, but they are responsible for what they choose to show.

This is going to be dramatic.
carapace 9 months ago
Moderation doesn't scale; it's NP-complete or worse. Massive social networks *sans* moderation cannot work and cannot be made to work. Social networks require that the moderation system is a super-set of the communication system, and that's not cost effective (except where the two are co-extensive, e.g. Wikipedia, Hacker News, Fediverse). We tried it because of ignorance (in the first place) and greed (subsequently). This ruling is just recognizing reality.
janalsncm 9 months ago
This seems like it contradicts the case where YouTube wasn’t liable for recommending terrorist videos to someone.
jrockway 9 months ago
I'm not sure that Big Tech is over. Media companies have had a viable business forever. What happens here is that instead of going to social media and hearing about how to fight insurance companies, you'll just get NFL Wednesday Night Football Presented By TikTok.
game_the0ry 9 months ago
Pavel gets arrested, Brazil threatens Elon, now this.

I am not happy with how governments think they can dictate what internet users can and cannot see.

With respect to TikTok, parents need to have some discipline and not give smart phones to their ten-year-olds. You might as well give them a crack pipe.
drbojingle 9 months ago
There's no reason, as far as I'm concerned, that we shouldn't have a choice in algorithms on social media platforms. I want to be able to pick an open-source algorithm that I can understand the pros and cons of. Hell, let me pick 5. Why not?
falcolas 9 months ago
> the internet grew tremendously, encompassing the kinds of activities that did not exist in 1996

I guess that's one way to say that you never experienced the early internet. In three words: rotten dot com. Makes all the N-chans look like teenagers smoking on the corner, and Facebook et al. look like toddlers in padded cribs.

This will frankly hurt any and all attempts to host any content online, and if anyone can survive it, it will be the biggest corporations alone. Section 230 also protected ISPs and hosting companies (Linode, Hetzner, etc.) after all.

Their targeting may not be intentional, but will that matter? Are they willing to be jailed in a foreign country because of their perceived inaction?
1vuio0pswjnm7 9 months ago
"In other words, the fundamental issue here is not really whether big tech platforms should be regulated as speakers, as that's a misconception of what they do. They don't speak, they are middlemen."

Parasites.
ratorx 9 months ago
I think a bigger issue in this case is the age. A 10-year-old should not have access to TikTok unsupervised, especially when the ToS states the 13-year age threshold, regardless of the law's opinion on moderation.

I think especially content for children should be much more severely restricted, as it is with other media.

It's pretty well-known that age is easy to fake on the internet. I think that's something that needs tightening as well. I'm not sure what the best way to approach it is though. There's a parental education aspect, but I don't see how general content on the internet can be restricted without putting everything behind an ID-verified login screen or mandating parental filters, which seems quite unrealistic.
tempeler 9 months ago
Finally, this goes toward the end of global social media. Jurisdiction cannot be used as a weapon. If you use it as a weapon, they won't hesitate to use it as a weapon against you.
drpossum 9 months ago
I hope this makes certain streaming platforms liable for the things certain podcast hosts say while they shovel money at and promote them above other content.
6gvONxR4sf7o 9 months ago
So under this new reading of the law, is it saying that AWS is still not liable for what someone says on reddit, but now reddit might be responsible for it?
Nasrudith 9 months ago
It is amazing how people were programmed to completely forget the meaning of Section 230 over the years just by repetition of the stupidest propaganda.
BurningFrog 9 months ago
Surely this will bubble up to the Supreme Court?

Once they've weighed in, we'll know if the "free ride" really is over, and if so what ride replaces it.
nness 9 months ago
> Because TikTok's "algorithm curates and recommends a tailored compilation of videos for a user's FYP based on a variety of factors, including the user's age and other demographics, online interactions, and other metadata," it becomes TikTok's own speech.

This is fascinating and raises some interesting questions about where the liability starts and stops, i.e. is "trending / top right now / posts from following" the same as a tailored algorithm per user? Does Amazon become culpable for products on their marketplace? Etc.

For good or for bad, this century's Silicon Valley was built on Section 230 and I don't foresee it disappearing any time soon. If anything, I suspect it will be supported or refined by future legislation instead of removed. No one wants to be the person who legislates away all online services...
tomcam 9 months ago
Have to assume dang is moderating his exhausted butt off, because the discussion on this page is vibrant and courteous. Thanks all!
rsingel 9 months ago
With no sense of irony, this blog is written on a platform that allows some Nazis, algorithmically promotes publishers, allows comments, and is thus only financially viable because of Section 230.

If you actually want to understand something about the decision, I highly recommend Eric Goldman's blog post:

https://blog.ericgoldman.org/archives/2024/08/bonkers-opinion-repeals-section-230-in-the-third-circuit-anderson-v-tiktok.htm
skeptrune 9 months ago
My interpretation of this is it will push social media companies to take a less active role in what they recommend to their users. It should not be possible to intentionally curate content while simultaneously avoiding the burden of removing content which would cause direct harm justifying a lawsuit. Could not be more excited to see this.
DidYaWipe 9 months ago
While this guy's missives are not always on target (his one supporting the DOJ's laughable and absurd case against Apple being an example of failure), some are on target... and indeed this ruling correctly calls out sites for exerting editorial control.

If you're going to throw up your hands and say, "Well, users posted this, not us!" then you'd better not promote or bury any content with any algorithm, period. These assholes (TikTok et al.) are now getting what they asked for with their abusive behavior.
linotype 9 months ago
Twitter sold at the perfect time. Wow.
theendisney 9 months ago
I put a few forums online that never got active users. What they did get was spam, plenty of it, a lot of it. We can imagine the sheer amount of garbage posted on HN, Reddit, Facebook, etc.

Deleting the useless garbage, one has to develop an idea of where the line is supposed to be. The bias there will eventually touch all angles of human discourse. As an audience matures it gets more obvious what they would consider interesting or annoying. More bias.

Then there are legal limits in each country, the "correct" religion, and nationalism.

Quite the shit storm.
nitwit005 9 months ago
I am puzzled why there are no arrests in this sort of case. Surely, convincing kids to kill themselves is a form of homicide?
endtime 9 months ago
Not that it matters, but I was curious and so I looked it up: the three-judge panel comprised one Obama-appointed judge and two Trump-appointed judges.
Devasta 9 months ago
This could result in the total destruction of social media sites. Facebook, TikTok, YouTube, Twitter, hell, even LinkedIn cannot possibly survive if they have to take responsibility for what users post.

Excellent news, frankly.
zmmmmm 9 months ago
What about "small tech"?

... because it's small tech that needs Section 230. If anything, retraction of 230 will be the real free ride for big tech, because it will kill all chance of threatening competition at the next level down.
oldgregg 9 months ago
Insane reframing. Big tech and politicians are pushing this, pulling the ladder up behind them-- X and new decentralized networks are a threat to their hegemony and this is who they are going after. Startups will not be able to afford whatever bullshit regulatory framework they force feed us. How about they mandate any social network over 10M MAU has to publish their content algorithms.. ha!
mikewarot 9 months ago
> There is no way to run a targeted ad social media company with 40% margins if you have to make sure children aren't harmed by your product.

So, we actually have to watch out for kids, and maybe only have a 25% profit margin? Oh, so terrible! /s

I'm 100% against the political use of censorship, but 100% for the reasonable use of government to promote the general welfare, secure the blessings of liberty for ourselves, and our posterity.
hello_computer 9 months ago
This is a typical anglosphere move: write another holy checklist (I mean, "Great Charter"), indoctrinate the plebes into thinking that they were made free because of it (they weren't), then as soon as one of the bulleted items leaves the regime's hiney exposed, have the "judges" conjure a new interpretation out of thin air for as long as they think the threat persists.

Whether it was Eugene Debs being thrown in the pokey, or every Japanese civilian on the west coast, or some harmless muslim suburbanite getting waterboarded, nothing ever changes. Wake me up when they actually do something to Facebook.
stainablesteel 9 months ago
TikTok in general is great at targeting young women.

The Chinese and Iranians are taking advantage of this, and that's not something I would want to entrust to them.
2OEH8eoCRo0 9 months ago
Fantastic! If I had three wishes, one of them might be to repeal Section 230.
trinsic2 9 months ago
When I see CEOs and CFOs going to prison for the actions of their corporations, then I'll believe laws actually make things better. Otherwise any court decision that says some action is now illegal is just posturing.
phendrenad2 9 months ago
I have no problem with this. Section 230 is almost 30 years old, from long before anyone could have imagined an ML algorithm curating user content.

Section 230 absolutely should come with an asterisk: if you train an algorithm to do your dirty work, you don't get to claim it wasn't your fault.
jmyeet 9 months ago
What I want to sink in for people is that whenever people talk about an "algorithm", they're regurgitating propaganda specifically designed to absolve the purveyor of responsibility for anything that algorithm does.

An algorithm in this context is nothing more than a reflection of what all the humans who created it designed it to do. In this case, it's to deny Medicaid to make money. For RealPage, it's to drive up rents for profit. Health insurance companies are using "AI" to deny claims and prior authorizations, forcing claimants to go through more hoops to get their coverage. Why? Because the extra hoops will discourage a certain percentage.

All of these systems come down to a waterfall of steps you need to go through. Good design will remove steps to increase the pass rate. Intentional bad design will add steps and/or lower the pass rate.

Example: in the early days of e-commerce, you had to create an account before you could shop. Someone (probably Amazon) realized they lost customers this way. The result? You could create a shopping cart all you want and you didn't have to create an account until you checked out. At this point you're already invested. The overall conversion rate is higher. Even later, registration itself became optional.

Additionally, these big consulting companies are nothing more than leeches designed to drain the public purse.