
Ask HN: Is AI for online segregation ethical?

17 points by iosystem over 2 years ago
A friend informed me that AI is being used to separate certain individuals in online spaces. He says this use of the technology is quite new and is an evolution of shadowbanning people who otherwise add toxicity to discussions. The people shadowbanned in this instance are tricked into assuming they're not banned: the AI generates fake interactions with their real content, while non-shadowbanned users interact with each other's content. My friend said the technology is just the beginning and that certain governments will eventually use it on certain demographics throughout online spaces. I'm curious: how ethical is this?
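
For concreteness, here is a minimal sketch of the routing logic the question describes, written as hypothetical Python. Every name in it (Post, SHADOWBANNED, generate_fake_reply, feed_for, replies_for) is invented for illustration and does not come from any real platform.

    from dataclasses import dataclass, field

    @dataclass
    class Post:
        author: str
        text: str
        replies: list = field(default_factory=list)

    SHADOWBANNED = {"troll42"}  # assumed moderation list, purely illustrative

    def generate_fake_reply(post: Post) -> str:
        # Stand-in for an AI model that writes a plausible-looking reply.
        return f"Interesting take on {post.text[:30]!r}, thanks for posting."

    def feed_for(viewer: str, posts: list) -> list:
        # Other users never see a shadowbanned author's posts;
        # the banned author still sees their own posts as usual.
        return [p for p in posts if p.author not in SHADOWBANNED or p.author == viewer]

    def replies_for(viewer: str, post: Post) -> list:
        # The banned author is shown AI-generated engagement so the silence
        # never becomes noticeable; everyone else sees only real replies.
        if post.author in SHADOWBANNED and viewer == post.author:
            return post.replies + [generate_fake_reply(post)]
        return post.replies

    posts = [Post("alice", "an AI ethics question"), Post("troll42", "an inflammatory take")]
    print([p.author for p in feed_for("bob", posts)])   # ['alice']
    print(replies_for("troll42", posts[1]))             # one AI-generated reply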

9 comments

BjoernKW over 2 years ago
It's unethical, without a doubt.

As described, it's not even about segregating different audiences, but about sequestering and silencing specific people, yet keeping them engaged at the same time, perhaps to be able to continue to show them ads and keep up the platform's KPIs.

In a way, that'd be quite similar to dating platforms creating fake profiles to keep their members engaged, which I doubt anyone would consider ethical behaviour.

If an online service doesn't want certain types of opinions, ideas, or people on their platform, that's absolutely fine. Just tell them, so they can take their business elsewhere.
scantis over 2 years ago
Censorship is already unethical. One would have to establish a need for censorship first. Can a book be written that causes people to become a danger and thus needs to be forbidden? Is it even possible that a text on a computer screen can cause harm to someone?

I'm not aware of this, and any argument for censorship is usually quite a long stretch of hypotheticals.

For violating the TOS, people should know what rule they have violated, for example doxing, spamming, or just being insufferably rude. Punishment without explanation or reason is obviously unethical.

I think shadowbanning people and then actively keeping them in the loop can hardly be argued to be ethical. Shadowbanning itself is barely ethical.
TigeriusKirk over 2 years ago
Sounds similar to the "heaven banning" concept. That was fictional as far as we know, but it's obviously technologically possible. https://twitter.com/nearcyan/status/1532076277947330561
smoldesu over 2 years ago
Did your friend ever produce any evidence for this?
bastawhiz over 2 years ago
What's the incentive to build tech like this? Or, why is it valuable to keep someone who is shadowbanned from knowing they're shadowbanned because they don't get reactions?

This doesn't make sense to me, because it seems like a huge amount of investment to...what? Get them to see more ads instead of getting sick of people not reacting to their trolling and leaving entirely? If the return on investment is high enough for this to be practical, it suggests the community has a toxicity problem that itself needs to be addressed. What makes so many people so angry and toxic?
woojoo666 over 2 years ago
This seems trivial to bypass. The goal of deceiving banned users is to discourage them from creating new accounts. But I'm sure the system will eventually be exposed, for example if a shadowbanned user is friends with a non-banned user and they notice discrepancies. After that, people will just regularly create new accounts and check whether the new account can interact with the old one, to see whether the old one has been banned.
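
As a rough sketch of the probe described in that comment, here is hypothetical Python written against an invented client stub, not any real platform API; all names are made up for illustration.

    from dataclasses import dataclass

    @dataclass
    class PlatformClient:
        # Stub standing in for an authenticated API session on a fresh account.
        visible_post_ids_by_author: dict

        def fetch_post_ids(self, author: str) -> set:
            return self.visible_post_ids_by_author.get(author, set())

    def looks_shadowbanned(fresh_account: PlatformClient, old_username: str,
                           known_post_ids: set) -> bool:
        # If posts the old account is known to have made are invisible to a
        # brand-new account, the old account is probably shadowbanned
        # (or those posts were removed outright).
        return not (known_post_ids & fresh_account.fetch_post_ids(old_username))

    fresh = PlatformClient({"someone_else": {"p9"}})
    print(looks_shadowbanned(fresh, "old_me", {"p1", "p2"}))  # True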
chatterhead over 2 years ago
It's not ethical at all. Neither is annoyance functionality designed to churn unmonetized users, nor the habit of some (Apple/Windows) of constantly changing features and UI functionality to force adaptation, which creates more buy-in and has the effect of tricking people into continued use of a product that is otherwise lackluster.

Pavlov has nothing on the business of training people through computing.
bergenty over 2 years ago
In my opinion if it’s biological segregation it’s unethical but if it’s ideological segregation, absolutely go for it.
pannSun over 2 years ago
It is my dream that one day, and one day soon, social media companies uncorrupted by having to appease the uneducated masses will, secretly, present entirely fabricated realities to some, placing them in an invisible prison, while magnifying the message of others.

No misguided popular movement (or even factual information, lacking critical context of course) will be able to gain momentum, as its promoters will be unknowingly yelling into the void. On the other hand, movements and ideas that promote wellbeing and harmony will spread easily. It does not take advanced AI to categorize vocal people by politics, and quarantine those with hateful ideas.

This is our chance to safely defuse humanity's hateful instincts, and we should take it.

Best of all, it is completely ethical because, as a since-deleted post pointed out, it is vaguely hinted at in the Terms of Service. Voluntary agreements are the highest form of ethics. They are beyond moral question, and in fact the whole purpose of society is to enable and enforce them.

Those who don't like it are free to stop participating in online society. With such an easy and safe way out, there is absolutely nothing to worry about.