TechEcho

Ask HN: Is AI for online segregation ethical?

17 points by iosystem over 2 years ago
A friend informed me that AI is being used to separate certain individuals in online spaces. He says this use of the technology is quite new and is an evolution of shadowbanning people who would otherwise add toxicity to discussions. In this instance, shadowbanned users are tricked into believing they're not banned by fake AI-generated interactions with real content, while non-shadowbanned users interact with each other and the content normally. My friend said the technology is just the beginning and will eventually be used by certain governments against certain demographics throughout online spaces. I'm curious: how ethical is this?

9 comments

BjoernKW over 2 years ago
It's unethical, without a doubt.

As described, it's not even about segregating different audiences, but about sequestering and silencing specific people, yet keeping them engaged at the same time, perhaps to be able to continue to show them ads and keep up the platform's KPIs.

In a way, that'd be quite similar to dating platforms creating fake profiles to keep their members engaged, which I doubt anyone would consider ethical behaviour.

If an online service doesn't want certain types of opinions, ideas, or people on their platform, that's absolutely fine. Just tell them, so they can take their business elsewhere.
scantis over 2 years ago
Censorship is already unethical; one would have to establish a need for censorship first. Can a book be written that causes people to become a danger and thus needs to be forbidden? Is it even possible that a text on a computer screen can cause harm to someone?

I'm not aware of this, and any argument for censorship is usually quite a long stretch of hypotheticals.

For TOS violations, people should know what rule they have violated: for example doxing, spamming, or just being insufferably rude. Punishment without explanation or reason is obviously unethical.

I think shadow banning people and then actively keeping them in the loop can hardly be argued to be ethical. Shadow banning itself is barely ethical.
TigeriusKirk over 2 years ago
Sounds similar to the "heaven banning" concept. That was fictional as far as we know, but it's obviously technologically possible. https://twitter.com/nearcyan/status/1532076277947330561
smoldesu over 2 years ago
Did your friend ever produce any evidence for this?
bastawhiz over 2 years ago
What's the incentive to build tech like this? Or, why is it valuable to keep someone who is shadow banned from realizing they're shadow banned because they don't get reactions?

This doesn't make sense to me, because it seems like a huge amount of investment to... what? Get them to see more ads instead of getting sick of people not reacting to their trolling and leaving entirely? If the return on investment is high enough for this to be practical, it suggests the community has a toxicity problem that itself needs to be addressed. What makes so many people so angry and toxic?
woojoo666 over 2 years ago
This seems trivial to bypass. The goal of deceiving banned users is to discourage them from creating new accounts. But I'm sure the system will eventually be exposed, for example if a shadowbanned user is friends with a non-banned user and they notice discrepancies. After that, people will just regularly create new accounts to check whether a new account can interact with the old one, to see if the old one has been banned.
chatterhead over 2 years ago
It's not ethical at all. Neither is annoyance functionality to churn unmonetized users, nor the habit of some (Apple/Windows) of constantly changing features and UI functionality to force adaptation, which creates more buy-in and has the effect of tricking people into continued use of a product that is otherwise lackluster.

Pavlov has nothing on the business of training people through computing.
bergenty over 2 years ago
In my opinion if it’s biological segregation it’s unethical but if it’s ideological segregation, absolutely go for it.
pannSun over 2 years ago
It is my dream that one day, and one day soon, social media companies uncorrupted by having to appease the uneducated masses will, secretly, present entirely fabricated realities to some, placing them in an invisible prison, while magnifying the message of others.

No misguided popular movement (or even factual information, lacking critical context of course) will be able to gain momentum, as its promoters will be unknowingly yelling into the void. On the other hand, movements and ideas that promote wellbeing and harmony will spread easily. It does not take advanced AI to categorize vocal people by politics, and quarantine those with hateful ideas.

This is our chance to safely defuse humanity's hateful instincts, and we should take it.

Best of all, it is completely ethical because, as a since-deleted post pointed out, it is vaguely hinted at in the Terms of Service. Voluntary agreements are the highest form of ethics. They are beyond moral question, and in fact the whole purpose of society is to enable and enforce them.

Those who don't like it are free to stop participating in online society. With such an easy and safe way out, there is absolutely nothing to worry about.