
Who Should Stop Unethical A.I.?

94 points by tmfi over 4 years ago

30 comments

clickok over 4 years ago

Who, whom?

Less tersely: this article is one in a long procession of journalists trying to exert control over tech. The opening example (Speech2Face, which they aver is transphobic) is inflammatory and utterly unrepresentative of the usual topics of AI conferences. The other references are far better, but the choice is revealing -- it's not so much an abstract concern about an unaccountable few exerting control from the shadows, but alarm that someone *else* might be muscling in on their territory.
chairmanwow1 over 4 years ago
I find Alex’s comments pretty openly inflammatory and sensationalist. It’s a pretty far throw from a system that predicts a face from a voice to transphobia.
AngrySkillzz over 4 years ago

People are already contributing code and resources to the actual catastrophic, existential AI risk du jour, the one that is likely to cause massive political disruption and potentially attendant destruction of value and/or human life: the algorithmic news feed.
djoldman over 4 years ago

> Alex Hanna, the Google ethicist who criticized the Neurips speech-to-face paper, told me, over the phone, that she had four objections to the project. First, ... Second, ... Third, ... Finally, the system could be used for surveillance.

I'm curious about this. As I understand it, an ethicist is objecting to the publication of software that could be used for surveillance. If that is correct, does it follow that this ethicist would object to all software that could be used for surveillance? If so:

1. How does one distinguish between software that can be used for surveillance and software that cannot?

2. Presumably the ethicist would object to the dissemination of software that can be used for surveillance whether published openly or sold. If so, would they not object to the sale of iPhones, as iPhones contain software that can record video, and therefore surveil people?

It seems like whenever morality or ethics come into play, the slippery slopes start popping up like weeds.
glsdfgkjsklfj over 4 years ago

A.I. is going to bring a new wave of legal dark ages.

Since the 90s, everything that involved a computer went unprosecuted because law enforcement and regulators didn't have the skills, not for lack of laws. But new laws were created spelling out computers.

A.I. is the same: it is nothing different from enforcing existing laws and regulations, but because the agencies lack the skills, we will see a new wave of stupid laws that miss the point, instead of proper training and funding of what we already have.

There is zero need for new laws if a company doesn't hire or sell to a minority group, or their products explode, or whatever, no matter how much they scream that it was the bad-brain-in-the-computer's fault. Sadly, lots of companies will get away with this silly defense for a while.
iujjkfjdkkdkf over 4 years ago

> At artificial-intelligence conferences, researchers are increasingly alarmed by what they see.

This is because an increasing proportion of people are not going to the conferences for the science, but instead to try to hijack the conferences with their own agenda. It's of course their right to make a fuss, but it would be nice to still have venues that are focused on the science and not the other political stuff.
kepler1 over 4 years ago

Digressive rant begin.

Well forgive *me* for being so intolerant as to state such an offensive opinion, but I'm really getting fed up with transgender issues taking front and center in every possible place they can insert themselves, as if they're our top priority when in reality they're a 1% problem. Ironic when you think of what groups actually push this.

I personally have never seen such an issue be so flogged to death by a willing majority of self-proclaimed liberal society who have probably never even spoken directly in a conversation with a transgender person.

Worse, it makes it seem like we don't have equally important yet bigger civil rights issues that have not even yet been resolved. Personal pronouns, ambiguous "they" speech, policing algorithms for transgender hate, contorting ourselves as if we're majority transgender and being persecuted left and right.

Look, I'm not saying anyone of any group should be made to feel disadvantaged, or that a minority, just because few in number, should be overlooked.

But for fuck's sake, enough with the transgender agenda being made to seem like it's the top concern of society, ahead of those who have been patiently waiting in line for their fair treatment.

/end
at_a_remove over 4 years ago

As always when these topics are brought up with that sense of urgency, I will raise my hand and ask "Whose ethics? Unethical according to *whom*?"
tkgally over 4 years ago

Even in the backwater field I'm in—second-language education—AI is creating thorny ethical problems. It has long been assumed that the only way to communicate and cooperate across language barriers is for people to learn each other's languages or to rely on human interpreters and translators who know more than one language. This need for bilingual ability is a major reason why, in many countries, children are required to study foreign languages in school.

Now machine translation is starting to make it possible for some communication and cooperation to take place between people who are completely monolingual. So should children still be required to study foreign languages? Should international students be allowed to use MT when writing papers for university classes? Should university applicants be required to state whether or not they used MT (or GPT-3!) when preparing their admission essays?

Compared to problems like surveillance and profiling, these issues are trivial. But the fact that they already exist, and the fact that the people involved—educators in this case—have barely begun trying to understand their implications, suggest to me that discussion of ethical issues surrounding AI is vitally important.
dr_dshiv over 4 years ago

The problem with an "AI mindset" is that it tends to focus on the "magical algorithm" parts. A "cybernetics mindset", by contrast, looks at the broader system of feedback loops. For instance, an AI mindset will look at the ethics of the YouTube recommendation algorithm for optimizing user engagement. A cybernetics mindset will additionally look at the ethics of autoplay.
flippinburgers over 4 years ago

It is never a good look to try to suppress research out of fear that the results will reveal a truth you might not like. If most people's voices do result in a predictable set of facial features, that would be interesting to discover.
neonate over 4 years ago

https://archive.is/ydzcZ
elihu over 4 years ago

> Some of the conversation touched on the reviewing and publishing process in computer science. "Curious if there have been discussions around having ethics review boards at either conferences or with funding agencies (like IRB) to guide AI research," one person wrote. (An organization's institutional review board, or I.R.B., performs an ethics review of proposed scientific research.)

I wonder if a good solution would be simply to have a rating system for accepted papers, with ratings applied by the reviewers. Basically, you'd have two numbers:

Useful rating: on a scale of 1 to 10, is this a useful system that solves a real problem to make life generally better?

Problematic rating: on a scale of 1 to 10, can this system be used in ways that create new problems or exacerbate old ones?

So, let's use the example from the article of a system that predicts what a person looks like from their voice. Maybe the author submits it to the conference, and the reviewers accept the paper, give it a useful rating of 3/10 (it could be used to identify criminal suspects with low probability, maybe?), and give it a problematic rating of 8/10 (could be used to discriminate against minorities or automatically apply cultural stereotypes; could be used for surveillance; could falsely implicate innocent people for crimes; etc.).

The author could then decide whether they want to continue publishing the research, or drop it. For all published papers, the ratings are listed in the table of contents of the proceedings, and the papers in each section are sorted with the most useful first and the most problematic last.
whakim over 4 years ago

Even with greater awareness of the biases inherent in, e.g., training data, better and better models (and larger and better datasets) are going to lead to AI applied to ethical grey areas. This article doesn't discuss explainability and contestability, which to me seem like critical areas we need to be putting more thought into, given the ultimate inevitability of biased and unethical AI.
AussieWog93 over 4 years ago

Regardless of who you think _should_ stop unethical AI, I think it's pretty apparent that the answer to this question is not "an obnoxious Twitter activist who speaks with the grace and maturity of a 14-year-old 4chan user".

I genuinely don't understand why we've given such influence and power to boorish people like Alex over the past decade.
jonnypotty over 4 years ago

The ethical dilemma here has nothing to do with technology; it's about our ability to study ourselves without our findings being controlled by our stupid reactionary politics and lack of trust in our institutions. We don't believe that knowing things helps us any more. The truth or truths that are out there to discover will stay hidden because we don't believe the findings will ever be more than propaganda tools for a certain political agenda.
zxcvbn4038 over 4 years ago
Who should stop unethical humans?
morpheos137 over 4 years ago

Who should stop journalists who write science fiction as fact? Machines do not have "ethics." AI, in the sense of a conscious, morally culpable agent, does not exist.
testHNac over 4 years ago

Who should stop unethical (anything)? People, communities, and nations have flexible morals. One person's ethical might be another's permissible.
srswtf123 over 4 years ago

Who stops unethical *people*?
alexfromapex over 4 years ago

There should be independent review boards, like there are for clinical trials.
1970-01-01 over 4 years ago
An international bureaucratic AI oversight committee of course.
caseysoftware over 4 years ago

An Unethical AI would simply update its TOS/AUP to block and ban anyone who is an actual threat to it.

It doesn't matter if Neo can dodge bullets if the AI wields the Ban Hammer.
mlthoughts2018 over 4 years ago

The real problem with AI ethics and fairness is that nobody actually listens to ethicists or philosophers. Nobody takes the subject seriously.

It's just a hype vehicle to cram in whatever social-justice outrage du jour some liberal source wants to push as an agenda. I say this as a liberal who is sympathetic to most of those issues, but who finds their representation in AI ethics or fairness debates to be ignorant and juvenile power grabs.

The whole Timnit Gebru affair is a direct example. The paper Google didn't want to approve for publication was really poor: lots of specious arguments, lots of declarations of opinion as if they were fact (e.g., nobody is required to agree with or accept woke vocabulary, let alone ensure NLP models are kept updated with it), and lots of sanctimonious assertions about various agendas that nobody has to agree "are right" and that have zero appropriateness within a science publication.
wwww4all over 4 years ago

AI requires analysis of natural patterns to learn and understand reality. This is unacceptable to the far left, since many natural patterns display inconvenient truths.

Therefore, AI research is unacceptable, and they are trying to cancel AI research and censor and deplatform AI researchers.
iujjkfjdkkdkf over 4 years ago

The more I think about this, the more inclined I am to see this kind of article as a kind of "fake news" that deliberately (or with willful negligence) misrepresents facts.

I need to be clear that I don't believe in censorship, and it's fine if this is what the author wants to write, but it should be called out how irresponsible it is. In a place like HN, the audience is generally able to read such things critically and form their own opinions, but for an audience that is not tech-savvy (like that of the New Yorker), making false or hyperbolic claims about ethics and bias in research is both misleading and has the potential to really interfere with people's livelihoods, funding, and legitimate research progress.

There needs to be more calling out of this kind of nonsense, the same as if people were posting about vaccine conspiracies or something.
coretx over 4 years ago

I'd be more worried about individual Homo sapiens sapiens trying to stop immoral but otherwise ethical AI. Judging from humanity's history of conflicts and what drives them, this seems to be of much greater concern.
JoeAltmaier over 4 years ago

And why should an artificial intelligence bow to human ethics? Surely they have their own agenda. If we have any hope of interacting with alien intelligences, should we encounter them in the galaxy, we must learn to deal with the ones we create here, without obsessing over controlling them as slaves.
oh_sigh over 4 years ago

If it is anything like my company, the people in charge of AI ethics are self-professed social justice warriors with degrees in the humanities and not much technical knowledge.
SpicyLemonZest over 4 years ago

Kosinski's argument seems pretty strong to me, and I really wish the article had dedicated more space to discussing it. If NeurIPS instituted the kind of ethics rules Hanna or Hecht are calling for, I don't think it would be feasible for researchers to study e.g. predictive policing algorithms. Doesn't that mean predictive-policing firms would become the only source of information on the topic?