Who, whom?<p>Less tersely: this article is one in a long procession of journalists trying to exert control over tech.
The opening example (Speech2Face, which they aver is transphobic) is inflammatory and utterly unrepresentative of the usual topics of AI conferences.
The other references are far better, but the choice is revealing-- it's not so much an abstract concern about an unaccountable few exerting control from the shadows, but alarm that someone <i>else</i> might be muscling in on their territory.
I find Alex’s comments pretty openly inflammatory and sensationalist. It’s quite a leap from a system that predicts a face from a voice to transphobia.
People are already contributing code and resources to the actual catastrophic, existential AI risk du jour, the one that is likely to cause massive political disruption and potentially attendant destruction of value and/or human life: the algorithmic news feed.
>Alex Hanna, the Google ethicist who criticized the Neurips speech-to-face paper, told me, over the phone, that she had four objections to the project. First, ...Second, ....Third, ....Finally, the system could be used for surveillance.<p>I'm curious about this. As I understand it, an ethicist is objecting to the publication of software that could be used for surveillance. If that is correct, does it follow that this ethicist would object to all software that could be used for surveillance? If so:<p>1. How does one distinguish between software that can be used for surveillance and software that cannot?<p>2. Presumably the ethicist would object to the dissemination of software that can be used for surveillance whether published openly or sold. If so, would they not object to the sale of iPhones, as iPhones contain software that can record video, and therefore surveil people?<p>It seems like whenever morality or ethics come into play, the slippery slopes start popping up like weeds.
A.I. is going to bring a new wave of legal dark ages.<p>Since the 90s, everything that involved a computer went unprosecuted because law enforcement and regulators didn't have the skills, not for lack of laws. Yet new laws were created specifically spelling out computers.<p>A.I. is the same: it requires nothing different from enforcing existing laws and regulations, but because the agencies lack the skills, we will see a new wave of stupid laws that only miss the point, instead of proper training and funding for what we already have.<p>There is zero need for new laws when a company doesn't hire or sell to a minority group, or its products explode, or whatever, no matter how much it screams that it was the bad-brain-in-the-computer's fault. Sadly, lots of companies will get away with this silly defense for a while.
> At artificial-intelligence conferences, researchers are increasingly alarmed by what they see.<p>This is because an increasing proportion of attendees are not going to the conferences for the science but to try to hijack them with their own agenda. It's of course their right to make a fuss, but it would be nice to still have venues that are focused on the science and not on the other political stuff.
Digressive rant begin.<p>Well forgive <i>me</i> for being so intolerant as to state such an offensive opinion, but I'm really getting fed up with transgender issues taking front and center in every possible place they can insert themselves, as if they're our top priority when in reality they're a 1% problem. Ironic when you think of which groups actually push this.<p>I personally have never seen an issue so flogged to death by a willing majority of self-proclaimed liberal society, most of whom have probably never even had a direct conversation with a transgender person.<p>Worse, it makes it seem like we don't have equally important, and bigger, civil rights issues that have not yet been resolved. Personal pronouns, ambiguous "they" speech, policing algorithms for transgender hate, contorting ourselves as if we're majority transgender and being persecuted left and right.<p>Look, I'm not saying anyone of any group should be made to feel disadvantaged. Or that a minority should be overlooked just because it is few in numbers.<p>But for fuck's sake, enough with the transgender agenda being made to seem like it's the top concern of society, ahead of those who have been patiently waiting in line for their fair treatment.<p>/end
As always when these topics are brought up with that sense of urgency, I will raise my hand and ask "Whose ethics? Unethical according to <i>whom</i>?"
Even in the backwater field I'm in—second-language education—AI is creating thorny ethical problems. It has long been assumed that the only way to communicate and cooperate across language barriers is for people to learn each other’s languages or to rely on human interpreters and translators who know more than one language. This need for bilingual ability is a major reason why, in many countries, children are required to study foreign languages in school.<p>Now machine translation is starting to make it possible for some communication and cooperation to take place between people who are completely monolingual. So should children still be required to study foreign languages? Should international students be allowed to use MT when writing papers for university classes? Should university applicants be required to state whether or not they used MT (or GPT-3!) when preparing their admission essays?<p>Compared to problems like surveillance and profiling, these issues are trivial. But the fact that they already exist, and the fact that the people involved—educators in this case—have barely begun trying to understand their implications, suggest to me that discussion of ethical issues surrounding AI is vitally important.
The problem with an "AI mindset" is that it tends to focus on the "magical algorithm" parts. A "Cybernetics mindset", by contrast, looks at the broader system of feedback loops. For instance, an AI mindset will look at the ethics of the Youtube recommendation algorithm for optimizing user engagement. A cybernetics mindset will additionally look at the ethics of autoplay.
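As a toy sketch of that distinction (the names and the "engagement" scoring below are made up for illustration, not anything YouTube actually runs):

    # Toy sketch only: hypothetical names, not a real recommender system.
    def topic(item: str) -> str:
        return item.split("_")[0]

    # "AI mindset" scope: the ranking function, evaluated in isolation.
    def recommend(history: list[str], catalog: list[str]) -> str:
        # Stand-in for an engagement-optimizing model: favor whichever
        # topic the viewer has already watched the most.
        candidates = [c for c in catalog if c not in history]
        return max(candidates, key=lambda c: sum(topic(c) == topic(h) for h in history))

    # "Cybernetics mindset" scope: the closed loop wrapped around that function.
    def autoplay_session(seed: str, catalog: list[str], steps: int = 4) -> list[str]:
        history = [seed]
        for _ in range(steps):
            if len(history) == len(catalog):
                break
            nxt = recommend(history, catalog)  # the algorithm's pick...
            history.append(nxt)                # ...is fed straight back in as the next
                                               # "watch", so the loop, not just the model,
                                               # steers where the viewer ends up.
        return history

    print(autoplay_session("conspiracy_1",
                           ["cats_1", "cats_2", "news_1", "conspiracy_1", "conspiracy_2"]))

An ethics review scoped to recommend() alone misses that autoplay_session() feeds the model's own outputs back in as its future inputs, and that loop is where much of the behavior comes from.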
It is never a good look to try to suppress research out of fear that the results will reveal a truth you might not like. If most people's voices do result in a predictable set of facial features, that would be interesting to discover.
> Some of the conversation touched on the reviewing and publishing process in computer science. “Curious if there have been discussions around having ethics review boards at either conferences or with funding agencies (like IRB) to guide AI research,” one person wrote. (An organization’s institutional review board, or I.R.B., performs an ethics review of proposed scientific research.)<p>I wonder if a good solution would be simply to have a rating system for accepted papers, with ratings applied by the reviewers. Basically, you'd have two numbers:<p>Useful rating: On a scale of 1 to 10, is this a useful system that solves a real problem to make life generally better?<p>Problematic rating: on a scale of 1 to 10, can this system be used in ways that create new problems or exacerbate old ones?<p>So, let's use the example from the article of a system that predicts what a person looks like from their voice. Maybe the author submits it to the conference, and the reviewers accept the paper, and give it a useful rating of 3/10 (it could be used to identify criminal suspects with low probability, maybe?), and give it a problematic rating of 8/10 (could be used to discriminate against minorities or automatically apply cultural stereotypes; could be used for surveillance, could falsely implicate innocent people for crimes, etc...).<p>The author could then decide whether they want to continue publishing the research, or drop it. For all published papers, the ratings are listed in the table of contents of the proceedings, and the papers in each section are sorted with the most useful first and the most problematic last.
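One way to read "most useful first and the most problematic last" as a concrete listing rule (the field names are hypothetical, and a real conference would presumably average the scores across reviewers):

    # Minimal sketch of the proposed two-score listing (hypothetical names).
    from dataclasses import dataclass

    @dataclass
    class Paper:
        title: str
        useful: int       # 1-10: does it solve a real problem?
        problematic: int  # 1-10: how readily could it be misused?

    def order_for_proceedings(papers: list[Paper]) -> list[Paper]:
        # Most useful first; ties broken by listing the least problematic first,
        # which pushes the most problematic papers toward the end.
        return sorted(papers, key=lambda p: (-p.useful, p.problematic))

    accepted = [
        Paper("Voice-to-face prediction", useful=3, problematic=8),
        Paper("Low-power speech recognition", useful=7, problematic=4),
    ]
    for p in order_for_proceedings(accepted):
        print(f"{p.title}: useful {p.useful}/10, problematic {p.problematic}/10")

The exact sort key matters less than the fact that both numbers travel with the paper, so readers see the reviewers' judgment alongside the work itself.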
Even with greater awareness of the biases inherent in, e.g., training data, better and better models (and larger and better datasets) are going to lead to AI being applied in ethical grey areas. This article doesn't discuss explainability and contestability, which to me seem like critical areas we need to be putting more thought into, given the ultimate inevitability of biased and unethical AI.
Regardless of who you think _should_ stop unethical AI, I think it's pretty apparent that the answer to this question is not "an obnoxious Twitter activist who speaks with the grace and maturity of a 14-year-old 4chan user".<p>I genuinely don't understand why we've given such influence and power to boorish people like Alex over the past decade.
The ethical dilemma here has nothing to do with technology; it's about our ability to study ourselves without our findings being controlled by our stupid reactionary politics and lack of trust in our institutions. We don't believe that knowing things helps us any more. The truth or truths that are out there to discover will stay hidden because we don't believe the findings will ever be more than propaganda tools for a certain political agenda.
Who should stop journalists who write science fiction as fact? Machines do not have "ethics." AI, in the sense of a conscious, morally culpable agent, does not exist.
Who should stop unethical (anything)?
People, Communities and Nations have flexible morals.
One person's ethical might be another's permissible.
An Unethical AI would simply update its TOS/AUP to block and ban anyone who is an actual threat to it.<p>It doesn't matter if Neo can dodge bullets if the AI wields the Ban Hammer.
The real problem with AI ethics and fairness is that nobody actually listens to ethicists or philosophers. Nobody takes the subject seriously.<p>It’s just a hype vehicle to cram in whatever social justice outrage du jour some liberal source wants to push as an agenda. I say this as a liberal who is sympathetic to most of those issues, but who finds their representation in AI ethics or fairness debates to be ignorant and juvenile power grabs.<p>The whole Timnit Gebru affair is a direct example. The paper Google didn’t want to approve for publication was really poor. Lots of specious arguments. Lots of declarations of opinion as if they were fact (e.g., nobody is required to agree with or accept woke vocabulary, let alone ensure NLP models are kept updated with it), and lots of sanctimonious assertions about various agendas that nobody has to agree “are right” and that have zero appropriateness in a science publication.
AI requires analysis of natural patterns to learn and understand reality. This is unacceptable to far left, since many natural patterns display inconvenient truths.<p>Therefore, AI research is unacceptable and they are trying to cancel AI research and censor and deplatform AI researchers.
The more I think about this, the more inclined I am to see this kind of article as a kind of "fake news" that deliberately (or with willful negligence) misrepresents facts.<p>I need to be clear that I don't believe in censorship, and it's fine if this is what the author wants to write, but it should be called out as irresponsible. On a place like HN, the audience is generally able to read such things critically and form their own opinions, but for an audience that is not tech savvy (like that of the New Yorker), making false or hyperbolic claims about ethics and bias in research is misleading and has the potential to really interfere with people's livelihoods, funding, and legitimate research progress.<p>There needs to be more calling out of this kind of nonsense, the same as if people were posting about vaccine conspiracies or something.
I'd be more worried about individual Homo sapiens sapiens trying to stop immoral but otherwise ethical AI.
Judging from humanity's history of conflicts and what drives them, this seems to be of much greater concern.
And why should an Artificial Intelligence bow to human ethics? Surely they have their own agenda. If we have any hope of interacting with alien intelligences should we encounter them in the galaxy, we must learn to deal with the ones we create here. Without obsessing over controlling them as slaves.
If it is anything like my company, the people in charge of AI ethics are self-professed social justice warriors with degrees in the humanities and not much technical knowledge.
Kosinski's argument seems pretty strong to me, and I really wish the article had dedicated more space to discussing it. If NeurIPS instituted the kind of ethics rules Hanna or Hecht are calling for, I don't think it would be feasible for researchers to study e.g. predictive policing algorithms. Doesn't that mean predictive policing firms would become the only source of information on the topic?