I've written here once before that hiring AI ethics researchers is like hiring hackers in security: they hold your products to an exogenous standard or ideal, which can add value, but carries risks.<p>I've worked for companies that hired and then had to fire hackers for publishing vulnerabilities in their own and in customer products. Security is fundamentally a technology governance function, and AI ethics is also a technology governance function. Both hackers and AI ethics researchers are in effect activists creating awareness for change, but they aren't typically who you would put in the actual governance role.<p>An AI ethics conference has analogies to Defcon or Blackhat, in that shunning vendors because attendees perceived that some prominent hackers were treated unfairly has precedents. Microsoft's relationship with Defcon in the 90s vs. post-2000 is an example of how this both happened and changed.<p>Extending that analogy, the disconnect in the relationship between researchers and vendors was a symptom of the disadvantage vendors face against the asymmetric risk that activists/hackers pose: the media story of giant corporations being vulnerable to scrappy hackers writes itself. In terms of how to handle it, Google can probably draw on precedents from Microsoft vs. Defcon, and AI ethics researchers can look at how hackers both succeeded and failed at changing security and tech governance.
Google should never have started hiring ethical AI researchers. They should have hired AI researchers and made their work available to independent ethics researchers.<p>These independent researchers should be employed by, e.g., a university and funded by government money, perhaps supplemented by an AI regulation fee (much like car makers have to pay to certify their cars).<p>It makes no sense to hire your critics and then think they can stay independent. You are basically paying people to shit on your own products. Makes no sense.
Well, there are at least two perspectives here.<p>If the two people were fired because of race, gender, or even ethical opinion, this reaction is more than appropriate.<p>If the two people were fired for breaking established rules and processes, exfiltrating files, or similar, this could very well backfire on minorities. It indirectly creates another hurdle to hiring a minority candidate if the company has to consider that it can't viably fire them for breaking rules.<p>I'm unaware of good public information supporting either perspective. And both the ex-employees and Google may be restricted from publishing sensitive information to support their cause. I'm honestly not sure how to form an informed opinion on this.
This title is confusing in English. It makes it sound like <i>Google</i> is the entity that made the decision not to sponsor, instead of the conference organizers. A better title would be "AI ethics conference organizers no longer want Google as sponsors".
Google translation of the original German text:<p>Google may not sponsor the ACM Conference on Fairness, Accountability, and Transparency (FAccT) this year. The Association for Computing Machinery (ACM) has announced this. The reason is the dismissal of two AI researchers and allegations of racism. The business relationship is considered paused, not ended. The British company DeepMind, which belongs to the Google group, is still allowed to participate, and the break does not affect relationships with other big tech companies.<p>Michael Ekstrand, co-chair of the conference's sponsorship committee, cites the dismissals of Timnit Gebru and Margaret Mitchell, former leads of the ethical artificial intelligence (AI) team at Google, as the reason for the decision.
Dismissals as the reason for the break<p>Google made headlines with the dismissal of the two employees over the past few months. The trigger was the planned publication of a paper in which Gebru, together with colleagues, warned of the dangers posed by large AI language models. Google opposed publication of the paper as long as Gebru was listed as a co-author. Officially, the paper "did not meet the requirements for publication", but there was suspicion that the company was simply trying to get rid of an inconvenient critic. This approach caused unrest among Google employees, and accusations of racism were also raised.<p>Margaret Mitchell, founder and co-lead of the ethical AI team, was fired a short time later for attempting to use automated scripts to find evidence in emails of the discriminatory treatment of her colleague. This did not escape Google's mail system, which locked the AI researcher out. The company recently announced some internal changes in how it handles its AI teams, their employees, and their research results. Suresh Venkatasubramanian, a member of the FAccT program committee, announced on Twitter last Friday that he would like to re-examine the framework conditions for sponsorship for the coming year.
This will further marginalize AI ethics, which can only be seen as a good thing.<p>NPR did yet another piece on the topic last weekend and interviewed an AI ethics expert. When, once again, I'm told that the reason a system has a hard time identifying the features of black faces is systemic racism, with no mention of the optics and sensor issues, I think the field needs a reboot.
Google's push to participate in AI ethics discussions has always been about controlling the narrative. The fact that it took them firing two people for the ACM to notice is rather hilarious. Just read the papers they promote; the meta-narrative is clear. They want to present AI/ML as something that is dangerous in the hands of individuals or smaller companies and that requires constant, direct control from experts with tons of data and resources. So they're perfectly happy to talk about stuff like model bias, but the moment you start asking uncomfortable questions about ML in general, you'd better watch out.<p>These topics should be handled by people without corporate ties. Period. The conflict of interest is so in-your-face, it's mind-boggling that more people don't speak up about it.
Considering that "AI ethics" basically means we shouldn't have AI until we can guarantee that AI doesn't commit thought crimes(according to the current groupthink morality prevalent among the educated class), it's not surprising.
I've changed the URL from <a href="https://www.heise.de/news/Google-als-Sponsor-fuer-KI-Ethik-Konferenz-nicht-mehr-erwuenscht-5070341.html" rel="nofollow">https://www.heise.de/news/Google-als-Sponsor-fuer-KI-Ethik-K...</a> to which seems to be the main 3rd party article on this in English.<p>Submissions to HN need to be in English. We have deep respect for German and other languages, but HN is an English-language site. It's important that the community be able to read an article. When people can't read an article they react purely to the title, which leads to shallower discussion.<p><a href="https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sort=byDate&type=comment&query=english%20language%20site%20by:dang" rel="nofollow">https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...</a><p>I've also changed the title from "Google no longer wanted to sponsor the AI ethics conference", which unless my rudimentary German is betraying me, actually is not what the heise.de headline says. If I understand the story correctly, it's the conference that didn't want Google.
The title is misleading. It's confusing because the German original title makes clear that the ACM does not want Google to be a sponsor anymore, whereas the English title leaves open the possibility that it might be <i>Google</i> that wanted to sponsor the conference but then decided not to.