Maybe different teams are different, but on my previous team within Google AI, we thought the goal of Google's pubapproval process was to ensure that internal company IP (e.g. details about datasets, details about Google compute infra) does not leak to the public, and maybe to shield Google from liability. Nothing more.<p>In all of my time at Google AI, I never heard of pubapproval being used for peer review or to critique the scientific rigor of the work. It was never used as a journal; it was an afterthought that folks on my team would usually clear only hours before important deadlines. We preferred to leave peer review to the conferences' and journals' existing processes to weed out bad papers; why duplicate that work internally?<p>I'm disappointed that Jeff has chosen to imply that pubapproval is used to enforce rigour. That is a new use case and not how it has traditionally been used. Pubapproval hasn't been used to silence uncomfortable minority viewpoints until now. If this has changed, it's a very, very new change.
This reads to me like Google felt that the paper painted some of their other technologies in a poor light, and wanted to insert language that made them look better. The way he describes their objections, they strike me as the sort of thing that is routinely addressed in the camera-ready version of papers by adding a few lines to the related work section. Not the sort of thing that a conference reviewer or an internal reviewer would reject a paper over.<p>Previously, we only had one side of this story. But if this is Dean's best spin on Google's side of the story, I'm very tempted to conclude they're in the wrong here. Obviously I don't have all the information, but the information I do have feels consistent with the idea that someone important at Google didn't like Gebru's paper for corporate-political (meaning making Google look good, as opposed to political-political) reasons, they tried to get Gebru to play ball, she refused, and now they have to back-project a justification in the name of "scientific integrity".<p>Unfortunately, I think this is a story where most people's opinions about who's in the right will be more informed by their previous opinions about Gebru and Dean than the narrower question of what happened with this particular paper. I'm probably even guilty of that to some extent myself, given that I'm a fan of some of Gebru's previous work.
I don't know Jeff Dean. I have read some of his work, watched some of his presentations. He seems a credible bloke.<p>This, though, looks and feels like thinly-veiled retroactive and pretty unconvincing PR. It's short on detail and appears somewhat at odds with several points from Timnit Gebru's resignation note [0]:<p>- Dean says the paper was reviewed by a "cross-functional team". Gebru says she received the feedback through a "privileged and confidential document to HR"<p>- Dean says the paper was submitted for review on the day it was due to be published; Gebru says they had notified "PR & Policy 2 months before".<p>- Dean suggests the feedback was due to the paper not highlighting mitigating work for some of the limitations the paper was exposing. That seems like a very normal part of the research process. Why, then, does Gebru claim that she was told that a "manager can read you a privileged and confidential document" and that no other recourse or exploration of the feedback was permissible?<p>The only thing we know from the outside is that reality will be far more nuanced and complicated than the tidbits that leak out. Even allowing for that though - and reading some of the related comments here - Google isn't coming out of this well at the moment.<p>[0]: <a href="https://www.platformer.news/p/the-withering-email-that-got-an-ethical" rel="nofollow">https://www.platformer.news/p/the-withering-email-that-got-a...</a><p>EDIT: Fixed spelling of Timnit Gebru's name.
There are a lot of people commenting that she didn't actually resign. I agree, but it sounds like the conversation went like this:<p><i>employee: I'm not happy about x, y and z. If you don't do those, I'm going to quit.<p>manager: well we are not going to do those, so thank you for your time. We accept your resignation and would like it to start immediately (i.e. you're fired).
</i><p>If you are gonna tell your manager that you plan to resign if a condition isn't met, then what do you expect them to say if they don't plan to fulfill that condition? It sounds like she was expecting them to say <i>"Hey, well we don't want to meet your demands, but sure, we're happy to have a disgruntled employee around here, so feel free to stick around, or you could just quit on your own timeline, no sweat".</i><p>I suspect that <i>many</i> people would be fired on the spot for threatening to resign, so don't threaten it if you aren't okay with that consequence.
After reading Jeff Dean's response, I can only come to the conclusion that Timnit Gebru acted like a prima donna. It is completely normal to have to obtain prepublication signoff on material before it is submitted to a conference (and manager signoff even before abstract submission). Given the breadth of experience at Google, it seems strange not to avail yourself of this. Demanding to know the identity of reviewers is absurd (no journal would tolerate that) and deeply unprofessional. Making it an explicit ultimatum was her decision. Denigrating the entire area of research at Google on a large mailing list is the action of someone who wants to be terminated.<p>People claiming that the deficiencies in the paper are minor and wouldn't be blockers obviously have little experience submitting to academic journals. Other parts of Google doing deeply technical work probably don't have the same level of review as the Ethical AI group -- for obvious reasons.<p>There is usually a long back and forth -- there are even memes about the infuriating comments from "Reviewer 2" [1][2]. Omitting to mention argument-obsoleting developments in the field (from your own lab!) is more than enough to send you back to extensive redrafting.<p>To be clear on terminology -- a retraction is an academic black mark, and occurs to a paper after publication, usually for reasons of research misconduct. This is not an instance of that.<p>[1] <a href="https://twitter.com/redpenblackpen/status/1133440569907195904/photo/1" rel="nofollow">https://twitter.com/redpenblackpen/status/113344056990719590...</a>
[2] <a href="http://jasonya.com/wp/wp-content/uploads/2016/01/PowerResponsibility.jpg" rel="nofollow">http://jasonya.com/wp/wp-content/uploads/2016/01/PowerRespon...</a>
<i>Timnit responded with an email requiring that a number of conditions be met in order for her to continue working at Google, including revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback. Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date. We accept and respect her decision to resign from Google.</i><p>This sheds some new light...
My researcher acquaintances at industry labs at IBM, Microsoft, HP, Xerox, ATT, Bell, DEC, Compaq, etc. have never had to have their papers reviewed internally before submitting them to conferences or journals. What's up with Google?
Dean writes:<p><i>Our aim is to rival peer-reviewed journals in terms of the rigor and thoughtfulness in how we review research before publication.</i><p>But Gebru writes that HR and her management chain delivered her feedback in a surprise meeting where she was not allowed to read the actual feedback, understand the process which generated it, or engage in a dialogue about it:<p><i>Have you ever heard of someone getting “feedback” on a paper through a privileged and confidential document to HR?</i><p><i>A week before you go out on vacation, you see a meeting pop up at 4:30pm PST on your calendar (this popped up at around 2pm). No one would tell you what the meeting was about in advance. Then in that meeting your manager’s manager tells you “it has been decided” that you need to retract this paper by next week...</i><p><i>And you are told after a while, that your manager can read you a privileged and confidential document and you’re not supposed to even know who contributed to this document, who wrote this feedback, what process was followed or anything. You write a detailed document discussing whatever pieces of feedback you can find, asking for questions and clarifications, and it is completely ignored.</i><p>(from <a href="https://www.platformer.news/p/the-withering-email-that-got-an-ethical" rel="nofollow">https://www.platformer.news/p/the-withering-email-that-got-a...</a>)<p>I've been through the peer review process at Physical Review Letters, SIGMOD, and VLDB. You get a document containing all the reviewers' comments, plus a metareviewer's take on the overall decision and what has to change. You can engage in dialogue with the metareviewer, including a detailed response letter justifying your choices, highlighting things the reviewers may have missed, and explaining where you plan to make changes.
You get additional rounds of comments from the reviewers in light of that letter on later drafts.<p>I'm not a Googler, and I have no idea what the standard review process looks like there, but what Gebru describes does not sound <i>at all</i> like peer review. I also note that Dean does not contradict Gebru's account of the meeting or feedback process. If I had a paper rejected in this fashion, I would also demand to know what the hell was going on and who was responsible.<p>This feels <i>off</i>.
In her email she claims that she is constantly being dehumanized. This is unacceptable coming from someone in her amazing position and, to be honest, sounds like she is a narcissist. I think Google couldn't wait to get rid of her (I cannot blame them; she wanted to sue the company a year prior, and also "represented" the company really badly on social media), and her email with demands was their opportunity. I don't think she would survive in any company. She is better off starting her own company or going to academia. I don't feel bad for her; I'm sure she already got multiple offers. Not from FB, that's for sure though :)
Yesterday the conversation was all about the tone of the email exchange, and my gut feeling was that, regardless of the research it discussed, Google was probably alright to choose to accept that researcher's resignation.<p>Now I think I was wrong; Google looks like they're full of crap. If the research doesn't pass muster, I'd like to read it and pass my own judgement. I'm guessing the tone was justified.
> Our aim is to rival peer-reviewed journals in terms of the rigor and thoughtfulness in how we review research before publication.<p>I can see how this might be frustrating for academics working within Google. The field already has systems in place for peer-review. While I admire the idea of Google holding their research to a certain standard, it also provides a mechanism for dismissing research that paints Google (the corporation) in a bad light. If a paper is good enough to pass an (external) review process, why should it not be published?
It saddens me that they make Jeff Dean work on these kinds of issues. His MapReduce invention brought the world a lot more than managing his coworkers' egos does.
While I’m a spectator to this unfolding story and am reacting to a pretty cursory overview of what happened, this has the distinct feeling that Google thought they were moving in for a checkmate by, in their words, “accepting a resignation”, only to have it, very predictably, blow up in their faces completely.<p>Even if Google is presenting the chain of events faithfully, their final move to jump on an opportunity to remove the researcher and call it resignation seems so aggressive and incongruous that, from the outside, it makes it seem like this conflict is rooted in a larger and more difficult relationship that they calculated was no longer in their interest.<p>I’m wondering if the cost-benefit analysis still looks that way on the inside, because this move and the attention it’s causing are so contrary to their stated goals that I have to wonder if Google is committed to those goals at all. Others must be wondering exactly the same thing.
I would be absolutely furious if a manager blocked my work from being published. Even a manager who has worked as a researcher, in my experience, significantly lacks the expertise to be making such judgments. A research manager’s job is to be familiar with work across an entire portfolio, so they will be necessarily less knowledgeable than individual senior researchers. Presumably Google generally also feels this way seeing as this appears to be the first case where prepublication approval was denied for content. I would never work somewhere where management had such a lack of respect for my own judgment as a researcher.
It's frankly unprofessional for Jeff Dean to post only his email in this document without providing more context. The news media has in some cases provided balanced coverage that included the email from Timnit that prompted Google's action: <a href="https://www.platformer.news/p/the-withering-email-that-got-an-ethical" rel="nofollow">https://www.platformer.news/p/the-withering-email-that-got-a...</a><p>This post from Jeff Dean simply underscores that he has failed to balance the need for diplomacy with the research thesis of his own research group. I'm not saying he's being malicious, but he's incredibly tone-deaf. While I appreciate that he gets "attacked" at nearly every talk (at one retinopathy talk I saw him grilled for 10 minutes on race), he's going to continue to get this sort of attention until he can stop being the Googler who wants to tell you why his view is right.
I think we’re starting to see companies discover the limits of how far they’re willing to let employees push their “woke” agenda using the company’s name.<p>It seems disparaging your own company while ignoring research that counters yours is Google’s limit, but we’ll have to wait to see the research paper if it leaks.
The followup does not answer the major questions raised by this part of Dean's original email:<p>> Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted. A cross functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why. [...] We acknowledge that the authors were extremely disappointed with the decision that Megan and I ultimately made, especially as they’d already submitted the paper.<p>When it was "approved for submission," was that approval final and actionable, or some kind of conditional approval? Is demanding a retraction after an approval the normal way it works at Google, or was this an unusual occurrence that Gebru was right to question?
Had Gebru not issued an ultimatum, what would the consequences of the paper not passing internal review have been?
It sounds like she submitted the paper anyway; would there have been consequences for that?
This topic, over the past couple of days, seems to me to have become fraught, polarised and argued to distraction and faction.<p>It's not a stretch to say that the world has a problem with discrimination, or that big personalities can rock the boat when they go against the grain. Or that corporates have interests to defend.<p>I'd like in this case, though - not to say that all the other factors aren't worthy of examination as subjects in their own right - to see this paper. The authors/collaborators are leaders in the field. There are legitimate concerns about AI/ML; the validity/reliability of data from which, nowadays, consequential outcomes are derived.<p>Please, let's ignore - at least for now - the corporate politics, the heat of the race/gender politics, the PR machines weighing in (look at all the news outlets grabbing this atm), and take a look at what the paper says. The right people (those in the field and qualified/able to do so) should be listened to.<p>As for the politics of this, and the posturing and politicising and agenda-building on all sides: that doesn't help.<p>AI and ML affect us all now. This is a trend. Let's at least expose the findings of acknowledged leaders in the field - let's see this study - before we descend into the sideshows of politics and factionalism. Please.<p>Edit to clarify: tl;dr - I've not seen/read this contested paper. Whatever its contents, I want to see them, and imo that is more important than the current controversy blizzard etc.
I am reluctant to accuse Jeff Dean of bad faith, but this argument doesn't scan. I've been on the inside of an AI team during a crisis and then seen how senior management spun details to obfuscate critical facts and avoid responsibility. Dean is slandering Gebru in a manner that will make it easier to dismiss her work and the work of other AI ethicists (especially women and bipoc) in the future. He and Google are actually themselves guilty of lacking rigor (namely, ethical rigor). Worst of all, I expect that racists inside Google and the wider industry will utilize this argumentative structure in the service of neutralizing ethicists and bipoc in the future. This is truly despicable. I used to be a great admirer of Jeff Dean.
Well, the lesson I'm taking from this is to have a good HR department, and listen to them.<p>One of their most valuable functions is to stop you "letting go of" people for stupid and petty reasons that blow up in your face, spectacularly.
Jeff Dean is lying:<p><a href="https://twitter.com/timnitGebru/status/1335017524937756672" rel="nofollow">https://twitter.com/timnitGebru/status/1335017524937756672</a><p>> <i>1/Man there’s so much to pick apart. Let’s start with one thing. I want to ask if Jeff Dean has looked at the publication approval policy that he keeps on mentioning in his email. Like, for example, a simple look at the website? Let’s read.</i><p>> <i>2/First off “Start the PubApprove process at least 1 week in advance of any deadline”. Okay, not sure where the 2 weeks in Jeff’s email came from.</i><p>> <i>3/ But ALSO “The perfect policy” “There is no such thing as the perfect policy. Fortunately Googlers like to do the right thing. Please do that here—read the policy and do what makes sense.”</i><p>> <i>4/ ALSO “Meanwhile, we strive to make the PubApprove process as lightweight as possible: hopefully eliminating the temptation to skip it.” I don’t know man you might have to resign immediately if you just “do what makes sense” so beware.</i><p>> <i>5/ Finally, I wouldn’t want to know what would happen to you if you had “the temptation to skip it” mentioned on the website. Beware researchers</i><p>> <i>6/In spite of this, we gave a heads up BEFORE even writing the paper—on September 18. Saying that we were about to write this paper. So much to say here, so much. But I’ll stop here for now.</i>
I read the abstract. The paper doesn't critique Google. It critiques the current focus of AI research on building bigger and bigger models. Google is one of the leaders in building those big models. So what? They also invest in research to make smaller models that perform equally well. They have a huge incentive: those big models are very expensive. I don't think anyone at Google really cares if the paper is published. I'm pretty sure it will get published sooner or later. The authors could've addressed the reviewers' comments pretty quickly and re-submitted to another conference. The whole thing blew out of proportion. She picked the wrong battle. Should've bitten the bullet.
I don't know why Google is bothering to say anything. It's pretty obvious that those who support the person who left have anchored their opinions and won't change under any circumstances, and those defending Google are anchored as well.
It appears that this was released intentionally by Dean.
But in case it wasn't, or in case it gets altered/updated - it's been archived [1].<p>[1] <a href="https://archive.is/VpAN8" rel="nofollow">https://archive.is/VpAN8</a>
It's standard practice to check in with stakeholders before submitting a paper.<p>The important issue seems to be that the paper villainized large language models, which Google has a vested interest in.<p>The paper was likely publishable with a bit more context in the introduction.<p>I've been in similar situations and they were handled offline.<p>The explosiveness of this situation seems to have been primed by the history of Timnit's relationship with Google.<p>This incident was just the spark.
I've read her email. In my first job, right after college, I also sent an inflammatory email (nothing compared to the email she sent) to my manager (not a whole group of coworkers), and I got seriously reprimanded for it by my then-manager. Even after a decade I cringe when I remember the email I wrote. I have no idea why people think it's okay to send emails like this and expect not to get fired.
Key phrase ...<p>> <i>Highlighting risks without pointing out methods for researchers and developers to understand and mitigate those risks misses the mark on helping with these problems</i><p>You have to have sympathy for her position, but in the end, if all you do is offer criticism, then it's hard to see a place for you inside Google, let alone as a manager.
This letter reads like Exxon-Mobil chastising their climatologist for not taking into account their latest research on fuel efficiency, or Philip Morris laying into their house doctor for discounting the psychological benefits of rich, mellow flavor.
Funny that Jeff Dean would write and share this in a Google Doc. After 20 years of being a driving force for the web, there was no easier, more appropriate tool available to a senior Google employee to share his opinion online.
TL;DR: Jeff and Co got sick of Timnit's woke bullshit, pushed back, she threw around some ultimatums, they called her bluff and pushed her out. Personally I'm glad that Google is finally cleaning house. More of this plz.
I just can’t help but feel that they entirely miss the point. Nobody is outraged that Google rejected a paper because it needed some adjustments.<p>What we want to know is whether that is actually the case.
These kinds of encounters are going to become more common. AI people want to make AI. But it can be dangerous. And, by definition, we can't control AI.
Every "woke" company in SV is going to go through this. I really want to know how "diversity" will help someone build a software company, say a CRM app. How does skin color/race help with that? If the ideas of a different race matter, why should they come from employees and not be found through user testing/customer feedback? Coinbase, Google, keep it rolling. No fully remote company has to go through this bullshit.
> "So if you would like to change things, I suggest focusing on leadership accountability and thinking through what types of pressures can also be applied from the outside. For instance, I believe that the Congressional Black Caucus is the entity that started forcing tech companies to report their diversity numbers. Writing more documents and saying things over and over again will tire you out but no one will listen."<p>People are forgetting about the part where she basically encourages her colleagues to talk to Congress, at a time when tech CEOs are regularly being hauled in front of congressional committees. At the point when that is written, this clearly is an adversarial relationship between her and Google. And it wasn't Google that made it adversarial.<p>I couldn't imagine writing something like that and keeping my job.
A lot of people mistakenly think Google is part of their family and has incentives other than making profit and avoiding bad PR. It's not really a surprise they took the first opportunity they could to fire someone who had previously threatened them with lawsuits. "Don't be evil" Google died a long time ago; there's nothing to see here, just business as usual.
> Timnit responded with an email requiring that a number of conditions be met in order for her to continue working at Google, including revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback<p>It always amazes me how blatantly authoritarian these "woke" types are. I would not be surprised if the sole reason she wanted the identities of every consultant was to engage in some sort of witch-hunting and bigoteering[0]<p>[0] <a href="https://www.urbandictionary.com/define.php?term=Bigoteering" rel="nofollow">https://www.urbandictionary.com/define.php?term=Bigoteering</a>
I am more concerned with whether she is really a good researcher or not. If she is, she should just continue her great work and contribute really good stuff to the field. BTW, what exactly is Ethical AI?
I think Google can't be forced to admit that, when postmodernist racial-theory ideologues are in the research/leadership ranks, this kind of review is necessary. So Google is just going to say this is how we do it (or always did it).
Jeff Dean has(/had) a cult following at Google. Internally, there was even a large collection of Jeff Dean jokes (like the Chuck Norris jokes of yesteryear).<p>It is sad to see him drop his credibility and become a PR mouthpiece. I know how this happens and I know why. But it is still sad to watch. Like watching your favourite band sell out to a big label.
Why doesn’t Jeff just swallow his pride, apologize, and hire her back rather than write increasingly thin rationalizations for his reaction?<p>High-level research (and engineering) will involve egos, and you should expect this kind of pushback when you stop someone from publishing. Nothing here justifies how he handled it.<p>Of course, I think many of us have seen this reaction before, maybe even done it ourselves. It’s bad for everyone; don’t do it, Jeff!