These points are... almost uniformly terrible.

> Training large AI models consumes a lot of computer processing power, and hence a lot of electricity. Gebru and her coauthors refer to a 2019 paper from Emma Strubell and her collaborators on the carbon emissions and financial costs of large language models. It found that their energy consumption and carbon footprint have been exploding since 2017, as models have been fed more and more data.

You train the model once... and then use it to provide incredibly cheap value for billions of people. Comparing the carbon footprint of a single flight between NYC and LA to the training of a model is insanely disingenuous: the model gets trained once. The correct comparison would be to the carbon footprint of *building the plane*, or, alternatively, to the training footprint amortized over all of the individual queries the model goes on to answer.

> Large language models are also trained on exponentially increasing amounts of text. This means researchers have sought to collect all the data they can from the internet, so there's a risk that racist, sexist, and otherwise abusive language ends up in the training data.

This is the only actually legitimate point. It is a real problem, but it is also a well-known problem, so if she's going to raise it and means to contribute anything to the field, she should be doing so in a solutions-focused way. She may have done that in the paper, but this review doesn't say so.

> The researchers summarize the third challenge as the risk of “misdirected research effort.” Though most AI researchers acknowledge that large language models don’t actually understand language and are merely excellent at manipulating it, Big Tech can make money from models that manipulate language more accurately, so it keeps investing in them. “This research effort brings with it an opportunity cost,” Gebru and her colleagues write. Not as much effort goes into working on AI models that might achieve understanding, or that achieve good results with smaller, more carefully curated datasets (and thus also use less energy).

Your criticism is that... tech companies are spending capital on making more profit for themselves? That's, uh, not much of a criticism, especially when you consider that this technology has positive spillover effects for other groups. These language models can be repurposed to combat racism online, and for all sorts of other things. But even if you ignore that, the premise here is just an utterly trivial near-tautology: "Company invests in things that make it more money."

> The final problem with large language models, the researchers say, is that because they’re so good at mimicking real human language, it’s easy to use them to fool people. There have been a few high-profile cases, such as the college student who churned out AI-generated self-help and productivity advice on a blog, which went viral.

Sure. You could say this about Photoshop, too, and people have. But this technology is going to happen, with or without Google's help.

> In his internal email, Dean, the Google AI head, said one reason the paper “didn’t meet our bar” was that it “ignored too much relevant research.” Specifically, he said it didn’t mention more recent work on how to make large language models more energy-efficient and mitigate problems of bias.

> However, the six collaborators drew on a wide breadth of scholarship. The paper’s citation list, with 128 references, is notably long.
> “It's the sort of work that no individual or even pair of authors can pull off,” Bender said. “It really required this collaboration.”

Your defense against the claim that specific research was missed is to cite... the length of the citation list? Lol. This argument would hardly pass muster in a forum comment.