
On the dangers of stochastic parrots: Can language models be too big? (2021)

73 points by Schiphol over 2 years ago

11 comments

sthatipamala over 2 years ago
This paper is the product of a failed model of AI safety, in which dedicated safety advocates act as a public ombudsman with an adversarial relationship with their employer. It's baffling to me why anyone thought that would be sustainable.

Compare this to something like RLHF [0], which has achieved far more for aligning models toward being polite and non-evil. (This is the technique that helps ChatGPT decline to answer questions like "how to make a bomb?")

There's still a lot of work to be done, and the real progress will be made by researchers who implement systems in collaboration with their colleagues and employers.

[0] https://openai.com/blog/instruction-following/
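For readers who haven't seen how RLHF works under the hood, the reward-modeling step it relies on can be sketched roughly as below. This is a generic illustration, assuming a PyTorch-style setup with a placeholder model and random toy data; it is not OpenAI's implementation.

    # Rough sketch of the reward-model step in RLHF: train a scorer so that
    # human-preferred responses get higher scores than rejected ones.
    # Illustrative only; the model, sizes, and data are placeholders.
    import torch
    import torch.nn as nn

    class RewardModel(nn.Module):
        def __init__(self, hidden: int = 768, vocab: int = 50257):
            super().__init__()
            self.embed = nn.Embedding(vocab, hidden)
            self.encoder = nn.GRU(hidden, hidden, batch_first=True)
            self.score = nn.Linear(hidden, 1)  # scalar "how good is this response"

        def forward(self, tokens: torch.Tensor) -> torch.Tensor:
            x = self.embed(tokens)
            _, h = self.encoder(x)
            return self.score(h[-1]).squeeze(-1)

    def preference_loss(model, chosen_tokens, rejected_tokens):
        # Standard pairwise (Bradley-Terry) objective: push the score of the
        # human-preferred response above the score of the rejected one.
        r_chosen = model(chosen_tokens)
        r_rejected = model(rejected_tokens)
        return -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()

    # Toy usage with random token ids standing in for tokenized text.
    model = RewardModel()
    chosen = torch.randint(0, 50257, (4, 32))
    rejected = torch.randint(0, 50257, (4, 32))
    loss = preference_loss(model, chosen, rejected)
    loss.backward()

The trained reward model is then used as the objective for a reinforcement-learning fine-tuning pass over the base language model, which is where the "decline to answer" behavior comes from.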
hardwaregeek over 2 years ago
I'm still midway through the paper, but I gotta say, I'm a little surprised at the contrast between the contents of the paper and how people have described it on HN. I don't agree with everything that is said, but there are some interesting points made about the data used to train the models, such as its capturing bias (I would certainly question the methodology of using Reddit as a large source of training data), and that bias being amplified by the filtering algorithms that produce the even larger datasets used for modern LLMs. The section about environmental impact might not hit home for everyone, but it is valid to raise issues around the compute usage involved in training these models: first, because it limits this training to companies who can spend millions of dollars on compute, and second, because if we want to scale up models, efficiency is probably a top goal.

What really confuses me here is how this paper is somehow outside the realm of valid academic discourse. Yes, it is steeped in activist, social justice language. Yes, it has a different perspective than most CS papers. But is that wrong? Is that enough of a sin to warrant the response this paper has received? I'll need to finish the paper to fully judge, but I'm leaning towards no, it is not.
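The point about filtering is concrete enough to illustrate: the word-list filters commonly applied when assembling web-scale corpora can quietly drop benign text about particular groups or topics, which is one way bias in the remaining data gets amplified. A toy sketch with made-up documents and a made-up blocklist (not the paper's methodology):

    # Toy illustration of how a naive blocklist filter can skew corpus composition.
    # Documents and word list are invented; this is not the paper's methodology.
    documents = [
        "recipe blog post about baking bread",
        "forum thread discussing queer representation in games",
        "news article on election results",
        "medical forum post about sex-specific drug dosing",
        "tutorial on sorting algorithms in python",
    ]

    # A crude "bad word" filter like those applied to large web scrapes.
    blocklist = {"sex", "queer"}

    def keep(doc: str) -> bool:
        # Crude substring matching, which over-blocks, as many real filters do.
        return not any(word in doc for word in blocklist)

    kept = [d for d in documents if keep(d)]

    print(f"kept {len(kept)}/{len(documents)} documents")
    for d in documents:
        if d not in kept:
            print("dropped:", d)
    # The dropped documents are benign text about marginalized groups and
    # medicine, shifting what the model later sees as "normal" language.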
larve over 2 years ago
I just finished working my way through this paper this morning. The literature list is quite interesting and gives a lot of pointers for people who want to walk the line between overblown hype and doomsday scenarios.
kinston over 2 years ago
People seem to miss the specific thing that was controversial about this paper: the paper gives inaccurate estimates of the carbon dioxide emissions that result from machine learning.

My understanding is that this is the main reason why Google wanted the paper pulled. You can't have a paper by Google authors saying incorrect things about how much CO2 Google is emitting.

The authors refused. They argued that they gave a citation for their numbers, and it's not their problem if the paper they cite is inaccurate.

Eventually Google did publish a response: https://arxiv.org/abs/2204.05149 But this got totally buried...
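For anyone who wants to sanity-check emissions claims like these, the usual back-of-the-envelope estimate is GPU-hours times hardware power, datacenter overhead (PUE), and grid carbon intensity; most of the disagreement over such numbers tends to live in those last two inputs. The figures below are illustrative placeholders, not numbers from the paper or from Google:

    # Back-of-the-envelope CO2 estimate for a training run.
    # All inputs are illustrative placeholders, not figures from the paper or Google.
    def training_co2_kg(gpu_hours: float,
                        gpu_power_kw: float = 0.3,     # ~300 W per accelerator
                        pue: float = 1.1,              # datacenter overhead
                        kg_co2_per_kwh: float = 0.4):  # grid carbon intensity
        energy_kwh = gpu_hours * gpu_power_kw * pue
        return energy_kwh * kg_co2_per_kwh

    # e.g. a hypothetical 1,000-GPU job running for 30 days:
    gpu_hours = 1000 * 24 * 30
    print(f"{training_co2_kg(gpu_hours):,.0f} kg CO2")  # ~95,000 kg with these inputs

Swapping in a cleaner or dirtier grid, or a different accelerator power figure, moves the result by an order of magnitude, which is why two parties citing different sources can report very different totals for the same training run.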
SinePost over 2 years ago
This is a great paper to introduce someone to the potential ethical issues that large language models have. A concern I have with these models, one the paper only touches on, is the notion of "truth" when the machine has no method to determine it. Do we admit defeat when the machine does not have very high confidence in an answer, or go with a popular opinion/interpretation? These are epistemological issues that I don't see being resolved anytime soon.
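One crude engineering answer to the confidence question is to have the system abstain whenever its own token probabilities fall below a threshold. A minimal sketch of that idea follows, with an arbitrary threshold and made-up log-probabilities; low model confidence is at best a weak proxy for truth:

    # Sketch: abstain when the model's own average token probability is low.
    # This is a crude proxy for "confidence", not a measure of truth.
    import math
    from typing import List

    def answer_or_abstain(token_logprobs: List[float],
                          answer: str,
                          threshold: float = 0.7) -> str:
        # Geometric mean of token probabilities = exp(mean log-prob).
        avg_prob = math.exp(sum(token_logprobs) / len(token_logprobs))
        if avg_prob < threshold:
            return "I don't know."
        return answer

    # Toy usage with made-up per-token log-probabilities from some LM API.
    confident = [-0.05, -0.1, -0.02]  # high-probability tokens
    shaky = [-1.2, -0.9, -2.3]        # low-probability tokens
    print(answer_or_abstain(confident, "Paris"))        # -> "Paris"
    print(answer_or_abstain(shaky, "A popular guess"))  # -> "I don't know."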
monkaiju over 2 years ago
The problems with LLMs are numerous, but what's really wild to me is that even as they get better at fairly trivial tasks, the advertising gets more and more out of hand. These machines don't think, and they don't understand, but people like the CEO of OpenAI allude to them doing just that, obviously so the hype can make them money.
platypii over 2 years ago
This paper is embarrassingly bad. It's really just an opinion piece where the authors rant about why they don't like large language models.

There is no falsifiable hypothesis to be found in it.

I think this paper will age very poorly, as LLMs continue to improve and our ability to guide them (such as with RLHF) improves.
weeksie over 2 years ago
This was mostly political guff about environmentalism and bias, but one thing I didn't know was that apparently larger models make it easier to extract training data.

> Finally, we note that there are risks associated with the fact that LMs with extremely large numbers of parameters model their training data very closely and can be prompted to output specific information from that training data. For example, [28] demonstrate a methodology for extracting personally identifiable information (PII) from an LM and find that larger LMs are more susceptible to this style of attack than smaller ones. Building training data out of publicly available documents doesn't fully mitigate this risk: just because the PII was already available in the open on the Internet doesn't mean there isn't additional harm in collecting it and providing another avenue to its discovery. This type of risk differs from those noted above because it doesn't hinge on seeming coherence of synthetic text, but the possibility of a sufficiently motivated user gaining access to training data via the LM. In a similar vein, users might query LMs for 'dangerous knowledge' (e.g. tax avoidance advice), knowing that what they were getting was synthetic and therefore not credible but nonetheless representing clues to what is in the training data in order to refine their own search queries

Shame they only gave that one graf. I'd like to know more about this. Again, miss me with the political garbage about "dangerous knowledge"; the most concerning thing is the PII leakage as far as I can tell.
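The attack being quoted (citation [28] is presumably the Carlini et al. training-data extraction work) is conceptually simple: draw many samples from the model and scan them for strings that look like memorized PII, treating strings that recur across independent samples as likely memorization rather than hallucination. A rough sketch against a hypothetical generation function:

    # Sketch of a training-data extraction probe: sample from a model and scan
    # the output for strings that look like memorized PII. `generate` is a
    # stand-in for whatever text-generation API you have, not a real library call.
    import re

    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    PHONE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

    def probe_for_pii(generate, prompts, samples_per_prompt=100):
        hits = []
        for prompt in prompts:
            for _ in range(samples_per_prompt):
                text = generate(prompt)  # hypothetical LM sampling call
                hits += EMAIL.findall(text)
                hits += PHONE.findall(text)
        # Strings that recur across independent samples are more likely to be
        # memorized training data than one-off hallucinations.
        return sorted(set(h for h in hits if hits.count(h) > 1))

    # Toy usage with a fake generator so the sketch runs end to end.
    def fake_generate(prompt):
        return prompt + " ... contact jane.doe@example.com for details"

    print(probe_for_pii(fake_generate, ["My email is", "Call me at"],
                        samples_per_prompt=3))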
xiaolingxiao over 2 years ago
I believe this is the paper that got Timnit Gebru and Margaret Mitchell fired from Google, followed by a protracted media/legal campaign against Google, and vice versa.
2bitencryption over 2 years ago
Pure speculation ahead:

The other day on Hacker News, there was that article about how scientists could not tell GPT-generated paper abstracts from real ones.

Which makes me think: abstracts for scientific papers are high-effort. The corpus of scientific abstracts would understandably have a low count of "garbage" compared to, say, Twitter posts or random blogs.

That's not to say that all scientific abstracts are amazing, just that their goal is to sound intelligent and convincing, while probably 60% of the junk fed into GPT is simply clickbait and junk content padded to fit some publisher's SEO requirements.

In other words, ask GPT to generate an abstract, and I would expect it to be quite good.

Ask it to generate a 5-paragraph essay about Huckleberry Finn, and I would expect it to be the same quality as the corpus, that is to say, high-school English students.

So now that we know these models can learn many one-shot tasks, perhaps some cleanup of the training data is required to advance. Imagine GPT trained ONLY on the Library of Congress, without the shitty travel blogs or 4chan rants.
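Cleanup of that sort is typically a stack of cheap heuristics run over the corpus before training: exact deduplication plus rules that drop short, repetitive, or spammy documents. The sketch below uses invented thresholds purely for illustration; it is not any lab's actual pipeline:

    # Sketch of a heuristic pre-training data filter: deduplicate and drop
    # documents that look like SEO padding or boilerplate. Thresholds are
    # invented for illustration, not taken from any real pipeline.
    import hashlib

    def quality_ok(doc: str) -> bool:
        words = doc.split()
        if len(words) < 50:                     # too short to be useful
            return False
        if len(set(words)) / len(words) < 0.3:  # highly repetitive (SEO padding)
            return False
        if doc.lower().count("click here") > 2: # crude spam/clickbait signal
            return False
        return True

    def dedupe_and_filter(docs):
        seen, kept = set(), []
        for doc in docs:
            digest = hashlib.sha1(doc.encode()).hexdigest()  # exact-duplicate check
            if digest in seen:
                continue
            seen.add(digest)
            if quality_ok(doc):
                kept.append(doc)
        return kept

    # Usage: corpus = dedupe_and_filter(raw_documents)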
srvmshr over 2 years ago
I am of the general understanding that this paper became less about the LLMs and more of an insinuating hit piece against Alphabet. At least, some of the controversial nuggets got Gebru (and later M. Mitchell) fired.

From a technical standpoint, there is little new that I found this paper offered in understanding why LLMs can have an unpredictable nature, or what degree of data will get exposed by clever hacks (or whether there are systematic ways to go about it). It sounded more like a collection of verifiable anecdotes for easy consumption (which can be a good thing by itself if you want a capsule understanding in a non-technical way).