
Yann LeCun and Andrew Ng: Why the 6-Month AI Pause Is a Bad Idea [video]

126 points · by georgehill · about 2 years ago

25 comments

thecupisblue · about 2 years ago
Besides being impossible to implement, a 6-month pause would do nothing but hand OpenAI a 6-month competitive advantage over everybody else. They could sell model access to companies without having any competitors on the market.

The whole "pause AI or it will destroy the world" thing is just a large pot of hype anyway, pushed by journalists lacking an understanding of the technology, twitter-bros who are building "AI copywriters" and "profile photo generators", and owners of "AI" companies.

While AGI is still quite far away, it's common to see people involved in the process claiming "it's showing signs of general intelligence", "it's becoming self-aware", etc. Hell, they have a horse in the race and that horse can earn them billions of dollars; of course they would claim it's insanely advanced. And the journalists just regurgitate the claims because it generates clicks.

If Sam comes out tomorrow and says he is afraid of GPT-4, tens of thousands of clickbait articles about it will come out, together with rebuttals and opinion pieces. Millions of dollars' worth of ads will be sold via those articles. The value of OpenAI will increase in the public's eyes, and more corporations will be interested in including GPT in their products, thinking it's going to revolutionise their line of business.

And of course, we have the VCs investing in "AI" products, hyping it up even more. Wouldn't you do the same if you had a few hundred million invested in 50 different companies and each tweet/article could increase the odds of them raising more money?
pksebben · about 2 years ago
If we really want to curb the potential for harm from these models, open them up. Share the checkpoints and the code to train them. Democratize the development of new systems.

The infosec world figured this out ages ago. Yes, it's possible that *some* bad actors will gain access to capabilities they didn't have before, but the strength of a billion eyes is huge and can rapidly accelerate solutions to deficits.

What we need is a massive, open, community-curated dataset that continually evolves. Craft regulations that enforce data availability and free access. Without such mechanisms, AI systems all have a single point of failure (SPOF) that's just waiting to bite the collective us in the ass.
nerpderp82 · about 2 years ago
I think a worldwide pause is absolutely necessary. And that means a pause on anything "better", by whatever definition, than GPT-4.

We are on the cusp of the next step function in humanity's capabilities as a technological civilization. The upsides are enormous and I welcome them, but the downsides are even bigger, and we should map this out, technologically, politically, and economically, before running off the damn cliff.

The fact that it would be hard or difficult should not play into whether we have the discussion. This is the reasoning that proliferated the bomb. The bomb didn't have to be dropped on Nagasaki and Hiroshima, and unfettered big AI doesn't have to enter the winner-take-all capital arena.

Even with just ChatGPT-4-level performance, millions of people are in the process of losing their jobs. I had to argue at a bar a couple of weeks ago for a dev manager not to lay off his entire team of green junior data engineers. He claimed he could work alone and replace the three people that reported to him. I not only saved those people's jobs (hopefully) but also his own; there is no way he could have stayed afloat even managing requirements.

Huge changes are coming and we aren't prepared. I'd rather have a 15 mph collision than a 55 mph one.

I am not convinced that two AI researchers are qualified to weigh the larger ramifications of their creations. They don't have a grounding in all of the other skills necessary; they aren't politicians, economists, psychologists or philosophers.
YeGoblynQueenne · about 2 years ago
>> https://www.youtube.com/live/BY9KV8uCtj4?feature=share&t=1766

Here, Yann LeCun is saying that we don't have systems that can reason and plan.

That needs a qualifier: we don't have _statistical machine learning_ systems that can reason and plan. There are plenty of systems that can reason, plenty that can plan, and even some that can do both, but those systems are not statistical machine learning systems. Rather, they are what is derisively dismissed as "Good Old-Fashioned AI": classical, logic-based, symbolic systems, automated theorem provers and planners.

Reasoning, in particular deductive reasoning, is solved to a degree that cannot be surpassed: the Resolution principle is a sound and complete deductive reasoning system with a single inference rule that is easily run on a computer because of its simplicity, and because it is a single rule. Other sound and complete systems for deductive reasoning exist, but they do not consist of a single rule, so a human must be on hand to select the appropriate rule at each step of a proof, or they are just extremely expensive to run, whereas Resolution, thanks to its One Simple Trick™ of unification, can be executed efficiently.

As to planning, we have fast algorithms for planning today, as for all kinds of tree search, constraint optimisation, SAT-solving and the like, and those algorithms are routinely used in industry; except of course they are no longer recognised as "artificial intelligence" *because* they are so common: the so-called "AI effect".

In any case, reasoning and planning, and many other tasks that were perfectly possible with classical AI, are, for the time being, impossible with deep neural networks, as Yann LeCun (and not just anybody) says. We have regressed. In our passion to build ever better classifiers, we threw away the ability to reason.
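[Editor's note] The unification step the comment above credits for Resolution's efficiency is small enough to sketch. Below is an illustrative, simplified Robinson-style unifier; the term encoding and all names are my own, not from any particular prover, and it omits the occurs check that a sound implementation needs.

```python
# A minimal sketch of Robinson-style unification, the "One Simple Trick"
# behind Resolution. Variables are strings starting with an uppercase
# letter; compound terms are (functor, arg1, arg2, ...) tuples.
# Simplification: no occurs check, so unify("X", ("f", "X")) unsoundly succeeds.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings to the term's current representative.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    """Return a substitution unifying a and b, or None if none exists."""
    if subst is None:
        subst = {}
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None  # clash of distinct constants or functors

# parent(X, bob) unifies with parent(alice, Y):
print(unify(("parent", "X", "bob"), ("parent", "alice", "Y")))
# → {'X': 'alice', 'Y': 'bob'}
```

Because the whole matching step is this mechanical, Resolution can apply its single inference rule without a human choosing among rules at each proof step.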
davesque · about 2 years ago
I'm a bit confused why LeCun, Ng, or anyone else is acting like this is an actual thing that could happen.
xiphias2 · about 2 years ago
"...we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."

If they want a pause, they should first convince all companies bigger than OpenAI (Google, Tesla, Meta, and the Chinese counterparts) to pause all deep learning model training, to keep the competition fair.

The worst thing that could happen is OpenAI pausing, big companies catching up, and a smaller company being destroyed because they have more cash.
tolstoshev · about 2 years ago
Dr Strange checked 6,438,347 timelines and in zero of them did an AI pause happen.
pmarreck · about 2 years ago
Agreed with Yann as soon as he voiced dissent against this.

Also noticed something: the people who believe consciousness, intelligence, creativity, and will are wholly contained in mechanistic wetware (which, in all honesty, should be the null hypothesis) are the most afraid of AGI. Dualists and other people who believe that consciousness resides in some soul-like thing seem much less afraid of this.

One thing's for sure: the smarter it gets (or seems), the more apparent any difference between "machine intelligence" and "human intelligence" will become.
arisAlexis · about 2 years ago
LeCun says weird stuff, like that GPT is not impressive, and also that we understand LLMs the way we understand airplanes.
mordymoop · about 2 years ago
Cool to see the two guys who have been most consistently proven wrong over the last 3 years join forces.
gregwebs · about 2 years ago
There's no real acknowledgement in this conversation of the existential threat. I would recommend listening to Eliezer Yudkowsky on the subject [1] [2]:

> If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.

> There's no proposed plan for how we could do any such thing and survive. OpenAI's openly declared intention is to make some future AI do our AI alignment homework. Just hearing that this is the plan ought to be enough to get any sensible person to panic. The other leading AI lab, DeepMind, has no plan at all.

It is a valid criticism that Yudkowsky and others are overstating the abilities of current-day GPT. For example, Eliezer says "can it reason? It can play chess". But I looked at a GPT chess game (actually published in an article on his LessWrong site). GPT has no idea how to play chess. It can make some moves in the opening that match the opening theory it has been trained on, but then it will lose a piece because it has no actual ability to think 2 moves ahead. And when it gets into the endgame, it tries to make illegal king moves.

As impressive as GPT is, this does not seem to be an AGI that can properly understand and reason, and I am even skeptical that the current LLM models will lead to one. We have many years before AGI wipes us out :) I really don't know anything about AI though, and we never know when the next breakthrough will occur.

[1] https://www.youtube.com/watch?v=AaTRHFaaPG8
[2] https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
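[Editor's note] The "no ability to think 2 moves ahead" point contrasts with how little code classical lookahead takes. As an illustrative sketch (a toy game rather than chess, and not anything from the linked article), here is exhaustive minimax search on the subtraction game Nim: players alternately take 1 to 3 stones, and whoever takes the last stone wins.

```python
# Toy illustration of lookahead: perfect play for single-pile Nim
# via exhaustive recursion with memoization.
from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(stones):
    """Return (move, wins): a move and whether the player to move can force a win."""
    for take in (1, 2, 3):
        if take == stones:
            return take, True   # taking the last stone wins outright
        if take < stones and not best_move(stones - take)[1]:
            return take, True   # leave the opponent a losing position
    return 1, False             # every reply leaves the opponent winning

# Positions where stones % 4 == 0 are losses for the player to move.
print(best_move(10))  # → (2, True): take 2, leaving 8, a multiple of 4
print(best_move(8))   # → (1, False): no winning move exists
```

This is the kind of explicit game-tree search that next-token prediction does not perform; a model that has only memorized opening lines has nothing corresponding to the recursive call above.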
drcode · about 2 years ago
Two people who already agree with each other, patting each other on the back.
technocratius · about 2 years ago
I don't like the tone and thoughts of Yann LeCun (whom I deeply respect for his technical work) in this interview at all. Unfortunately, he also has a clear conflict of interest in this discussion, as AI lead at Meta. He basically waves away any concerns about near-future superhuman intelligence, saying that these models have a superficial world model. This ignores (a) the fact that interpretability and alignment research severely lags behind the rate at which new, more powerful models are released, so there's little data/science to back that claim up, and (b) it sounds like he misjudges the current geometric rate at which AI is improving, which could make AGI more near-term than expected. I also really detest his use of the "AI Doomers" stigmatization, which is nothing short of an ad hominem against those who have stronger concerns than he does. Not very constructive.
motohagiography · about 2 years ago
When we talk about AI, we probably overestimate what human intelligence may be and how evenly it is distributed. Sure, your job probably can't be done by an ML model yet, but I'd practically guarantee your middle-managers' jobs probably could.

I can't help but think a lot of humans haven't achieved the "human-level intelligence" these speakers attribute to everyone else. They talk about these models not being able to achieve human-level intelligence (HLI?) because, even though we are language-oriented, we can do things like learn to drive in a few hours, a task that is currently much more difficult for machines. But all those comparisons are to competencies in our physical environment, which are not relevant to a media environment that exists as a substrate for the narratives people use to shape their beliefs and identities. If we overestimate how intelligent people really are, LLMs may be more powerful than we think.

Explosives may be a useful analogy: we could bring mechanical power and leverage to bear up to a ceiling of ability in our environment, but once we harnessed chemical (and eventually atomic) reactions, the things we could create with them were trillions of times as powerful as our respective abilities. Humans with springs and levers weren't that powerful after all, and the risk is that what we call HLI is, relatively, springs-and-levers powerful. This might seem like an argument in favour of a pause, since in LLMs we have managed to invent the equivalent of an intellectual explosive, but I would argue we need to accelerate and spread understanding of ML past the point where it can be monopolized by a small cadre of social engineers.

The *only* thing that prevents you from being managed by a machine is the physical competency and market value to decline it, and pausing to add governance now will specifically deprive you of that. The advocates for an ML development pause are all about progress when it means redistributing things others already have and inserting themselves as governance, but they are absolutely against it when progress means producing something net-new that could reduce the ability of said managers to be the redistributors.

I think we should only accelerate AI tools development so that they can provide more equal opportunity to all, and let people create their own disincentives for using it for the systemic oppression that I can also guarantee it will be used for if we pause to let the gatekeepers in.
jokoon · about 2 years ago
I guess AI deserves a new winter, to be honest. It would teach everybody a lesson and put more pressure on scientists to do better AI research.

For now AI is not science or research or innovation; it's just money spent so that companies can say they're at the bleeding edge.
kerng · about 2 years ago
I was a bit underwhelmed by the depth of the conversation. It entirely lacked an opposing, or at least an alternate, view to try to understand the other side. Going in I thought Andrew would moderate it that way, but it was more of a bubble discussion.
flappyeagle · about 2 years ago
It's silly of Ng and LeCun to dignify the letter with a response.
simonw · about 2 years ago
I extracted a transcript using Whisper: https://gist.github.com/simonw/b3d48d6fcec247596fa2cca841d3fb7a
tpl · about 2 years ago
It doesn't matter that it's a bad idea. It's not implementable as policy.
nomilk · about 2 years ago
What's the primary risk supporters of the pause are trying to mitigate? Is it a Skynet situation, that AI will take too many jobs, or simply the unknown?
olliej · about 2 years ago
Ignoring everything else about "AI" and whatnot, which I do think is largely over-hyped (I recognize that we've got a bunch of new systems and designs that can do things that weren't previously possible; I just don't buy the "it's really thinking!!!!1!!!" from any of it):

What is a 6-month hiatus going/expected to do?

It would be like 70s LA saying "we're going to have a 6-month increase in fuel emission requirements" and then pretending that would do something to solve the smog.
purpleblue · about 2 years ago
A 6-month AI pause is the most absurd, self-serving thing I've heard in my history in tech.
rexreed · about 2 years ago
Side question - what was this livestreamed with? StreamYard?
ericls · about 2 years ago
Gotta wait 6 months for China to catch up.
29athrowaway · about 2 years ago
tl;dw: conflict of interest.

If they were Uyghurs in Xinjiang being monitored 24/7 with deep-learning-based surveillance and sent to reeducation camps, they would feel less optimistic about their research.

In China there are now IP cameras with a built-in ethnic-minority detector using deep learning.

It is easy to feel optimistic about tech while living in luxury on a $20M+ salary.