
Co-founder of DeepMind on how AI threatens to reshape life as we know it

50 points by pmastela over 1 year ago

8 comments

jazzyjackson over 1 year ago
AI risk is a spectacularization of a new source of wealth.

When agriculture, and then fossil fuels and petro-fertilizer supercharging agriculture, allowed humanity to become 100x more productive, the owners of the land & capital managed to capture the gains almost entirely, leaving the proletariat with nothing but their labor and still struggling to survive despite the enormous windfall in energy.

Now that AI is on the verge of becoming the major producer of wealth in the 21st century and onward, the owners of that capital would very much like us to talk about anything except the possibility of capturing the wealth of models trained on the public's behavior and distributing it directly as basic income, the way Alaska and Norway redistribute profits from oil to citizens. The data that has been drilled and refined into inference models belongs to humanity, to every book and letter ever penned. Why are we allowing the investors* of Facebook, Google, and Microsoft to capture the value while leaving the rest of humanity to toil?

The talk of extinction is a magician's trick to divert attention from the fact that we could likely move to 4-hour workweeks in the coming decades, but only if wealth distribution is forced onto the capital owners; they will not share if they are not made to.

* Yes, indeed, the public can be the investors, as is the case with mutual funds and so on, but the people being obsoleted will have no significant wealth tied up in these stocks from which to receive dividends. I would support, as an alternative to aggressive taxation, a forced dilution of the stock of any company found to be replacing its workforce with a superintelligence. Distribute the newly minted shares to all citizens so dividends can pay out to the rightful recipients of royalties.

ml-anon over 1 year ago
It's wild that a guy who was demoted and fired for bullying and abusing staff over the course of a decade, and who oversaw an almost certainly illegal misuse of NHS patient data that caused Google to shut down DeepMind Health and made the landscape so toxic they eventually shuttered Google Health, still has a seat at the table. I'd say shame on The Guardian for this breathless puff piece, but... it's The Guardian.

wruza over 1 year ago
Maybe it's just me right after a nap, but boy, that was this year's hardest and emptiest read. The only thing that became clear is that you can buy more of it in book format.

skepticATX over 1 year ago
These fluff pieces are so predictable and uninspiring. There are tons of qualified and interesting people working in AI, and they hold a diverse range of viewpoints. Why is it one specific sub-group's beliefs that are continuously forced on us, without even an attempt at critical examination of those beliefs?

Simulacra over 1 year ago
I've read Daemon by Daniel Suarez, like many tech people. I get it: AI could take over and supplant a global government, threatening society, life, wealth, etc.

We've been having the same conversation since the dawn of science fiction. If at any point there is general confusion between what is artificial and what is human, it will cause such an alarmist backlash that AI will always be kept in check, or destroyed.

No human wants to admit that they thought they were talking to a human the whole time when in fact it was a robot. For that reason alone, AI will never be allowed to grow to a point where it threatens life "as we know it."

smokel over 1 year ago
I don't read all the shallow articles on the coming AI apocalypse, but I gave this one a try.

I was a bit disappointed by the lack of creativity in dreaming up an AI-infested future.

People are able to force each other into gullibly working 40 hours a week so that they can have shelter, food, and a smartphone. This is not a rational thing; it is historically grown groupthink on a massive scale. Trying to use rational arguments to forecast what the future will look like based on this chaotic process seems silly at best.

Nobody in the 1400s expected cars or democracy, let alone Facebook. So if this AI thing is as good as the printing press, then I wholly expect everyone to clone themselves a couple of thousand times, inhabit the interiors of planets, and grow 500,000 years old before entering higher education, but not something as mundane as "letting an AI fill in the paperwork to set up a company".

TL;DR: you can safely skip this article.

kmeisthax over 1 year ago
How the hell do better neural networks mean that printing material from CO2 emissions becomes financially viable, economic, or even just thermodynamically favored? A neural network is a tool for learning a particular statistical distribution. Processes and information we don't already know don't just fall out of the training distribution; you need to run experiments and prototype your way to the thing you want to do. You use neural networks when you want to do things at scale, and problems that have to be tackled at scale in R&D or research work are relatively scarce.

I do not doubt that machine learning and neural networks will accelerate research, but the limiting factor is still going to be humans in the loop for the foreseeable future, given the current state of the art (e.g. LLMs with tree-of-thought reasoning and LLM-powered agents that are easily subverted in the same way one hacks a badly written PHP application). You will have people using ML to crunch large datasets or accelerate simulation work, but that's it.

Asymmetric attacks are very feasible with today's LLMs and art generators, but that harm has already come to pass. It is also not a *new* harm. If you don't believe me, then I've got hard evidence of Donald Trump cheating in Barack Obama's Minecraft server[3].

Also...

> These include increasing the number of researchers working on safety (including "off switches" for software threatening to run out of control) from 300 or 400 currently to hundreds of thousands; fitting DNA synthesisers with a screening system that will report any pathogenic sequences; and thrashing out a system of international treaties to restrict and regulate dangerous tech.

Ok, so first off, all those AI safety researchers need to be fired if they thought "off switches" were a good thing to mention here. It's already fairly well established AI safety canon that a "sufficiently smart AI" will reason about the off switch in a way that makes it useless[0].

Furthermore, notice how these are all obviously scary scenarios. The article fails to mention the mundane harms of AI: automation blind spots that render any human supervision useless[1]. I happen to share Suleyman's opposition to the PayPal brand of fauxbertarianism[2], but I would like to point out that they're on *your side*. Elon Musk talks about "AI harms" just like you do and thinks it needs to be regulated. The obvious choice of requiring a license to train AI is exactly the kind of regulatory capture that actual libertarians, right- or left-, would rail against. What we need are not bans or controls on training AI, but bans or controls on businesses and governments using AI for safety-critical, business-development, law-enforcement, or executive functions. That is a harm that is here today, has been here for at least a decade, and could be regulated without stymieing research and creating "AI NIMBYs".

[0] https://www.youtube.com/watch?v=3TYT1QfdfsM

[1] https://pluralistic.net/2023/08/23/automation-blindness/#humans-in-the-loop

[2] AKA, power is only bad if the fist has the word 'GOVERNMENT' written on it. Or 'let snek step on other snek'. This is distinct from just right-libertarianism.

[3] https://www.youtube.com/watch?v=aL1f6w-ziOM

verdverm over 1 year ago
Why is it "AI threatens to..." rather than "AI opens opportunities to..."?

To me, there seems to be way more upside than downside, like pretty much every new tech.