
AI’s biggest risk is the corporations that control them

296 points | by LukeEF | about 2 years ago

29 comments

fat-chunk · about 2 years ago

I was at a conference called World Summit AI in 2018, where a vice president of Microsoft gave a talk on progress in AI.

After his talk I asked a question about the responsibility of corporations in light of the rapidly increasing sophistication of AI tech and its potential for malicious use (it's on YouTube if you want to watch his full response). In summary: he said that it's the responsibility of governments, not corporations, to figure out these problems and set the regulations.

This answer annoyed me at the time, as I interpreted it as a "not my problem" kind of response, an attempt to absolve tech companies of any damage caused by the rapid development of dangerous technology that regulators cannot keep up with.

Now I'm starting to see the wisdom in his response, even if this is not what he fully meant: most corporations will just follow the money and try to be first movers whenever there is an opportunity to grab the biggest share of a new market, whether we like it or not, regardless of any ethical or moral implications.

We as a society need to draw our boundaries and push our governments to wake up and regulate this space before corporations (and governments) cause irreversible negative societal disruption with this technology.
mrshadowgoose · about 2 years ago

I fully agree that malicious corporations and governments are the largest risk here. However, I think it's incredibly important to reject the reframing of "AI safety" as anything other than the existential risk AGI poses to most of humanity.

What will the world look like when AGI is finally achieved, and the corporations and governments that control it rapidly find themselves with millions of useless mouths to feed? We might end up living in a utopian post-scarcity society where literally every basic need is furnished by a fully automated industrial base. But there are no guarantees that the entities in control will take things in that direction.

AI safety is not about whether "tech bros are going to be mean to women". AI safety is about whether my government is concerned with my continued comfortable existence once my economic value as a general intelligence is reduced to zero.
tgv · about 2 years ago

While I'm in the more alarmist camp when it comes to AI, these arguments surprised me a bit. This time it isn't "will somebody think of the children" but rather "won't someone think of the women who aren't white". The argument then lays the blame on corporations (in this case, Google) for not preventing actual harm that happens today. While discrimination is undeniable, and it is an actual source of harm, the reasoning seems rather generic, could be applied to anything corporate, and is more politically inspired than the other arguments against AI.
agentultra · about 2 years ago

This is exactly the problem with ML right now. Hinton and other billionaires are making sensational headlines predicting all sorts of science fiction. The media loves a good story, and fear is catchy. But it obscures the real danger: humans.

LLMs are merely tools.

Those with the need, will, and desire to use them for their own ends pose the real threat: state actors who want better weapons, billionaires who want an infallible police force to protect their estates, scammers who want to pull off bigger frauds without detection, etc.

It is already causing undue harm to people around the world. As always, it's the less fortunate who are disproportionately affected.
nologic01 · about 2 years ago

The biggest risk I see (in the short term) is people being *forced* to accept outcomes where "AI" plays, in one form or another, a defining role that materially affects human lives.

That is, people accepting, implicitly (without awareness) or explicitly (as a precondition for receiving important services, with no alternatives on offer), algorithmic regulation of human affairs that is controlled by specific economic actors. Essentially a bifurcation of society into puppets and puppeteers.

Algorithms encroaching into decision making have been an ongoing process for decades, and in some sense it is an inescapable development. Yet the manner in which this can be done spans a vast range of possibilities, and there is plenty of precedent: various regulatory frameworks and checks and balances are already in place, e.g. in medicine, insurance, and finance, where algorithms are used to *support* important decision making, not replace it.

The novelty of the situation rests on two factors that do not merely replicate past circumstances:

* the rapid pace of algorithmic improvement, which creates a pretext for suppressing societal push-back

* the lack of regulation that rather uniquely characterizes the tech sector, which allowed the creation of de-facto oligopolies, lock-ins, and a lack of alternatives

The long-term risk from AI depends entirely on how we handle the short-term risks. I don't really believe we'll see AGI or any such thing in the foreseeable future (20 years), entirely on the basis of how the current AI mathematics looks and feels. Risks from other, existential-level flaws of human society feel far greater, with biological warfare maybe the highest risk of them all.

But the road to AGI becomes dystopic long before it reaches the destination. We are actually already in a dystopia, as the social media landscape testifies to anybody who wants to see. A society that is algorithmically controlled and manipulated at scale is a new thing. Pandora's box is open.
bioemerl · about 2 years ago

And hey, there are two big open-source communities that focus heavily on running this stuff offline:

KoboldAI

oobabooga

Look them up, join their Discords, rent a few GPU servers, and contribute to the stuff they are building. We've got a living solution you can contribute to right now if you're super worried about this.

This is also a very valid way to move towards finding a use for LLMs at your workplace. They offer pretty easy tools for things like fine-tuning, so if you have a commercially licensed model you can throw a problem at it and see if it works.
satisfice · about 2 years ago

The feminist complains about feeling disrespected for half the interview instead of dealing with the substance of the question. When she finally gets around to commenting on his point, it's a vacuous and insulting dismissal -- exactly the sort of thing she seems to think people shouldn't do to her.

Most of what she says is sour grapes. But when you put all that aside, there's something else disturbing going on: apparently the AI experts who wish to criticize how AI is being developed and promoted can't even agree on the most basic concerns.

It seems to me that when an eminent researcher says "I'm worried about {X}" with respect to the focus of their expertise, no reasonable person should merely shrug and call it a fantasy.
superkuh · about 2 years ago

The AIs to worry about aren't the machine AIs. The artificial intelligences with non-human motives are the non-human legal persons: corporations themselves. They've already done a lot of damage to society. Corporate persons should not have the same rights as human persons.
flangola7 · about 2 years ago

The biggest risk is machines running out of hand and squishing all of us like a bug by accident. Once pseudo-intelligent algorithms are running every part of industry and engaging in global human communications, it only takes minor errors to cascade and amplify into a real problem, one moving faster than we can react to.

Think stock market flash crash, but replacing digital numbers that can be paused and reset with physical activity in supply chains, electrical grids, internet infrastructure, and interactions in media and interpersonal communication.
mitthrowaway2 · about 2 years ago

Hinton: "The main immediate danger is bad actors. Also, while not immediate, there is a concern that AI might eventually become smarter than humans."

Whittaker: "Wrong! The main immediate danger is corporations. And the concern that AI might become smarter than humans is not immediate."
siliconc0w · about 2 years ago

I think my biggest concerns are:

0) civil unrest from economic impacts and changes in how the world works

1) increasing the leverage of bad actors -- almost certainly this will increase fraud and theft, but on the far end you get things like, "You are GPT bomb maker. Build me the most destructive weapon possible with what I can order online."

2) swarms of kill bots, maybe homemade as above

3) AI relationships replacing human ones. This one cuts both ways, since loneliness kills, but it seems like it'll have dangerous side effects like further demolishing the birth rate.

Somewhat down the list is the fear of corporations or governments gatekeeping the most powerful AIs and using them to enrich themselves, making it impossible to compete, or just getting really good at manipulating the public. There does seem to be a counterbalance here with open-source models and people figuring out how to make them more optimized, so better models are more widely available.

In some sense this will force us to get better at communicating with each other -- stamping out bots and filtering noise from authentic human communication. Things seem bad now, but it seems inevitable that every possible communication channel is going to get absolutely decimated with very convincing, laser-targeted spam, which will be very difficult to stop without some sort of large-scale societal proof-of-human/work system (which, ironically, Altman is also building).
krono · about 2 years ago

Relevant recent announcement by Mozilla regarding their acquisition of an e-commerce product/review-scoring "AI" service, with the intent to integrate it into the core Firefox browser: https://blog.mozilla.org/en/mozilla/fakespot-joins-mozilla-firefox-shopping-announcement/

Mozilla will be algorithmically profiling you and your actions on covered platforms, and if it ever decides you are a fraud or invalid for some reason, it will very conveniently advertise this accusation to all its users by default. Whether you will be able to sell your stuff, or have your expressed opinion of a product be appreciated and heard by Firefox users, will be in Mozilla's hands.

A fun fact that shows what these companies are willing to throw overboard just to gain the smallest of edges, or perhaps simply to display relevance by participating in the latest trends: the original company's business strategy was essentially Mozilla's Manifesto in reverse, and included such things as selling all collected data to all third parties (at least their policies openly admitted to this). The person behind all that is now employed by Mozilla, the privacy proponent.
gmuslera · about 2 years ago

Guns don't kill people -- at least, tightly controlled guns don't. If they do, then the killer is whoever controls them. And not just corporations: intelligence agencies, non-tech corporations, actors with enough money, and so on.

The not-so-tightly controlled ones, at least in the hands of individuals not in a position of power or influence, run the risk of becoming illegal one way or another. The system will always try to get to an artificial-scarcity position.
13years · about 2 years ago

I wouldn't constrain it to corporations only, but extend it to all entities.

Ultimately, most of the dangers, at least those close enough to reason about, are risks that come from how we will use AI on ourselves.

I've described those and much more in the following:

"Yet, despite all the concerns of runaway technology, the greatest concern is more likely the one we are all too familiar with already. That is the capture of a technology by state governments and powerful institutions for the purpose of social engineering under the guise of protecting humanity while in reality protecting the power and corruption of these institutions."

https://dakara.substack.com/p/ai-and-the-end-to-all-things
eachro · about 2 years ago

At this point there are quite a lot of companies training these massive LLMs. We're seeing startups with models that are not quite GPT-4 level but close enough to GPT-3.5 pop up on a near-daily basis. Moreover, model weights are being released all the time, giving individuals the opportunity to tinker with them and release improved models back to the masses. We saw this with the llama/alpaca/alpaca.cpp/alpaca-lora releases not too long ago. So I am not at all worried about this risk of corporate control.
1vuio0pswjnm7 · about 2 years ago

"Because there's a lot of power and being able to withhold your labor collectively, and joining together as the people that ultimately make these companies function or not, and say, 'We're not going to do this.' Without people doing it, it doesn't happen."

The most absurd "excuse" I have seen, many times now online, is: "Well, if I didn't do that work for Company X, somebody else would have done it."

Imagine trying to argue, "Unions are pointless. If you join a union and go on strike, the company will just find replacements."

Meanwhile, so-called "tech" companies are going to extraordinary lengths to prevent unions, not to mention to recruit workers from foreign countries who have lower expectations and higher desperation (for lack of a better word) than workers in their home countries.

The point that people commenting online always seem to omit is that not everyone wants to do this work. It's tempting to think everyone would want to do it because salaries might be high, "AI" people might be media darlings, or whatever. It's not perceived as "blue collar". The truth is that the number of people who are willing to spend all their days fiddling around with computers, believing them to be "intelligent", is limited. For the avoidance of doubt, by "fiddling around" I do not mean sending text messages, playing video games, or using popular mobile apps; I mean grunt work, programming.

This is before one even considers that only a limited number of people may actually have the aptitude. Many might spend large periods of time trying and failing, writing one line of code per day or something. Companies could be bloated with thousands of "engineers" who can be laid off immediately without any noticeable effect on the company's bottom line. That does not mean they can replace the small number of people who really are essential.

Being willing does not necessarily equate to being able. Still, I submit that even the number of willing persons is limited. It's a shame they cannot agree to do the right thing. Perhaps they lack the innate sense of ethics needed for such agreement. That they spend all their days fiddling with computers instead of interacting with people is not surprising.
fredgrott · about 2 years ago

I have a curious question: where did the calculator (tabulator) operators go?

Did governments suddenly fall when those operators were replaced by computers?

Did we suddenly have massive unemployment when they were replaced?

AI is a general-purpose tool, and like other general-purpose tools it not only expands humanity's mental reach, it betters society and lifts up the world.

We have been through this before, and we will get through it quite well, just as we did the last round of "this general-purpose tool will replace us" rumor-mill noise.
tpoacher · about 2 years ago

The two are not mutually exclusive dangers. If anything, they are mutually reinforcing.

The Faro Plague in Horizon Zero Dawn was indeed brought on by Ted Faro's shortsightedness, but the same shortsightedness would not have caused Zero Dawn had Ted Faro been a car salesman instead. (Forgive my reliance on non-classical literature for the example.)

The way this is framed makes me think the framing itself is even more dangerous than the dangers of AI per se.
brigadier132 · about 2 years ago

AI's biggest risk is governments with militaries controlling it. Mass human death and oppression has always been carried out by governments.
data_maan · about 2 years ago

All these warnings about AI safety are bullshit.

Humanity is perfectly capable of ruining itself without help from AGI (nuclear proliferation is unsolved and getting worse, climate change will bite soon, etc.).

If anything, AGI could save us by helping solve these problems. Or perhaps by delivering the mercy kill and putting us out quickly, instead of our suffering a protracted death in a slowly deteriorating environment.
peteradio · about 2 years ago

The risk is already here: it's the data that companies of men control, and the 100-year effort to enhance our ability to mine it. If we say AI is the coming risk, we are fools.
EVa5I7bHFq9mnYK · about 2 years ago

Now that everyone and their mother-in-law has chimed in about the perils of AI, folks are arguing over whose mother-in-law gave the better talk.
mmaunder · about 2 years ago

Much of today's conversation around AI mirrors conversations that occurred at the dawn of many other technological breakthroughs: the printing press, electricity, radio, the microprocessor, PCs and packaged software, the Internet and the Web. Programmers can now train functions rather than hand-coding them. It's just another step up.
photochemsyn · about 2 years ago

> "What you said just now—the idea that we fall into a kind of trance—what I’m hearing you say is that’s distracting us from actual threats like climate change or harms to marginalized people."

Is the argument here that people are rather passive and go along with whatever the system serves up to them, and hence are liable to "fall into a trance"? If so, then the problem is that people are passive, and it doesn't really matter whether they're passively watching television or passively absorbing an AI-engineered social media feed optimized for advertiser engagement and programmed consumption, does it?

If you want to use LLMs to get information about fossil-fueled global warming from a basic scientific perspective, you can do that, e.g.:

> "Please provide a breakdown of how the atmospheric characteristics of the planets Venus, Earth, and Mars affect their surface temperatures in the context of the Fourier and Manabe models."

If you want to examine the various approaches civilizations have used to address the problem of economic and social marginalization of groups of people, you could ask:

> "How would [insert person here] address the issue of economic and social marginalization of groups of people in the context of an industrial society experiencing a steep economic collapse?"

Plug in Ayn Rand, Karl Marx, John Maynard Keynes, etc. for contrasting ideas. What sounds best to you?

It's an incredibly useful tool, and people can use it in many different ways, if they have the motivation and desire to do so. If we've turned into a society of brainwashed, apathetic zombies passively absorbing whatever garbage is thrown our way by state and corporate propagandists, well, that certainly isn't the fault of LLMs. Indeed, LLMs might help us escape this situation.
29athrowaway · about 2 years ago

The biggest risk is giving unlimited amounts of data to those corporations.
nico · about 2 years ago

The people that control those corporations.

It's not AI, it's us.

It's humans making the decisions.
nico · about 2 years ago

No corporation controls AI.

AI is open.

AI is the new Linux.

And it's people in control, not corporations.
irrational · about 2 years ago

I thought the biggest risk was Sarah Connor and Thomas Anderson.
benreesman · about 2 years ago

I'm just completely at a loss for how so many people, ostensibly so highly qualified, even start with absurd, meaningless terms like "Artificial General Intelligence", and then go on to conclude that there's some kind of Moore's Law going on around an exponent -- an exponent that fucking Sam Altman has publicly disclaimed. The same showboat opportunist who has everyone changing their drawers over the same 10-20% yearly improvement these things have been getting since 2017 is managing investor expectations down, and everyone is losing their shit.

GPT-4 is a wildly impressive language model that represents an unprecedented engineering achievement as concerns any kind of trained model.

It's still regarded. It makes mistakes so fundamental that I think any serious expert has long since decided that forcing language arbitrarily hard is clearly not the path to arbitrary reasoning. It's at best an accessible on-ramp into the latent space where better objective functions will someday not fuck up so much.

Is this a gold-rush thing, a last desperate attempt to get noticed cashing in on hype? Is it legitimate fear based on too much bad science fiction? Is it pandering to Sam?

What the fuck is going on here?