Meta disbanded its Responsible AI team

404 points by jo_beef · over 1 year ago

36 comments

RcouF1uZ4gsC · over 1 year ago
Because Meta is releasing their models to the public, I consider them the most ethical company doing AI at scale.

Keeping AI models closed under the guise of "ethics" is, I think, the most unethical stance, as it makes people more dependent on the arbitrary decisions, goals, and priorities of big companies, instead of being allowed to define "alignment" for themselves.
seanhunter · over 1 year ago
It never made any organizational sense to me to have a "responsible AI team" in the first place. Every team doing AI work should be responsible and should think about the ethical (and, at a bare minimum, legal) dimension of what they are doing. Having that concentrated in a single team means that team becomes a bottleneck that has to vet all AI work everyone else does for responsibility, and/or everyone else gets a free pass to develop irresponsible AI, which doesn't sound great to me.

At some point AI becomes important enough to a company (and mature enough as a field) that a specific part of legal/compliance in big companies deals with the concrete elements of AI ethics and compliance and maybe trains everyone else, but everyone doing AI has to do responsible AI. It can't be a team.

For me this is exactly like how big megacorps have an "Innovation team"[1] and convince themselves that makes them an innovative company. No: if you're an innovative company, you foster innovation everywhere. If you have an "innovation team", that's where innovation goes to die.

[1] In my experience they make a "really cool" floor with couches, and everyone thinks it's cool to draw on the glass walls of the conference rooms instead of whiteboards.
happytiger · over 1 year ago
Early-stage "technology ethics teams" are about optics, not reality.

In the early stages of a new technology, the core ethics lies in the hands of very small teams, or often individuals.

If those handling the core direction decide to unleash it irresponsibly, it's done. Significant harm can be done by one person dealing with weapons of mass destruction, chemical weapons, digital intelligence, etc.

It's not wrong to have these teams, but the truth is that anyone working with the technology needs to be treated like they are on an ethics team, rather than building an "ethics group" that's supposed to proxy the responsibility for doing it the "right way."

Self-directed or self-aware AI also complicates this situation immeasurably, as having an ethics team presents a *perfect target* for a rogue AI or bad actor. You're creating a "trusted group" with special authority for something/someone to corrupt. It is not wise to create privileged attack surfaces when working with digital intelligences.
MicolashKyoka · over 1 year ago
AI safety is just a rent-seeking cult/circus, leeching on the work done by others. Good on Meta for cleaning shop.
luigi23 · over 1 year ago
When the money's out and there's a fire going on (at OpenAI), it's the best moment to close departments that existed solely for virtue signaling :/
Geisterde · over 1 year ago
Completely absent is a single example of what this team positively contributed. Perhaps we should look at the track record of the past few years and see how effective Meta has been in upholding the truth; it doesn't look pretty.
irusensei · over 1 year ago
Considering how costly it is to train models, I'm sure control freaks and rent seekers are salivating to dig their teeth into this. But as the technology progresses and opposing parts of the world get hold of it, all this responsible-and-regulated feel-good corpo BS will backfire.
ralusek · over 1 year ago
There is no putting the cat back in the bag. The only defense against AI at this point is more powerful AI, and we just have to hope that:

1.) there *is* an equilibrium that can be reached

2.) the journey to, and stabilization at, said equilibrium is compatible with human life

I have a feeling that the swings of AI stabilizing among adversarial agents are going to happen at a scale of destruction that is very taxing on our civilizations.

Think of it this way: every time there's a murder-suicide or a mass shooting, I basically write it off as "this individual is doing as much damage as they possibly could, with whatever they could reasonably get their hands on." When some of these agents become unlocked and accessible to such people, eventually you're going to have people with no regard for the consequences directing their agents to do things like knock out transformer stations and parts of the power grid. And the amount of mission-critical infrastructure sitting on unsecured networks, or using outdated cryptography, all basically waiting there, is staggering.

For a human to even be able to probe this space, they have to be pretty competent, and they are probably less nihilistic, detached, and destructive than your typical shooter type. Meanwhile, put a reasonably capable agent in the hands of a shooter type, and any midwit can wreak havoc on their way out.

So I suspect we'll have a few of these incidents, and then the white-hat adversarial AIs will come online in earnest; they'll begin probing on their own, alerting us to major vulnerabilities, and maybe even fixing them. As I said, eventually this behavior will stabilize, but that doesn't mean the blows dealt in this adversarial relationship won't carry the cost of thousands of human lives.

And this is all within the subset of cases that are "AI with nefarious motivations as directed by user(s)." This isn't even touching on scenarios in which an AI might be self-motivated against our interests.
speedylight · over 1 year ago
I honestly believe the best way to make AI responsibly is to make it open source. That way no single entity has total control over it, and researchers can study the models to better understand how they can be used both nefariously and for good; doing that allows us to build defenses that minimize the risks and reap the benefits. Meta is already doing this, but other companies and organizations should do so as well.
seydor · over 1 year ago
The responsibility for AI should lie in the hands of users, but right now no company is even close to giving AI users the power to shape the product in responsible ways. The legal system already covers these externalities, and all their attempts at covering their ass have resulted in stupider and less useful systems.

They are literally leaking more and more users to the open source models because of it. So, in retrospect, maybe it would have been better if they hadn't disbanded it.
unicornmama · over 1 year ago
Meta cannot be both referee and player on the field. Responsible, schmesponsible. True oversight can only come from an independent entity.

These internal committees are Kabuki theater.
corethree · over 1 year ago
Safety for AI is like making safe bullets or safe swords or safe shotguns.

The reason there's so much emphasis on this is liability. That's it. Otherwise there's really no point.

It's the psychological aspect of blame that influences the liability. If I wanted to make a dirty bomb, it's harder to blame Google if I found the instructions through Google, and easier to blame AI if I got them from an LLM, mainly because with an LLM the data was transferred from the company's servers directly to me. But the logical route to getting that information is essentially the same.

So because of this, companies like Meta (who really don't give a shit) spend so much time emphasizing this safety BS. Now, I'm not denigrating Meta for not giving a shit, because I don't give a shit either.

Kitchen knives can kill people, folks. Nothing can stop it. And I don't give a shit about people designing safety into kitchen knives any more than I give a shit about people designing safety into AI. Pointless.
stainablesteel · over 1 year ago
I have no problem with this.

Anyone who has a problem with this should have quantitatively MORE of a problem with the WHO removing "do no harm" from its guidelines. I would accept nothing less.
xkcd1963 · over 1 year ago
Whoever actually buys into these pitiful showcases of morality for marketing purposes can't be helped. American companies are only looking for profit, no matter the cost.
g96alqdm0x · over 1 year ago
How convenient! Turns out they never gave the slightest damn about "Responsible AI" in the first place. It's nice to roll out news like this while everyone else is distracted.
jbirer · over 1 year ago
Looks like responsibility and ethics got in the way of profit.
pelorat · over 1 year ago
Probably because it's a job anyone can do.
karmasimida · over 1 year ago
Responsible AI should be team-oriented in the first place; each project has very different security objectives.
martin82 · over 1 year ago
If we ever had "responsible software" teams and they actually had any power, companies like Meta, Google, and Microsoft wouldn't even exist.

So yeah... the whole idea of "responsible AI" is just wishful thinking at best and deceptive hypocrisy at worst.
baby · over 1 year ago
I really, really hate what we did to LLMs. We throttled them so much that they're not as useful as they used to be. I think everybody understands that LLMs lie some percentage of the time; it's just dumb to censor them. Good move by Meta.
tayo42 · over 1 year ago
It feels to me like the AI field is filled with corp-speak phrases that aren't clear at all: alignment, responsible, safety, etc. These aren't words normal people use to describe things. What's up with this?
readyplayernull · over 1 year ago
The only reason BigCo doesn't disband their legal team is because of laws.
hypertele-Xii · over 1 year ago
Google removed "Don't be evil," so we know they do evil. Facebook disbanded its Responsible AI team, so we know they do AI irresponsibly. I love greedy, evil corporations telling on themselves.
camdenlock · over 1 year ago
Such teams are just panicked by the idea that these models might not exclusively push their preferred ideology (critical social justice). We probably shouldn’t shed a tear for their disbandment.
say_it_as_it_is · over 1 year ago
"Move slow and ask for permission to do things" evidently wasn't working out. This firing wasn't a call to start doing evil. The process was just too tedious.
arisAlexis · over 1 year ago
Of course. A guy with LeCun's ego is perfect to destroy humanity, along with Musk.
ITB · over 1 year ago
There are a lot of comparisons here between AI safety teams and legal or security teams. The comparison doesn't hold. Nobody really knows what it means to build a safe AI, so these teams can only resort to slowing things down for slowing down's sake. At least a legal team can make reference to real liabilities, and a security team can identify actual exposure.
neverrroot · over 1 year ago
Timing is everything; the coup at OpenAI will have quite the impact.
dudeinjapan · over 1 year ago
…then they gave their Irresponsible AI team a big raise.
doubloon · over 1 year ago
In other news, wolves have disbanded their Sheep Safety team.
amai · over 1 year ago
Capitalism is only responsible when being responsible brings in more money than being irresponsible.
Simon_ORourke · over 1 year ago
AI is a tool, and there's about as much point in having a team fretting about its responsible usage as there is in a bazooka manufacturer entertaining similar notions. Whoever ultimately owns the AI (or the bazooka) will always dictate how and where the tool is used.

Many of these AI-ethics foundations (e.g., DAIR) just seem to advocate rent-seeking behavior, scraping out a role for themselves off the backs of others who do the actual technical (and indeed ethical) work. I'm sure the Meta Responsible AI team was staffed with similar semi-literate blowhards, all stance and no actual work.
spangry · over 1 year ago
Does anyone know what this Responsible AI team actually did? Were they working on the AI alignment/control problem, or was it more about curtailing politically undesirable model outputs? The conflation of these two things is unfortunate, because the latter turns people off the former. It's like a reverse motte-and-bailey.
asylteltine · over 1 year ago
I'm okay with this. They mostly complained about nonsense or nonexistent problems. Maybe they can stop "aligning" their models now.
ryanjshaw · over 1 year ago
Seems like something that should exist as a specialist knowledge team within an existing compliance team, i.e. guided primarily by legal concerns.
121789 · over 1 year ago
These types of teams never last long.