TechEcho

OpenAI and Anthropic agree to send models to US Government for safety evaluation

57 points by RafelMri 9 months ago

15 comments

bko 9 months ago
My issue with AI safety is that it's an overloaded term. It could mean anything from an LLM giving you instructions on how to make an atomic bomb to writing spicy jokes if you prompt it to do so. It's not clear which kind of safety these regulatory agencies would be solving for.

But I'm worried this will be used to shape acceptable discourse, as people are increasingly using LLMs as a kind of database of knowledge. It is telling that the largest players are eager to comply, which suggests they feel they're in the club and the regulations will effectively be a moat.
0cf8612b2e1e 9 months ago
What exactly does the evaluation entail? Ask a bunch of naughty questions and see what happens? Unless the model can do nothing, I imagine all of them can be tricked into saying something unfortunate.

Naughty is in the eye of the beholder. Ask me what a Satanist is, and I would expect something about a group who challenges religious laws enshrining Christianity. Ask an evangelical, and discussing the topic could be forbidden heresy.

Pretty much any religious topic is going to anger someone. Can the models safely say anything?
ChrisArchitect 9 months ago
Official AI Safety Institute release from last week: https://www.nist.gov/news-events/news/2024/08/us-ai-safety-institute-signs-agreements-regarding-ai-safety-research

(https://news.ycombinator.com/item?id=41390447)
accra4rx 9 months ago
Bigger question: is the US Government ready to do a comprehensive safety evaluation? I think it is a cheap way for OpenAI and Anthropic to get vetting that their models are safe to use and can be adopted by various government entities and other organizations.
squarefoot 9 months ago
For some reason I mentally swapped OpenAI+Anthropic with parents and models with kids, possibly because it seemed the natural extension of a rotten corrupt mindset that can only produce disasters if given enough power.
josephd79 9 months ago
Great. Exactly what we need, more govt reg…
beefnugs 9 months ago
I like to chuckle to myself that this is what happens when you try the ole "will our product be TOO AMAZING IT MIGHT END THE WORLD??" viral marketing attempt.
JumpCrisscross 9 months ago
Not thrilled about this happening with zero input from Congress.

That said, this is NIST, a technical organisation. This collaboration will inform future lawmaking.
DSingularity 9 months ago
Of course they want this. Now they get to do all the fun and profitable stuff and all responsibility is on the government which certified the models.
valunord 9 months ago
Oh boy. Here we go.
earleybird 9 months ago
I have the utmost respect for the standards and engineering work done by NIST. I'm left with such cognitive dissonance seeing their name juxtaposed with "AI safety". That said, if anyone can come up with a decent definition, I have faith that NIST would be the ones, though I'm not holding my breath.
irthomasthomas 9 months ago
Oh boy, I hope those models do not get leaked in the process.
bschmidt1 9 months ago
Because lobbying exists in this country, and because legislators receive financial support from corporations like OpenAI, any so-called concession by a major US-based company to the US Government is likely a *deal* that will only benefit the company.

Altman has been clear for a long time that he wants the government to step in and regulate models (an obvious regulatory capture move). They haven't done it, and no amount of Elon Musk or Joe Rogan influence can get people to care, or to see it as anything other than regulatory capture. This is OpenAI moving forward anyway, but they can't be the only ones. *Hey Anthropic, get in...*

- It makes Anthropic "the other major provider", the Android to OpenAI's Apple

- It makes OpenAI not the only one calling for regulation

It reminds me of when Ted Cruz would grill Zuck on TV, yell at him, etc. - it's just a show. Zuck owns the senators, not the other way around. All the big players in our economy own a piece of the country, and they work together to make things happen - not the government. It's not a cabal with a unified agenda; there are competing interests, rivalries, and war. But we the voters aren't exposed to the real decision-making. We get the classics: abortion, same-sex marriage, which TV actor is gonna win president - a show.
nutanc 9 months ago
Dammit, now bureaucrats in other countries will jump on this as they have something easy to copy and get ahead in their profession.
plsbenice34 9 months ago
Does *anyone* take the term 'safety' here seriously without laughing? It is so obvious that it is a propaganda term for censorship and manipulation of cultural-political values.