
TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


© 2025 TechEcho. All rights reserved.

AI Labs Urged to Pump the Brakes in Open Letter

32 points by ktamura, about 2 years ago

14 comments

midland_trucker, about 2 years ago
I find it really hard to see how productive a collective pause and 'think' about something so inherently unpredictable will be.

> "implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts"

Who deserves to be called an expert on this? Feels like Economics or something, where you have camps of thought advocating for themselves but little way of knowing who's right. Best to break things and develop antibodies whilst the stakes are still low.
vlaaad, about 2 years ago
So Musk was ousted from OpenAI and is now salty? Or are they trying to build a competitor, falling behind, and want OpenAI to take a break so they can catch up? Either way, the ethics talk is total bullshit.
Comment #35354751 not loaded
Comment #35354742 not loaded
reset-password, about 2 years ago
I felt the same way when I really wanted a Super Soaker 2000 and then the neighbor kid got one before me. "MOOOOM!!!"
Comment #35357530 not loaded
andrewstuart, about 2 years ago
People are really freaking out about AI, aren't they?

Why bother? It's moving super fast; just wait and see what happens.

And even if you could control or regulate it, exactly how would you do that? What would you be regulating/controlling? How would you define it?

And why would you want to anyway? The party has just started. If you think the revolution has arrived, you're completely wrong: this is just the beginning, and the most amazing stuff is yet to come.

These people begging for the pace to slow are analogous to the newspapers and music companies wanting the internet to slow down as they were being rapidly, involuntarily made redundant.
Comment #35355354 not loaded
Comment #35356262 not loaded
Comment #35355026 not loaded
Comment #35355361 not loaded
Comment #35356744 not loaded
fooker, about 2 years ago
If this sort of prevention didn't work for nuclear weapons, it won't work for anything, ever.
Comment #35355337 not loaded
Comment #35354825 not loaded
Kinrany, about 2 years ago
The letter: https://futureoflife.org/open-letter/pause-giant-ai-experiments/

I don't see any way to verify the signatures. Though the mention of Sam Harris' signature disappearing suggests they're being moderated, at least?
Comment #35354752 not loaded
sj8822, about 2 years ago
The posters here seem to be highly skeptical of the need to regulate emerging AI.

I find that pretty disappointing and surprising.

Recently, they gave gpt-4 access to a terminal, the internet, and money. And gpt-4 itself is software (and software in general has bugs, vulnerabilities, etc.): a black box that is incredibly, unprecedentedly powerful and not fully understood. Part of its training data is almost every known security vulnerability.

You guys really don't see any potential problems with this? I mean really? Get a little creative here.
Comment #35366134 not loaded
mindcrime, about 2 years ago
See also:

https://news.ycombinator.com/item?id=35356980
https://news.ycombinator.com/item?id=35356750
https://news.ycombinator.com/item?id=35356771
https://news.ycombinator.com/item?id=35356499
https://news.ycombinator.com/item?id=35356490
https://news.ycombinator.com/item?id=35356390
https://news.ycombinator.com/item?id=35355803
https://news.ycombinator.com/item?id=35354565
https://news.ycombinator.com/item?id=35353458
https://news.ycombinator.com/item?id=35353484
https://news.ycombinator.com/item?id=35353394
https://news.ycombinator.com/item?id=35354704
https://news.ycombinator.com/item?id=35355541

(not about "the letter" but in the same spirit)

https://news.ycombinator.com/item?id=35356691
Comment #35359035 not loaded
windex, about 2 years ago
My bet is that they want to catch up.
Comment #35356431 not loaded
weekendflavour, about 2 years ago
The accelerationist dream is finally becoming a reality, and these nerds wanna stop it. Deal with it.
RcouF1uZ4gsC, about 2 years ago
Speaking of AI safety:

With Tesla "Autopilot", Elon Musk is responsible for releasing AI that has actually killed multiple people.

I don't think ChatGPT has killed anyone yet.
DoctorOetker, about 2 years ago
Ultimately it's a millennia-old fight between manipulators and rationalism.

The bead maze toy versus the abacus.

Twisted, contorted reasoning versus formal verification.

Choose your abacus.

The black-box nature of machine learning models is not the issue. Instead of training to imitate vacuous conjectures and claims, as humans on average typically do, they could be trained to do automated theorem proving, AlphaZero style.

A minimalistic verifier like Metamath is available for free download, including set.mm and a freely available book. It would be hard to purge from civilization.

Currently its math database is collaboratively worked on at GitHub.

In theory a blockchain could host it.

Fermat-style challenging could be used to objectively assess the value of theorems: the longer a challenge matures unproven on the chain, the higher the reward if someone finally proves it.

This inevitably creates an incentive to enter and digitize known mathematics into machine-readable form, which will be easy for machine learning to accomplish.

Machine-learning-empowered automated theorem proving will become a profitable business, with the fruits available for all to benefit from.

Well, cryptography and protocols will also appear.

So during training the machine learning models will get endless bedtime stories about Alice, Bob, and Eve.

Using conventional forward or backward chaining combined with adversarial models, one can construct arbitrary provable theorems, negate them, then hide or propagate the negation so it's not simply the first symbol in the theorem.

So we can train models to challenge each other, Fermat style, about the truth or falsehood of a statement, and demand proof.

We can thus construct artificial mathematical systems with known inconsistencies and train models to *seek* a proof that the system contains an inconsistency. Such a proof will depend on the conflicting axioms.

Hence the models will be our best tool to detect and resolve hypocrisy.

The literal meaning of "apocalypse" is "revelation" or "uncovering", not "big tragedy"...

The verification algorithm, for example the ~300 LoC Python implementation by Raph Levien, owes most of its length to parsing the Metamath file format.

The actual Kolmogorov complexity of the verification algorithm itself is much smaller. There won't be any bits to "align".

All these hopeless attempts at aligning the intuition component of the machine learning model, instead of training it to gain intuition in producing logical derivations.

The real horror of the control freaks is not that their alignment mechanisms might fail, but that it's impossible to bias the verification algorithm itself, that it's impossible to perpetuate the conflicts of interest, that any additional code in the verifier is immediately suspect, especially if it obviously skips all checks and dogmatically accepts a statement signed by a hardcoded "right" key.

The objective judge will be mechanized.

"Abacus akbar!"
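The tiny-verifier point in the comment above can be illustrated with a toy sketch. This is not the Metamath format or Raph Levien's verifier; it is a hypothetical, minimal checker for propositional proofs in which every line must either be an axiom or follow by modus ponens from two earlier lines. The point it demonstrates is the one the comment makes: the trusted kernel can be so small that any extra code in it would be immediately suspect.

```python
def verify(axioms, proof):
    """Check a proof line by line: each line must be an axiom, or there
    must be earlier lines P and ('->', P, Q) from which modus ponens
    yields the current line Q."""
    derived = []
    for line in proof:
        ok = line in axioms or any(
            impl == ('->', p, line) for p in derived for impl in derived
        )
        if not ok:
            return False  # reject the whole proof on the first bad step
        derived.append(line)
    return True

# Propositions are atoms (strings) or implications ('->', antecedent, consequent).
axioms = {'A', ('->', 'A', 'B'), ('->', 'B', 'C')}

print(verify(axioms, ['A', ('->', 'A', 'B'), 'B', ('->', 'B', 'C'), 'C']))  # True
print(verify(axioms, ['A', 'C']))  # False: 'C' has not been derived yet
```

A real verifier such as Metamath's adds substitution over typed variables, but the checking loop stays this shape: a short, auditable kernel that either accepts a derivation or rejects it, with no room for appeals to authority.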
hermannj314, about 2 years ago
Elon already made his money selling a promise of FSD, but now wants to pump the brakes on AGI because it is almost here and he doesn't own it?

Where did all the hyper-competitive SV libertarians disappear to in the last year?
Comment #35355093 not loaded
Comment #35355590 not loaded
Comment #35356779 not loaded
rvz, about 2 years ago
> As of Tuesday, no O̶p̶e̶n̶AI.com employees had signed the letter, although CEO Sam Altman's name briefly appeared then disappeared from the list of signatories.

You already know the intention(s) of Sam Altman and O̶p̶e̶n̶AI.com. It was only to run with VC money and close up all their research.

They are no better than DeepMind.
Comment #35355094 not loaded