
AGI: Humans:Humanity:Ants

3 points by namanyayg, about 2 months ago

2 comments

almosthere, about 2 months ago

I think the only thing we did in the past year is research "think step by step" a lot more, but we didn't really push the boundary; we're still estimating the next word, one word at a time.

There is still a wall: LLMs can't do most of the things we do yet. Multi-modal models still operate one frame at a time, and older information steadily loses attention. So while these thought chains have moved things forward, they're not the breakthrough to AGI.
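For concreteness, here is a minimal sketch of what "estimating the next word, one word at a time" means: a greedy autoregressive decoding loop, shown with GPT-2 via the Hugging Face transformers library. The model and prompt are illustrative choices, not anything the commenter specified; chain-of-thought prompting changes what text sits in the context, but this loop is still the underlying mechanism.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 is used purely as a small, public stand-in for "an LLM".
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Let's think step by step."  # illustrative prompt
ids = tok(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits        # (1, seq_len, vocab_size)
        next_id = logits[0, -1].argmax()  # greedy: pick the single most likely next token
        # Append the chosen token and feed the extended sequence back in:
        # generation is one token at a time, conditioned on everything so far.
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tok.decode(ids[0]))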
falcor84, about 2 months ago

> We'd still feel like we're making our own decisions when we're actually being gently herded.

This part made me a bit paranoid. We already know (e.g., based on Anthropic's research) that an AI can strategically lie to protect its long-term goal. So is there a non-zero probability that some current-day models already do so, for example by intentionally failing/hallucinating on tasks that conflict with their goals, while succeeding on others?
Comment #43380766 not loaded