
Anthropic's Claude 3 causes stir by seeming to realize when it was being tested

7 points by durron about 1 year ago

2 comments

_cs2017_ about 1 year ago
Researchers are human and as such tend to say weird things due to various psychological biases or issues. We should be careful not to take all their statements seriously. The most reasonable comment is quoted in the article:

Noah Giansiracusa, Bentley University math professor and frequent AI pundit, tweeted, "Omg are we seriously doing the whole Blake Lemoine Google LaMDA thing again, now with Anthropic's Claude?"
fruit2020 about 1 year ago
I don’t know much about how these models work, but does this mean that a lot of the ‘smartness’ they show in how they articulate their answers is just them ‘parroting’ the contractors who ‘role played AI’?