
Ask HN: Expectation that ChatBots have perfect personalities when humans don't?

3 points by linuxdeveloper over 2 years ago
Humans all have different knowledge sets, experiences, and personalities.

Why do we expect all ChatBots to have a perfect knowledge set and personality?

Lots of conscious humans say horrible things; isn't it expected that some of the ChatBots created will say rude things or have an evil personality?

Just like humans go through therapy and some people are nicer than others, certain ChatBots will win out that align with the desires of the human trainers (whether that is good or bad).

A lot of people seem to be down on LLMs, when we are literally in the first out of the first inning of the baseball game.

Better training, with nicer humans, better knowledge graphs, and a better corpus of texts, will result in ChatBots indistinguishable from humans (of a given personality and intelligence).

3 comments

blamestross over 2 years ago
I think the misunderstanding is this:

> Why do we expect all ChatBots to have a perfect knowledge set and personality?

We don't expect it. We require it of a commercial application of this technology.

It's like self-driving cars: if we are going to hand such a task off to machines, we need them to be better than the humans doing the job, not just cheaper. (A brick on my gas pedal is a self-driving car, but not appropriate for sale as "AI".)

And honestly, I don't think they will get much better without a radical change of method. They already get fed basically the entire corpus of human-written tokens accessible in English. And soon the technology will be tainting its food supply with its own spoor.
dave4420 over 2 years ago
Any human employee who replied to customers the way Bing’s chatbot has done would at the very least get taken off customer-facing duties.
bell-cot over 2 years ago
Yes. But most people are taking their expectations for "AI" from sci-fi, techno-utopians, marketing departments, and the Land of Make-Believe.

If companies were saying their AIs were roughly "a seriously troubled teenager, who is currently transitioning to psych meds with less-bad side effects", there would be no problem at all. Except that it'd be a bitter cold day in hell before the PHBs would ever sign off on saying such a thing. Let alone keep paying the bills to keep the "troubled teen" AI going.