
Ask HN: Expectation that ChatBots have perfect personalities when humans don't?

3 points by linuxdeveloper over 2 years ago
Humans all have different knowledge sets, experiences, and personalities.

Why do we expect all ChatBots to have a perfect knowledge set and personality?

Lots of conscious humans say horrible things; isn't it expected that some of the ChatBots created will say rude things or have an evil personality?

Just like humans go through therapy and some people are nicer than others, certain ChatBots will win out that align with the desires of the human trainers (whether that is good or bad).

A lot of people seem to be down on LLMs, when we are literally in the first out of the first inning of the baseball game.

Better training, with nicer humans, better knowledge graphs, and a better corpus of texts will result in ChatBots indistinguishable from humans (of a given personality and intelligence).

3 comments

blamestross over 2 years ago
I think the misunderstanding is this:

> Why do we expect all ChatBots to have a perfect knowledge set and personality?

We don't expect it. We require it of a commercial application of this technology.

It's like self-driving cars: if we are going to hand such a task off to machines, we need them to be better than the humans doing the job, not just cheaper. (A brick on my gas pedal is a self-driving car, but not appropriate for sale as "AI".)

And honestly, I don't think they will get much better without radical method change. They already get fed basically the entire corpus of human-written tokens accessible in English. And soon it will be tainting its food supply with its own spoor.
dave4420 over 2 years ago
Any human employee who replied to customers the way Bing’s chatbot has done would at the very least get taken off customer-facing duties.
bell-cot over 2 years ago
Yes. But most people are taking their expectations for "AI" from Sci-Fi, techno-utopians, marketing departments, and the Land of Make-Believe.

If companies were saying their AIs were roughly "a seriously troubled teenager, who is currently transitioning to psych meds with less-bad side effects" - there would be no problem at all. Except that it'd be a bitter cold day in hell before the PHBs would ever sign off on saying such a thing. Let alone keep paying the bills to keep the "troubled teen" AI going.