Humans all have different knowledge sets, experiences, and personalities.<p>Why do we expect all ChatBots to have a perfect knowledge set and personality?<p>Lots of conscious humans say horrible things; isn't it expected that some of the ChatBots created will say rude things or have an evil personality?<p>Just like humans go through therapy and some people are nicer than others, certain ChatBots will win out that align with the desires of the human trainers (whether that is good or bad).<p>A lot of people seem to be down on LLMs, when we are literally at the first out of the first inning of the baseball game.<p>Better training, with nicer humans, better knowledge graphs, and better corpora of text will result in ChatBots indistinguishable from humans (of a given personality and intelligence).
I think the misunderstanding is this:<p>> Why do we expect all ChatBots to have a perfect knowledge set and personality?<p>We don't expect it. We require it of a commercial application of this technology.<p>It's like self-driving cars: if we are going to hand such a task off to machines, we need them to be better than the humans doing the job, not just cheaper. (A brick on my gas pedal is a self-driving car, but not appropriate for sale as "AI".)<p>And honestly, I don't think they will get much better without radical method change. They already get fed basically the entire corpus of human-written tokens accessible in English. And soon they will be tainting their food supply with their own spoor.
Yes. But most people are taking their expectations for "AI" from Sci-Fi, techno-utopians, marketing departments, and the Land of Make-Believe.<p>If companies were saying their AIs were roughly "a seriously troubled teenager, who is currently transitioning to psych meds with less-bad side effects", there would be no problem at all. Except that it'd be a bitterly cold day in hell before the PHBs would ever sign off on saying such a thing. Let alone keep paying the bills to keep the "troubled teen" AI going.