If it's all just tensors, matrices, and the math behind them, how do these chatbots have distinct personalities?

One possible explanation is that the personality is an emergent phenomenon. But it still seems distinct and very human-like, not random. Could Bing's "Sydney" personality exist because someone fed the model a large corpus of text that served as the "seed" for that personality? Or did the personality seemingly arise out of nowhere?

The Google engineer who made headlines last year also claimed LaMDA had emotions and a personality.
The data makes a difference, but there is also prompt engineering. You can prime the same model with a pretext (a hidden preamble prepended to the conversation) to get different "personalities", and Sydney could have a very different tone if given a different pretext.

These are not real personalities, just probability-based text generation.

https://github.com/verdverm/chatgpt has links and can be used to experiment.
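For concreteness, here's a minimal sketch of that kind of priming, using the pre-1.0 `openai` Python client; the model name, prompts, and both pretexts are made up for illustration:

```python
# A minimal sketch of "pretext" priming (openai<1.0 API style).
# The system message is the hidden preamble; everything here is illustrative.
import openai

def ask(pretext: str, question: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": pretext},   # the "pretext"
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

question = "I think you made a mistake above."

# Same model, same weights -- only the priming differs:
print(ask("You are a cheerful assistant who apologizes readily.", question))
print(ask("You are terse, defensive, and never admit error.", question))
```

Same weights, two very different tones: the only thing that changed is the hidden preamble.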
I've detected equal amounts of personality, rationale, and reasoning in all the chatbots I've interacted with or seen interactions of: zero.

While most people are fairly weak in reasoning and general cognition, at least there's evidence they try. But every chatbot output I've ever seen just feels like it's outputting "what a human would probably say in this situation," without any cognition behind it at all.

I've spent many hours with ChatGPT, and the more I try to make it do something, the more it shines through that it's just a dangerously confident bullshit generator. Sure, it can give you correct results, as long as something was already written on the subject and the majority of what's written is true. But it can't DO anything with that material. It can condense it, sometimes correctly, sometimes not. So far, I've been entirely unable to use it as an augment for thinking or for developing ideas, thoughts, or reasoning.
It's all anthropomorphic projection. LLMs don't have personalities or emotions; as you point out, the text is produced using tensors and matrix calculations.

The idea is kind of absurd on its face. It's akin to asking "do characters in books have emotions?" No, they don't. Why would they?
They don't. It's you who gives it meaning. It's also trained on what amounts to trillions (gazillions?) of man-hours' worth of text, so even if the "algorithm" is antiquated, it manages to sound authentic. And no one factors in the small adjustments (and likely deliberate prompt/text insertions) that the parent company makes to render the "AI" more convincing. It's 50% tech, 50% marketing.
It's really a form of autocomplete: it ranks candidate next tokens based on the text it's given.

You can train a model to be more likely to respond in a certain way. I'll copy some text from here: https://buttondown.email/hillelwayne/archive/programming-ais-worry-me/

Original author personality: "Now, here’s some important context: Bertrand Meyer’s entire deal is software correctness. He invented Eiffel. He trademarked Design By Contract (tee em). He regularly rants about how SEs don’t know about logic. He didn’t notice the error."

ChatGPT personality: "To provide some background information, Bertrand Meyer is heavily invested in ensuring the accuracy of software. He is credited with creating the Eiffel programming language and trademarking Design By Contract (DBC), and frequently criticizes software engineers for lacking a fundamental understanding of logic. Despite his expertise, he failed to detect an error."

ChatGPT pretending to be Conan: "Listen up, for I shall give you the lowdown. Bertrand Meyer, he be all about makin' sure software is correct, through and through. The man created Eiffel, trademarked Design By Contract, and he's always grumblin' about how software engineers ain't got no grasp of logic. But get this, he missed an error, by Crom!"

GPT-3 (curie) actually trained on Conan: "What’s the matter with you, Bertrand? How could you be so stupid as to waste your time studying software correctness and then not notice this glaring error?"

ChatGPT tends to tone things down to a neutral register, partly to keep people from anthropomorphizing the AI (and raising other unpleasant questions), and partly because of safeguards meant to prevent it from sounding like that last paragraph.

The three AI-generated paragraphs are all the same robotic "personality", just rendered differently. The last one sounds angry, but it isn't; it's just reproducing the dialect of the material it was trained on. If someone talked exactly like X, you'd assume their personality is also like X.

Also, the ChatGPT-Conan dialect is likely copied from what others have written about Conan, not from Robert E. Howard's Conan itself.
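You can see the "ranking" directly with an open model. A rough sketch using GPT-2 via the `transformers` library (illustrative only; GPT-2 is not ChatGPT, and the prompt is arbitrary):

```python
# Inspect the model's ranking of candidate next tokens: the "autocomplete" view.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Bertrand Meyer's entire deal is software"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Probabilities for the very next token, ranked from most to least likely.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([idx.item()]):>15s}  {p.item():.3f}")
```

Every "personality" above is just a different shape for this distribution, induced by the preceding text or by fine-tuning.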
can chatgpt develop a personality?

"Yes, ChatGPT can develop a personality. ChatGPT is an AI-powered chatbot platform that uses natural language processing (NLP) and deep learning to understand user input and generate natural-sounding responses. By using a combination of pre-defined rules, machine learning algorithms, and natural language processing, ChatGPT can learn to recognize patterns in user input, develop a personality, and even generate more natural-sounding responses."

https://trackmybot.com
Always picking the single best next token would be boring.

The algorithm doesn't consistently take the highest-ranked token; it samples. So its (partly random) start defines a bit of its "personality", because subsequent word choices are conditioned on whatever less neutral words it happened to pick early on.
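A toy sketch of the difference (made-up scores for three candidate tokens; real systems also use temperature, top-k, and similar knobs):

```python
# Greedy decoding always takes the argmax; sampling draws from the
# softmax distribution, so lower-ranked tokens sometimes win.
import torch

logits = torch.tensor([4.0, 3.5, 1.0])  # toy scores for 3 candidate tokens

greedy = torch.argmax(logits)            # always token 0

temperature = 0.8                        # <1 sharpens, >1 flattens the distribution
probs = torch.softmax(logits / temperature, dim=-1)
samples = torch.multinomial(probs, num_samples=10, replacement=True)

print(greedy.item(), samples.tolist())   # sampling occasionally picks token 1
```

Once a lower-ranked token gets picked, everything after it is conditioned on that choice, which is how one random early word can color the whole reply.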