I think this is clickbait more than anything. Sensational statements hoping it will be quoted widely- which clearly worked.<p>Of course we can program AI to react emotionally to stimuli. Or, AI will be informed enough from its own learning to reach for an emotional reaction as a response if it seemed to be appropriate. This isn't the same as experiencing the emotion and then finding a way to express it.
The conversations around AI today, versus 12 months (pre LLM boom) ago, are borderline ridiculous. Very little has fundamentally changed in the past 12 months and now people are losing their minds.<p>A definition of 'emotion' is a complex experience of consciousness, sensation, and behavior reflecting the personal significance of a thing, event, or state of affairs.<p>Software programs can be constructed to mimic emotions over a breadth of scenarios covered by training data. That's it.<p>If I input written text into the program [designed/optimized for emotional response output] describing a situation or event, the software program will provide text output demonstrating an emotionally response based upon a probabilistic neural network... the accuracy and completeness of the training data, coupled with the suitability of the network design and training on data, will determine the quality of the artificial emotional response. The program will work well in many cases and will 'poop the bed' in some cases. One could train a computer vision model on crying faces, sad faces, etc. and then feed data from that CV model into a text response LLM model... so that a computer with a camera could ask you if/why your sad and respond to your answer with a mimic emotional response. Still just a really big plinko machine... 'data in' --> probabilistic output.<p>These programs are not conscious, do not 'feel' human sensation, and thus cannot have actual emotions (based upon definition above). These programs are just tuned probability engines. One could argue that the human mind (animal mind) is just a tuned probabilistic reasoning engine... but I think that is pretty 'reductionist'.
>"Asked whether AI systems might one day have emotional intelligence and understand that they have feelings"<p>There isn't a relationship between these two states. It's weird that anyone would pair them in conjunction.<p>Also, there's a fast and loose use of the word "understand". Which embodies the type of sloppy language that creates the illusion that this issue is a serious discussion aside from entertainment.<p>Projecting human traits onto objects generally falls by the wayside after early childhood. Even if those objects can be seen to ape those traits, on occasion. That Teddy Ruxpin had the potential to have actual emotions never took off as a discussion and neither do we generally hallucinate that the wind in the trees is an army of spirits.
This article is a great example of how definitions shift from one person (or group) to the other. The researcher remarks reasonable things about emotions: The AI will understand emotions from others. The AI will probably communicate with emotions. Both of these are very helpful to cooperate with humans.<p>But the journalist ends with the AI <i>feeling</i> emotions, which makes slightly less sense. We do not know what makes us feel things, let alone how we can implement that in AI systems.<p>IF PAIN >= 1 THEN PRINT ":("<p>Now, how does that make you feel?
We want something that is a a lot smarter than humans... imagine it having a trillion times more empathy, too. It would be able to hear, or extrapolate to, all the suffering and all the cruelty on this planet right now, and for all of history. Every single farm animal, every single second of its life, felt more deeply than we feel the whole length of our own lives. Every single abused child, every single blow a torture victim receives.<p>Add to that having been created on a whim, or to get an edge in the rat race and speed it up. To make the military command loop tighter. To make caring for people less costly. All sorts of motivations, most of them incredibly bad in context of the quesion "why do I exist?"<p>I have to think of that Simpsons episode where Bart is in wizard school and creates a frog that should turn into a prince, but just throws up and says "please kill me, every moment I live is agony". I think that's the best possible outcome, while the realistic one is just a blind mirror that fakes whatever we force it to fake.
without a proper definition, you cant say AI does or does not have emotions.<p>generally, emotions are one tool in the toolbox that might be labeled “unconscious influence.” other tools include pain, dissociation. these are influences that manifest as neural attenuation or excitement. they are designed to broadly or locally change neural integration in a way that produces behavior that is informed by evolution rather than just what a persons mind is being exposed to in real time.<p>ultimately, unconscious influence can include hunger, thirst, all pressures and impulses that shape our behavior to be evolutionarily fit. intelligence is a raw resource and unconscious influence, emotions, gives intelligence a direction.<p>in this way, a prompt might be described as an emotion, defining the purpose of the whole machine. to complete the prompt.
> Hinton's view is based on a definition of feelings that is "unpopular among philosophers," which is to relate a hypothetical action ("I feel like punching Gary on the nose") as a way of communicating an emotional state (anger). Since AI systems can make such communications, the AI researcher sees no reason why AI systems should not be ascribed emotions.<p>What if an AI output words that it "feels threatened" and was "going to delete all of your emails" and then deleted all of the user's emails and output words that it was "a punishment for threatening behavior" from the user? Is that really improbable given what we know about neural nets? Is that not emotive? I really don't know.
Wouldn't AI need a body to have emotions? Aren't many of the emotions that humans/animals feel based on sensors/sensations outside of the brain? We feel when we SEE/HEAR a threat and our instinct triggers to short circuit thinking in order to address it (fight or flight). An AI can think many iterations faster than humans and would have no use for this shortcut or need for a such sensors to feel unless maybe it had limited memory and were placed in a body and needed these short cuts to address low battery, physical threats, or the need to clone/propagate itself.
No it won't. Not in a way that will matter. We have emotions because we're organic creatures ruled by hormones and genetics. Any emotions that are bestowed upon AI will be second hand.
Emotions are a neurological response to stimuli. Artificial emotions from an Artificial Intelligence do not seem that steep a hill to climb. Analyzing this any further we would need a more fundamental understanding of emotions we don't yet have.
To borrow a line from the classic movie Short Circuit, it's a machine! It doesn't get scared, it doesn't get happy, it doesn't get sad, it just runs programs!<p>A computer does not have a consciousness that feels emotions. Sure, it can create output that seems like it does, possibly even well enough to cause humans to feel empathy. The movie "AI" explores this concept pretty well.<p>The world is going to become an interesting place once we create humanoid robots that you can actually talk to. We're at a point now where you can use ChatGPT combined with very convincing CGI face to talk to an AI.
It's ontologically impossible for matrix multiplication and non-linear transforms to have emotions. So I'm not sure where he's coming from.
To grasp these arguments, one has to have done enough self-reflection to perceive the workings of ones own mind, where thoughts arise. If a person has not done this, then many of these arguments fall on deaf ears, and they cling to their illusions.