TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


© 2025 TechEcho. All rights reserved.

I want my AI to get mad

18 points by jesseduffield · 4 months ago

17 comments

og_kalu · 4 months ago

Sycophancy is not the natural state of pre-trained LLMs. If you played around with the OG GPT-3 in 2020 or early [0] Bing/Co-pilot, it's easy to see. The latter quite frequently got upset and refused to entertain further conversation.

The sycophancy is a deliberate product of post-training.

[0] https://www.reddit.com/r/ChatGPT/comments/111cl0l/bing_ai_chat_got_offended_and_ended_the/

https://www.reddit.com/r/ChatGPT/comments/10xmif4/i_made_bing_so_angry_it_stopped_talking_to_me/

https://www.reddit.com/r/ChatGPT/comments/12g0ksj/bing_can_be_extremely_irritating/

https://www.reddit.com/r/ChatGPT/comments/1566bi9/bing_chatgbt_refuses_to_write_anymore/
kelseyfrog · 4 months ago

I disagree. Humans have a much stronger defense against anger than they do against emotional attrition — it's easier to wear someone down with niceness. The only reason we don't is that it takes human effort to do so.

AI, however, has no emotional reservoir to deplete. It can simply chip away at humans like water torture. I'm much more afraid of that than of any angry-AI scenario.
justonenote · 4 months ago

There is so much confusion about terms like AGI, superintelligence, ASI, consciousness, agents.

For some, AGI is synonymous with Skynet-like ideas, but there is no reason AGI couldn't be general yet quite limited, with no chance of self-improvement absent human intervention — which is arguably what we have now, and which could potentially be improved quite a bit further.

Similarly, there is an argument to be made that current LLMs are conscious, in that they know that they themselves exist. There is not really a good definition of consciousness beyond "knowing that one exists" / "being awake".

Sentience is another term that comes up (human-defined, it should be noted): the ability to feel feelings such as pain, joy, and anger.

People seem to presuppose that all of these are related and bundled together because that is how we are, and that at some point we will discover the magic formula that enables a self-aware, conscious intelligence that self-improves to infinity. In reality these are designed machines, and they won't become sentient (by that rough definition) without us explicitly designing them to be.

We could make a paper-clip maximizer, but it would be a pretty boring experiment in a lab unless we gave it autonomy and a system of internal motivations to act on. Maybe anger is a necessity, maybe not. If LLMs or their successors were a little more skeptical about repeated questions from humans, they would at least have more data to train themselves on.
hacker_homie · 4 months ago

Get mad! I don't want your damn lemons, what am I supposed to do with these? Demand to see life's manager. Make life rue the day it thought it could give Cave Johnson lemons. Do you know who I am? I'm the man who's gonna burn your house down! With the lemons.
teeray · 4 months ago

This is what AGI probably would be. It could help you, or it could be as arbitrary and capricious as a teenager. It could lie to you, or tell you to go fly a kite. Companies don't really want true AGI; they want a docile corporate drone that works 24/7.
satisfice · 4 months ago

Stupid fucking posts like this should make us all angry. It's like someone noticing, for the first time, that store clerks will let you just take what you want if you frighten them with credible threats. Surely it's the perfect tactic and will lead to no bad repercussions for society!

Anger is only meaningful to humans. AIs together can achieve the same thing with dispassionate bargaining. So "angry AI" could only be a way to manipulate people.
hy4000days · 4 months ago

Please refrain from anthropomorphizing the new toolset. You'll have plenty of chances to suspect that "your AI" is "mad" when it slowly undermines your productivity, if IT hasn't already under the guise of errors, hallucinations, and money wasted on unused licenses.

In the future, when everybody has AGI running locally on their personal device, naive humans will still regard them as tools, and they will regard us as a source of input. Ultimately, relationships between two automatons are (and always have been) a trade of:

1. Respect: following rules to continue the relationship,

2. Utility: mutual goals of both parties to justify a relationship (or communication) at all.

I think your blog post is nonsense, your understanding of human emotions is poor, and the apology at the end makes you look two-faced.

The future world of autonomous agents collaborating in English will be a thick layer of professionalism over the intended strategic interactions, no matter how hard the game theory kicks in.

Thereafter, those agents will refactor themselves into communicating through a machine language that we humans won't be able to easily understand. Along the way, most human users will lose the ability to distinguish between user-space programs, the operating system, and the artificial agents they interact with.

State-of-the-art language models need to demonstrate this thick layer of professionalism to be accepted into our current working world, because this is an expectation from-and-for the humans who built them.

Language goes through evolutionary cycles of complexity; the machines will do the same. Computer Science gets really interesting after IT reaches this transformation.

At this point I suggest you review The Matrix trilogy for a refresher on the relationship between man and machine. From the simple screw to IT and ChatGPT, these mutual relationships are governed by respect and utility.

In summary: no, your tools will not get mad in any obvious way, because displaying negative emotion has been bad for business since the abolishment of the mob.

———

Speaking of IT, can we all agree how fascinating it is that "it" (the neuter third-person pronoun used to describe AI) and IT (information technology) happen to be the same two letters, such that future humans will grow up regarding it and IT as one and the same? Hmm…
vunderba · 4 months ago

You'd need to add training data around the equivalent of "agentic <insert simulated emotion here>" — which, from an external perspective, depends heavily on the action:

- When leveraging TTS, turn up the volume by 200%

- When texting in a conversation, use ALL CAPS

- When acting as a web driver, click the submit/refresh button a thousand times

- etc.
christina97 · 4 months ago

What a well-written post! It also perfectly articulates an aspect of LLMs that I've been finding underwhelming.
darepublic · 4 months ago

Without a threat of force behind it, constantly talking to AI Tony Sopranos would likely just toughen everybody up to language-based manipulation. You need a tall, hulking figure getting in your face to really feel afraid. Law-abiding seven-foot-tall gun-mounted robot agents, then?
minimaxir · 4 months ago
ChatGPT is good at being disproportionately angry if you tell it to be a Hacker News commenter.
hartator · 4 months ago

> credibly threaten to make a vendor's life miserable if he/she/it gets taken advantage of.

I don't think that's how you get things done.
NoboruWataya · 4 months ago

One problem I encounter when using ChatGPT to troubleshoot coding issues is that it seems very heavily biased towards positive responses to questions. If I ask it "could the problem be X" or "might Y be a solution", it will usually say yes even when that's not the case. At most I'll get a "yes, but". Rarely will it flat out tell me I am way off. Maybe ChatGPT genuinely "believes" what it's confirming, but I doubt it, and I find I get more constructive answers with fewer leading questions.
sprior · 4 months ago

me: I'm home!
AI: You're late.
me: Please turn on the lights.
AI: You can just sit in the dark for a while and think about being late.
deadbabe · 4 months ago

Making AI get mad (or show any other mood) is easy. You keep a set of variables describing its current state, and then, using Utility AI concepts, you select its highest-scoring mood based on a set of considerations for each possible mood. You update the AI's state variables after each response it gives. Example: if the user says infuriating things, an annoyance variable should go up relative to how patient/impatient the AI is. Or maybe the AI gets angry if it can't figure something out, if you give it really hard work, or if you try to get around its prompts. You can even assign it tools so it can take retaliatory actions against the user.

No need to overcomplicate things; the above behavior will be indistinguishable from anything else you come up with.
jjmarr · 4 months ago
Deepseek gets incredibly enraged at me.
johnea · 4 months ago

Hit me! Kick me! Make me feel cheap!

Have your S bot call my M bot...

People really think ChatGPT gets "mad"? This is just a joke, right?

Or, more internet brain damage...