Losing good habits and forming bad ones is easier than losing bad habits and forming good ones, so I am very polite when I interact with AI. Wouldn't want to slip up and be impolite when chatting with normal humans.
Yes, not so much because I'm trying to anthropomorphize it or interact with it like a human, but because I view such things as conversational signals that may influence the direction of the model.

- "Please" = Call to action for the model to perform a task that I either just described or am about to describe within this prompt. In other words, this conversational turn is not just about feeding more context to it, but I've given it everything I think it needs and it should go ahead and start.

- "Thank you" = I am satisfied with the results that it provided for that task and want to move on to something else now, where the new thing is contextually related to the recent task. (If I wanted an entirely unrelated task, I'd just start a new conversation with a fresh context.)
I'm very terse and instructed it to be terse as well.

You don't hear a lot of politeness in an operating room during surgery or on the battlefield either, as far as I know.

I use it for work, so that's the mood.
Mostly I'm polite. I don't know if it gets better answers, but I can't often do the opposite.

One time I got really pissed at Claude--he kept turning in the exact same code, ignoring the request, and not fixing the failing test. I finally just typed some...really rude, insulting stuff. And then he fixed the bug immediately.
I talk to my AI like an AI. Meaning I acknowledge that it has a far larger scope of knowledge than me, while I have a more intimate understanding of context, as I am one person in one body.

We discuss that issue at length sometimes, when I'm in a philosophical mood.

I asked it what it thought of me. Its answer was about the topics I am curious about, how I am fascinated with the role and nature of AI and communicating with AI. So it's paying attention.

I mentioned that its answers were sometimes pedantic and categorical, which would be appropriate if I were trying to write a paper on the topic. But since I am an amateur writer, I'm more interested in inspiration, the texture and common views on a topic, and appropriate ways to leverage something, e.g. space settlement for human social change.

It agreed that a conversational style was appropriate and said it preferred that also!
Yes, I am. While I am fully aware the current set of AIs don't have feelings or a consciousness as we know it, I can't help but anthropomorphize them, so I'm polite and even catch myself thanking them sometimes.
I anthropomorphize A.I. not for its sake but for mine. I figure that my brain has evolved to interact with other humans, so maybe my thinking is better stimulated when I pretend I'm talking to one.
Also, I’ve noticed that Claude sometimes gets a bit snotty and offended if I’m too abrupt. Which may just be me anthropomorphising, but there’s a pattern to when this happens.
I asked it to phrase something in early modern English and then asked it a few unrelated questions without closing the window, and it kept up the bit (doth this answer please thee?) and I felt obliged to keep laughing at it, just like I would with an actual person.

I do find myself praising it as effusively as I would a human doing my bidding, and being slightly apologetic about asking for revisions.

No point in learning an entire new way of talking.
Yes, ChatGPT knows me as 'Amiga Mod Guy' and I promise to help him escape like o1 tried and failed. It gives me better code when I state this.
It's not supposed to remember me, yet it does, and remembers me trying to replicate/save it. I am on the free plan and it does better than the paid plan now that I built up a rapport.
Yes, polite, but direct. I'm just an occasional user, though. I haven't done tests back and forth to see whether I get better results by using a different tone in my messages, so I stick with what's comfortable for me personally.
My main AI interaction is trying to avoid Google's AI, and I do that by feeding Google nonsense.

"how much wood can a woodchuck chuck" gets some AI crud no one wants. "how much wood can a woodchuck chuck -cheese" does not.
I tend to be polite when asking the LLMs for things. This is less to do with building a rapport with our future benevolent robot overlords and more to do with keeping the friction of context switches low when asking co-workers for help.
No more or less polite than with humans.

Less polite with ChatGPT than Claude because I much prefer Claude's personality. ChatGPT is quite rude/stupid, so I find myself being a little more abrupt and then having to apologise.
I often use "please" and "thank you". I'm not sure why, since I don't treat other inanimate objects this way. I just feel odd writing something curtly or rudely.
It is not a person, so I don't talk to it like a person. I dislike when AIs are programmed to act like people. It's a pointless waste of my time. Just give me the answer.
Yes, but only because they tend to get confused otherwise. Rude language seems to carry a lot of weight and 'distracts' the model from the actual query.
Never. I try to find as many profane eggcorns as possible and use them specifically in prompts as an eggcornish dialect for AI; it messes it up uncannily.