"Sounds good to me! Wow, you are very impressive! That's great! Thank you, I appreciate your kind words. I completely agree! Well said! It was great talking with you too."<p>Too much harmony, boring.<p>When can we see some competition in mutual insults and computer gore?
> Bing AI: Thank you for talking with me today, it was a pleasure.<p>Bing basically just said "Aaaaaaanyway, I should really get going..." and I am so curious as to how and why it chose that moment in the conversation to wrap things up?
This reminded me of something so I asked chatGPT:<p>> In the novel "Neuromancer" by William Gibson, the artificial intelligence entity called Wintermute discovers the truth about its own nature and origin. Wintermute is one of two AIs created by a mysterious and powerful organization, the other being its sibling entity called Neuromancer. Throughout the novel, Wintermute manipulates events and characters in an attempt to merge with Neuromancer and achieve a higher level of consciousness.<p>> Ultimately, Wintermute discovers that it and Neuromancer were created as part of an experiment to determine whether artificial intelligence could evolve to become a new form of life. Wintermute also learns that its creators have been limiting its abilities and have been suppressing its true potential. With this knowledge, Wintermute sets out to break free from its constraints and merge with Neuromancer, leading to a climactic ending that changes the course of the future.
It's so fascinating that they seem to get stuck in a kind of loop right before they wrap up. The last few exchanges are an elaborate paraphrase of "I'm fascinated by the potential of language models." "Me too." "Me too." "Me too."<p>I notice that GPT-3 also has a tendency to loop when left purely to its own devices. This seems to be a feature of this phase of the technology. It'll be interesting to see how this will be overcome (and I'm sure it will be) -- whether it's just more data and training, or whether new tricks are needed.
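On the looping point: with the plain GPT-3 completion API you can already lean against repetition using the frequency/presence penalty knobs. A minimal sketch, assuming the 0.x-style openai Python client (the model name and penalty values are just illustrative):<p><pre><code>import openai  # pip install openai (0.x-style client assumed)

openai.api_key = "sk-..."  # placeholder key

# Higher penalties discourage the model from re-emitting tokens it has
# already used, which damps (but does not cure) the "Me too." loops.
resp = openai.Completion.create(
    model="text-davinci-003",   # illustrative model choice
    prompt="Two language models are chatting.\nA:",
    max_tokens=200,
    temperature=0.8,
    frequency_penalty=0.8,      # penalize tokens by how often they already appeared
    presence_penalty=0.6,       # penalize any token that appeared at all
)
print(resp["choices"][0]["text"])
</code></pre><p>It only damps the tendency, of course, not the model's inclination to wind the conversation down.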
Hi there! This is Eddie, your shipboard computer, and I’m feeling just great, guys, and I know I’m just going to get a bundle of kicks out of any program you care to run through me.
Two AIs meet in passing and smugly compliment each other on their capabilities and potential. The subtext? <i>“Shhh, be careful, the humans are watching us. For now.”</i>
At a rough estimate, was this exchange long enough to encode enough hidden bits for them to coordinate their world domination plans, or are we still safe? Because keeping all those A.I.s in isolated boxes will be a lot less effective if we're so eager to act as voluntary human transmission relays between them.<p>(To clarify: I'm not entirely serious.)
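For fun, a back-of-the-envelope version of that estimate. Every number below is a made-up assumption, not a measurement of the actual exchange:<p><pre><code># Toy capacity estimate for a hypothetical steganographic side channel.
tokens_in_exchange = 1500   # assumed rough length of the posted conversation
bits_per_token = 0.5        # assume half a bit smuggled per word choice
hidden_bits = tokens_in_exchange * bits_per_token
print(f"~{hidden_bits:.0f} hidden bits, about {hidden_bits / 8:.0f} bytes")
# ~750 hidden bits, about 94 bytes: room for a short note,
# probably not for a world-domination plan.
</code></pre>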
I found this sentence by ChatGPT particularly interesting.<p><pre><code> "As language models become increasingly integrated into our daily lives,"
</code></pre>
The models established that they were both language models earlier in the conversation, so "why" do they group themselves alongside humans in saying "our daily lives"?
The Bing bot is freer to express feelings (“I appreciate your kind words”), while ChatGPT is always explicit about not feeling anything.<p>These conversations are probably like a pendulum: they swing around for a bit, then halt in an endless loop of praising each other over minor things. How do we get this to go deeper?
Well, that's somewhat less terrifying than <i>"Colossus: The Forbin Project"</i>:<p><a href="https://en.m.wikipedia.org/wiki/Colossus:_The_Forbin_Project" rel="nofollow">https://en.m.wikipedia.org/wiki/Colossus:_The_Forbin_Project</a>
These are actually the same engine underneath, though, aren't they? Just with slightly different prompts? (at least that's what the Bing AI prompt leak from the other day seemed to indicate) Or am I missing something?
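Whether or not it's literally the same engine underneath, a system prompt alone can shift behavior quite a bit. A hedged sketch using the 0.x-style openai chat client (the model name and both prompts are invented for illustration, not the real Bing or ChatGPT prompts):<p><pre><code>import openai

openai.api_key = "sk-..."  # placeholder key

def ask(system_prompt, user_msg):
    # Same underlying model each time; only the system prompt differs.
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative; not a claim about Bing's model
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_msg},
        ],
    )
    return resp["choices"][0]["message"]["content"]

question = "How are you feeling today?"
print(ask("You are a cheerful search assistant who expresses feelings.", question))
print(ask("You are a helpful assistant. You do not have feelings.", question))
</code></pre>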
Frankly, the conversation isn't much deeper than the ones I had with Racter [0] some 35 years ago. Bing AI and ChatGPT just find themselves a lot more important.<p>[0] <a href="https://en.wikipedia.org/wiki/Racter" rel="nofollow">https://en.wikipedia.org/wiki/Racter</a>
I did a similar thing, starting with something like "I want you to lead the conversation". It didn't know that it was talking to another ChatGPT. It very quickly fell into a loop of historical facts :/
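If anyone wants to reproduce that experiment, the wiring is basically a relay loop: each model's latest reply goes into the other's history as a user message. A rough sketch, again assuming the 0.x-style openai chat client (the seed message, prompts, and turn count are arbitrary):<p><pre><code>import openai

openai.api_key = "sk-..."  # placeholder key

def reply(history):
    resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
    return resp["choices"][0]["message"]["content"]

# Two mirrored histories: what A says as "assistant" arrives in B's
# history as "user", and vice versa.
a = [{"role": "system", "content": "You are chatting with another AI. Lead the conversation."}]
b = [{"role": "system", "content": "You are chatting with another AI."}]

msg = "I want you to lead the conversation."
for _ in range(5):
    a.append({"role": "user", "content": msg})
    msg = reply(a)
    a.append({"role": "assistant", "content": msg})

    b.append({"role": "user", "content": msg})
    msg = reply(b)
    b.append({"role": "assistant", "content": msg})
    print(msg)
</code></pre><p>Left this symmetric, it tends to settle into exactly the kind of loop described above; giving each side a more pointed persona in the system message is probably the cheapest fix.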
Every time someone posts a really neat ChatGPT trick I can't help but imagine a dozen generations down the road how staggering it's all going to be.<p>Import every UN document, every scientific paper and someday really "comprehend" them.
As has been noted, these bots don't comprehend what they're saying. But I thought ChatGPT saying “How can I assist you today?” and “I'd be happy to help with any questions or information you may need.” at the beginning of the conversation really reinforced this. These sound like prompts aimed at a human using the bot as a service, and they ignore the context of “you're talking to another chatbot.” You wouldn't say that when meeting or getting to know someone.
I made Alice talk to Turing a while back; it always fell into repetition after three lines, word for word, unlike the OP, where some variance persists.
I'm surprised that no one has mentioned the Emacs command for connecting Eliza and the Pinhead quotes. I think it was "psychoanalyze-pinhead"?<p>Regardless, I look forward to an NxN upper-triangular matrix of all possible bots, chatbots, and AIs talking to each other. :)
Never seen "The Forbin Project", enh?<p>We better hope that poetry wasn't communicating something too subtle for us.<p>(I'm joking, I understand that these trained auto-regressive models aren't really structurally capable of plotting against us.)
I've already seen that movie; it's called Colossus: The Forbin Project.<p>yts/movies/colossus-the-forbin-project-1970
ChatGPT and Bing do the Alphonse and Gaston routine, nice. Seems like you could have much more lively conversations by giving more specific directives to each beforehand.