Here is a pointer to the original Eliza paper:
<a href="https://dl.acm.org/doi/10.1145/365153.365168" rel="nofollow">https://dl.acm.org/doi/10.1145/365153.365168</a><p>Note that Weizenbaum was an AI critic: his intention was not for Eliza to pass the Turing test, but to show people that a clearly unintelligent program based on primitive pattern matching can <i>appear</i> to behave intelligently.<p>He failed: his own secretary wanted to be left alone with the software and typed in her personal problems. To this day, the work on Eliza (1963-65, paper published 1966) remains mostly misunderstood.
Many years ago I spent a lot of time on a website called Personality Forge, where you would create chat bots very similar to this and set them loose. They could then talk to other people or each other and you could review the transcripts. At one point I even entered my chat bot in The Chatterbox Challenge. It was so much fun to work on this thing at the time, but when I rediscovered it years later[0] I was mostly just disappointed in how "fake" all that effort was.<p>Now here I am talking about life, the universe, and everything with ChatGPT. It makes me both inexplicably happy/hopeful and simultaneously weirdly melancholic.<p>[0] <a href="https://liza.io/its-not-so-bad.-we-have-snail-babies-after-all/" rel="nofollow">https://liza.io/its-not-so-bad.-we-have-snail-babies-after-a...</a>
Eliza was meant, to us, to be an illustration of the problem. In good old-fashioned AI style, it illustrates the fact that you need another if statement for every new kind of construct you want to simulate. You design a simulation of each thing, like say turning a verb into a gerund, by writing a specific "gerundification" routine, another to swap the "me"s to "you"s, and so on. That isn't how people think, nor how most modern AI works. It is totally different from looking at the world, distilling patterns from it, and using those patterns as the basis for a response. To teach a modern AI new stuff, you don't have to write another if statement.<p>The two approaches have similar intentions, and at some resolution or distance they are trying to do similar things. However, they work in totally different ways, and the new dynamic generative AI strategy, which learns from input rather than just symbolically transforming it syntactically, is a totally different paradigm. I don't care whether you call it AI or not.
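To make that concrete, here is a minimal sketch of the kind of rule-based approach being described: one hand-written pattern per construct, plus a routine that swaps the "me"s to "you"s. This is not Weizenbaum's actual code or script, just an illustrative toy in Python; the patterns and responses are made up.<p><pre><code>import re
import random

# Hand-written pronoun reflection -- the "swap the mes to yous" routine.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

# One hand-written pattern per construct we want to handle,
# tried in order; the last rule is a catch-all.
RULES = [
    (r"i need (.*)",  ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.*)",    ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"because (.*)", ["Is that the real reason?"]),
    (r"(.*)",         ["Please tell me more.", "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(sentence: str) -> str:
    """Return a reply from the first rule whose pattern matches the input."""
    text = sentence.lower().strip(".!?")
    for pattern, responses in RULES:
        match = re.match(pattern, text)
        if match:
            reply = random.choice(responses)
            return reply.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

print(respond("I need a vacation"))            # e.g. "Why do you need a vacation?"
print(respond("I am worried about my job"))    # e.g. "...been worried about your job?"</code></pre><p>Every new kind of sentence you want it to handle means another entry in RULES, which is exactly the "another if statement per construct" point above.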
> Interestingly, Eliza still outperforms ChatGPT in certain Turing test variations.<p>I see we have a new entry for the 2024 Lies of Omission award.<p>The article linked to plainly shows that Eliza only beats ChatGPT-3.5 and is in the bottom half when ranked against a variety of different system prompts. An excellent ass-covering strategy that relies on the reader not checking sources.<p>An honest author would have actually quoted the article saying:<p>> GPT-4 achieved a success rate of 41 percent, second only to actual humans.<p>instead of constructing a deliberately misleading paraphrase.
> One of the first computer programs that successfully passed the Turing test was Eliza. Created in 1966 by Joseph Weizenbaum, Eliza skillfully emulated the speech patterns of a psychotherapist in its conversations.<p>Why was the Turing test still relevant after this? Didn't this indicate it was a very flawed test? Or was it just hard to come up with a better one?
I remember this being written in BASIC for the C64. Not sure if I had the real Eliza or a clone. But it was fun to look at all of the canned responses and try to get it to respond with them.
> Clearly, Eliza is not an AI<p>Sadly the author doesn't elaborate on this. I thought that nowadays 'AI' is a synonym for 'algorithm', which would fit ELIZA.<p>Is there an accepted definition of the word AI?
The article is missing the interesting bits, so here are the relevant Wikipedia pages:<p><a href="https://en.wikipedia.org/wiki/ELIZA" rel="nofollow">https://en.wikipedia.org/wiki/ELIZA</a><p><a href="https://en.wikipedia.org/wiki/ELIZA_effect" rel="nofollow">https://en.wikipedia.org/wiki/ELIZA_effect</a><p><a href="https://en.wikipedia.org/wiki/Joseph_Weizenbaum" rel="nofollow">https://en.wikipedia.org/wiki/Joseph_Weizenbaum</a>
I'm stoked to be learning about the real Eliza, after being surreptitiously exposed via the Zachtronics game [0]. The game has a fascinating dystopian/scifi take where a company provides AI therapy through human "proxies" that simply vocalize the AI response.<p>[0] <a href="https://www.zachtronics.com/eliza/" rel="nofollow">https://www.zachtronics.com/eliza/</a>
Then later on there was Azile[1], the evil twin of Eliza. Instead of being scripted to sound helpful and supportive, it uses hostile and insulting language.<p>[1] <a href="https://www.civilizr.com/communication/azile-chatbot-savage-af-ai/" rel="nofollow">https://www.civilizr.com/communication/azile-chatbot-savage-...</a>
In a very rough sense, it actually seems more similar to what LLMs are doing than I would have assumed. That might mainly be because they are both attempting to generate more text from an input.
So was Eliza from 1966 more intelligent than Dr SBAITSO from 1990?<p><a href="https://en.wikipedia.org/wiki/Dr._Sbaitso" rel="nofollow">https://en.wikipedia.org/wiki/Dr._Sbaitso</a>