How a bot from the 1960s outperformed ChatGPT
A story, quote and lesson about being clear and concise
How do humans and machines communicate?
This is the question that plagued MIT Professor Joseph Weizenbaum in 1964. Computers had just begun to appear in research, enterprise, and government environments, and Weizenbaum wanted to study whether humans could communicate with machines as effectively as they do with one another.
For three years, he worked on the language processing model for his prototype, eventually calling it ELIZA after Eliza Doolittle, the working-class character in George Bernard Shaw's Pygmalion. The results were shocking. After several rounds of testing, many participants, including his own secretary, attributed human-like feelings to the computer program.
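To appreciate how little machinery was behind this effect, here is a minimal sketch in Python of the keyword-and-reflection technique ELIZA relied on. The specific patterns and replies below are illustrative stand-ins, not Weizenbaum's actual DOCTOR script, which held a couple hundred hand-written rules.

```python
import random
import re

# Pronoun reflections: swap perspective so a reply can echo the user.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine", "are": "am",
}

# Keyword rules: (pattern with one capture group, reply templates).
# Illustrative examples only, not Weizenbaum's original script.
RULES = [
    (re.compile(r"i need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"i feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"because (.*)", re.I),
     ["Is that the real reason?", "What other reasons come to mind?"]),
]

# Non-committal fallbacks: the vague, conservative answers ELIZA is known for.
FALLBACKS = ["Please go on.", "Tell me more.", "I see."]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in the captured fragment."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(text: str) -> str:
    """Return an ELIZA-style reply: match a keyword rule or fall back."""
    for pattern, templates in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(respond("I need a break from work"))
    # e.g. "Why do you need a break from work?"
```

That is essentially the whole trick: no learning, no understanding, just keyword matching, pronoun swapping, and a stock of non-committal prompts. Yet it was enough to convince people there was a mind on the other side.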
Created solely for research, ELIZA remained unchanged and unused beyond its initial purpose. With the recent rise of AI chatbots and tools, however, its story resurfaced as a possible 'precursor to ChatGPT.'
While ChatGPT is far more advanced, the product of decades of progress and vastly more complex algorithms, two UCSD researchers, Cameron Jones and Benjamin Bergen, set out to compare the two models side by side.

At the end of their study, Jones and Bergen were surprised to find that ELIZA, a nearly 60-year-old chatbot, had outperformed GPT-3.5 on a modern version of the Turing test, in which human judges try to identify which messages come from an AI and which from another person.
“What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”
Joseph Weizenbaum, MIT Professor and creator of ELIZA
However, this wasn't simply because ELIZA crafted better responses. Most respondents said its messages lacked the now-commonplace attributes of conversations with AI, such as "verbosity" and "helpfulness," and that its answers were surprisingly vague and conservative. They concluded that the messages must have come from a human, because they were "too bad" to be from an AI.
What a surprising revelation! The messages were so bad that respondents thought a human was more likely to have written them than an AI. What does this say about us? Are we slowly inching toward the so-called "AI takeover"? I don't think so.
ELIZA’s success in fooling modern test-takers reveals more about human psychology than it does about the sophistication of AI. Humans can be surprisingly hard to understand and even harder to communicate with. The fact that vague and unclear responses were mistaken for human conversation underscores how ambiguity often characterizes human interactions.
Clear communication is an art. While ambiguity may sometimes work in casual conversation, it can lead to confusion or misinterpretation in critical situations. Whether you’re speaking with another person or crafting a message for a wider audience, prioritizing clarity, context, and intent can bridge gaps, reduce misunderstandings, and create meaningful connections.
So now I ask you:
In a world striving for better communication, where do you see room for improvement in your own conversations?