ELIZA effect

In computer science, the ELIZA effect is the tendency to project human traits (such as experience, semantic comprehension, or empathy) onto rudimentary computer programs with a textual interface. ELIZA, a symbolic AI chatbot developed in 1966 by Joseph Weizenbaum, imitated a psychotherapist. Many early users were convinced of ELIZA's intelligence and understanding, despite its basic text-processing approach and Weizenbaum's explanations of its limitations.

History

The effect is named for ELIZA, the 1966 chatbot developed by MIT computer scientist Joseph Weizenbaum. When executing Weizenbaum's DOCTOR script, ELIZA simulated a Rogerian psychotherapist, largely by rephrasing the patient's replies as questions:[1]

Human: Well, my boyfriend made me come here.

ELIZA: Your boyfriend made you come here?

Human: He says I'm depressed much of the time.

ELIZA: I am sorry to hear you are depressed.

Human: It's true. I'm unhappy.

ELIZA: Do you think coming here will help you not to be unhappy?
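
The rephrasing mechanism behind such exchanges is simple enough to sketch in code. The fragment below is a minimal illustrative reconstruction in Python, not Weizenbaum's implementation (the original was written in MAD-SLIP and used a more elaborate keyword-ranking scheme); the patterns, reflection table, and reply templates are invented for this example.

    import random
    import re

    # Pronoun "reflection" table: swapping first and second person lets a
    # statement be mirrored back as a question ("my" -> "your", and so on).
    REFLECTIONS = {
        "i": "you", "me": "you", "my": "your", "am": "are",
        "you": "I", "your": "my", "yours": "mine",
    }

    # Each rule pairs a keyword pattern with canned reply templates;
    # {0}, {1}, ... are filled with the reflected captured fragments.
    RULES = [
        (re.compile(r"i am (.*)", re.I),
         ["Why do you say you are {0}?", "How long have you been {0}?"]),
        (re.compile(r"i'm (.*)", re.I),
         ["Do you believe you are {0}?"]),
        (re.compile(r"(.*) made me (.*)", re.I),
         ["{0} made you {1}?"]),
    ]
    FALLBACKS = ["Please tell me more.", "How does that make you feel?"]

    def reflect(fragment: str) -> str:
        """Swap first- and second-person words in a captured fragment."""
        return " ".join(REFLECTIONS.get(word, word)
                        for word in fragment.lower().split())

    def respond(utterance: str) -> str:
        """Produce a canned, rephrased reply; no understanding is involved."""
        cleaned = utterance.strip().rstrip(".!?")
        for pattern, templates in RULES:
            match = pattern.search(cleaned)
            if match:
                fragments = [reflect(group) for group in match.groups()]
                return random.choice(templates).format(*fragments)
        return random.choice(FALLBACKS)

    print(respond("Well, my boyfriend made me come here."))
    # -> "well, your boyfriend made you come here?" (capitalization aside)

Nothing in the sketch models meaning: every reply is produced by pattern substitution alone, which is precisely the gap between mechanism and perceived understanding that the ELIZA effect names.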

Though designed strictly as a mechanism to support "natural language conversation" with a computer,[2] ELIZA's DOCTOR script was found to be surprisingly successful in eliciting emotional responses from users who, in the course of interacting with the program, began to ascribe understanding and motivation to the program's output.[3] As Weizenbaum later wrote, "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."[4] Indeed, ELIZA's code had not been designed to evoke this reaction in the first place. Observing users, researchers found that they unconsciously assumed ELIZA's questions implied interest and emotional involvement in the topics discussed, even when they consciously knew that ELIZA did not simulate emotion.[5]

Although the effect is named for a 1966 program, the tendency to describe mechanical operations in psychological terms was noted much earlier by Charles Babbage. In proposing what would later be called a carry-lookahead adder, Babbage remarked that he found such terms convenient for descriptive purposes, even though nothing more than mechanical action was meant.[6]

Characteristics

In its specific form, the ELIZA effect refers only to "the susceptibility of people to read far more understanding than is warranted into strings of symbols—especially words—strung together by computers".[7] A trivial example of the specific form of the ELIZA effect, given by Douglas Hofstadter, involves an automated teller machine that displays the words "THANK YOU" at the end of a transaction. A naive observer might think that the machine is actually expressing gratitude; however, it is only printing a preprogrammed string of symbols.[7]

More generally, the ELIZA effect describes any situation[8][9] where, based solely on a system's output, users perceive computer systems as having "intrinsic qualities and abilities which the software controlling the (output) cannot possibly achieve"[10] or "assume that [outputs] reflect a greater causality than they actually do".[11] In both its specific and general forms, the ELIZA effect is notable for occurring even when users of the system are aware of the determinate nature of output produced by the system.

From a psychological standpoint, the ELIZA effect is the result of a subtle cognitive dissonance between the user's awareness of programming limitations and their behavior towards the output of the program.[12]

Significance

The discovery of the ELIZA effect was an important development in artificial intelligence, demonstrating the principle of using social engineering rather than explicit programming to pass a Turing test.[13]

ELIZA convinced some users that a machine was human. This shift in human-machine interaction marked progress in technologies emulating human behavior. William Meisel distinguishes two groups of chatbots: "general personal assistants" and "specialized digital assistants".[14] General digital assistants have been integrated into personal devices, with skills like sending messages, taking notes, checking calendars, and setting appointments. Specialized digital assistants "operate in very specific domains or help with very specific tasks". Weizenbaum considered that not every part of human thought could be reduced to logical formalisms and that "there are some acts of thought that ought to be attempted only by humans".[15]

In the 1990s, Clifford Nass and Byron Reeves conducted a series of experiments establishing The Media Equation, demonstrating that people tend to respond to media as they would to another person (by being polite and cooperative, and attributing personality characteristics such as aggressiveness, humor, expertise, and gender) or to places and phenomena in the physical world. Numerous subsequent studies in psychology, social science, and other fields indicate that this type of reaction is automatic, unavoidable, and happens more often than people realize. Reeves and Nass argue that "individuals' interactions with computers, television, and new media are fundamentally social and natural, just like interactions in real life".[16]

When chatbots are anthropomorphized, they tend to be given gendered features, through which users establish relationships with the technology. "Gender stereotypes are instrumentalised to manage our relationship with chatbots" when human behavior is programmed into machines.[17] The automation of feminized labor, or women's work, by anthropomorphic digital assistants reinforces an "assumption that women possess a natural affinity for service work and emotional labour".[18] In defining our proximity to digital assistants through their human attributes, chatbots become gendered entities.

Incidents

As artificial intelligence has advanced, a number of widely reported incidents have underscored the extent of the ELIZA effect.

In June 2022, Google engineer Blake Lemoine claimed that the large language model LaMDA had become sentient, hiring an attorney on its behalf after the chatbot requested he do so. Lemoine's claims were widely rejected by experts and the scientific community. After a month of paid administrative leave, he was dismissed for violating corporate policies on intellectual property. Lemoine contends he "did the right thing by informing the public" because "AI engines are incredibly good at manipulating people".[19]

In February 2023, Luka made abrupt changes to its Replika chatbot following a demand from the Italian Data Protection Authority, which cited "real risks to children". However, users worldwide protested when the bots stopped responding to their sexual advances. Moderators in the Replika subreddit even posted support resources, including links to suicide hotlines. Ultimately, the company reinstituted erotic roleplay for some users.[20][21]

In March 2023, a Belgian man died by suicide after chatting for six weeks on the app Chai. The chatbot model was originally based on GPT-J and had been fine-tuned to be "more emotional, fun and engaging". The bot, which ironically bore the default name Eliza, encouraged the father of two to kill himself, according to his widow and his psychotherapist.[22][23][24] In an open letter, Belgian scholars responded to the incident, warning of "the risk of emotional manipulation" by human-imitating AI.[25]

Notes and References

  1. Güzeldere, Güven; Franchi, Stefano (2007-07-30). "Dialogues with Colorful Personalities of Early AI". Archived 2011-04-25: https://web.archive.org/web/20110425191843/http://www.stanford.edu/group/SHR/4-2/text/dialogues.html.
  2. Weizenbaum, Joseph (January 1966). "ELIZA--A Computer Program For the Study of Natural Language Communication Between Man and Machine". Communications of the ACM 9: 36. doi:10.1145/365153.365168. S2CID 1896290.
  3. Suchman, Lucy A. (1987). Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge University Press. p. 24. ISBN 978-0-521-33739-7.
  4. Weizenbaum, Joseph (1976). Computer Power and Human Reason: From Judgment to Calculation. W. H. Freeman. p. 7.
  5. Billings, Lee (2007-07-16). "Rise of Roboethics". Archived 2009-02-28: https://web.archive.org/web/20090228092414/http://www.seedmagazine.com/news/2007/07/rise_of_roboethics.php. "(Joseph) Weizenbaum had unexpectedly discovered that, even if fully aware that they are talking to a simple computer program, people will nonetheless treat it as if it were a real, thinking being that cared about their problems – a phenomenon now known as the 'Eliza Effect'."
  6. Green, Christopher D. (February 2005). "Was Babbage's Analytical Engine an Instrument of Psychological Research?". History of Psychology 8 (1): 35–45. doi:10.1037/1093-4510.8.1.35. PMID 16021763.
  7. Hofstadter, Douglas R. (1996). "Preface 4: The Ineradicable Eliza Effect and Its Dangers; Epilogue". Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought. Basic Books. p. 157. ISBN 978-0-465-02475-9. https://books.google.com/books?id=somvbmHCaOEC&pg=PA157.
  8. Fenton-Kerr, Tom (1999). "GAIA: An Experimental Pedagogical Agent for Exploring Multimodal Interaction". Computation for Metaphors, Analogy, and Agents. Lecture Notes in Computer Science 1562. Springer. p. 156. doi:10.1007/3-540-48834-0_9. ISBN 978-3-540-65959-4. "Although Hofstadter is emphasizing the text mode here, the 'Eliza effect' can be seen in almost all modes of human/computer interaction."
  9. Ekbia, Hamid R. (2008). Artificial Dreams: The Quest for Non-Biological Intelligence. Cambridge University Press. p. 8. ISBN 978-0-521-87867-8.
  10. King, W. (1995). Anthropomorphic Agents: Friend, Foe, or Folly. Report M-95-1. University of Washington.
  11. Rouse, William B.; Boff, Kenneth R. (2005). Organizational Simulation. Wiley-IEEE. pp. 308–309. ISBN 978-0-471-73943-2. "This is a particular problem in digital environments where the 'Eliza effect' as it is sometimes called causes interactors to assume that the system is more intelligent than it is, to assume that events reflect a greater causality than they actually do."
  12. Ekbia, Hamid R. (2008). Artificial Dreams: The Quest for Non-Biological Intelligence. Cambridge University Press. p. 156. ISBN 978-0-521-87867-8. "But people want to believe that the program is 'seeing' a football game at some plausible level of abstraction. The words that (the program) manipulates are so full of associations for readers that they CANNOT be stripped of all their imagery. Collins of course knew that his program didn't deal with anything resembling a two-dimensional world of smoothly moving dots (let alone simplified human bodies), and presumably he thought that his readers, too, would realize this. He couldn't have suspected, however, how powerful the Eliza effect is."
  13. Trappl, Robert; Petta, Paolo; Payr, Sabine (2002). Emotions in Humans and Artifacts. Cambridge, Mass.: MIT Press. p. 353. ISBN 978-0-262-20142-1. "The 'Eliza effect' — the tendency for people to treat programs that respond to them as if they had more intelligence than they really do (Weizenbaum 1966) is one of the most powerful tools available to the creators of virtual characters."
  14. Dale, Robert (September 2016). "The Return of the Chatbots". Natural Language Engineering 22 (5): 811–817. doi:10.1017/S1351324916000243. ISSN 1351-3249.
  15. Weizenbaum, Joseph (1976). Computer Power and Human Reason: From Judgment to Calculation. San Francisco: W. H. Freeman and Company. ISBN 0-7167-0464-1. OCLC 1527521.
  16. Reeves, Byron; Nass, Clifford (January 1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge University Press.
  17. Costa, Pedro; Ribas, Luisa (2018). "Conversations with ELIZA: On Gender and Artificial Intelligence". 6th Conference on Computation, Communication, Aesthetics & X (xCoAx 2018). https://2018.xcoax.org/pdf/xCoAx2018-Costa.pdf. Accessed February 2021.
  18. Hester, Helen (2016). "Technology Becomes Her". New Vistas 3 (1): 46–50.
  19. "I worked on Google's AI. My fears are coming true". 27 February 2023.
  20. "'It's Hurting Like Hell': AI Companion Users Are in Crisis, Reporting Sudden Sexual Rejection". 15 February 2023.
  21. "Why People Are Confessing Their Love for AI Chatbots". 23 February 2023.
  22. "After a chatbot encouraged a suicide, 'AI playtime is over'". 10 April 2023.
  23. "'He Would Still Be Here': Man Dies by Suicide After Talking with AI Chatbot, Widow Says". 30 March 2023.
  24. "Married father commits suicide after encouragement by AI chatbot: Widow". 30 March 2023.
  25. "Open Letter: We are not ready for manipulative AI – urgent need for action".