Today’s paper of the week from Fermat’s Library is about Eliza by J. Weizenbaum. Eliza is a very early natural language processing system that mimics a Rogerian psychologist’s tactics and can thus feel remarkably sentient within its limited context. Like many people my age, I was first introduced to Eliza via a game: a friend had an Amiga and a copy of it, and we played around trying to make it tell us things based on the information we were feeding it. It looked really amazing at the time.
Also like many others, I come across Eliza every few years and have toyed with an implementation. That is why this podcast on Eliza’s lineage is so interesting: it traces where you most likely learned to implement Eliza from. Do listen to it. It contains a very interesting observation from a person who has actually read Weizenbaum’s paper:
Its name was chosen to emphasize that it may be incrementally improved by its users, since its language abilities may be continually improved by a “teacher”.

ELIZA: A Computer Program For the Study of Natural Language Communication Between Man And Machine
So what most people overlook is that the original Eliza included a training mode, much like (in an abstract way, at least) the current chatbots that are all the rage.
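The keyword-and-template mechanism behind Eliza, including its teachable side, can be sketched in a few lines. This is a hypothetical minimal reconstruction, not Weizenbaum’s original MAD-SLIP script: the rule patterns, the reflection table, and the teach helper are illustrative inventions of mine.

```python
import re

# Illustrative sketch of ELIZA-style rules (not Weizenbaum's originals):
# each rule pairs a regex "decomposition" pattern with a "reassembly" template.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

# Pronoun reflection applied to captured fragments ("my" -> "your", etc.).
REFLECTIONS = {"my": "your", "your": "my", "i": "you", "me": "you", "am": "are"}

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence, rules=RULES):
    """Return the reassembled response for the first matching rule."""
    for pattern, template in rules:
        m = pattern.search(sentence)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return DEFAULT

def teach(rules, pattern, template):
    """Mimic the 'teacher' mode: extend the script with a new rule at runtime."""
    return [(re.compile(pattern, re.I), template)] + rules
```

With this shape, `respond("I need a holiday")` yields “Why do you need a holiday?”, and teaching a new pattern at runtime simply prepends a rule, which is roughly the incremental-improvement idea the quote above describes.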
Weizenbaum himself was disturbed by the acceptance of Eliza and the anthropomorphic effect it had on people. He wrote Computer Power and Human Reason (a book I need to at least skim through sometime, given that I’ve read McCarthy’s refutation) to point out the issues he thought were important, and he was a fierce critic of AI in his later life, to the point of being marginalized at the conferences where he appeared to preach his warnings. 99 Percent Invisible has an episode on him which you may also find interesting.
One goal for an augmented ELIZA program is thus a system which already has access to a store of information about some aspect of the real world and which, by means of conversational interaction with people, can reveal both what it knows, i.e. behave as an information retrieval system, and where its knowledge ends and needs to be augmented. Hopefully the augmentation of its knowledge will also be a direct consequence of its conversational experience. It is precisely the prospect that such a program will converse with many people and learn something from each of them which leads to the hope that it will prove an interesting and even useful conversational partner.

ELIZA: A Computer Program For the Study of Natural Language Communication Between Man And Machine
Too bad he didn’t successfully pursue this goal; no one else has. I think success would have required a better understanding of formalization than is exhibited in the book.

Defending AI Research