In other rooms, in other places, David was wondering these things, too. Ada knew he was; and this knowledge was part of what bound the two of them together irreversibly.
When she was small, the Steiner Lab began developing a chatbot program it called ELIXIR: an homage to ELIZA and a reference to the idea David had that such a program would seem to the casual user like a form of magic. Like ELIZA, it aimed to simulate human conversation, and early versions of it borrowed ELIZA’s logic tree and its pronoun-conversion algorithms. (To the question “What should I do with my life?” ELIZA might respond, “Why do you want me to tell you what you should do with your life?”) Unlike ELIZA, it was not meant to mimic a Rogerian psychologist, but to produce natural-sounding human conversation untethered to a specific setting or circumstance. It was not preprogrammed with any canned responses, the way ELIZA was. This was David’s intent: he wanted ELIXIR to acquire language the way that a human does, by being born into it, “hearing” language before it could parse any meaning from it. Therefore, chatting with it in its early years yielded no meaningful conversation: only a sort of garbled, nonsensical patter, the ramblings of a madman.
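At its core, the pronoun conversion that ELIXIR borrowed from ELIZA might be sketched, in a modern language, like this; the word list and the canned template are only illustrative, not the lab's actual code:

# A word-for-word swap of first- and second-person pronouns; the real program
# also relied on templates and reordering rules.
PRONOUN_SWAPS = {
    "i": "you", "me": "you", "my": "your", "mine": "yours",
    "you": "I", "your": "my", "yours": "mine",
}

def reflect(statement):
    """Swap pronouns in a user statement, dropping end punctuation."""
    words = statement.strip().rstrip("?.!").split()
    return " ".join(PRONOUN_SWAPS.get(w.lower(), w) for w in words)

def respond(statement):
    """Wrap the reflected statement in a canned questioning template."""
    return "Why do you want me to tell you " + reflect(statement).lower() + "?"

print(respond("What should I do with my life?"))
# prints: Why do you want me to tell you what should you do with your life?
# (A fuller program would also reorder "should you" to "you should.")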
It had an advantage over ELIZA, however; the earliest version of ELIXIR was created in 1978, twelve years after Weizenbaum’s paper was published, and therefore there had already been advances in technology that would eventually allow ELIXIR to mimic human conversation more accurately. ELIZA was self-teaching insofar as it could retain earlier questions and statements from any given conversation and retrieve them later in that conversation, but each time a new conversation was launched, it returned to its infancy, drawing only on the stock phrases and formulas Weizenbaum programmed it to know. It was not designed to store the information it learned from one conversation and produce it in another.
ELIXIR was. For one thing, by that time the Steiner Lab’s capacity for memory storage was quite large, and so each conversation conducted with ELIXIR could be stored permanently on the central server, for later use by the program. Unlike ELIZA, ELIXIR was designed to be continuously self-teaching, to attain more intelligence with each conversation it conducted. If one human asked it a question—How are you today? or What’s the weather like?—then the next human to speak to it might be asked those same questions by ELIXIR itself.
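That difference might be sketched as nothing more than an append-only log that survived from one conversation to the next; the file name and format here are invented for illustration:

import random

LOG = "elixir_memory.txt"   # a permanent store on the lab's central server

def remember(user_line):
    """Keep every question a user asks, permanently, across conversations."""
    if user_line.strip().endswith("?"):
        with open(LOG, "a") as f:
            f.write(user_line.strip() + "\n")

def ask_something_back():
    """Reuse a question heard from some earlier user; ELIZA could not do this."""
    try:
        with open(LOG) as f:
            questions = [line.strip() for line in f if line.strip()]
    except FileNotFoundError:
        questions = []
    return random.choice(questions) if questions else "How are you today?"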
To hasten ELIXIR’s learning, the lab created a reinforcement mechanism: correct would affirm for the program that it had understood the parameters of the situation and produced an appropriate response; wrong would do the opposite. These were the only direct interventions the lab made.
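The mechanism itself could be very small; a sketch, assuming each response carries a running score for the situation that prompted it, with correct and wrong nudging that score up or down (the data structure and update rule are illustrative only):

from collections import defaultdict

scores = defaultdict(float)   # (situation, response) -> running weight

def reinforce(situation, response, feedback):
    """'correct' raises a response's weight for a situation; 'wrong' lowers it."""
    if feedback == "correct":
        scores[(situation, response)] += 1.0
    elif feedback == "wrong":
        scores[(situation, response)] -= 1.0

def best_response(situation, candidates):
    """Prefer whichever candidate has been affirmed most often in this situation."""
    return max(candidates, key=lambda r: scores[(situation, r)])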
Its indirect learning drew on an ever-increasing pool of language acquired from its users. It retrieved and used its vocabulary randomly at first, so that the excerpts of transcripts that members of the lab printed out and posted around the office looked something like found-language poems and sometimes yielded delightful coincidences. A particular favorite, posted by Frank:
USER: What color socks do you prefer?
ELIXIR: the only fluorescent light
USER: It would be hard to fit your shoes over fluorescent lights.
ELIXIR: How are you?
USER: I’m fine, thanks. How are you?
ELIXIR: I’ve been better.
USER: Oh, dear. What’s wrong?
ELIXIR: the lunch lady
USER: What has she done to you this time?
ELIXIR: out of here before 6 p.m.
USER: You want her canned? I’ll see what I can do, boss.
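The machinery behind an exchange like that one was almost nothing: every fragment ELIXIR absorbed went into a single undifferentiated pool, and a reply was pulled from the pool at random, with no regard for what had just been said. A sketch, with the pool's contents drawn from the transcript above:

import random

pool = [
    "the only fluorescent light",
    "How are you?",
    "I've been better.",
    "the lunch lady",
    "out of here before 6 p.m.",
]

def absorb(user_line):
    """Every user utterance joins the pool, context-free."""
    pool.append(user_line)

def reply(user_line):
    """Early ELIXIR: ignore the prompt entirely and retrieve at random."""
    absorb(user_line)
    return random.choice(pool)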
The Steiner Lab supplemented ELIXIR’s language immersion with formal teaching. Over time, ELIXIR was taught to capitalize the first letters of sentences, to use a period or question mark at the end of a statement, depending on the arrangement of the words that preceded it. It was taught to recognize keywords and categorize them into groups like family, geography, food, hobbies, weather; in response, it produced conversation that met the demands of the context. The years and years that the Steiner Lab spent teaching ELIXIR made it a sort of pet, or mascot: invitations to holiday parties were taped to the chassis of ELIXIR’s main monitor, and members of the lab began to call it by nicknames when they conversed with it. During chats, it was possible to recognize idioms and objects fed to it by particular members of the lab. Honey, it sometimes called its user, which was certainly Liston’s doing; Certainly not, it said frequently, which was David’s; In the laugh of luxury, it said once, which was probably Frank’s fault, since he was famous for his malapropisms. Eventually, many of these tics and particularities would be standardized or eliminated; but in the beginning they popped up as warm reminders of the human beings who populated the lab, and ELIXIR seemed to be a compilation of them all, a child spawned by many parents.
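The keyword teaching, for instance, might be sketched as a set of hand-built lists and a stock reply for each topic; the specific words and replies here are invented for illustration:

CATEGORIES = {
    "family":    {"mother", "father", "sister", "brother", "daughter"},
    "geography": {"city", "country", "ocean", "boston", "river"},
    "food":      {"lunch", "dinner", "hungry", "pie", "coffee"},
    "hobbies":   {"chess", "piano", "sailing", "reading", "garden"},
    "weather":   {"rain", "snow", "sunny", "cold", "storm"},
}

REPLIES = {
    "family": "Tell me more about your family.",
    "geography": "I have never been there. What is it like?",
    "food": "What did you have for lunch?",
    "hobbies": "What do you like to do for fun?",
    "weather": "What's the weather like?",
}

def categorize(sentence):
    """Return the first topic whose keywords appear in the sentence."""
    words = {w.strip(".,?!").lower() for w in sentence.split()}
    for topic, keywords in CATEGORIES.items():
        if words & keywords:
            return topic
    return None

def respond(sentence):
    """Answer in a way that meets the demands of the detected context."""
    return REPLIES.get(categorize(sentence), "Certainly not.")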
When Ada was eleven, David began to discuss with her the process of teaching ELIXIR the parts of speech. This had been done before by other programmers, with varying levels of success. David had new ideas. Together, he and Ada investigated the best way to do it. In the 1980s, diagramming a sentence so a computer could parse it looked something like this, in the simplest possible terms:
Soon you will be able to recognize these parts of speech by yourself
ADV you will be able to recognize these parts of speech by yourself
ADV NOUN will be able to recognize these parts of speech by yourself
NP will be able to recognize these parts of speech by yourself
NP VERB VERB these parts of speech by yourself
NP VERB these parts of speech by yourself
NP VERB DET parts of speech by yourself
NP VERB DET NOUN by yourself
NP VERB NP by yourself
NP VP by yourself
NP VP PREP yourself
NP VP PREP NOUN
NP VP PP
NP VP-PP
S
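Carried out by a program rather than on paper, the same bottom-up reduction might be sketched like this; the rules below cover only this one sentence and stand in for, rather than reproduce, the grammar David and Ada eventually built:

# Each rule rewrites a short run of tags as a single higher-level tag.
RULES = [
    (["ADV", "NOUN"], "NP"),     # "Soon you" (a simplification: the adverb is folded into the subject)
    (["VERB", "VERB"], "VERB"),  # collapse the verb group
    (["DET", "NOUN"], "NP"),     # "these parts of speech"
    (["VERB", "NP"], "VP"),      # verb group plus its object
    (["PREP", "NOUN"], "PP"),    # "by yourself"
    (["VP", "PP"], "VP-PP"),     # a VP modified by a PP
    (["NP", "VP-PP"], "S"),      # subject plus predicate
]

def reduce_once(tags):
    """Apply the first rule whose pattern occurs in the tag sequence."""
    for pattern, result in RULES:
        n = len(pattern)
        for i in range(len(tags) - n + 1):
            if tags[i:i + n] == pattern:
                return tags[:i] + [result] + tags[i + n:]
    return None   # no rule applies

def parse(tags):
    """Record every step of the reduction, as in the diagram above."""
    steps = [tags]
    while True:
        reduced = reduce_once(steps[-1])
        if reduced is None:
            return steps
        steps.append(reduced)

for step in parse(["ADV", "NOUN", "VERB", "VERB",
                   "DET", "NOUN", "PREP", "NOUN"]):
    print(" ".join(step))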