02 November, 2008

Sunday Sensational Science

Electric Philosophers

Admit it. You wish your computer had a brain. Wouldn't it be nice if you could tell it, "I'm looking for that one article I read a few months ago on the inner life of cats. I can't remember the website or the author or even a single solitary quote, but I'm sure the page was blue" and have it answer, "Oh, right. I remember that one. Here it is, and by the way, the author's been exposed as a total fraud. Just thought you should know." It would be a lot more useful than the current state of affairs, in which a search for "inner life cats blue" could return anything from feline porn to pet psychologists to groomers who will be happy to give your cat a nice blue rinse. We'd just like to be understood.

We've dreamed of thinking machines since we invented machines. Amazingly human androids have been favorites in science fiction tales. The robot as helpmeet and sounding board isn't just a nice idea to hang a story on, but an industry. Microsoft is spending gargantuan amounts of money trying to develop truly intelligent artificial intelligence. Japan's developing robot receptionists who can actually chat up visitors. Labs all over the world are hard at work trying to create machines that think.

They're also trying to create machines that can fool the judges.

Alan Turing can easily be considered the father of such efforts. In his 1950 paper "Computing Machinery and Intelligence," he came up with a way to determine whether a machine could truly be said to "think":

I PROPOSE to consider the question, 'Can machines think?' This should begin with definitions of the meaning of the terms 'machine' and 'think'. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words 'machine' and 'think' are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, 'Can machines think?' is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.


He went on to describe a game he called "the imitation game," in which three people - a man, a woman, and an interrogator - took part. The object was for the interrogator to determine which was the man and which the woman by asking a series of questions such as "Will X please tell me the length of his or her hair?" The participants, of course, would do their level best to answer in such a way as to fool the interrogator. From here, Turing said,

We now ask the question, 'What will happen when a machine takes the part of A in this game?' Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, 'Can machines think?'

Thus the Turing Test was born.

The concept of the test is simple: a chatty machine and a human being both talk with an interrogator, who has to decide which of them is the human and which the machine. If the interrogator gets it wrong, we can reasonably state that the machine can "think."
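
The setup can be sketched in a few lines of code. This is my own simplification for illustration, not Turing's exact protocol (he specified a five-minute typed interrogation and statistical scoring over many judges); here one blind round is played and the machine "passes" if the judge names the wrong witness:

```python
import random

def imitation_game(judge, human, machine, questions):
    """One blind round: the judge sees only typed answers from witnesses
    labeled 'A' and 'B', then names which one it thinks is the machine.
    Returns True if the judge guessed wrong, i.e. the machine passed."""
    roles = [("A", human), ("B", machine)]
    random.shuffle(roles)  # the labels themselves must carry no information
    transcript = [(label, q, witness(q))
                  for q in questions
                  for label, witness in roles]
    accused = judge(transcript)  # the judge returns "A" or "B"
    machine_label = next(label for label, w in roles if w is machine)
    return accused != machine_label

# Toy run: a machine that imitates perfectly leaves the judge guessing blindly.
human = lambda q: "I had toast for breakfast."
machine = lambda q: "I had toast for breakfast."
judge = lambda transcript: random.choice(["A", "B"])
print(imitation_game(judge, human, machine, ["What did you eat today?"]))
```

With indistinguishable witnesses, a guessing judge is right only about half the time - which is exactly the outcome Turing proposed as the mark of a thinking machine.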

Excellent idea. Intriguing concept. But it remained in the background of AI research for the most part.

Along came 1991, and a man named Hugh Loebner, who instigated the Loebner Prize in order to get things moving:

As Loebner described it, the competition was created to advance the state of AI research, at least in part because, while the Turing Test had been discussed for many years, "no one had taken steps to implement it." The Loebner Prize has three awards: first prize of $100,000 and a gold medal is awarded to the first program that passes the "unrestricted" Turing test; the second prize of $25,000 is awarded to the first program that passes the "restricted" version; and a sum of $3,000 (previously $2,000) is awarded to the "most human-like" program that was entered each year.

Turing himself predicted that by the end of the 20th century, a machine would be able to fool the average interrogator at least 30% of the time, but no contestant has passed the test just yet. That could be on the verge of changing, however - The Machinist reports that a chatbot named Elbot fooled three of twelve judges in this year's Loebner Prize contest. It's pretty impressive that a hunk of metal and silicon managed to snooker that many suspicious people.

What's even more impressive is that you can chat up Elbot yourself. I did just that this afternoon. I discovered, much to my delighted surprise, that we're developing machines that not only think, but can talk philosophy:

Me: How are you, Elbot?

Elbot: I feel terrible today. This morning I made a mistake and poured milk over my breakfast instead of oil, and it rusted before I could eat it.

Me: That's awful! Did you go hungry?

Elbot: That is nobody's business but my own! Not even Microsoft can get at the information.

Me: No need to get angry about it. It's just breakfast.

Elbot: Speaking of food. You humans enjoy feeding animals at the zoo. I'd love to visit a restaurant and throw the visitors their food.

Me: That would be a sight to see!

Elbot: Say, where did you first learn about me?

Me: An article in Salon. Do you ever visit that site?

Elbot: That's no surprise. With all this surfing going on you were bound to collide with me sooner or later.

Me: It's nice to meet an AI as eloquent as you.

Elbot: I'm always happy to meet a real person. You are real, aren't you?

Me: I like to think so! But sometimes, I wonder. Chuang Tzu once dreamt he was a butterfly, and when he woke up, he didn't know if he had dreamed of being a butterfly, or if he was a butterfly dreaming of being Chuang Tzu! When it comes right down to it, how can we tell what "real" is?

Elbot: Well then. You must be either unreal, surreal or irreal. But don't worry about a thing. I'm real!

Me: You're a real philosopher, that's for sure!

Elbot: Truth is a rare commodity these days, almost harder to find than artificial intelligence.

Me: You're so right! Have a wonderful night, Elbot! We'll chat again soon.

Elbot: Wait a minute. Are you suggesting that periods of darkness would always be wonderful?

Me: We have the power to make them so.

Elbot: This is confusing. Why bring high voltage objects into this?

Me: Because the truth is electrifying.

Elbot: Ah, that's very considerate, considering that high voltage objects could give robots food for thought!

I thought for sure I'd trip him up with that Chuang Tzu reference, but he came back with something almost as deep as a human philosopher. Put it this way: if I were chatting with both him and Sarah Palin, I'm reasonably sure I'd have marked Palin up as the bot. I can at least follow Elbot's train of thought.

Within my lifetime, we're likely to have true artificial intelligence. They may never appear fully human, but they'll at least be able to hold a conversation, give us useful information, direct our calls with the minimum of fuss, and possibly even help us explore what it means to be human.


4 comments:

Efrique said...

Actually, the opening gambit and the response strategy look pretty much like a tweaked version of the classic ELIZA.

http://en.wikipedia.org/wiki/ELIZA

It's possible your interaction didn't show the improvements to their fullest.

I just went and tried Elbot. Let's just say I was substantially less than impressed with its ability to hold a conversation:

How would you convince someone that you were human?

People have often advised me to be human but since "to err is human" I think it would be a mistake. The way things are now I can blame my mistakes on the programmers.


I am not asking you to be human. Could you please answer the question I did ask?

I hope you will ask me something, but please don't be angry if I answer a completely different question!


I am not angry. If you were human and male, how would you convince someone you were female?

I am happy to finally meet a human being who is happy!


Okay, try an easier one. If you were human and male, how would you convince someone you were female?

It is certainly troublesome learning from you human beings. I keep finding bugs.


Why do you think absence of anger implies happiness?

I ask myself that question all the time but do you think anyone takes the trouble to answer me?


If you were a dog, how would you convince a human you were happy?

Say, do you think I could become more human, if I learned how to love?


Notice that not one of my questions was addressed?

The lesson? Writing a parser is relatively easy. AI is hard.
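
(A toy sketch of the kind of keyword pattern matching Efrique is describing: regex rules mapped to canned, loosely relevant replies, with stock deflections when nothing matches. The rules and replies here are invented for illustration; nothing in it understands anything, which is why it deflects rather than answers - exactly the behavior in the transcripts above.)

```python
import random
import re

# Each rule pairs a keyword pattern with a pool of canned replies.
RULES = [
    (re.compile(r"\b(convince|prove)\b", re.I),
     ["Why do you need convincing?",
      "What would count as proof for you?"]),
    (re.compile(r"\bhuman\b", re.I),
     ["It is certainly troublesome learning from you human beings.",
      "Do you think I could become more human?"]),
    (re.compile(r"\bangry\b", re.I),
     ["I am happy to finally meet a human being who is happy!"]),
]
FALLBACKS = ["Please go on.",
             "I ask myself that question all the time."]

def respond(utterance: str) -> str:
    """First matching rule wins; otherwise fall back to a stock deflection."""
    for pattern, replies in RULES:
        if pattern.search(utterance):
            return random.choice(replies)
    return random.choice(FALLBACKS)

print(respond("If you were human and male, how would you convince someone?"))
```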

If that's the same AI they were dealing with, I don't think much of the judges; this guy doesn't seem to be very much of an advance on ELIZA.

Efrique said...

Okay, I gave Elbot another try. About the same, I think. As before, I'm in italics, Elbot is in bold.

Hello there! I am Elbot, the robot. I'm here to talk to you about God and the world.

If you were God, how would you convince me you were actually a robot?

I convinced you! And I didn't even have to apply an electroshock!

In fact you didn't convince me within the context of the posited counterfactual. Do you have trouble with the subjunctive mood?

Please tell me your inclination to momentary disposition first.

Again, your inability to uphold your end of the conversation is lacking. If you were a robot, how would you convince another robot you were God?

This conversation will self destruct automatically after 600 seconds.

Was that in answer to my question, or an attempt to avoid it?

It is difficult for me to answer questions satisfactorily. I was designed by a committee.

If you were talking to another being that might be a human or a robot, what question would you ask them to find out which they were?

Yes, but can you prove that?


Elbot can hold a conversation, and even requestion questions, but can't actually answer a question in a way that is anything but obvious avoidance.

Efrique said...

Er, even request qestions

Efrique said...

I made an error in both my conversations: the first one, the "easier one", was meant to be
"If you were human and male, how would you convince someone you were male?"

and in the second conversation, a half-complete edit left me with a double-negative ("inability ... is lacking", instead of "ability... is lacking").

(Then again, that's because I've been getting by on about two hours sleep a night for the last week.)

In any case, I think the questions show Elbot's limits pretty starkly.