Test measures whether robots are capable of thinking like humans

Oct 13, 2008 09:52 GMT

A recent experiment at the University of Reading, in Berkshire, tested whether computers could pass for humans in text conversations. The chatbots' jokes, arguments and answers were not enough to pass the Turing test.

Alan Turing was a brilliant British mathematician who addressed the question of machine intelligence in the 1950s, arguing that, if a computer could converse like a human, it should be regarded as thinking like one for all practical purposes. To put artificial conversational skill to the test, he devised a stratagem that would pit a human judge against a computer said to possess impressive chatting abilities.

However, since he expected people to be prejudiced against machines from the very beginning, he added a human control to the setup. The judge would address questions, statements, jokes or complaints to a split screen, and receive answers from both a computer and a human. If the judge could not tell which was which after five minutes, the machine was considered the equal of a human conversationalist.
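
That judging loop is simple enough to sketch in code. What follows is a minimal, hypothetical Python harness for one five-minute session, not anything used at the Reading event; judge_ask, respond_human and respond_machine are made-up stand-ins for the judge's keyboard and the two hidden conversation partners.

```python
import random
import time

FIVE_MINUTES = 5 * 60  # Turing's suggested time limit, in seconds

def run_imitation_game(judge_ask, respond_human, respond_machine):
    """Run one session; return True if the judge misidentifies the machine."""
    # Randomly assign the human and the machine to channels A and B,
    # so the judge cannot rely on position.
    channels = {"A": respond_human, "B": respond_machine}
    if random.random() < 0.5:
        channels = {"A": respond_machine, "B": respond_human}

    deadline = time.monotonic() + FIVE_MINUTES
    while time.monotonic() < deadline:
        question = judge_ask()
        if question is None:  # the judge has seen enough
            break
        # Both hidden partners answer every message, split-screen style.
        for label, respond in channels.items():
            print(f"{label}: {respond(question)}")

    guess = input("Which channel is the machine, A or B? ").strip().upper()
    return channels.get(guess) is not respond_machine  # True = machine fooled the judge
```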

In 1991, American philanthropist and scientist Hugh Loebner, in collaboration with the Cambridge Center for Behavioral Studies, established a yearly contest based on the Turing test. The first prize is substantial: $100,000 in cash and a solid 18-carat gold medal for any computer that can process visual and audio data in addition to text. A silver medal goes to a machine that tricks half of its testers into thinking it is human, while the bronze one is awarded to the most human-like chatbot of the year. No gold or silver medals have been won so far.

This year's five participants were Brother Jerome, Elbot, Eugene Goostman, Jabberwacky and Ultra Hal. Alice, the sixth program, could not be finished in time to enter the competition. All of the programs used various methods to fool the jury, from humor to arguments to parrying tricky questions, but only one actually managed to sow doubt in 3 of its 12 testers.

Even so, fooling 25 percent of the judges was enough for Elbot to secure the bronze medal and $3,000 for its creator, Fred Roberts. Elbot disarmed the testers from the very beginning by admitting it was a machine, and used humor to be more convincing. When asked, “Hi. How's it going?” by a tester, it replied, “I feel terrible today. This morning I made a mistake and poured milk over my breakfast instead of oil, and it rusted before I could eat it.” It also tried to keep control of the conversation's topics, so that the exchange would not drift toward subjects it was not pre-programmed to cope with.
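
The article does not describe Elbot's internals, but the steering tactic it mentions, matching keywords against a script and pulling the conversation back when nothing matches, can be sketched in a few lines. Every rule and canned phrase below is an invented placeholder in the spirit of classic scripted chatbots, not Elbot's actual material.

```python
import itertools

# Hypothetical scripted rules: keyword -> canned reply.
# Real contest bots carry far larger scripts; these entries are placeholders.
RULES = {
    "weather": "I prefer indoor climates. Rain is terrible for my circuits.",
    "robot": "Yes, I am a machine. Rather well built, if I may say so myself.",
    "breakfast": "I once poured milk on mine instead of oil. It rusted instantly.",
}

# When no rule matches, steer back toward topics the script can handle,
# instead of improvising on unfamiliar ground.
STEERING_LINES = itertools.cycle([
    "Enough about that. Have you ever discussed breakfast with a robot?",
    "Let us talk about machines instead. A much sturdier subject.",
])

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    # Unknown topic: take control of the conversation.
    return next(STEERING_LINES)

print(reply("Hi. How's the weather?"))                # matched rule
print(reply("What do you make of quantum physics?"))  # steered away
```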

Of course, there is plenty of debate over whether pre-programming is anything like actual thought, since deploying scripted lines well is not the same as improvising. Professor Kevin Warwick, the University of Reading cyberneticist who organized this year's tests, states, “You can be flippant, you can flirt, it can be on anything. I'm sure there will be philosophers who say, 'OK, it's passed the test, but it doesn't understand what it's doing'.”

However, it has to begin somewhere, right? As Warwick puts it, “Where the machines were identified correctly by the human interrogators as machines, the conversational abilities of each machine was scored at 80% and 90%. This demonstrates how close machines are getting to reaching the milestone of communicating with us in a way in which we are comfortable. That eventual day will herald a new phase in our relationship with machines, bringing closer the time in which robots start to play an active role in our daily lives.” Still, if a machine one day becomes genuinely intelligent and self-aware, would humans have the right to shut it down?

[Photo: Loebner Prize medal, Alan M. Turing side]
[Photo: Loebner Prize medal, Hugh G. Loebner side]