
The Turing Test Gimmick

The Verge reports that a chatbot called Eugene Goostman has passed the Turing Test. Hosted by Reading University at the Royal Society in London, the “Turing Test 2014” asked 30 judges to take part in five parallel text conversations (one with a human, one with a computer program, or chatbot), each lasting five minutes, and to judge whether they were communicating with a human. A third of the judges identified Goostman as human, and so the program is said to have passed the test.

For those unfamiliar with the Turing Test, in 1950, Alan Turing suggested that one way to answer the question of whether machines might be able to think is to test their ability to imitate human language. He called this the “imitation game”:

The new form of the problem can be described in terms of a game which we call the “imitation game.” It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either “X is A and Y is B” or “X is B and Y is A”… We now ask the question, “What will happen when a machine takes the part of A in this game?” Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, “Can machines think?”

Turing predicted that in 50 years an average interrogator would have no more than a 70% chance of correctly identifying whether he was speaking with a person or a machine:

I believe that in about fifty years’ time it will be possible to programme computers…to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning. The original question, “Can machines think?” I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

So what’s the problem with Eugene Goostman’s pass?

Some have suggested that telling judges that Goostman was a 13-year-old boy from Ukraine made it easier for the program to pass. Odd responses would more likely be chalked up to the program’s supposed age, foreignness or limited English. Others have argued that Goostman was not a computer but a chatbot, that the 30% pass mark is dubious, and that the Turing Test held in London is not the same as Turing’s “imitation game.” After all, in Turing’s rules for the imitation game, he gave no time limit and suggested that passing it meant the interrogator would choose “wrongly as often when the game is played” with a computer as when it is played “between a man and a woman.”

But the real problem is that the Turing Test is meaningless. It cannot test for intelligence or consciousness. It never has, and it never will.

In Turing’s original paper, he responds to a number of possible objections to his test—one by Geoffrey Jefferson. In a 1949 speech, Jefferson argued that a machine cannot be said to think until it “can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols.” Turing argues that for Jefferson the only way to know if a computer can think “is to be the machine and to feel oneself thinking.” It follows, then, that the only way to know if another person thinks is to be that very person. “I am sure,” Turing writes, “that Professor Jefferson does not wish to adopt the extreme and solipsist point of view.” In short, we judge intelligence or consciousness in humans or other beings using external signs. Jefferson’s objection is invalid, Turing claims, and his “imitation game” stands.

But this is a rather weak reading of Jefferson’s (possible) objection. Jefferson is not suggesting that the only way to know if a machine thinks is to be that machine. Rather, he is suggesting that the only way to know if a machine thinks is to observe evidence of human understanding or feeling in the machine. Until a machine is able to produce some seemingly spontaneous text or piece of music that expresses some unprogrammed idea or feeling and that, on further investigation, shows both an understanding of the words used and an awareness of the feelings felt, it cannot be said to think.

Here’s the deal: Turing’s “imitation game” does not test for either of these. It does not test for evidence of understanding. It does not test for evidence of feeling. How could it when there is no theory for how consciousness could develop from metal, plastic and electricity?

The “imitation game,” rather, simply tests whether computer programmers can fool other people into thinking a program is a human. It’s a game that has become a gimmick for attracting funding or generating wildly overhyped press releases.
