Big Question: Are there any “human” endeavors that artificial intelligence won’t be able to handle better than we can?
June 1, 2011 |  by Michael Anft

“Alan Turing, the brilliant English mathematician and computer scientist, anticipated most objections to the possibility of ‘thinking machines’ more than 60 years ago. One common argument goes something like this: Sure, computers might be able to do X, but they’ll never be able to do Y. Turing countered that most Y’s are things that it would be silly to create a machine to do—fall in love, enjoy strawberries and cream—even if we could. For many other possible Y’s, though, research around the world has already produced striking results. There are programs that create paintings, compose music, and even recognize anger in a human voice.

“Where art is concerned, I think computers will, at best, someday match us. In other areas, however, I expect computers to exceed human capacity simply because of their processing speed and storage. With the right algorithms, computers will be able to think faster and have access to more information than a human, who has pesky biological limitations and psychological biases. But this won’t happen right away.

“For the time being, most of our research is limited to what is called ‘weak’ AI. We try to find specific solutions to specific problems. We create savants. Very few researchers are interested in ‘strong’ AI—building general-purpose artificial intelligence. As a result, we have programs like Watson that can play Jeopardy! but can’t play chess. But I think more researchers will return to the problem of strong AI. At that point, it won’t be so much a question of if computers will become smarter than us as when. In which case, we should probably worry more about making sure they’re friendly.”

Stephyn Butcher is a lecturer at the Whiting School and a PhD student in the Department of Computer Science.

—Interview by Michael Anft