Beyond the Turing Test

Is there anything computers can’t do—or at least won’t be able to do at some future time? They have already gotten pretty powerful: so far we’ve developed cars that drive themselves, protein-folding software that discovers new medicines, and some great tools to help students cheat on their papers. The way things are going, ten years from now we may all be obsolete.
And yet there are still plenty of things humans can do that computers can't, like feel emotions, fall in love, or hold a real conversation where we say what we really think. People who think you can have a totally genuine conversation with a machine are fooling themselves—so much so that some folks out there think they have AI boyfriends and girlfriends. But you can't have an AI boyfriend or girlfriend: they'll never understand you, because they can't think.
Alan Turing, however, would have disagreed. He wrote a famous essay in 1950, at a time when the only computers in existence were gigantic behemoths with far more limited powers. But he was already thinking ahead to what they might become, and he thought that one day they might be able to think just like humans. These days, computers are really good at mimicking humans—much better than in Turing's time. But that still doesn't mean there's anything interesting going on in there; the lights are on, but nobody's home.
But Turing would again disagree. He thought that if a computer can fool you into believing it's a human, that in itself is proof that it can think. That's what he called the "imitation game" and what people nowadays call the "Turing test." But what does the Turing test really test for? Suppose I trick you into thinking my coffee cup is made of gold. Should you conclude that it's actually made of gold? No, only that I fooled you into thinking that it's made of gold. Similarly, if a computer tricks you into thinking it's a human, all you can conclude is that you have been fooled. And if I trick you into thinking a computer has human-level intelligence, you shouldn't conclude that it actually has human-level intelligence.
But is it different when we're talking about intelligence? How do we know that other people have minds? We have conversations with them; we ask them questions, and they seem to give sensible answers. And they say creative and original things that surprise us. Which is just Turing's point: we should judge computers exactly the same way. If you decide that your fellow humans have minds by talking and listening to them, do the same for machines. If the entity in the other room manages to speak sensibly and say stuff that surprises you, then it's a mind, whether it's made of carbon or of silicon.
Does that mean humans are no better than machines? Well, Turing thought the brain was just a big computer, with a gigantic number of tiny little bots, each doing one incredibly simple thing. Add them all up and you get Proust, geometry, bowling leagues—all human achievement. For some that may paint a pretty depressing picture of our mental lives. After all, it sure feels like there’s something more to cognition than just zeros and ones. What about beauty? What about love?
We’ll see what our guest has to say: it’s Juliet Floyd from Boston University, editor of Philosophical Explorations of the Legacy of Alan Turing.