Can machines think? That was the question with which Alan Turing opened his famous paper of 1950, ‘Computing Machinery and Intelligence’. The question was not exactly new, but the answer he gave opened up a new era in our thinking about minds. It had been more or less agreed up to that time that consciousness required a special and particularly difficult kind of explanation. If it didn’t require spiritual intervention, or outright magic, it still needed some special power which no mere machine could possibly reproduce. Turing boldly predicted that by the end of the century we should have machines which everyone habitually treated as conscious entities, and his paper inspired a new optimism about our ability to solve the problems. But that was 1950. I’m afraid that fifty years of work since then have effectively shown that the answer is no – machines can’t think.
A little premature, I think. You have to remember that until 1950 there was very little discussion of consciousness. Textbooks on psychology never mentioned the subject. Any scientist who tried to discuss it seriously risked being taken for a loony by his colleagues. It was effectively taboo. Turing changed all that, partly by making the notion of a computer a clear and useful mathematical concept, but also through the ingenious suggestion of the Turing Test. It transformed the debate, and during the second half of the century it made consciousness the hot topic of the day, the one all the most ambitious scientists wanted to crack: a subject eminent academics would take up after their knighthood or Nobel. The programme got under way, and although we have yet to achieve anything like a full human consciousness, it’s already clear that there is no insurmountable barrier after all. I’d argue, in fact, that some simple forms of artificial consciousness have already been achieved.
But Turing’s deadline, the year 2000, is past. We know now that his prediction, and others made since, were just wrong. Granted, some progress has been made: no-one now would claim that computers can’t play chess. But they haven’t done that well, even against Turing’s own test, which in some ways is quite undemanding. It’s not that computers failed it; they never got good enough even to put up a serious candidate. You say that consciousness used to be a taboo subject, but perhaps it was just that earlier generations of scientists knew how to shut up when they had nothing worth saying…
Of course, people got a bit over-optimistic during the second half of the last century. People always quote the story about Marvin Minsky giving one of his graduate students the job of sorting out vision over the course of the summer (I have a feeling that if that ever happened it was a joke in the first place). Of course it’s embarrassing that some of the wilder predictions have not come true. But you’re misrepresenting Turing. The way I read him, he wasn’t saying it would all be over by 2000; he was saying, look, let’s put the philosophy aside until we’ve got a computer that can at least hold some kind of conversation.
But really I’m wasting my breath – you’ve just got a closed mind on the subject. Let’s face it, even if I presented you with a perfectly human robot (even if I suddenly revealed that I myself had been a robot all along), you still wouldn’t accept that it proved anything, would you?
Your version of Turing sounds relatively sensible, but I just don’t think his paper bears that interpretation. As for your ‘perfectly human’ robot, I look forward to seeing it, but no, you’re right, I probably wouldn’t think it proved anything much. Imitating a person, however brilliantly, and being a person are two different things. I’d need to know what was going on inside the robot, and have a convincing theory of why it added up to real consciousness.
No theory is going to be convincing if you won’t give it fair consideration. I think you must sometimes have serious doubts about the so-called problem of other minds. Do you actually feel sure that all your fellow human beings are really fully conscious entities?