Welcome our new zombie overlords

Blandula: A completely new idea in the field of consciousness doesn’t come along all that often, but I think Professor Joel Marks of the University of New Haven may just have come up with one. In the latest issue of Philosophy Now he asks whether consciousness might not actually be a disadvantage, just as (in his eyes) having colour in the comic strips every day instead of black and white represents a backward step.

He’s talking about phenomenal consciousness, of course; qualia, the real redness of red or blueness of blue. We often do things best, he points out, when we do them automatically: when we stop to become really conscious of how things look or sound, we make mistakes. Perhaps when Mary finally emerged from her black-and-white room, seeing colour for the first time would merely make her trip over the carpet.

But if that’s true, then zombies (hypothetical people who function normally but have no conscious experience, just colourless registration of data) would have a consistent edge over the rest of us. They would be bound to take over the world. Just a minute, now – you don’t think that they could already…

Bitbucket: Marks is just indulging his penchant for fabulation, but there is a serious point lurking here somewhere, and once again the point is that qualia are a load of rubbish. The orthodox theory of qualia requires that they make no difference to the practical functioning of a human being, so that zombies without them are perfectly imaginable. But there are two reasons why that can’t be so.

First, everything requires energy, however small an amount. Qualia have been likened to the humming noise that the computer makes when you switch it on, or the whistle on a steam engine. Neither, it is claimed, affects the operation of the machine. In the case of the computer, it looks as if this might be true, but if you try to make one which is otherwise identical but doesn’t make the noise, you’ll soon find out it isn’t so. The whistle is an even worse example, because it does have a small direct effect on the operation of the engine: it draws steam from the same boiler that drives the pistons, so if you blow it long enough and hard enough, the locomotive will actually begin to lose power. It follows that zombies couldn’t be exactly like normal people.

Second, everything has consequences. If there could be people with qualia and people without qualia, one sort would have an advantage. Either the qualia would help you spot food and predators, or, as Marks suggests, they would be an unwelcome distraction. Either way, one sort of person would have died out long ago.

Blandula: Oh, that doesn’t follow at all! Think of sickle-cell anaemia and malaria. The anaemia is a bad thing, but worth having where malaria is rife and untreatable, because it gives some immunity. But that doesn’t mean everyone in the relevant areas has anaemia. A balance is reached. The same might be true of zombies. Maybe they’re good at, say, being accountants and running large corporations, but rubbish at thinking of new ideas and telling jokes (or vice versa). If there are too many accountants, the jokers will breed more successfully, but if everyone is laughing and can’t add up, the numerate people will start to do better. A balance is struck and we end up with some zombies and some ‘normal’ people in the population.

Nice to hear you argue that qualia must be significant, though…
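For readers who want to see that the ‘balance’ argument actually works, here is a minimal sketch of frequency-dependent selection, the mechanism Blandula is appealing to. The fitness numbers are invented purely for illustration; all the sketch shows is that when each type does better the rarer it becomes, the population settles at a stable mixture rather than one type dying out.

```python
# Minimal, purely illustrative sketch of frequency-dependent selection.
# Assumption (not from the article): each type's fitness rises as it gets rarer,
# like accountants in a world of jokers and vice versa.

def fitness(p_zombie):
    """Return (zombie fitness, 'normal' fitness) for a given zombie frequency."""
    w_zombie = 1.0 + 0.5 * (1.0 - p_zombie)  # zombies do best when scarce
    w_normal = 1.0 + 0.5 * p_zombie          # 'normals' do best when zombies abound
    return w_zombie, w_normal

p = 0.9  # start with a population that is 90% zombie
for _ in range(200):
    w_z, w_n = fitness(p)
    mean_w = p * w_z + (1.0 - p) * w_n   # average fitness in the population
    p = p * w_z / mean_w                 # replicator update: a type's share grows
                                         # in proportion to its relative fitness

print(f"zombie share after 200 generations: {p:.3f}")
# With these made-up numbers the share converges to 0.5: neither type is driven
# out, which is all the analogy with sickle-cell anaemia requires.
```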

Bitbucket: If there were such things, they would have to be: in fact, however, we are all zombies in the sense of not having real qualia, and always were; or if you don’t like looking at it that way, you can say we’ve all got qualia, but that qualia are just a normal psychological phenomenon with causes and effects just like everything else. Take your pick – basically we’re better off just forgetting the whole mess.

Is the Turing Era over?

Blandula: Can machines think? That was the question with which Alan Turing opened his famous paper of 1950, ‘Computing Machinery and Intelligence’. The question was not exactly new, but the answer he gave opened up a new era in our thinking about minds. It had been more or less agreed up to that time that consciousness required a special and particularly difficult kind of explanation. If it didn’t require spiritual intervention, or outright magic, it still needed some special power which no mere machine could possibly reproduce. Turing boldly predicted that by the end of the century we should have machines which everyone habitually treated as conscious entities, and his paper inspired a new optimism about our ability to solve the problems. But that was 1950. I’m afraid that fifty years of work since then have effectively shown that the answer is no – machines can’t think.

Bitbucket: A little premature, I think. You have to remember that until 1950 there was very little discussion of consciousness. Textbooks on psychology never mentioned the subject. Any scientist who tried to discuss it seriously risked being taken for a loony by his colleagues. It was effectively taboo. Turing changed all that, partly by making the notion of a computer a clear and useful mathematical concept, but also through the ingenious suggestion of the Turing Test. It transformed the debate, and during the second half of the century it made consciousness the hot topic of the day, the one all the most ambitious scientists wanted to crack: a subject eminent academics would take up after their knighthood or Nobel. The programme got under way, and although we have yet to achieve anything like a full human consciousness, it’s already clear that there is no insurmountable barrier after all. I’d argue, in fact, that some simple forms of artificial consciousness have already been achieved.

Blandula: But Turing’s deadline, the year 2000, is past. We know now that his prediction, and others made since, were just wrong. Granted, some progress has been made: no-one now would claim that computers can’t play chess. But they haven’t done that well, even against Turing’s own test, which in some ways is quite undemanding. It’s not that computers failed it; they never got good enough even to put up a serious candidate. You say that consciousness used to be a taboo subject, but perhaps it was just that earlier generations of scientists knew how to shut up when they had nothing worth saying…

Bitbucket: Of course, people got a bit over-optimistic during the last half of the last century. People always quote the story about Marvin Minsky giving one of his graduate students the job of sorting out vision over the course of the summer (I have a feeling that if that ever happened it was a joke in the first place). Admittedly, it’s embarrassing that some of the wilder predictions have not come true. But you’re misrepresenting Turing. The way I read him, he wasn’t saying it would all be over by 2000; he was saying, look, let’s put the philosophy aside until we’ve got a computer that can at least hold some kind of conversation.

But really I’m wasting my breath – you’ve just got a closed mind on the subject. Let’s face it, even if I presented you with a perfectly human robot (even if I suddenly revealed that I myself had been a robot all along), you still wouldn’t accept that it proved anything, would you?

Blandula: Your version of Turing sounds relatively sensible, but I just don’t think his paper bears that interpretation. As for your ‘perfectly human’ robot, I look forward to seeing it, but no, you’re right, I probably wouldn’t think it proved anything much. Imitating a person, however brilliantly, and being a person are two different things. I’d need to know what was going on inside the robot, and have a convincing theory of why it added up to real consciousness.

Bitbucket: No theory is going to be convincing if you won’t give it fair consideration. I think you must sometimes have serious doubts about the so-called problem of other minds. Do you actually feel sure that all your fellow human beings are really fully conscious entities?

Blandula: Well…