Bedtime Stories from the Philosophers of Mind

The Turing Test

Once upon a time, some friends invented a game with a teleprinter. Two would go into another room and answer questions via the teleprinter: the questioner on the other end had to guess which of them was a woman. They did not have to tell the truth: both candidates tried to convince the interrogator that they were female. It occurred to Alan Turing, one of the players, that this would make an interesting test for a future computer: could it pretend to be human so successfully that it deceived a human questioner on the other end of the wire?
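
The game is easy to state as a protocol, which is perhaps part of its appeal to computer people. Here is a minimal Python sketch of the setup; the Echoer candidates and the question list are invented stand-ins for whoever (or whatever) sits at the teleprinter:

```python
import random

class Echoer:
    """A trivially simple candidate: answers every question the same way.
    An invented stand-in for a human or machine at the teleprinter."""
    def __init__(self, reply):
        self.reply = reply

    def answer(self, question):
        return self.reply  # candidates are free to lie

def imitation_game(candidate_a, candidate_b, questions):
    # Hide the candidates behind neutral labels, as the teleprinter does.
    labels = {"X": candidate_a, "Y": candidate_b}
    if random.random() < 0.5:
        labels = {"X": candidate_b, "Y": candidate_a}
    transcript = []
    for q in questions:
        for label, candidate in labels.items():
            transcript.append((label, q, candidate.answer(q)))
    return transcript  # the interrogator studies this, then guesses

transcript = imitation_game(Echoer("I assure you I am human."),
                            Echoer("Beep. That is to say: me too."),
                            ["Are you a woman?", "Describe a sunset."])
for line in transcript:
    print(line)
```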

He thought it would be able to; specifically, he thought that by the end of the twentieth century a computer would be able to fool an average respondent during several minutes of apparently ordinary conversation. The really controversial claim, however, was that this kind of test could establish that a computer was, or at least deserved to be treated as, conscious.

The weak form of this claim (that if something seems to be conscious we might as well treat it as if it were for the time being) is hard to argue with, but not particularly interesting. Against the stronger form (that things which pass the test really are conscious), it can be argued that what makes someone conscious is not their external behaviour, or specifically their ability to hold an intelligent conversation, but what goes on inside their heads.

Do their responses spring from a real understanding of the conversation? In response, supporters of the test might ask how we know anyone is conscious other than by deductions based on the intelligence of their behaviour (conversational behaviour being an especially demanding variety).


Blandula: The real problem with the Turing test is that it doesn't work. Suppose we got incoherent gibberish through the teleprinter, or the words 'What are you talking about?', or a string of Xs, or nothing, every time. Would that prove that there wasn't a stupid or angry human being on the other end? Or suppose we get perfect, sophisticated answers to our questions. Does that prove they aren't a set of pre-recorded answers being selected by a cunning but witless algorithm, or by a long run of good luck? No, and no. For a test of this kind to work, there would have to be a question which human beings invariably answered one way and computers invariably answered another. Clearly there is no such question. Really the whole thing is a misapplication of Leibniz's Law.
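
Blandula's 'cunning but witless algorithm' is easy enough to exhibit. A toy sketch, with a made-up two-entry table and a stock evasion as the fallback; nothing in it understands anything, yet with a big enough table it might survive a few minutes of interrogation:

```python
# A deliberately witless responder: every reply is looked up in a
# pre-recorded table keyed on the question text. Nothing here
# understands anything at all.
CANNED_ANSWERS = {
    "how are you?": "Fine, thanks. A bit tired, to be honest.",
    "what is your favourite colour?": "Red, I suppose. Why do you ask?",
}

def witless_reply(question: str) -> str:
    # Normalise the question and select a pre-recorded answer;
    # fall back on a stock evasion when the table has no entry.
    return CANNED_ANSWERS.get(question.strip().lower(),
                              "What are you talking about?")

print(witless_reply("How are you?"))
print(witless_reply("What is consciousness?"))  # stock evasion
```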

Curiously enough, the origins of the Test point to a strong counter-argument. In the original form, it wasn't a computer pretending to be a person, it was a man pretending to be a woman. Yet no-one would argue that if the deception succeeded, that showed the man really was, in fact, female.


The Chinese Room


The room's first message arrives...

Once upon a time, a man was locked up in a room and made to do some extremely boring work: running a computer program by hand. He was given the instructions from the computer program written out in ordinary English, and a lot of data written in Chinese, which he did not understand. The computer program was designed to provide answers to simple questions about a story which was included in the data. By following the English instructions, the man could reproduce (rather slowly) the behaviour of the computer, manipulate the data in the same way, and so generate the same outputs. But he could not read either the inputs or the outputs he generated, both of which were in Chinese. From the point of view of Chinese speakers outside the room he was giving appropriate answers, yet it is absurd to maintain that what he did shows any understanding of the input information. Since he was doing exactly what a computer does, it follows that the computer does not, by virtue of running any particular program, have any understanding of what the program is about, or of its results, either. Doesn't it?
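
For the sceptical, the man's situation can be set out schematically. A toy sketch, assuming the 'program' amounts to matching symbol shapes against a rulebook; the two-entry rulebook here is an invented stand-in for Searle's story-answering program:

```python
# The man in the room, schematically: he matches the shapes of incoming
# symbols against a rulebook written in English and copies out whatever
# it prescribes. No step requires knowing what the symbols mean.
RULEBOOK = {
    "故事里有谁?": "一个男人。",   # "Who is in the story?" -> "A man."
    "他在哪里?": "在房间里。",     # "Where is he?" -> "In the room."
}

def follow_the_rules(squiggles: str) -> str:
    # Pure shape-matching: an exact lookup, nothing interpreted.
    return RULEBOOK.get(squiggles, "")

print(follow_the_rules("故事里有谁?"))  # appropriate answer, no understanding
```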

John Searle, who was of course the author of this tale, mentions a number of objections to his conclusions, none of which he finds convincing. He gives them names and attributes them to institutions, creating the rhetorically useful impression that whole schools of philosophers have laboured in vain for generations to come up with arguments against him (Berkeley seems to have done the most work).
1. (The Systems reply - Berkeley) The man and the instructions together form a system to which understanding can be attributed.
2. (The Robot reply - Yale) If the computation controlled a robot rather than merely answering questions, it would show real understanding.
3. (The Brain Simulator reply - Berkeley and MIT) What if the instructions simulate the working of a brain, synapse by synapse?
4. (The Combination reply - Berkeley and Stanford) Maybe the foregoing are not fully convincing alone, but if combined they make a strong case.
5. (The Other Minds reply - Yale) But look, how can you diagnose understanding other than by examining responses?
6. (The Many Mansions reply - Berkeley) Perhaps after all digital computers don't understand, but another kind of machine, as yet undiscovered, might.

Taking the objections in reverse order, Searle has no problem with the idea that some machine other than a digital computer might one day be conscious: he accepts that the brain is a machine, anyway. The practicalities of diagnosing consciousness are not the issue; the point is what it is you are trying to diagnose. Of course Searle is not impressed by the mere combination of arguments he has rejected individually. Simulating a brain is no good: a simulation of rain doesn't make you wet. You could simulate synapses with a system of water pipes which the man in the room controls, and just as obviously as in the original example, he still doesn't understand the stories he is asked about. Using the outputs to control a robot rather than answer questions makes no difference and adds no understanding. It seems highly implausible to attribute understanding to an arbitrary 'system' made up of the conjunction of the man and some rules. If necessary, the man can memorise the rules: then the whole 'system' is in his memory, but he still doesn't understand the Chinese.



Mary the colour scientist


Freed at last, Mary saw red.

Once upon a time, there was a very eminent scientist called Mary, who knew everything there was to know - everything - about the physical side of colour. She understood neurology and optics, diseases of the eye, practical ophthalmology, and the underlying quantum physics, all in full detail, and much more besides. Her memory and understanding were extraordinary - in fact, some people who don't believe in the story have said that they must have been so extraordinary that really we just confuse ourselves by attributing such amazing knowledge to a human being. Be that as it may, by a strange quirk of fate, Mary had never seen colours - although her colour vision functioned perfectly well. She had spent her life confined in a room where nothing was coloured (what is it with these philosophers about locking people in rooms?): all she could ever see was black, white, and different shades of grey.

One day, however, her colleagues at the Institute of Vision finally decided it was time to let Mary out, and led her into a room where a single red rose stood in a vase on the table. Suddenly she could see colours, or at least a colour, for the first time. Now the point is this: I said that Mary knew everything about colour from the objective, physical point of view, but when she saw it for the first time, she knew something she'd never known before - what it is like to see colour. Didn't she? And this proves that really seeing red involves something over and above the simple business of wavelengths and electrical impulses. Doesn't it?


Bitbucket: No, of course not. Mary acquired no new knowledge when she saw the rose - she had simply had a new experience. Focussing too exclusively on the role of the senses as information gatherers can lead us into the error of supposing that to experience a particular sight or sound is merely to gain some information. If that were so, reading the label on a bottle of wine would be as enjoyable as drinking it. Of course experiencing something allows us to generate information about it, but we also experience the reality, which in itself has nothing to do with information. I suspect that academics are particularly prone to this confusion. Perhaps they have spent so much time dealing with abstractions that reality begins to seem puzzling, and the fact that information about the colour red does not contain any real redness comes to seem a philosophical puzzle instead of the banal fact it really is. This is not to say that there are no mysteries about the nature of reality: but the fact that it doesn't consist of information isn't one.


Blandula: There's none so blind. However, I can't quite leave it there - there's another point which really has to be made. It's fairly obvious that when Frank Jackson came up with this story, he meant to be politically correct by choosing a woman as the expert colour scientist. But what was the result? Instead of merely perpetuating the assumption that scientists are men, he ended up painting a picture of a woman routinely confined, controlled, and turned into an experimental animal - a far worse revelation of grossly unacceptable subconscious attitudes.


Bitbucket: Gosh. Er, it never occurred to me before - and of course it doesn't make any difference to anything - but, er ... is Blandula a girl?


Chip-head


Bill noticed nothing...

Once upon a time, in the dead of night, a gang of mad, evil scientists (in philosophers' stories, scientists are usually mad and evil) crept into Bill's room and, carefully anaesthetising him, opened up his skull. With infinite care, they placed in his brain a tiny device which, when switched on remotely, would take over the work of a single neuron. Although the device was based on a silicon chip, it reproduced the functional behaviour of the neuron perfectly, so that whether Bill's brain was using the original neuron or the new chip, its behaviour would remain exactly the same. Repairing Bill's skull with such exquisite skill that no detectable trace of the operation remained, the triumphant scientists hurried away.

Over the course of the next few months, they returned on many occasions, replacing thousands of neurons until the unsuspecting Bill had, in effect, two brains. By throwing the lever in their secret laboratory, the scientists could switch Bill from operating with a normal brain, to operating with one composed entirely of silicon chips. Not only had they demonstrated the possibility of an artificial brain: they had produced one so similar to Bill's real brain that they could switch between the two while Bill was in the middle of a sentence without causing so much as a momentary pause.
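
The scientists' lever only makes sense if 'doing the work of a neuron' is a functional job description that different hardware can fill equally well. A toy sketch of that assumption follows; the threshold rule is invented and absurdly simple, standing in for whatever a real neuron relevantly does:

```python
from typing import Protocol, Sequence

class Neuron(Protocol):
    """The functional job description: same inputs in, same output out."""
    def fire(self, inputs: Sequence[float]) -> float: ...

class BiologicalNeuron:
    def fire(self, inputs: Sequence[float]) -> float:
        # A toy threshold rule; real neurons are vastly messier,
        # which is exactly Blandula's objection below.
        return 1.0 if sum(inputs) > 0.5 else 0.0

class SiliconNeuron:
    def fire(self, inputs: Sequence[float]) -> float:
        return 1.0 if sum(inputs) > 0.5 else 0.0  # identical by stipulation

def throw_the_lever(brain_size: int, use_silicon: bool) -> list:
    # The scientists' switch: swap every implementation at once while
    # the network's input-output behaviour stays exactly the same.
    cls = SiliconNeuron if use_silicon else BiologicalNeuron
    return [cls() for _ in range(brain_size)]

for use_silicon in (False, True):
    brain = throw_the_lever(3, use_silicon)
    print([n.fire([0.3, 0.4]) for n in brain])  # same output either way
```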

But what was happening to Bill's qualia? If qualia, real subjective experiences, only come from proper human brains, then every time the scientists switched to the silicon brain, they must disappear. They could waggle the switch on and off and produce 'dancing qualia' if they wanted. Not only that - how did it work when they were only half-way through the replacement programme, with only half the neurons affected? Did Bill have faded qualia, half as intense as before? Or were certain neurons crucial, making a sudden, absolute difference?

The really bizarre thing is that none of this affected Bill's behaviour in the least. With his silicon brain switched on, his behaviour was exactly the same as it would have been with it off - so although in one case he wasn't having a real experience of redness at all, he would still say that he was. The only conclusion you can draw is that there can't be anything fundamentally special about neurons after all. If qualia aren't constituted by functions, they must at least be determined by them, so that a silicon brain with the same functional patterns gives rise to just the same qualia as an organic one. Doesn't it?


Blandula: That just begs the whole question. By assuming that silicon chips can stand in for neurons, you build in the answer you want from the start. The trouble is, people like you have been assuming for a long time that neurons are just electric switches. The whole neural network thing is based on that assumption. But it isn't anything like as simple as that. Synapses depend on some very sophisticated chemistry, with many different transmitters and other factors to take into account. The way a neuron functions depends intimately on all sorts of messy biological factors, quite unlike a chip. I don't think it's even safe to ignore the role of glial cells, which are usually just dismissed as packaging for the neurons, but actually play an important role in the biochemical environment of the brain. It really is strange the way people attempt to explain things like emotions in terms of wiring, when it's so evident that hormones and other chemical factors have a leading role.


Bitbucket: You're misreading the story. Two things. First, of course this kind of experiment isn't remotely practical, and never will be. There are all sorts of difficulties. We could work on the story a bit to remove some of the plausibility problems - Bill could have been paralysed in an accident and kept in hospital permanently, for example. But it doesn't matter really - in these examples we have to be allowed a bit of magic. The point is, what would be the consequences if this were possible? If you refuse to consider anything but true examples, we're not going to get very far.

Second, I don't assume that neurons are just electric switches. This chip device could have tiny reservoirs of all the necessary neurotransmitters, or whatever. You just have to accept, as part of the starting assumptions, that in all the relevant respects, it performs the functions of a neuron. If it helps, you can suppose that the chips replace groups of neurons, rather than single ones.


Blandula: No, it won't work! I realise we have to accept some unrealistic premises for the sake of argument at times, but this is different. You explicitly took it for granted just then that there is some set of properties of Bill's neurons that includes all the relevant properties, and that the rest are unimportant. But to assume that is to assume functionalism or something like it from the very start! All the properties of Bill's brain are potentially relevant - none of them can be excluded. A robot, a machine, or a computer is what it is because it approximates to some abstract design: but Bill is just that organism over there. He's not an approximate instance of Billhood, he's Bill. That's exactly what makes him a real person, with real experiences, instead of a simulation. You must see that, surely?


Bitbucket: I see that this is some weird kind of essentialism. Just look at the human body - even you won't deny that it has organs and structures which are for particular jobs. They may not literally have been designed, but in many ways they look as if they were. It's the same with the brain. We don't know all the answers yet, but surely it's clear enough that certain parts of the brain do certain jobs, and that neurons were meant, in some sense, to work a particular way. What if Bill spontaneously develops a cancer in his brain which makes his speech garbled? Are you going to say that's all part of his nature, and perfectly OK? It seems to me that in such cases, something has clearly gone wrong with the brain, in the same way things can go wrong with a designed machine - and sometimes we can even put that kind of thing right again.

Anyway, in a sense, he is an approximate instance of Billhood. As a phenotype, he is one instantiation of the design embodied in his genotype - his DNA.


Blandula: No way. If that were true, Bill and his identical twin would share a single identity...