John Searle

Blandula: Searle is a kind of Horatius, holding the bridge against the computationalist advance. He deserves a large share of the credit for halting, or at least checking, the Artificial Intelligence bandwagon which, until his paper ‘Minds, Brains and Programs’ of 1980, seemed to be sweeping ahead without resistance. Of course, the project of “strong AI” (a label Searle invented), which aims to achieve real consciousness in a machine, was never going to succeed, but there has always been (and still is) a danger that some half-way convincing imitation would be lashed together and then hailed as conscious. The AI fraternity has a habit of redefining difficult words in order to make things easier: terms which, properly understood, imply understanding – and which computers therefore can’t handle – are redefined as simpler things which computers can cope with. At the time Searle wrote his paper, it looked as if “understanding” might quickly go the same way, with claims that computers running certain script-based programs could properly be said to exhibit at least a limited understanding of the things and events described in their pre-programmed scenarios. If this creeping debasement of the language had been allowed to proceed unchallenged, it would not have been long before ‘conscious’, ‘person’ and all of the related moral vocabulary were similarly subverted, with dreadful consequences.

After all, if machines can be people, people can be regarded as merely machines, with all that implies for our attitude to using them and switching them on or off.

Bitbucket: Are you actually going to tell us anything about Searle’s views, or is this just a general sermon?

Blandula: Searle’s main counter-stroke against the trend was the celebrated ‘Chinese Room’. It has become the most famous argument in contemporary philosophy; about the only one which people who aren’t interested in philosophy might have heard of. A man is locked up, given a lot of data in Chinese characters, and runs by hand a program which answers questions in Chinese. He can do that easily enough (given time), but since he doesn’t understand Chinese, he doesn’t understand the questions or the answers he’s generating. Since he’s doing exactly what a computer would do, the computer can’t understand either.
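The procedure described above can be caricatured in a few lines of code. This is a deliberately crude sketch, not anything from Searle or the programs he targeted (Schank-style script programs were far more elaborate); the rule book here is just an invented lookup table, and the example strings are invented too. The point it illustrates is only that the operator matches symbol shapes without consulting meaning:

```python
# Toy caricature of the Chinese Room: the "rule book" is a table mapping
# input symbol strings to output symbol strings. The operator applies it
# by pure pattern matching; at no point is any meaning consulted.
# All entries are invented for illustration.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",
    "你叫什么名字？": "我叫王先生。",
}

def chinese_room(question: str) -> str:
    """Return the scripted answer for a question, purely by symbol matching."""
    # A fallback string for unmatched input, so every question gets a reply.
    return RULE_BOOK.get(question, "对不起，我不明白。")

print(chinese_room("你好吗？"))  # → 我很好，谢谢。
```

A real candidate program would of course need far more than a finite table, which is exactly the point Bitbucket presses below: whether our intuitions survive once the rule book is complex enough to sustain open-ended conversation.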

Bitbucket: The trouble with the so-called Chinese Room argument is that it isn’t an argument at all. It’s perfectly open to us to say that the man in the machine understands the Chinese inputs if we want to. There is a perfectly good sense in which a man with a code book understands messages in code.

However, that isn’t the line I take myself. It’s clear to me that the ‘systems’ response, which Searle quotes himself, is the correct diagnosis. The man alone may not understand, but the man plus the program forms a system which does. Now elsewhere, Searle stresses the importance of the first person point of view, but if we apply that here we find he’s hoist with his own petard. What’s the first-person view of whatever entity is answering the questions put to the room? Suppose instead of just asking about the story, we could ask the room about itself: who are you, what can you see? Do you think the answer would be ‘I’m this man trapped in a room manipulating meaningless symbols’? Of course not. To answer questions about the man’s point of view, the program would need to elicit his views in a form he understood, and if it did that it would no longer be plausible that the man didn’t know what was going on. The answers are clearly coming from the system, or in any case from some other entity, not from the man. So it isn’t the man’s understanding which is the issue. Of course the man, without the program, doesn’t understand. In just the same way, nobody claims an unprogrammed computer can understand anything.

But even as a purely persuasive story, I don’t think it works. Searle doesn’t specify how the instructions used by the man in the room work: we just know they do work. But this is important. If the program is simple or random, we probably wouldn’t think any understanding was involved. But if the instructions have a high degree of complexity and appear to be governed by some sophisticated overall principle, we might have a different view. With the details Searle gives, I actually think it’s hard to have any strong intuitions one way or the other.

Blandula: Actually, Searle never claimed it was a logical argument, only a gedankenexperiment. As for the details of how the instructions work, it’s pretty clear in the original version that Searle means the kind of program developed by Roger Schank; but it doesn’t matter much, because it’s equally clear that Searle draws the same conclusion for any possible computer program.

Whatever you think about the story’s persuasiveness, it has in practice been hugely influential. Whether they like it or not (and some of them certainly don’t), all the people in the field of Artificial Intelligence have had to confront it and provide some kind of answer. This in itself represented a radical change; up to that point they had not even had to talk about the sceptical case. The anger of some of the exchanges on this subject is remarkable (it’s fair to say that Searle’s tone in the first place was not exactly emollient) and Searle and Dennett have become the Holmes and Moriarty of the field – which is which depends on your own opinion. At the same time, it’s fair to say that those of a sceptical turn of mind often speak warmly of Searle, even if they don’t precisely agree with him – Edelman, for example, and Colin McGinn. But if the Chinese Room specifically doesn’t work for you, it doesn’t matter that much. In the end, Searle’s point comes down to the contention – surely unarguable – that you can’t get semantics from syntax. Just shuffling symbols around according to formal instructions can never result in any kind of understanding.

Bitbucket: But that is what the whole argument is about! By merely asserting it, you beg the question. If the brain is a machine, it seems obvious to me that mechanical operations must be capable of yielding whatever the brain can yield.

Blandula: Well, let’s try a different tack. The Chinese Room is so famous, it tends to overshadow Searle’s other views, but as you mentioned, he puts great emphasis on the first-person perspective, and regards the problem of qualia as fundamental. In fact, in arguing with Dennett, he has said that it is the problem of consciousness. This is perhaps surprising at first glance, because the Chinese Room and its associated arguments about semantics are clearly to do with meaning, not qualia. But Searle thinks the two are linked. Searle has detailed theories about meaning and intentionality which are arguably far more interesting (and if true, important) than the Chinese Room. It’s difficult to do them justice briefly, but if I understand correctly, he analyses meaning in terms of intentionality (which in philosophy means aboutness), and intentionality is grounded in consciousness. How the consciousness gets added to the picture remains an acknowledged mystery, and actually it’s one of Searle’s virtues that he is quite clear about that. His hunch is that it has something to do with particular biological qualities of the brain, and he sees more scientific research as the way forward.

One of Searle’s main interests is the way certain real and important entities (money, football) exist because someone formally declared that they did, or because we share a common agreement that they do. He thinks meaning is partly like that. The difference between uttering a string of noises and meaning something by them is that in the latter case we perform a kind of implicit declaration in respect of them. In Searle’s terminology, each formula has conditions of satisfaction, the conditions which make it true or false: when we mean it, we add conditions of satisfaction to the conditions of satisfaction. This may sound a bit obscure, but for our purposes Searle’s own terminology is dispensable: the point is that meaning comes from intentions. This is intuitively clear – all it comes down to is that when we mean what we say, we intend to say it.

So where does intentionality, and intentions in particular, come from? The mystery of intentionality – how anything comes to be about anything – is one of the fundamental puzzles of philosophy. Searle stresses the distinction between original and derived intentionality. Derived intentionality is the aboutness of words or pictures – they are about something just because someone meant them to be about something, or interpreted them as being about something: they get their intentionality from what we think about them. Our thoughts themselves, however, don’t depend on any convention, they just are inherently about things. According to Searle, this original intentionality develops out of things like hunger. The basic biochemical processes of the brain somehow give rise to a feeling of hunger, and a feeling of hunger is inherently about food.

Thus, in Searle’s theory, the two basic problems of qualia and meaning are linked. The reason computers can’t do semantics is because semantics is about meaning; meaning derives from original intentionality, and original intentionality derives from feelings – qualia – and computers don’t have any qualia. You may not agree, but this is surely a most comprehensive and plausible theory.

Bitbucket: Except that both qualia and intrinsic intentionality are incoherent myths! How can anything just be inherently about anything? Searle’s account falls apart at several stages. He acknowledges he has no idea how the biochemical processes of the brain give rise to ‘real feelings’ of hunger, and he also has no account of how these real feelings then prompt action. In fact, of course, the biochemical story of hunger does not suddenly stop at some point: it flows on smoothly into the biochemical processes of action, of seeking food and of eating. Nothing in that process is fundamentally mysterious, and if we want to say that a real feeling of hunger is involved in causing us to eat, we must say that it is part of that fully-mechanical, computable, non-mysterious process – otherwise we will be driven into epiphenomenalism.

When you come right down to it, I just do not understand what motivates Searle’s refusal to accept common sense. He agrees that the brain is a machine, he agrees that the answer is ultimately to be found in normal biological processes, and he has a well-developed theory of how social processes can give rise to real and important entities. Why doesn’t he accept that the mind is a product of just those physical and social processes? Why do we need to postulate inherent meaningfulness that doesn’t do any work, and qualia that have no explanation? Why not accept the facts – it’s the system that does the answering in the Chinese Room, and it’s a system that does the answering in our heads!

Blandula: It is not easy for me to imagine how someone who was not in the grip of an ideology would find that idea at all plausible!


  1. michael hall says:

    No one mentions it, but the so-called Chinese Room argument actually goes back to objections (gripes, really) from chess players in the ’60s and ’70s about having to compete against computers in tournaments. The gripe was that “computers cheat”: they do not really play chess. They are like someone seeming to play chess by running off to a chess library to look up the next best move. It is obvious that such a person could play chess at the highest level without knowing how to play chess. So it is not a good answer to the CR argument to say that the person may not know (Chinese, chess), but the person plus the program does. Computer chess is a cheat, the mere illusion of a performance. Computers do not really play chess, the Chinese Room does not really speak Chinese, Watson is not really playing Jeopardy, etc. etc. Forget about how intelligible the performance is; it is still not an intelligent performance.

  2. H.J. Broers says:

    “Searle’s point comes down to the contention (…) that you can’t get syntax from semantics.”

    It is actually the other way around: Searle argued that you cannot get semantics from syntax.

    As a side note, I would add that “syntax” means something different in linguistics than it does in computer programming. Within linguistics, syntax means ‘the rules that make up meaningful sentences’. Here, syntax and semantics name not two empirically separate constituents, but analytically distinguished aspects of a single natural phenomenon: human language. They overlap in various degrees, and with language syntax and semantics manifest themselves in the performance of indivisible communicative acts. Within computer programming syntax does not mean ‘the rules that make up meaningful sentences’, but is defined solely as ‘operations on linear aggregates’ (also called ‘strings’). The concept of a ‘pure’ syntax seems more readily at home in computer programming, referring solely to the manipulation of uninterpreted units as defined within an abstract formal system.
    Despite this difference, in both cases it seems nonsensical to ask whether syntax is sufficient for semantics. In linguistics, both simultaneously shape and constrain meaning in different ways and the terms are deployed as descriptions of just this function. Within computer programming, syntax pertains to purely formal operations that are devoid of meaning. It is certainly true that the data that become syntactic units for the software can be seen as meaning-loaded, but this is done by its mind-endowed users for whom they count as such, as Searle stressed. He seems therefore right in arguing that syntax is not sufficient for semantics.

  3. Peter says:

    Thanks, H.J.: all these years and you’re the first person to point out the reversal!

  4. Bill Walton says:

    There are people who discuss consciousness as emergence, a phase-transition phenomenon.
    This perspective makes it reasonable to expect that we will both understand and construct consciousness.

  5. UVP says:

    @michael hall:

    “So it is not a good answer to the CR argument to say that the person may not know (chinese, chess), but the person plus the program does”

    That’s not really a good summary of the “Systems Reply”. Searle posited a man, a human, sitting in a room, as a parallel for a computer. People responded by pointing out that in the story Searle described – a person receiving messages, consulting a sort of code book to match them with responses, and writing down and sending out the replies – the whole thing is parallel with a computer, all of it, not just the man. It’s a basic flaw in the illustration, and it makes any argument that uses it to say anything about AI or computers equally flawed.

    In fact, in some ways if Searle is conceding by the nature of the story that such a setup could respond fluently to questions in Chinese, he’s conceding that this sort of intelligence can reside in a system (the code book, the man, the inputs and outputs). The systems reply is just pointing that out, really.

    The rest of the flaws have been detailed in the many reams and reams of pages on the topic, which I won’t try to paraphrase here, but it’s all available.

  6. Damocles says:

    Searle proved that the author of the rule book is smart and understands Chinese.
    That’s about all there is to that thought experiment.

    It’s like saying: a programmer writes a simple BASIC program (the rule book).
    Because the computer running the BASIC program is only doing simple stuff,
    the computer can never do more complex things.

  7. Sci says:

    Searle, AFAICT, updated his argument – now he thinks the idea of a brain being a computer is not just false but not even wrong:

  8. Richard Wein says:

    Hi Sci,

    Thanks for the link. I hadn’t seen that paper before. But I see it’s from 1990. The most recent version of the CRA that I’ve come across is this one from 2009:

    One thing I would agree with in the 1990 paper is that we need to think carefully about what we mean by such expressions as “digital computer”, “computational process”, etc. Unfortunately, I think Searle is so misguided in his thinking about this subject that he’s incapable of addressing such questions successfully.

  9. Richard Wein says:

    Hi again,

    I recently wrote a careful refutation of the Chinese Room and Searle’s related arguments. It may be of interest:

  10. cervantes says:

    The Chinese room as a system is unlike the system of the brain in an important respect — it has no body, and no sensoria. Our consciousness is embodied — we perceive the world “out there” and also have proprioception, physical pain and pleasure, and so on. Furthermore we have desires and aversions, which arise from the development of the CNS, which has certain inherent predispositions, in interaction with its environment. The Chinese room therefore lacks most of what is essential to consciousness.

    However, you can certainly give computers all of those missing attributes, at least in principle.
