Picture: Pinocchio. You may have seen that Edge, following its annual custom, posed an evocative question to a selection of intellectuals to mark the new year. This time the question was ‘What have you changed your mind about? Why?’. This attracted a number of responses setting out revised attitudes to artificial consciousness (not all of them revised in the same direction). Roger Schank, notably, now thinks that ‘machines as smart as we are’ are much further off than he once believed, though he thinks Specialised Intelligences – machines with a narrow area of high competence but not much in the way of generalised human-style thinking skill – are probably achievable in the shorter term.

One remark he makes which I found thought-provoking is about expert systems, the approach which enjoyed such a vogue for a while, but ultimately did not deliver as expected. The idea was to elicit the rules which expert humans applied to a particular topic, embody them in a suitable algorithm, and arrive at a machine which understood the subject in question as well as the human expert, but didn’t need years of training and never made mistakes (and incidentally, didn’t need a salary and time off, as those who tried to exploit the idea for real-world business applications noted). Schank contrasts expert systems with human beings: the more rules the system learned, he says, the longer it typically took to reach a decision; but the more humans learn about a topic, the quicker they get.
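Schank's scaling point can be made concrete with a toy sketch. The rule engine below is purely illustrative (the rule names and the diagnosis domain are invented for the example): a naive system that stores rules as condition–conclusion pairs and scans them linearly will take longer to decide as its rule base grows, which is roughly the opposite of a human expert's trajectory.

```python
# Minimal sketch of a naive rule-based "expert system" (illustrative only):
# each rule is a (condition, conclusion) pair, and the engine scans every
# rule in turn, so decision cost grows with the number of rules it "knows".

def build_rules(n):
    """Generate n toy rules mapping a symptom code to a diagnosis code."""
    return [(f"symptom_{i}", f"diagnosis_{i}") for i in range(n)]

def diagnose(rules, observed):
    """Linear scan: test each rule's condition against the observation,
    returning the conclusion and how many rules were checked."""
    checks = 0
    for condition, conclusion in rules:
        checks += 1
        if condition == observed:
            return conclusion, checks
    return None, checks

small = build_rules(10)
large = build_rules(10_000)

# A query near the end of the rule list costs far more checks in the
# larger system -- more "expertise" means a slower decision here.
print(diagnose(small, "symptom_9"))      # → ('diagnosis_9', 10)
print(diagnose(large, "symptom_9999"))   # → ('diagnosis_9999', 10000)
```

Real expert systems used smarter matching than a bare linear scan, but the broad contrast Schank draws survives: adding knowledge tended to add work for the machine, while it seems to remove work for the human.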

Now there are various ways we could explain this difference, but I think Schank is right to see it as a symptom of an underlying issue, namely that human experts don’t really think about problems by applying a set of rules (except when they do, self-consciously): they do something else which we haven’t quite got to the bottom of. This is obviously a problem – as Schank says, how can we imitate what humans are doing when humans don’t know what they are doing when they do it?

Another Edge respondent expressing a more cautious view about the road to true AI is none other than Rodney Brooks. Noting that the preferred metaphor for the brain has always tended to be the most advanced technology of the day – steam engines, telephone exchanges, and now inevitably computers – he expresses doubt about whether computation is everything we have been tempted to believe it might be. Perhaps it isn’t, after all, the ultimate metaphor.

It seems to me that in different ways Schank and Brooks have identified the same underlying problem. There’s some important element of the way the brain works that just doesn’t seem to be computational. But why the hell not? Roger Penrose and others have presented arguments for the non-computability of consciousness, but the problem I always have in this connection is getting an intuitive grasp of what exactly the obstacle to programming consciousness could be. We know, of course, that there are plenty of non-computable problems, but somehow that doesn’t seem to touch our sense that we can ultimately program a computer to do practically anything we like: they’re not universal machines for nothing.

One of John Searle’s popular debating points is that you don’t get wet from a computer simulation of rain. Actually, I’m not sure how far that’s true: if the computer simulation of a tropical shower is controlling the sprinkler system in a greenhouse at Kew Gardens, you might need your umbrella after all. Many of the things AI tries to model, moreover, are not big physical events like rain, but things that can well be accomplished by text or robot output. However, there’s something in the idea that computer programs simulate rather than instantiate mental processes. Intuitively, I think this is because the patterns of causality are necessarily different when a program is involved: I’ve never succeeded in reducing this idea to a rigorous position, but the gist is that in a computer the ‘mental states’ being modelled don’t really cause each other directly; they’re simply the puppets of a script which is really operating elsewhere.

Why should that matter? You could argue that human mental states operate according to a program which is simply implicit in the structure of the brain, rather than being kept separately in some neural register somewhere; but even if we accept that there is a difference, why is it a difference that makes a difference?

I can’t say for sure, but I suspect it has to do with intentionality, meaningfulness. Meaning is one of those things computers can’t really handle, which is why computer translations remain rather poor: to translate properly you have to understand what the text means, not just apply a look-up table of vocabulary. It could be that in order to mean something, your mental states have to be part of an appropriate pattern of causality, which operating according to a script or program will automatically mess up. I would guess further that it has to do with a primitive form of indexicality or pointing which lies at the foundation of intentionality: if your actions aren’t in a direct causal line with your behaviour, you don’t really intend them, and if your perceptions aren’t in a direct causal line with sensory experience, you don’t really feel them. At the moment, I don’t think anyone quite knows what the answer is.
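The look-up-table point is easy to demonstrate. The mini dictionary below is invented for the example; real machine translation systems are vastly more sophisticated, but a pure word-for-word substitution of any size still matches strings rather than meanings, and so cannot disambiguate by context.

```python
# A toy word-for-word "translation" via a look-up table, to illustrate why
# translating without understanding fails. The tiny English-to-French
# lexicon here is an assumption made up for the sketch.

LEXICON = {
    "the": "le", "bank": "banque", "river": "rivière",
    "is": "est", "steep": "raide",
}

def lookup_translate(sentence):
    """Replace each word with its dictionary entry, keeping unknowns as-is."""
    return " ".join(LEXICON.get(w, w) for w in sentence.lower().split())

# "bank" here means the edge of a river ("rive"), but the look-up table
# can only ever emit "banque": wrong sense, and the word order and gender
# agreement of French are ignored entirely.
print(lookup_translate("The river bank is steep"))
# → "le rivière banque est raide"
```

Picking "rive" over "banque" requires knowing what the sentence is about, which is exactly the ingredient a substitution table lacks.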

If that general line of thought is correct, of course, it would be the case that we cannot ever program or build a conscious entity – but we should be able to create circumstances in which consciousness arises or evolves. This would be a blow for Asimovians, since there would be no way of building laws into the minds of future robots: they would be as free and gratuitous as we are, and equally prone to corruption and crime. On the other hand, they would also share our capacity for reform and improvement; our ability, sometimes, to transcend ourselves and turn out better than any programmer could have foreseen – to start something good and unexpected.

Belatedly, Happy New Year!


  1. Gary says:

    I highly suggest reading Hubert Dreyfus’s recent paper entitled “Why Heideggerian AI Failed and how fixing it would require making it more Heideggerian”.


    In it he talks about how current approaches to AI cannot surmount the frame problem because they ignore the “referential totality” that imbues significance onto human experience. He isn’t entirely bleak though, and talks about how Walter Freeman’s neurodynamic approach could possibly accurately account for the phenomenology of finding significance in the world.


  2. Mark says:

    An interesting article. Generally, we just don’t know enough about how we think. I am surprised that the realisation of this hasn’t taught people a little more caution in their predictions – every 10 years or so someone states confidently that we will have intelligent machines within 10 years! I remember them saying this in 1970, because they had managed to create a glass probe that could pick up the activity in a single neuron.

    My intuition is that the major problem with modelling the human mind (or emulating, developing or whatever) is that we are both computational and organic – when we learn tasks, the physical structures in our brains change as well as our store of experience. This, I believe, is at least part of the explanation of why experts conclude faster the more they learn. Having repeated a problem-solving process many times, a combination of optimised neural paths and some kind of pattern recognition (“I have seen a problem like this before”) leads us to the answer without conscious thought.

    Remember, we have spent millions of years evolving problem solving abilities, and most of them operate without our conscious participation. Given that we don’t know how we do it ourselves, how can we hope to teach a computer?

  3. Peter says:

    I’m not quite old enough to remember the 1970 prediction, but I certainly remember many similarly optimistic ones since then!

    Thanks for those links, Gary: I’m sorry your comment got mistakenly quarantined by my anti-spam software.

  4. Christophe Menant says:

    The fact that ‘meaning’ is one of those things that computers can’t really handle is indeed an interesting question in the field of artificial consciousness (and of artificial intelligence). Also, considering that the use of meaningful information is not a human specificity but rather belongs to the organic world opens a horizon for the evolution of meaning. The concept of meaning against an evolutionary background can be an entry point to understanding some aspects of the human mind. And part of the problem with today’s interest in ‘meaning’ is that it is implicitly associated with intentionality or with language, which tends to focus the notion of ‘meaning’ on the human mind. What about animals? What about very simple animals, where no characteristic of the human mind has to be taken into account?
    Trying to understand the concept of meaning for basic living elements can open a path to a bottom-up approach. And analysing the concept of meaning through evolution could probably bring up tools usable for the clarification of some aspects of human nature.
    One step further in this direction is proposed at http://cogprints.org/4531/
    It is also interesting to underline our mental states as part of appropriate patterns of causality. And these patterns are to be considered a very rich and complex outcome of billions of years of evolution.
    Going from the satisfaction of basic vital constraints to dealing with human constraints like ego valorisation deserves some specific in-depth analysis.
    The patterns of causality of our mental states may be a cave-painting of multidimensional networks of chained meanings, with bypasses usable for rapid data processing.
    Complexity is part of today’s science, happily enough.

  5. Gilbert Wesley Purdy says:

    Meaning, as we experience it, is impossible without assigning values to objects and events. Values are arrived at by a combination of emotional and rational/computational factors. Without emotions, everything just is: nothing is more desirable than anything else, and nothing means anything. On the purely rational level, food = energy. On the organic, emotio-rational level, one needs food in order to survive, and survival is highly desirable. Meaning begins to be constructed from just such basic value judgments: all meaning.

    I suspect that this may be in line with what Searle meant in the quote you cite. A computer has no means of acquiring experience that is similar to ours. It can’t think “It is raining and one can catch a cold in the rain and I do not desire to catch a cold.” It can only detect rain and execute one or more program steps, a process in which a programmer has had to supply whatever “meaning” can be attributed to the operation.

  6. Gilbert Wesley Purdy says:

    Comment comment: Aggravating as all get out to have somehow dropped the delimiter for the first italicized word. I didn’t have time to set it right yesterday so I popped over to fix it today. Nice blog, by the bye. Perhaps you’ve noticed that I’ve added it to the side-links at my Online Bibliography.

  7. Peter says:

    Many thanks, Gilbert – the Bibliography is a great resource. I’m sorry about the strange italicisation problem – I don’t know what caused it, but I’ve edited your comments to sort it out to the best of my ability. I hope it doesn’t put you off commenting.

    I think you’re right that Searle’s thinking is very much along the lines you set out. There’s a passage somewhere where he talks about hunger being inherently about food, and that kind of aboutness being at the root of intentionality and meaning.
