You may have seen that Edge, following its annual custom, posed an evocative question to a selection of intellectuals to mark the new year. This time the question was ‘What have you changed your mind about? Why?’. This attracted a number of responses setting out revised attitudes to artificial consciousness (not all of them revised in the same direction). Roger Schank, notably, now thinks that ‘machines as smart as we are’ are much further off than he once believed, though he thinks Specialised Intelligences – machines with a narrow area of high competence but not much in the way of generalised human-style thinking skill – are probably achievable in the shorter term.
One remark he makes which I found thought-provoking is about expert systems, the approach which enjoyed such a vogue for a while, but ultimately did not deliver as expected. The idea was to elicit the rules which expert humans applied to a particular topic, embody them in a suitable algorithm, and arrive at a machine which understood the subject in question as well as the human expert, but didn’t need years of training and never made mistakes (and incidentally, didn’t need a salary and time off, as those who tried to exploit the idea for real-world business applications noted). Schank contrasts expert systems with human beings: the more rules the system learned, he says, the longer it typically took to reach a decision; but the more humans learn about a topic, the quicker they get.
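Schank's observation about slowdown can be made concrete with a toy sketch. The rules and facts below are invented purely for illustration, but the structural point is real: a naive forward-chaining rule matcher rescans its whole rule base on every cycle, so the more rules it accumulates, the longer each decision takes.

```python
# A toy forward-chaining rule matcher, sketched to illustrate why naive
# rule-based expert systems slow down as rules accumulate: each cycle
# makes a linear scan of the entire rule base. Rule names are invented.

def infer(facts, rules):
    """Apply rules until no new facts emerge; cost grows with len(rules)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:   # scan every rule, every cycle
            if conclusion not in facts and conditions <= facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"fever", "cough"}, "flu-suspected"),
    ({"flu-suspected", "short-of-breath"}, "see-doctor"),
]
print(infer({"fever", "cough", "short-of-breath"}, rules))
```

Human experts, by contrast, seem to get faster as they learn more, which is precisely why Schank doubts they are running anything like this loop.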
Now there are various ways we could explain this difference, but I think Schank is right to see it as a symptom of an underlying issue, namely that human experts don’t really think about problems by applying a set of rules (except when they do, self-consciously): they do something else which we haven’t quite got to the bottom of. This is obviously a problem – as Schank says, how can we imitate what humans are doing when humans don’t know what they are doing when they do it?
Another Edge respondent expressing a more cautious view about the road to true AI is none other than Rodney Brooks. Noting that the preferred metaphor for the brain has always tended to be the most advanced technology of the day – steam engines, telephone exchanges, and now inevitably computers – he expresses doubt about whether computation is everything we have been tempted to believe it might be. Perhaps it isn’t, after all, the ultimate metaphor.
It seems to me that in different ways Schank and Brooks have identified the same underlying problem. There’s some important element of the way the brain works that just doesn’t seem to be computational. But why the hell not? Roger Penrose and others have presented arguments for the non-computability of consciousness, but the problem I always have in this connection is getting an intuitive grasp of what exactly the obstacle to programming consciousness could be. We know, of course, that there are plenty of non-computable problems, but somehow that doesn’t seem to touch our sense that we can ultimately program a computer to do practically anything we like: they’re not universal machines for nothing.
One of John Searle’s popular debating points is that you don’t get wet from a computer simulation of rain. Actually, I’m not sure how far that’s true: if the computer simulation of a tropical shower is controlling the sprinkler system in a greenhouse at Kew Gardens, you might need your umbrella after all. Many of the things AI tries to model, moreover, are not big physical events like rain, but things that can well be accomplished by text or robot output. However, there’s something in the idea that computer programs simulate rather than instantiate mental processes. Intuitively, I think this is because the patterns of causality are necessarily different when a program is involved: I’ve never succeeded in reducing this idea to a rigorous position, but the gist is that in a computer the ‘mental states’ being modelled don’t really cause each other directly; they’re simply the puppets of a script which is really operating elsewhere.
Why should that matter? You could argue that human mental states operate according to a program which is simply implicit in the structure of the brain, rather than being kept separately in some neural register somewhere; but even if we accept that there is a difference, why is it a difference that makes a difference?
I can’t say for sure, but I suspect it has to do with intentionality, meaningfulness. Meaning is one of those things computers can’t really handle, which is why computer translations remain rather poor: to translate properly you have to understand what the text means, not just apply a look-up table of vocabulary. It could be that in order to mean something, your mental states have to be part of an appropriate pattern of causality, which operating according to a script or program will automatically mess up. I would guess further that it has to do with a primitive form of indexicality or pointing which lies at the foundation of intentionality: if your actions aren’t in a direct causal line with your intentions, you don’t really intend them, and if your perceptions aren’t in a direct causal line with sensory experience, you don’t really feel them. At the moment, I don’t think anyone quite knows what the answer is.
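The look-up-table point is easy to demonstrate with a sketch. The tiny French–English table below is illustrative, not a real lexicon, but it shows the failure mode: word-by-word substitution has no access to what the sentence means.

```python
# A word-by-word look-up "translation", sketched to show why vocabulary
# substitution alone misses meaning. The miniature lexicon is invented
# for illustration.

lexicon = {"le": "the", "temps": "time", "il": "it",
           "fait": "makes", "beau": "beautiful"}

def lookup_translate(sentence):
    # Substitute each word; leave unknown words untouched.
    return " ".join(lexicon.get(w, w) for w in sentence.lower().split())

print(lookup_translate("Il fait beau"))
# prints "it makes beautiful", whereas the idiom actually means
# "the weather is nice": no table entry can supply that.
```

Understanding the idiom requires knowing what the words are being used to say, which is exactly the intentionality the table lacks.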
If that general line of thought is correct, of course, it would be the case that we cannot ever program or build a conscious entity – but we should be able to create circumstances in which consciousness arises or evolves. This would be a blow for Asimovians, since there would be no way of building laws into the minds of future robots: they would be as free and gratuitous as we are, and equally prone to corruption and crime. On the other hand, they would also share our capacity for reform and improvement; our ability, sometimes, to transcend ourselves and turn out better than any programmer could have foreseen – to start something good and unexpected.
Belatedly, Happy New Year!