I was interested recently to read about Babybot, a research robot intended to model some of the characteristics of a two-year old child. Babybot reminded me slightly of Steve Grand’s Lucy without her mask (there seems to be a consensus in engineering circles that for consciousness you only need one arm and no legs). A bad omen, I’m afraid: poor Lucy has apparently been gathering cobwebs for a while now.
The thinking behind Babybot is based on a process model of consciousness, which sounds interesting, but my impression is that the researchers have spent more time on the technological challenges of the sensorimotor apparatus than on the philosophical issues (quite reasonably, no doubt).
It wasn’t so much that that interested me, though (and provoked a largely unrelated chain of thought), as the idea that you needed to produce a baby’s consciousness before moving on to the adult version. As a practical research strategy, this has some obvious appeal – infant movements and senses provide a slightly easier challenge and may yield insights into the developmental process. But could it be that there is actually a stronger constraint here – that consciousness cannot be generated full-blown, but has to go through embryonic and infantile forms? Alan Turing certainly implied that this was a possibility in his famous paper of 1950, albeit in a tone which characteristically mingled the frivolous with the profound (“It will not, for instance, be provided with legs, so that it could not be asked to go out and fill the coal scuttle.”), and even said that he had conducted some experiments on a child machine.
The emergence of consciousness in human beings is itself an unclear and controversial matter, of course. We can feel pretty sure that a newly-formed zygote lacks consciousness (perhaps I ought to specify human-style consciousness, for those of a panpsychist leaning); we can feel reasonably sure that a two-year-old has consciousness, though perhaps without the refined self-awareness of an adult. But we don’t know exactly when consciousness dawns, and we don’t know whether it is like switching on the light or something much more gradual, passing through a series of partly-conscious states (whatever those might be). I think we tend to assume that the arrival of consciousness could be sudden, even if it isn’t: that in principle we could construct an artificial consciousness in any arbitrary state x, where x might correspond with say, thinking about the cup of tea you’re going to have when you get home, or trying to remember what your brother gave you for your twelfth birthday.
But not all states of affairs are programmable, and it might be that conscious mental states are not. Even machines sometimes need to be constructed in a particular sequence, so that state z, the finished product, can only be reached through a suitable series of earlier states. When in operation, some machines also have states which are constrained by sequences of previous states. Analogue clocks provide a simple example: you can’t get the hands to a reading of tea-time without passing through readings which correspond to adjacent times. I once saw a clock with an orrery, which showed not only the date and year but also the position of the planets at any given time. Most clocks allow the hands to be disengaged from the mechanism and turned quickly to any time – a good enough practical approximation to being able to set times arbitrarily – but in this one the mechanism did not allow the planetary ‘hands’ to be wound forward quickly. If the clock ever stopped, the only way to reset it was effectively to take it apart and reconstruct it in a later-date configuration which you had worked out separately.
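To make the constraint concrete, here is a minimal sketch in Python – a toy of my own devising, not anything from the Babybot research. The clock object, like the orrery, deliberately has no way of being set directly: its only route to any reading is through every intermediate reading, one tick at a time.

```python
class SequentialClock:
    """A toy mechanism whose state can only be reached by stepping
    through every intermediate state, like the orrery described above."""

    def __init__(self):
        self.minute = 0  # the initial configuration it was built in

    def tick(self):
        # The only way the state ever changes: one step at a time.
        self.minute = (self.minute + 1) % (24 * 60)

    # Deliberately, there is no set_time() method. To 'reset' this clock
    # you would have to rebuild it, as with the orrery that could not be
    # wound forward.


def advance_to(clock, target_minute):
    """Reach a given reading the only way possible: via every adjacent one."""
    steps = 0
    while clock.minute != target_minute:
        clock.tick()
        steps += 1
    return steps
```

Getting to tea-time at 16:00 (minute 960) from the starting configuration therefore costs exactly 960 ticks; there is no shortcut, because the shortcut is simply not part of the mechanism.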
What if conscious states were like a much more complex version of this? What if you could only get to the state of thinking about tea-time through an appropriate series of earlier states? It might be that our whole conscious life is made up of a kind of rope of these threads of relevant states, stretching all the way back to the inscrutable autopoietic event in which our consciousness appeared out of nothing. If that were so, it might account for our sense of being responsible for our own actions: while the causes acting on inanimate objects are simply those that happen to be around in the environment at the time, a dominant factor in our own behaviour would be the self-contained stream of causality running along in our heads.
Moreover, an artificial intelligence would indeed have to start life in the same kind of unready and undefined state as a new baby, and generate itself as it went along. It would also follow that a computer was a uniquely unsuitable machine for supporting such an entity. In principle a computer can go directly into any state: if you want to introduce a rule that state B follows state A, you have to do it through the program. So although a computer might be capable of exhibiting the right sequence of states which occur in thinking about tea (supposing those could be defined), the causal relationships between those states would actually be indirect. All you would get is a simulation, analogous to the simulation of motion provided by the rapid sequence of frames in a film.
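The film-frames point can be sketched the same way (again a toy illustration of my own, not a claim about any real system): two functions that display identical sequences of ‘clock states’, one by genuine succession, where each state is produced from its predecessor, the other by computing each ‘frame’ directly, with no state causing the next.

```python
def mechanism(n):
    """Genuine succession: state n is produced from state n-1."""
    state = 0
    for _ in range(n):
        state = state + 1  # each state causally depends on the previous one
    return state


def playback(n):
    """Film-style simulation: each 'frame' is computed directly.
    The states appear in the right order, but none generates the next."""
    return n


# From the outside, the two are indistinguishable:
assert mechanism(5) == playback(5)
```

The observable outputs match perfectly, which is exactly the worry: watching the sequence of states tells you nothing about whether each one caused its successor or was merely displayed after it.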
What about sleep? It seems a pretty good piece of evidence that human beings can indeed switch off and then resume when the right time comes. It might be hard to imagine coming into existence already thinking about a cup of tea; but awakening and starting at once to think about it is a thoroughly ordinary experience. It may be that some kind of mental activity continues even in sleep (and a theory of continuity might provide a new rationale for dreams); but people also come out of a coma, or dreamless unconsciousness. Unless we want to say that these are new people who merely inhabit the bodies and memories of their predecessors, it seems there is a difficulty.
Perhaps, in response, we could argue that beliefs persist even in sleep and coma. I may not think about anything while unconscious, but in some sense I go on believing that the Earth goes round the Sun, and not vice versa. Worryingly, in a similar sense beliefs continue even after death: does Luther still believe in God (discounting, for the sake of argument, the possibility of his surviving in a better place)? It seems odd to say so, but he certainly hasn’t become an atheist in the last few centuries. Personally, however, I don’t much like that line of argument, which seems to make our continuity both too absolute and too abstract at the same time. I would rather say that our continuity is not essentially disturbed if some of the states in the sequence persist, recorded in our memories and otherwise, through periods of inactivity.
Ultimately, though, the beginnings of consciousness remain as frustratingly unclear as most of its other aspects.