Forget AI…

Picture: heraldic whale. … it’s AGI now. I was interested to hear via Robots.net that Artificial General Intelligence had enjoyed a successful second conference recently.

In recent years there seems to have been a general trend in AI research towards narrower and perhaps more realistic goals: towards achieving particular skills and designing particular modules tied to specific tasks rather than confronting the grand problem of consciousness itself. The proponents of AGI feel that this has gone so far that the terms ‘artificial intelligence’ and ‘AI’ no longer really designate the topic they’re interested in, the topic of real thinking machines. ‘An AI’ these days is more likely to refer to the bits of code which direct the hostile goons in a first-person shooter game than to anything with aspirations to real awareness, or even real intelligence.

The mention of ‘real intelligence’, of course, reminds us that plenty of other terms have been knocked out of shape over the years in this field. It is an old complaint from AI sceptics that roboteers keep grabbing items of psychological vocabulary and redefining them as something simpler and more computable. The claim that machines can learn, for example, remains controversial to some, who insist that real learning involves understanding, while others don’t see how else you would describe the behaviour of a machine that gathers data and modifies its own behaviour as a result.

I think there is a kind of continuum here, from claims it seems hard to reject to those it seems bonkers to accept, rather like this…

Claim: machines add numbers. Objection: really the ‘numbers’ are a human interpretation of meaningless switching operations.

Claim: machines control factory machines. Objection: control implies foresight and intentions, whereas machines just follow a set of instructions.

Claim: machines play chess. Objection: playing a game involves expectations and social interaction, which machines don’t really have.

Claim: machines hold conversations. Objection: chat-bots merely reshuffle set phrases to give the impression of understanding.

Claim: machines react emotionally. Objection: there may be machines that display smiley faces or even operate in different ‘emotional’ modes, but none of that touches the real business of emotions.

Readers will probably find it easy to improve on this list, but you get the gist. Although there’s something in even the first objection, it seems pointless to me to deny that machines can do addition – and equally pointless to claim that any existing machine experiences emotions – although I don’t rule even that idea out of consideration forever.
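
By way of illustration of that first claim and objection, here is a minimal sketch (my own, in Python, with made-up function names) of addition done purely as switching: a little ripple-carry adder built from boolean operations. Nothing in the code ‘knows’ it is adding; reading the bit patterns as numbers is entirely our interpretation.

```python
# A toy ripple-carry adder: 'addition' as nothing but boolean switching.
# Treating the bit patterns as numbers is a human interpretation of the output.

def full_adder(a: bool, b: bool, carry: bool):
    """One column of binary addition, expressed as gate operations (XOR, AND, OR)."""
    s = a ^ b ^ carry
    carry_out = (a and b) or (carry and (a ^ b))
    return s, carry_out

def add_bits(x_bits, y_bits):
    """Add two little-endian lists of bools by rippling the carry along."""
    carry = False
    out = []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)
    return out

def to_bits(n, width):
    return [bool((n >> i) & 1) for i in range(width)]

def from_bits(bits):
    return sum(1 << i for i, b in enumerate(bits) if b)

# 13 + 6: the machine merely flips switches; '19' is our reading of the result.
print(from_bits(add_bits(to_bits(13, 8), to_bits(6, 8))))  # -> 19
```

Whether that amounts to ‘really’ adding is, of course, exactly the question at issue.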

I think the most natural reaction is to conclude that in all such cases, but especially in the middling ones, there are two different senses – there’s playing chess and really playing chess. What annoys the sceptics is their perception that AIers have often stolen terms for the easy computable sense when the normal reading is the difficult one laden with understanding, intentionality and affect.

But is this phenomenon not simply an example of the redefinition of terms which science has always introduced? We no longer call whales fish, because biologists decided it made sense to make fish and mammals exclusive categories – although people had been calling whales fish on and off for a long time before that. Aren’t the sceptics on this like diehard whalefishers? Hey, they say, you claimed to be elucidating the nature of fish, but all you’ve done is make it easy for yourself by making the word apply just to piscine fish, the easy ones to deal with. The difficult problem of elucidating the deeper fishiness remains untouched!

The analogy is debatable, but it could be claimed that redefinitions of ‘intelligence’ and ‘learning’ have actually helped to clarify important distinctions in broadly the way that excluding the whales helped with biological taxonomy. However, I think it’s hard to deny that there has also at times been a certain dilution going on. This kind of thing is not unique to consciousness – look what happened to ‘virtual reality’, which started out as quite a demanding concept, and was soon being used as a marketing term for any program with slight pretensions to 3D graphics.

Anyway, given all that background it would be understandable if the sceptical camp took some pleasure in the idea that the AI people have finally been hoist with their own petard, and that just as the sceptics, over the years, have been forced to talk about ‘real intelligence’ and ‘human-level awareness’, the robot builders now have to talk about ‘artificial general intelligence’.

But you can’t help warming to people who want to take on the big challenge. It was the bold advent of the original AI project which really brought consciousness back on to the agenda of all the other disciplines, and the challenge of computer thought which injected a new burst of creative energy into the philosophy of mind, to take just one example. I think even the sceptics might tacitly feel that things would be a little quiet without the ‘rude mechanicals’: if AGI means they’re back and spoiling for a fight, who could forbear to cheer?

Cryptic Consciousness

Picture: cryptic entity. I was thinking about the New Mysterian position the other day, and it occurred to me that there are some scary implications which I, at any rate, had never noticed before.

As you may know, the New Mysterian position, cogently set out by Colin McGinn, is that our minds may simply not be equipped to understand consciousness. Not because it is in itself magic or inexplicable, but because our brains just don’t work in the necessary way. We suffer from cognitive closure. Closure here means that we have a limited repertoire of mental operations; using them in sequence or combination will take us to all sorts of conceptual places, but only within a certain closed domain. Outside that domain there are perfectly valid and straightforward ideas which we can simply never reach, and unfortunately one or more of these unreachable ideas is required in order to understand consciousness.
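
As a loose analogy (mine, not McGinn’s), closure in this sense can be pictured as the closure of a set under a fixed repertoire of operations: starting from some given items and applying the available operations over and over, you reach a great deal, but some perfectly good elements of the wider space remain unreachable however long you go on. A minimal sketch:

```python
# A loose analogy for cognitive closure: start from a few 'ideas' and apply a
# fixed repertoire of 'operations' repeatedly. Whatever cannot be reached that
# way lies outside the closed domain forever.

def closure(start, operations, universe):
    reachable = set(start)
    frontier = set(start)
    while frontier:
        new = set()
        for x in frontier:
            for op in operations:
                y = op(x)
                if y in universe and y not in reachable:
                    new.add(y)
        reachable |= new
        frontier = new
    return reachable

universe = set(range(100))
repertoire = [lambda n: n + 2, lambda n: n * 2]   # the only 'mental operations' available

domain = closure({0}, repertoire, universe)
print(sorted(domain)[:10])   # 0, 2, 4, 6, 8, ... (only the even numbers)
print(7 in domain)           # False: 7 is a perfectly ordinary number, but no
                             # sequence of our operations ever reaches it
```

On the Mysterian view, the concept needed to understand consciousness would be one of the ‘sevens’.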

I don’t think that is actually the case, but the possibility is undeniable; I must admit that personally there’s a certain element of optimistic faith involved in my rejection of Mysterianism. I just don’t want to give up on the possibility of a satisfying answer.

Anyway, suppose we do suffer from some form of cognitive closure (it’s certainly true that human minds have their limitations, notably in the complexity of the ideas they can entertain at any one time). One implication is that we can conceive of a being whose repertoire of mental operations might be different from ours. It could be a god-like being which understands everything we understand, and other things besides; its mental domain might be an overlapping territory, not very different in extent from ours; or it might deal in a set of ideas all of which are inaccessible to us, and find ours equally unthinkable.

That conclusion in itself is no more than a somewhat frustrating footnote to Mysterianism; but there’s worse to follow. It seems just about inevitable to me that if we encountered a being with the last-mentioned, fully cryptic kind of consciousness, we would not recognise it. We wouldn’t realise it was conscious: we probably wouldn’t recognise that it was an agent of any kind. We recognise consciousness and intelligence in others because we can infer the drift of their thoughts from their speech and behaviour and recognise their cogency. In the case of the cryptics, their behaviour would be so incomprehensible that we wouldn’t even recognise it as behaviour.

So it could be that now and then in AI labs a spark of cryptic consciousness has flashed and died without ever being noticed. This is not quite as bonkers as it sounds. It is apparently the case that when computers were used to apply a brute-force, exhaustive search to a number of end-game positions in chess, several positions which had long been accepted as draws turned out to have winning strategies (if anyone can provide more details of this research I’d be grateful – my only source is the Oxford Companion to the Mind). The strategies proved incomprehensible to human eyes, even those of expert chess players; the computer appeared merely to bimble about with its pieces in a purposeless way until unexpectedly, after a prolonged series of moves, checkmate emerged. Let’s suppose a chess-playing program had independently come up with such a strategy and begun playing it. Long before checkmate emerged – or perhaps even afterwards – the human in charge would have lost patience with the endless bimbling (in a position known to be a draw, after all), and withdrawn the program for retraining or recoding.
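
The chess research itself is beyond a blog post, but the flavour of brute-force, exhaustive game analysis is easy to convey with a toy stand-in (a made-up subtraction game, not chess, and entirely my own illustration): the solver simply evaluates every reachable position to the very end, and its ‘strategy’ is nothing but the table that results, with no human-readable reason attached to any move.

```python
from functools import lru_cache

# Toy stand-in for exhaustive end-game analysis: a subtraction game, not chess.
# From a pile of n counters a player removes 1, 3 or 4; whoever cannot move loses.
# The solver searches every position to the end and simply tabulates the outcome.

MOVES = (1, 3, 4)

@lru_cache(maxsize=None)
def is_win(n: int) -> bool:
    """True if the player to move can force a win from a pile of n counters."""
    return any(n >= m and not is_win(n - m) for m in MOVES)

def best_move(n: int):
    """A winning move if one exists, else None (the position is lost)."""
    for m in MOVES:
        if n >= m and not is_win(n - m):
            return m
    return None

for n in range(1, 15):
    print(n, "win" if is_win(n) else "loss", best_move(n))
```

Scaled up to real chess end-games, the same exhaustive approach produces play that looks like purposeless bimbling right up until the win appears.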

Perhaps, for that matter, there are cryptically conscious entities on other planets elsewhere in the Galaxy. The idea that aliens might be incomprehensible is not new, but here there is a reasonable counter-argument. All forms of life, presumably, are going to be the product of a struggle for survival similar to the one which produced us on Earth. Any entity which has come through millions of years of such a struggle is going to have to have acquired certain key cognitive abilities, and these at least will surely be held in common. Certain basic categories of thought and communication are surely going to be recognisable; threats, invitations, requests, and the like are surely indispensable to the conduct of any reasonably complex life, and so even if there are differences at the margins, there will be a basis for communication. We may or may not have a set of cognitive tools which address a closed domain – but we certainly haven’t got a random selection of cognitive tools.

That’s a convincing argument, though not totally conclusive. On Earth, the basic body plans of most animal groups were determined long ago; a few good basic designs triumphed and most animals alive today are variations on one of these themes. But it’s possible that if things had been different we might have emerged with a somewhat different set of basic blueprints. Perhaps there are completely different designs on other planets; perhaps there are phyla full of animals with wheels, say. Hard to be sure how likely that is, because we only have one planet to go on. But at any rate, if that much is true of body plans, the same is likely to be true of Earthbound minds; a few basic mental architectures that seemed to work got established way back in history, and everything since is a variation. But perhaps radically different mental set-ups would have worked equally well in ways we can’t even imagine, and perhaps on other worlds, they do.

The same counter-argument doesn’t apply to artificial intelligence, of course, since an AI generally does not have to be the result of thousands of generations of evolution and can jump to positions which can’t be reached by any coherent evolutionary path.

Common sense tells us that the whole idea of cryptic consciousness is more of a speculative possibility than a serious hypothesis about reality – but I see no easy way to rule it out. Never mind the AI labs and the alien planets; it’s possible in principle that the animists are right and that we’re surrounded every day by conscious entities we simply don’t recognise…

Where do I begin?

Picture: Babybot. I was interested recently to read about Babybot, a research robot intended to model some of the characteristics of a two-year-old child. Babybot reminded me slightly of Steve Grand’s Lucy without her mask (there seems to be a consensus in engineering circles that for consciousness you only need one arm and no legs). A bad omen, I’m afraid: poor Lucy has apparently been gathering cobwebs for a while now.

The thinking behind Babybot is based on a process model of consciousness, which sounds interesting, but my impression is that the researchers have spent more time on the technological challenges of the sensorimotor apparatus than on the philosophical issues (quite reasonably, no doubt).

It wasn’t so much that which interested me, though (and provoked a largely unrelated chain of thought), as the idea that you need to produce a baby’s consciousness before moving on to the adult version. As a practical research strategy, this has some obvious appeal – infant movements and senses provide a slightly easier challenge and may yield insights into the developmental process. But could it be that there is actually a stronger constraint here – that consciousness cannot be generated full-blown, but has to go through embryonic and infantile forms? Alan Turing certainly implied that this was a possibility in his famous paper of 1950, albeit in a tone which characteristically mingled the frivolous with the profound (“It will not, for instance, be provided with legs, so that it could not be asked to go out and fill the coal scuttle.”), and even said that he had conducted some experiments on a child machine.

The emergence of consciousness in human beings is itself an unclear and controversial matter, of course. We can feel pretty sure that a newly-formed zygote lacks consciousness (perhaps I ought to specify human-style consciousness, for those of a panpsychist leaning); we can feel reasonably sure that a two-year-old has consciousness, though perhaps without the refined self-awareness of an adult. But we don’t know exactly when consciousness dawns, and we don’t know whether it is like switching on the light or something much more gradual, passing through a series of partly-conscious states (whatever those might be). I think we tend to assume that the arrival of consciousness could be sudden, even if it isn’t: that in principle we could construct an artificial consciousness in any arbitrary state x, where x might correspond with, say, thinking about the cup of tea you’re going to have when you get home, or trying to remember what your brother gave you for your twelfth birthday.


But not all states of affairs are programmable, and it might be that conscious mental states are not. Even machines sometimes need to be constructed in a particular sequence, so that state z, the finished product, can only be reached through a suitable series of earlier states. When in operation, some machines also have states which are constrained by sequences of previous states. Analogue clocks provide a simple example: you can’t get the hands to a reading of tea-time without passing through readings which correspond to adjacent times. I once saw an orrery, which showed not only the date and year, but the position of the planets at any given time. Most clocks allow the hands to be disengaged from the mechanism and turned quickly to any time – a good enough practical approximation to being able to set times arbitrarily: but in this one the mechanism did not allow the planetary ‘hands’ to be wound forward quickly. If the clock ever stopped, the only way to reset it was effectively to take it apart and reconstruct it in a later date configuration which you had worked out separately.
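
A crude software analogue of that orrery (my own illustration, nothing to do with the Babybot project) is a mechanism whose only public operation is to tick forward one step, so that any given configuration can be reached only by passing through every configuration before it:

```python
# A toy 'orrery': the only way to change its state is to advance it one tick.
# No operation is offered for jumping straight to an arbitrary configuration;
# a given state can only be reached via the whole sequence that precedes it.

class Orrery:
    def __init__(self):
        self._tick = 0                      # internal; no setter is provided

    def advance(self):
        self._tick += 1

    @property
    def reading(self):
        # Planetary 'positions' derived from the tick count (periods invented
        # purely for illustration).
        return {"mercury": self._tick % 88,
                "earth":   self._tick % 365,
                "mars":    self._tick % 687}

clock = Orrery()
for _ in range(1000):                       # to 'set' it, you must turn it 1000 times
    clock.advance()
print(clock.reading)
```

Of course, in software nothing really stops you reaching in and assigning clock._tick = 1000 directly, which is just the point about computers taken up below.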


What if conscious states were like a much more complex version of this? What if you could only get to the state of thinking about tea-time through an appropriate series of earlier states? It might be that our whole conscious life is made up of a kind of rope of these threads of relevant states, stretching all the way back to the inscrutable autopoietic event in which our consciousness appeared out of nothing. If that were so, it might account for our sense of being responsible for our own actions: while the causes acting on inanimate objects are simply those that happen to be around in the environment at the time, a dominant factor in our own behaviour would be the self-contained stream of causality running along in our heads.

Moreover, an artificial intelligence would indeed have to start life in the same kind of unready and undefined state as a new baby, and generate itself as it went along. It would also follow that a computer was a uniquely unsuitable machine for supporting such an entity. In principle a computer can go directly into any state: if you want to introduce a rule that state B follows state A, you have to do it through the program; so although a computer might be capable of exhibiting the right sequence of states which occur in thinking about tea (supposing those could be defined), the causal relationships between those states would actually be indirect. All you would get is a simulation, analogous to the simulation of motion provided by the rapid sequence of frames in a film.
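
The film analogy can at least be sketched (again my own toy example, not anyone’s theory of mind): two loops produce exactly the same sequence of states, but in the first each state is computed from its predecessor, while in the second the states are simply read out of storage in order, each following the last only in the way one frame of a film follows another.

```python
# Two ways of running through the very same sequence of states.

def successor(state: int) -> int:
    """Each state is computed from the one before it."""
    return state + 1

# 1) Causally connected: state n+1 really is produced from state n.
state = 0
generated = []
for _ in range(5):
    state = successor(state)
    generated.append(state)

# 2) Replayed: the identical states, merely read out of a stored list in order,
#    like frames in a film; no state is produced by its predecessor.
stored = [1, 2, 3, 4, 5]

print(generated == stored)   # True: from outside, the two are indistinguishable
```

Which of the two a conscious mind would have to be like is, of course, the whole question.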


What about sleep? It seems a pretty good piece of evidence that human beings can indeed switch off and then resume when the right time comes. It might be hard to imagine coming into existence already thinking about a cup of tea; but awakening and starting at once to think about it is a thoroughly ordinary experience. It may be that some kind of mental activity continues even in sleep (and a theory of continuity might provide a new rationale for dreams); but people also come out of a coma, or dreamless unconsciousness. Unless we want to say that these are new people who merely inhabit the bodies and memories of their predecessors, it seems there is a difficulty.

Perhaps, in response, we could argue that beliefs persist even in sleep and coma. I may not think about anything while unconscious, but in some sense I go on believing that the Earth goes round the Sun, and not vice versa. Worryingly, in a similar sense beliefs continue even after death: does Luther still believe in God (discounting, for the sake of argument, the possibility of his surviving in a better place)? It seems odd to say so, but he certainly hasn’t become an atheist in the last few centuries. Personally, however, I don’t much like that line of argument, which seems to make our continuity both too absolute and too abstract at the same time. I would rather say that our continuity is not essentially disturbed if some of the states in the sequence persist, recorded in our memories and otherwise, through periods of inactivity.

Ultimately, though, the beginnings of consciousness remain as frustratingly unclear as most of its other aspects.