Cryptic Consciousness

Picture: cryptic entity. I was thinking about the New Mysterian position the other day, and it occurred to me that there are some scary implications which I, at any rate, had never noticed before.

As you may know, the New Mysterian position, cogently set out by Colin McGinn, is that our minds may simply not be equipped to understand consciousness. Not because it is in itself magic or inexplicable, but because our brains just don’t work in the necessary way. We suffer from cognitive closure. Closure here means that we have a limited repertoire of mental operations; using them in sequence or combination will take us to all sorts of conceptual places, but only within a certain closed domain. Outside that domain there are perfectly valid and straightforward ideas which we can simply never reach, and unfortunately one or more of these unreachable ideas is required in order to understand consciousness.
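Closure in roughly this sense is easy to illustrate outside psychology. The toy Python sketch below (my own analogy, not McGinn's) starts from a seed and repeatedly applies a fixed repertoire of operations; anything not reachable that way simply lies outside the closed domain, however ordinary it may be in itself.

```python
def closure(seed, operations, steps=6):
    """Collect everything reachable from the seed set by applying a fixed
    repertoire of operations, up to the given number of steps."""
    reached = set(seed)
    for _ in range(steps):
        reached |= {op(x) for x in reached for op in operations}
    return reached

# A 'mind' whose only operations are doubling and adding two, starting from 2:
ops = [lambda x: x * 2, lambda x: x + 2]
domain = closure({2}, ops)

print(sorted(domain)[:10])  # [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
print(7 in domain)          # False: 7 is perfectly ordinary, but no sequence
                            # of these operations ever reaches it
```

A mind equipped only with those two operations would never arrive at 7, and – the uncomfortable part – would have no way of noticing the gap.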

I don’t think that is actually the case, but the possibility is undeniable; I must admit that personally there’s a certain element of optimistic faith involved in my rejection of Mysterianism. I just don’t want to give up on the possibility of a satisfying answer.

Anyway, suppose we do suffer from some form of cognitive closure (it’s certainly true that human minds have their limitations, notably in the complexity of the ideas they can entertain at any one time). One implication is that we can conceive of a being whose repertoire of mental operations might be different from ours. It could be a god-like being which understands everything we understand, and other things besides; its mental domain might be an overlapping territory, not very different in extent from ours; or it might deal in a set of ideas all of which are inaccessible to us, and find ours equally unthinkable.

That conclusion in itself is no more than a somewhat frustrating footnote to Mysterianism; but there's worse to follow. It seems just about inevitable to me that if we encountered a being with the last-mentioned, fully cryptic kind of consciousness, we would not recognise it. We wouldn't realise it was conscious: we probably wouldn't recognise that it was an agent of any kind. We recognise consciousness and intelligence in others because we can infer the drift of their thoughts from their speech and behaviour and recognise their cogency. In the case of the cryptics, their behaviour would be so incomprehensible that we wouldn't even recognise it as behaviour.

So it could be that now and then in AI labs a spark of cryptic consciousness has flashed and died without ever being noticed. This is not quite as bonkers as it sounds. It is apparently the case that when computers were used to run a brute-force, exhaustive search of a number of end-game positions in chess, several positions which had long been accepted as draws turned out to have winning strategies (if anyone can provide more details of this research I'd be grateful – my only source is the Oxford Companion to the Mind). The strategies proved incomprehensible to human eyes, even those of expert chess players; the computer appeared merely to bimble about with its pieces in a purposeless way until, unexpectedly, after a prolonged series of moves, checkmate emerged. Let's suppose a chess-playing program had independently come up with such a strategy and begun playing it. Long before checkmate emerged – or perhaps even afterwards – the human in charge would have lost patience with the endless bimbling (in a position known to be a draw, after all), and withdrawn the program for retraining or recoding.
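Exhaustive analysis of this kind is typically done by working over the whole space of positions at once rather than by searching forwards from a single one. Purely as an illustration – this is a generic Python sketch over an abstract game graph, not a chess engine, and not necessarily the method used in the research mentioned above – the core idea looks something like this:

```python
def solve_game(positions, moves, is_checkmate):
    """Exhaustively label every position WIN, LOSS or DRAW for the side to move.

    positions:    iterable of hashable position states
    moves:        dict mapping each position to the positions reachable in one move
    is_checkmate: predicate true when the side to move is already mated
    """
    value = {}
    for p in positions:
        if is_checkmate(p):
            value[p] = "LOSS"    # the side to move is mated
        elif not moves[p]:
            value[p] = "DRAW"    # no moves and not mated: treat as drawn

    changed = True
    while changed:               # keep sweeping until nothing new can be decided
        changed = False
        for p in positions:
            if p in value:
                continue
            child_values = [value.get(c) for c in moves[p]]
            if any(v == "LOSS" for v in child_values):
                value[p] = "WIN"     # some move leaves the opponent lost
            elif all(v == "WIN" for v in child_values):
                value[p] = "LOSS"    # every move hands the opponent a win
            else:
                continue             # still undecidable on this pass
            changed = True

    # Whatever remains undecided can never be forced either way: a draw.
    return {p: value.get(p, "DRAW") for p in positions}
```

Anything that cannot be forced either way ends up labelled a draw; the 'incomprehensible' winning strategies would simply be WIN labels whose supporting lines are too long and too indirect for a human to follow.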

Perhaps, for that matter, there are cryptically conscious entities on other planets elsewhere in the Galaxy. The idea that aliens might be incomprehensible is not new, but here there is a reasonable counter-argument. All forms of life, presumably, are going to be the product of a struggle for survival similar to the one which produced us on Earth. Any entity which has come through millions of years of such a struggle will have had to acquire certain key cognitive abilities, and these at least will surely be held in common. Certain basic categories of thought and communication are surely going to be recognisable; threats, invitations, requests, and the like are indispensable to the conduct of any reasonably complex life, and so even if there are differences at the margins, there will be a basis for communication. We may or may not have a set of cognitive tools which address a closed domain – but we certainly haven't got a random selection of cognitive tools.

That’s a convincing argument, though not totally conclusive. On Earth, the basic body plans of most animal groups were determined long ago; a few good basic designs triumphed and most animals alive today are variations on one of these themes. But it’s possible that if things had been different we might have emerged with a somewhat different set of basic blueprints. Perhaps there are completely different designs on other planets; perhaps there are phyla full of animals with wheels, say. Hard to be sure how likely that is, because we only have one planet to go on. But at any rate, if that much is true of body plans, the same is likely to be true of Earthbound minds; a few basic mental architectures that seemed to work got established way back in history, and everything since is a variation. But perhaps radically different mental set-ups would have worked equally well in ways we can’t even imagine, and perhaps on other worlds, they do.

The same counter-argument doesn't apply to artificial intelligence, of course, since an AI generally does not have to be the result of thousands of generations of evolution and can jump to positions which can't be reached by any coherent evolutionary path.

Common sense tells us that the whole idea of cryptic consciousness is more of a speculative possibility than a serious hypothesis about reality – but I see no easy way to rule it out. Never mind the AI labs and the alien planets; it’s possible in principle that the animists are right and that we’re surrounded every day by conscious entities we simply don’t recognise…

On the level about computers and minds

Picture: cogs. Ari N. Schulman had an interesting piece in the New Atlantis recently on Why Minds Are Not Like Computers. Very briefly, his view is that the aims of the strong AI project have quietly become less ambitious over time. In particular, from aiming to find the algorithms which directly generate high-level consciousness in one fell swoop, researchers have turned to simulating lower-level mechanisms and modules; in some cases they've gone a level further and are attempting to simulate the brain at the neuronal level. Who knows, they might end up trying to do it at the molecular or subatomic level, he says, but the point is that at these low levels the game is already lost; even if the simulation runs, we still won't understand what's happening on the higher levels. If we have to go to the level of simulating individual neurons, the original claim that the brain works the way a computer does has implicitly been abandoned.

Schulman thinks that a misleading application of an analogy between computers and the mind is key to the problem. In computers we have the well-established set of layers which goes from high-level languages through machine code all the way down to physical transistors; researchers assumed, he thinks, that they would in effect be able to reverse engineer the source code of the brain and come up with high-level scripts which explicated the mechanisms of consciousness; and that they would be able to do it simply by ensuring that the input-output relationships were reproduced, without having to worry about whether the hidden inner mechanisms of their version were actually the same as those of nature. But it never happened, and as time goes by it seems less and less likely that those algorithms will ever be found; it looks as if the mind just isn't like that after all.
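It may help to spell out why reproducing input-output relationships is such a weak constraint on the hidden machinery. In the toy Python sketch below (my own illustration, not Schulman's), two functions agree on every input, yet their inner organisation is entirely different; nothing in the observable behaviour tells you which 'source code' produced it.

```python
def sort_by_insertion(xs):
    """Build the result by inserting each element into its proper place."""
    result = []
    for x in xs:
        i = 0
        while i < len(result) and result[i] < x:
            i += 1
        result.insert(i, x)
    return result

def sort_by_merging(xs):
    """Recursively split the list and merge the sorted halves."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = sort_by_merging(xs[:mid]), sort_by_merging(xs[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

# Identical input-output behaviour, entirely different hidden mechanisms.
data = [5, 3, 8, 1, 2]
assert sort_by_insertion(data) == sort_by_merging(data) == sorted(data)
```

Scale that ambiguity up to a brain, and the hope of reading the high-level scripts straight off the behaviour starts to look forlorn.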

Although there's something recognisable about that, I'm not sure whether this is a completely accurate account historically. It is certainly true that the misplaced early optimism of Good Old Fashioned Artificial Intelligence is a thing of the past – but that's hardly breaking news. I'm not sure that even in the most upbeat period people thought that they could do the entire mind in one go: even then they surely looked to start with simplified tasks and build single modules. It's just that they thought these modules and tasks would be dealt with more quickly and easily than they really were. The recent emergence of projects like Blue Brain, which seek to simulate the workings of the brain at the neuronal level, is also less a sign of lack of confidence in AI than of growing confidence in the number-crunching power of the computers now available. I don't think such projects are exactly typical of where things are at these days in any case.
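For a sense of what 'simulating the brain at neuronal level' amounts to in practice, a minimal sketch may help. The Python fragment below uses a leaky integrate-and-fire model – my own illustrative choice, far cruder than the detailed biophysical models a project like Blue Brain employs: the membrane potential of a model neuron decays towards rest, is driven by its input, and emits a spike when it crosses a threshold. Nothing at this level mentions thoughts at all; the bet is just that enough such units, wired correctly, add up to something.

```python
import random

def simulate_lif_neuron(input_current, dt=1.0, tau=20.0,
                        v_rest=-65.0, v_threshold=-50.0, v_reset=-65.0):
    """Simulate a single leaky integrate-and-fire neuron.

    input_current: one input value per time step
    Returns the time steps at which the neuron fired.
    """
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # The membrane potential leaks back towards rest and is driven by the input.
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_threshold:
            spikes.append(t)
            v = v_reset      # fire and reset
    return spikes

# Noisy constant drive, strong enough to make the neuron fire from time to time.
drive = [20.0 + random.uniform(-5.0, 5.0) for _ in range(1000)]
print(simulate_lif_neuron(drive))
```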

Still, how plausible are the claims? Of course, working out the high-level code, or even recognising the general drift of a program, from looking only at the machine-code level is not at all an easy business, so the fact that looking at neurons for many years has not yet led us to a general theory of consciousness is not necessarily a sign that it never will. Schulman does not, like Searle (whose views he discusses), take the view that something about the physical stuff of brains is essential; his objection to functionalism seems merely to be that it hasn't worked yet. Perhaps we just need more patience.

We also need to be a little careful about the diagnosis. There are actually different ways of dividing the whole business into levels; one is the programmer-facing way which Schulman mainly focuses on; another is the user-facing one. Here the bottom level is made up of meaningless symbols; somewhere in the middle is organised data; and at the top is meaningful information. Surely it's here that the aim of AI researchers has been focussed; they expected consciousness to show up not as high-level program code, but as outputs which mean something to a human being, or which ‘make sense’ in the context of a task. Ultimately, for consciousness, the outputs have to make sense to the machine itself. If some form of computationalism can deliver these results, I don't think the absence of a high-level theory in the other sense would indicate philosophical failure.

Even if we do ultimately have to go beyond a narrow functionalist view, we need not abandon the overall quest. We should perhaps hang on to the distinction between consciousness as computation and consciousness as computable. The idea that the mind actually is just the programs running in the brain may look less plausible than it did; but the idea that programs running somewhere might sustain a mind could yet get a second wind. It might be too narrow a view of clocks to say that they are nothing more than cogs, springs, and other pieces of mechanism; the works don't tell us what the essence of a clock is. But the ironmongery is all we need to make a clock, and perhaps we could make one that worked before we fully understood the principle of the escapement mechanism…?