Disastrous Consciousness

In Wired, Hugh Howey gives a bold, amusing, and hopelessly optimistic account of how to construct consciousness; he thinks it wouldn’t be particularly difficult. Now you might think that a man who knows how to create artificial consciousness shouldn’t be writing articles; he should be building the robot mind. Surely that would make his case more powerfully than any amount of prose? But Howey thinks an artificial consciousness would be disastrous. He thinks even the natural kind is an unfortunate burden, something we have to put up with because evolution has yet to find a way of getting the benefits of certain strategies without the downsides. Still, he doesn’t believe that conscious AI would take over the world or threaten human survival, so I would have thought one demonstration piece was worth the effort. Consciousness sucks, but here’s an example just to prove the point?

What is the theory underlying Howey’s confidence? He rests his ideas on Theory of Mind (which he thinks is little discussed): the ability to infer the thoughts and intentions of others. In essence, he thinks that was a really useful capacity for us to acquire, helping us compete in the cut-throat world of human society; but when we turn it on ourselves it disastrously generates wrong results, in particular about our own possession of conscious states.

It remains a bit mysterious to me why he thinks a capacity that is so useful applied to others should be so disastrously and comprehensively wrong when applied to ourselves. He mentions priming studies, where our behaviour is actually determined by factors we’re unaware of; priming’s reputation has suffered rather badly in the recent replication crisis, but I wouldn’t have thought even ardent fans of priming would claim our mental content is entirely dictated by priming effects.

Although Dennett doesn’t get a mention, Howey’s ideas seem very Dennettian, and I think they suffer from similar difficulties. So our Theory of Mind leads us to attribute conscious thoughts and intentions to others; but what are we attributing to them? The theory tells us that neither we nor they actually have these conscious contents; all any of us has is self-attributions of conscious contents. So what are we attributing to them, some self-attributions of self-attributions of…? The theory covertly assumes we already have and understand the very conscious states it is meant to analyse away. Dennett, of course, has some further things to say about this, and he’s not as negative about self-attributions as Howey.

But you know, it’s all pretty implausible intuitively. Suppose I take a mouthful of soft-boiled egg which tastes bad, and I spit it out. According to Howey, what went on there is that I noticed myself spitting out the egg and thought to myself: hm, I infer from this behaviour that it’s very probable I just experienced a bad taste, or maybe the egg was too hot, can’t quite tell for sure.

The thing is, there are real conscious states irrespective of my own beliefs about them (which, indeed, may be plagued by error). They are characterised by having content and intentionality, but these are things Howey does not believe in, or rather, it seems, has never thought of; his view that a big bank of indicator lights shows a language capacity suggests he hasn’t gone into this business of meaning and language quite deeply enough.

If he had to build an artificial consciousness, he might set up a community of self-driving cars, let one make inferences about the motives of the others, and then apply that capacity to itself. But it would be a stupid thing to do, because it would get it wrong all the time; in fact at this point Howey seems to be tending towards a view that all Theory of Mind is fatally error-prone. It would be better, he reckons, if all the cars could have access to all of each other’s internal data, just as universal telepathy would be better for us (though in the human case it would be undermined by mind-masking freeloaders).

Would it, though? If the cars really had intentions, their future behaviour would not be readily deducible simply from reading off all the measurements. You really do have to construct some kind of intentional extrapolation, which is what the Dennettian intentional stance is supposed to do.
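
To make the shape of that recipe concrete, here is a deliberately crude toy sketch of my own; it is not anything Howey actually proposes, and the Car class and infer_motive heuristic are invented purely for illustration. The point is just the structure: an agent guesses other agents’ motives from outward behaviour, then turns the same guesswork on itself instead of consulting its own internal state, which is where the systematic error creeps in.

```python
# Toy illustration (mine, not Howey's): infer "motives" from outward
# behaviour, then apply the same inference to yourself instead of
# reading your own internal state.
from dataclasses import dataclass


@dataclass
class Car:
    goal: str        # internal state, off-limits to the inference engine
    braking: bool    # outwardly observable behaviour


def infer_motive(observed: Car) -> str:
    """Guess a car's motive purely from what can be seen from outside."""
    # Crude heuristic: braking is taken as evidence of an imminent turn.
    return "probably turning" if observed.braking else "probably going straight"


other = Car(goal="turn_left", braking=True)
print(infer_motive(other))   # useful, if fallible, about the other car

# Turned on itself, the car ignores its own goal and 'reads off' its own
# behaviour instead: the self-attribution picture, and the source of error.
me = Car(goal="go_straight", braking=True)   # actually braking for a pothole
print(infer_motive(me))      # "probably turning" -- wrong about itself
```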

I worry just slightly that some of the things Howey says veer close to saying that, hey, a lot of these systems are sort of aware already; which seems unhelpful. Generally, it’s a vigorous and entertaining exposition, even if, in my view, it’s on the wrong track.

Ape Interpretation

Do apes really have “a theory of mind”? This research, reported in the Guardian, suggests that they do. We don’t mean, of course, that chimps are actually drafting papers or holding seminars, merely that they understand that others can have beliefs which may differ from their own and which may be true or false. In the experiment the chimps see a man in a gorilla suit switch hiding places; but when his pursuer appears, they look at the original hiding place. This is, hypothetically, because they know that the pursuer didn’t see the switch, so presumably he still believes his target is in the original hiding place, and that’s where they expect him to go.
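
Purely as an illustration of the structure of the inference being credited to the chimps (the functions and names below are my own, not the researchers’ protocol), a minimal sketch: track what the pursuer has and hasn’t seen, and predict that he will search where he believes the target to be, not where it actually is.

```python
# Illustrative only: the pursuer's belief updates only when he sees a hiding.
actual_location = None   # where the target really is
pursuer_belief = None    # where the pursuer last saw the target


def hide(place: str, pursuer_watching: bool) -> None:
    """Record a hiding event, updating the pursuer's belief only if he saw it."""
    global actual_location, pursuer_belief
    actual_location = place
    if pursuer_watching:
        pursuer_belief = place


def predict_search() -> str:
    """Predict the pursuer's search from his belief, not from the facts."""
    return pursuer_belief if pursuer_belief is not None else actual_location


hide("first hiding place", pursuer_watching=True)    # the pursuer sees this one
hide("second hiding place", pursuer_watching=False)  # the switch goes unseen
print(predict_search())   # "first hiding place": look where he will look
```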

I must admit I thought similar tell-tale behaviour had already been observed in wild chimps, but a quick search doesn’t turn anything up, and it’s claimed that the research establishes new conclusions. Unfortunately I think there are several other quite plausible ways to interpret the chimps’ behaviour that don’t require a theory of mind.

  1. The chimps momentarily forgot about the switch, or needed to ‘check’ (older readers, like me, may find this easy to identify with).
  2. The chimps were mentally reviewing ‘the story so far’, and so looked at the old hiding place.
  3. Minute clues in the experimenters’ behaviour told the chimps what to expect. The famous story of Clever Hans shows that animals can pick up very subtle signals humans are not even aware of giving.

This illustrates the perennial difficulty of investigating the mental states of creatures that cannot report them in language. Another common test of animal awareness involves putting a spot on the subject’s forehead and then showing them a mirror; if they touch the spot it is supposed to demonstrate that they recognise the reflection as themselves and therefore that they have a sense of their own selfhood. But it doesn’t really prove that they know the reflection is their own, only that the sight of someone with a spot causes them to check their own forehead. A control where they are shown another real subject with a spot might point to other interpretations, but I’ve never heard of it being done. It is also rather difficult to say exactly what belief is being attributed to the subjects. They surely don’t simply believe that the reflection is them: they’re still themselves. Are we saying they understand the concepts of images and reflections? It’s hard to say.

The suggestion of adding a control to this experiment raises the wider question of whether this sort of experiment can generally be tightened up by more ingenious set-ups. Who knows what ingenuity might accomplish, but it does seem to me that there is an insoluble methodological issue. How can we ever prove that particular patterns of behaviour relate to beliefs about the states of mind of others and not to similar beliefs in the subject’s own mind?

It could be that the problem really lies further back: that the questions themselves make no sense. Is it perhaps already fatally anthropomorphic to ask whether other animals have “a theory of mind” or “a conception of their own personhood”? Perhaps these are incorrigibly linguistic ideas that just don’t apply to creatures with no language. If so, we may need to unpick our thinking a bit and identify more purely behavioural ways of framing the questions, ones that are more informative and appropriate.

The Pongid Theory of Mind

Researchers from the Max Planck Institute and St Andrews University have come up with some fresh evidence that chimps have a theory of mind (ToM) – that is to say, that they are aware that other individuals possess knowledge, and that what those individuals know doesn’t always match what they themselves know.

The researchers placed dummy snakes in the path of wild chimps: the chimps gave warning calls more frequently in the presence of others who, so far as they could tell, had no prior knowledge of the presumed hazard.

This kind of research is fraught with difficulty. Morgan’s Canon tells us that we should only invoke consciousness to explain an item of behaviour where no simpler explanation is available, and similarly we should be reluctant to grant chimps ToM unless there is no alternative. Couldn’t the explanation be, for example, that chimps who are alone are more likely to give warning calls, either because that response is just hard-wired, or because they are more fearful when alone? Alternatively, perhaps the observed behaviour could be largely explained if chimps are programmed to give a warning call, but only one, for each member of the troop they spot or hear approaching?

Although I think Morgan’s Canon is absolutely the right kind of principle to apply, it is difficult to satisfy, and if read too literally perhaps impossible. We know from all the discussions of philosophical zombies that there are plenty of thoughtful people who find it conceivable that all of human behaviour could be produced without consciousness (at any rate, without the kind of consciousness that requires actual phenomenal subjective experience). If that’s really true, then there are surely no cases in which behaviour can strictly be explained only by consciousness. It’s equally hard, going on impossible, to rule out every conceivable alternative explanation for the chimps’ behaviour – but the researchers were well aware of the problem, and the key point of the research is the observation of circumstances where, for example, chimp A could be presumed to have heard an earlier warning but chimp B could not. So, with some inevitable margin of doubt, we can reasonably take it as established that chimps do have ToM.

So what? We might have been willing to assume that that was probably the case anyway. We already know chimps are extremely bright, and there are many who believe they can develop language skills which approach human levels. Language is what makes it so much easier to know for sure that human beings have ToM – they can tell us about it – so if chimps are anywhere near that level it’s really no surprise that they also have ToM. (Interesting, by the way, that the current research uses the chimps’ proto-linguistic warning calls.) One further conclusion offered by the researchers themselves is that ToM must have emerged in the primate lineage at a point before the divergence of chimp and human ancestors: but that ain’t necessarily so. It could equally be that each lineage developed a functionally comparable capacity in parallel, one which the last shared ancestor need never have had.

Do we and our pongid cousins have the same ToM? In some respects obviously not. For one thing, we humans really do have actual academic theories of mind; and we write novels filled with the putative contents of minds that never existed. We have ToM on levels which completely transcend the mental lives of chimps. Are these, though, just fancy overlays on an underlying ability which remains essentially the same? Alas, there’s no easy way of telling without knowing what’s going on in the chimp’s mind – what it is like to be a chimp – and Nagel long ago told us that that was impossible.

Attempting to know the unknowable is nothing new for us, though, so let’s at least briefly try to achieve the impossible. There are lots of possibilities for what might be passing through the chimp’s mind: by way of illustration it could be any of the following.

  1. A cloudy sense of something indefinable but importantly snake-related which is missing in Chimp B.
  2. A mental picture of Chimp B continuing to advance and stumbling on the snake.
  3. A brief empathetic sense of being Chimp B, and a recollection that seeing the snake or hearing a warning has not occurred.
  4. Routine enumeration of the troop and its whereabouts, leading to a realisation that Chimp B hasn’t been around for a while.
  5. Occurrence of proto-verbal content equivalent to uttering the sentence “Look, there’s B, who doesn’t know about the snake yet!”

There are plenty of other possibilities: cataloguing them would in itself be a challenging task. Moreover, humans are clearly capable of operating on two or more of these levels at once, and it would be mere speciesism to assume that chimps are not. Still, can we pare it down a bit? Given that chimps lack full-blown human linguistic abilities and are relatively limited in their foresight, can we plausibly hypothesise that cases like 5, and others involving relatively complex levels of abstraction, are probably absent from the chimp experience? I’m not sure, and even if we can it doesn’t help all that much.

So instead I ask myself what state obtained in my own mind the last time I warned someone about a potential hazard. Luckily I do remember a couple of occasions, but interestingly, introspection leaves me quite uncertain about the answer. This could be a result of hazy memory, but I think it’s worse than that: I think the main problem is that so far as conscious thought goes, I could have been thinking anything. It feels as if there is no distinct single state of mind which corresponds to noticing that somebody needs to be warned about something; curiously, I feel tempted to examine my own behaviour and conclude that if I did go on to warn someone, I must have been thinking that they needed warning.

That kind of approach is another option, I suppose: we can take a behaviourist tack and say that if chimps behave in a way that displays ToM, then they have it, and that’s all there is to be said about it. If we can’t formulate clearly what kind of behaviour that would be, that just means ToM itself turns out to be mentalistic nonsense. The snag with that is that ToM is pretty certainly mentalistic nonsense to behaviourists anyway; so if we think the question is worth answering, we have to look elsewhere.

We could get neuronal on this: we might, for example, be able to scan human and chimp brains and detect some distinctive patterns of activity which occur just when the relevant primate appears to be getting ready to issue a warning. If these patterns of activity occurred in the corresponding sections of the chimp and human brain (perhaps involving some of those special mirror neurons), we should be inclined to conclude that our ToMs were basically the same; if they occurred in different places, we should be very tempted to conclude that evolution had recruited different sections of the two species’ brains to carry out the same function. This latter case is quite plausible – in human brains, for example, the areas used for speech don’t match the bits of the chimp brain used for vocalisations (which apparently correspond to areas used by humans only for involuntary gasps and cries and, strangely enough, for swearing).

Results like that might settle the evolutionary question; but not the deeper philosophical one. Even if we did use a different set of neurons, it wouldn’t prove we weren’t running the same ToM. Different human beings certainly use somewhat different arrays of neurons – no two brains are wired identically. If we came across the yeti and found he was fully up to human levels of consciousness, able to hold an impeccably normal human-style conversation with us and discuss ToM just as we do, and then we made the astonishing discovery that he had no prefrontal cortex and was using what in humans would have been his cerebellum to do his conscious thinking with, we would not on that account alone say he had a different kind of consciousness (at least, I don’t think we would).

So it looks to me as if we have a radical pattern of variation at both ends. All sorts of neuronal wiring (or maybe silicon, or beer cans and string – why not?) will do at the bottom level; all sorts of cogitative content will do at the top levels. Somewhere in the middle, is there a level of description where deciding that someone needs to be warned is just that and nothing else, and where we can meaningfully compare and contrast human and chimp? I suspect there is, but I also suspect that it resides in something analogous to a high-level mental metacode, of a kind we should need a proper theory of mind even to begin imagining.