All done with mirrors

Vilayanur S. Ramachandran has a short piece on Edge about the neurology of self-awareness – more specifically about the role played by mirror neurons. Ramachandran seems to have a special interest in mirrors, having used them in a famous series of experiments which succeeded in eliminating the pain from the ‘phantom limbs’ sometimes experienced by amputees. He certainly attaches considerable importance to mirror neurons.

We’ve known for some time the relatively unsurprising fact that certain groups of neurons fire whenever an experimental subject performs a certain action. More recently it has emerged that some of these neurons also fire when the subject sees someone else performing the same action. It’s as though when the brain sees an action performed, it goes through a small pantomime of triggering the same action; it mimics, or mirrors, the presumed brain activity of the person being observed. Similarly, among the neurons that fire when a subject is poked, there are some which also fire when the subject sees someone else being poked.
These mirror neurons are clearly interesting in a number of ways, but perhaps the most striking is that they appear to provide a clear neurological basis for empathy, and perhaps for the ability humans (and a few other animals) have to reason successfully about other points of view. This capacity is often called a ‘theory of mind’, or ‘theory of other minds’, though I think that’s a misleading label which implies a much more explicit understanding than is actually at work. Being able to tell what your rivals know and don’t know, and how they are likely to behave as a result, clearly opens up a whole new field of opportunities for a cunning organism, but it involves a level of abstraction which few animals appear to have reached, and indeed it seems the ability does not become fully developed even in humans until they are well into early childhood.

The mirror neurons we know about to date are not, of course, enough by themselves to provide any very high-flown level of empathy, but they suggest that similar processes may be at work on a higher level, and they might well be part of the final answer. Ramachandran wants to go a bit further, however, and suggest that they help constitute our sense of self. I think it would be true to say that the traditional view here is that we begin as solipsists, with a natural sense of self but not really distinguishing other people from the inanimate objects around us. Then we go on to make the awesome conceptual leap into realising that there are other people like ourselves out there, and that they have thoughts and feelings similar to our own (with some psychopaths, perhaps, never making the leap and remaining permanently indifferent to other people’s pain).

Ramachandran’s proposal is that the process works the other way: we start by observing the behaviour of other people and come to have a basic understanding of them sufficient to attribute a kind of selfhood to each of them. It’s only then that the empathy supported by mirror neurons leads us to realise that we too have a self of the same kind. Ramachandran acknowledges that others have offered theories along roughly similar lines, but I think this is the first time the link with mirror neurons has been drawn in this way.

Are these ideas right? I think it’s important to be clear about what kind of selfhood we’re talking about here. In spite of the poking experiments, I don’t think it can be anything to do with our experiences, or feelings. Take pain: if we applied the theory here we should be saying that to begin with we may be aware of pain, but don’t really attribute it to ourselves: noticing how other people try to avoid it at all costs, we begin to think that the pain we experience actually belongs to us. I suppose it could be so, but knowing as we do what pain is actually like, and how one of its properties is a location in our body, it seems implausible to me. That in turn casts some doubt on whether this kind of explanation from others to ourselves can deal with important cases like our ability to tell that what other people can see is different from what we ourselves are aware of. It doesn’t seem very likely, in other words, that we come to know there are things hidden from our view because we have previously noticed that things may be hidden from other people.
I could be wrong about that, but perhaps we are really talking about the self as the origin of volition; the mysterious thing that makes the decisions and (at least when things are running normally) does the talking. We attribute the intentions and thoughts of other people to an essential core called the self, and through empathy we come to attribute our own to a similar self. This seems a much more appealing line of argument.

There is certainly something hard to pin down about our selfhood in this sense. Hume famously observed that when he tried to observe his inner self there didn’t seem to be anything there except a bundle of perceptions; Dennett has suggested that the self is a kind of explanatory abstraction akin to a centre of gravity. I think the problem is that the self in this sense is not so much a thing as a source – like the spring which provides the origin of a river. Hume says all he can see is a lot of water; Dennett says the source is a geometrical abstraction, not a real physical thing: there’s something in both points of view but both also have an element of perversity.

I’ll make the bold claim here that Hume actually missed something, in that sometimes we actually experience the emergence of thoughts and intentions. I submit that we sometimes know, in an inexplicit way, what we are about to say, and even what we are about to think about, before the event actually occurs; we directly experience the emergence of intentional stuff, and hence have a direct handle on our own selfhood. I must admit that this claim, resting as it does on introspective evidence, is not very strongly supported, but if it is true, it contradicts Ramachandran’s view: we know about our selfhood from direct experience, not from observing others, and in a way quite different from the way we know about the selfhood of others.

It may be that Ramachandran is in fact thinking about yet another kind of selfhood, apart from the two I have touched on. But there is a further reason for doubting whether mirror neurons are as important as they seem. A lot depends here on the way the story unfolds. First we have neurons that fire as part of the neurological business of performing a task, or of experiencing a sensation. Then it turns out some of them fire when we merely see someone else performing the action, or having the experience. We draw the conclusion that these neurons are reflecting, or simulating, what’s going on in the other person’s brain; doing a subliminal imitation of what the other person is going through. But perhaps we ought to reinterpret our original view that the firing of these neurons is part of the business of performing the action. Perhaps these neurons were only ever part of a system for recognising actions. There was a perfect correlation between their firing and our performing the action, but that was because every time we did the action, we recognised it: the neurons were never part of the actual performance of the action. It’s then not surprising that the same neurons fire when we see the action performed by someone else. It remains interesting that a single set of neurons responds to the same action whether it is performed by us or by someone else; but once we shed the preconception that these neurons are part of our own control system, some of the wider implications fall away.

Mind the Gap

The Royal Society held a joint event with the Royal Society of Literature back in November, devoted to the question of the mental ‘gap’ between human beings and animals. An eminent panel (Nicky Clayton, Doris Lessing, Will Self and Andrew Whiten) offered thoughts and addressed a few questions. There’s a video here, though I wouldn’t bother trying it unless you have a really fast connection.
In fact, although entertaining and somewhat enlightening, the discussion never made more than glancing contact with any of the philosophical issues; a lot of the time was spent in remarking on the charm and intelligence of apes, corvids, cats, and other animals. Will Self asserted that there were no differences between apes and humans. Having read his novel Great Apes, which provides a number of interesting details about the social and sexual behaviour of chimpanzees, I thought this shed a surprising light on his own private life; as did, perhaps, his insistence that consciousness was greatly over-rated.

Anyway, I thought it might be worth trying to scratch the surface of the issues a little more deeply. What are the fundamental mental differences between animals and humans? For our purposes three areas stand out: moral, intellectual, and phenomenal.


A number of moves to upgrade the official moral status of animals have been debated in recent times: the Great Ape Project, for example, seeks to obtain recognition of three basic legal rights for our nearer relatives among the animals. Whether this is philosophically a sound approach is open to doubt; Roger Scruton, at least, has argued that since animals do not have moral duties, they cannot have rights either. I find the idea that duties and rights go together in this way very attractive: unfortunately, I can’t see any compelling reason to think it’s true. However, it could reasonably be argued that for talk about rights to be meaningful, the person with those rights has to have the ability to assert or give up the right in question, and some animals surely don’t have the intellectual capacity to assert or give up anything that abstract. How far this is true of all animals, of course, depends on your view of their intellectual capacities, and to some degree on how explicit you feel the assertion of rights needs to be – is struggling to survive an assertion of the right to live?

We don’t have to talk in terms of rights, however, and in fact the idea of natural or moral rights is a slippery concept which sometimes gives a false air of dignity to mere desires or opinions – when I say animals have a right to something, I may be doing no more than urging others to agree to give it. Peter Singer’s line on animal morality takes a different tack, making them parties to Utilitarianism. The argument here is that animal pain or pleasure should be weighed in the scales together with human feelings. If you’re a utilitarian, this conclusion seems a logical extension of the system, and even if you’re not you may well feel that animal pain is morally relevant. However, the argument does rest on the assumption that animals feel pain, which some have denied, at least for some animals; so this line of argument may rest on what view we take of the phenomenal capacities of animals and how far their version of pain resembles ours (although it would be perfectly possible, logically, to adopt a form of utilitarianism which took no account of phenomenal qualities and merely took it that pain responses had negative utility regardless of whether they ‘really hurt’).

One argument which tends to lend weight to a Scrutonish line of thinking is the plausible contention that animals are moral objects, but not moral subjects: that is, there may be moral rules about what can be done to them, but not about what they should do. It may be a moral principle that a wolf should not be caused unnecessary pain; but we don’t consider holding wolves accountable for their behaviour and subjecting them to the apparatus of the criminal justice system for any excess pain they may have caused to other animals.

Or do we? In training and working with a dog we certainly do use punishments and rewards: we talk in censorious tones of bad behaviour and admiring ones of good: not all that different from the way we might behave towards a child. There’s scope for argument here over whether we really think a ‘bad dog’ is morally bad or bad in some lesser, more pragmatic sense – whether, to get a bit Kantian, we’re dealing with categorical or merely hypothetical imperatives.

It seems reasonable to me, skipping over that discussion, that animals fall on a moral scale so far as responsibility is concerned: most animals have a negligible level of moral responsibility, but the most intelligent show at least the beginnings of a capacity for it, albeit at a level well below that of most adult human beings.

We seem, anyway, to have established a pleasingly symmetrical (but suspiciously neat) structure whereby there are two lines to take on the moral qualities of animals, each of which depends at least partly on the view we take of one of the other two areas I mentioned to begin with.

So what about the intellectual capacities of animals? The Royal Society discussion touched on the way tool use, once considered a uniquely human practice, has gradually been found to occur in a range of species and a number of different forms (including, apparently, the nose-picking tool, something not yet achieved by human technology so far as I know. I hope the designers of the Swiss Army Knife are not reading this.) This is undoubtedly true, but it still does seem that human tool use is distinctively different, even if it’s difficult to formulate the distinction in qualitative terms.

The subject of animal linguistic competence causes strong disagreement. The work of Sue Savage-Rumbaugh and others has produced chimps which appear to manage a high degree of communication through special keyboards or through sign language; but while some find the results compelling evidence of real linguistic competence verging on the human, others remain unconvinced.

It strikes me that there is an interesting comparison to be made here with the Turing test. Both candidates for human-style cognition, the chimp and the computer, are to be judged very largely on their ability to hold a conversation (maybe we’re not so much Homo Sapiens as Homo Loquens), and in both cases the results are interpreted very differently according to the predisposition of the observer. However, it seems to me we expect much more grammatical competence from a computer than we do from a chimpanzee: the Turing test requires properly constructed human sentences, whereas the chimpanzees are provided with a simplified system where it’s much harder to go wrong. In fact, we require the machine’s text responses to be indistinguishable from those of a human being, but a few errors on the chimp’s part would not cause us to rise from the desk, satisfied that the creature was a sham.

In a different respect, though, we’re harder on the primate: we take it for granted that if the chimp can communicate at all it will be able to tell us about objects nearby and carry out simple instructions even in unfamiliar surroundings: to ask the computer to fetch a ball and give it to its sister would not be considered fair play.

I think the comparison of our attitudes in the two cases suggests that we do, almost unthinkingly, attribute to apes some of the basic mental abilities we enjoy ourselves: but the fact that we let them off the tougher linguistic tasks we expect an inanimate algorithm to be able to deal with shows that even the pro-primate school of thought is aware of a distinct gap.

What, then, about the phenomenal experience of animals? There is a pretty well established consensus for practical purposes which says the bigger your brain, the more you feel, and the greater the care that must be taken to avoid unnecessary suffering. This may short-change some non-mammals a bit, but it seems an appealing rule of thumb. A minority school of thought says that humans are qualitatively different here; only humans know that they are in pain, and that’s what makes it so bad. Dennett, who, as an enemy of qualia in all their forms, wants pain to be a matter of interpretation, seems to say it’s largely a matter of crushed hopes and projects, though I don’t think he goes on to draw the implied negative conclusion about the pain capacity of projectless animals.

The real philosophical problem is that we have no secure grounds for believing anyone feels pain except ourselves: it’s only a weak induction from the single case where we have direct evidence. Perhaps the best we can say is that the induction applies to some animals almost as well as it applies to our fellow human beings.

Where does that leave us? Are animals conscious? I don’t know, but my guess is that there is no clear qualitative gap between humans and animals – there’s nothing we’ve got that at least some of them haven’t got in at least a vestigial form. But there’s a huge quantitative gap; they’re never more than slightly conscious.

The Genuine Problem

Pete Mandik has posted a thought-provoking paper (pdf) by Jack, Robbins, and Roepstorff which suggests we may have been considering the wrong issue all along. The problem with the Hard Problem (how do we square our ineffable, subjective experience of the world with the mechanical reality described by physics), they say, is that we tend to regard subjective experiences as being out there in the same sort of way as physical objects. This makes it hard for us to understand how our two pictures of the world can be reconciled. We end up looking for a mysterious missing ingredient in subjective experience, but that search is hopeless. In fact, JR&R suggest, the difference between the two accounts of the world arises from our using two different brain modules: one aimed at the world in general, one aimed specifically at phenomenal states.

That seems plausible enough at first sight and JR&R contend that it is a parsimonious theory too. It does require an additional brain module, but if you assume that the alternative is some form of dualism (as I think they do) then they’re right, since the additional ontological commitment involved in dualism would easily outweigh the merely neurological one required for an extra brain module. Moreover, there is apparently some good evidence to support the existence of the phenomenal brain module. It has been shown that activity in parts of the brain concerned with the external world correlates negatively with activity in the parts concerned with thinking about our own mental states (not too surprising, this – it’s hard to imagine paying close attention to your own feelings and to the details of what is going on around you at the same time). More dubiously, JR&R suggest that autism looks a bit like what you get when your phenomenal module fails to operate correctly.

This doesn’t seem quite right, however. If your phenomenal module ceases to function, you surely ought to become a philosophical ‘zombie’ – someone who has no subjective experience. That wouldn’t be at all like autism, however. The behaviour of a philosophical zombie is perfectly normal (since your behaviour is determined by your non-phenomenal cognition): autism, however, certainly does affect your behaviour, in some cases very severely.

The problem is that JR&R are actually assigning three distinct roles to their module: they want it to provide phenomenal experience, to be a kind of higher-order facility which tells us about our own mental states, and to be a theory-of-mind machine which enables us to understand other people and social interactions (the bit most relevant to autism). The paper, I think, is a little light on explaining why these three things arise from the same basic function – in fact it almost seems to treat them as evidently equivalent. In fairness the paper doesn’t pretend to be more than a sketch of quite a wide-ranging set of ideas.

Do the three go together? I suppose the insight that links them all is that knowing how something feels to us helps us understand how similar experiences feel to other people (only helps, though – I think our understanding of other people consists of a good deal more than just empathy). It is certainly plausible that our understanding of our own mental states arises from our understanding of other people’s (indeed, there are those who would say that it is our understanding of other people’s minds that leads us to think we have our own). Less persuasive on the face of it is the view that our subjective experience is a matter of knowledge about our own inner states. My subjective experiences appear to me to be about the external world for the most part, and it isn’t immediately clear why second-order knowledge of my own mental states should endow them with subjective qualities. Of course, some people have put forward theories very much along those lines – Nicholas Humphrey, for example. But you certainly can’t, as it were, have that conclusion for nothing.

Anyway, if JR&R are at least broadly right, then there will always appear to be a mysterious Hard Problem, because we’re just built that way. But they hold out instead the possibility of addressing the ‘Genuine Problem’, namely the question of the structure of cognition and its two modules. The good news, they say, is that this question can be addressed scientifically, so we won’t have to wait around to see whether philosophers can get anywhere with the issues over the next thousand years or so. As a project, this is unquestionably a good idea: if science could explain the differences between the two modules and how one gives rise to subjectivity, that would be a very major advance. Unfortunately, I think merely saying that makes it clear how much remains to be done, and raises a fear that JR&R have themselves fallen into a trap they describe: of setting out to explain qualia and ending up explaining something more amenable to science instead.

JR&R also make a plea for the return of the subjective as a field of proper research, mentioning the introspectionists of bygone days. Rhetorically this may be a mistake: I found my own automatic reaction was more or less the same as if they had called for a fresh look at the virtues of Ptolemaic astronomy. In fact they are careful to distinguish between the problematic efforts of Titchener and Wundt and the more measured approach they advocate.

A stimulating paper, anyway, though I for one will continue to beat my head philosophically against the good old Hard Problem.