Teacups and mirrors

Picture: teacups. Mirror neurons have been widely described as a crucial discovery and possibly ‘the next big thing’ (I’m not sure, when I come to think about it, what the last big thing was). Ramachandran describes them as ‘empathy neurons’ or even ‘Dalai Lama neurons’, and others have been almost equally enthusiastic. But are they really so good? The trenchant title of a short paper by Emma Borg asks ‘If mirror neurons are the answer, what was the question?’.

Your mirror neurons fire both when you perform an action, and when you see someone else perform that action. Borg contrasts them with ‘canonical neurons’, which fire in response to an object offering the right kind of affordances. In other words, if I’ve got it right, we have a large group of neurons that fire when we, for example, take a sip of tea: some of them are mirror neurons which also fire when we see someone else drink; others are canonical neurons which also fire when we see a cup or a teapot – ‘tea-drinking things’.

At a basic level, the argument that mirror neurons might help to explain empathy, or our understanding of other people, is clear enough. When I see A do x, the mirror neurons mean my mental activity has at least some limited features in common with A’s (presumed) mental activity, or at least what A’s mental activity would be if A were me. You can see why this resembles telepathy of a sort, and it seems a natural hypothesis that it might form the basis of our understanding of other people. One of the many theories on offer to explain autism, in fact, holds that it is caused by a deficiency in mirror neuron activity. Apparently there is evidence to show that autistic people don’t show the same kinds of activity in the relevant regions as normal people when they observe other people’s behaviour. It could be that the absence of mirror neuron activity has left them with no basis for a ‘theory of mind’: of course it could also be that the absence of an effective theory of mind, caused by something else altogether, is somehow suppressing the activity of their mirror neurons.

Borg’s target is the idea that mirror neurons in themselves give us the ability to attribute high-level intentions to other people, by running simulated intentions of our own that match the observed actions of the other person. The initial idea is roughly that when we see someone lift a cup, some of our neurons start doing that tea-cup lifting thing in sympathy (off-line in some way, of course, or we should grab a cup ourselves). This is like harbouring the intention of lifting the cup, but we are able to attribute the intention to the other person. However, this only gets us as far as the deliberate lifting of the cup: it has been further claimed that mirror neurons give us the ability to deduce the over-arching intention – drinking a cup of tea. The claim is that mirror neurons not only resonate with the current action but also more faintly (or rather, in smaller numbers) with the next likely action, and this provides a guide to the higher-level activity of which the single act is part.
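To make the claim concrete, here is a toy sketch in Python of the sort of mechanism being proposed – purely illustrative, with every action name and weight invented for the purpose; it is not drawn from Borg’s paper or from any actual neuroscience. The observed act resonates strongly, its likely successors resonate faintly, and the best-matching chain of acts is read off as the higher-level intention.

```python
# Toy illustration of the claim above: strong "resonance" for the observed act,
# fainter resonance for its likely successors, and the candidate intention whose
# component acts resonate most is read off as the attributed intention.
# All names and weights are invented for illustration only.

NEXT_ACT_WEIGHTS = {
    "lift cup": {"raise to lips": 0.6, "put cup down": 0.3, "pass cup": 0.1},
    "raise to lips": {"drink": 0.8, "put cup down": 0.2},
}

INTENTIONS = {
    "drink tea": ["lift cup", "raise to lips", "drink"],
    "clear table": ["lift cup", "put cup down"],
}

def resonance(observed_act):
    """Full activation for the observed act, fainter echoes for its likely successors."""
    echoes = {observed_act: 1.0}
    echoes.update(NEXT_ACT_WEIGHTS.get(observed_act, {}))
    return echoes

def guess_intention(observed_act):
    """Score each candidate intention by how much of its act-sequence resonates."""
    echoes = resonance(observed_act)
    scores = {name: sum(echoes.get(act, 0.0) for act in acts)
              for name, acts in INTENTIONS.items()}
    return max(scores, key=scores.get)

print(guess_intention("lift cup"))  # "drink tea" -- but only because of the weights we chose
```

Notice that the ‘guess’ is only as good as the transition weights built in beforehand, which is exactly the sort of worry that follows.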

Borg points out that actions in themselves are highly ambiguous. I may lift a cup to test its weight, or to stop you getting it, rather than in order to drink from it. It’s certainly not the case that every basic act dictates its successors, or we should be trapped in a cycle of stereotyped behaviour. When we run our mental simulation then, how can we know which secondary echoes we need to start off in our mirror neurons – unless we already know which higher-level course of action we are dealing with? In short, mirror neurons are not enough unless we already have a working theory of mind from some other source.

We might argue that we don’t need to know the intention in advance, because the simulation allows us to test out several different higher-level courses of action at once. But again, the mere observation of the single act before us won’t allow us to choose between them. In the end we’ll always be driven back to appealing to something more than mere mirror neuron activity. None of this suggests that mirror neurons are uninteresting, but perhaps they are not, after all, going to be our Rosetta Stone in deciphering the brain.

Borg describes her argument as anti-behaviourist, resisting the idea that intentions and other ‘mentalistic’ states can be reduced to simple patterns of activity. Fair enough, but given that behaviourism doesn’t put up much of a fight these days, it may be more interesting that her argument bears a distinct resemblance – or so it seems to me – to many other problems which have afflicted attempts to reduce or naturalise intentionality, up to and including the frame problem. It’s as though we were trying to find a way through an impenetrable hedge: every so often someone finds a promising-looking thin patch and starts to shove through; but sooner or later they meet one or another stretch of suspiciously similar-looking brick wall.

Axioms of consciousness

Picture: Kernel Architecture.

Picture: Bitbucket. I’ve been meaning to mention the paper in Issue 7 of the current volume of the JCS which had about the most exciting abstract that I remember seeing. The paper is ‘Why Axiomatic Models of Being Conscious?’. Abstracts of academic papers don’t tend to be all that thrillingly written, but this one, by Igor Aleksander and Helen Morton, proposes to break consciousness down into five components and offers a ‘kernel architecture’ to put them back together again. It offers ‘an appropriate way of doing science on a first-person phenomenon’.

The tone was quite matter-of-fact, of course, but merely analysing consciousness to this extent seems remarkable to me. The mere phrase ‘axioms of consciousness’ has the same kind of exotic ring as ‘the philosopher’s stone’ in my mind: never mind proposing an overall architecture. Is this a breakthrough at last? Even if it’s not right, it must be interesting.

Picture: Blandula. Yes, my eyebrows practically flew off the top of my forehead at some of the implied claims in that abstract. But not altogether in a good way. Perhaps we really are in the same realm of myth and confusion as the philosopher’s stone. Any excitement was undercut by the certainty that the paper would prove to have missed the point or otherwise failed to deliver; we’ve been here so many times before. Moreover, you know, even those axioms are not exactly a new discovery – Aleksander, with a different collaborator, first floated them in 2003.

Picture: Bitbucket. OK, so maybe I missed them at the time, but surely we ought to look at them with an open mind rather than assuming failure is a certainty? The five are stated in the first person, in accordance with the introspective basis of the paper.

  1. Presence: I feel that I am an entity in the world that is outside of me
  2. Imagination: I can recall previous sensory experience as a more or less degraded version of that experience. Driven by language, I can imagine experiences I never had.
  3. Attention: I am selectively conscious of the world outside of me and can select sensory events I wish to imagine.
  4. Volition: I can imagine the results of taking actions and select an action I wish to take.
  5. Emotion: I evaluate events and the expected results of actions according to criteria usually called emotions.

That seems an interesting and pretty comprehensive list, though clearly it’s impossible to adopt a bold approach like this without raising a lot of different issues.
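Just to fix ideas, here is a rough schematic sketch of how five such components might be wired together into a single ‘kernel’ – my own invented illustration, I should stress, not the architecture the paper actually proposes.

```python
# A rough schematic sketch, not Aleksander and Morton's actual model: one way of
# picturing the five axioms as interacting components of a single "kernel".
# Every structural detail here is invented for illustration.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Kernel:
    presence: dict = field(default_factory=dict)     # 1. model of self as an entity in an out-there world
    imagination: list = field(default_factory=list)  # 2. recalled or imagined (degraded) sensory states
    attention: set = field(default_factory=set)      # 3. which sensory events are currently selected
    volition: list = field(default_factory=list)     # 4. candidate actions, ranked by imagined outcome
    emotion: dict = field(default_factory=dict)      # 5. evaluations attached to events and outcomes

    def step(self, sensory_input: dict) -> Optional[str]:
        """One schematic cycle: attend to evaluated input, record it, rank candidate actions."""
        # Attend only to inputs that carry some emotional evaluation (an invented rule).
        self.attention = {k for k in sensory_input if self.emotion.get(k, 0) != 0}
        # Imagination keeps a copy of what was attended to.
        self.imagination.append({k: sensory_input[k] for k in self.attention})
        # Volition: rank the attended items by their evaluation and pick the top one to act on.
        self.volition = sorted(self.attention, key=lambda k: self.emotion.get(k, 0), reverse=True)
        return self.volition[0] if self.volition else None
```

The only point of the sketch is that each axiom names a distinguishable component, and that the interesting work lies in how the five interact.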

Picture: Blandula. You can say that again. Just look at number 5, for example. Apparently emotions are criteria? My criteria were so strong I just burst into tears? The sight of my true love’s face filled my heart with an inexpressibly deep criterion? And their role is to help evaluate events and the result of actions? I mean, apart from the fact that emotions generally interfere with our evaluation of events and actions, that just isn’t the essence of emotion at all. I can sit here listening to Bach and be swept along by a whole range of profound emotions which I can hardly even put a name to – I feel exalted, energised, but that doesn’t really cover it – without any connection to events or possible actions whatever.

If that isn’t enough, Aleksander and Morton claim to be developing an introspective analysis, rather than a functional one. But emotions, it turns out, are basically there to condition our actions. So that’s an introspective definition?

Picture: Bitbucket. Not the point. We’re not trying to capture the ineffable innerness of things here, we’re trying to set up a scientific framework; and from the vantage point of that framework – guess what? It turns out the ineffable innerness is entirely negligible and adds nothing to our scientific understanding.

As I said at the start, there are two parts to the enterprise: the analysis, and then the construction of the architecture. The analysis starts from an introspective view, but it would be absurd to think an architecture could have no regard to function. If that pollutes the non-functionalist purity of the approach from a philosophical point of view, who cares? We’re not interested in which scholastic labels to apply to the theory, we’re interested in whether it’s true.

Picture: Blandula. Well, you say that, but Aleksander and Morton seem to want to draw some philosophical conclusions – and so do you. They reckon their analysis shows that there is no real ‘hard problem’ of consciousness. I’m not convinced. For one thing, their attack seems to be quite tightly tied to the Chalmersian version of the hard problem. If subjective states can’t be rigorously related to physical states, they say, it opens the way to zombies, people who are physically like us but have no inner life. That would be Chalmers’ view, maybe, and I grant that it has attained something close to canonical status. But it’s quite possible to disbelieve in the possibility of zombies and still find the relation between the physical and the subjective profoundly problematic.

Much more fundamental than that, though, their whole approach seems to me yet another instance of a phenomenon we might call ‘scientist’s slide’. Virtually all the terms in this field can be given two readings. We can talk about a robot’s ‘actions’, meaning just the way it moves, or we can talk about ‘actions’, meaning the freely-willed deliberate behaviour of a conscious agent. We can talk about our PC ‘thinking’ in an inoffensive sense that just means computational processing, or we can talk about ‘thinking’ in the subjective, conscious sense that no-one would attribute to an ordinary computer on their desk. To put it more technically, the same words in ordinary language can often be used to refer either to access consciousness, a-consciousness, the ‘easy problem’ kind, or to phenomenal consciousness, p-consciousness, the ‘hard problem’ kind.

Now over and over again scientists in this area have fallen into the trap of announcing that their theory explains ‘real’ p-consciousness, but sliding into explaining a-consciousness without even noticing. Not that the explanation of a-consciousness is trivial or uninteresting: but it ain’t the hard problem. I suspect it happens in part because people with a strong scientific background have to overcome a lot of ingrained empiricist mental habits just to grasp the idea of p-consciousness at all: they can do it, but when they get involved in developing a theory and their attention is divided, it just slips away from them again. Not meaning to be rude about scientists: it’s the same with philosophers when they just about manage to get their heads round quantum physics, but in discussion it transmutes into magic pixie dust.

Picture: Bitbucket. No, no: you’re mistaking a deliberate assertion for a confusion. It’s not that Aleksander and Morton don’t grasp the concept of p-consciousness: they understand it perfectly, but they challenge its utility.

Let me challenge you on this. Take a case where we can’t shelter behind philosophical obfuscation, where we have to make a practical decision. Suppose we talk about the morality of killing or hurting animals. Now I believe you would say that we should not harm animals because they feel pain in somewhat the same way we do. But how do we know? Philosophically you can’t have any certainty that other humans feel pain, let alone animals.

In practice, I submit, you base your decision on knowledge of the nervous system of the creature in question. We know human beings have brains and nerves just like ours, so we attribute similar feelings to them. Mammals and other creatures with large brains are also assumed to have at least some feelings. By the time we get down to ants, we don’t really care: ants are capable of very complex behaviour, maybe, but they don’t have very big brains, so we don’t worry about their feelings. Plants are living things, but have no nervous system at all, and so we care no more about them than about inanimate objects.

Now I don’t think you can deny any of that – so if we stick to practical reasoning, what are we going to look at when we decide the criteria for ‘proper’ consciousness? It has to be the architecture of the brain and its processes. That’s where the paper is aiming: rational criteria for deciding such issues as whether an animal is conscious or whether it has higher-order thought. If we can get this tied down properly to the relevant neurology, it may, you know, be possible to bypass the philosophy altogether.