Mirror neurons have been widely described as a crucial discovery and possibly ‘the next big thing’ (I’m not sure, when I come to think about it, what the last big thing was). Ramachandran describes them as ‘empathy neurons’ or even ‘Dalai Lama neurons’, and others have been almost equally enthusiastic. But are they really so good? The trenchant title of a short paper by Emma Borg asks ‘If mirror neurons are the answer, what was the question?’.

Your mirror neurons fire both when you perform an action, and when you see someone else perform that action. Borg contrasts them with ‘canonical neurons’, which fire in response to an object offering the right kind of affordances. In other words, if I’ve got it right, we have a large group of neurons that fire when we, for example, take a sip of tea: some of them are mirror neurons which also fire when we see someone else drink; others are canonical neurons which also fire when we see a cup or a teapot – ‘tea-drinking things’.

At a basic level, the argument that mirror neurons might help to explain empathy, or our understanding of other people, is clear enough. When I see A do x, the mirror neurons mean my mental activity has at least some limited features in common with A’s (presumed) mental activity, or at least what A’s mental activity would be if A were me. You can see why this resembles telepathy of a sort, and it seems a natural hypothesis that it might form the basis of our understanding of other people. One of the many theories on offer to explain autism, in fact, holds that it is caused by a deficiency in mirror neuron activity. Apparently there is evidence to show that autistic people don’t show the same kinds of activity in the relevant regions as normal people when they observe other people’s behaviour. It could be that the absence of mirror neuron activity has left them with no basis for a ‘theory of mind’: of course it could also be that the absence of an effective theory of mind, caused by something else altogether, is somehow suppressing the activity of their mirror neurons.

Borg’s target is the idea that mirror neurons in themselves give us the ability to attribute high-level intentions to other people, by running simulated intentions of our own that match the observed actions of the other person. The initial idea is roughly that when we see someone lift a cup, some of our neurons start doing that tea-cup lifting thing in sympathy (off-line in some way, of course, or we should grab a cup ourselves). This is like harbouring the intention of lifting the cup, but we are able to attribute the intention to the other person. However, this only gets us as far as the deliberate lifting of the cup: it has been further claimed that mirror neurons give us the ability to deduce the over-arching intention – drinking a cup of tea. The claim is that mirror neurons not only resonate with the current action but also more faintly (or rather, in smaller numbers) with the next likely action, and this provides a guide to the higher-level activity of which the single act is part.

Borg points out that actions in themselves are highly ambiguous. I may lift a cup to test its weight, or to stop you getting it, rather than in order to drink from it. It’s certainly not the case that every basic act dictates its successors, or we should be trapped in a cycle of stereotyped behaviour. When we run our mental simulation then, how can we know which secondary echoes we need to start off in our mirror neurons – unless we already know which higher-level course of action we are dealing with? In short, mirror neurons are not enough unless we already have a working theory of mind from some other source.

We might argue that we don’t need to know the intention in advance, because the simulation allows us to test out several different higher-level courses of action at once. But again, the mere observation of the single act before us won’t allow us to choose between them. In the end we’ll always be driven back to appealing to something more than mere mirror neuron activity. None of this suggests that mirror neurons are uninteresting, but perhaps they are not, after all, going to be our Rosetta Stone in deciphering the brain.

Borg describes her argument as anti-behaviourist, resisting the idea that intentions and other ‘mentalistic’ states can be reduced to simple patterns of activity. Fair enough, but given that behaviourism doesn’t put up much of a fight these days, it may be more interesting that it bears a distinct resemblance – or so it seems to me – to many other problems which have afflicted attempts to reduce or naturalise intentionality, up to and including the frame problem. It’s as though we were trying to find a way through an impenetrable hedge: every so often someone finds a promising-looking thin patch and starts to shove through; but sooner or later they meet one or another stretch of suspiciously similar-looking brick wall.


  1. Lloyd Rice says:

    Your response to the issue of mirror ambiguities seems to me to overlook the
    concurrent observations of the history of actions leading up to the cup holding,
    the nuances of related actions involved in cup grabbing or cup testing, etc.,
    each of which could have, if not its own mirror system, at least the connections
    to provide context for the cup-holding mirrors. Such would allow a disambiguation
    of the cup-holding actions — maybe not completely in all cases — but then we
    don’t always understand fully what others are trying to do. I don’t see that
    your argument demonstrates a requirement for additional related circuitry.

  2. Against Truth « Rolfe Schmidt says:

    [...] When you have a skill — like piano playing or neurosurgery — it changes the way you manipulate the world around you. If you thought of yourself as a fancy computer, you might say that understanding is how you receive input and skill is how you produce output. Input and output are closely linked, as recent hoopla over mirror neurons shows. When we see someone doing something we know how to do, we have mirror neurons — neurons that are usually responsible for producing action — that fire. Perhaps this neural activity is understanding, and the links in our mind that cause the excitement are knowledge. [...]

  3. Peter says:

    Lloyd – true, there could be many other clues available from memory and from the detail of the actions involved which my bald account doesn’t mention. It might actually be possible to work out from the way my fingers were arranging themselves that they were being set up to grasp a teacup handle some while beforehand, for example. But at the end of the day, we’re still not very much better off than we might have been simply from observing the other person’s behaviour – the fact that our mirror neurons are resonating in harmony still doesn’t really help get us to the high-level intentions.

  4. Lloyd Rice says:

    Peter – Yes, I agree this is not the high-level end of the story.
    Just a good beginning.

  5. peter reynolds says:

    This idea of the ‘intentional stance’: your argument is that it will be selected as the optimal stance. I suggest that this is almost diametrically opposite to consciousness. The optimal stance is what a machine would do, because it couldn’t be programmed with consciousness. If the program wasn’t directed, the object would produce random, meaningless behaviour. The only aspect of consciousness it would have is that which you programmed it with, or imbued it with if you took that stance toward it. You have to appreciate that Dennett likes magic, and he himself states that the magic of a trick is in its initial statement. All he is saying is that the magic of imbuing something with consciousness is in stating that it is. A machine can only be conscious to the level you are able to program into it. If you program a machine with all the neural correlates of consciousness, you still wouldn’t know if it was conscious. His intentional stance is one in which he attempts to lift himself up by his bootlaces.

  6. peter reynolds says:

    If you view life on earth on the grander scale as a component in, for example, Gaia – or as a component in an earth-like chemical container where, on balance, life acts as an intermediary to increase entropy – and you view it as a machine with a purpose in this wider picture, and furthermore view consciousness as part of the overall operation of this reaction, how would you determine the inputs of your computer model of the brain? In what form would you extract the information in food, for example, so as to use it as an input? (Remembering that Libet’s experiment suggests that consciousness itself might be a story we make up about the chemical reaction our body is conducting.)

  7. peter reynolds says:

    As regards the ‘brain in a vat’ – I think the point is that you can never KNOW that your brain isn’t in a vat. You can only know what comes through your senses – and if these are fed by undetectable wires, then you are never going to know, irrespective of who is feeding your senses or how. Your reality is what is coming through these wires. You are never going to develop a logic or mind which can let you see the ‘other side’. It’s actually a good dualist argument that pure mind cannot influence reality tangibly, because whatever your mind does with this incoming data, you are never going to contact ‘the other side’.

  8. peter reynolds says:

    I think the brain in a vat is also an argument for not being able to know ‘the thing in itself’. I suppose you could say that there is no way of knowing that the thing you think you see is any ‘thing’ at all.

  9. peter reynolds says:

    It’s the reason that philosophy exists – it’s always open to the criticism that that might be the case in your particular vat.
