The mereological fallacy

It isn’t actually your brain that does the thinking at all. In fact, the very idea that it does is virtually incoherent: not just wrong, but meaningless: the mereological fallacy. Only you, as a whole entity, can do anything like thinking or believing.

That’s pretty much the bombshell dropped by Max Bennett and Peter Hacker in their 2003 blockbuster Philosophical Foundations of Neuroscience. Mereology is the branch of logic dealing with the relations of parts and wholes; it grew out of the attempts to sort out the chaos left behind by the collapse of Frege’s logical programme. Bennett and Hacker’s book has given the term a new lease of life in a somewhat looser sense. The book is large and complex, and I can’t do anything like justice to it here, but the idea of the mereological fallacy, a kind of leitmotiv running through it, certainly deserves some attention.
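
Since the term may be unfamiliar, here is a minimal formal sketch (my illustration, not anything from the book): in what is usually called ground mereology, the parthood relation $P$ is simply a partial order over objects.

$$
\begin{aligned}
& Pxx && \text{(reflexivity: everything is a part of itself)} \\
& (Pxy \land Pyx) \rightarrow x = y && \text{(antisymmetry)} \\
& (Pxy \land Pyz) \rightarrow Pxz && \text{(transitivity)}
\end{aligned}
$$

The alleged fallacy is not a breach of these axioms, of course: it is the ascription to a proper part (the brain) of predicates, like ‘thinks’ or ‘believes’, that only make sense when applied to the whole (the person).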

What do they mean? If I don’t think with my brain, do I think with my foot? There is, of course, a sense in which my foot seems to know things. It ‘knows’ roughly what forces to apply where in order to keep me walking straight. My arm knows how to move in order to be in the right place to catch a ball heading in my direction, a process I myself could not describe in any great detail. But although athletes talk about muscle memory, this kind of knowledge is really little more than a metaphor, or at best an example of knowing how, not of knowing that. It certainly isn’t what Bennett and Hacker have in mind, anyway: the last thing they want to do is transfer cognitive functions to another mere part of the body. Their point is that only whole people have these capacities.

But what’s a whole person? If my foot, or even my leg, is cut off, it doesn’t seem to have any relevance to most of my thought processes. Indeed, if the stock science fiction/philosophy example is to be believed, a brain removed from the body altogether and sustained in some kind of life-support tank could perfectly well go on thinking for itself. That may be no more than a thought-experiment with a dubious basis in medical reality, but it does suggest that the brain is the thinking organ, just as the heart is the blood-pumping organ.

I think that to appreciate Bennett and Hacker’s point properly, you have to consider it in historical context (they provide plenty of this: I’m afraid poor Descartes gets yet another drubbing for inflicting his dualism on us all, though for once his positive contribution is also acknowledged). Philosophically, I think we can see the doctrine of the mereological fallacy as standing in opposition to the problematic old theory of sense-data. For a very long time, it was accepted that we didn’t see the world, we saw sense-data (or images, or some similar intermediary). In philosophy, this view was popular because it seemed to make cases of error or illusion easier to explain: they were just defective sense-data. Scientifically, the fact that a nice image appears on the retina, within the eye, must also have predisposed people in its favour. If discussions of the senses had been mainly about touch, where there is no apparent intermediary, rather than about sight, things might have been different.

On the whole, I think it has been accepted within philosophy that the sense-data point of view was a confusing mistake: when we see a table, we see a table, not some patches of brown within the visual field; the brown patches, the images and the sense-data, if there are such things, are just part of the means by which we see. Bennett and Hacker, if I understand them correctly, want a root-and-branch application of this kind of revised thinking to all aspects of cognition. Brains sustain patterns of neuron firing, and patterns of neuron firing correlate with cognitive activity: but to say that the firing neurons are the thoughts is just a mistake. As a bonus, this (apparently) eliminates the binding problem: if your experiences aren’t in your brain at all, the question of where in your brain the different elements are put together no longer arises. (But perhaps some related questions still do!)

Up to a point, this is quite right, and a necessary corrective. I think Bennett and Hacker are probably right to think that Crick, for example, has fallen into the trap they are describing: but it’s not so clear to me that everyone has. Some people, and Crick was one of them, do assert that neurons firing just are mental experiences, full stop: but most people, even those who assert the identity of mental and neural events, only mean that the two things are different aspects of the same phenomenon; the same, but on different levels of interpretation.

Perhaps an analogy will help to make this clearer. Hennett and Backer, let’s say, have a controversial theory about phone conversations: when you call your mother, they say, the conversation does not go through the local telephone exchange; it can’t be found in some cable somewhere; that’s just nonsensical – it takes place between you and your mother. Now the theory may seem a little unexpected, but the thing is, the opposite view has held sway for a long time. People are always saying things like “You imagine you’re talking to your mother, but science has conclusively established that you’re actually talking to the phone. OK, the phone may tell you something about what your mother is saying – maybe quite a lot – but it’s only the phone you’re talking to. You actually have no contact with your mother, and no certainty about what she’s telling you.”

In this case it’s obvious that there are two equally valid levels of interpretation: it’s just that the two sides are being pig-headed about recognising it. Of course you’re talking to your mother, and having a conversation which we wouldn’t normally describe as happening in a cable: but in another sense you are addressing the phone (your mother’s miles away!) and there is a sense in which the conversation passes through the local exchange. If we imagine the telephone exchange as very old-fashioned (and I do), it’s actually crucial that the operator knows which plug and socket your words need to travel down.

So yes, it is people who see things and think thoughts: but so long as we keep hold of that fact, and don’t allow ourselves to be seduced by the sterile charms of eliminative reductionism, surely we can safely talk, in another valid sense, about the brain doing these things?


Really selfish

Maybe genes really are selfish, says Steven Pinker, in a short essay published recently in the Times. The piece is one of a collection which OUP will be publishing as a tribute to Richard Dawkins, though the idea that the selfishness of genes might be more than just a metaphor seems a slightly ambivalent choice of theme in this context. As Pinker points out, since coining the phrase 30 years ago (yes, it really is that long) Dawkins has spent a fair amount of time carefully and repeatedly explaining that no, he doesn’t suppose genes have tiny brains and personalities; it’s simply a metaphor, albeit a vivid and useful one. Dawkins has reacted good-humouredly in the past when some of his ideas were taken rather further than he might have wanted to go with them himself – notably in the case of Susan Blackmore’s bold extension of the idea of memes into a full-blown theory of consciousness. But Pinker’s suggestion surely puts him in a difficult position: he either has to disown the friendly tribute and seem ungrateful, or risk looking inconsistent.

What is Pinker on about anyway? He praises Dawkins for seeing life and evolution as matters best understood through ideas about information and computation, and suggests a similar perspective has shaped modern views about cognition. Another feature common to Dawkinsian biology and recent cognitive science, he suggests, is the use of a ‘mentalistic’ vocabulary. That rather glosses over the point that cognitive scientists try to keep mentalistic terms out of the business end of their theories – if an explanation of mentalistic concepts itself uses mentalistic concepts, we obviously still have work to do.

It also picks on the very feature of Dawkins that I personally like least. OK, so the selfishness of genes is a metaphor. But is it a good one? If we are to think of ourselves as lumbering gene carriers, we have to assume that the passengers designed a craft which randomly minces some of them every time they change ship, and which seeks a better class of passenger for the next generation rather than offering its original passengers continued safety. I wouldn’t build a vessel like that for myself. But worse, the metaphor hides from view the cardinal virtue of Darwin’s theory, namely that it does away with the need to invoke intentions and conscious design: that elegant economy of means is the prime reason why Darwin’s version of evolution is better than all the others. It’s as though Dawkins were to explain Newton to us in terms of angels wanting to push masses together: altogether better, if you can, to make the small effort required to describe things the way the theory says they actually are.

Pinker, however, argues that we can safely – in fact correctly – use mentalistic terms about genes and the camouflage patterns of animals (which embody ‘knowledge’ about the environments of the animals’ ancestors). After all, when we use such terms about human beings, they aren’t about phenomenal conscious experience, because nobody understands that anyway: they’re about the other, computational stuff, and all that is an ‘entirely tractable’ scientific topic.

That seems rather breezy. Of course there are those who think intentionality – meaningfulness – can ultimately be reduced to computation or information processing: but we haven’t yet reached the stage where that view is uncontroversial; and indeed, it isn’t clear to me that such theories, even if correct, would abolish the distinction between the ‘design’ of a leopard and the design of a battleship. In my own view, the core mystery of intentionality is just about as intractable as the mystery of qualia.

In fairness, it is true that it is sometimes hard to talk about complex organisms without relapsing into ‘mentalistic’ or teleological terms: it’s hard to say that when a baby bird hatches, its wings don’t embody some kind of expectation of flying. But I would argue we need a new vocabulary for this kind of thing, along the lines of H. P. Grice’s ‘natural meaning’ (‘those spots mean measles’). Indeed, on another occasion I’ve suggested we could co-opt the word ‘point’, conflating two of its senses: the point of wings is flying, and wings point to flight. But while I sympathise with Pinker’s thesis to that extent, I think it would be confusing and unhelpful to apply ordinary ‘mentalistic’ terms where they don’t strictly belong in anything other than a metaphorical sense.

Of course, it is true that in ordinary usage we do attribute intentions and design to natural objects – but not necessarily in the way Pinker would want. He, I think, would expect us to talk about how the leopard’s spots derive from the knowledge it, or its genes, have; but it would be more normal to talk about how perfectly designed for its lifestyle the creature is, or how well Providence has equipped it for its role. I can see the proponents of Intelligent Design patting Pinker on the back: “That’s right, Steve: of course we should talk about the aims of organisms and the thinking behind their design. We won’t say it’s G__, but we agree with you it’s more than a metaphor…”