Once upon a time (1750, in fact – or in any case, some time before the development of modern chemistry) there were two very similar planets. One was Earth; the other was also called Earth, but to save confusion let’s call it Twin Earth. The general size, geography, climate and biology of Twin Earth were all pretty much like those of Earth – in fact, the two planets were virtually indistinguishable, with each person on one having an identical twin on the other.
However, there was one particular difference. On Twin Earth, the transparent liquid which made up the seas, lakes and rivers, which the animals drank, and which fell as rain – the substance, in fact, which the locals called ‘water’ – was not H2O, but XYZ. In most respects, and without resort to more sophisticated chemistry, it was impossible to spot any difference between the qualities and behaviour of XYZ and those of H2O.
(At this point, scientifically inclined readers may look worried and begin saying things like ‘Yeah, but look… it couldn’t be exactly the same as real water’. Well no, not really: with a bit of thought we could probably find a more plausible version, but H2O versus XYZ was what Hilary Putnam originally chose, and it doesn’t really affect the argument.)
The strange result is that when Robinson, on Earth, thinks about the contents of his glass, he is thinking about H2O. But when Twin-Earth Robinson thinks an exactly similar thought, with exactly similar brain states, he is thinking about XYZ. The difference in what they are thinking about arises entirely from differences in the external world, not from any difference in the two brains. In the words of Putnam’s famous slogan, ‘meanings just ain’t in the head’.
In one way, this doesn’t seem so surprising: it seems almost common sense that meaning is affected by context. On the other hand, it seems a natural assumption that what you think about is pretty much under your own control, something you arrange for yourself within your own skull irrespective of the outside world. The Twin Earth argument undercuts the idea that meaning can arise from mental images or representations alone, which raises a difficulty for anyone wanting to endow a computer with consciousness, and for anyone applying a functionalist interpretation to human consciousness. Computational representations in one’s head cannot, it seems, be the same thing as psychological propositions in one’s mind – at least, not without some further ingredient.
Dennett sees the problem as illusory. First, he suggests, consider the behaviour of the coin-recognising mechanism in a slot machine. It may have been designed for American coins, but what if it is put to work ‘recognising’ Panamanian quarter-balboas (which are the same shape and weight)? Do we say the machine is still (mistakenly) recognising US quarters, or has it somehow switched over to being a quarter-balboa recogniser? Who cares? If we like, we can say that the intention of the designer means that the machine is still a US-quarter-recogniser, or equally, that the person who installed it in Panama has effectively transformed it into a ‘q-balber’ instead. In the end, we can interpret it whichever way suits us, can’t we?
OK then. Now consider Twin Earth. Let’s suppose, instead of the water case, we think about horses. On Twin Earth, let’s say, they have schmorses instead: animals which resemble horses closely apart from genetics and some internal details. If some helpful philosophical aliens transport Robinson from Earth to Twin Earth, and he sees a schmorse, what does he mean when he says ‘there’s a horse’? Does he still mean ‘horse’, or does he now really mean ‘schmorse’?
Isn’t this the same as the coin machine? Horse, schmorse, what’s the difference?
But the two cases aren’t the same. It’s OK to decide arbitrarily in the case of the coin machine, because the coin machine doesn’t mean to identify any particular coin. It does whatever we intend it to do. But my thoughts don’t depend on how you interpret my behaviour. The machine only has derived intentionality – any meanings come from the designer or user. But people have real, original intentionality. When I mean something, I mean it all by myself, irrespective of how other people may construe my meaning.
Ah, but that’s just where you’re wrong: original intentionality is precisely what Dennett explicitly denies. On your argument, meanings must remain forever a magic mystery: if you want a rational explanation, you have to accept that original, intrinsic meaningfulness is absurd. How could anything, even a brain, mean anything intrinsically?
I might ask, if all intentionality is derived, where does it ultimately come from? Dennett seems to think we can generate it from nothing by, as it were, taking in each other’s washing. But if there’s one thing that’s clear to me, it is that the meaning of my thoughts doesn’t in itself depend on other people’s interpretations. I’ll agree that Dennett is right about one thing though – this is one of the key issues, where people’s intuitions divide sharply – almost as if they were on different planets…