Archive for February, 2007

Intentions There was a good deal of media attention recently for Reading Hidden Intentions in the Human Brain (Haynes et al, Current Biology 2007), a paper which was said to herald the possibility of mind-reading. My initial reaction, I must admit, was sceptical from two directions. First, this seemed like another example of the hyperbole which afflicts the field (I particularly remember British Telecom announcing the possibility of a ‘soul catcher’ which would record the electrical activity of the brain and hence – of course – capture one’s very identity. I don’t think they had a neurologist on the team.) Second, there’s actually nothing new about being able to detect neural precursors of a physical action before it occurs.

However, on examination the research is both new and interesting, if a little less earth-shattering than reports might have suggested. It is based on fMRI scanning, but uses new techniques to extract a finer level of detail from the same hardware; and it detects an abstract intention rather than the precursors of a physical movement.

The experiments went like this: the subjects were told they would be given two numbers; they were to choose now whether they would add them or subtract the smaller from the larger. They then had to keep this intention in mind during a variable interval before the actual numbers were displayed, and in a final stage indicate from a choice of four numbers (one the correct result of addition, one of subtraction, and two irrelevant numbers) the result of their chosen calculation (from which, as the paper says, their choice of addition or subtraction could reasonably be inferred). Applying pattern recognition to the scans, the researchers found that they were able to identify the subjects’ choices with up to 70% accuracy. They were, incidentally, able to do the same from a slightly different region of the brain while the subjects were indicating the results; but the interesting part is the ability to detect a difference between the two options before the subjects knew the location of the right answer, and hence before any preparation of the appropriate physical movement could have started.
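The decoding step can be illustrated with a toy sketch. Everything below is invented for illustration: the ‘voxel’ patterns are synthetic random data, and a simple nearest-centroid rule stands in for whatever multivariate pattern classifier the paper actually used; the 70% figure comes from the study itself, not from this simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_voxels = 50        # features per scan (toy stand-in for fMRI voxels)
n_per_class = 40     # scans recorded for each intention

# Hypothetical characteristic activity patterns for 'add' and 'subtract'.
mu_add = rng.normal(0, 1, n_voxels)
mu_sub = rng.normal(0, 1, n_voxels)

def make_scans(mu, n):
    # Each scan is the class pattern buried in substantial noise.
    return mu + rng.normal(0, 4.0, (n, len(mu)))

X_add = make_scans(mu_add, n_per_class)
X_sub = make_scans(mu_sub, n_per_class)

# Train on the first half of each class, test on the second half.
half = n_per_class // 2
c_add = X_add[:half].mean(axis=0)   # learned centroid for 'add'
c_sub = X_sub[:half].mean(axis=0)   # learned centroid for 'subtract'

def predict(x):
    # Assign the scan to the nearer centroid (0 = add, 1 = subtract).
    return 0 if np.linalg.norm(x - c_add) < np.linalg.norm(x - c_sub) else 1

held_out = [(x, 0) for x in X_add[half:]] + [(x, 1) for x in X_sub[half:]]
accuracy = float(np.mean([predict(x) == y for x, y in held_out]))
print(f"decoding accuracy on held-out scans: {accuracy:.2f}")
```

The point of the exercise is simply that a classifier trained on some scans can label new scans well above chance without anyone specifying in advance what the ‘add’ or ‘subtract’ pattern looks like – which is all the paper’s decoding claim amounts to.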

That’s quite a long way from mind-reading in the fullest sense. Only one distinction was tested – between a brain about to add and a brain about to subtract. It may well be that these are particular examples of a vast set of different patterns for all sorts of mental activity: but in strict logic it could be that this technique only allows us to distinguish between broadly augmentative or positive operations and diminutive or negative ones: it would be interesting to see whether multiplication and division, for example, are as readily distinguishable from addition and subtraction as the latter two are from each other. So far as I can tell, there is no evidence yet of any particular commonality, other than being in the same brain region, between the addition pattern in one individual and the addition pattern in another, or between the addition pattern in one subject today and the same thing in the same person next week.

But the results are a clear step forwards, and the fact that they open up many further questions is really a virtue, since a number of those questions seem readily susceptible to research. One of the more likely practical applications of the method, I imagine, is a superior lie detector (though it would need a higher success rate than 70%). One of the early experiments which would be needed to firm up this possibility would presumably be to test whether the subjects can fake their own results. They would have to be asked to hold in their minds the intention of adding, say, while covertly knowing that they would in fact subtract. Would the result be the addition pattern, the subtraction pattern, or something different altogether?

Considering that point takes us into some of the more confusing issues raised by the research. When the subjects were holding an ‘intention to subtract’ in their minds, what were they actually doing? There are a number of possibilities. They could have started up their mental subtraction engine, and be keeping it idling until the numbers arrived. They could have put up some kind of unconscious internal flag which merely indicated the intention, to be consulted when the task was performed. They could have been holding on to a conscious mental image of the relevant algebraic symbol, or the word ‘subtract’. They could have been in a neutral mental state, relying on memory to retrieve the decision made a moment before.

These considerations may lead us to note the somewhat artificial nature of what the subjects were being asked to do, and consequently question whether the patterns identified in their brains really represent an intention, an awareness of an intention, a willed maintenance of an intention, the declaration of an intention, an awareness of a declaration of an intention, or whatever other higher-order possibilities we might come up with. Not that this makes the research any less interesting or significant, of course; rather the reverse.

The language used in the paper suggests an optimistic view, since it speaks of intentions being ‘encoded’ in the brain. Probably no particular commitment was intended by this, and the word was being used in the sort of loose sense in which your shadow can be said to encode information about your height and shape. If there really were a decipherable mental code which was common to all brains, then we really could look forward to mind-reading in the fullest sense; but given the variation between brains, it may be that our individual ways of holding intentions have nothing much in common. Indeed, it may turn out that even on an individual basis our mental life is not organised in an encoded way. Computer code has to make sense on two levels: it has to be comprehensible to the programmer and deliver cogent outputs. The human brain is under no obligation to make sense except in output terms, and since rendering one’s code understandable usually involves a small overhead, it’s probable that evolution hasn’t done so. If that pessimistic view is right, we’re never going to get more than piecemeal one-to-one matchings between neural activity and particular thoughts.

But that is the pessimistic view. It’s a bit hard to believe that orderly thought could emerge from something that didn’t have at least an implicit orderliness. Perhaps, to pursue the analogy, we can’t expect the brain’s code to include helpful comments; but surely efficiency will have led to some modularisation and categorisation of the kind which may give a foothold to interpretation?

If a clear answer about that emerges from further research, as it may, that certainly will be worth a big media write-up.

Subjects A short paper by Gray, Gray and Wegner in Science sets out the results of an interesting recent survey of how people view minds. People were asked to make comparisons amongst a strange group of 13 miscellaneous entities (see picture), rating them against a series of criteria. There is an online version of the questionnaire, with a different cast of characters, here.

The main finding is that people seem to rank minds along two scales rather than one. Analysis of the data suggested that the two basic qualities of minds as perceived by the respondents were: experience (hunger, fear, pain, pleasure, rage, desire, personality, consciousness, pride, embarrassment, joy) and agency (self-control, morality, memory, emotion recognition, planning, communication, and thought). This result is agreeably in line with philosophical thinking about consciousness; I don’t think it would be too much of a stretch to claim that the ‘hard’ and ‘easy’ problems of consciousness are the problems of experience and agency respectively.
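How a two-dimensional structure can ‘fall out’ of a table of ratings is easy to sketch. The data below are entirely invented: ratings for a cast of entities are generated from exactly two hypothetical latent traits (call them experience and agency), and a principal-components decomposition then recovers the fact that two components account for nearly all the variance – a crude stand-in for the factor analysis the authors actually performed.

```python
import numpy as np

rng = np.random.default_rng(1)

n_entities, n_items = 13, 18   # 13 characters rated on 18 mental capacities

# Hypothetical latent scores: one 'experience' and one 'agency' value
# per rated entity (invented for illustration).
latent = rng.normal(0, 1, (n_entities, 2))

# Each survey item loads mainly on one of the two latent dimensions.
loadings = np.zeros((2, n_items))
loadings[0, :11] = rng.uniform(0.8, 1.2, 11)   # experience-type items
loadings[1, 11:] = rng.uniform(0.8, 1.2, 7)    # agency-type items

# Observed ratings = latent structure plus a little response noise.
ratings = latent @ loadings + rng.normal(0, 0.2, (n_entities, n_items))

# Principal components via SVD of the centred rating matrix.
centred = ratings - ratings.mean(axis=0)
s = np.linalg.svd(centred, compute_uv=False)
explained = s**2 / np.sum(s**2)

top_two = float(explained[:2].sum())
print(f"first two components explain {top_two:.0%} of the variance")
```

With real survey data, of course, nothing guarantees the tidy two-factor outcome built into this toy example; the interest of the paper is precisely that the respondents’ actual ratings showed it.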

That’s nice, but having one’s preconceptions confirmed so neatly might be grounds for suspicion. Is it possible, for example, that the authors built in the distinction which emerged from the research by their choice of examples? The list of entities for the original experiment is certainly a strange one (in some respects the online version is even stranger: it consists mainly of living human beings, but also features two birds – surely unlikely to score very differently). Clearly these are meant to be a set of examples of special interest from a mental point of view, but the list is certainly not exhaustive: we could add aliens, ghosts, computers and a number of others. In fairness I must admit I don’t know exactly what an exhaustive list would be in this context: but to illustrate the possibility of skewing the results, consider what might happen if we added the following: ghost, djinn, angel, hallucination, my mirror image, chat-bot program. It seems possible that something like solidity might have emerged as another apparently salient quality of minds, which is clearly incorrect (or is it?).

It’s difficult, moreover, with this kind of research, to be sure extraneous considerations are being kept out. Take the results about God. God rates high on agency but very low on experience. I think this must be because negative items like pain and hunger feature prominently, and people found the idea of God experiencing pain unlikely. It might be that they thought of pain as the province of creatures with bodies, but I suspect that to some extent they were just distracted by the observation that an omnipotent, omniscient being ought to be well able to keep out of the way of pain – which is beside the point, strictly speaking. It’s an odd result to get from a largely Christian group, in any case, given that Jesus surely experienced pain and hunger, if perhaps not rage.

In fact, on this showing God is more or less a philosophical zombie: all agency and no experience. This raises another issue: if you have a problem with the concept of such zombies, you might be inclined to deny that the two dimensions identified here are really independent, arguing that agency actually implies experience. Presumably you might also be sceptical about the possibility of anti-zombies – creatures with full experience but no agency. I think, though, that I’d be inclined to reverse the sceptical argument and see the research as providing some good evidence against the assertion that zombies are inconceivable. They may well, on further consideration, and given some further argumentation, turn out to be impossible (as a matter of fact I think they do); but this research makes it difficult to argue that they are outright incoherent or unimaginable. It’s not very often that you get empirical results which bear on matters of philosophical interest, so this is surely some cause for celebration.

Quibbles aside, moreover, I think the results are essentially correct. I’m wondering now if an attempt to construct an ‘exhaustive’ list of interestingly different examples of consciousness/non-consciousness might itself be a useful, if perhaps doomed, exercise.