Ape Interpretation

Do apes really have “a theory of mind”? This research, reported in the Guardian, suggests that they do. We don’t mean, of course, that chimps are actually drafting papers or holding seminars, merely that they understand that others can have beliefs which may differ from their own and which may be true or false. In the experiment the chimps see a man in a gorilla suit switch hiding places; but when his pursuer appears, they look at the original hiding place. This is, hypothetically, because they know that the pursuer didn’t see the switch, so presumably he still believes his target is in the original hiding place, and that’s where they expect him to go.

I must admit I thought similar tell-tale behaviour had already been observed in wild chimps, but a quick search doesn’t turn anything up, and it’s claimed that the research establishes new conclusions. Unfortunately I think there are several other quite plausible ways to interpret the chimps’ behaviour that don’t require a theory of mind.

  1. The chimps momentarily forgot about the switch, or needed to ‘check’ (older readers, like me, may find this easy to identify with).
  2. The chimps were mentally reviewing ‘the story so far’, and so looked at the old hiding place.
  3. Minute clues in the experimenters’ behaviour told the chimps what to expect. The famous story of Clever Hans shows that animals can pick up very subtle signals humans are not even aware of giving.

This illustrates the perennial difficulty of investigating the mental states of creatures that cannot report them in language. Another common test of animal awareness involves putting a spot on the subject’s forehead and then showing them a mirror; if they touch the spot it is supposed to demonstrate that they recognise the reflection as themselves and therefore that they have a sense of their own selfhood. But it doesn’t really prove that they know the reflection is their own, only that the sight of someone with a spot causes them to check their own forehead. A control where they are shown another real subject with a spot might point to other interpretations, but I’ve never heard of it being done. It is also rather difficult to say exactly what belief is being attributed to the subjects. They surely don’t simply believe that the reflection is them: they’re still themselves. Are we saying they understand the concepts of images and reflections? It’s hard to say.

The suggestion of adding a control to this experiment raises the wider question of whether this sort of experiment can generally be tightened up by more ingenious set-ups. Who knows what ingenuity might accomplish, but it does seem to me that there is an insoluble methodological issue here. How can we ever prove that particular patterns of behaviour relate to beliefs about the state of mind of others, and not to similar beliefs in the subjects’ own minds?

It could be that the problem really lies further back: that the questions themselves make no sense. Is it perhaps already fatally anthropomorphic to ask whether other animals have “a theory of mind” or “a conception of their own personhood”? Perhaps these are incorrigibly linguistic ideas that just don’t apply to creatures with no language. If so, we may need to unpick our thinking a bit and identify more purely behavioural ways of framing the questions, ones that are more informative and appropriate.

Monkey see?

Picture: Macaque.

Can monkeys have blindsight? Sean Allen-Hermanson defends the idea in a recent JCS paper. Blindsight is one of those remarkable phenomena that ought to be a key to understanding conscious perception; but somehow we can never quite manage to agree how the key should be turned. Blindsight, to put it very briefly, is where subjects with certain kinds of brain lesions deny seeing something, but can reliably point to it when prompted to try. It’s as though the speaking, self-conscious part of the brain is blind, but some other part, well capable of directing the hand when given a chance, can see as well as ever.

There are a number of ways we might account for blindsight. One of the simplest is to suppose that the visual system is degraded but not destroyed in these cases; the signals from the eye are still getting through, but in some way at reduced power. This reduced power level puts them below the limit required for entry into conscious awareness, but they are still sufficient to bias the subject towards the correct response when they are prompted to guess or have a random try. Another popular theory suggests that the effect arises because there are two separate visual channels, only one of which is knocked out in blindsight. There is a good neurological story which can be told in support of this theory, which weighs strongly in its favour;  against it, there have been reports of analogous phenomena in the case of other senses, where it is harder to sustain the idea of physically separate channels. Allen-Hermanson cites claims for touch, smell and hearing (I’ve wondered in the past whether the celebrated deaf percussionist Evelyn Glennie might be an example of “deafhearing”); and even suggestions that the case of alexithymia, in which things not consciously perceived nevertheless cause anxiety or fear, might be similar.  It’s possible, of course, that blindsight itself comes in more than one form with more than one kind of cause, and that there is something in both these theories – which unfortunately would make matters all the more difficult to elucidate.

For those of us whose main interest is consciousness, blindsight holds out the tantalising possibility of an experimental route into the mystery of qualia, of what it is for there to be a way something looks. It’s tempting to suppose that what is missing in blindsight patients is indeed phenomenal experience. Like the much-discussed zombies, they receive the data from their senses and are able to act on it, but have no actual experience. So if we can work out how blindsight works, we’ve naturalised qualia and the hard problem is cracked…

Well, no, of course it isn’t really that easy. The point about qualia, strictly interpreted, is that they don’t cause actions; qualia-free zombies behave just the same as normal people, and that includes speech behaviour. So the absence of qualia could have no effect on what you say; since whatever blindsight patients are missing does affect what they say, it can’t be qualia. Moreover we have no conclusive evidence that blindsight patients have no visual experience; it could be that they have the experience but are simply unable to report it. That might seem a strange state to be in, but patients with brain damage are known to assert or confabulate all sorts of things which are at odds with the evidence of their senses; indeed there are subjects with Anton’s syndrome who claim with every sign of sincerity to see perfectly when in fact they are demonstrably blind, which is a nice reversal of the blindsight case.

Still, blindsight is a tantalising glimpse of something important about conscious experience, and has all sorts of implications. To pick out one at random, it casts an interesting light on split-brain patients.  In blindsight cases, we can have an apparent disconnect between the knowledge the patient expresses with the voice, and the knowledge expressed with the hand; that’s pretty much what we get in many experiments on split-brain patients (since normally only one hemisphere has use of the vocal apparatus and the other can only express itself by hand movements).  Any claims that split-brain patients are therefore shown to be two different people in a single skull are undercut unless we’re willing to take up the unlikely position that blindsight patients are also split people.

One interesting extension of blindsight research is the apparent discovery by Cowey and Stoerig of the same phenomenon in monkeys. There is an obvious difficulty here, since human blindsight experiments typically rely on the subject to report in words what they can see, something monkeys can’t do. Cowey and Stoerig devised two experiments; in the first the monkeys were trained to touch a screen where a stimulus appeared, and all were able to do this without problems. In the second experiment, the stimulus did not always appear on cue; when it did not, the monkeys were required to press a separate button. Normal monkeys could do this without difficulty, but monkeys with lesions thought to be analogous to those causing blindsight now went wrong when the stimulus appeared in their blind field, hitting the ‘no stimulus’ button. Taking the two experiments together, it was concluded that blindsight was effectively demonstrated: the damaged monkeys, who could earlier touch the right part of the screen even when the stimulus was in their blind field, later ‘reported’ the same stimulus as absent.
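To make the logic of the two tasks concrete, here is a minimal toy sketch in Python. It is entirely my own illustration, not Cowey and Stoerig’s analysis: it simply assumes that a lesioned animal retains a residual ability to localise a stimulus in the affected field while losing whatever signal normally drives a ‘stimulus present’ report. All the names are invented.

```python
# Toy model of the Cowey & Stoerig logic (hypothetical names throughout).
# Task 1: touch the place where the stimulus appeared.
# Task 2: touch the screen if a stimulus appeared, press a separate button if not.

from dataclasses import dataclass

@dataclass
class Monkey:
    lesioned: bool  # True = lesion affecting one half of the visual field

    def localise(self, location):
        # Assumption: residual, non-conscious vision can still guide reaching.
        return location

    def detects(self, location, blind_field="left"):
        # Assumption: the detection/report signal is lost for the blind field.
        if location is None:
            return False
        if self.lesioned and location.startswith(blind_field):
            return False
        return True

def task1(monkey, location):
    """Touch wherever the stimulus appeared."""
    return monkey.localise(location)

def task2(monkey, location):
    """Report whether any stimulus appeared at all."""
    return "touch screen (present)" if monkey.detects(location) else "press button (absent)"

normal, damaged = Monkey(lesioned=False), Monkey(lesioned=True)
print(task1(damaged, "left-upper"))   # still touches the right place...
print(task2(damaged, "left-upper"))   # ...yet 'reports' that nothing was there
print(task2(normal, "left-upper"))    # control animal reports the stimulus as present
```

The tension between the two behaviours of the damaged animal in this toy is exactly the pattern that was read as blindsight.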

(Readers may wonder about the ethical propriety of damaging the brains of living primates for these experiments; I haven’t read the original papers, but I suppose we must assume that at any rate the experiments had medical as well as merely philosophical value.)

Of course, these experiments differ significantly from those carried out on human subjects, and as Allen-Hermanson reports, reasonable doubts were subsequently raised in a 2006 paper by Mole and Kelly, who pointed out that relying on two separate experiments, which made differing demands, rendered the results inconclusive. In particular, the second task was more complex than the first, and it could plausibly be argued that, in coping with this additional complexity, the monkeys simply failed to notice in the second experiment the stimulus they had picked up successfully in the first.

Allen-Hermanson’s aim is to rescue Cowey and Stoerig’s conclusions, while acknowledging the validity of the criticisms. He proposes a new experiment: first the monkeys are trained to press a green button if there is a stimulus (no need to point to where it is any more), and a red one if there is none. Then we introduce two different stimuli: Xs and Os. Both the green and red buttons are now divided in two, one side labelled for X, the other for O. If there is a stimulus, the monkeys must now press either green X or green O depending on which appeared; if there is no stimulus, they can press either red button. Allen-Hermanson believes the blindsighted monkeys will consistently press the red X when the stimulus is an X, even though by pressing red they are effectively asserting that there is no stimulus.
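The predicted response pattern is easy to lay out explicitly. The sketch below is my own rendering of the design as I understand it from Allen-Hermanson’s description; the function and labels are illustrative rather than anything taken from the paper.

```python
# Predicted button presses in the proposed experiment (illustrative only).
# Buttons: green = 'stimulus present', red = 'no stimulus';
# each button is split into an X half and an O half.

def predicted_press(stimulus, consciously_seen):
    """stimulus: 'X', 'O' or None; consciously_seen: did the animal detect it?"""
    if stimulus is None:
        return "red X or red O"          # nothing there: either red half will do
    if consciously_seen:
        return f"green {stimulus}"       # normal case: present and correctly identified
    # Blindsight prediction: the animal denies the stimulus (red) yet still
    # discriminates its identity, pressing the matching half of the red button.
    return f"red {stimulus}"

print(predicted_press("X", consciously_seen=True))    # sighted field -> green X
print(predicted_press("X", consciously_seen=False))   # blind field   -> red X (the key prediction)
print(predicted_press(None, consciously_seen=False))  # no stimulus   -> red X or red O
```

The diagnostic case is the second one: a consistent preference for the matching half of the red button would show discrimination without acknowledgement.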

Maybe. I can’t help feeling that all the monkeys will be puzzled by a task which effectively asks them to state whether a stimulus is present, and then, if not present, say whether it was an X or an O. The experiment has not been carried out; but Allen-Hermanson goes on to suggest that Mole and Kelly’s alternative hypothesis is actually implausible on other grounds. On their interpretation, for example, the blindsighted monkeys simply fail to notice a stimulus in their blind field: yet it has been demonstrated that they cannot recognise objects as salient in monkey terms as ripe fruit when these are presented to the blind field – so it seems unlikely that we’re dealing with something as simple as inattention.

What would it mean if monkeys did have blindsight? It would seem to show, at least, that monkeys are not automata; that they do have something which corresponds to at least one important variety of human consciousness. Allen-Hermanson proposes working further along the mammalian line, and he seems to expect that mammals and even some other vertebrates would yield similar results (he draws the line at toads).

At any rate, we’re left feeling that human consciousness is not as unique as it might have seemed.  I can’t help also feeling more strongly than before that the really unique feature of human awareness is the way it is shot through with language; we may not have the only form of consciousness, but we certainly seem to have the talkiest.

Libet was wrong…?

Picture: clock on screen.

One of the most frequently visited pages on Conscious Entities is this account of Benjamin Libet’s remarkable experiments, which seemed to show that decisions to move were really made half a second before we were aware of having decided. To some this seemed like a practical disproof of the freedom of the will – if the decision was already made before we were consciously aware of it, how could our conscious thoughts have determined what the decision was? Libet’s findings have remained controversial ever since they were published; they have been attacked from several different angles, but his results were confirmed and repeated by other researchers and seemed solid.

However, Libet’s conclusions rested on the use of Readiness Potentials (RPs). Earlier research had shown that the occurrence of an RP in the brain reliably indicated that a movement was coming along just afterwards, and RPs were therefore seen as a neurological sign that the decision to move had been taken (Libet himself found that the movement could sometimes be suppressed after the RP had appeared, but this possibility, which he referred to as ‘free won’t’, seemed only to provide an interesting footnote). The new research, by Trevena and Miller at Otago, undermines the idea that RPs indicate a decision.

Two separate sets of similar experiments were carried out. They resembled Libet’s original ones in most respects, although computer screens and keyboards replaced Libet’s more primitive equipment, and the hand movement took the form of a key-press. A clock face similar to that in Libet’s experiments was shown, and they even provided a circling dot. In the earlier experiments this had provided an ingenious way of timing the subject’s awareness that a decision had been made – the subject would report the position of the dot at the moment of decision – but in Trevena and Miller’s research the clock and dot were provided only to make conditions resemble Libet’s as much as possible. Subjects were told to ignore them (which you might think rendered their inclusion pointless). This was because, instead of allowing the subject to choose their own time for action as in Libet’s original experiments, the subjects in the new research were prompted by a randomly-timed tone. This is obviously a significant change from the original experiment; the reason for doing it this way was that Trevena and Miller wanted to be able to measure occasions when the subject decided not to move as well as those when there was movement.

Some of the subjects were told to strike a key whenever the tone sounded, while the rest were asked to do so only about half the time (it was left up to them to select which tones to respond to, though if they seemed to be falling well below a 50-50 split they got a reminder in the latter part of the experiment). Another significant difference from Libet’s tests was that both left and right hands were used: in one set of experiments the subjects were told by a letter in the centre of the screen whether they should use the right or left hand on each occasion; in the other it was left up to them.

There were two interesting results. One was that the same kind of RP appeared whether the subject pressed a key or not. Trevena and Miller say this shows that the RP was not, after all, an indication of a decision to move, and was presumably instead associated with some more general kind of sustained attention or preparation for a decision. Second, they found that a different kind of RP, the Lateralised Readiness Potential or LRP, which indicates readiness to move a particular hand, did track the decision, appearing only where a movement followed; but the LRP did not appear until just after the tone. This suggests, in contradiction to Libet, that the early stages of action followed the conscious experience of deciding, rather than preceding it.
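The inferential step here is simple enough to state as a check one could run over the trial data. The sketch below is a schematic of the reasoning as I read it, with invented field names and made-up illustrative trials, not Trevena and Miller’s actual analysis: it asks of each signal whether its presence distinguishes movement trials from no-movement trials, and whether its onset precedes the prompt.

```python
# Schematic of the inference from the Trevena & Miller results (illustrative data only).
# Each trial records: RP seen?, LRP seen?, LRP onset relative to the tone, key pressed?

trials = [
    {"rp": True, "lrp": True,  "lrp_onset_ms_after_tone": 120,  "moved": True},
    {"rp": True, "lrp": False, "lrp_onset_ms_after_tone": None, "moved": False},
    {"rp": True, "lrp": True,  "lrp_onset_ms_after_tone": 90,   "moved": True},
    {"rp": True, "lrp": False, "lrp_onset_ms_after_tone": None, "moved": False},
]

def marks_a_decision(trials, signal):
    """A signal only marks a decision if it appears on movement trials and not otherwise."""
    on_move = {t[signal] for t in trials if t["moved"]}
    on_stay = {t[signal] for t in trials if not t["moved"]}
    return on_move == {True} and on_stay == {False}

print(marks_a_decision(trials, "rp"))    # False: the RP appears either way
print(marks_a_decision(trials, "lrp"))   # True: the LRP tracks the decision...
print(all(t["lrp_onset_ms_after_tone"] > 0       # ...but only begins after the tone,
          for t in trials if t["moved"]))        # i.e. after the prompt to decide
```

On this reading the RP looks like general preparation or attention, while the only signal that does discriminate arrives too late to support Libet’s original claim.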

The differences between these new experiments and Libet’s originals provide a weak spot which Libetians will certainly attack.  Marcel Brass, whose own work with fMRI scanning confirmed and even extended Libet’s delay, seeming to show that decisions could be predicted anything up to ten seconds before conscious awareness, has apparently already said that in his view the changes undermine the conclusions Trevena and Miller would like to draw. Given the complex arguments over the exact significance of timings in Libet’s results, I’m sure the new results will prove contentious. However, it does seem as if a significant blow has been struck for the first time against the foundations of Libet’s remarkable results.