Quantum Cognition

Picture: cats jailed. We know all about the theory espoused by Roger Penrose and Stuart Hameroff: that consciousness is generated by a novel (and somewhat mysterious) form of quantum mechanics occurring in the microtubules of neurons. For Penrose this helps explain how consciousness is non-computational, a result for which he has provided a separate proof.

I have always taken it that this theory is intended to explain consciousness as we know it; but there are those who also think that quantum theory can provide a model which helps explain certain non-rational features of human cognition. This feature in the Atlantic nicely summarises what some of them say.

One example of human irrationality that might be quantumish is apparently provided by the good old Prisoner’s Dilemma. Two prisoners who co-operated on a crime are offered a deal: if one rats, that one goes free while the other serves three years; if both rat, they serve two years each; if neither does, they each serve one year. From a selfish point of view it’s always better to rat, even though overall the no-rat strategy leads to the least total time in jail for the pair. Rationally, everyone should always rat, but in fact people quite often act against their selfish interest by failing to do so. Why would that be?
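The payoff structure described above can be made concrete in a short sketch (sentences as stated: one rats, he goes free and the other serves three years; both rat, two each; neither rats, one each):

```python
# Prisoner's Dilemma payoffs from the text: years served by (me, other).
# Strategies: "rat" (defect) or "silent" (co-operate).
SENTENCES = {
    ("rat", "rat"): (2, 2),        # both rat: two years each
    ("rat", "silent"): (0, 3),     # I rat, the other doesn't: I go free
    ("silent", "rat"): (3, 0),     # the other rats: I serve three years
    ("silent", "silent"): (1, 1),  # neither rats: one year each
}

def my_years(me, other):
    """Years I serve given my choice and the other prisoner's choice."""
    return SENTENCES[(me, other)][0]

# Whatever the other prisoner does, ratting costs me fewer years...
for other in ("rat", "silent"):
    assert my_years("rat", other) < my_years("silent", other)

# ...yet mutual silence minimises the pair's total jail time.
totals = {pair: sum(years) for pair, years in SENTENCES.items()}
assert min(totals, key=totals.get) == ("silent", "silent")
```

The two assertions capture the dilemma exactly: ratting strictly dominates for each prisoner individually, while mutual silence is collectively best.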

Quantum theorists suggest that it makes more sense if we think of the prisoners as being in a superposition of states between ratting and not ratting, just as Schroedinger’s cat superposes life and death. Instead of contemplating the possible outcomes separately, we see them productively entangled (no, I’m not sure I quite get it, either).

There is of course another explanation: if the prisoners see the choice not as a one-off but as one in a series of similar trade-offs, the balance of advantage may shift, because those who are known to rat will be punished while those who don’t may be rewarded with co-operation. Indeed, since people who seek to establish implicit agreements to co-operate over such problems will tend to do better overall in the long run, we might expect such behaviour to have positive survival value and be favoured by evolution.
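The repeated-game point can be illustrated with a minimal simulation, assuming (my choice of rule, not anything from the text) that co-operators punish known rats by withholding co-operation in later rounds, tit-for-tat style:

```python
# Toy iterated Prisoner's Dilemma: total years served over many rounds
# (lower is better). Payoffs as in the one-shot version above.
YEARS = {("rat", "rat"): 2, ("rat", "silent"): 0,
         ("silent", "rat"): 3, ("silent", "silent"): 1}

def play(strategy_a, strategy_b, rounds=50):
    """Run repeated rounds; each strategy sees the opponent's history."""
    hist_a, hist_b = [], []
    total_a = total_b = 0
    for _ in range(rounds):
        a = strategy_a(hist_b)
        b = strategy_b(hist_a)
        hist_a.append(a)
        hist_b.append(b)
        total_a += YEARS[(a, b)]
        total_b += YEARS[(b, a)]
    return total_a, total_b

def always_rat(opponent_history):
    return "rat"

def tit_for_tat(opponent_history):
    # Co-operate at first; thereafter copy the opponent's last move.
    return opponent_history[-1] if opponent_history else "silent"

coop_years, _ = play(tit_for_tat, tit_for_tat)   # mutual co-operation
rat_years, _ = play(always_rat, always_rat)      # mutual defection
assert coop_years < rat_years  # co-operators do better in the long run
```

Over 50 rounds two tit-for-tat players serve 50 years each, while two habitual rats serve 100 each, which is the sense in which the co-operative disposition pays in repeated encounters.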

A second example of quantum explanation is provided by the fact that question order can affect responses. There’s an obvious explanation for this if one question affects the context by bringing something to the forefront of someone’s mind. Asking someone whether they plan to drive home before asking them whether they want another drink may produce different results from asking the other way round, for reasons that are not really at all mysterious. However, it’s not always so clear-cut, and research demonstrates that a quantum model based on complementarity is really pretty good at making predictions.
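The core of such models is that answering a question is treated as a measurement that projects the belief state onto that question’s “yes” subspace; if the two questions’ projectors don’t commute, the joint probability of two “yes” answers depends on the order they are asked. A minimal sketch (the angles are arbitrary illustrative choices, not from any published model):

```python
import numpy as np

def yes_projector(theta):
    """Projector onto a 'yes' axis at angle theta in a 2-D belief space."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

psi = np.array([1.0, 0.0])   # initial belief state (unit vector)
P_a = yes_projector(0.3)     # question A's "yes" projector (hypothetical angle)
P_b = yes_projector(1.0)     # question B's "yes" projector (hypothetical angle)

def prob_yes_then_yes(P_first, P_second, state):
    """P('yes' to the first question, then 'yes' to the second)."""
    return float(np.linalg.norm(P_second @ P_first @ state) ** 2)

p_ab = prob_yes_then_yes(P_a, P_b, psi)  # ask A first, then B
p_ba = prob_yes_then_yes(P_b, P_a, psi)  # ask B first, then A
assert abs(p_ab - p_ba) > 1e-6           # order changes the joint probability
```

Because the first projection rotates the state before the second measurement, the two orderings give different probabilities, which is the qualitative signature of the question-order effect; a classical joint probability of two fixed events could not depend on the order in which they are read off.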

How seriously are we to take this? Do we actually suppose that exotic quantum events in microtubules are directly responsible for the ‘irrational’ decisions? I don’t know exactly how that would work and it seems rather unlikely. Do we go to the other extreme and assume that the quantum explanations are really just importing a useful model – that they are in fact ultimately metaphorical? That would be OK, except that metaphors typically explain the strange by invoking something understood. It’s a little weird to suppose we could helpfully explain the incomprehensible world of human motivation by appealing to the readily understood realm of quantum physics.

Perhaps it’s best to simply see this as another way of thinking about cognition, something that surely can’t be bad?

Beyond Libet

Picture: dials. Libet’s famous experiments are among the most interesting and challenging in neuroscience; now they’ve been taken further. A paper by Fried, Mukamel and Kreiman in Neuron (with a very useful overview by Patrick Haggard) reports on experiments with a number of epilepsy patients in whom it was ethically possible to implant electrodes and hence to read off the activity of individual neurons, giving a vastly more precise picture than anything achievable by other means. In other respects the experiments broadly followed the design of Libet’s own, using a similar clock-face approach to measure the time when subjects felt they decided to press a button. Libet discovered that a Readiness Potential (RP) could be detected as much as half a second before the subject was conscious of deciding to move; the new experiments show that data from a population of 250 neurons in the SMA (the Supplementary Motor Area) were sufficient to predict the subject’s decision 700 ms in advance of the subject’s own awareness, with 80% accuracy.

The more detailed picture which these experiments provide helps clarify some points about the relationship between the pre-SMA and SMA proper, and suggests that the sense of decision reported by subjects is actually the point at which a growing decision starts to be converted into action, rather than the beginning of the decision-forming process, which stretches back further. This may help to explain the results from fMRI studies which have found the precursors of a decision much earlier than 500 ms beforehand. There are also indications that a lot of the activity in these areas might be more concerned with suppressing possible actions than initiating them – a finding which harmonises nicely with Libet’s own idea of ‘free won’t’: that we might not be able to control the formation of impulses to act, but could still suppress them when we wanted.

For us, though, the main point of the experiments is that they appear to provide a strong vindication of Libet and make it clear that we have to engage with his finding that our decisions are made well before we think we’re making them.

What are we to make of it all then? I’m inclined to think that the easiest and most acceptable way of interpreting the results is to note that making a decision and being aware of having made a decision are two different things (and being able to report the fact may be yet a third). On this view we first make up our minds; then the process of becoming aware of having done so naturally takes some neural processing of its own, and hence arrives a few hundred milliseconds later.

That would be fine, except that we strongly feel that our decisions flow from the conscious process, that the feelings we are aware of, and could articulate aloud if we chose, are actually decisive. Suppose I am deciding which house to buy: house A involves a longer commute while house B is in a less attractive area. Surely I would go through something like an internal argument or assessment, totting up the pros and cons, and it is this forensic process in internal consciousness which causally determines what I do? Otherwise why do I spend any time thinking about it at all – surely it’s the internal discussion that takes time?

Well, there is another way to read the process: perhaps I hold the two possibilities in mind in turn: perhaps I imagine myself on the long daily journey or staring at the unlovely factory wall. Which makes me feel worse? Eventually I get a sense of where I would be happiest, perhaps with a feeling of settling one alternative and so of what I intend to do. On this view the explicitly conscious part of my mind is merely displaying options and waiting for some other, feeling part to send back its implicit message. The talky, explicit part of consciousness isn’t really making the decision at all, though it (or should I say ‘I’?) takes responsibility for it and is happy to offer explanations.

Perhaps both processes are involved in different decisions to different degrees. Some purely rational decisions may indeed happen in the explicit part of the mind, but in others – and Libet’s examples would be in this category – things have to feel right. The talky part of me may choose to hold up particular options and may try to nudge things one way or another, but it waits for the silent part to plump.

Is that plausible? I’m not sure. The willingness of the talky part to take responsibility for actions it didn’t decide on, and even to confect and confabulate spurious rationales, is very well established (albeit typically in cases with brain lesions); but introspectively I don’t like the idea of two agents being at work. I’d prefer it to be one agent using two approaches, or two sets of tools – but I’m not sure that does the job of accounting for the delay which was the problem in the first place…

(Thanks to Dale Roberts!)