Unconscious decisions

Benjamin Libet’s experimental finding that decisions had in effect already been made before the conscious mind became aware of making them is both famous and controversial; now new research (published as a ‘Brief Communication’ in Nature Neuroscience by Chun Siong Soon, Marcel Brass, Hans-Jochen Heinze and John-Dylan Haynes) goes beyond it. Whereas the delay between decision and awareness detected by Libet was about 500 milliseconds, the new research seems to show that decisions can be predicted up to ten seconds before the deciders are aware of having made up their minds.

The breakthrough here, like so many recent advances, comes from the use of fMRI scanning. Libet could only measure electrical activity, and had to use the Readiness Potential (RP) as an indicator that a decision had been made: the new research can go much further, mapping activity in a number of brain regions and applying statistical pattern recognition techniques to see whether any of it could be used to predict the subject’s decision.
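To make the logic of that decoding step concrete, here is a minimal sketch, not the authors’ actual pipeline (the paper’s pattern-classification analysis of real scans is considerably more sophisticated), using made-up voxel data and a simple linear classifier from scikit-learn: if a classifier trained on activity recorded well before the reported moment of decision can predict the eventual button choice on held-out trials at better than chance, then that early activity carries information about the decision.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Made-up data standing in for fMRI: one row of voxel activations per trial,
# notionally recorded some seconds before the reported moment of decision,
# plus the choice (0 = left button, 1 = right button) the subject went on to make.
n_trials, n_voxels = 120, 50
choices = rng.integers(0, 2, size=n_trials)
pattern = rng.normal(size=n_voxels)  # a weak choice-related spatial pattern
voxels = np.outer(choices - 0.5, pattern) + rng.normal(scale=2.0, size=(n_trials, n_voxels))

# Cross-validated decoding: train on some trials, predict the rest.
# Accuracy reliably above chance (0.5) means the 'early' activity predicts the choice.
clf = LinearSVC(max_iter=10_000)
accuracy = cross_val_score(clf, voxels, choices, cv=10).mean()
print(f"cross-validated decoding accuracy: {accuracy:.2f} (chance = 0.50)")
```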

The design of the experiments varied slightly from Libet’s original ingenious set-up. This time a series of letters was displayed on a screen. The subjects were asked to press either a right or a left button at a moment of their own choosing; they then identified the letter which had been displayed at the moment they felt themselves deciding to press right or left. In the main series of experiments, no time constraints were imposed.

Two regions proved to show activity which predicted the subject’s choice: primary motor cortex and the Supplementary Motor Area (SMA) – the SMA being the source of the RP which Libet used in his research. In the SMA the researchers found activity which predicted the decision some five seconds before the moment of conscious awareness, but the earliest signs appeared elsewhere – in the frontopolar cortex and the precuneus. Here the subject’s decision could be seen as much as seven seconds ahead of time: allowing for the few seconds by which the fMRI response lags the underlying neural activity, this tots up to a real figure of ten seconds. One contrast with earlier findings is that there was no activation of the dorsolateral prefrontal cortex: the researchers hypothesise that this was because the design of their experiment did not require subjects to remember previous button presses. Another difference, of course, is the huge delay of five seconds in the SMA, which one would have expected to be comparable with the findings of earlier, RP-based research. Here the suggested explanation is that in the new experiments the timing of button presses was wholly unconstrained, so that there was more time for activity to build up. The sluggishness of the fMRI signal apparently means that the possibility of additional activity within the last few hundred milliseconds cannot be excluded; I conjecture that this offers another possible explanation: perhaps the RP studies were actually detecting a late spike which the fMRI could not pick up.

The experimenters also ran a series of experiments in which the subject chose left or right at a pre-determined time; this does not seem to have shortened the delays, but it revealed a difference between the activation in the frontopolar cortex and the precuneus: briefly, it looks as if the former peaks at the earliest stage, with the precuneus ‘storing’ the decision through more continuous activation.

What is the significance of these new findings? The researchers suggest the results do three things: they show that the predictive activity is not confined to areas closely associated with motor activity, but begins in ‘higher’ areas; they demonstrate clearly that the activity relates to identifiable decisions, not just general preparation; and they rule out one of the main lines of attack on Libet’s findings, namely that the small delay observed is a result of mistiming, error, or misunderstanding of the chronology. That seems correct – a variety of arguments of differing degrees of subtlety have been launched against the timings of Libet’s original work. Although Libet himself was scrupulous about demonstrating solid reasons for his conclusions, it always seemed that a delay of a few hundred milliseconds might be attributable to some sort of error in the book-keeping, especially since timing a decision is obviously a tricky business. A delay of ten seconds is altogether harder to explain away.

However, it seems to me that while the new results close off one line of attack, they reinforce another – the claim that these experiments do not represent normal decision making. We do not typically make arbitrary decisions at a randomly chosen moment, and it can therefore fairly be argued that the research has narrower implications than might appear, or even that the results are merely a strange by-product of the peculiar mental processes the subjects were asked to undertake. So long as the delay was restricted to half a second, it was intuitively believable that all our normal decisions were subject to a similar time-lag – surprising, but believable. A delay of ten seconds in normal conscious thought is not credible at all; it’s easy to think of cases where an unexpected contingency arises and we act on it thoughtfully and consciously within much shorter periods than that.

The researchers might well bite the bullet so far as that goes, accepting that their results show only that the delay can be as long as ten seconds, not that it invariably is. Libet himself, had he lived to see these results, might have been tempted to elaborate his idea of ‘free won’t’ – that while decisions build up in our brains for a period before we are aware of them, the conscious mind retains a kind of veto at the last moment.

What would be best of all, of course, is further research into decisions made in more lifelike circumstances, though devising a way to identify and time decisions accurately in such settings is something of a challenge.

In the meantime, is this another blow to the idea of free will generally? The research will certainly hearten hard determinists, but personally I remain a compatibilist. I think making a decision and becoming aware of having made that decision are two different things, and I have no deep problem with the idea that they may occur at different times. The delay between decision and awareness does not mean the decision wasn’t ours, any more than the short delay before we hear our own voice means we didn’t intend what we said. Others, I know, will feel that this relegates consciousness to the status of an epiphenomenon.

More not the same

Chris Chatham has gamely picked up on my remarks about his interesting earlier piece, which set out a number of significant differences between brains and computers. We are, I think, somewhere between 90 and 100 percent in agreement about most of this, but Chris has come back with a defence of some of the things I questioned. So it seems only right that I should respond in turn and acknowledge some overstatement on my part.

Chris points out that “processing speed” is a well-established psychometric construct. I must concede that this is true: the term covers various sound and useful measures of speed of recognition and the like, so I was too sweeping when I said that ‘processing speed’ had no useful application to the brain. That said, this kind of measurement of humans is really a measurement of performance rather than directly of the speed of internal working, as it would be when applied to computers. Chris also mentions some other speed constraints in the brain – things like the rate at which neurons fire and the speed of transmission along nerves – which are closer to what ‘processing speed’ means in computers (though not that close, as he was saying in the first place). In passing, I wonder how much connection there is between the two kinds of speed constraint in humans. The speed of performance of a PC is strongly affected by its clock speed; but do variations in rates of neuron firing have a big influence on human performance? It seems intuitively plausible that in older people neurons might take slightly longer to get ready to fire again, and that this might make some contribution to longer reaction times and the like (I don’t know of any research on the subject), but otherwise I suspect that differences in performance speed between individual humans arise from other factors.

In a nutshell, Chris is right when he says that Peter is probably uncomfortable with equating “sparse distributed representation” with “analog”. To me it looks like quite a different thing from what used to go on in analog computers, where a particular value might be represented by a particular level of current. The whole topic of mental representation is a scary one for me in any case: if Chris wanted a twelfth difference to add to his list, I think he could add ‘Computers don’t really do representation’. That may seem an odd thing to say about machines that are all about manipulating symbols; but nothing in a computer represents anything except in so far as a human or humans have deemed or designed it to represent something. The human brain, by contrast, somehow succeeds in representing things to itself, and then to other humans, and manages to confer representational qualities on noises, marks on paper – and computers. This surely remains one of the bogglingly strange abilities of the mind, and it’s unlikely the computer analogy is going to help much in understanding it.
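Since the analog/sparse-distributed contrast is easy to state and hard to picture, here is a toy sketch of my own (nothing from Chris’s piece, and not a claim about how neurons actually encode anything): an ‘analog’ representation in which the value simply is a level, next to a made-up sparse distributed code in which a value is spread over a few active units out of many, so that similar values share active units.

```python
import numpy as np

def analog_encoding(value):
    """'Analog' style: the quantity is represented directly as a level."""
    return float(value)

def sparse_distributed_encoding(value, n_units=256, n_active=16):
    """Toy sparse distributed code for a value in [0, 1]: a small block of
    active units out of many, whose position slides with the value, so that
    nearby values overlap in their active units and distant values do not."""
    start = int(round(value * (n_units - n_active)))
    code = np.zeros(n_units, dtype=int)
    code[start:start + n_active] = 1
    return code

def overlap(a, b):
    """Count of units active in both codes: a crude similarity measure."""
    return int(np.sum(a & b))

codes = {v: sparse_distributed_encoding(v) for v in (0.50, 0.52, 0.90)}
print("analog levels:", [analog_encoding(v) for v in (0.50, 0.52, 0.90)])
print("overlap(0.50, 0.52):", overlap(codes[0.50], codes[0.52]))  # large: similar values
print("overlap(0.50, 0.90):", overlap(codes[0.50], codes[0.90]))  # zero: dissimilar values
```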

I do accept that in some intuitive sense the brain can be described as ‘massively parallel’ – people who so describe it are trying to put their finger on a characteristic of the brain which is real and important. But the phrase is surely drawn from massively parallel computing, which really isn’t very brain-like. ‘Parallel’ is a good way of describing how different bits of a process can be shepherded in an orderly manner through different CPUs or computers and then reintegrated; in the brain, it looks more as if the processes are going off in all directions, constantly interfering with each other, and reaching no definite conclusion. How all this results in an ordered and integrated progression of conscious experience is of course another notorious boggler, which we may solve if we can get a better grasp of how this ‘massively parallel’ – I’d rather say ‘luxuriantly non-linear’ – way of operating works. My fear is that use of the phrase ‘massively parallel’ risks deluding us into thinking we’ve got the gist already.
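For the orderly sort of parallelism I have in mind, a minimal sketch (my own illustration, nothing to do with Chris’s piece): the work is carved into independent chunks, shepherded through separate worker processes, and then neatly reintegrated at the end, which is precisely the tidy choreography the brain does not seem to exhibit.

```python
from multiprocessing import Pool

def partial_sum_of_squares(chunk):
    """Each worker handles one tidy, independent slice of the problem."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(100_000))
    chunks = [data[i::4] for i in range(4)]   # carve the work into four pieces
    with Pool(processes=4) as pool:
        partials = pool.map(partial_sum_of_squares, chunks)  # farm them out in parallel
    print(sum(partials))                      # reintegrate the partial results
```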

Whatever the answer there, Chris’s insightful remarks and the links he provided have certainly improved my grasp of the gist of things in a number of areas, for which I’m very grateful.

Not the same

Chris Chatham has a nice summary of ten key differences between brains and computers which is well worth a read. Briefly, the list is:

  • Brains are analogue; computers are digital
  • The brain uses content-addressable memory
  • The brain is a massively parallel machine; computers are modular and serial
  • Processing speed is not fixed in the brain; there is no system clock
  • Short-term memory is not like RAM
  • No hardware/software distinction can be made with respect to the brain or mind
  • Synapses are far more complex than electrical logic gates
  • Unlike computers, processing and memory are performed by the same components in the brain
  • The brain is a self-organizing system
  • Brains have bodies
  • The brain is much, much bigger than any [current] computer

Actually eleven differences – there’s a bonus one in there.

Hard to argue with most of these, and there are some points among them which are well worth making. There is always a danger, when comparing the capacities of brains and computers, of assuming a similarity even when trying to point up the contrast. There have, for example, been many attempts over the years to estimate the storage capacity of the brain, or of memory, always concluding that it is huge; but a figure in megabytes doesn’t really make much sense. Asking how many bytes there are in the memory is like asking how many pixels Leonardo needed to do the Mona Lisa: it’s not like that. Chris generally steers just clear of this danger, although I’d be more inclined to say, for example, that the concept of processing speed has no useful application in the brain rather than that it isn’t fixed.

I wonder a bit about some of the positive assertions he makes. Are brains analogue? Granted they’re not digital, at least not in the straightforward way that a digital computer is, but unless we take ‘analogue’ simply as a synonym for ‘non-digital’ it’s not really clear to me that they are. I take digital and analogue to be two different ways of representing real-world quantities; I don’t think we really know exactly how the brain represents things at the moment. It’s possible that when we do know, the digital/analogue distinction may seem to be beside the point.

And are brains ‘massively parallel’? It’s a popular phrase, but one of the older bits of this site long ago looked at a few reasons why the resemblance to massively parallel computing seems slight. In fairness, when people make this assertion they aren’t really saying that the brain is like a parallel processing set-up; they’re trying to describe a quality of the brain for which there is no good word, namely that things seem to be going on all over it at the same time. Chris is really warning against an excessively modular view. Once again we can agree that the brain is unlike computers – this time in the way they funnel data and instructions together in one or more processors; but the positive comparison is more problematic.

There’s some underlying scope for confusion, too, about what we mean when we assert that brains are not computers. We could just intend the trivial point that there isn’t actually a physical PC in our heads. More plausibly, we could mean that the brain doesn’t have the same general architecture and functional features as a computer (which I think is roughly what Chris means). We could mean that the brain doesn’t do stuff we could easily recognise as computation, although it might be functionally similar in the sense of producing the same input-to-output relationships as a computed process. We might mean that it doesn’t do stuff we could easily recognise as computation, and that there appears to be no non-trivial way of deriving algorithms which would do the same thing. We might go one further and assert, as Roger Penrose does, that some of what the brain is doing is non-computable in the same sort of way as the tiling problem (though here again we have to ask whether it is really like that, since the question of computability seems to assume the brain is typically solving problems and seeking proofs). Finally, we could be saying that the brain has altogether mysterious properties of free will and phenomenal experience which go beyond anything in our current understanding of the physical world, and ergo far beyond anything a mere computer might possess.

A good thought-provoking discussion, in any case.