Not Exactly Free

I see that this piece on nature.com has drawn quite a bit of attention. It provides a round-up of views on the question of whether free will can survive in a post-Libet world, though it highlights more recent findings along similar lines by John-Dylan Haynes and others. The piece seems to be prompted in part by Big Questions in Free Will, a project funded by the John Templeton Foundation, which is probably best known for the Templeton Prize, a very large amount of cash which gets given to respectable scientists who are willing to say that the universe has a spiritual dimension, or at any rate that materialism is not enough. BQFW itself is offering funding for theology as well as science: “science of free will ($2.8 million); theoretical underpinnings of free will, round 1 ($165,000); and theology of free will, round 1 ($132,000)”. I suppose ‘theoretical underpinnings’, if it’s not science and not theology, must be philosophy; perhaps they called it that because they want some philosophy done but would prefer it not to be done by a philosopher. In certain lights that would be understandable. The presence of theology in the research programme may not be to everyone’s taste, although what strikes me most is that it seems to have got the raw end of the deal in funding terms. I suppose the scientists need lots of expensive kit, but on this showing it seems the theologians don’t even get such comfortable armchairs as the theorists, which is rough luck.

We have of course discussed the Haynes results and Libet, and other related pieces of research many times in the past. I couldn’t help wondering whether, having all this background, I could come up with something on the subject that might appeal to the Templeton Foundation and perhaps secure me a modest emolument. Unfortunately most of the lines one could take are pretty well-trodden already, so it’s difficult to come up with an appealing new presentation, let alone a new argument. I’m not sure I have anything new to say. So I’ve invited a couple of colleagues to see what they can do.

Bitbucket: Free will is nonsense; I’m not helping you come up with further ‘compatibilist’ fudging if that’s what you’re after. What I can offer you is this: it’s not just that libertarians have the wrong answer, the question doesn’t even make sense. The way the naturenews piece sets up the discussion is to ask: how can you have free will if the decision was made before you were even aware of it? The question I’m asking is: what the hell is ‘you’?

Both Libet’s original and the later experiments are cleverly designed to allow subjects to report the moment at which they became aware of the decision: but ‘they’ are thereby implicitly defined as whatever it is that is doing the reporting. We assume without question that the reporting thing is the person, and then we’re alarmed by the fact that some other entity made the decision first. But we could equally well take the view that the silent deciding entity is the person and be unsurprised that a different entity reports it later.

You will say in your typically hand-waving style, I expect, that that can’t be right because introspection or your ineffable sense of self or something tells you otherwise. You just feel like you are the thing that does the reporting. Otherwise when words come out of your mouth it wouldn’t be you talking, and gosh, that can’t be right, can it?

Well, let me ask you this. Suppose you were the decision-making entity, how would it seem to you? I submit it wouldn’t seem any way, because as that entity you don’t do seeming-to: you just do decisions. You only seem to yourself to have made the decision when it gets seemed back to you by a seeming entity – in fact, by that same reporting entity. In short, because all reports of your mental activity come via the reporting entity, you mistake it for the source of all your mental activity. In fact all sorts of mental processes are going on all over and the impression of a unified consistent centre is a delusion. At this level, there is no fixed ‘you’ to have or lack free will. Libet’s experiments merely tell us something interesting but quite unworrying about the relationship of two concurrent mental modules.

So libertarians ask: do we have free will? I reply that they have to show me the ‘we’ that they’re talking about before they even get to ask that question – and they can’t.

Blandula: Not much of a challenge to come up with something more appealing than that! I’ve got an idea the Templeton people might like: Dennettian theology.

You know, of course, Dennett’s idea of stances. When we’re looking to understand something we can take various views. If we take the physical stance, we just look at the thing’s physical properties and characteristics. Sometimes it pays to move on to the design stance: then we ask ourselves, what is this for, how does it work? This stance is productive when considering artefacts and living things, in the main. Then in some cases it’s useful to move on to the intentional stance, where we treat the thing under consideration as if it had plans and intentions and work out its likely behaviour on that basis. Obviously people and some animals are suitable for this, but we also tend to apply the same approach to various machines and natural phenomena, and that’s OK so long as we keep a grip.

But those three stances are clearly an incomplete set. We could also take up the Stance of Destiny: when we do that we look at things and ask ourselves: was this always going to happen? Is this inevitable in some cosmic sense? Was that always meant to be like that? I think you’ll agree that this stance sometimes has a certain predictive power: I knew that was going to happen, you say: it was, as it were, SoD’s Law.

Now this principle gives us by extrapolation an undeniable God – the God who is the intending, the destining entity. Does this God really exist? Well, we can take our cue from Dennett: like the core of our personhood in his eyes, it doesn’t exist as a simple physical thing you can lay your hands on: but it’s a useful predictive tool and you’d be a fool to overlook it, so in a sense it’s real enough: it’s a kind of explanatory centre of gravity, a way of summarising the impact of millions of separate events.

So what about free will? Well of course, one thing you can say about a free decision is that it wasn’t destined. How does that come about? I suggest that the RPs Libet measured are a sign of de-destination, they are, as it were, the autopilot being switched off for a moment. Libet himself demonstrated that the impending action could be vetoed after the RP, after all. Most of the time we run on destined automatic, but we have a choice. The human brain, in short, has a unique mechanism which, by means we don’t fully understand, can take charge of destiny.

I think my destiny is to hang on to the day job for the time being.

Thatter way to consciousness

‘Aping Mankind’ is a large-scale attack by Raymond Tallis on two reductive dogmas which he characterises as ‘Neuromania’ and ‘Darwinitis’. He wishes especially to refute the identification of mind and brain, and as an expert on the neurology of old age, his view of the scientific evidence carries a good deal of weight. He also appears to be a big fan of Parmenides, which suggests a good acquaintance with the philosophical background. It’s a vigorous, useful, and readable contribution to the debate.

Tallis persuasively denounces exaggerated claims made on the basis of brain scans, notably claims to have detected the ‘seat of wisdom’ in the brain.  These experiments, it seems, rely on what are essentially fuzzy and ambiguous pictures arrived at by subtraction in very simple experimental conditions, to provide the basis for claims of a profound and detailed understanding far beyond what they could possibly support. This is no longer such a controversial debunking as it would have been a few years ago, but it’s still useful.

Of course, the fact that some claims to have reduced thought to neuronal activity are wrong does not mean that thought cannot nevertheless turn out to be neuronal activity, but Tallis pushes his scepticism a long way. At times he seems reluctant to concede that there is anything more than a meaningless correlation between the firing of neurons in the brain and the occurrence of thoughts in the mind. He does agree that possession of a working brain is a necessary condition for conscious thought, but he’s not prepared to go much further. Most people, I think, would accept that Wilder Penfield’s classic experiments, in which the stimulation of parts of the brain with an electrode caused an experience of remembered music in the subject, pretty much show that memories are encoded in the brain one way or another; but Tallis does not accept that neurons could constitute memories. For memory you need a history, you need to have formed the memories in the first place, he says: Penfield’s electrode was not creating but merely reactivating memories which already existed.

Tallis seems to start from a kind of Brentanoesque incredulity about the utter incompatibility of the physical and the mental. Some of his arguments have a refreshingly simple (or if you prefer, naive) quality: when we experience yellow, he points out, our nerve impulses are not yellow. True enough, but then a word need not be printed in yellow ink to encode yellowness either. Tallis quotes Searle offering a dual-aspect explanation: water is H2O, but H2O molecules do not themselves have watery properties: you cannot tell what the back of a house looks like from the front, although it is the same house. In the same way our thoughts can be neural activity without the neurons themselves resembling thoughts. Tallis utterly rejects this: he maintains that to have different aspects requires a conscious observer, so we’re smuggling in the very thing we need to explain. I think this is an odd argument. If things don’t have different aspects until an observer is present, what determines the aspects they eventually have? If it’s the observer, we seem to be slipping towards idealism or solipsism, which I’m sure Tallis would not find congenial. Based on what he says elsewhere, I think Tallis would say the thing determines its own aspects in that it has potential aspects which only get actualised when observed; but in that case didn’t it really sort of have those aspects all along? Tallis seems to be adopting the view that an appearance (say yellowness) can only properly be explained by another thing that already has that same appearance (is yellow). It must be clear that if we take this view we’re never going to get very far with our explanations of yellow or any other appearance.

But I think that’s the weakest point in a sceptical case which is otherwise fairly plausible. Tallis is Brentanoesque in another way in that he emphasises the importance of intentionality – quite rightly, I think. He suggests it has been neglected, which I think is also true, although we must not go overboard: both Searle and Dennett, for example, have published whole books about it. In Tallis’ view the capacity to think explicitly about things is a key unique feature of human mindfulness, and that too may well be correct. I’m less sure about his characterisation of intentionality as an outward arrow. Perception, he says, is usually represented purely in terms of information flowing in, but there is also a corresponding outward flow of intentionality. The rose we’re looking at hits our eye (or rather a beam of light from the rose does so), but we also, as it were, think back at the rose. Is this a useful way of thinking about intentionality? It has the merit of foregrounding it, but I think we’d need a theory of intentionality in order to judge whether talk of an outward arrow was helpful or confusing, and no fully-developed theory is on offer.

Tallis has a very vivid evocation of a form of the binding problem, the issue of how all our different sensory inputs are brought together in the mind coherently. As normally described, the binding problem seems like lip-synch issues writ large: Tallis focuses instead on the strange fact that consciousness is united and yet composed of many small distinct elements at the same time.  He rightly points out that it’s no good having a theory which merely explains how things are all brought together: if you combine a lot of nerve impulses into one you just mash them. I think the answer may be that we can experience a complex unity because we are complex unities ourselves, but it’s an excellent and thought-provoking exposition.

Tallis’ attack on ‘Darwinitis’ takes on Cosmidoobianism, memes and the rest with predictable but entertaining vigour. Again, he presses things quite a long way. It’s one thing to doubt whether every feature of human culture is determined by evolution: Tallis seems to suggest that human culture has no survival value, or at any rate, had none until recently, too recently to account for human development. This is reminiscent of the argument put by Alfred Russel Wallace, Darwin’s co-discoverer of the principle of survival of the fittest: he later said that evolution could not account for human intelligence because a caveman could have lived his life perfectly well with a much less generous helping of it. The problem is that this leaves us needing a further explanation of why we are so brainy and cultured; Wallace, alas, ended up resorting to spiritualism to fill the gap (we can feel confident that Tallis, a notable public champion of disbelief, will never go that way). It seems better to me to draw a clear distinction between the capacity for human culture, which is wholly explicable by evolutionary pressure, and the contents of human culture, which are largely ephemeral, variable, and non-hereditary.

Tallis points out that some sleight of hand with vocabulary is not unknown in this area, in particular the tactic of the transferred epithet: a word implying full mental activity is used metaphorically – a ‘smart’ bomb is said to be ‘hunting down’ its target – and the important difference is covertly elided. He notes the particular slipperiness of the word ‘information’, something we’ve touched on before.

It is a weakness of Tallis’ position that he has no general alternative theory to offer in place of those he is attacking – consciousness remains a mystery (he sympathises with Colin McGinn’s mysterianism to some degree, incidentally, but reproves him for suggesting that our inability to understand ourselves might be biological). However, he does offer positive views of selfhood and free will, both of which he is concerned to defend. Rather than the brain, he chooses to celebrate the hand as a defining and influential human organ: opposable thumbs allow it to address itself and us: it becomes a proto-tool and this gives us a sense of ourselves as acting on the world in a tool-like manner. In this way we develop a sense of ourselves as a distinct entity and an agent, an existential intuition. This is OK as far as it goes, though it does sound in places like another theory of how we get a mere impression, or dare I say an illusion, of selfhood and agency, the very position Tallis wants to refute. We really need more solid ontological foundations. In response to critics who have pointed to the elephant’s trunk and the squid’s tentacles, Tallis grudgingly concedes that hands alone are not all you need and a human brain does have something to contribute.

Turning to free will, Tallis tackles Libet’s experiments (which seem to show that a decision to move one’s hand is actually made a measurable time before one becomes aware of it). So, he says, the decision to move the hand can be tracked back half a second? Well, that’s nothing: if you like you can track it back days, to when the experimental subject decided to volunteer; moreover, the aim of the subject was not just to move the hand, but also to help that nice Dr Libet, or to forward the cause of science. In this longer context of freely made decisions the precise timing of the RP is of no account.

To be free according to Tallis, an act must be expressive of what the agent is, the agent must seem to be the initiator, and the act must deflect the course of events. If we are inclined to doubt that we can truly deflect the course of events, he again appeals to a wider context: look at the world around us, he says, and who can doubt that collectively we have diverted the course of events pretty substantially? I don’t think this will convert any determinists. The curious thing is that Tallis seems to be groping for a theory of different levels of description, or, well, a dual-aspect theory. I would have thought dual-aspect theories ought to be quite congenial to Tallis, as they represent a rejection of ‘nothing but’ reductionism in favour of an attempt to give all levels of interpretation parity of esteem, but alas it seems not.

As I say, there is no new theory of consciousness on offer here, but Tallis does review the idea that we might need to revise our basic ideas of how the world is put together in order to accommodate it. He is emphatically against traditional dualism, and he firmly rejects the idea that quantum physics might have the explanation too. Panpsychism may have a certain logic but generates more problems than it solves. Instead he points again to the importance of intentionality and the need for a new view that incorporates it: in the end ‘Thatter’, his word for the indexical, intentional quality of the mental world, may be as important as matter.

Beyond Libet

Libet’s famous experiments are among the most interesting and challenging in neuroscience; now they’ve been taken further. A paper by Fried, Mukamel and Kreiman in Neuron (with a very useful overview by Patrick Haggard) reports on experiments using a number of epilepsy patients where it was ethically possible to implant electrodes and hence to read off the activity of individual neurons, giving a vastly more precise picture than anything achievable by other means. In other respects the experiments broadly followed the design of Libet’s own, using a similar clock-face approach to measure the time when subjects felt they decided to press a button. Libet discovered that a Readiness Potential (RP) could be detected as much as half a second before the subject was conscious of deciding to move; the new experiments show that data from a population of 250 neurons in the SMA (the Supplementary Motor Area) were sufficient to predict the subject’s decision 700 ms in advance of the subject’s own awareness, with 80% accuracy.
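It may help to see how many individually feeble neuronal signals can add up to a reliable population-level prediction. The following is only a toy simulation, not the Fried et al. analysis; every number in it (baseline firing rate, tuning strength, the two-pool structure) is invented purely for illustration:

```python
import random

random.seed(1)

N_NEURONS = 250
BIAS = 0.05  # per-neuron tuning: tiny compared with trial-to-trial noise

def simulate_trial(decision):
    """Spike counts for one trial: even-indexed neurons weakly prefer
    'press', odd-indexed ones weakly prefer 'withhold'."""
    sign = 1.0 if decision == "press" else -1.0
    return [random.gauss(5.0 + (BIAS if i % 2 == 0 else -BIAS) * sign, 1.0)
            for i in range(N_NEURONS)]

def decode(counts):
    """Population decoder: compare summed activity of the two pools."""
    press_pool = sum(counts[0::2])
    withhold_pool = sum(counts[1::2])
    return "press" if press_pool > withhold_pool else "withhold"

decisions = [random.choice(["press", "withhold"]) for _ in range(2000)]
accuracy = sum(decode(simulate_trial(d)) == d
               for d in decisions) / len(decisions)
print(f"decoded {accuracy:.0%} of simulated decisions from population activity")
```

Each simulated neuron is almost useless on its own, but pooling 250 of them cancels enough noise to land in the vicinity of the reported 80% figure. The real decoders are of course more sophisticated, but the statistical point – weak single-cell tuning, strong population readout – is the same.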

The more detailed picture which these experiments provide helps clarify some points about the relationship between pre-SMA and SMA proper, and suggests that the sense of decision reported by subjects is actually the point at which a growing decision starts to be converted into action, rather than the beginning of the decision-forming process, which stretches back further. This may help to explain the results from fMRI studies which have found the precursors of a decision much earlier than 500 ms beforehand. There are also indications that a lot of the activity in these areas might be more concerned with suppressing possible actions than initiating them – a finding which harmonises nicely with Libet’s own idea of ‘free won’t’ – that we might not be able to control the formation of impulses to act, but could still suppress them when we wanted.

For us, though, the main point of the experiments is that they appear to provide a strong vindication of Libet and make it clear that we have to engage with his finding that our decisions are made well before we think we’re making them.

What are we to make of it all then? I’m inclined to think that the easiest and most acceptable way of interpreting the results is to note that making a decision and being aware of having made a decision are two different things (and being able to report the fact may be yet a third). On this view we first make up our minds; then the process of becoming aware of having done so naturally takes some neural processing of its own, and hence arrives a few hundred milliseconds later.

That would be fine, except that we strongly feel that our decisions flow from the conscious process, that the feelings we are aware of, and could articulate aloud if we chose, are actually decisive. Suppose I am deciding which house to buy: house A involves a longer commute while house B is in a less attractive area. Surely I would go through something like an internal argument or assessment, totting up the pros and cons, and it is this forensic process in internal consciousness which causally determines what I do? Otherwise why do I spend any time thinking about it at all – surely it’s the internal discussion that takes time?

Well, there is another way to read the process: perhaps I hold the two possibilities in mind in turn: perhaps I imagine myself on the long daily journey or staring at the unlovely factory wall. Which makes me feel worse? Eventually I get a sense of where I would be happiest, perhaps with a feeling of settling one alternative and so of what I intend to do. On this view the explicitly conscious part of my mind is merely displaying options and waiting for some other, feeling part to send back its implicit message. The talky, explicit part of consciousness isn’t really making the decision at all, though it (or should I say ‘I’?) takes responsibility for it and is happy to offer explanations.

Perhaps both processes are involved in different decisions to different degrees. Some purely rational decisions may indeed happen in the explicit part of the mind, but in others – and Libet’s examples would be in this category – things have to feel right. The talky part of me may choose to hold up particular options and may try to nudge things one way or another, but it waits for the silent part to plump.

Is that plausible? I’m not sure. The willingness of the talky part to take responsibility for actions it didn’t decide on and even to confect and confabulate spurious rationales is very well established (albeit typically in cases with brain lesions), but introspectively I don’t like the idea of two agents being at work; I’d prefer it to be one agent using two approaches or two sets of tools – but I’m not sure that does the job of accounting for the delay which was the problem in the first place…

(Thanks to Dale Roberts!)

Experimental Free Will

Shaun Nichols’ recent paper in Science drew new attention to the ancient issue of free will and also to the very modern method known as ‘experimental philosophy’. Experimental philosophy is liable – perhaps intended – to set the teeth of the older generation on edge, for several reasons. One is that it sounds like an attempt to smuggle into philosophy stuff that shouldn’t be there: if your conclusions can be tested experimentally they’re science, not philosophy. We don’t want real philosophy crowded out by half-baked science. It also sounds like excessive, cringing deference to those assertive scientists, as though some bullied geek started wearing football shirts and fawning on the oppressors. We may have to put up with the physicists taking our lunch money, but we don’t have to pretend we want to be like them.

Actually though, there doesn’t seem to be any harm in experimental philosophy. All the philosophy that goes by the name appears to be real philosophy, often very interesting philosophy; the experiments are not used improperly to clinch a solution but to help clarify and dramatise the problems. Often this works pretty well, and by tethering the discussion to the real world it may even help to prevent an excessive drift into abstract hair-splitting. Philosophers have always been happy to draw on the experiments of scientists as a jumping-off point for discussion, and there seems no special reason why they shouldn’t do the same with experiments of their own.

In this particular case, Nichols shows that there is something odd about people’s intuitive grasp of free will. Subjects were told to assume that determinism, the view that all events are dictated by the laws of physics, applied, and then asked whether someone would be responsible for various things. In the vaguest case they all agreed that in general, given determinism, people were not responsible for events. Given a specific example of a morally debatable act they were less sure; and when they were offered the example of a man who takes out a murder contract on his wife and children, most felt sure he was responsible even given determinism.

This is odd because it’s normally assumed that determinism means no-one can be responsible for anything. In order to be responsible, you have to have been able to do something else, and according to determinism the laws of physics say you couldn’t have done anything but what you did. It’s odder because of the distinction drawn between the cases. Where did that come from?

It could be that something in the experiment predisposed subjects to think they were required to make distinctions of this kind, or it could be that ordinary subjects are just not very good at coming up with strictly logical consequences of artificial assumptions; but I don’t think that’s really it. The distinction between the three cases appears to be a matter of who we’d blame – so it looks as if the man in the street doesn’t really grasp the philosophical concept of responsibility and relies instead on some primitive conception of blameworthiness!

But, um – what is the philosophical concept of responsibility? It’s pretty clear when we cite the laws of physics that we’re talking about causal responsibility – but causal responsibility and moral responsibility don’t coincide. It’s clear that you can be causally responsible for an event without being morally responsible: someone pushed you from behind so that you in turn pushed someone under a train. Less clearly, in some cases it is held that you can be morally responsible for events you didn’t deliberately bring about: the legal doctrine of strict liability, Oedipus bringing a curse on Thebes, poor Clarissa wondering whether having been raped is in itself a sin. All of these are debatable; we might be inclined to see strict liability as a case of legal overkill: “we care so much about this that we’re not even going to entertain any discussion of responsibility – you’d better just make damn sure things are OK”. In the other cases we typically think the assignments of blame are just wrong (although Milan Kundera notably reclaimed the moral superiority of Oedipus in The Unbearable Lightness of Being). Nevertheless the distinction between moral and causal responsibility is clear enough: does the determinist case equivocate between the two, and were Nichols’ subjects actually just too shrewd to be taken in?

It seems it might be so. No-one would suggest to a writer that he was not the author of his novel because it was all the result of the laws of physics, although in one sense it’s so. No-one would accept on similar grounds that I’m not responsible for a debt, however abstract and conventional the notions of debt and money may be compared with the rigorous physical account of events. So why should the physical story stop us concluding that on another level of description we can be interestingly and coherently blameworthy? That would be a form of compatibilism, the view that we can have our determinist cake and eat our free will, too. (I’d be a little uncomfortable leaving it there without some fundamental account of agency and morality, just as I’d be a bit unhappy to say that debt is a convention without some underpinning concept of money and economics – but that’s another discussion.) So perhaps Nichols’ subjects were compatibilists.

That would be an interesting discovery but… I hate to say this… an interesting discovery in psychology. The fact that most people are instinctively compatibilists provides no particular reason to think compatibilism is true. For that, we still have to do the philosophy the old-fashioned way. Scientists may be able to gather truth from the world, like bees with nectar: philosophers are still obliged, like spiders, to spin their webs out of their own internal resources.


Unconscious Free Will?

Here’s an interesting piece by Neil Levy from a few months back, on Does Consciousness Matter? (I only came across it because of its nomination for a 3QD prize). Actually the topic is a little narrower than the title suggests; it asks whether consciousness is essential for free will and moral responsibility (it being assumed that the two are different facets of the same thing – I’m comfortable with that, but some might challenge it). Neil notes that people typically suppose Libet’s findings – which suggest our decisions are made before we are aware of them – make free will impossible.

Neil is not actually all that worried by Libet: the impulses from the event of intention formation ought to be registered later than the event itself in any case, he says; so Libet’s results are exactly what we should have expected. Again, I’m inclined to agree: making a conscious decision is one thing; the second-order business of being conscious of that conscious decision naturally follows slightly later. (Some, of course, would take a very different view here too; some indeed would say that the second-order awareness is actually what constitutes consciousness.)

Neil particularly addresses two arguments. One says that consciousness is important because only those objectives that are consciously entertained reflect the quality of my will; if I’m not aware that I’m hitting you, I can’t be morally responsible for the blows.  Neil feels this is a question-begging response which just assumes that conscious awareness is essential; I think he’s perhaps a bit over-critical, but of course if we can get a more fully-worked answer, so much the better.

He prefers a slightly different argument which says that factors we were not conscious of cannot influence our deliberations about some act, and hence we can only be held responsible for acts consciously chosen. George Sher, it seems, rejected this argument on the grounds that our actions are influenced by unconscious factors; but Neil rejects this, saying that although unconscious factors certainly influence our behaviour, we have no opportunity to consider them, which is the critical point.

Personally, I would say that agency is inherently a conscious matter because it requires intentionality. In order to form an intention we have to hold in mind (in some sense) an objective, and that requires intentionality. Original intentionality is unique to consciousness – in fact you could argue that it’s constitutive of consciousness if you believe all consciousness is consciousness of something – though I myself wouldn’t go quite so far.

But what about those unconscious factors? Subconscious factors would seem to possess intentionality as well as conscious ones, if Freud is to be believed: I don’t see how I could hold a subconscious desire to kill my father and marry my mother without those desires being about my parents. Neil would argue that this isn’t relevant because we can’t endorse, question or reject motives outside consciousness – but how does he know that? If there’s unconscious endorsement, questioning and so on going on, he wouldn’t know, would he? It could be that the unconscious plays Hyde to the Jekyll of conscious thought, with plans and projects that are in its own terms no less complex or rational than conscious ones.

I think Neil is right: but it doesn’t seem securely proved in the way he was hoping for. The unconscious part of our minds never has a chance to set down an explanation of its behaviour, after all: it could still in principle be the case that conscious rationalising is privileged in the literature and in ordinary discourse mainly because it’s the conscious part of the brain that does the talking and writes the blogs…

Unconscious free will

Picture: unconscious will. Does the idea of unconscious free will even make sense? Paula Droege, in the recent JCS, seems to think it might. Generally experiments like Libet’s famous ones, which seemed to show that decisions are made well before the decider is consciously aware of them, are considered fatal to free will. If the conscious activity came along only after the matter had been settled, it must surely have been powerless to affect it (there are some significant qualifications to this: Libet himself, for example, considered there was a power of last-minute veto which he called ‘free won’t’ – but still the general point is clear). If our conscious thoughts were irrelevant, it seems we didn’t have any say in the matter.

However, this view implies a narrow conception of the self in which unconscious processes are not really part of me and I only really consist of that entity that does all the talking. Yet in other contexts, notably in psychoanalysis, don’t we take the un- or sub-conscious to be more essential to our personality than the fleeting surface of consciousness, to represent more accurately what we ‘really’ want and feel? Droege, while conceding that if we take the narrow view there’s a good deal in the sceptical position, would prefer a wider view in which unconscious acts are valid examples of agency too. She would go further and bring in social influences (though it’s not entirely clear to me how the effects of social pressure can properly be transmuted into examples of my own free will), and she offers the conscious mind the consolation prize of being able to influence habits and predispositions which may in turn have a real causal influence on our actions.

I suppose there are several ways in which we exercise our agency. We perhaps tend to think of cases of conscious premeditation because they are the clearest, but in everyday life we just do stuff most of the time without thinking about it much, or very explicitly. Many of the details of our behaviour are left to ‘autopilot’, but in the great majority of cases the conscious mind would nevertheless claim these acts as its own. Did you stop at the traffic light and then move off again when it turned green? You don’t really remember doing it, but are generally ready to agree that you did. In unusual cases, we know that people sometimes even elaborate or confabulate spurious rationales for actions they didn’t really determine.

But it’s much more muddled than that. We do also at times seek to disown moral responsibility for something done when we weren’t paying proper attention, or where our rational responses were overwhelmed by a sudden torrent of emotion. Should someone who responds to the sight of a hated enemy by swerving to collide with the provoker be held responsible because the murderous act stems from emotions which are just as valid as cold calculation? Perhaps, but sometimes the opposite is taken to be the case, and the overwhelming emotion of a crime passionnel can be taken as an excuse. Then again few would accept the plea of the driver who says he shouldn’t be held responsible for an accident because he was too drunk to drive properly.

I think there may be an analogy with the responsibility held by the head of a corporation: the general rule is that the buck stops with the chief, even if the chief did not give orders for the particular action which subordinates have taken; in the same way we’re presumed by default to be responsible for what we do: but there are cases where control is genuinely and unavoidably lost, no matter what prudent precautions the chief may have put in place. There may be cases where the chief properly has full and sole responsibility; in other cases where the corporation has blundered on in pursuit of its own built-in inclinations it may be appropriate for the organization as a whole to accept blame for its corporate personality: and where confusion reigned for reasons beyond reasonable control, no responsibility may be assigned at all.

If that’s right, then Droege is on to something; but if there are two distinct grades of responsibility in play, there ought really to be two varieties of free will; the one exercised explicitly by the fully conscious me, and the other by ‘whole person’ me, in which the role of the conscious me, while perhaps not non-existent, is small and perhaps mostly indirect. This is an odd thought, but if, like Droege, we broadly accept that Libet has disproved the existence of the first variety of free will, it means we don’t have the kind we can’t help believing in, but do have another kind we never previously thought of – which seems even odder.

The Ego Tunnel (pt 2)

Picture: Autoscopy. Among a number of interesting features, The Ego Tunnel includes a substantial account of out-of-body experiences (OBEs) and similar phenomena. Experiments where the subjects are tricked into mistaking a plastic dummy for their real hand (all done with mirrors), or into feeling themselves to be situated somewhere behind their own head (you need a camera for this) show that our perception of our own body and our own location are generated within our brain and are susceptible to error and distortion; and according to Metzinger this shows that they are really no more than illusions (Is that right, by the way – or are they only illusions when they’re wrong or misleading? The fact that a camera can be made to generate false or misleading pictures doesn’t mean that all photographs are delusions, does it?).

There are many interesting details in this account, quite apart from its value as part of the overall argument.  Metzinger briefly touches on four varieties of autoscopic (self-seeing) phenomena, all of which can be related to distinct areas of the brain:  autoscopic hallucination, where the subject sees an image of themselves; the feeling of a presence, where the subject has the strong sense of someone there without seeing anyone; the particularly disturbing heautoscopy, where the subject sees another self and switches back and forth into and out of it, unsure which is ‘the real me’; and the better-known OBE. OBEs arise in various ways: often detachment from the body is sudden, but in other cases the second self may lift out gradually from the feet, or may exit the corporeal body via the top of the head.  Metzinger tells us that he himself has experienced OBEs and made many efforts to have more (going so far as to persuade his anaesthetist to use ketamine on him in advance of an operation, with no result – I wonder whether the anaesthetist actually kept his word); speaking of lucid dreams, another personal interest, he tells the story of having one in which he dreamed an OBE. That seems an interesting bit of evidence: if you can dream a credible OBE, mightn’t they all be dreams? This seems to undercut the apparently strong sense of reality which typically accompanies them.

Interestingly, Metzinger reports that a conversation with Susan Blackmore helped him understand his own experiences.  Blackmore is of course another emphatic denier of the reality of the self. I don’t in any way mean to offer an ad hominem argument here, but it is striking that these two people both seem to have had a particular interest in ‘spooky’ dualistic phenomena which their rational scientific minds ultimately rejected, leading on to an especially robust rejection of the self. Perhaps people who lean towards dualism in their early years develop a particularly strong conception of the self, so that when they adopt monist materialism they reject the self altogether instead of seeking to redefine and accommodate it, as many of us would be inclined to do?

On that basis, you would expect Metzinger to be the hardest of hard determinists; his ideas seem to lean in that direction, but not decisively. He suggests that certain brain processes involved in preparing actions are brought up into the Ego Tunnel and hence seem to belong to us. They seem to be our own thoughts, our own goals and because the earlier stages remain outside the Tunnel, they seem to have come from nowhere, to be our own spontaneous creations. There are really no such things as goals in the world, any more than colours, but the delusion that they do exist is useful to us; the idea of being responsible for our own actions enables a kind of moral competition which is ultimately to our advantage (I’m not quite sure exactly how this  works). But in this case Metzinger pulls his punch: perhaps this is not the full story, he says, and describes compatibilism as the most beautiful position.

Metzinger pours scorn on the idea that we must have freedom of the will because we feel our actions to be free, yet he does give an important place to the phenomenology of the issue, pointing out that it is more complex than might appear. The more you look at them, he suggests, the more evasive conscious intentions become.  How curious it is then, that Metzinger, whose attention to phenomenology is outstandingly meticulous, should seem so sure that we have at all times a robust (albeit delusional) sense of our selves. I don’t find it so at all, and of course on this no less a person than David Hume is with me; with characteristically gentle but devastating scepticism, he famously remarked “For my part, when I enter most intimately into what I call myself, I always stumble on some particular perception or other, of heat or cold, light or shade, love or hatred, pain or pleasure. I never can catch myself at any time without a perception, and never can observe any thing but the perception.”

Metzinger concludes by considering a range of moral and social issues which he thinks we need to address as our understanding of the mind improves. In his view, for example, we ought not to try to generate artificial consciousness. As a conscious entity, the AI would be capable of suffering, and in Metzinger’s view the chances are its existence would be more painful than pleasant. One reason for thinking so is the constrained and curtailed existence it could expect; another is that we only have our own minds to go on and would be likely to produce inferior, messed-up versions of it. But more alarmingly, Metzinger argues that human life itself involves an overall preponderance of pain over pleasure; he invokes Schopenhauer and Buddha. With characteristic thoroughness, he concedes that pleasure and pain may not be all that life is about; other achievements can justify a life of discomfort. But even so, the chances for an artificial consciousness, he feels, are poor.

This is surely too bleak. I see no convincing reason to think that pain outweighs pleasure in general (certainly the Buddhist case, based on the perverse assumption that change is always painful, seems a weak point in that otherwise logical religion), and I see some reasons to think that a conscious robot would be less vulnerable to bad experiences than we are. It’s millions of years of evolution which have ingrained in us a fear of death and the motivating experience of pain:  the artificial consciousness need have none of that, but would surely be most likely to face its experiences with superhuman equanimity.

Of course caution is justified, but Metzinger in effect wants us to wait until we’ve sorted out the meaning of life before we get on with living it.

His attempt to raise this and other issues is commendable though; he’s right that the implications of recent progress have not received enough intelligent attention. Unfortunately I think the chances of some of these issues being addressed with philosophic rationality are slim. Another topic Metzinger raises, for example, is the question of what kinds of altered or enhanced mental states, from among the greatly expanded repertoire we are likely to have available in the near future, we ought to allow or facilitate; not much chance that his mild suggestions on that will have much impact.

There’s a vein of pessimism in his views on another topic. Metzinger fears that the progress of science, before the deeper issues have been sorted out, could inspire an unduly cynical, stripped-down view of human nature; a ‘vulgar materialism’, he calls it. Uninformed members of the public falling prey to this crude point of view might be tempted to think:

“The cat is out of the bag. We are gene-copying bio-robots, living out here on a lonely planet in a cold and empty physical universe. We have brains but no immortal souls and after seventy years or so the curtain drops. There will never be an afterlife, or any kind of reward or punishment for anyone… I get the message.”

Gosh: do we know anyone vulgar and unsophisticated enough to think like that?

Libet was wrong…?

Picture:  clock on screen. One of the most frequently visited pages on Conscious Entities is this account of Benjamin Libet’s remarkable experiments, which seemed to show that decisions to move were really made half a second before we were aware of having decided. To some this seemed like a practical disproof of the freedom of the will – if the decision was already made before we were consciously aware of it, how could our conscious thoughts have determined what the decision was?  Libet’s findings have remained controversial ever since they were published; they have been attacked from several different angles, but his results were confirmed and repeated by other researchers and seemed solid.

However, Libet’s conclusions rested on the use of Readiness Potentials (RPs). Earlier research had shown that the occurrence of an RP in the brain reliably indicated that a movement was coming along just afterwards, and they were therefore seen as a neurological sign that the decision to move had been taken (Libet himself found that the movement could sometimes be suppressed after the RP had appeared, but this possibility, which he referred to as ‘free won’t’, seemed only to provide an interesting footnote). The new research, by Trevena and Miller at Otago, undermines the idea that RPs indicate a decision.

Two separate sets of similar experiments were carried out. They resembled Libet’s original ones in most respects, although computer screens and keyboards replaced Libet’s more primitive equipment, and the hand movement took the form of a key-press. A clock face similar to that in Libet’s experiments was shown, and they even provided a circling dot. In the earlier experiments this had provided an ingenious way of timing the subject’s awareness that a decision had been made – the subject would report the position of the dot at the moment of decision – but in Trevena and Miller’s research the clock and dot were provided only to make conditions resemble Libet’s as much as possible. Subjects were told to ignore them (which you might think rendered their inclusion pointless). This was because instead of allowing the subject to choose their own time for action, as in Libet’s original experiments, the subjects in the new research were prompted by a randomly-timed tone. This is obviously a significant change from the original experiment; the reason for doing it this way was that Trevena and Miller wanted to be able to measure occasions when the subject decided not to move as well as those when there was movement. Some of the subjects were told to strike a key whenever the tone sounded,  while the rest were asked to do so only about half the time (it was left up to them to select which tones to respond to, though if they seemed to be falling well below a 50-50 split they got a reminder in the latter part of the experiment).  Another significant difference from Libet’s tests is that left and right hands were used: in one set of experiments the subjects were told by a letter in the centre of the screen whether they should use the right or left hand on each occasion, in the other it was left up to them.

There were two interesting results. One was that the same kind of RP appeared whether the subject pressed a key or not. Trevena and Miller say this shows that the RP was not, after all, an indication of a decision to move, and was presumably instead associated with some more general kind of sustained attention or preparing for a decision. Second, they found that a different kind of RP, the Lateralised Readiness Potential or LRP, which provides an indication of readiness to move a particular hand, did provide an indication of a decision, appearing only where a movement followed; but the LRP did not appear until just after the tone. This suggests, in contradiction to Libet, that the early stages of action followed the conscious experience of deciding, rather than preceding it.

The differences between these new experiments and Libet’s originals provide a weak spot which Libetians will certainly attack.  Marcel Brass, whose own work with fMRI scanning confirmed and even extended Libet’s delay, seeming to show that decisions could be predicted anything up to ten seconds before conscious awareness, has apparently already said that in his view the changes undermine the conclusions Trevena and Miller would like to draw. Given the complex arguments over the exact significance of timings in Libet’s results, I’m sure the new results will prove contentious. However, it does seem as if a significant blow has been struck for the first time against the foundations of Libet’s remarkable results.

The Three Laws revisited

Picture: Percy - Brains he has nix. Ages ago (gosh, it was nearly five years ago) I had a piece where Blandula remarked that any robot clever enough to understand Isaac Asimov’s Three Laws of Robotics would surely be clever enough to circumvent them.  At the time I think all I had in mind was the ease with which a clever robot would be able to devise some rationalisation of the harm or disobedience it was contemplating.  Asimov himself was of course well aware of the possibility of this kind of thing in a general way.  Somewhere (working from memory) I think he explains that it was necessary to specify that robots may not, through inaction, allow a human to come to harm, or they would be able to work round the ban on outright harming by, for example, dropping a heavy weight on a human’s head.  Dropping the weight would not amount to harming the human because the robot was more than capable of catching it again before the moment of contact. But once the weight was falling, a robot without the additional specification would be under no obligation to do the actual catching.

That does not actually wrap up the problem altogether. Even in the case of robots with the additional specification, we can imagine that ways to drop the fatal weight might be found. Suppose, for example, that three robots, who in this case are incapable of catching the weight once dropped, all hold on to it and agree to let go at the same moment. Each individual can feel guiltless because if the other two had held on, the weight would not have dropped. Reasoning of this kind is not at all alien to the human mind;  compare the planned dispersal of responsibility embodied in a firing squad.

Anyway, that’s all very well, but I think there may well be a deeper argument here: perhaps the cognitive capacity required to understand and apply the Three Laws is actually incompatible with a cognitive set-up that guarantees obedience.

There are two problems for our Asimovian robot: first it has to understand the Laws; second, it has to be able to work out what actions will deliver results compatible with them.  Understanding, to begin with, is an intractable problem.  We know from Quine that every sentence has an endless number of possible interpretations; humans effortlessly pick out the one that makes sense, or at least a small set of alternatives; but there doesn’t seem to be any viable algorithm for picking through the list of interpretations. We can build in hard-wired input-output responses, but when we’re talking about concepts as general and debatable as ‘harm’, that’s really not enough. If we have a robot in a factory, we can ensure that if it detects an unexpected source of heat and movement of the kind a human would generate, it should stop thrashing its painting arm around – but that’s nothing like intelligent obedience of a general law against harm.

But even if we can get the robot to understand the Laws, there’s an equally grave problem involved in making it choose what to do.  We run into the frame problem (in its wider, Dennettian form). This is, very briefly, the problem that arises from tracking changes in the real world. For a robot to keep track of everything that changes (and everything which stays the same, which is also necessary) involves an unmanageable explosion of data. Humans somehow pick out just relevant changes; but again a robot can only pick out what’s relevant by sorting through everything that might be relevant, which leads straight back to the same kind of problem with indefinitely large amounts of data.

I don’t think it’s a huge leap to see something in common between the two problems; I think we could say that they both arise from an underlying difficulty in dealing with relevance in the face of  the buzzing complexity of reality. My own view is that humans get round this problem through recognition; roughly speaking, instead of looking at every object individually to determine whether it’s square, we throw everything into a sort of sieve with holes that only let square things drop through. But whether or not that’s right, and putting aside the question of how you would go about building such a faculty into a robot, I suggest that both understanding and obedience involve the ability to pick out a cogent, non-random option from an infinite range of possibilities.  We could call this free will if we were so inclined, but let’s just call it a faculty of choice.

Now I think that faculty, which the robot is going to have to exercise in order to obey the Laws, would also unavoidably give it the ability to choose whether to obey them or not. To have the faculty of choice, it has to be able to range over an unlimited set of options, whereas constraining it to any given set of outcomes  involves setting limits. I suppose we could put this in a more old-fashioned mentalistic kind of way by observing that obedience, properly understood, does not eliminate the individual will but on the contrary requires it to be exercised in the right way.

If that’s true (and I do realise that the above is hardly a tight knock-down argument) it would give Christians a neat explanation of why God could not have made us all good in the first place – though it would not help with the related problem of why we are exposed to widely varying levels of temptation and opportunity.  To the rest of us it offers, if we want it, another possible compatibilist formulation of the nature of free will.

Chaotic consciousness

Picture: Etch-a-Sketch. An interesting New Scientist piece recently reviewed research suggesting that chaos has an important part in the way the brain functions. More specifically, the suggestion is that the brain operates ‘on the edge of chaos’, in self-organised criticality;  sometimes it runs in ways which are predictable at a macro level, more or less like a conventional machine; but at times it also goes into chaotic states. The behaviour of the system in these states is still fully deterministic in a wholly traditional, classical way, but depends so exquisitely on the fine detail of the starting state that the behaviour of the system is in practice unpredictable. The analogy offered here is a growing pile of sand; you can’t tell exactly when it will suddenly go through a state shift – collapse – although over a long period the number of large and small collapses is amenable to statistical treatment (actually, I have to say I’ve never noticed piles of sand behaving in this interesting way, but that just shows what a poor observer I am).
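As it happens, the sandpile analogy is easy to try for yourself in simulation: the toy sketch below implements the classic Bak-Tang-Wiesenfeld model (the grid size, grain count and ‘large avalanche’ threshold are my arbitrary choices for illustration, nothing from the research under discussion). Drop grains at random; any cell holding four grains topples, passing one to each neighbour, sometimes setting off a cascade. Counting topplings per drop gives many small avalanches and a few huge ones, with no way of telling in advance which drop will trigger a big one.

```python
import random

def sandpile_avalanches(size=20, grains=5000, seed=1):
    """Bak-Tang-Wiesenfeld sandpile: drop grains one at a time on a
    size x size grid; a cell holding 4+ grains topples, shedding one
    grain to each of its four neighbours (grains shed over the edge
    are lost).  The 'avalanche size' for a drop is the number of
    topplings it triggers."""
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    sizes = []
    for _ in range(grains):
        x, y = rng.randrange(size), rng.randrange(size)
        grid[y][x] += 1
        topples = 0
        stack = [(x, y)]          # cells that may now be unstable
        while stack:
            cx, cy = stack.pop()
            if grid[cy][cx] < 4:
                continue          # stable after all; nothing to do
            grid[cy][cx] -= 4
            topples += 1
            for nx, ny in ((cx + 1, cy), (cx - 1, cy),
                           (cx, cy + 1), (cx, cy - 1)):
                if 0 <= nx < size and 0 <= ny < size:
                    grid[ny][nx] += 1
                    stack.append((nx, ny))
        sizes.append(topples)
    return sizes

sizes = sandpile_avalanches()
small = sum(1 for s in sizes if 0 < s <= 5)
big = sum(1 for s in sizes if s > 50)
print(f"drops: {len(sizes)}, small avalanches (1-5 topples): {small}, "
      f"large (>50 topples): {big}")
```

Run it and the characteristic signature of self-organised criticality shows up: small collapses vastly outnumber the rare large ones, yet the same local rule produces both.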

The suggestion is that the occasional ‘avalanches’ of neuronal firing in the brain are useful, allowing the brain to enter new states more rapidly than it could otherwise do. Being on the edge of chaos allows “maximum transmission with minimum risk of descending into chaos”. The arrival of a neuronal avalanche is related to the sudden popping-up of an idea in the mind, or perhaps the unexpected recurrence of a random memory. There is also evidence that the duration of phase-shifts is related to IQ scores – perhaps in this case because the longer shift allows the recruitment of more neurons. The recruitment of additional neurons is presumed in such cases to be a good thing (I feel there must be some caveats about that), but there are also suggestions that excess time spent in phase-shifts could be a cause of schizophrenia (someone should set out a list somewhere of all the things that at one time or another have been put forward as causes of schizophrenia); while not enough phase-shifting in parts of the brain to do with social behaviour might have something to do with autism.

One claim not made in the article, but one which could well be made, is that all this might account for the sensation of free will. If the brain occasionally morphs through chaos into a new state, might it not be that the conclusions which emerge would seem to have come out of nowhere? We might be led to assume that these thoughts were freely generated, distinct from the normal predictable pattern. I think the temptation would be to frame such a theory as an explanation of the  illusion of free will: why we feel as if some of our decisions are free even though, in the final analysis, determinism rules. But I can also imagine that a compatibilist might claim that chaotic phase shifts really were freedom. A free act is one which is not predictable, such a person might argue; however, we don’t mean unpredictable in practice – none of us is currently able to look at a brain and predict the decisions it will make in any given circumstances. We mean predictable in principle; predictable if we had all the data plus unlimited time and computing power. Now are chaotic changes predictable in principle or not? They occur within normal physical rules, so in the ultimate sense they are clearly deterministic. But the difficulties are so great that to say that they’re only unpredictable in practice seems to stretch ‘practice’ a long way – we might easily need perfection of measurement to a degree which is never going to be obtainable under any imaginable real circumstances. Couldn’t we rather say, then, that we’re dealing with a third kind of unpredictability, neither quite unpredictability in mere practice nor quite unpredictability in principle, and take the view that decisions subject to this level of unpredictability deserve to be called free? I think we could, but ultimately I’m disinclined to do so because in the final analysis that feels more like inventing a new concept of freedom than justifying the existing one.
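For what it’s worth, the ‘unpredictable in practice’ point is easy to demonstrate on the simplest textbook chaotic system, the logistic map (again a toy sketch with nothing brain-specific about it: the map, starting value and divergence threshold are all arbitrary choices for illustration). Two start states differing by one part in a trillion, evolved under identical, fully deterministic rules, disagree grossly within a few dozen steps – so ‘predictable in principle’ here would demand a perfection of measurement never obtainable in any real circumstances.

```python
def logistic_orbit(x0, r=4.0, steps=80):
    """Iterate the fully chaotic logistic map x -> r*x*(1-x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.3)
b = logistic_orbit(0.3 + 1e-12)   # an immeasurably small difference at the start

# first step at which the two deterministic orbits visibly disagree
diverged = next((i for i, (x, y) in enumerate(zip(a, b))
                 if abs(x - y) > 0.1), -1)
print(f"orbits first differ by more than 0.1 at step {diverged}")
```

The divergence typically arrives around step forty: any error in the twelfth decimal place of the starting measurement swamps the prediction almost immediately.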

There’s another issue here that affects a number of the speculations in the article. We must beware of assuming too easily that features of the underlying process necessarily correspond directly with phenomenal features of experience. So, for example, it’s assumed that when the brain goes quickly into a new state in terms of its neuronal firing, that would be like a new thought popping up suddenly in our conscious minds, an idea which seemed to have come out of nowhere. It ain’t necessarily so (though it would be an interesting question to test). The fact that the brain uses chaos to achieve its results does not mean that the same chaos is directly experienced in our thoughts, any more than I experience, say, that old 40Hz buzz starting up in my right parietal, or whatever. At the moment (not having read the actual research, of course) it seems equally likely that phase shifts are wholly outside conscious experience, perhaps, for example, being required in order to allow subordinate systems to catch up rapidly with a separate conscious process which they don’t directly influence. Or perhaps they’re just the vigorous shaking which clears our mental etch-a-sketch, correlated with, but not constitutive of, the sophisticated complication of our conscious doodlings.