Feeling free

Eddy Nahmias recently reported on a ground-breaking case in Japan where a care-giving robot was held responsible for agreeing to a patient’s request for a lethal dose of drugs. Such a decision surely amounts to a milestone in the recognition of non-human agency; but fittingly for a piece published on 1 April, the case was in fact wholly fictional.

However, the imaginary case serves as an introduction to some interesting results from the experimental philosophy Nahmias has been prominent in developing. The research – and I take it to be genuine – aims not at clarifying the metaphysics or logical arguments around free will and responsibility, but at discovering how people actually think about those concepts.

The results are interesting. Perhaps not surprisingly, people are more inclined to attribute free will to robots when told that the robots are conscious. More unexpectedly, they attach weight primarily to subjective and especially emotional conscious experience. Free will is apparently thought to be more a matter of having feelings than it is of neutral cognitive processing.

Why is that? Nahmias offers the reasonable hypothesis that people think free will involves caring about things. Entities with no emotions, it might be, don’t have the right kind of stake in the decisions they make. Making a free choice, we might say, is deciding what you want to happen; if you don’t have any emotions or feelings you don’t really want anything, and so are radically disqualified from an act of will. Nahmias goes on to suggest, again quite plausibly, that reactive emotions such as pride or guilt might have special relevance to the social circumstances in which most of our decisions are made.

I think there’s probably another factor behind these results; I suspect people see decisions based on imponderable factors as freer than others. The results suggest, let’s say, that the choice of a lover is a clearer example of free will than the choice of an insurance policy; that might be because the latter choice has a lot of clearly calculable factors to do with payments versus benefits. It’s not unreasonable to think that there might be an objectively correct choice of insurance policy for me in my particular circumstances, but you can’t really tell someone their romantic inclinations are based on erroneous calculations.

I think it’s also likely that people focus primarily on interesting cases, which are often instances of moral decisions; those in turn often involve self-control in the face of strong desires or emotions.

Another really interesting result is that while philosophers typically see freedom and responsibility as two sides of the same coin, people’s everyday understanding may separate the two. It looks as though people do not generally distinguish all that sharply between the concepts of being causally responsible (it’s because of you it happened, whatever your intentions) and morally responsible (you are blameworthy and perhaps deserve punishment). So, although people are unwilling to say that corporations or unconscious robots have free will, they are prepared to hold them responsible for their actions. It might be that people generally are happier with concepts such as strict liability than most moral philosophers are; or of course, we shouldn’t rule out the possibility that people just tend to suffer some mild confusion over these issues.

Thought-provoking stuff, anyway, and further evidence that experimental philosophy is a tool we shouldn’t reject.

Just deserts

Dan Dennett and Gregg Caruso had a thoughtful debate about free will on Aeon recently. Dennett makes the compatibilist case in admirably pithy style. You need, he says, to distinguish between causality and control. I can control my behaviour even though it is ultimately part of a universal web of causality. My past may in the final sense determine who I am and what I do, but it does not control what I do; for that to be true my past would need things like feedback loops to monitor progress against its previously chosen goals, which is nonsensical. This concept of being in control, or not being in control, is quite sufficient to ground our normal ideas of responsibility for our actions, and freedom in choosing them.

Caruso, who began by saying he thought their views might turn out closer than they seemed, accepts pretty well all of this, agreeing that it is possible to come up with conceptions of responsibility that can be used to underpin talk of free will in acceptable terms. But he doesn’t want to do that; instead he wants to jettison the traditional outlook.

At this point Caruso’s motivation may seem puzzling. Here we have a way of looking at freedom and responsibility which provides a philosophically robust basis for our normal conception of those two moral basics – ideas we could not easily do without in our everyday lives. Now sometimes philosophy may lead us to correct or reject everyday ideas, but typically only when they appear to be without rational justification. Here we seem to have a logically coherent justification for some everyday moral concepts. Isn’t that a case of ‘job done’?

In fact, as he quickly makes clear, Caruso’s objections mainly arise from his views on punishment. He does not believe that compatibilist arguments can underpin ‘basic desert’ in the way that would be needed to justify retributive punishment. Retribution, as a justification for punishment, is essentially backward looking; it says, approximately, that because you did bad things, bad things must happen to you. Caruso completely rejects this outlook, and all justifications that focus on the past (after all, we can’t change the past, so how can it justify corrective action?). If I’ve understood correctly, he favours a radically new regime which would seek to manage future harms from crime in broadly the way we seek to manage the harms that arise from ill-health.

I think we can well understand the distaste for punishments which are really based on anger or revenge, which I suspect lies behind Caruso’s aversion to purely retributive penalties. However, do we need to reject the whole concept of responsibility to escape from retribution? It seems we might manage to construct arguments against retribution on a less radical basis – as indeed, Dennett seeks to do. No doubt it’s right that our justification for punishments should be forward looking in their aims, but that need not exclude the evidence of past behaviour. In fact, I don’t know quite how we should manage if we take no account of the past. I presume that under a purely forward-looking system we assess the future probability of my committing a crime; but if I emerge from the assessment with a clean bill of health, it seems to follow that I can then go and do whatever I like with impunity. As soon as my criminal acts are performed, they fall into the past, and can no longer be taken into account. If people know they will not be punished for past acts, doesn’t the (supposedly forward-looking) deterrent effect evaporate?

That must surely be wrong one way or another, but I don’t really see how a purely future-oriented system can avoid unpalatable features like the imposition of restrictions on people who haven’t actually done anything, or the categorisation of people into supposed groups of high or low risk. When we imagine such systems we imagine them being run justly and humanely by people like ourselves; but alas, people like us are not always and everywhere in charge, and the danger is that we might be handing philosophical credibility to people who would love the chance to manage human beings in the same way as they might manage animals.

Nothing, I’m sure, could be further from Gregg Caruso’s mind; he only wants to purge some of the less rational elements from our system of punishment. I find myself siding pretty much entirely with Dennett, but it’s a stimulating and enlightening dialogue.

Forgotten Crimes

Should people be punished for crimes they don’t remember committing? Helen Beebee asks this meaty question.

Why shouldn’t they? I take the argument to have two main points. First, because they don’t remember, it can be argued that they are no longer the same person as the one who committed the crime. Second, if you’re not the person who committed the crime, you can’t be responsible and therefore should not be punished. Both of these points can be challenged.

The idea that not remembering makes you a different person takes memory to be the key criterion of personal identity, a view associated with John Locke among others. But memory is in practice a very poor criterion. If I remember later, do I then become responsible for the crime? We remember unconsciously things we cannot recall explicitly; does unconscious memory count, and if so, how would we know? If I remember only unconsciously, is my unconscious self the same while my conscious one is not, so that perhaps I ought to suffer punishment I’m only aware of unconsciously? If I do not remember details, but have that sick sense of certainty that, yes, I did it alright, am I near enough the same person? What if I have a false, confabulated memory of the crime, but one that happens to be veridical, to tell the essential truth, if inaccurately? Am I responsible? If so, and if false memories will therefore do, then ought I to be held responsible even if in fact I did not commit the crime, so long as I ‘remember’ doing it?

Moreover, aren’t the practical consequences unacceptable? If forgetting the crime exculpates me, I can commit a murder and immediately take amnestic drugs that will make me forget it. If that tactic is itself punishable, I can take a few more drugs and forget even coming up with the idea. Surely few people think it really works as selectively as that. In order to be free of blame, you really need to be a different person, and that implies losing much more than a single memory. Perhaps it requires the loss of most memories, or more plausibly a loss of mentally retained things that go a lot wider than mere factual or experiential memory; my habits of thought, or the continuity of my personality. I think it’s possible that Locke would say something like this if he were still around. So perhaps the case ought to be that if you do not remember the crime, and other features of your self have suffered an unusual discontinuity, such that you would no longer commit a similar crime in similar circumstances, then you are off the hook. How we could establish such a thing forensically is quite another matter, of course.

What about the second point, though? Does the fact that I am now a different, and also a better person, one who doesn’t remember the crime, mean I shouldn’t be punished? Not necessarily. Legally, for example, we might look to the doctrines of joint enterprise and strict liability to show that I can sometimes be held responsible in some degree for crimes I did not directly commit, and even ones which I was powerless to prevent, if I am nevertheless associated with the crime in the required ways.

It partly depends on why we think people should be punished at all. Deterrence is a popular justification, but it does not require that I am really responsible. Being punished for a crime may well deter me and others from attempting similar crimes in future, even if I didn’t do it at all, never mind cases where my responsibility is merely attenuated by loss of memory. The prevention of revenge is another justification that doesn’t necessarily require me to have been fully guilty. Or there might be doctrines of simple justice that hold to the idea of crime being followed by punishment, not because of any consequences that might follow, but just as a primary ethical principle. Under such a justification, it may not matter whether I am responsible in any strong sense. Oedipus did not know he had killed his father, and so could not be held responsible for patricide, at least on most modern understandings; but he still put out his own eyes.

Much more could be said about all that, but for me the foregoing arguments are enough to suggest that memory is not really the point, either for responsibility or for personal identity. Beebee presents an argument about Bruce Banner and the Hulk; she feels Banner cannot directly be held responsible for the mayhem caused by the Hulk. Perhaps not, but surely the issue there is control, not memory. It’s irrelevant whether Banner remembers what the Hulk did, all that matters is whether he could have prevented it. Beebee makes the case for a limited version of responsibility which applies if Banner can prevent the metamorphosis into Hulk in the first place, but I think we have already moved beyond memory, so the fact that this special responsibility does not apply in the real life case she mentions is not decisive.

One point which I think should be added to the account, though it too is not decisive, is that the loss of responsibility may entail loss of personhood in a wider sense. If we hold that you are no longer the person who committed the crime, you are not entitled to their belongings or rights either. You are not married to their spouse, nor the heir to their parents. Moreover, if we think you are liable to turn into someone else again at some stage, and we know that criminals are, as it were, in your repertoire of personalities, we may feel justified in locking you up anyway; not as a punishment, but as prudent prevention. To avoid consequences like these and retain our integrity as agents, we may feel it is worth claiming our responsibility for certain past crimes, even if we no longer recall them.

Bad Bots: Retribution

Is there a retribution gap? In an interesting and carefully argued paper John Danaher argues that in respect of robots, there is.

For human beings in normal life he argues that a fairly broad conception of responsibility works OK. Often enough we don’t even need to distinguish between causal and moral responsibility, let alone worry about the six or more different types identified by hair-splitting philosophers.

However, in the case of autonomous robots the sharing out of responsibility gets more difficult. Is the manufacturer, the programmer, or the user of the bot responsible for everything it does, or does the bot properly shoulder the blame for its own decisions? Danaher thinks that gaps may arise, cases in which we can blame neither the humans involved nor the bot. In these instances we need to draw some finer distinctions than usual, and in particular we need to separate the idea of liability into compensation liability on one hand and retributive liability on the other. The distinction is essentially that between who pays for the damage and who goes to jail; typically the difference between matters dealt with in civil and criminal courts. The gap arises because for liability we normally require that the harm must have been reasonably foreseeable. However, the behaviour of autonomous robots may not be predictable either by their designers or users on the one hand, or by the bots themselves on the other.

In the case of compensation liability Danaher thinks things can be patched up fairly readily through the use of strict and vicarious liability. These forms of liability, already well established in legal practice, give up some of the usual requirements and make people responsible for things they could not have been expected to foresee or guard against. I don’t think the principles of strict liability are philosophically uncontroversial, but they are legally established and it is at least clear that applying them to robot cases does not introduce any new issues. Danaher sees a worse problem in the case of retribution, where there is no corresponding looser concept of responsibility, and hence, no-one who can be punished.

Do we, in fact, need to punish anyone? Danaher rightly says that retribution is one of the fundamental principles behind punishment in most if not all human societies, and is upheld by many philosophers. Many, perhaps, but my impression is that the majority of moral philosophers and lay opinion actually see some difficulty in justifying retribution. Its psychological and sociological roots are strong, but the philosophical case is much more debatable. For myself I think a principle of retribution can be upheld, but it is by no means as clear or as well supported as the principle of deterrence, for example. So many people might be perfectly comfortable with a retributive gap in this area.

What about scapegoating – punishing someone who wasn’t really responsible for the crime? Couldn’t we use that to patch up the gap?  Danaher mentions it in passing, but treats it as something whose unacceptability is too obvious to need examination. I think, though, that in many ways it is the natural counterpart to the strict and vicarious liability he endorses for the purposes of compensation. Why don’t we just blame the manufacturer anyway – or the bot (Danaher describes Basil Fawlty’s memorable thrashing of his unco-operative car)?

How can you punish a bot though? It probably feels no pain or disappointment, it doesn’t mind being locked up or even switched off and destroyed. There does seem to be a strange gap if we have an entity which is capable of making complex autonomous decisions, but doesn’t really care about anything. Some might argue that in order to make truly autonomous decisions the bot must be engaged to a degree that makes the crushing of its hopes and projects a genuine punishment, but I doubt it. Even as a caring human being it seems quite easy to imagine working for an organisation on whose behalf you make complex decisions, but without ultimately caring whether things go well or not (perhaps even enjoying a certain schadenfreude in the event of disaster). How much less is a bot going to be bothered?

In that respect I think there might really be a punitive gap that we ought to learn to live with; but I expect the more likely outcome in practice is that the human most closely linked to disaster will carry the can regardless of strict culpability.

Eagleman’s Law

I thought David Eagleman’s book SUM was excellent – it’s a series of short accounts of the afterlife in which each version turns out to be surprising and disappointing in a variety of imaginative ways. So I looked forward to Incognito, his serious account of the conscious mind. The dazzling striped lettering on the dustjacket suggests we’re in for a lively read, and it does not mislead.

The first part of the book – the bulk of it in fact – is a highly readable account of many of the interesting and counter-intuitive discoveries of modern research about how the brain in general, and perception in particular, works. There are many old friends here – blindsight, split brains, synaesthesia, and so on – but also some stuff that was new to me. If you want to know what Japanese chicken-sexers and WWII British plane spotters have in common, this is the book for you. In places I felt Eagleman’s account could have benefited from a few caveats and qualifications, and at times the breeziness of his explanation seems to carry him away. Is the tendency to talk freely about your life to anonymous strangers – people you might meet on a train journey for example – the ‘explanation’ for the continuance of confession in the Catholic church? Er, no: back to the drawing board on that one, I think. But in fairness this is essentially a popular account, not a scientific paper.

As the book progresses it becomes clear that Eagleman’s purpose is not merely to summarise interesting research: in fact, he’s been softening us up for some points of his own.  He speaks warmly of Minsky’s Society of Mind, but suggests that to complete the picture we need to assume that there is an ongoing competition for control among the various agents running our minds. I don’t think this idea is quite as novel as Eagleman seems to suppose, and he goes on to make a very traditional use of it by drawing a distinction between a rational controller, able to defer gratification, and a short-term pleasure seeker. This sort of echoes Freud, and for that matter Plato’s charioteer.

In fact Eagleman’s main purpose is to change our view of moral responsibility and legal responses to crime. He spends some time quoting examples of people who committed crimes under the influence of drugs or brain tumours, and recounts the well-known story of Phineas Gage, whose behaviour was changed for the worse when a tamping iron was accidentally fired through his brain. This last example perhaps needs to be handled with a little more care than Eagleman gives it, as there are reasons for a degree of scepticism about how changed or how bad Gage’s behaviour became. Eagleman foresees a day when neuroscience will make it impossible to hang on to the idea that free will means anything or that anyone is ultimately responsible for anything – or, as he puts it, blameworthy. I think Eagleman is giving up too quickly: without re-fighting the Free Will issue, aren’t there special faculties of planning and decision-making which conscious humans have and other creatures lack? If so, isn’t it worth appealing to them, and isn’t blame a tool for doing so?

Eagleman seems to have a bit of a campaign going against blame. His base is at Baylor College of Medicine in Houston where he directs the College’s Initiative on Neuroscience and Law (as well as the Eagleman Laboratory for Perception and Action). He approvingly quotes Lord Bingham:

In the past, the law has tended to base its approach… on a series of rather crude working assumptions: adults of competent mental capacity are free to choose whether they will act in one way or another; they are presumed to act rationally, and in what they conceive to be their own best interests; they are credited with such foresight of the consequences of their actions as reasonable people in their position could ordinarily be expected to have; they are generally taken to mean what they say. Whatever the merits or demerits of working assumptions such as these in the ordinary range of cases, it is evident that they do not provide a uniformly accurate guide to human behaviour.

At first sight it’s possible to read this as a liberal, even  humane point of view: isn’t it ridiculous the way we blame and punish all these people for things they couldn’t really help?  But a moment’s reflection shows that the noble lord is letting these common folk off their punishment only because he’s demoting them into a sub-human category.  You know, he says, for some reason our legal system bends over backwards to treat people as if they were dignified creatures of some moral standing: it goes on treating them as worthy of admonition, worthy indeed of the honour of punishment, long after they have proved themselves to be pond life. You would almost think the law considered these people in some sense the equal of their judges!

The more I read the quote the less I like it.

Eagleman certainly has no truck with equality, and devotes a section to denouncing it. The question of whether equality of income is in itself a moral desideratum or an economic advantage, or the reverse, is of course politically contentious: but I had thought that no-one anywhere on the spectrum denied moral equality. Can Eagleman possibly mean that we don’t all equally deserve a fair hearing, natural justice, and to be judged by our actions alone?  Reader, I fear he does.

In fact what Eagleman is advocating in practice is not clear to me in any detail, but a couple of points are well established. People will still go to jail, he is keen to emphasise: not in order to be punished, but in order to protect society. It follows, though Eagleman does not dwell on this, that they might stay in jail forever, if they continue to be deemed dangerous. Indeed, if neurolaw (it seems to be as much a matter of genetics and other factors as genuine neuroscience) gives them the thumbs down as high-risk future perps, it seems to follow that they might find themselves locked up before they’ve done anything.

The other point, a little more appealing, is that Eagleman believes people can be trained out of their vicious propensities. Crime, he suggests, occurs when the struggle between the different agents in our mind goes the wrong way: when the rational self loses out to the weak-willed glutton. He mentions experiments conducted by his colleagues, which use neural feedback to try to help people overcome their desire for chocolate cake: he thinks a similar ‘prefrontal workout’ might help criminals get ready to re-enter society. So perhaps they won’t be in jail long, after all?

But does crime really arise from weakness of will? Is it that everyone wants to be good, but sometimes some people give way to an immediate temptation? Isn’t that a rather minor part of the problem? I can’t help thinking what a sunlit, untroubled life Eagleman must have led – never having experienced in himself or having reason to suspect in others any of the deep dark recesses of the soul – if he thinks evil is more or less the same as finding it hard to stop eating chocolate cake.

Another instance of naivety seems embodied in the idea that in the pandemonic struggle of our minds one side is always good and the other bad. Suppose we train our criminals to overcome their temptations and enthrone the rational, long-term part of their brain – will that make them model citizens, or will they become psychopaths who’ve learnt to rein in the empathy and repugnance which would otherwise have prevented their crimes? Terrible things have been done on grounds that seemed entirely cold and rational to their perpetrators. Sometimes it’s better to leave Falstaff in charge: there may be a glut of chocolate cake, but the Battle of Shrewsbury gets cancelled.

Eagleman is aware of the poor precedents for science-driven justice, but he has a curious way of immunising his own mind against them. He describes the failure of psychologists to predict the rate of re-offending: he describes lobotomy with distaste, yet somehow he manages to construe both as relics of the traditional way of doing things rather than early attempts at the kind of science-driven crime management he too is advocating (of course, they got it wrong while he will get it right). He describes the plot of One Flew Over the Cuckoo’s Nest without seeming to realise how much the predicament of McMurphy resembles that of an inmate in a new Eaglemanic detention facility. When McMurphy was in jail, under the benighted old system, his punishment was limited to match his crime and when his term was up a system like that deplored by Lord Bingham persisted in trying to make a free moral agent of him again. In the asylum, as in Eagleman’s jail, you don’t get out till the men in white coats are pleased with you.

At the end of the day, I think there’s a fundamental concept missing from Eagleman’s analysis: justice. He’s not the only one in recent times to overlook it or mistake it for revenge, but I’ll go out on a limb and say that it’s fundamental. There is a moral imperative that we should ensure good behaviour leads to good things and bad behaviour to bad regardless of other considerations, and this is what lies at the root of all punishment. Deterrence and rehabilitation are additional benefits we should strive to secure, but justice is what it’s about. Only justice permits, in theory and often in practice, the exercise of governmental or judicial power: to put it aside, however beguiling the reason, is tyranny.

Coming off that high horse, I suppose we can thank Eagleman for stating explicitly conclusions which others have avoided or fudged, and thereby promoting the clarity of debate. He could go a bit further in setting out the implications of his theses and proposing what they might really mean in practice. There’s no doubting that our current judicial systems are far from perfect, and even if we reject Eagleman’s prescriptions, they might in the end help things move on.