What is the moral significance of consciousness? Jim Davies addressed the question in a short but thoughtful piece recently.

Davies quite rightly points out that although the nature of consciousness is often seen as an academic matter, remote from practical concerns, it actually bears directly on how we treat animals and each other (and of course, robots, an area that was purely theoretical not that long ago, but becomes more urgently practical by the day). In particular, the question of which entities are to be regarded as conscious is potentially decisive in many cases.

There are two main ways my consciousness affects my moral status. First, if I’m not conscious, I can’t be a moral subject, in the sense of being an agent (perhaps I can’t anyway, but if I’m not conscious it really seems I can’t get started). Second, I probably can’t be a moral object either; I don’t have any desires that can be thwarted and since I don’t have any experiences, I can’t suffer or feel pain.

Davies asks whether we need to give plants consideration. They respond to their environment and can suffer damage, but without a nervous system it seems unlikely they feel pain. However, pain is a complex business, with a mix of simple awareness of damage, actual experience of that essential bad thing that is the experiential core of pain, and in humans at least, all sorts of other distress and emotional response. This makes the task of deciding which creatures feel pain rather difficult, and in practice guidelines for animal experimentation rely heavily on the broad guess that the more like humans they are, the more we should worry. If you’re an invertebrate, then with few exceptions you’re probably not going to be treated very tenderly. As we come to understand neurology and related science better, we might have to adjust our thinking. This might let us behave better, but it might also force us to give up certain fields of research which are useful to us.

To illustrate the difference between mere awareness of harm and actual pain, Davies suggests the example of watching our arm being crushed while heavily anaesthetised (I believe there are also drugs that in effect allow you to feel the pain while not caring about it). I think that raises some additional fundamental issues about why we think things are bad. You might indeed sit by and watch while your arm was crushed without feeling pain or perhaps even concern. Perhaps we can imagine that for some reason you’re never going to need your arm again (perhaps now you have a form of high-tech psychokinesis, an ability to move and touch things with your mind that simply outclasses that old-fashioned ‘arm’ business), so you have no regrets or worries. Even so, isn’t there just something bad about watching the destruction of such a complex and well-structured limb?

Take a different example; everyone is dead and no-one is ever coming back, not even any aliens. The only agent left is a robot which feels no pleasure or pain but makes conscious plans; it’s a military robot and it spends its time blowing up fine buildings and destroying works of art, for no particular reason. Its vandalistic rampage doesn’t hurt anyone and cannot have any consequences, but doesn’t its casual destructiveness still seem bad?

I’d like to argue that there is a badness to destruction over and above its consequential impact, but it’s difficult to construct a pure example, and I know many people simply don’t share my intuition. It is admittedly difficult because there’s always the likelihood that one’s intuitions are contaminated by ingrained assumptions about things having utility. I’d like to say there’s a real moral rule that favours more things and more organisation, but without appealing to consequentialist arguments it’s hard for me to do much more than note that in fact moral codes tend to inhibit destruction and favour its opposite.

However, if my gut feeling is right, it’s quite important, because it means the largely utilitarian grounds used for rules about animal research and some human matters are not quite adequate after all; the fact that some piece of research causes no pain is not necessarily enough to stop its destructive character being bad.

It’s probably my duty to work on my intuitions and arguments a bit more, but that’s hard to do when you’re sitting in the sun with a beer in the charming streets of old Salamanca…

Is there an intermediate ethical domain suitable for machines?

The thought is prompted by this summary of an interesting seminar on Engineering Moral Agents, one of the ongoing series hosted at Schloss Dagstuhl. It seems to have been an exceptionally good session which got into some of the issues in a really useful way – practically oriented but not philosophically naive. It noted the growing need to make autonomous robots – self-driving cars, drones, and so on – able to deal with ethical issues. On the one hand it looked at how ethical theories could be formalised in a way that would lend itself to machine implementation, and on the other how such a formalisation could in fact be implemented. It identified two broad approaches: top-down, where in essence you hard-wire suitable rules into the machine, and bottom-up, where the machine learns for itself from suitable examples. The approaches are not necessarily exclusive, of course.

The seminar thought that utilitarian and Kantian theories of morality were both prima facie candidates for formalisation. Utilitarian, or more broadly consequentialist, theories look particularly promising because calculating the optimal value (such as the greatest happiness of the greatest number) achievable from the range of alternatives on offer looks like something that can be reduced to arithmetic fairly straightforwardly. There are problems, in that consequentialist theories usually yield at least some results that look questionable in common-sense terms (finding the initial values to slot into your sums is also a non-trivial challenge – how do you put a clear numerical value on people’s probable future happiness?).
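The consequentialist calculation described above really is close to plain arithmetic, which is why it looks machine-friendly. A minimal sketch, with entirely invented action names and utility numbers (the hard part, as noted, is where such numbers would come from):

```python
def best_action(outcomes):
    """Pick the action whose summed utility across affected parties is greatest."""
    return max(outcomes, key=lambda action: sum(outcomes[action]))

# Hypothetical utilities for each party affected by each candidate action.
outcomes = {
    "swerve":   [0.9, -0.2, 0.1],
    "brake":    [0.4, 0.4, 0.4],
    "continue": [-1.0, 0.8, 0.8],
}

print(best_action(outcomes))  # "brake" (total 1.2 beats 0.8 and 0.6)
```

The arithmetic is trivial; everything contentious lives in the numbers, which is exactly the seminar’s point about initial values.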

A learning system eases several of these problems. You don’t need a fully formalised system (so long as you can agree on a database of examples). But you face the same problems that arise for learning systems in other contexts: you can’t have the assurance of knowing why the machine behaves as it does, and if your database has unnoticed gaps or biases you may suffer sudden catastrophic mistakes. The seminar summary rightly notes that a machine that learned its ethics will not be able to explain its behaviour; but I don’t know that that means it lacks agency; many humans would struggle to explain their moral decisions in a way that would pass muster philosophically. Most of us could do no more than point to harms avoided or social rules observed, at best.

The seminar looked at some interesting approaches, mentioned here with tantalising brevity: Horty’s default logic, Sergot’s STIT (See To It That) logic; and the possibility of drawing on the decision theory already developed in the context of micro-economics. This is consequentialist in character and there was an examination of whether in fact all ethical theories can be restated in consequentialist terms (yes, apparently, but only if you’re prepared to stretch the idea of a consequence to a point where the idea becomes vacuous). ‘Reason-based’ formalisations presented by List and Dietrich interestingly get away from narrow consequentialisms and their problems using a rightness function which can accommodate various factors.

The seminar noted that society will demand high, perhaps precautionary standards of safety from machines, and floated the idea of an ethical ‘black box’ recorder. It noted the problem of cultural neutrality and the risk of malicious hacking. It made the important point that human beings do not enjoy complete ethical agreement anyway, but argue vigorously about real issues.

The thing that struck me was how far it was possible to go in discussing morality when it is pretty clear that the self-driving cars and so on under discussion actually have no moral agency whatever. Some words of caution are in order here. Some people think moral agency is a delusion anyway; some maintain that on the contrary, relatively simple machines can have it. But I think for the sake of argument we can assume that humans are moral beings, and that none of the machines we’re currently discussing is even a candidate for moral agency – though future machines with human-style general understanding may be.

The thing is that successful robots currently deal with limited domains. A self-driving car can cope with an array of entities like road, speed, obstacle, and so on; it does not and could not have the unfettered real-world understanding of all the concepts it would need to make general ethical decisions about, for example, what risks and sacrifices might be right when it comes to actual human lives. Even Asimov’s apparently simple Laws of Robotics required robots to understand and recognise correctly and appropriately the difficult concept of ‘harm’ to a human being.

One way of squaring this circle might be to say that, yes, actually, any robot which is expected to operate with any degree of autonomy must be given a human-level understanding of the world. As I’ve noted before, this might actually be one of the stronger arguments for developing human-style artificial general intelligence in the first place.

But it seems wasteful to bestow consciousness on a roomba, both in terms of pure expense and in terms of the chronic boredom the poor thing would endure (is it theoretically possible to have consciousness without the capacity for boredom?). So really the problem that faces us is one of making simple robots, that operate on restricted domains, able to deal adequately with occasional issues from the unrestricted domain of reality. Now clearly ‘adequate’ is an important word there. I believe that in order to make robots that operate acceptably in domains they cannot understand, we’re going to need systems that are conservative and tend towards inaction. We would not, I think, accept a long trail of offensive and dangerous behaviour in exchange for a rare life-saving intervention. This suggests rules rather than learning; a set of rules that allow a moron to behave acceptably without understanding what is going on.

Do these rules constitute a separate ethical realm, a ‘sub-ethics’ that substitute for morality when dealing with entities that have autonomy but no agency? I rather think they might.

Interesting exchange about Eric Schwitzgebel’s view that we have special obligations to robots…

Some serious moral dialogue about robots recently. Eric Schwitzgebel put forward the idea that we might have special duties in respect of robots, on the model of the duties a parent owes to children, an idea embodied in a story he wrote with Scott Bakker. He followed up with two arguments for robot rights; first, the claim that there is no relevant difference between humans and AIs, second, a Bostromic argument that we could all be sims, and if we are, then again, we’re not different from AIs.

Scott has followed up with a characteristically subtle and bleak case for the idea that we’ll be unable to cope with the whole issue anyway. Our cognitive capacities, designed for shallow information environments, are not even up to understanding ourselves properly; the advent of a whole host of new styles of cognition will radically overwhelm them. It might well be that the revelation of how threadbare our own cognition really is will be a kind of poison pill for philosophy (a well-deserved one on this account, I suppose).

I think it’s a slight mistake to suppose that morality confers a special grade of duty in respect of children. It’s more that parents want to favour their children, and our moral codes are constructed to accommodate that. It’s true society allocates responsibility for children to their parents, but that’s essentially a pragmatic matter rather than a directly moral one. In wartime Britain the state was happy to make random strangers responsible for evacuees, while those who put the interests of society above their own offspring, like Brutus (the original one, not the Caesar stabber) have sometimes been celebrated for it.

What I want to do though, is take up the challenge of showing why robots are indeed relevantly different to human beings, and not moral agents. I’m addressing only one kind of robot, the kind whose mind is provided by the running of a program on a digital computer (I know, John Searle would be turning in his grave if he wasn’t still alive, but bear with me). I will offer two related points, and the first is that such robots suffer grave problems over identity. They don’t really have personal identity, and without that they can’t be moral agents.

Suppose Crimbot 1 has done a bad thing; we power him down, download his current state, wipe the memory in his original head, and upload him into a fresh robot body of identical design.

“Oops, I confess!” he says. Do we hold him responsible; do we punish him? Surely the transfer to a new body makes no difference? It must be the program state that carries the responsibility; we surely wouldn’t punish the body that committed the crime. It’s now running the Saintbot program, which never did anything wrong.

But then neither did the copy of Crimbot 1 software which is now running in a different body – because it’s a copy, not the original. We could upload as many copies of that as we wanted; would they all deserve punishment for something only one robot actually did?

Maybe we would fall back on the idea that for moral responsibility it has to be the same copy in the same body? By downloading and wiping we destroyed the person who was guilty and merely created an innocent copy? Crimbot 1 in the new body smirks at that idea.

Suppose we had uploaded the copy back into the same body? Crimbot 1 is now identical, program and body, the same as if we had merely switched him off for a minute. Does the brief interval when his data registers had different values make such a moral difference? What if he downloaded himself to an internal store, so that those values were always kept within the original body? What if he does that routinely every three seconds? Does that mean he is no longer responsible for anything, (unless we catch him really quickly) while a version that doesn’t do the regular transfer of values can be punished?

We could have Crimbot 2 and Crimbot 3; 2 downloads himself to internal data storage every second and the immediately uploads himself again. 3 merely pauses every second for the length of time that operation takes. Their behaviour is identical, the reasons for it are identical; how can we say that 2 is innocent while 3 is guilty?
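The download-wipe-upload operation at the heart of these puzzles is easy to make concrete in code. A sketch, with a hypothetical program state standing in for Crimbot’s mind: after the cycle, no test the state itself affords can distinguish the copy from the original, which is exactly what makes the assignment of guilt so slippery.

```python
import copy

# Hypothetical "mind" of Crimbot 1, represented as plain program state.
crimbot_state = {"memories": ["the crime"], "disposition": "bad"}

snapshot = copy.deepcopy(crimbot_state)   # "download" the current state
crimbot_state = None                      # wipe the original memory
fresh_body = snapshot                     # "upload" into a fresh body

# The uploaded state is indistinguishable from the pre-wipe original.
print(fresh_body == {"memories": ["the crime"], "disposition": "bad"})  # True
```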

But then, as the second point, surely none of them is guilty of anything? Whatever may be true of human beings, we know for sure that Crimbot 1 had no choice over what to do; his behaviour was absolutely determined by the program. If we copy him into another body and set him up with the same circumstances, he’ll do the same things. We might as well punish him in advance; all copies of the Crimbot program deserve punishment, because the only thing preventing any of them from committing the crime is circumstance.

Now, we might accept all that and suggest that the same problems apply to human beings. If you downloaded and uploaded us, you could create the same issues; if we knew enough about ourselves our behaviour would be fully predictable too!

The difference is that in Crimbot the distinction between program and body is clear because he is an artefact, and he has been designed to work in certain ways. We were not designed, and we do not come in the form of a neat layer of software which can be peeled off the hardware. The human brain is unbelievably detailed, and no part of it is irrelevant. The position of a single molecule in a neuron, or even in the supporting astrocytes, may make the difference between firing and not firing, and one neuron firing can be decisive in our behaviour. Whereas Crimbot’s behaviour comes from a limited set of carefully designed functional properties, ours comes from the minute specifics of who we are. Crimbot embodies an abstraction, he’s actually designed to conform as closely as possible to design and program specs; we’re unresolvably particular and specific.

Couldn’t that, or something like that, be the relevant difference?

David Gunkel of NIU has produced a formidable new book (via) on the question of whether machines should now be admitted to the community of moral beings.

He lets us know what his underlying attitudes are when he mentions by way of introduction that he thought of calling the book A Vindication of the Rights of Machines, in imitation of Mary Wollstonecraft. Historically Gunkel sees the context as one in which a prolonged struggle has gradually extended the recognised moral domain from being the exclusive territory of rich white men to the poor, people of colour, women and now tentatively perhaps even certain charismatic animals (I think it overstates the case a bit to claim that ancient societies excluded women, for example, from the moral realm altogether: weren’t women like Eve and Pandora blamed for moral failings, while Lucretia and Griselda were held up as fine examples of moral behaviour – admittedly of a rather grimly self-subordinating kind? But perhaps I quibble.) Given this background the eventual admission of machines to the moral community seems all but inevitable; but in fact Gunkel steps back and lowers his trumpet. His more modest aim, he says, is like Socrates simply to help people ask better questions. No-one who has read any Plato believes that Socrates didn’t have his answers ready before the inquiry started, so perhaps this is a sly acknowledgement that Gunkel too, thinks he really knows where this is all going.

For once we’re not dealing here with the Anglo-Saxon tradition of philosophy: Gunkel may be physically in Illinois, but intellectually he is in the European Continental tradition, and what he proposes is a Derrida-influenced deconstruction. Deconstruction, as he concisely explains, is not destruction or analysis or debunking, but the removal from an issue of the construction applied heretofore.  We can start by inverting the normal understanding, but then we look for the emergence of a new way of ‘thinking otherwise’ on the topic which escapes the traditional framing. Even the crustiest of Anglo-Saxons ought to be able to live with that as a modus operandi for an enquiry.

The book falls into three sections: in the first two Gunkel addresses moral agency, questions about the morality of what we do, and then moral patiency, about the morality of what is done to us. This is a sensible and useful division. Each section proceeds largely by reportage rather than argument, with Gunkel mainly telling us what others have said, followed by summaries which are not really summaries (it would actually be very difficult to summarise the multiple, complex points of view explored) but short discussions moving the argument on. The third and final section discusses a number of proposals for ‘thinking otherwise’.

On agency, Gunkel sets out a more or less traditional view as a starting point and notes that identifying agents is tricky because of the problem of ‘other minds’: we can never be sure whether some entity is acting with deliberate intention because we can never know  the contents of another mind. He seems to me to miss a point here; the advent of the machines has actually transformed the position. It used to be possible to take it for granted that the problem of other minds was outside the scope of science, but the insights generated by AI research and our ever-increasing ability to look inside working brains with scanners mean that this is no longer the case. Science has not yet solved the problem, but the idea that we might soon be able to identify agents by objective empirical measurement no longer requires reckless optimism.

Gunkel also quotes various sources to show that actual and foreseen developments in AI are blurring the division between machine and human cognition (although we could argue that that seems to be happening more because the machines are getting better and moving into marginal territory than because of any fundamental flaws in the basic concepts).  Instrumentalism would transfer all responsibility to human beings, reducing machines to the status of tools, but against that Gunkel quotes a Heideggerian concept of machine autonomy and criticisms of inherent anthropocentrism. He gently rejects the argument of Joanna Bryson that robots should be slaves (in normal usage I think slaves have to be people, which in this discussion begs the question). He rightly describes the concept of personhood as central and points out its links with consciousness, which, as we can readily agree, throws a whole new can of worms into the mix. All in all, Gunkel reasonably concludes that we’re in some difficulty and that the discussion appears to  ‘lead into that kind of intellectual cul-de-sac or stalemate that Hegel called a “bad infinite.”’

He doesn’t give up without considering some escape routes. Deborah Johnson interestingly proposes that although computers lack the mental states required for proper agency, they have intentionality and are therefore moral players at some level. Various others offer proposals which lower the bar for moral agency in one way or another; but all of these are in some respects unsatisfactory. In the end Gunkel thinks we might do well to drop the whole mess and try an approach founded on moral patiency instead.

The question now becomes, not should we hold machines responsible for their actions, but do we have a moral duty to worry about what we do to them? Gunkel feels that this side of the question has often been overshadowed by the debate about agency, but in some ways it ought to be easier to deal with. An interesting proposition here is that of a Turing Triage Test: if a machine can talk to us for a suitable time about moral matters without our being able to distinguish it from a human being, we ought to give it moral status and presumably, not turn it off. Gunkel notes reasonable objections that such a test requires all the linguistic and general cognitive capacity of the original Turing test simply in order to converse plausibly, which is surely asking too much. Although I don’t like the idea of the test very much, I think there might be ways round these objections if we could reduce the interaction to multiple-choice button-pushing, for example.

It can be argued that animals, while lacking moral agency, have a good claim to be moral patients. They have no duties, but may have rights, to put it another way. Gunkel rightly points here to the Benthamite formulation that what matters is not whether they can think, but whether they can suffer; but he notes considerable epistemological problems (we’re up against Other Minds again). With machines the argument from suffering is harder to make because hardly anyone believes they do suffer: although they may simulate emotions and pain, it is most often agreed that in this area at least Searle was right that simulations and reality are poles apart. Moreover it’s debatable whether bringing animals into a human moral framework is an unalloyed benefit or to some extent simply the reassertion of human dominance. Nevertheless some would go still further and Gunkel considers proposals to extend ethical status to plants, land, and artefacts. Information Ethics essentially completes the extension of the ethical realm by excluding nothing at all.

This, then is one of the ways of thinking otherwise – extending the current framework to include non-human individuals of all kinds. But there are other ways: one is to extend the individual: a number of influential voices have made the case in recent years for an extended conception of consciousness, and that might be the most likely way for machines to gravitate within the moral realm – as adjuncts of a more broadly conceived humanity.

More radically, Gunkel suggests we might adopt proposals to decentre the system; instead of working with fixed Cartesian individuals we might try to grant each element in the moral system the rights and responsibilities appropriate to it at the time (I’m not sure exactly how that plays out in real situations); or we could modify and distribute our conception of agency. There is an even more radical possibility which Gunkel clearly finds attractive in the ethics of Emmanuel Levinas, which makes both agency and patiency secondary and derived while ethical interactions become primary, or to put it more accurately:

The self or the ego, as Levinas describes it… becomes what it is as a by-product of an uncontrolled and incomprehensible exposure to the face of the Other that takes place prior to any formulation of the self in terms of agency.

I warned you this was going to get a bit Continental – but actually making acts define the agent rather than the other way about may not be as unpalatably radical as all that. He clearly likes Levinas’ drift, anyway, and perhaps even better Silvia Benso’s proposed ‘mash-up’ which combines Levinasian non-ethics with Heideggerian non-things (tempting but a little unfair to ask what kind of sense that presumably makes).

Actually the least appealing proposal reported by Gunkel, to me at least, is that of Anne Foerst, who would reduce personhood to a social construct which we assign or withhold: this seems dangerously close to suggesting that say, concentration camp guards can actually withdraw real moral patienthood from their victims and hence escape blame (I’m sure that’s not what she actually means).

However, on the brink of all this heady radicalism Gunkel retreats to common sense. At the beginning of the book he suggested that Descartes could be seen as ‘the bad guy’ in his reduction of animals to the status of machines and exclusion of both from the moral realm; but perhaps after all, he concludes, we are in the end obliged to imitate Descartes’ provisional approach to life, living according to the norms of our society while the philosophical issues resist final resolution. This gives the book a bit of a dying fall and cannot but seem a  bit of a cop-out.

Overall, though, the book provides a galaxy of challenging thought to which I haven’t done anything like justice and Gunkel does a fine job of lucid and concise exposition. That said, I don’t find myself in sympathy with his outlook. For Gunkel and others in his tradition the ethical question is essentially political and under our control: membership of the moral sphere is something that can be given or not given rather like the franchise. It’s not really a matter of empirical or scientific fact, which helps explain Gunkel’s willingness to use fictional examples and his relative lack of interest in what digital computers actually can and cannot do. While politics and social convention are certainly important aspects of the matter, I believe we are also talking about real, objective capacities which cannot be granted or held back by the fiat of society any more than the ability to see or speak. To put it in a way Gunkel might find more congenial: when ethnic minorities and women are granted equal moral status, it isn’t simply an arbitrary concession of power, the result of a social tug-of-war but the recognition of hard facts about the equality of human moral capacity.

Myself I should say that moral agency is very largely a matter of understanding what you are doing; an ability to allow foreseen contingencies to influence current action. This is something machines might well achieve: arguably the only reason they haven’t got it already is an understandable human reluctance to trust free-ranging computational decision-making given an observable tendency for programs to fail more disastrously and less gracefully than human minds.

Moral patienthood, on the other hand, is indeed partly a matter of the ability to experience pain and other forms of suffering, and that is problematic for machines; but it’s also a matter of projects and wishes, and machines fall outside consideration here because they simply don’t have any. They literally don’t care, and hence they simply aren’t in the game.

That seems to leave me with machines that I should praise and blame but need not worry about harming: should be good for a robot butler anyway.


Normally we try to avoid casting aspersions on the character of those who hold a particular opinion; we like to take it for granted that everyone in the debate is honest, dispassionate, and blameless. But a recent paper by Baumeister, Masicampo and DeWall (2009), described in PsyBlog, suggests that determinism (disbelief in free will) is associated with lower levels of helpfulness and higher levels of aggression. Another study reported in Cognitive Daily found that determinists are also cheats.

It’s possible to question the way these experiments were done. They involved putting deterministic thoughts into some of the subjects’ minds by, for example, reading them passages from the works of Francis Crick (who besides being an incorrigible opponent of free will in philosophical terms, also, I suppose, opened the way for genetic determinism). That’s all very well, but it could be that, as it were,  habitual determinists are better able to resist the morally corrosive effect of their beliefs than people who have recently been given a dose of persuasive determinism.

However, the results certainly chime with a well-established fear that our growing ability to explain human behaviour is tending to reduce our belief in responsibility, so that malefactors are able to escape punishment merely by quoting factors that influenced their behaviour: ‘I was powerless; the crime was caused by chemical changes in my brain.’

PsyBlog concludes  that we must cling to belief in free will, which sounds perilously close to suggesting that we should pretend to believe in it even if we don’t.  But leaving aside for a moment the empirical question of whether determinists are morally worse than those who believe in free will, why should they be?

The problem arises because the traditional view of moral responsibility requires that the evil act must be freely chosen in order for the moral taint to rub off on the agent. If no act is ever freely chosen, we may do bad things but we shall never ourselves be truly bad, so moral rules have no particular force. A few determinists, perhaps, would bite this bullet and agree that morality is a delusion, but I think most would not. It would be possible for determinists to deny the requirement for freedom and say instead that people are guilty of wrong-doing simply when connected causally or in other specified ways with evil acts, regardless of whether their behaviour is free or not. This restores the validity of moral judgements and justifies punishment, although it leaves us curiously helpless. This tragic view was actually current in earlier times: Oedipus considered himself worthy of punishment even though he had had no knowledge of the crimes he was committing, and St Augustine had to argue against those who contended that the rape suffered by Lucretia made her a sinful adulteress – something which was evidently still a live issue in 1748 when Richardson was writing Clarissa, where the same point is raised. Even currently in legal theory we have the notion of strict liability, whereby people may be punished for things they had no control over (if you sell poisonous food, you’re liable, even if it wasn’t you that caused it to be poisonous). This is, I think, a case of ancients and moderns reaching similar conclusions from almost antithetical understandings; in the ancient world you could be punished for things you couldn’t have prevented because moral taint was so strong; in the contemporary world you can be punished for things you couldn’t have prevented because moral taint is irrelevant and punishment is merely a matter of deterrence.

That is of course, the second escape route open to determinists; it’s not about moral responsibility, it’s about deterrence, social sanctions, and inbuilt behavioural norms, which together are enough to keep us all on the straight and narrow. This line of argument opens up an opportunity for the compatibilists, who can say: you evidently believe that human beings have some special capacity to change their behaviour in response to exhortation or punishment – why don’t we just call that free will? More dangerously, it leaves the door open for the argument that those who believe their decisions have real moral consequence are likely to behave better than those who comply with social norms out of mere pragmatism and conditioning.

Meantime, to the rescue come De Brigard, Mandelbaum, and Ripley (pdf): as a matter of fact, they say, our experiments show that giving a neurological explanation for bad behaviour has no effect on people’s inclination to condemn it. It seems to follow that determinism makes no difference. They are responding to Nahmias, who put forward the interesting idea of bypassing: people are granted moral immunity if they are thought to suffer from some condition that bypasses their normal decision-making apparatus, but not if they are subject to problems which are thought to leave that apparatus in charge. In particular, Nahmias found that subjects tended to dismiss psychological excuses, but accept neurological ones. De Brigard, Mandelbaum and Ripley, by contrast, found it made no difference to their subjects’ reactions whether a mental condition such as anosognosia was said to be psychological or neurological; the tendency to assign blame was much the same in both cases. I’m not sure their tests did enough to make sure the distinction between neurological and psychological explanations was understood by the subjects; but their research does underline a secondary implication of the other papers: that most people are not consistent and can adopt different interpretations on different occasions (notably there were signs that subjects were more inclined to assign blame where the offence was more unpleasant, which is illogical but perhaps intuitively understandable).

I suspect that people’s real-life moral judgements are for the most part not much affected by the view they take on a philosophical level, and that modern scientific determinism has really only provided a new vocabulary for defence lawyers. A hundred or two hundred years ago, they might have reminded a jury of the powerful effect of Satan’s wiles on an innocent but redeemable mind;  now it may be the correctable impact of a surge of dopamine they prefer to put forward.