Posts tagged ‘free will’

Picture: unconscious will.

Does the idea of unconscious free will even make sense? Paula Droege, in the recent JCS, seems to think it might. Generally experiments like Libet’s famous ones, which seemed to show that decisions are made well before the decider is consciously aware of them, are considered fatal to free will. If the conscious activity came along only after the matter had been settled, it must surely have been powerless to affect it (there are some significant qualifications to this: Libet himself, for example, considered there was a power of last-minute veto which he called ‘free won’t’ – but still the general point is clear). If our conscious thoughts were irrelevant, it seems we didn’t have any say in the matter.

However, this view implies a narrow conception of the self in which unconscious processes are not really part of me and I only really consist of that entity that does all the talking. Yet in other contexts, notably in psychoanalysis, don’t we take the un- or sub-conscious to be more essential to our personality than the fleeting surface of consciousness, to represent more accurately what we ‘really’ want and feel? Droege, while conceding that if we take the narrow view there’s a good deal in the sceptical position, would prefer a wider view in which unconscious acts are valid examples of agency too. She would go further and bring in social influences (though it’s not entirely clear to me how the effects of social pressure can properly be transmuted into examples of my own free will), and she offers the conscious mind the consolation prize of being able to influence habits and predispositions which may in turn have a real causal influence on our actions.

I suppose there are several ways in which we exercise our agency. We perhaps tend to think of cases of conscious premeditation because they are the clearest, but in everyday life we just do stuff most of the time without thinking about it much, or very explicitly. Many of the details of our behaviour are left to ‘autopilot’, but in the great majority of cases the conscious mind would nevertheless claim these acts as its own. Did you stop at the traffic light and then move off again when it turned green? You don’t really remember doing it, but are generally ready to agree that you did. In unusual cases, we know that people sometimes even elaborate or confabulate spurious rationales for actions they didn’t really determine.

But it’s much more muddled than that. We do also at times seek to disown moral responsibility for something done when we weren’t paying proper attention, or where our rational responses were overwhelmed by a sudden torrent of emotion. Should someone who responds to the sight of a hated enemy by swerving to collide with the provoker be held responsible because the murderous act stems from emotions which are just as valid as cold calculation? Perhaps, but sometimes the opposite is taken to be the case, and the overwhelming emotion of a crime passionnel can be taken as an excuse. Then again few would accept the plea of the driver who says he shouldn’t be held responsible for an accident because he was too drunk to drive properly.

I think there may be an analogy with the responsibility held by the head of a corporation: the general rule is that the buck stops with the chief, even if the chief did not give orders for the particular action which subordinates have taken. In the same way we’re presumed by default to be responsible for what we do, but there are cases where control is genuinely and unavoidably lost, no matter what prudent precautions the chief may have put in place. There may be cases where the chief properly has full and sole responsibility; others where the corporation has blundered on in pursuit of its own built-in inclinations, and it may be appropriate for the organization as a whole to accept blame in its corporate personality; and cases where confusion reigned for reasons beyond reasonable control, so that no responsibility can be assigned at all.

If that’s right, then Droege is on to something; but if there are two distinct grades of responsibility in play, there ought really to be two varieties of free will: the one exercised explicitly by the fully conscious me, and the other by ‘whole person’ me, in which the role of the conscious me, while perhaps not non-existent, is small and perhaps mostly indirect. This is an odd thought, but if, like Droege, we broadly accept that Libet has disproved the existence of the first variety of free will, it means we don’t have the kind we can’t help believing in, but do have another kind we never previously thought of – which seems even odder.

Picture: Autoscopy.

Among a number of interesting features, The Ego Tunnel includes a substantial account of out-of-body experiences (OBEs) and similar phenomena. Experiments where the subjects are tricked into mistaking a plastic dummy for their real hand (all done with mirrors), or into feeling themselves to be situated somewhere behind their own head (you need a camera for this), show that our perception of our own body and our own location are generated within our brain and are susceptible to error and distortion; and according to Metzinger this shows that they are really no more than illusions. (Is that right, by the way – or are they only illusions when they’re wrong or misleading? The fact that a camera can be made to generate false or misleading pictures doesn’t mean that all photographs are delusions, does it?)

There are many interesting details in this account, quite apart from its value as part of the overall argument. Metzinger briefly touches on four varieties of autoscopic (self-seeing) phenomena, all of which can be related to distinct areas of the brain: autoscopic hallucination, where the subject sees an image of themselves; the feeling of a presence, where the subject has the strong sense of someone there without seeing anyone; the particularly disturbing heautoscopy, where the subject sees another self and switches back and forth into and out of it, unsure which is ‘the real me’; and the better-known OBE. OBEs arise in various ways: often detachment from the body is sudden, but in other cases the second self may lift out gradually from the feet, or may exit the corporeal body via the top of the head. Metzinger tells us that he himself has experienced OBEs and made many efforts to have more (going so far as to persuade his anaesthetist to use ketamine on him in advance of an operation, with no result – I wonder whether the anaesthetist actually kept his word); speaking of lucid dreams, another personal interest, he tells the story of having one in which he dreamed an OBE. That seems an interesting bit of evidence: if you can dream a credible OBE, mightn’t they all be dreams? This seems to undercut the apparently strong sense of reality which typically accompanies them.

Interestingly, Metzinger reports that a conversation with Susan Blackmore helped him understand his own experiences.  Blackmore is of course another emphatic denier of the reality of the self. I don’t in any way mean to offer an ad hominem argument here, but it is striking that these two people both seem to have had a particular interest in ‘spooky’ dualistic phenomena which their rational scientific minds ultimately rejected, leading on to an especially robust rejection of the self. Perhaps people who lean towards dualism in their early years develop a particularly strong conception of the self, so that when they adopt monist materialism they reject the self altogether instead of seeking to redefine and accommodate it, as many of us would be inclined to do?

On that basis, you would expect Metzinger to be the hardest of hard determinists; his ideas seem to lean in that direction, but not decisively. He suggests that certain brain processes involved in preparing actions are brought up into the Ego Tunnel and hence seem to belong to us. They seem to be our own thoughts, our own goals; and because the earlier stages remain outside the Tunnel, they seem to have come from nowhere, to be our own spontaneous creations. There are really no such things as goals in the world, any more than colours, but the delusion that they do exist is useful to us; the idea of being responsible for our own actions enables a kind of moral competition which is ultimately to our advantage (I’m not quite sure exactly how this works). But in this case Metzinger pulls his punch: perhaps this is not the full story, he says, and describes compatibilism as the most beautiful position.

Metzinger pours scorn on the idea that we must have freedom of the will because we feel our actions to be free, yet he does give an important place to the phenomenology of the issue, pointing out that it is more complex than might appear. The more you look at them, he suggests, the more evasive conscious intentions become. How curious it is, then, that Metzinger, whose attention to phenomenology is outstandingly meticulous, should seem so sure that we have at all times a robust (albeit delusional) sense of our selves. I don’t find it so at all, and of course on this no less a person than David Hume is with me; with characteristically gentle but devastating scepticism, he famously remarked: “For my part, when I enter most intimately into what I call myself, I always stumble on some particular perception or other, of heat or cold, light or shade, love or hatred, pain or pleasure. I never can catch myself at any time without a perception, and never can observe any thing but the perception.”

Metzinger concludes by considering a range of moral and social issues which he thinks we need to address as our understanding of the mind improves. In his view, for example, we ought not to try to generate artificial consciousness. As a conscious entity, the AI would be capable of suffering, and in Metzinger’s view the chances are its existence would be more painful than pleasant. One reason for thinking so is the constrained and curtailed existence it could expect; another is that we only have our own minds to go on and would be likely to produce inferior, messed-up versions of them. But more alarmingly, Metzinger argues that human life itself involves an overall preponderance of pain over pleasure; he invokes Schopenhauer and Buddha. With characteristic thoroughness, he concedes that pleasure and pain may not be all that life is about; other achievements can justify a life of discomfort. But even so, the chances for an artificial consciousness, he feels, are poor.

This is surely too bleak. I see no convincing reason to think that pain outweighs pleasure in general (certainly the Buddhist case, based on the perverse assumption that change is always painful, seems a weak point in that otherwise logical religion), and I see some reasons to think that a conscious robot would be less vulnerable to bad experiences than we are. It’s millions of years of evolution which have ingrained in us a fear of death and the motivating experience of pain:  the artificial consciousness need have none of that, but would surely be most likely to face its experiences with superhuman equanimity.

Of course caution is justified, but Metzinger in effect wants us to wait until we’ve sorted out the meaning of life before we get on with living it.

His attempt to raise this and other issues is commendable though; he’s right that the implications of recent progress have not received enough intelligent attention. Unfortunately I think the chances of some of these issues being addressed with philosophic rationality are slim. Another topic Metzinger raises, for example, is the question of what kinds of altered or enhanced mental states, from among the greatly expanded repertoire we are likely to have available in the near future, we ought to allow or facilitate; there is little chance that his mild suggestions on that front will have much impact.

There’s a vein of pessimism in his views on another topic. Metzinger fears that the progress of science, before the deeper issues have been sorted out, could inspire an unduly cynical, stripped-down view of human nature; a ‘vulgar materialism’, he calls it. Uninformed members of the public falling prey to this crude point of view might be tempted to think:

“The cat is out of the bag. We are gene-copying bio-robots, living out here on a lonely planet in a cold and empty physical universe. We have brains but no immortal souls and after seventy years or so the curtain drops. There will never be an afterlife, or any kind of reward or punishment for anyone… I get the message.”

Gosh: do we know anyone vulgar and unsophisticated enough to think like that?

Picture: clock on screen.

One of the most frequently visited pages on Conscious Entities is this account of Benjamin Libet’s remarkable experiments, which seemed to show that decisions to move were really made half a second before we were aware of having decided. To some this seemed like a practical disproof of the freedom of the will – if the decision was already made before we were consciously aware of it, how could our conscious thoughts have determined what the decision was? Libet’s findings have remained controversial ever since they were published; they have been attacked from several different angles, but his results were confirmed and repeated by other researchers and seemed solid.

However, Libet’s conclusions rested on the use of Readiness Potentials (RPs). Earlier research had shown that the occurrence of an RP in the brain reliably indicated that a movement was coming along just afterwards, and RPs were therefore seen as a neurological sign that the decision to move had been taken (Libet himself found that the movement could sometimes be suppressed after the RP had appeared, but this possibility, which he referred to as ‘free won’t’, seemed only to provide an interesting footnote). The new research, by Trevena and Miller at Otago, undermines the idea that RPs indicate a decision.

Two separate sets of similar experiments were carried out. They resembled Libet’s original ones in most respects, although computer screens and keyboards replaced Libet’s more primitive equipment, and the hand movement took the form of a key-press. A clock face similar to that in Libet’s experiments was shown, and they even provided a circling dot. In the earlier experiments this had provided an ingenious way of timing the subject’s awareness that a decision had been made – the subject would report the position of the dot at the moment of decision – but in Trevena and Miller’s research the clock and dot were provided only to make conditions resemble Libet’s as much as possible. Subjects were told to ignore them (which you might think rendered their inclusion pointless). This was because instead of allowing the subject to choose their own time for action, as in Libet’s original experiments, the subjects in the new research were prompted by a randomly-timed tone. This is obviously a significant change from the original experiment; the reason for doing it this way was that Trevena and Miller wanted to be able to measure occasions when the subject decided not to move as well as those when there was movement.

Some of the subjects were told to strike a key whenever the tone sounded, while the rest were asked to do so only about half the time (it was left up to them to select which tones to respond to, though if they seemed to be falling well below a 50-50 split they got a reminder in the latter part of the experiment). Another significant difference from Libet’s tests is that left and right hands were used: in one set of experiments the subjects were told by a letter in the centre of the screen whether they should use the right or left hand on each occasion; in the other it was left up to them.

There were two interesting results. One was that the same kind of RP appeared whether the subject pressed a key or not. Trevena and Miller say this shows that the RP was not, after all, an indication of a decision to move, and was presumably instead associated with some more general kind of sustained attention or preparation for a decision. Second, they found that a different kind of RP, the Lateralised Readiness Potential or LRP, which indicates readiness to move a particular hand, did mark a decision, appearing only where a movement followed; but the LRP did not appear until just after the tone. This suggests, in contradiction to Libet, that the early stages of action followed the conscious experience of deciding, rather than preceding it.

The differences between these new experiments and Libet’s originals provide a weak spot which Libetians will certainly attack.  Marcel Brass, whose own work with fMRI scanning confirmed and even extended Libet’s delay, seeming to show that decisions could be predicted anything up to ten seconds before conscious awareness, has apparently already said that in his view the changes undermine the conclusions Trevena and Miller would like to draw. Given the complex arguments over the exact significance of timings in Libet’s results, I’m sure the new results will prove contentious. However, it does seem as if a significant blow has been struck for the first time against the foundations of Libet’s remarkable results.

Picture: Percy - Brains he has nix.

Ages ago (gosh, it was nearly five years ago) I had a piece where Blandula remarked that any robot clever enough to understand Isaac Asimov’s Three Laws of Robotics would surely be clever enough to circumvent them. At the time I think all I had in mind was the ease with which a clever robot would be able to devise some rationalisation of the harm or disobedience it was contemplating. Asimov himself was of course well aware of the possibility of this kind of thing in a general way. Somewhere (working from memory) I think he explains that it was necessary to specify that robots may not, through inaction, allow a human to come to harm, or they would be able to work round the ban on outright harming by, for example, dropping a heavy weight on a human’s head. Dropping the weight would not amount to harming the human because the robot was more than capable of catching it again before the moment of contact. But once the weight was falling, a robot without the additional specification would be under no obligation to do the actual catching.

That does not actually wrap up the problem altogether. Even in the case of robots with the additional specification, we can imagine that ways to drop the fatal weight might be found. Suppose, for example, that three robots, who in this case are incapable of catching the weight once dropped, all hold on to it and agree to let go at the same moment. Each individual can feel guiltless because if the other two had held on, the weight would not have dropped. Reasoning of this kind is not at all alien to the human mind;  compare the planned dispersal of responsibility embodied in a firing squad.
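
To make the dodge explicit, here is a minimal sketch (my own, not Asimov’s) of the ‘but-for’ test of causation that each robot is implicitly applying. I am adding one assumption: the weight is heavy enough that at least two robots must keep hold of it, so no single grip makes a difference.

```python
# A toy model of the three-robot scenario. Assumption (mine): the weight
# needs at least two holders to stay up.

def weight_drops(grips):
    """True if the weight falls, given which robots keep hold of it."""
    return sum(grips) < 2          # one robot alone cannot support it

def but_for_responsible(robot, grips):
    """The 'but-for' test: would acting otherwise have changed the outcome?"""
    counterfactual = list(grips)
    counterfactual[robot] = not counterfactual[robot]
    return weight_drops(grips) != weight_drops(counterfactual)

grips = [False, False, False]      # all three let go at the agreed moment
print(weight_drops(grips))         # True: the weight falls
for robot in range(3):
    # False for every robot: no single change of mind would have saved
    # the human, so each can plead innocence - the firing-squad dodge.
    print(robot, but_for_responsible(robot, grips))
```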

Anyway, that’s all very well, but I think there may well be a deeper argument here: perhaps the cognitive capacity required to understand and apply the Three Laws is actually incompatible with a cognitive set-up that guarantees obedience.

There are two problems for our Asimovian robot: first it has to understand the Laws; second, it has to be able to work out what actions will deliver results compatible with them.  Understanding, to begin with, is an intractable problem.  We know from Quine that every sentence has an endless number of possible interpretations; humans effortlessly pick out the one that makes sense, or at least a small set of alternatives; but there doesn’t seem to be any viable algorithm for picking through the list of interpretations. We can build in hard-wired input-output responses, but when we’re talking about concepts as general and debatable as ‘harm’, that’s really not enough. If we have a robot in a factory, we can ensure that if it detects an unexpected source of heat and movement of the kind a human would generate, it should stop thrashing its painting arm around – but that’s nothing like intelligent obedience of a general law against harm.
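
For contrast, here is roughly what such a hard-wired input-output response looks like in code; this is my own sketch with invented names, and the point is precisely what it lacks: nothing in it represents ‘harm’ as a concept, just one trigger wired to one reaction.

```python
# A hard-wired input-output response (illustrative names are my own).
# It handles exactly one hazard in exactly one way; change the sensor or
# the danger and it is silent, which is why it falls far short of
# intelligent obedience to a general law against harm.

class PaintingArm:
    def __init__(self):
        self.running = True

    def stop(self):
        self.running = False

def safety_reflex(heat_detected, motion_detected, arm):
    """If something human-like seems to be nearby, freeze the arm."""
    if heat_detected and motion_detected:
        arm.stop()

arm = PaintingArm()
safety_reflex(heat_detected=True, motion_detected=True, arm=arm)
print(arm.running)   # False - safe in this one scripted case, and no other
```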

But even if we can get the robot to understand the Laws, there’s an equally grave problem involved in making it choose what to do.  We run into the frame problem (in its wider, Dennettian form). This is, very briefly, the problem that arises from tracking changes in the real world. For a robot to keep track of everything that changes (and everything which stays the same, which is also necessary) involves an unmanageable explosion of data. Humans somehow pick out just relevant changes; but again a robot can only pick out what’s relevant by sorting through everything that might be relevant, which leads straight back to the same kind of problem with indefinitely large amounts of data.
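
The bookkeeping version of the problem is easy to make vivid in code. The toy below is my own illustration, not Dennett’s formulation: a world of boolean facts in which an action changes one fact, yet the naive reasoner must still inspect every fact afterwards to establish what changed and, just as laboriously, what stayed the same.

```python
# Toy frame problem: one action, one real change, yet a naive reasoner
# re-examines the entire world state after every action.

world = {f"fact_{i}": False for i in range(100_000)}

def apply_action(world, effects):
    """An action actually touches only a handful of facts."""
    world.update(effects)

def naive_review(world):
    """The naive reasoner inspects every fact after every action."""
    return sum(1 for _fact in world)

apply_action(world, {"fact_42": True})
print(naive_review(world))   # 100000 inspections for a single change - and
                             # deciding which facts *might* be relevant means
                             # entertaining subsets, which grow as 2**N
```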

I don’t think it’s a huge leap to see something in common between the two problems; I think we could say that they both arise from an underlying difficulty in dealing with relevance in the face of the buzzing complexity of reality. My own view is that humans get round this problem through recognition; roughly speaking, instead of looking at every object individually to determine whether it’s square, we throw everything into a sort of sieve with holes that only let square things drop through. But whether or not that’s right, and putting aside the question of how you would go about building such a faculty into a robot, I suggest that both understanding and obedience involve the ability to pick out a cogent, non-random option from an infinite range of possibilities. We could call this free will if we were so inclined, but let’s just call it a faculty of choice.
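
In code, the sieve amounts to paying the classification cost once, on the way in, so that picking out the square things never requires a scan. Here is a sketch under that reading; the shapes and keys are invented for illustration.

```python
# The 'sieve': objects are bucketed by a cheap signature as they arrive,
# so recognition is a lookup, not a search through every individual object.

from collections import defaultdict

def signature(shape):
    """The holes in the sieve: a key computed once per object."""
    width, height = shape
    return "square" if width == height else "oblong"

sieve = defaultdict(list)
for shape in [(2, 2), (3, 5), (4, 4), (1, 7)]:
    sieve[signature(shape)].append(shape)   # the sorting happens on entry

print(sieve["square"])   # [(2, 2), (4, 4)] - no per-object test at query time
```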

Now I think that faculty, which the robot is going to have to exercise in order to obey the Laws, would also unavoidably give it the ability to choose whether to obey them or not. To have the faculty of choice, it has to be able to range over an unlimited set of options, whereas constraining it to any given set of outcomes involves setting limits. I suppose we could put this in a more old-fashioned mentalistic kind of way by observing that obedience, properly understood, does not eliminate the individual will but on the contrary requires it to be exercised in the right way.

If that’s true (and I do realise that the above is hardly a tight knock-down argument), it would give Christians a neat explanation of why God could not have made us all good in the first place – though it would not help with the related problem of why we are exposed to widely varying levels of temptation and opportunity. To the rest of us it offers, if we want it, another possible compatibilist formulation of the nature of free will.

Picture: Etch-a-Sketch.

An interesting New Scientist piece recently reviewed research suggesting that chaos has an important part in the way the brain functions. More specifically, the suggestion is that the brain operates ‘on the edge of chaos’, in self-organised criticality; sometimes it runs in ways which are predictable at a macro level, more or less like a conventional machine, but at times it also goes into chaotic states. The behaviour of the system in these states is still fully deterministic in a wholly traditional, classical way, but depends so exquisitely on the fine detail of the starting state that it is in practice unpredictable. The analogy offered here is a growing pile of sand; you can’t tell exactly when it will suddenly go through a state shift – collapse – although over a long period the number of large and small collapses is amenable to statistical treatment (actually, I have to say I’ve never noticed piles of sand behaving in this interesting way, but that just shows what a poor observer I am).
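
The sand-pile analogy refers to the standard Bak–Tang–Wiesenfeld model, which is simple enough to run for yourself. The sketch below is a minimal version of my own: grains are added one at a time, any cell reaching four grains topples one grain onto each neighbour, and topplings cascade into avalanches, many tiny ones and the occasional enormous one.

```python
# A minimal Bak-Tang-Wiesenfeld sandpile; grid size, threshold and grain
# count are arbitrary choices of mine, not taken from the article.
import random

SIZE, THRESHOLD = 20, 4
grid = [[0] * SIZE for _ in range(SIZE)]

def add_grain(grid):
    """Drop one grain at random, relax the pile, return the avalanche size."""
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    grid[x][y] += 1
    avalanche, unstable = 0, [(x, y)]
    while unstable:
        i, j = unstable.pop()
        while grid[i][j] >= THRESHOLD:        # topple until this cell is stable
            grid[i][j] -= THRESHOLD
            avalanche += 1
            for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if 0 <= ni < SIZE and 0 <= nj < SIZE:   # off-grid grains are lost
                    grid[ni][nj] += 1
                    unstable.append((ni, nj))
    return avalanche

sizes = [add_grain(grid) for _ in range(20000)]
print(max(sizes), sum(s == 0 for s in sizes))
# typical output: a rare avalanche of hundreds of topplings, alongside
# thousands of grains that triggered no collapse at all
```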

The suggestion is that the occasional ‘avalanches’ of neuronal firing in the brain are useful, allowing the brain to enter new states more rapidly than it could otherwise do. Being on the edge of chaos allows “maximum transmission with minimum risk of descending into chaos”. The arrival of a neuronal avalanche is related to the sudden popping-up of an idea in the mind, or perhaps the unexpected recurrence of a random memory. There is also evidence that the duration of phase-shifts is related to IQ scores – perhaps in this case because the longer shift allows the recruitment of more neurons. The recruitment of additional neurons is presumed in such cases to be a good thing (I feel there must be some caveats about that), but there are also suggestions that excess time spent in phase-shifts could be a cause of schizophrenia (someone should set out a list somewhere of all the things that at one time or another have been put forward as causes of schizophrenia); while not enough phase-shifting in parts of the brain to do with social behaviour might have something to do with autism.

One claim not made in the article, but one which could well be made, is that all this might account for the sensation of free will. If the brain occasionally morphs through chaos into a new state, might it not be that the conclusions which emerge would seem to have come out of nowhere? We might be led to assume that these thoughts were freely generated, distinct from the normal predictable pattern. I think the temptation would be to frame such a theory as an explanation of the illusion of free will: why we feel as if some of our decisions are free even though, in the final analysis, determinism rules.

But I can also imagine that a compatibilist might claim that chaotic phase shifts really were freedom. A free act is one which is not predictable, such a person might argue; and by that we don’t mean unpredictable merely in practice – as things stand, none of us is able to look at a brain and predict the decisions it will make in any given circumstances. We mean not predictable even in principle; not predictable even if we had all the data plus unlimited time and computing power. Now are chaotic changes predictable in principle or not? They occur within normal physical rules, so in the ultimate sense they are clearly deterministic. But the difficulties are so great that to say that they’re only unpredictable in practice seems to stretch ‘practice’ a long way – we might easily need perfection of measurement to a degree which is never going to be obtainable under any imaginable real circumstances. Couldn’t we rather say, then, that we’re dealing with a third kind of unpredictability, neither quite unpredictability in mere practice nor quite unpredictability in principle, and take the view that decisions subject to this level of unpredictability deserve to be called free? I think we could, but ultimately I’m disinclined to do so, because in the final analysis that feels more like inventing a new concept of freedom than justifying the existing one.
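
The exquisite dependence at issue is easy to demonstrate with a textbook example. The logistic map below is only a stand-in for whatever dynamics the brain actually uses, but it shows in miniature what ‘deterministic yet practically unpredictable’ means: two starting states that agree to nine decimal places soon disagree completely.

```python
# The logistic map at r=4 is fully deterministic chaos: a difference of one
# part in a billion roughly doubles each step, so prediction would demand
# measurement beyond any realistic perfection.

def step(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.400000000, 0.400000001    # 'identical' to any real instrument
for _ in range(60):
    a, b = step(a), step(b)

print(abs(a - b))   # of order 1: the trajectories bear no resemblance
```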

There’s another issue here that affects a number of the speculations in the article. We must beware of assuming too easily that features of the underlying process necessarily correspond directly with phenomenal features of experience. So, for example, it’s assumed that when the brain goes quickly into a new state in terms of its neuronal firing, that would be like a new thought popping up suddenly in our conscious minds, an idea which seemed to have come out of nowhere. It ain’t necessarily so (though it would be an interesting question to test). The fact that the brain uses chaos to achieve its results does not mean that the same chaos is directly experienced in our thoughts, any more than I experience, say, that old 40Hz buzz starting up in my right parietal, or whatever. At the moment (not having read the actual research, of course) it seems equally likely that phase shifts are wholly outside conscious experience, perhaps, for example, being required in order to allow subordinate systems to catch up rapidly with a separate conscious process which they don’t directly influence. Or perhaps they’re just the vigorous shaking which clears our mental Etch-a-Sketch, correlated with, but not constitutive of, the sophisticated complication of our conscious doodlings.

Picture: devil dopamine.

Normally we try to avoid casting aspersions on the character of those who hold a particular opinion; we like to take it for granted that everyone in the debate is honest, dispassionate, and blameless. But a recent paper by Baumeister, Masicampo and DeWall (2009), described on PsyBlog, suggests that determinism (disbelief in free will) is associated with lower levels of helpfulness and higher levels of aggression. Another study, reported in Cognitive Daily, found that determinists are also cheats.

It’s possible to question the way these experiments were done. They involved putting deterministic thoughts into some of the subjects’ minds by, for example, reading them passages from the works of Francis Crick (who, besides being an incorrigible opponent of free will in philosophical terms, also, I suppose, opened the way for genetic determinism). That’s all very well, but it could be that habitual determinists, as it were, are better able to resist the morally corrosive effect of their beliefs than people who have recently been given a dose of persuasive determinism.

However, the results certainly chime with a well-established fear that our growing ability to explain human behaviour is tending to reduce our belief in responsibility, so that malefactors are able to escape punishment merely by quoting factors that influenced their behaviour: “I was powerless; the crime was caused by chemical changes in my brain.”

PsyBlog concludes that we must cling to belief in free will, which sounds perilously close to suggesting that we should pretend to believe in it even if we don’t. But leaving aside for a moment the empirical question of whether determinists are morally worse than those who believe in free will, why should they be?

The problem arises because the traditional view of moral responsibility requires that the evil act must be freely chosen in order for the moral taint to rub off on the agent. If no act is ever freely chosen, we may do bad things but we shall never ourselves be truly bad, so moral rules have no particular force. A few determinists, perhaps, would bite this bullet and agree that morality is a delusion, but I think most would not. It would be possible for determinists to deny the requirement for freedom and say instead that people are guilty of wrong-doing simply when connected causally or in other specified ways with evil acts, regardless of whether their behaviour is free or not. This restores the validity of moral judgements and justifies punishment, although it leaves us curiously helpless. This tragic view was actually current in earlier times: Oedipus considered himself worthy of punishment even though he had had no knowledge of the crimes he was committing, and St Augustine had to argue against those who contended that the rape suffered by Lucretia made her a sinful adulteress – something which was evidently still a live issue in 1748 when Richardson was writing Clarissa, where the same point is raised. Even currently in legal theory we have the notion of strict liability, whereby people may be punished for things they had no control over (if you sell poisonous food, you’re liable, even if it wasn’t you that caused it to be poisonous). This is, I think, a case of ancients and moderns reaching similar conclusions from almost antithetical understandings; in the ancient world you could be punished for things you couldn’t have prevented because moral taint was so strong; in the contemporary world you can be punished for things you couldn’t have prevented because moral taint is irrelevant and punishment is merely a matter of deterrence.

That is, of course, the second escape route open to determinists: it’s not about moral responsibility, it’s about deterrence, social sanctions, and inbuilt behavioural norms, which together are enough to keep us all on the straight and narrow. This line of argument opens up an opportunity for the compatibilists, who can say: you evidently believe that human beings have some special capacity to change their behaviour in response to exhortation or punishment – why don’t we just call that free will? More dangerously, it leaves the door open for the argument that those who believe their decisions have real moral consequence are likely to behave better than those who comply with social norms out of mere pragmatism and conditioning.

Meantime, to the rescue come De Brigard, Mandelbaum, and Ripley (pdf): as a matter of fact, they say, our experiments show that giving a neurological explanation for bad behaviour has no effect on people’s inclination to condemn it. It seems to follow that determinism makes no difference. They are responding to Nahmias, who put forward the interesting idea of bypassing: people are granted moral immunity if they are thought to suffer from some condition that bypasses their normal decision-making apparatus, but not if they are subject to problems which are thought to leave that apparatus in charge. In particular, Nahmias found that subjects tended to dismiss psychological excuses, but accept neurological ones. De Brigard, Mandelbaum and Ripley, by contrast, found it made no difference to their subjects’ reactions whether a mental condition such as anosognosia was said to be psychological or neurological; the tendency to assign blame was much the same in both cases. I’m not sure their tests did enough to make sure the distinction between neurological and psychological explanations was understood by the subjects; but their research does underline a secondary implication of the other papers: that most people are not consistent and can adopt different interpretations on different occasions (notably there were signs that subjects were more inclined to assign blame where the offence was more unpleasant, which is illogical but perhaps intuitively understandable).

I suspect that people’s real-life moral judgements are for the most part not much affected by the view they take on a philosophical level, and that modern scientific determinism has really only provided a new vocabulary for defence lawyers. A hundred or two hundred years ago, they might have reminded a jury of the powerful effect of Satan’s wiles on an innocent but redeemable mind;  now it may be the correctable impact of a surge of dopamine they prefer to put forward.