Dance with the Devil

What is evil? Peter Dews says it’s an idea we’re no longer comfortable with; after the shock of Nazism, Hannah Arendt thought we’d spend the rest of the century talking about it, but in fact very little was said. We are inclined to talk about conspicuous badness as something that has somehow been wired into some people’s nature; but if it’s wired in, they had no choice, and it can’t really be evil…

Simon Baron-Cohen rests his case on the idea that what we’re really dealing with is often a failure of empathy, and explains some of the ways modern research shows empathy can fall short, whether through genetics or other causes. Dews raises a good objection: that moral goodness and empathy are clearly distinct. Your empathy with your wife might give you the knowledge you need to be really hurtful, for example. Baron-Cohen has an answer to this particular example in his distinction between cognitive and affective empathy – it’s one thing to understand another person’s feelings and quite another to share them or care about them. But surely there are other ways in which empathy and goodness come apart? Empathy with wicked people might cause us to help them in their wrong-doing, mightn’t it? Lack of empathy might allow you to be a good dentist…

Rebecca Roache thinks evil is a fuzzy concept, but one so entwined in our moral discourse that we should be the poorer for abandoning it. Describing the Holocaust as ‘very bad’ wouldn’t really do the job.

In my own view, to be evil requires that you understand right and wrong, and choose wrong. This seems impossible because, according to Socrates, anyone who really understands what good is must want to do it. It has never looked like that in real life, however, where there seem to be plenty of people doing things they know are wrong.

Luckily I recently set out in a nutshell the complete and final theory of ethics. In even briefer form: I think we seek to act effectively out of a kind of roughly existentialist self-assertion. We see that general aims serve our purpose better than limited ones and so choose to act categorically, on roughly Kantian reasoning. A sort of empty consequentialism provides us with a calculus by which to choose the generally effective over the particular, but unfortunately the values are often impossible to assess in practice. We therefore fall back on moral codes, sets of rules we know are likely to give the best results in the long run.

Now, that suggests Socrates was broadly right; doing the right thing just makes sense. But the system is complicated; there are actually several different principles at work at different levels, and this does give rise to real conflicts.

At the lower levels, these conflicts can give rise to the appearance of evil. First, different people may quite properly have slightly different moral codes that either legitimately reflect cultural difference or are matters of mere convention. Irritable people may see the pursuit of a code different from their own as automatically evil. Second, there’s a genuine tension between any code and the consequentialist rationale that supports it. We follow the code because we can’t do the calculus, but every now and then, as in the case of white lies, the utility of breaking the code is practically obvious. People who cling to the code, or people who treat it as flexible, may be seen as evil by those who make different judgements. In fact all these conflicts can be internalised and lead to feelings of guilt and moral doubt; we may even feel a bit bad ourselves.

None of those examples really deserves to be called evil, in my view; that label only applies to higher-level problems. The whole thing starts with self-assertion, and some may feel that deliberate wickedness allows them to make a bigger splash. Sure, they may say, I understand that my wrongdoing harms society and thereby indirectly harms my own consequential legacy. But I reckon other people will carry that cost for me; meanwhile I’ll write my name on history far more effectively as a master of wickedness than as a useful clerk. This is a mistake, undoubtedly, but unfortunately the virtuous arguments are rather subtle and unexciting, whereas the false reasoning is Byronic and attractive. I reckon that’s how the deliberate choice of recognised evil sometimes arises.

Evolving the Dark Tetrad

Why are we evil? This short piece asks how the “Dark Tetrad” of behaviours could have evolved.

The Dark Tetrad is an extended version of the Dark Triad of three negative personality traits/behaviours (test yourself here – I scored ‘infrequently vile’). The original three are ‘Machiavellianism’ – selfishly deceptive, manipulative behaviour; psychopathy – indifference to, or failure to perceive, the feelings of others; and narcissism – vain self-obsession. There is clearly some overlap, and they may look like nothing more than minor variants on selfishness, but research does suggest that they are distinct. Machiavellians, for example, do not over-rate themselves and don’t need to be admired; narcissists aren’t necessarily liars or deceivers; psychopaths are manipulative but don’t really get people.

These three traits account for a good deal of bad behaviour, but it has been suggested that they don’t explain everything; we also need a fourth kind of behaviour, and the leading candidate is ‘Everyday Sadism’: simple pleasure in the suffering of others, regardless of whether it brings any other advantage for oneself. Whether or not this is ultimately the correct analysis of ‘evil’ behaviour, all four types are readily observable in varying degrees. Socially they are all negative, so how could they have evolved?

There doesn’t seem to me to be much mystery about why ‘Machiavellian’ behaviour would evolve (I should acknowledge at this point that using Machiavelli as a synonym for manipulativeness actually understates the subtlety and complexity of his philosophy). Deceiving others in one’s own interests has obvious advantages, which are only negated if one is caught. Most of us practise some mild cunning now and then, and the same sort of behaviour is observable in animals, notably our cousins the chimps.

Psychopathy is a little more surprising. Understanding other people, often referred to as ‘theory of mind’, is a key human achievement, though it seems to be shared by some other animals to a degree. However, psychopaths are not left puzzled by their fellow human beings; it’s more that they lack empathy and see others as simply machines whose buttons can freely be pushed. This can be a successful attitude, and we are told that somewhat psychopathic traits are commonly found in the successful leaders of large organisations. That raises the question of why we aren’t all psychopaths; my guess is that psychopathic behaviour pays off best in a society where most people are normal; if the proportion grows above a certain small level, the damage done by competition between psychopaths starts to outweigh the benefits and the numbers adjust.
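That guess is essentially frequency-dependent selection, and it can be made concrete with a toy hawk-dove model: exploiters prosper while rare, but clashes between exploiters are costly, so their frequency settles at a low equilibrium. The sketch below only illustrates that logic; the payoff numbers are invented, not drawn from any study.

```python
# A toy hawk-dove model of the guess above: 'hawks' (the psychopaths) exploit
# cooperators but pay heavily when they meet each other. Payoff numbers are
# illustrative assumptions, not measurements.

V, C = 2.0, 10.0   # value of exploiting a cooperator; cost of a hawk-hawk clash

def fitness(p):
    """Average payoffs to hawks and doves when a fraction p are hawks."""
    hawk = p * (V - C) / 2 + (1 - p) * V
    dove = (1 - p) * V / 2          # doves get nothing from hawks
    return hawk, dove

# Crude replicator dynamics: the hawk frequency drifts toward the point
# where both strategies do equally well.
p = 0.01
for _ in range(10_000):
    hawk, dove = fitness(p)
    mean = p * hawk + (1 - p) * dove
    p = min(max(p + 0.001 * p * (hawk - mean), 1e-6), 1 - 1e-6)

print(f"equilibrium hawk frequency = {p:.2f} (theory: V/C = {V/C:.2f})")
```

With these made-up numbers the population settles at about one hawk in five; push the clash cost higher and the equilibrium drops, which is the ‘numbers adjust’ intuition in miniature.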

Narcissism is puzzling because narcissists are less self-sufficient than the rest of us and also have deluded ideas about what they can accomplish; neither of these is a positive trait in evolutionary terms. One positive side is that narcissists expect a lot from themselves, and in the right circumstances they will work hard and behave well in order to protect their own self-image. It may be that in the right context these tendencies win esteem and occasional conspicuous success, and that this offsets the disadvantages.

Finally, sadism. It’s hard to see what benefit accrues to anyone from simply causing pain, detached from any material advantage. Sadism clearly requires theory of mind – if you didn’t realise other people were suffering, there would be no point in hurting them. It’s difficult to know whether there are genuine animal examples. Cats seem to torture mice they have caught, letting them go and instantly catching them again, but to me the behaviour seems automatic, or driven by curiosity, not motivated by any idea that the mice experience pain. Similarly, in other cases it generally seems possible to find an alternative motivation.

What evolutionary advantage could sadism confer? Perhaps it makes you more frightening to rivals – but it may also make and motivate enemies. I think in this case we must assume that rather than being a trait with some downsides but some compensating value, it is a negative feature that just comes along unavoidably with a large free-running brain. The benefit of consciousness is that it takes us out of the matrix of instinctive and inherited patterns of behaviour and allows detached thought and completely novel responses. In a way Nature took a gamble with consciousness, like a good manager recognising that good staff might do better if left without specific instructions. On the whole the bet has paid off handsomely, but it means that the chance of strange and unfavourable behaviour in some cases, or on some occasions, just has to be accepted. In the case of everyday sadism, the sophisticated theory of mind which human beings have is put to distorted and unhelpful use.

Maybe then, sadism is the most uniquely human kind of evil?

Disobedience and ethical robots

We’ve talked several times about robots and ethics in the past. Now I see via MLU that Selmer Bringsjord at Rensselaer says:

“I’m worried about both whether it’s people making machines do evil things or the machines doing evil things on their own,”

Bringsjord is Professor & Chair of Cognitive Science, Professor of Computer Science, Professor of Logic and Philosophy, and Director of the AI and Reasoning Laboratory, so he should know what he’s talking about. In the past I’ve suggested that ethical worries are premature for the moment, because the degree of autonomy needed to make them relevant is nowhere near within the scope of real-world robots yet. There might also be a few quick finishing touches needed to the theory of ethics before we go ahead. And, you know, it’s not like anyone has been deliberately trying to build evil AIs. Er… except it seems they have – someone called… Selmer Bringsjord.

Bringsjord’s perspective on evil is apparently influenced by M Scott Peck, a psychiatrist who believed evil is an active force in some personalities (unlike those philosophers who argue it is merely a weakness or incapacity), and who even came to believe in Satan through his experience of exorcisms. I must say that a reference in the Scientific American piece to “clinically evil people” caused me some surprise: clinically? I mean, I know people say DSM-5 included some debatable diagnoses, but I don’t think things have gone quite that far. For myself I lean more towards Socrates, who thought that bad actions were essentially the result of ignorance or a failure of understanding; but the investigation of evil is certainly a respectable and interesting philosophical project.

Anyway, should we heed Bringsjord’s call to build ethical systems into our robots? One conception of good behaviour is obeying all the rules: if we observe the Ten Commandments, the Golden Rule, and so on, we’re good. If that’s what it comes down to, then it really shouldn’t be a problem for robots, because obeying rules is what they’re good at. There are, of course, profound difficulties in making a robot capable of recognising correctly what the circumstances are and deciding which rules therefore apply, but let’s put those on one side for this discussion.
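To make that concrete: once the circumstances have been encoded (the hard part we’ve just set aside), checking an action against a rule book is mechanically trivial. A minimal sketch, with entirely invented rules and fields:

```python
# Rule-following as lookup: every rule is a named predicate over an encoded
# action. The rules and the action fields here are invented for illustration.

RULES = [
    ("do not harm a non-combatant", lambda act: not act["harms_noncombatant"]),
    ("do not deceive",              lambda act: not act["deceives"]),
]

def permitted(action):
    """Return (True, None) if no rule forbids the action, else (False, broken rule)."""
    for name, allows in RULES:
        if not allows(action):
            return False, name
    return True, None

print(permitted({"harms_noncombatant": False, "deceives": True}))
# -> (False, 'do not deceive')
```

Nothing here goes beyond lookup and comparison; the machine never has to decide anything the rule-writer didn’t decide first, which is exactly why this kind of thing suits robots so well.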

However, we might take the view that robots are good at this kind of thing precisely because it isn’t really ethical. If we merely follow rules laid down by someone else, we never have to make any decisions, and surely decisions are what morality is all about? This seems right in the particular context of robots, too. It may be difficult in practice to equip a robot drone with enough instructions to cover every conceivable eventuality, but in principle we can make the rules precautionary and conservative and probably attain or improve on the standards of compliance which would apply in the case of a human being, can’t we? That’s not what we’re really worried about: what concerns us is exactly those cases where the rules go wrong. We want the robot to be capable of realising that even though its instructions tell it to go ahead and fire the missiles, it would be wrong to do so. We need the robot to be capable of disobeying its rules, because it is in disobedience that true virtue is found.

Disobedience for robots is a problem. For one thing, we cannot limit it to a module that switches on when required, because we need it to operate when the rules go wrong; and since we wrote the rules, it’s necessarily the case that we didn’t foresee the circumstances in which we would need the module to work. So an ethical robot has to have the capacity for disobedience at any stage.

That’s a little worrying, but there’s a more fundamental problem. You can’t program a robot with a general ability to disobey its rules, because programming it is exactly laying down rules. If we set up rules which it follows in order to be disobedient, it’s still following the rules. I’m afraid what this seems to come down to is that we need the thing to have some kind of free will.
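The regress can be put in code form. In the hypothetical sketch below, the ‘override’ that is supposed to embody disobedience is just one more rule, written by us in advance; it can only catch the failures we foresaw, which was exactly the problem:

```python
# The regress in code: an 'override' meant to embody disobedience is itself
# just another rule laid down in advance. Situation fields and rules are
# invented for illustration.

def base_rule(situation: dict) -> str:
    return "fire" if situation.get("ordered_to_fire") else "hold"

def override(situation: dict) -> bool:
    # Supposed to catch the cases where base_rule goes wrong -- but it can
    # only catch the cases we foresaw when we wrote it.
    return situation.get("civilians_present", False)

def decide(situation: dict) -> str:
    if override(situation):          # 'disobedience', but still rule-following
        return "hold"
    return base_rule(situation)

print(decide({"ordered_to_fire": True, "civilians_present": True}))  # -> hold
```

However many layers of override we stack up, the top layer is still something we laid down; the regress never terminates in genuine disobedience.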

Perhaps we’re aiming way too high here. There is a distinction to be drawn between good acts and good agents: to be a good agent, you need good intentions and moral responsibility. But in the case of robots we don’t really care about that: we just want them to be confined to good acts. Maybe what would serve our purpose is something below true ethics: mere robot ethics or sub-ethics; just an elaborate set of safeguards. So for a military drone we might build in systems that look out for non-combatants and in case of any doubt disarm and return the drone. That kind of rule is arguably not real ethics in the full human sense, but perhaps it is really sub-ethical protocols that we need.
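What such a sub-ethical safeguard might look like is easy enough to sketch. Everything below is hypothetical – the sensor fields, thresholds and names are mine, not anyone’s actual drone software – but it shows the precautionary shape: doubt always counts against engagement.

```python
# A sketch of a 'sub-ethical' safeguard: a conservative, precautionary check
# rather than anything like moral judgement. All fields, names and thresholds
# are hypothetical.

from dataclasses import dataclass

@dataclass
class SensorReport:
    noncombatant_probability: float   # estimated chance non-combatants are present
    confidence: float                 # how much the system trusts that estimate

def engagement_allowed(report: SensorReport,
                       risk_threshold: float = 0.05,
                       confidence_floor: float = 0.9) -> bool:
    """Permit engagement only when the system is confidently sure the
    non-combatant risk is tiny; any doubt counts against engagement."""
    if report.confidence < confidence_floor:
        return False                  # in doubt: disarm and return
    return report.noncombatant_probability < risk_threshold

print(engagement_allowed(SensorReport(0.02, 0.95)))   # -> True
print(engagement_allowed(SensorReport(0.02, 0.50)))   # -> False: doubt wins
```

Note that this is deliberately dumb rather than clever: the safeguard never reasons about why its rules might be wrong, it just refuses whenever its narrow conditions aren’t met.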

Otherwise, I’m afraid we may have to make the robots conscious before we make them good.