Back in November Human Rights Watch (HRW) published a report – Losing Humanity – which essentially called for a ban on killer robots, or more precisely on the development, production, and use of fully autonomous weapons, backing it up with a piece in the Washington Post. The argument was in essence that fully autonomous weapons are most probably not compatible with international conventions on responsible ethical military decision making, and that robots or machines lack (and perhaps always will lack) the qualities of emotional empathy and ethical judgement required to make decisions about human lives.

You might think that in certain respects that should be fairly uncontroversial. Even if you’re optimistic about the future potential of robotic autonomy, the precautionary principle should dictate that we move with the greatest of caution when it comes to handing over lethal weapons. However, the New Yorker followed up with a piece which linked HRW’s report with the emergence of driverless cars and argued that a ban was ‘wildly unrealistic’. Instead, it said, we simply need to make machines ethical.

I found this quite annoying; not so much the suggestion as the idea that we are anywhere near being in a position to endow machines with ethical awareness. In the first place actual autonomy for robots is still a remote prospect (which I suppose ought to be comforting in a way). Machines that don’t have a specified function and are left around to do whatever they decide is best are not remotely viable at the moment, nor desirable. We don’t let driverless cars argue with us about whether we should really go to the beach, and we don’t let military machines decide to give up fighting and go into the lumber business.

Nor, for that matter, do we have a clear and uncontroversial theory of ethics of the kind we would need in order to simulate ethical awareness. So the New Yorker is proposing we start building something when we don’t know how it works or even what it is with any clarity. The danger here, to my way of thinking, is that we might run up some simplistic gizmo and then convince ourselves we now have ethical machines, thereby bypassing the real dangers highlighted by HRW.

Funnily enough I agree with you that the proposal to endow machines with ethics is premature, but for completely different reasons. You think the project is impossible; I think it’s irrelevant. Robots don’t actually need the kind of ethics discussed here.

The New Yorker talks about cases where a driving robot might have to decide to sacrifice its own passengers to save a bus-load of orphans or something. That kind of thing never happens outside philosophers’ thought experiments. In the real world you never know that you’re inevitably going to kill either three bankers or twenty orphans – in every real driving situation you merely need to continue avoiding and minimising impact as much as you possibly can. The problems are practical, not ethical.

In the military sphere your intelligent missile robot isn’t morally any different to a simpler one. People talk about autonomous weapons as though they are inherently dangerous. OK, a robot drone can go wrong and kill the wrong people, but so can a ballistic missile. There’s never certainty about what you’re going to hit. A WWII bomber had to go by the probability that most of its bombs would hit a proper target, not a bus full of orphans (although of course in the later stages of WWII they were targeting civilians too).  Are the people who get killed by a conventional bomb that bounces the wrong way supposed to be comforted by the fact that they were killed by an accident rather than a mistaken decision? It’s about probabilities, and we can get the probabilities of error by autonomous robots down to very low levels.  In the long run intelligent autonomous weapons are going to be less likely to hit the wrong target than a missile simply lobbed in the general direction of the enemy.
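The probabilistic point can be made concrete with a toy calculation. The numbers below are invented purely for illustration, not real weapons data; the point is only that what matters is the expected rate of wrong-target hits, however the error arises:

```python
# Toy model: the expected number of wrong-target hits is what matters,
# whether the error comes from an accident or a mistaken decision.
# All per-shot error probabilities below are hypothetical.

def expected_wrong_hits(shots: int, p_error: float) -> float:
    """Expected number of shots that strike the wrong target."""
    return shots * p_error

unguided   = expected_wrong_hits(100, 0.30)  # lobbed in the general direction
guided     = expected_wrong_hits(100, 0.05)
autonomous = expected_wrong_hits(100, 0.01)

print(round(unguided), round(guided), round(autonomous))  # 30 5 1
```

On these (assumed) figures the autonomous system, despite occasionally "deciding" wrongly, kills far fewer of the wrong people than the weapon that never decides anything at all.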

Then we have HRW’s extraordinary claim that autonomous weapons are wrong because they lack emotions! They suggest that impulses of mercy and empathy, and unwillingness to shoot at one’s own people, sometimes intervene in human conflict, but could never do so if robots had the guns. This completely ignores the obvious fact that the emotions of hatred, fear, anger and greed are almost certainly what caused and sustain the conflict in the first place! Which soldier is more likely to behave ethically: one who is calm and rational, or one who is in the grip of strong emotions? Who will more probably observe the correct codes of military ethics, Mr Spock or a Viking berserker?

We know what war is good for (absolutely nothing). The costs of a war are always so high that a purely rational party would almost always choose not to fight. Even a bad bargain will nearly always be better than even a good war. We end up fighting for reasons that are emotional, and crucially because we know or fear that the enemy will react emotionally.

I think if you analyse the HRW statement enough it becomes clear that the real reason for wanting to ban autonomous weapons is simply fear; a sense that machines can’t be trusted. There are two facets to this. The first and more reasonable is a fear that when machines fail, disaster may follow. A human being may hit the odd wrong target, but it goes no further: a little bug in some program might cause a robot to go on an endless killing spree. This is basically a fear of brittleness in machine behaviour, and there is a small amount of justification for it. It is true that some relatively unsophisticated linear programs rely on the assumptions built into their program and when those slip out of synch with reality things may go disastrously and unrecoverably wrong. But that’s because they’re bad programs, not a necessary feature of all autonomous systems and it is only cause for due caution and appropriate design and testing standards, not a ban.
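The brittleness worry can be illustrated with a minimal sketch, entirely my own invention with hypothetical names: a controller that silently relies on a baked-in assumption, against one designed to check that assumption and fail safe when it no longer holds.

```python
# Toy illustration of "brittleness" (hypothetical names, not from the post):
# one controller bakes an assumption into its program and acts regardless;
# the other checks the assumption and refuses to act when it is violated.

ASSUMED_TARGET_ZONE = (0.0, 100.0)  # the program assumes targets only appear here

def brittle_engage(position: float) -> str:
    # Relies silently on the assumption; anything, anywhere, is treated
    # as a valid target once the assumption slips out of sync with reality.
    return "fire"

def cautious_engage(position: float) -> str:
    # Checks its own assumption and fails safe when it no longer holds.
    lo, hi = ASSUMED_TARGET_ZONE
    if not (lo <= position <= hi):
        return "hold"  # assumption violated: defer to a human
    return "fire"

print(brittle_engage(250.0))   # "fire" even though the assumption failed
print(cautious_engage(250.0))  # "hold"
```

The difference between the two is design and testing discipline, not anything intrinsic to autonomy, which is the point being made above.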

The second facet, I suggest, is really a kind of primitive repugnance for the idea of a human’s being killed by a lesser being; a secret sense that it is worse, somehow more grotesque, for twenty people to be killed by a thrashing robot than by a hysterical bank robber. Simply to describe this impulse is to show its absurdity.

It seems ethics are not important to robots because for you they’re not important to anyone. But I’m pleased you agree that robots are outside the moral sphere.

Oh no, I don’t say that. They don’t currently need the kind of utilitarian calculus the New Yorker is on about, but I think it’s inevitable that robots will eventually end up developing not one but two separate codes of ethics. Neither of these will come from some sudden top-down philosophical insight – typical of you to propose that we suspend everything until the philosophy has been sorted out in a few thousand years or so – they’ll be built up from rules of thumb and practical necessity.

First, there’ll be rules of best practice governing their interaction with humans. There may be some that have to do with safety and the avoidance of brittleness, and many, as Asimov foresaw, will essentially be about deferring to human beings. My guess is that they’ll be in large part about remaining comprehensible to humans; there may be a duty to report, to provide rationales in terms that human beings can understand, and there may be a convention that when robots and humans work together, robots do things the human way, not using procedures too complex for the humans to follow, for example.
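A speculative sketch of what such a “duty to report” convention might look like in code. The interface and names here are hypothetical, my own invention rather than anything proposed in the post:

```python
# Hypothetical sketch: every action is paired with a human-readable
# rationale and logged, and an explicit human instruction always wins.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Action:
    name: str
    rationale: str  # stated in terms a human can understand

@dataclass
class ReportingRobot:
    # Every decision is recorded with its rationale, so humans can audit it.
    log: List[Tuple[str, str]] = field(default_factory=list)

    def act(self, action: Action, human_override: Optional[str] = None) -> str:
        # Deference: an explicit human instruction takes precedence.
        chosen = human_override if human_override is not None else action.name
        self.log.append((chosen, action.rationale))
        return chosen

robot = ReportingRobot()
robot.act(Action("slow_down", "pedestrian detected ahead"))
robot.act(Action("turn_left", "route is shorter"), human_override="go_straight")
print(robot.log)  # both decisions recorded with human-readable rationales
```

The design choice worth noticing is that the rationale is mandatory: the robot cannot act without producing an explanation a human could follow.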

More interesting, when there’s a real community of autonomous robots they are bound to evolve an ethics of their own. This is going to develop in the same sort of way as human ethics, but the conditions are going to be radically different. Human ethics were always dominated by the struggle for food and reproduction and the avoidance of death: those things won’t matter as much in the robot system. But they will be happy dealing with very complex rules and a high level of game-theoretical understanding, whereas human beings have always tried to simplify things. They won’t really be able to teach us their ethics; we may be able to deal with it intellectually but we’ll never get it intuitively.

But for once, yes, I agree: we don’t need to worry about that yet.


  1. DiscoveredJoys says:

    Millions of (barely) autonomous robots are already in our homes. Yes, the dishwasher and washing machine follow a program set by a human, but my washing machine can vary the drum speed and duration of parts of the cycle to suit the load and its balance, and the dishwasher varies the rinse cycle according to the cleanliness of the water. Both heat the fill water to a set temperature.

    Admittedly they are square and don’t move around much. Neither could refuse to wash bloodstained knives or clothes (evidence of a crime). Still, it is a start – they are *designed* for a function and limited in what they can do. Perhaps that is the starting place for machine ethics?

  2. Lloyd Rice says:

    Presumably, if and when we do get around to programming emotional responses into robotic software, we will model such behavior after the human examples. That could be a serious mistake. According to a column by Marc Bekoff in today’s Boulder Daily Camera, animals, by and large, do not have the vicious and hateful elements in their natural behavior that humans so clearly display. Why would we want to make another species as violent as we ourselves are?

  3. Vicente says:

    What is the difference between those killing bots and a land-mine left somewhere near a poor village? The mine-bot (a lesser being?) has a simple binary algorithm: it detects a poor kid’s leg and it blows it away. Not to mention child soldiers – brainwashing children is the best way of creating the most sophisticated war robots (Lloyd, these ones do have “programmed” emotions).

    Lesser beings? Peter, many, many millions of human beings are no better than most living creatures; the only difference is that the former have the genetic-neural potential to improve themselves, while the latter don’t (you can’t teach a chimp to meditate).

    Lloyd, it is one thing to fake emotional behaviours or facial expressions, and another to feel emotions that trigger very-hard-to-control “reptilian” responses.

    Robots will just be another expression of humans: have good humans, and you will have good robots. But you need good humans to start the educational programmes that improve society, and the system, too often, promotes scum in disguise to those posts, so the process gets very slow…

    Are we worried about robots developing two codes of ethics? What about humans, who have developed 7 billion codes and counting?

    I find it funny that we worry about robots when we still have so much to worry about in humans. Interesting topic, but don’t let these press articles serve as a smokescreen, moving attention from the current (dangerous enough) scenario to a still-fictional future. Take care of your own thoughts and emotions and your neighbours’, and you’ll see how nice robots turn out to be.

  4. quen_tin says:

    Robots are not agents but tools designed by engineers to perform human goals. The question of their autonomy is only a matter of the sophistication of these goals (and with sophistication comes risk), but I don’t see why any specific ethics should apply there. Do people think we live in an SF movie or what?

  5. Callan S. says:

    It seems funny that we, who start wars, think we can make something else ‘ethical’.

    We can’t even make ourselves ethical!

    Let alone raise our robot children right.

    Talk about born to kill…

    What is the difference between those killing bots and a land-mine left somewhere near a poor village.
    That it was laid by a human. The mine-bot was laid by a mine-laying robot with the proposed blessings of humans, but without anything attached to humanity. We have joke sayings like ‘Computer says no!’. What about ‘Computer says die’?

  6. Paul Bello says:

    Hi Peter:
    V. interesting. You might be fascinated by the research grant solicitation found at the location given below. All the way on the very last page.


    Marcello Guarini and I were cited in the Human Rights Watch report. In large part, I have a feeling that the whole machine ethics discussion hinges on the false premise that there could ever be a static set of prescriptions for every morally-charged situation — of course, this is even if we get far enough to determine whether or not a situation can be conceived of as having an unequivocally moral character. One man’s moral dilemma is another man’s humdrum decision. This isn’t to say that there aren’t any moral universals or general principles — but I suspect they are few and far between, often defeasible and usually imperfectly utilized due to resource constraints on our cognitive systems. In any case, there are process-level hurdles for any kind of rich account of machine ethics that need to be overcome before we can declare any sort of victory — one of which clearly seems to be having a similarly detailed computational story about mental states. Exciting times — but no reason for anyone to start screaming Armageddon.

  7. Callan S. says:

    Why not armageddon? Does it become like the way our abattoirs are tidily tucked away, so we can eat meat without ironically seeing the kill upon kill that’d make us throw up?

    So here the robots head off into another part of the world to make abattoir of…well, not humans. Them. The other.

    And because we never think of ourselves as non-human, we never think of ourselves as ‘them’ or ‘other’, so we don’t even consider being on the sharp end of an autonomous killer.

    It’d be ironic if the auto-k’s mind were wont to ponder that itself, as it goes about its work.

  8. Peter says:


    Thanks – fascinating indeed! Some very interesting potential there.

  9. Infrateach says:

    Modern infrastructure relies entirely on automation and the use of robots. We are dependent on them because the planet is overpopulated.


