What’s wrong with Killer Robots?

There is a gathering campaign against the production and use of ‘killer robots’, weapons endowed with a degree of artificial intelligence and autonomy. The increasing use of drones by the USA in particular has produced a sense that this is no longer science fiction and that the issues need to be addressed before they are settled by default. The United Nations recently received a report proposing national moratoria, among other steps.

However, I don’t feel I’ve yet come across a really clear and comprehensive statement of why killer robots are a problem; and it’s not at all a simple matter. So I thought I’d try to set out a sketch of a full overview, and that’s what this piece aims to offer. In the process I’ve identified some potential safeguarding principles which, depending on your view of various matters, might be helpful or appropriate; these are collected together at the end.

I would be grateful for input on this – what have I missed? What have I got wrong?

In essence I think there are four broad reasons why hypothetically we might think it right to be wary of killer robots: first, because they work well; second, because in other ways they don’t work well; third, because they open up new scope for crime; and fourth, because they might be inherently unethical.

A. Because they work well.

A1. They do bad things. The first reason to dislike killer robots is simply that they are a new and effective kind of weapon. Weapons kill and destroy; killing and destruction are bad. The main counter-argument at an equally simple level is that in the hands of good people they will be used only for good purposes, and in particular to counter and frustrate the wicked use of other weapons. There is room for many views about who the good people may be, ranging from the view that anyone who wants such weapons disqualifies themselves from goodness automatically, to the view that no-one is morally bad and we’re all equally entitled to whatever weapons we want; but we can at any rate pick up our first general principle.

P1. Killer Robots should not be produced or used in a way that allows them to fall into the hands of people who will use them unethically.

A2. They make war worse. Different weapons have affected the character of warfare in different ways. The machine gun, perhaps, transformed the mobile engagements of the nineteenth century into the fixed slaughter of the First World War. The atomic bomb gave us the capacity to destroy entire cities at a time, potentially to kill everyone on Earth, and arguably made total war unthinkable for rational advanced powers. Could it be that killer robots transform war in a way which is bad? There are several possible claims.

A2 (i) They make warfare easier and more acceptable for the belligerent who has them. No soldier on your own side is put at risk when a robot is used, so missions which are unacceptable because of the risk to your own soldiers’ lives become viable. In addition robots may be able to reach places or conduct attacks which are beyond the physical capacity of a human soldier, even one with special equipment. Perhaps, too, the failure of a drone does not involve a loss of face and prestige in the same way as the defeat of a human soldier. If we accept the principle that some uses of weapons are good, then for those uses this kind of objection is inverted; the risk elimination and extra capability are simply further benefits for the good user. To restrict the use of killer robots for good purposes on these grounds might then be morally wrong. Even if we think the objection is sound it does not necessarily constitute grounds for giving up killer robots altogether; instead we can merely adopt the following restriction.

P2. Killer Robots should not be used for any mission which you would not be prepared to assign to a human soldier if a human soldier were capable of executing it.

A2 (ii) They tip the balance of advantage in war in favour of established powers and governments. Because advanced robots are technologically sophisticated and expensive, they are most easily or exclusively accessible to existing authorities and large organisations. This may make insurgency and rebellion more difficult, and that may have undemocratic consequences and strengthen the hold of tyrants. Tyrants normally have sufficient hardware to defeat rebellions in a direct confrontation anyway – it’s not easy to become a tyrant otherwise. Their more serious problems come from such factors as not being able or willing to kill in the massive numbers required to defeat a large-scale popular rebellion (because that would disrupt society and hence damage their own power), disloyalty among subordinates who have control of the hardware, or inability to deal with both internal and external enemies at the same time. Killer robots make no difference to the first of these factors; they would only affect the second if they made it possible for the tyrant to dispense with human subordinates, controlling the repression-bots directly from a central control panel or safely letting them get on with things at their own discretion – still a very remote contingency; and on the third they make only the same kind of difference as additional conventional arms, providing no particular reason to single out robots for restriction. However, robots might still be useful to a tyrant in less direct ways, notably by carrying out targeted surveillance and by taking out leading rebels individually in a way which could not be accomplished by human agents.

A2 (iii) They tip the balance of advantage in war against established powers and governments. Counter to the last point, or for a different set of robots, they may help anti-government forces. In a sense we already have autonomous weapons in the shape of land-mines and IEDs, which wait, detect an enemy (inaccurately) and fire. Hobbyists are already constructing their own drones, and it’s not at all hard to imagine that with some cheap or even recycled hardware it would be possible to make bombs that discriminate a little better; automatic guns that recognise the sound of enemy vehicles or the appearance of enemy soldiers, and aim and fire selectively; and crawling weapons that infiltrate restricted buildings intelligently before exploding. In A2 (ii) we thought about governments that were tyrannical, but clearly as well as justifiable rebels there are insurgents and plain terrorists whose causes and methods are wrong and wicked.

A2 (iv) They bring war further into the civilian sphere. As a consequence of some of the presumed properties of robots – their ability to navigate complex journeys unobtrusively and detect specific targets accurately – it may be that they are used in circumstances where any other approach would bring unacceptable risks of civilian casualties. However, they may still cause collateral deaths and injuries, and in general they diminish the ‘safe’ sphere of civilian life, which would generally be regarded as something to be protected, something whose erosion could have far-reaching effects on the morale and cohesion of society. Principle P2 would guard against this problem.

A3 They circumvent ethical restrictions. The objection here is not that robots are unethical, which is discussed below under D, but that their lack of ethics makes them more effective, because it enables them to do things which couldn’t be done otherwise, extending the scope and consequences of war, war being inherently bad as we remember.

A3 (i) Robots will do unethical things which could not otherwise be done. It doesn’t seem that robots can do anything that would otherwise have been left undone for ethical reasons: either they approximate to tools, in which case they are under the control of a human agent who would presumably have done the same things without a robot had they been physically possible; or, if the robots are sufficiently advanced to have real moral agency of their own, they correspond to a human agent and, subject to the discussion below, there is no reason to think they would be any better or worse than human beings.

A3 (ii) Using robots gives the belligerent an enabling sense of having ‘clean hands’. Being able to use robots may make it easier for belligerents who are evil but squeamish to separate themselves from their actions. This might be simply because of distance: you don’t have to watch the consequences of a drone attack up close; or it might be because of a genuine sense that the robot bears some of the responsibility. The greater the autonomy of the robot, the greater this latter sense is likely to be. If there are several stages in the process this sense of moral detachment might be increased: suppose we set out overall mission objectives to a strategic management computer, which then selects the resources and instructs the drones on particular tasks, leaving them with options on the final specifics and an ability to abort if required. In such circumstances it might be relatively easy to disregard the blood eventually sprayed across the landscape as a result of our instructions.

A3 (iii) Robots can easily be used for covert missions, freeing the belligerent from the fear of punishment for unethical behaviour. Robots facilitate secrecy both because they may be less detectable to the enemy and because they may require fewer humans to be briefed and aware, humans being naturally more inclined to leak information than a robot, especially a one-use robot.

B. Because they don’t work well.

B1 They are inherently likely to go wrong. In some obvious ways robots are more reliable than human agents; their capacities can be known much more exactly and they are very predictable. However, it can be claimed that they have weaknesses, and the most important is probably an inability to deal with open-ended real-life situations. AI has shown it can do well in restricted domains, but to date it has not performed well where clear parameters cannot be established in advance. This may change, but for the moment, while it’s quite conceivable that we might send a robot after a man who fitted a certain description, it would not be a good idea to send a robot to take out anyone ‘behaving like an insurgent’. This need not be an insuperable problem, because in many cases battlefield situations or the conditions of a particular mission may be predictable enough to render the use of a robot sufficiently safe; the risk may be no greater, and perhaps even less, than the risk inevitably involved in using unintelligent weapons. We can guard against this systematically by adopting another principle.

P3. Killer Robots should not be used for any mission in unpredictable circumstances or where the application of background understanding may be required.

B2 The consequences of their going wrong are especially serious. This is the Sorcerer’s Apprentice scenario: a robot is sent out on a limited mission but for some reason does not terminate the job, and continues to threaten and kill victims indefinitely; or it destroys far more than was intended. The answer here seems to be: don’t design a robot with excess killing capacity, and impose suitable limits and safeguards.

P4. Killer Robots should not be equipped with capacities that go beyond the immediate mission; they should be subject to built-in time limits and capable of being shut down remotely.
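
By way of illustration only, here is a minimal sketch of how the constraints in P4 might be expressed in software: a fixed mission envelope with a hard time limit and a limited engagement budget, plus a remote kill switch that every engagement decision must consult. The code is Python, all the names are invented for the purpose, and it is not drawn from any real system; it also glosses over everything genuinely hard, such as secure communications, verification and fail-safe behaviour.

```python
# Hypothetical sketch of the P4 safeguards: limited capacity, a built-in time
# limit, and a remote shut-down. Invented names; not based on any real system.

from dataclasses import dataclass
import time


@dataclass
class MissionEnvelope:
    """Hard limits fixed before launch and never widened in the field."""
    max_engagements: int      # capacity no greater than the immediate mission requires
    mission_deadline: float   # absolute time (epoch seconds) after which the robot stands down


class KillSwitch:
    """Stand-in for a remote shut-down channel; a real one would be a secured link."""
    def __init__(self):
        self._abort = False

    def trigger(self):
        self._abort = True

    def triggered(self):
        return self._abort


def may_engage(envelope, engagements_so_far, kill_switch, now=None):
    """Every engagement decision must pass all three checks; any failure means stand down."""
    now = time.time() if now is None else now
    if kill_switch.triggered():                         # remote shut-down always wins
        return False
    if now > envelope.mission_deadline:                 # built-in time limit has expired
        return False
    if engagements_so_far >= envelope.max_engagements:  # no excess killing capacity
        return False
    return True


if __name__ == "__main__":
    envelope = MissionEnvelope(max_engagements=1, mission_deadline=time.time() + 3600)
    switch = KillSwitch()
    print(may_engage(envelope, 0, switch))  # True: within limits, no abort
    switch.trigger()
    print(may_engage(envelope, 0, switch))  # False: remote shut-down triggered
```

The point is only structural: the limits are data fixed in advance of the mission, not judgements left to be made in the field.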

B3 They tempt belligerents into over-ambitious missions. It seems plausible enough that sometimes the capacities of killer robots might be over-estimated, but that’s a risk that applies to all weapons. I don’t see anything about robots in themselves that makes the error especially seductive.

B4 They lack understanding of human beings. So do bullets, of course; the danger here only arises if we are asking the robot to make very highly sophisticated judgements about human behaviour. Robots can be equipped with game-theoretic rules that allow them to perform well in defined tactical situations; otherwise the risk is covered by principle P3. That, at least, applies to current AI. It is conceivable that in future we may develop robots that actually possess ‘theory of mind’ in the sense required to understand human beings.

B5 They lack emotion. Lack of emotion can be a positive feature if the emotions are anger, spite and sadistic excitement. In general a good military agent follows rules, and, subject to P3, robots can do that very satisfactorily. There remains the possibility that robots will kill when a human soldier would refrain out of empathy and mercy. As a countervailing factor, the human soldier need not be doing the best thing, of course, and short-term immediate mercy might in some circumstances lead to more deaths overall. I believe that there are in practice few cases of soldiers failing to carry out a bad mission through feelings of sympathy and mercy, though I have no respectable evidence either way. I mentioned above the possibility that robots may eventually be endowed with theory of mind. Anything we conclude about this must be speculative to some degree, but there is a possibility that the acquisition of theory of mind requires empathy, and if so we could expect that robots capable of understanding human emotion would necessarily share it. So it need not be a permanent fact that robots lack emotion; while current AI simulation of emotion is not generally impressive or useful, that may not remain the case.

C. Because they create new scope for crime.

C1 They facilitate ‘rational’ crime. We tend to think first of military considerations, but it is surely the case that killer robots, whether specially designed or re-purposed from military use, could be turned to crime. The scope for use in murder, robbery and extortion is obvious.

C2 They facilitate irrational crime. A particularly unpleasant prospect is the possibility of autonomous weapons being used for irrational crime – mass murder, hate crime and so on. When computer viruses became possible, people created them even though there was no benefit involved; it cannot be expected that people will refrain from making murder-bots merely because it makes no sense.

D. Because they are inherently unethical.

D1 They lack ethical restraint. If ethics involves obeying a set of rules, then, subject to their being able to understand what is going on, robots should in that limited sense be ethical. If soldiers are required to apply utilitarian ethics, then robots at current levels of sophistication will be capable of applying the Benthamite calculus, but will have great difficulty identifying the values to be applied. Kantian ethics requires one to have a will and desires of one’s own, so pending the arrival of robots with human-level cognition they are presumably non-starters at it, as they would be at non-naturalistic or other ethical systems. But we’re not requiring robots to behave well, only to avoid behaving badly – I don’t think anything beyond obedience to a set of rules is generally required, because if the rules are drawn conservatively it should be possible to avoid the grey areas.
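
For what it is worth, the ‘conservative rules’ idea can be made concrete in a small sketch, again with invented names and no connection to any real system: every rule must be definitely satisfied before action is permitted, so anything uncertain, a grey area, defaults to refusal.

```python
# Illustrative sketch of 'obedience to a set of rules drawn conservatively':
# the gate refuses to act unless every rule explicitly permits the action,
# so grey areas default to refusal. Invented names; not a real system.

from typing import Callable, NamedTuple, Optional


class Situation(NamedTuple):
    target_positively_identified: Optional[bool]  # None means 'unknown', i.e. a grey area
    civilians_possibly_present: Optional[bool]
    within_authorised_zone: Optional[bool]


# Each rule returns True only when it is definitely satisfied.
Rule = Callable[[Situation], bool]

RULES = [
    lambda s: s.target_positively_identified is True,
    lambda s: s.civilians_possibly_present is False,
    lambda s: s.within_authorised_zone is True,
]


def permitted(situation):
    """Conservative gate: act only if every rule clearly holds; otherwise refuse."""
    return all(rule(situation) for rule in RULES)


if __name__ == "__main__":
    clear_case = Situation(True, False, True)
    grey_case = Situation(True, None, True)   # civilian presence unknown
    print(permitted(clear_case))  # True
    print(permitted(grey_case))   # False: unknowns are treated as refusals
```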

D2 They disperse or dispel moral responsibility. Under A3 (ii) I considered the possibility that using robots might give a belligerent a false sense of moral immunity; but what if robots really do confer moral immunity? Isaac Asimov gives an example of a robot subject to a law that it may not harm human beings, but without a duty to protect them. It could, he suggests, drop a large weight towards a human being: because it knows it has ample time to stop the weight, this in itself does not amount to harming the human. But once the weight is on its way, the robot has no duty to rescue the human being and can allow the weight to fall. I think it is a law of ethics that, except in cases of genuine accident, responsibility ends up somewhere: either with the designer, the programmer, the user, or, if it is sufficiently sophisticated, the robot itself. Asimov’s robot equivocates; dropping the weight is only not murder in the light of a definite intention to stop the weight, and changing its mind subsequently amounts to murder.

D3 They may become self-interested. The classic science fiction scenario is that robots develop interests of their own and Kill All Humans in order to protect them. This could only be conceivable in the still very remote contingency that robots were endowed with full human-level capacities for volition and responsibility. If that were the case, then robots would arguably have a right to their own interests in just the same way as any other sentient being; there’s no reason to think they would be any more homicidal in pursuing them than human beings themselves.

D4 They resemble slaves. The other side of the coin, then, is that if we had robots with full human mental capacity, they would amount to our slaves, and we know that slavery is wrong. That isn’t necessarily so: we could have killer robots that were citizens in good standing; however, all of that is still a very remote prospect. Of more immediate concern is the idea that having mechanical slaves would lead us to treat human beings more like machines. A commander who gets used to treating his drones as expendable might eventually begin to be more careless with human lives. Against this there is a contrary argument: that being relieved of the need to accept that war implies death for some of one’s own soldiers would mean such deaths in fact became even more shocking and less acceptable in future.

D5 It is inherently wrong for a human to be killed by a machine. Could it be that there is something inherently undignified or improper about being killed by a mechanical decision, as opposed to a simple explosion? People certainly speak of its being repugnant that a machine should have the power of life or death over a person. I fear there’s some equivocation going on here: either the robot is a simple machine, in which case no moral decision is being made and no power of life or death is being exercised; or the robot is a thinking being like us, in which case it seems like mere speciesism to express repugnance we wouldn’t feel for a human being in the same circumstances. I think nevertheless that even if we confine ourselves to non-thinking robots there may perhaps be a residual sense of disgust attached to killing by robot, but it does not seem to have a rational basis, and again the contrary argument can be made: that death by robot is ‘clean’. To be killed by a robot seems in one way to carry some sense of humiliation, but in another sense to be killed by a robot is to be killed by no-one, to suffer a mere accident.

What conclusion can we reach? There are at any rate some safeguards that could be put in place on a precautionary basis. Beyond that I think one’s verdict rests on whether the net benefits, barely touched on here, exceed the net risks; whether the genie is already out of the bottle, and if so whether it is allowable or even a duty to try to ensure that the good people maintain parity of fire-power with the bad.

P1. Killer Robots should not be produced or used in a way that allows them to fall into the hands of people who will use them unethically.

P2. Killer Robots should not be used for any mission which you would not be prepared to assign to a human soldier if a human soldier were capable of executing it.

P3. Killer Robots should not be used for any mission in unpredictable circumstances or where the application of background understanding may be required.

P4. Killer Robots should not be equipped with capacities that go beyond the immediate mission; they should be subject to built-in time limits and capable of being shut down remotely.

7 thoughts on “What’s wrong with Killer Robots?”

  1. I think there are other reasons why killer robots may be unethical… Existing simple killer robots such as mines and delayed activation cluster bomblets are considered unethical because, even though they are effective in times of war, after the war is over civilians continue to be maimed and killed by the remaining mines etc.

    Are killer robots that can be recalled or remotely deactivated more ethical?

  2. I think that the people building these things will put many safeguards in because the main risk with any system of this nature is that the enemy turns your advantage against you by subverting its inherent security.

    I can think of several basic rules to prevent a “Judgment Day” type scenario:
    1. Do not connect drones to a massive open network.
    2. Do not give the drones an interconnected network AI. If you must make them smart, link them to a central AI that can be shut down and *physically* disconnected via several remote overrides.
    3. Have manual overrides always be the primary input.
    4. Build drones specifically engineered to counteract the original drones you built in case of subversion or loss of control.

    Much more interesting to me is the question of the ethics that come up regarding AI rights.

    What guarantee is there that if you make these robotic soldiers smart enough, they do not themselves suffer? Surely it would be extremely immoral to mass produce machines that are made only to suffer and inflict suffering on other equivalent machines. Of course, I think the basic ethical questions surrounding AI or emulated brains have been inadequately addressed. I mean, has anyone involved with BlueBrain actually stopped to think about what they’re doing? I actually asked one of the engineers of SPAUN directly if he thought there were any associated mental phenomena with his semantic pointer approach, and he didn’t really seem to think the issue was important (his answer was ‘no’ and I tentatively agree).

  3. The piece seems to have a certain acceptance of war behind it. As if there’s a ‘fair’ way to wage war – as if those inclined to wage war care about the other side having a fair chance (instead of using their own economic might to crush the other side – no symmetrical chess board here. Full set for us, one pawn for them – that’s fine by these people).

    If I were to take the acceptance tack, I would stress the idea that power corrupts. Unfortunately in the piece it’s referred to as ‘evil’ – which has the exact opposite effect, because who raises their hand and accepts themselves as evil? No one. Thus any advice based on it will have the opposite effect, as people go ‘well, I don’t have to listen to that, because I’m good!’

    But I can’t really take on the acceptance tack – war is a monumental F’ up where peace amongst men should reign (I say ‘should’ in as much as, at least at a near-family level, even when relatives annoy us, we tend to maintain peace – why this should somehow not extend across seas and foreign lands, I do not know). At best you are talking to the people who lobby against war (who no doubt are littered amongst government agencies). In such a case I’d say to highlight how that power corrupts – how it allows them to treat other humans, even more so than sending troops to blow them apart, as even less than human than ever before. To highlight that when a drone kills, we couldn’t even be bothered to get that close to our relatives before we kill them. How we’re separating even further from them.

    The piece here seems to treat war not being run fairly as the problem. But any idea of ‘fair’ will simply fall apart sans any moral anchor point. Why is it not fair to send a robot to do a job that a man could have easily done? What’s the big issue that causes that to be unfair?

    Without an anchoring moral issue, it seems an arbitrary restriction. Heck, I’m guessing the next approach will be ‘robots take our jobs’ – because no one really has a clue as to why this is any kind of problem, so they’ll take the most unimportant (yet relatively visible) reason.

    To me it seems that war is a monumental F’ up in the brotherhood of men. This seems to have been forgotten – war is glory, war is self empowerment (cash) and somehow the other side is merely a…hurdle, rather than people, on the road to glory.

    The anchoring moral issue should be the brotherhood of men. Yeah, we’ve murdered each other for millennia as if the other side were orcs and trolls. There was room for naivety back in an age of little information. But what are people now who try and pretend the other side are orcs? Is anyone really going to believe it when a leader tries to treat the other side as orcs?

    I really don’t think you can play the ‘let’s make war fair’ angle – it sweeps the brotherhood-of-men moral anchor away, and without it what makes one thing any less fair than another? It just becomes game design – where any design is pretty much valid.

    I’m guessing the response will be ‘I’m not accepting war’. But the thing is, I’m just seeing these robots being described as the problem. There is no reference to a deeper problem – a deeper moral anchor, wronged. I could just be reading poorly and have missed the reference. But right now the focus seems to be just on the robots – and I think such a focus, in so far as it lacks any deeper moral anchor, will have zero effect. The original question seemed to be, roughly, how effective would this be in convincing / how comprehensive would it be. I’m merely pointing out an area I think it lacks in.

    As to robots, we ought to raise our iron children right. Except we’d first birth them as child soldiers? And wonder at how they would eventually treat their parents, in the end.

    Jorge, your #4 almost seems an afterthought, and yet isn’t it just how a Judgment Day comes about? Those machines have the lateral thought that taking over those machines is the best means of eventual destruction of the other machines. And somewhat like Master Mold from the X-Men comics declaring all humans are mutants (true enough), perhaps these machines will decide all humans are machines (true…enough?)

  4. This is a fascinating article. Essentially, killer robots are a weapon like any other, but what makes these weapons different is the potential complexity of their programming. This brings up all the fun AI discussions in a war setting. A point and a digression:

    On D3, I don’t think human-level capacities would be required for robots to malfunction and cause incredible catastrophe for humans. They would definitely need better AI than they have now, but once their commands reach a certain level of complexity, it’s possible that a coding error, and not even ‘will’ or ‘interest’, could cause them to turn on us. Though I would say Jorge’s suggestions on limiting network capabilities would probably minimize that threat.

    So, to get a little off topic, why do robots have to be ‘killer’? As robots, with no ‘lives’ on the line, that might make certain parts of war not kill-or-be-killed situations. Robots already perform non-killing tactical operations, like surveillance and blowing up IEDs. Essentially, what I’m getting at is: how about ‘capture robots’? They could use tranquilizers or knock-out gas or something like that.

    When you consider that the purpose of war is usually either a battle for resources or to stabilize or destabilize a political situation, it seems that capture capabilities would actually be more strategically sound. Live prisoners are a hundred times more valuable than corpses.

    Much of your essay would still apply to this. It would still put a lot of power in the hands of the already powerful. Though it might be less of an issue sending a nonlethal robot into a situation where we wouldn’t normally send a soldier. Either way, the idea is that it may be a way to reduce violence as well as serve a tactical purpose.

    -Charlie

  5. Forget about anthropomorphizing the robots, or any discussions of sentience or robot ethics; this issue should focus on the key word of *automation*. The biggest threat from weaponized robots and the greatest moral hazard is the automation of lethal force. Taking the moral actors out of the loop, or even just distancing them from the act, has dire consequences on how people behave and the decisions they make.

  6. It is interesting to see this published here. It is a well-written piece, but it should be made clear that there is a considerable literature on the topic that has been pouring out since 2007 in philosophy, engineering, AI, robotics and law journals, not to mention the press. I am glad to see it being discussed here nonetheless – thank you.

    First, the definition of ‘killer robot’, ‘lethal autonomous robot’ or ‘autonomous weapon’ (the terms are used interchangeably):

    “A weapon system that once activated can select and engage/attack/kill targets without further human supervision.”

    This is the definition that the DoD, MoD, humanitarian groups and the UN are using.

    The campaign to stop killer robots is only concerned with prohibiting the automation of the kill function – “selecting targets and attacking without human supervision”.

    The major arguments (in brief) against autonomous weapons to date (some of them implicit in the above as well):

    1. Autonomous weapons cannot comply with International Humanitarian Law (IHL): the principle of distinction and the principle of proportionality.

    2. Autonomous weapons cannot be held accountable for mishaps or war crimes, and they muddy the whole chain of accountability that is necessary to allocate blame.

    3. Autonomous weapons will destabilise world security by (i) lowering the threshold for conflict (so-called bloodless wars); (ii) they could trigger unintentional conflicts; (iii) the behaviour of competing algorithms is unpredictable; and (iv) there will be a new arms race, with large international corporations competing for business.

    4. Delegating the kill decision to a computer program crosses a fundamental moral line. Some argue that having a machine decide to kill you is the ultimate human indignity.

    On the other side – lawyers in the US argue that such systems will not be fielded until they can be shown to comply with IHL, and so, to keep ahead of the game, research and development should continue.

    They and (very few) roboticists argue that one day robots will be able to comply with IHL. This is promissory and speculative, and if true it may be a very, very long way away. In the meantime the developments will continue.

    The reason for a pre-emptive ban on autonomous weapons is a lack of trust in the world’s nation states. Certain hi-tech countries are asking us to believe that they will develop these weapons and not use them until they can comply with IHL.

    BUT 1. these are legacy systems, and new governments who inherit the weapons might have different views.

    2. the military necessity card can always be played in major conflicts: “This was the best weapon of choice and so we used it.”

    All of these arguments have been hotly debated and I am only giving a glimpse of them here. There is also a lot of discussion about falling into the wrong hands, etc. – authoritarian dictators – but no time to go through that as well.
    noel
