Ethical kill-bots

Picture: Bender.

Robot ethics have been attracting media attention again recently. Could autonomous kill-bots be made to behave ethically – perhaps even more ethically than human beings?

Can robots be ethical agents at all? The obvious answer is no, because they aren’t really agents; they don’t make any genuine decisions, they just follow the instructions they have been given. They really are ‘only following orders’, and unlike human beings, they have no capacity to make judgements about whether to obey the rules or not, and no moral responsibility for what they do.

On the other hand, the robots in question are autonomous to a degree. The current examples, so far as I know, are relatively simple, but it’s not impossible to imagine, at least, a robot which was bound by the interactions within its silicon only in the sort of way human beings are bound by the interactions within their neurons. After all, it’s still an open debate whether we ourselves, in the final analysis, make any decisions, or just act in obedience to the laws of biology and physics.

The autonomous kill-bots certainly raise qualms of a kind which seem possibly moral in nature. We may find land-mines morally repellent in some sense (and perhaps the abandonment of responsibility by the person who places them is part of that) but a robot which actually picks out its targets and aims a gun seems somehow worse (or does it?). I think part of the reason is the deeply embedded conviction that commission is worse than omission; that we are more to blame for killing someone than for failing to save their life. This feels morally right, but philosophically it’s hard to argue for a clear distinction. Doctors apparently feel that injecting a patient in a persistent vegetative state with poison would be wrong, but that it’s OK to fail to provide the nutrition which keeps the patient alive: rationally it’s hard to explain what the material difference might be.

Suppose we had a more moral kind of land mine. It only goes off if heavy pressure is applied, so that it is unlikely to blow up wandering children, only heavy military vehicles. If anything, that seems better than the ordinary kind of mine; yet an automated machine gun which seeks out military targets on its own initiative seems somehow worse than a manual one. Rules which restrain seem good, while rules which allow the robot to kill people it could not have killed otherwise seem bad; unfortunately, it may be difficult to make the distinction. A kill-bot which picks out its own targets may be the result of giving new cybernetic powers to a gun which would otherwise sit idle, or it may be the result of imposing some constraints on a bot which otherwise shoots everything that moves.

In practice the real challenge arises from the need to deal with messy reality. A kill-bot can easily be given a set of rules of engagement, required to run certain checks before firing, and made to observe appropriate restraint. It will follow the rules more rigorously than a human soldier, show no fear, and readily risk its own existence rather than breach the rules of engagement. In these respects, it may be true that the robot can exceed the human in propriety.
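As a purely illustrative sketch (nothing here is drawn from any real system; every name and threshold is invented for the example), the ‘checks before firing’ idea amounts to little more than this:

```python
# Illustrative only: a toy "rules of engagement" gate of the kind described
# above. Every field name and threshold here is hypothetical.

from dataclasses import dataclass


@dataclass
class Contact:
    identified_hostile: bool      # positive identification of a hostile target
    confidence: float             # sensor confidence in that identification, 0.0-1.0
    civilians_nearby: bool        # non-combatants detected in the danger area
    inside_engagement_zone: bool  # target lies within the authorised area


def may_engage(contact: Contact, weapons_free: bool) -> bool:
    """Return True only if every rule of engagement is satisfied."""
    checks = [
        weapons_free,                    # a human commander has authorised engagement
        contact.identified_hostile,      # rule: positive identification required
        contact.confidence >= 0.95,      # rule: high sensor confidence required
        not contact.civilians_nearby,    # rule: no non-combatants put at risk
        contact.inside_engagement_zone,  # rule: stay inside the authorised zone
    ]
    return all(checks)
```

Note that there is no self-preservation term anywhere in that list, which is exactly the sense in which the robot readily risks its own existence; note too that everything difficult has been hidden inside fields like identified_hostile, which brings us to the real problem.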

But practical morality comes in two parts; working out what principle to apply, and working out what the hell is going on. In the real world, even for human beings, the latter is the real problem more often than not. I may know perfectly clearly that it is my duty to go on defending a certain outpost until there ceases to be any military utility in doing so; but has that point been reached? Are the enemy so strong it is hopeless already? Will reinforcements arrive if I just hang on a bit longer? Have I done enough that my duty to myself now supersedes my duty to the unit? More fundamentally, is that the enemy, or a confused friendly unit, or partisans, or non-combatants? Are they trying to slip past, to retreat, to surrender, to annihilate me in particular? Am I here for a good reason, is my role important, or am I wasting my time, holding up an important manoeuvre, wasting ammunition? These questions are going to be much more difficult for the kill-bots to tackle.

Three things seem inevitable. There will be a growing number of people working on the highly interesting question of which algorithms produce, for any given set of computing and sensory limitations, the optimum ratio of dead enemies to saved innocents over a range of likely sets of circumstances. They will refer to the rules which emerge from their work as ethical, whether they really are or not. Finally, those algorithms will in turn condition our view of how human beings should behave in the same circumstances, and affect our real moral perceptions. That doesn’t sound too good, but again the issue is two-sided. Perhaps on some distant day the chief kill-bot, having absorbed and exhaustively considered the human commander’s instructions for an opportunistic war, will use the famous formula:

“I’m sorry, Dave. I’m afraid I can’t do that.”
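To make the first of those three predictions a little more concrete: the work in question would presumably amount to something like the toy evaluation harness below, which scores candidate firing policies over many simulated situations. Everything in it (the scenario generator, the policies, the penalty weights) is invented purely for illustration.

```python
# Illustrative only: a toy harness for comparing hypothetical engagement
# policies across simulated scenarios. Nothing here describes a real system.

import random


def random_scenario(rng):
    """One crude 'situation': a single contact that may or may not be hostile."""
    hostile = rng.random() < 0.5
    # Sensors are noisy: hostiles tend to look hostile, but not always.
    confidence = rng.uniform(0.6, 1.0) if hostile else rng.uniform(0.0, 0.7)
    return {
        "actually_hostile": hostile,
        "sensor_confidence": confidence,
        "civilians_nearby": rng.random() < 0.3,
    }


def score_policy(policy, trials=10_000, seed=0):
    """Reward hits on genuine hostiles; penalise mistakes far more heavily."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        scenario = random_scenario(rng)
        if policy(scenario):  # the policy chose to fire
            if scenario["actually_hostile"] and not scenario["civilians_nearby"]:
                total += 1.0   # legitimate target, no bystanders harmed
            else:
                total -= 10.0  # fired on a friend, or endangered bystanders
    return total / trials


# Two hypothetical policies with different thresholds of caution.
def trigger_happy(scenario):
    return scenario["sensor_confidence"] > 0.5


def cautious(scenario):
    return scenario["sensor_confidence"] > 0.9 and not scenario["civilians_nearby"]


for name, policy in [("trigger-happy", trigger_happy), ("cautious", cautious)]:
    print(name, round(score_policy(policy), 3))
```

The point is only that once a scoring function like this is written down, ‘ethical’ quietly becomes ‘whatever scores best on the simulator’, which is the slippage described above.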

Sorry November was a bit quiet, by the way. Most of my energy was going into my NaNoWriMo effort – successful, I’m glad to say. Peter

11 thoughts on “Ethical kill-bots”

  1. Maybe the gun-aiming robot is more ethical because it does not kill indiscriminately. In war (as they say), killing the enemy is the goal. So the gun-aiming robot is doing its job directly and efficiently — with reduced collateral damage. Of course, there’s the issue of the spread of technology. Whatever the soldiers do will one day become household goods and services.

    But I have a deeper question. Your approach here, Peter, seems to follow the GOFAI pattern of assuming that robots follow man-made rules. My claim is that a robot following only the rules that it had learned on its own, as we humans must necessarily do, would be a very different beast. Yes, as children, we are sometimes spoon fed with lists of “important” social rules. But the way we go about learning these, and whether or not to give them any importance in our lives, is very different from the way the imputed traditional AI-style robot follows rules by executing its program.

    Of course, if the rules, as they were being learned, were stored in the form of program code, one must suppose that such rules would have to be followed in much the same way as a program that was fed in by the programmers. One claim would be that the kinds of rules that could be learned by a functioning, mobile entity, interacting socially, would be like nothing any programmer would ever write. Another claim would be that, even if the set of rules turned out to be essentially the same as a programmer would write, the way those rules get interpreted by the entity would result in very different behavior. This hypothesis rests on the concept that all the time while an organism (or an appropriately programmed machine) is behaving according to some scheduled plan of instructions, whether such a plan was downloaded or learned online, the organism would also be continuously alert to various ongoing world events. This awareness could change the course of action of the organism in ways that no programmer could foresee.

    Can a robot really learn to manage in the world given that it must learn the rules on its own? So far, there is little evidence of this. But a project at Michigan State University (http://www.cse.msu.edu/%7Eweng/research/LM.html) seems to point in the right direction. Some of Rodney Brooks’s efforts at MIT might be said to be following such a trail, but his stated goals are not as broad and general as those of the MSU project. And, of course, learning is not the only thing we as humans do. We also inherit a very large and complex set of autonomic and emotional responses. These built-in factors obviously play large roles in determining our behavior. And even if we one day learn what all of these factors are and how they work, it is by no means given that we will put all or any of them in our robots.

    The question is whether such a machine, if such is at all possible, would seem to us humans as being able to decide things for itself. My suspicion is that it would seem to be behaving independently, it would think and say that it did have free will, and it would behave in all the fuzzy and sometimes incomprehensible ways that we humans do. At the very least, we will learn a great deal about ourselves in the process of creating such machines.

  2. Thanks, Lloyd. You’re right that I mainly assume here that robots are just straightforwardly running programs that someone has put together in advance. Although I think that’s more or less where we are with military robots at the moment, it does beg some questions.

    Interesting thoughts about robots learning their own rules. It does seem plausible to me that any entity with human-style agency and moral responsibility would have to have done that in some sense.

  3. I think the key phrase in your comment is “straightforwardly running programs that someone has put together in advance”. There are plenty of robots where this is not the case. Even in the typical robot hobby club it’s not uncommon to find subsumption-based robots exhibiting fairly surprising emergent behaviors. The iRobot company uses subsumption architectures in a number of their military robots (though I’m uncertain if any of those carry lethal weapons yet). It’s also very common to see even hobby level robots whose behavior is based on neural networks evolved from genetic algorithms or other evolutionary approaches.

    While these types of robots are still relatively simplistic compared to, say, the nervous system of an insect, they’re a long way from a robot running a BASIC program full of procedural instructions like “go straight”, “turn right”, etc. In the end, isn’t a human just running a complex program resulting from evolutionary algorithms? But we make ethical decisions, have moral responsibilities, and arguably, have free will.

    I agree it’s still premature to claim any current military robots are making ethical decisions. But these robots are reacting to sensory input in complex ways comparable to simple life forms. My estimation is that the most advanced robots are probably equivalent to the level of complexity found in a one-celled animal. Does an amoeba make ethical decisions? What about a reptile, a mammal, a primate?

    Out of curiosity, what definition of “agent” are we talking about that excludes non-biological entities such as robots? Roboticists use the word all the time; perhaps in a different context than philosophers?

  4. Steevithak: Many have argued, including C. S. Lewis, that morals are handed to us by some otherworldly force. In my universe, this is not possible. It is my belief that ethics and morals are both products of our biology and the environment in which we grow up, though I would argue that it takes a fairly high level of conceptual mechanism to support their development. This would certainly leave out the amoebas. As for free will, I am not surprisingly on the side of those who say it is not all it is cracked up to be. I am still working through Montague’s “Why Choose This Book?”, a most interesting treatise on the matter.

    You are certainly correct that as soon as a machine has any sort of sensory input, it is already beyond the realm of “straightforwardly running a program”.

  5. Perhaps it is not a question of ethics, as people deserve to die. I’m not saying everyone. Just those of us who turn off a toaster in front of a robot, and must be tried for the murder of said toaster.

  6. Pingback: The Philosophical Question of Ethical Killer Robots | robots.net
