Robot ethics has been attracting media attention again recently. Could autonomous kill-bots be made to behave ethically – perhaps even more ethically than human beings?
Can robots be ethical agents at all? The obvious answer is no, because they aren’t really agents; they don’t make any genuine decisions, but simply follow the instructions they have been given. They really are ‘only following orders’, and unlike human beings, they have no capacity to make judgements about whether to obey the rules or not, and no moral responsibility for what they do.
On the other hand, the robots in question are autonomous to a degree. The current examples, so far as I know, are relatively simple, but it’s not impossible to imagine, at least, a robot which was bound by the interactions within its silicon only in the sort of way human beings are bound by the interactions within their neurons. After all, it’s still an open debate whether we ourselves, in the final analysis, make any decisions, or just act in obedience to the laws of biology and physics.
The autonomous kill-bots certainly raise qualms of a kind which seem possibly moral in nature. We may find land-mines morally repellent in some sense (and perhaps the abandonment of responsibility by the person who places them is part of that), but a robot which actually picks out its targets and aims a gun seems somehow worse (or does it?). I think part of the reason is the deeply embedded conviction that commission is worse than omission; that we are more to blame for killing someone than for failing to save their life. This feels morally right, but philosophically it’s hard to argue for a clear distinction. Doctors apparently feel that injecting a patient in a persistent vegetative state with poison would be wrong, but that it’s OK to fail to provide the nutrition which keeps the patient alive: rationally it’s hard to explain what the material difference might be.
Suppose we had a more moral kind of land mine. It only goes off if heavy pressure is applied, so that it is unlikely to blow up wandering children, only heavy military vehicles. If anything, that seems better than the ordinary kind of mine; yet an automated machine gun which seeks out military targets on its own initiative seems somehow worse than a manual one. Rules which restrain seem good, while rules which allow the robot to kill people it could not have killed otherwise seem bad; unfortunately, it may be difficult to make the distinction. A kill-bot which picks out its own targets may be the result of giving new cybernetic powers to a gun which would otherwise sit idle, or it may be the result of imposing some constraints on a bot which otherwise shoots everything that moves.
In practice the real challenge arises from the need to deal with messy reality. A kill-bot can easily be given a set of rules of engagement, required to run certain checks before firing, and made to observe appropriate restraint. It will follow the rules more rigorously than a human soldier, show no fear, and readily risk its own existence rather than breach the rules of engagement. In these respects, it may be true that the robot can exceed the human in propriety.
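A rule set of that kind is, in itself, trivially easy to encode. As a minimal sketch – every name, field and check below is invented for illustration, not drawn from any real system – the sort of gate the robot would apply with perfect, fearless consistency might look like this:

```python
# A toy rules-of-engagement gate. Every name, field and check here is
# invented for illustration; nothing corresponds to any real system.
from dataclasses import dataclass

@dataclass
class Contact:
    weapons_free: bool            # standing authorisation from a human commander
    identified_hostile: bool      # positive identification achieved?
    inside_engagement_zone: bool  # within the area where force is permitted
    civilians_nearby: bool        # proportionality / collateral concern

def clear_to_fire(c: Contact) -> bool:
    """Apply every check in turn; any single failure withholds fire."""
    return (c.weapons_free
            and c.identified_hostile
            and c.inside_engagement_zone
            and not c.civilians_nearby)

# Applied with perfect consistency, no fear, no fatigue:
print(clear_to_fire(Contact(True, True, True, False)))  # True: fire
print(clear_to_fire(Contact(True, True, True, True)))   # False: hold
```

The gate itself is the easy part; all the trouble hides in the tidy booleans it consumes.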
But practical morality comes in two parts; working out what principle to apply, and working out what the hell is going on. In the real world, even for human beings, the latter is the real problem more often than not. I may know perfectly clearly that it is my duty to go on defending a certain outpost until there ceases to be any military utility in doing so; but has that point been reached? Are the enemy so strong it is hopeless already? Will reinforcements arrive if I just hang on a bit longer? Have I done enough that my duty to myself now supersedes my duty to the unit? More fundamentally, is that the enemy, or a confused friendly unit, or partisans, or non-combatants? Are they trying to slip past, to retreat, to surrender, to annihilate me in particular? Am I here for a good reason, is my role important, or am I wasting my time, holding up an important manoeuvre, wasting ammunition? These questions are going to be much more difficult for the kill-bots to tackle.
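To put that point in code: the clean flags the gate above takes for granted have to be manufactured out of noisy evidence. An equally crude sketch, with invented cues and made-up numbers, of how a bot might arrive at its positive-identification flag:

```python
# A toy of the estimation problem: turning noisy cues into the clean
# `identified_hostile` flag the gate above simply assumes. The prior and
# all the likelihoods are made-up numbers, not real sensor statistics.

def update(prior: float, p_cue_if_hostile: float, p_cue_if_not: float) -> float:
    """One Bayesian update of P(hostile) on a single observed cue."""
    joint = p_cue_if_hostile * prior
    return joint / (joint + p_cue_if_not * (1 - prior))

p = 0.5                        # to start with, no idea at all
p = update(p, 0.7, 0.3)        # cue: moving in formation
p = update(p, 0.6, 0.4)        # cue: carrying something long
p = update(p, 0.5, 0.5)        # cue: silhouette, uninformative

identified_hostile = p > 0.95  # where to set this bar is itself a moral choice
print(round(p, 3), identified_hostile)  # 0.778 False: hold fire
```

Whether 0.95, or any other number, is the right bar for ‘positive identification’ is exactly the kind of question the bot cannot settle for itself.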
Three things seem inevitable. There will be a growing number of people working on the highly interesting question of which algorithms produce, for any given set of computing and sensory limitations, the optimum ratio of dead enemies to saved innocents over a range of likely sets of circumstances (the sketch after the quotation below caricatures the exercise). They will refer to the rules which emerge from their work as ethical, whether they really are or not. Finally, those algorithms will in turn condition our view of how human beings should behave in the same circumstances, and affect our real moral perceptions. That doesn’t sound too good, but again the issue is two-sided. Perhaps on some distant day the chief kill-bot, having absorbed and exhaustively considered the human commander’s instructions for an opportunistic war, will use the famous formula:
“I’m sorry, Dave. I’m afraid I can’t do that.”
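In the meantime, the tuning exercise predicted above might look, in caricature, like this: sweep a firing threshold across a batch of simulated encounters and keep whichever value scores best. The scenario statistics and the scoring weights are all invented numbers:

```python
# Caricature of the tuning exercise: sweep a firing threshold over
# simulated encounters and keep whichever value scores best. The scenario
# statistics and the -20 penalty per innocent are invented numbers.
import random

random.seed(1)

def mean_score(threshold: float, trials: int = 10_000) -> float:
    """Average of (hostiles engaged) minus 20 x (innocents hit)."""
    total = 0
    for _ in range(trials):
        hostile = random.random() < 0.3           # 30% of contacts hostile
        # Noisy sensor estimate of P(hostile) for this contact:
        estimate = (0.8 if hostile else 0.2) + random.gauss(0, 0.15)
        if estimate > threshold:                  # the bot opens fire
            total += 1 if hostile else -20
    return total / trials

best_score, best_threshold = max(
    (mean_score(t / 100), t / 100) for t in range(50, 100, 5)
)
print(f"best threshold {best_threshold:.2f}, mean score {best_score:.3f}")
```

Everything morally interesting in that toy hides in two places: the -20 per innocent, and the fiction that the estimates behave this well.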
Sorry November was a bit quiet, by the way. Most of my energy was going into my NaNoWriMo effort – successful, I’m glad to say. Peter