Robot ethics

20 September 2004

A moral entity?
The protests of Isaac Asimov fans about
the recent film I, Robot don't seem to have had much
impact, I'm afraid. Asimov's original collection of short stories aimed to
provide an altogether more sophisticated and positive angle on robots, in
contrast to the science fiction cliché which has them rebelling against
human beings and attempting to take over the world. The film, by contrast,
apparently embodies this cliché. The screenplay was originally developed
from a story entirely unrelated to I, Robot: only at a late
stage were the title and a few other superficial elements from Asimov's
stories added to it.
As you probably know, Asimov's robots all had
three basic laws built into them:
- A robot may not injure a human being, or,
through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by
human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as
long as such protection does not conflict with the First or Second Law.
The interplay between these laws in a variety
of problematic situations generated the plots, which typically (in the
short stories at least) posed a problem whose solution provided the
punchline of the story.
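The strict precedence among the three laws can be sketched as an ordered check. This is my own illustration, not anything from Asimov: the `Action` fields and the `permitted` function are hypothetical stand-ins for the enormously complex judgements the essay goes on to discuss.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A toy candidate action, reduced to three hypothetical predicates."""
    harms_human: bool = False          # would the action injure a human?
    inaction_harms_human: bool = False # would *not* acting allow harm?
    self_destructive: bool = False     # would the action destroy the robot?

def permitted(action: Action, ordered_by_human: bool = False) -> bool:
    # First Law has absolute precedence: reject any action that harms a
    # human, or any inaction that allows harm, regardless of everything else.
    if action.harms_human or action.inaction_harms_human:
        return False
    # Second Law: once the First Law is satisfied, a human order must be
    # obeyed, even if the ordered action is self-destructive.
    if ordered_by_human:
        return True
    # Third Law: absent an order, the robot may not choose self-destruction.
    return not action.self_destructive
```

Note how the ordering does all the work: an order to harm a human is refused, but an order to self-destruct is obeyed, which is exactly the kind of interplay the stories mined for plots.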
I enjoyed the stories myself, but the laws do
raise a few problems. They obviously involve a very high level of
cognitive function, and it is rather difficult to imagine a robot clever
enough to understand the laws properly but not too sophisticated to be
rigorously bound by them: there is plenty of scope within them for
rationalising almost any behaviour ("Ah, the quality of life these humans
enjoy is pretty poor - I honestly think most of them would suffer less
harm if they died painlessly now.") It’s a little alarming that the laws
give the robot’s own judgement of what might be harmful precedence over
its duty to obey human beings. Any smokers would presumably have the
cigarette torn from their lips whenever robots were about. The intention
was clearly to emphasise the essentially benign and harmless nature of the
robots, but the effect is actually to offer would-be murderers an
opportunity (“Now, Robbie, Mr Smith needs this small lead slug injected
into his brain, but he’s a bit nervy about it. Would you…?”). In fairness,
these are not totally dissimilar to the problems Asimov’s stories dealt
with. And after all, reducing a race’s entire ethical code to three laws
is rather a challenge – even God allowed himself ten!
The wider question of robot ethics is a
large and only partially explored subject. We might well ask on what
terms, if any, robots enter the moral universe at all. There are two main
angles to this: are they moral subjects, and if not, are they nevertheless
moral objects? To be a moral subject is, if you like, to count as a person
for ethical purposes: as a subject you can have rights and duties and be
responsible for your actions. If there is such a thing as free will, you
would probably have that, too. It seems pretty clear that ordinary
machines, and unsophisticated robots which merely respond to remote
control, are not moral subjects because they are merely the tools of
whoever controls or uses them. This probably goes for hard-programmed
robots of the old school, too. If some person or team of persons has
programmed your every move, and carefully considered what action you
should output for each sensory input, then you really seem to be morally
equivalent to the remote-control robot: you’re just on a slightly longer
lead.
Isn't that a bit too sweeping? Although the
aim of every programmer is to make the program behave in a specified way,
there can’t be many programs of any complexity which did not at some stage
spring at least a small surprise on their creators. We need not be talking
about errors, either: it seems easy enough to imagine that a robot might
be equipped with a structure of routines and functions which were all
clearly understood on their own, but whose interaction with each other,
and with the environment, was unforeseen and perhaps even unforeseeable.
It’s arguable that human beings have offloaded a great deal of their
standard behaviour, and even memory, into the environment around them,
relying on the action-related properties or affordances of the objects
they encounter to prompt appropriate action. To a man with a hammer, as
the saying goes, everything looks like a nail: maybe when a robot
encounters a tool for the first time, it will develop behaviour which was
never covered explicitly in its programming.
But we don’t have to rely on that kind
of reasoning to make a case for the agency of robots, because we can also
build into them elements which are not directly programmed at all.
Connectionist approaches leave the robot brain to wire itself up in ways
which are not only unforeseen, but often incomprehensible to direct
examination. Such robots may need a carefully designed learning
environment to guide them in the right directions, but after all, so do we
in our early years. Alan Turing himself seems to have thought that
human-level intelligence might require a robot which began with the
capacities of a baby, and was gradually educated.
But does unpredictable behaviour by itself
imply moral responsibility? Lunatics behave in a highly unpredictable way,
and are generally judged not to be responsible for their actions on those
very grounds. Surely the robot has to show some qualities of rationality
to be accounted a moral subject?
Granted, but why shouldn’t it? All
that’s required is that its actions show a coherent pattern of
motivation.
Any pattern of behaviour can be interpreted as
motivated by some set of motives. What matters is whether the robot
understands what it’s doing and why. You’ve shown no real reason to think
it can.
And you’ve shown no reason to suppose
it can’t.
Once again we reach an impasse. Alright, well
let’s consider whether a robot could be a moral object. In a way this is
less demanding – most people would probably agree that animals are
generally moral objects without being moral subjects. They have no duties
or real responsibility for their actions, but they can suffer pain,
mistreatment and other moral wrongs, which is the essence of being a moral
object. The key point here is surely whether a robot really feels
anything, and on the face of it that seems very unlikely. If you equipped
a robot with a pain system, it would surely just be a system to make it
behave ‘as if’ it felt pain – no more effective in terms of real pain than
painting the word ‘ouch’ on a speech balloon.
Well, why do people feel pain? Because nerve
impulses impinge in a certain way on processes in the brain. Sensory
inputs from a robot’s body could impinge in just the same sort of way on
equivalent processes in their central computer – why not? You accept that
animals feel pain, not because you can prove it directly, but because
animals seem to work in the same way as human beings. Why can’t that logic
be applied to a robot with the right kinds of structure?
Because I know – from inside – that the pain I
feel is not just a functional aspect of certain processes. It actually
hurts! I’m willing to believe the same of animals that resemble me, but as
the resemblance gets more distant, I believe it less: and robots are very
distant indeed.
Well, look, the last thing I want is another
qualia argument. So let me challenge your original assumption. The key
point isn’t whether the robot feels anything. Suppose someone were to
destroy the Mona Lisa. Wouldn’t that be a morally dreadful act, even if
they were somehow legally entitled to do so? Or suppose they destroyed a
wonderful and irreplaceable book? How much more dreadful to destroy the
subtle mechanism and vast content of a human brain – or a similarly
complex robot?
So let me get this right. You’re now arguing
that paintings are moral objects?
Why not? Not in the same way or to the same
degree as a person, but somewhere, ultimately, on the same spectrum.
That’s so mad I don’t think it deserves, as
Jane Austen said, the compliment of rational opposition.