Do We Need Ethical AI?

Amanda Sharkey has produced a very good paper on robot ethics which reviews recent research and considers the right way forward – it’s admirably clear, readable and quite wide-ranging, with a lot of pithy reportage. I found it comforting in one way, as it shows that the arguments have had a rather better airing to date than I had realised.

To cut to the chase, Sharkey ultimately suggests that there are two main ways we could respond to the issue of ethical robots (using the word loosely to cover all kinds of broadly autonomous AI). We could keep on trying to make robots that perform well, so that they can safely be entrusted with moral decisions; or we could decide that robots should be kept away from ethical decisions. She favours the latter course.

What is the problem with robots making ethical decisions? One point is that they lack the ability to understand the very complex background to human situations. At present they are certainly nowhere near a human level of understanding, and it can reasonably be argued that the prospects of their attaining that level of comprehension in the foreseeable future don’t look that good. This is certainly a valid and important consideration when it comes to, say, military kill-bots, which may be required to decide whether a human being is hostile, belligerent, and dangerous. That’s not something even humans find easy in all circumstances. However, while absolutely valid and important, it’s not clear that this is a truly ethical concern; it may be better seen as a safety issue, and Sharkey suggests that that applies to the questions examined by a number of current research projects.

A second objection is that robots are not, and may never be, ethical agents, and so lack the basic competence to make moral decisions. We saw recently that even Daniel Dennett thinks this is an important point. Robots are not agents because they lack true autonomy or free will and do not genuinely have moral responsibility for their decisions.

I agree, of course, that current robots lack real agency, but I don’t think that matters in the way suggested. We need here the basic distinction between good people and good acts. To be a good person you need good motives and good intentions; but good acts are good acts even if performed with no particular desire to do good, or indeed if done from evil but confused motives. Now current robots, lacking any real intentions, cannot be good or bad people, and do not deserve moral praise or blame; but that doesn’t mean they cannot do good or bad things. We will inevitably use moral language in talking about this aspect of robot behaviour just as we talk about strategy and motives when analysing the play of a chess-bot. Computers have no idea that they are playing chess; they have no real desire to win or any of the psychology that humans bring to the contest; but it would be tediously pedantic to deny that they do ‘really’ play chess and equally absurd to bar any discussion of whether their behaviour is good or bad.

I do give full weight to the objection here that using humanistic terms for the bloodless robot equivalents may tend to corrupt our attitude to humans. If we treat machines inappropriately as human, we may end up treating humans inappropriately as machines. Arguably we can already see this in the arguments recently made against moral blame, usually framed as arguments against punishment, which sounds kindly enough; it seems clear to me, though, that they might also undermine human rights and dignity. I take comfort from the fact that no-one is making this mistake in the case of chess-bots; no-one thinks they should keep the prize money or be set free from the labs where they were created. But there’s undoubtedly a legitimate concern here.

That legitimate concern perhaps needs to be distinguished from a certain irrational repugnance which I think clearly attaches to the idea of robots deciding the fate of humans, or having any control over them. To me this very noticeable moral disgust which arises when we talk of robots deciding to kill humans, punish them, or even constrain them for their own good, is not at all rational, but very much a fact about human nature which needs to be remembered.

The point about robots not being moral persons is interesting in connection with another issue. Many current projects use extremely simple robots in very simple situations, and it can be argued that the very basic rule-following or harm prevention being examined is different in kind from real ethical issues. We’re handicapped here by the alarming background fact that there is no philosophical consensus about the basic nature of ethics. Clearly that’s too large a topic to deal with here, but I would argue that while we might disagree about the principles involved (I take a synthetic view myself, in which several basic principles work together), we can surely say that ethical judgements relate to very general considerations about acts. That’s not necessarily to claim that generality alone is definitive of ethical content (it’s much more complicated than that), but I do think it’s a distinguishing feature. That carries the optimistic implication that ethical reasoning, at least in terms of cognitive tractability, might not be different in kind from ordinary practical reasoning, and that as robots become more capable of dealing with complex tasks they might naturally tend to acquire more genuine moral competence to go with it. One plausible argument against this would be to point to agency as the key dividing line: ethical issues are qualitatively different because they require agency. It is probably evident from the foregoing that I think agency can be separated from the discussion for these purposes.

If robots are likely to acquire ethical competence as a natural by-product of increasing sophistication, then do we need to worry so much? Perhaps not, but the main reason for not worrying, in my eyes, is that truly ethical decisions are likely to be very rare anyway. The case of self-driving vehicles is often cited, but I think our expectations must have been tutored by all those tedious trolley problems; I’ve never encountered a situation in real life where a driver faced a clear-cut decision about saving a busload of nuns at the price of killing one fat man. If a driver follows the rule ‘try not to crash, and if crashing is unavoidable, try to minimise the impact’, I think almost all real cases will be adequately covered.
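
To make that concrete, here is a minimal sketch, in Python, of what such a rule hierarchy might look like; every name and number in it is invented for illustration, and it is obviously nothing like a real driving stack:

    # A toy illustration of the rule 'try not to crash, and if crashing is
    # unavoidable, try to minimise the impact'. All names and numbers are
    # invented; no real autonomous-driving system is this simple.

    def choose_manoeuvre(candidates):
        """Prefer any option that avoids a collision; otherwise pick the
        option with the lowest predicted impact severity."""
        collision_free = [m for m in candidates if m["collision"] is None]
        if collision_free:
            # Rule 1: a crash can be avoided, so take the least drastic option.
            return min(collision_free, key=lambda m: m["discomfort"])
        # Rule 2: a crash is unavoidable, so minimise the expected impact.
        return min(candidates, key=lambda m: m["collision"]["severity"])

    # Three hypothetical options a planner might generate.
    options = [
        {"name": "brake hard",  "collision": {"severity": 3.0}, "discomfort": 0.9},
        {"name": "swerve left", "collision": None,              "discomfort": 0.6},
        {"name": "continue",    "collision": {"severity": 9.0}, "discomfort": 0.1},
    ]
    print(choose_manoeuvre(options)["name"])   # prints: swerve left

The point is only that the rule itself is mechanical; the genuine difficulty lives in producing the predictions such a rule consumes, which is really the earlier point about understanding the background to human situations.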

A point to remember is that we actually do often make rules about this sort of thing, which a robot could follow without needing any ethical sense of its own, so long as its understanding of the general context was adequate. We don’t have explicit rules about how many fat men outweigh a coachload of nuns simply because we’ve never really needed them; if it happened every day we’d have debated it and made laws that people would have to know in order to pass their driving test. While there are no laws, even humans are in doubt and no-one can say definitively what the right choice is; so it makes little sense to worry that the robot’s choice in such circumstances would be the wrong one.

I do nevertheless have some sympathy with Sharkey’s reservations. I don’t think we should hold off from trying to create ethical robots though; we should go on, not because we want to use the resulting bots to make decisions, but because the research itself may illuminate ethical questions in ways that are interesting (a possibility Sharkey acknowledges). Since on my view we’re probably never really going to need robots with a real ethical sense, and on the other hand if we did, there’s a good chance they would naturally have developed the required competence, this looks to me like a case where we can have our cake and eat it (if that isn’t itself unethical).

10 thoughts on “Do We Need Ethical AI?”

  1. A lot depends here on what we mean by “ethical sense”, but I think it’s fair to say that ethics in an AI would be different from ethics in a human.

    Human instincts arose through evolution, and we have both selfish and pro-social ones. In modern societies, we’ve encoded rules of conduct, and flouting those rules has consequences (at least if we’re caught), which links pro-social outcomes to selfish desires, albeit ones we have to use some forethought to heed.

    In the case of a robot, its instincts are presumably going to be honed to do whatever we design it to do. In a sophisticated system, those instincts will end up conflicting with each other in particular situations, and the robot will need to assess which course of action maximally satisfies them (a toy sketch of that kind of arbitration follows at the end of this comment). How this might come out in an ethical situation could be unpredictable. So that’s a case where explicitly giving the system ethical instincts might be beneficial.

    However, ethics in a robot would not involve its knowing it might be punished for failing to follow the ethical rule. We don’t need to set up social structures to link its selfish desires to social outcomes, since it won’t have those selfish desires unless we put them there. Instead, it would involve ensuring that a strong instinct to do the ethical thing arises in the relevant situation.

    Which is to say, a robot flouting ethical rules likely wouldn’t be a thing since its ethics would manifest as strong instinctive desires. It might end up flouting a particular ethic if it had multiple in conflict, but to Peter’s point, we should acknowledge that those are edge cases that humans struggle with too.
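
    Just to make the arbitration idea concrete, here is a toy sketch in Python; the instincts, weights and actions are all invented for illustration and carry no claim about how a real system would be built:

        # Toy arbitration between conflicting 'instincts': each instinct scores
        # how well an action satisfies it, and the robot takes the action with
        # the highest weighted total. Everything here is invented.

        INSTINCTS = {
            "complete_task":  1.0,
            "avoid_harm":     5.0,   # an explicitly installed 'ethical' drive
            "conserve_power": 0.5,
        }

        def choose_action(actions):
            """Pick the action whose weighted instinct-satisfaction is highest."""
            return max(actions, key=lambda a: sum(
                weight * a["scores"].get(instinct, 0.0)
                for instinct, weight in INSTINCTS.items()))

        actions = [
            {"name": "push through the crowd",
             "scores": {"complete_task": 1.0, "avoid_harm": 0.2}},
            {"name": "wait for a gap",
             "scores": {"complete_task": 0.6, "avoid_harm": 1.0, "conserve_power": 0.8}},
        ]
        print(choose_action(actions)["name"])   # prints: wait for a gap

    The interesting design question is then just how heavily the ethical drive gets weighted relative to everything else.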

  2. The Global Workspace Place today, is our planet…
    …sometimes I feel the only thing valuable about AI is money…

    We have spent the last hundred years creating more money to build a sustainable Earth…
    …imagine, we would continue to do this, but in addition, to then eliminate the need for money someday…

    That our conscience might become our guide again…
    …experience one conscience quote or saying every day, for ever…

  3. I’m not so sanguine about getting ethics in robots along with any general reasoning ability. SAP has a good point about instincts, but I would follow Antonio Damasio (Descartes’ Error) and say emotions. Emotions are key to almost all our reasoning, definitely including ethics. But we don’t know how they work, on the level of detail needed to program a system to have similar performance. And the ethically important emotions have only partial overlap with the emotions we consult when playing chess (“this position feels strong”) or driving cars (“I’m confident I can make this turn”).

  4. Paul,
    I’m usually cautious with the word “emotion” in these discussions because, like “consciousness”, it’s too protean a term. Are we talking about the pre-feeling instinctive impulse (which is what I think Damasio-emotion refers to)? Or the feeling of that impulse? Or the feeling integrated into learned frameworks (the Lisa Feldman Barrett version)? Or the entire stack?

    I totally agree we have no idea how to design the overall emotional stack. The more basic layers seem similar to the ordinary programmatic directives we already know how to write, but relating them to complex sensory scenarios seems a whole different matter.

  5. I think the issue arises primarily because we are using the wrong dichotomy. Whether the entity is made in a factory, womb or wetware lab or is even entirely alien doesn’t really make any difference. A better dichotomy is sentient/non-sentient. The ethics of any sentient being should be essentially the same for any level of competency. (We don’t expect the family dog to have the same ethics as the owner.) The overall ethics structure would probably evolve once other forms of sentient beings actually exist.

    The responsibility for the ethics of non-sentient entities should lie entirely with their makers, as well as with users where they have the capability to alter the entity’s behaviour, because the entity itself has no means to understand and develop ethics independently of how it is built or programmed.

  6. “Robots are not agents because they lack true autonomy or free will”

    Many would say the same of humans. In fact, that’s the direction science points, despite the assertions of having-their-cake-and-eating-it-too compatibilists like Dennett.

  7. If we are able to make robots that can kill people cleverly, then surely we can also make them cleverly avoid killing people.

  8. Does philosophy have its own “fundamental interactions” today…
    …observation, to question, to self knowledge, to self observation, observation, to question…

    Are we here only to make a bottle opener… let’s also be here to make a “self” while making a bottle opener…

  9. I am skeptical when “true autonomy” comes into play. What does that mean? I am not sure it is even known whether and why humans make ethical decisions. Is true agency detectable, measurable? I believe the problem of agency and consciousness is very deep and important.

  10. The possibility of AI raises another, entirely different ethical question, concerning how we should treat apparently self-aware robots. If we ever end up creating such a thing, I would not be comfortable with treating them as if they have no inner life worth respecting simply because Chalmers finds zombies conceivable.
