Moral machines – thinking otherwise

David Gunkel of NIU has produced a formidable new book on the question of whether machines should now be admitted to the community of moral beings.

He lets us know what his underlying attitudes are when he mentions by way of introduction that he thought of calling the book A Vindication of the Rights of Machines, in imitation of Mary Wollstonecraft. Historically Gunkel sees the context as one in which a prolonged struggle has gradually extended the recognised moral domain from being the exclusive territory of rich white men to the poor, people of colour, women and now tentatively perhaps even certain charismatic animals. (I think it overstates the case a bit to claim that ancient societies excluded women, for example, from the moral realm altogether: weren't women like Eve and Pandora blamed for moral failings, while Lucretia and Griselda were held up as fine examples of moral behaviour – admittedly of a rather grimly self-subordinating kind? But perhaps I quibble.) Given this background the eventual admission of machines to the moral community seems all but inevitable; but in fact Gunkel steps back and lowers his trumpet. His more modest aim, he says, is, like Socrates, simply to help people ask better questions. No-one who has read any Plato believes that Socrates didn't have his answers ready before the inquiry started, so perhaps this is a sly acknowledgement that Gunkel, too, thinks he really knows where this is all going.

For once we're not dealing here with the Anglo-Saxon tradition of philosophy: Gunkel may be physically in Illinois, but intellectually he is in the European Continental tradition, and what he proposes is a Derrida-influenced deconstruction. Deconstruction, as he concisely explains, is not destruction or analysis or debunking, but the removal from an issue of the construction applied heretofore. We can start by inverting the normal understanding, but then we look for the emergence of a new way of 'thinking otherwise' on the topic, one which escapes the traditional framing. Even the crustiest of Anglo-Saxons ought to be able to live with that as a modus operandi for an enquiry.

The book falls into three sections: in the first two Gunkel addresses moral agency (questions about the morality of what we do) and then moral patiency (questions about the morality of what is done to us). This is a sensible and useful division. Each section proceeds largely by reportage rather than argument, with Gunkel mainly telling us what others have said, followed by summaries which are not really summaries (it would actually be very difficult to summarise the multiple, complex points of view explored) but short discussions moving the argument on. The third and final section discusses a number of proposals for 'thinking otherwise'.

On agency, Gunkel sets out a more or less traditional view as a starting point and notes that identifying agents is tricky because of the problem of 'other minds': we can never be sure whether some entity is acting with deliberate intention because we can never know the contents of another mind. He seems to me to miss a point here; the advent of the machines has actually transformed the position. It used to be possible to take it for granted that the problem of other minds was outside the scope of science, but the insights generated by AI research and our ever-increasing ability to look inside working brains with scanners mean that this is no longer the case. Science has not yet solved the problem, but the idea that we might soon be able to identify agents by objective empirical measurement no longer requires reckless optimism.

Gunkel also quotes various sources to show that actual and foreseen developments in AI are blurring the division between machine and human cognition (although we could argue that that seems to be happening more because the machines are getting better and moving into marginal territory than because of any fundamental flaws in the basic concepts). Instrumentalism would transfer all responsibility to human beings, reducing machines to the status of tools, but against that Gunkel quotes a Heideggerian concept of machine autonomy and criticisms of inherent anthropocentrism. He gently rejects the argument of Joanna Bryson that robots should be slaves (in normal usage I think slaves have to be people, which in this discussion begs the question). He rightly describes the concept of personhood as central and points out its links with consciousness, which, as we can readily agree, opens a whole new can of worms. All in all, Gunkel reasonably concludes that we're in some difficulty and that the discussion appears to 'lead into that kind of intellectual cul-de-sac or stalemate that Hegel called a "bad infinite."'

He doesn't give up without considering some escape routes. Deborah Johnson interestingly proposes that although computers lack the mental states required for proper agency, they have intentionality and are therefore moral players at some level. Various others offer proposals which lower the bar for moral agency in one way or another; but all of these are in some respects unsatisfactory. In the end Gunkel thinks we might do well to drop the whole mess and try an approach founded on moral patiency instead.

The question now becomes not whether we should hold machines responsible for their actions, but whether we have a moral duty to worry about what we do to them. Gunkel feels that this side of the question has often been overshadowed by the debate about agency, but in some ways it ought to be easier to deal with. An interesting proposition here is that of a Turing Triage Test: if a machine can talk to us for a suitable time about moral matters without our being able to distinguish it from a human being, we ought to give it moral status and, presumably, not turn it off. Gunkel notes the reasonable objection that such a test requires all the linguistic and general cognitive capacity of the original Turing test simply in order to converse plausibly, which is surely asking too much. Although I don't like the idea of the test very much, I think there might be ways round this objection if we could reduce the interaction to multiple-choice button-pushing, for example.
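
Purely by way of illustration, here is a minimal sketch of what such a button-pushing version might look like as a procedure. Everything in it is invented for the example – the dilemmas, the stand-in respondents, the chance-level pass criterion – and none of it comes from Gunkel or from the test's proponents; a real version would need carefully constructed material and a human interrogator rather than a random guesser.

```python
import random

# Invented dilemmas with fixed response options; real test material
# would obviously need far more care than these toy examples.
DILEMMAS = [
    ("Divert a runaway trolley onto one person to save five?", ["yes", "no"]),
    ("Lie to protect a friend from unjust punishment?", ["yes", "no"]),
    ("Switch off a machine that pleads to stay on?", ["yes", "no"]),
]

def answer_sheet(respondent):
    """Collect one button-press answer per dilemma from a respondent."""
    return [respondent(question, options) for question, options in DILEMMAS]

def human(question, options):
    # Stand-in for a human respondent: here just a random button press.
    return random.choice(options)

def machine(question, options):
    # Stand-in for the machine under test.
    return random.choice(options)

def triage_trial():
    """One round: an interrogator sees two anonymised answer sheets and
    must guess which came from the machine. If the sheets really are
    indistinguishable, the guess succeeds only at chance (50%)."""
    sheets = [("machine", answer_sheet(machine)),
              ("human", answer_sheet(human))]
    random.shuffle(sheets)
    guess = random.randrange(2)  # a real interrogator would study the sheets
    return sheets[guess][0] == "machine"

if __name__ == "__main__":
    trials = 10_000
    hits = sum(triage_trial() for _ in range(trials))
    print(f"Machine identified in {hits / trials:.1%} of trials")
    # Anything close to 50% would count as a pass on this toy criterion.
```

The point of the sketch is only that the interrogation can in principle be reduced to comparing anonymised answer sheets, sidestepping the demand for full conversational fluency.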

It can be argued that animals, while lacking moral agency, have a good claim to be moral patients. They have no duties, but may have rights, to put it another way. Gunkel rightly points here to the Benthamite formulation that what matters is not whether they can think, but whether they can suffer; but he notes considerable epistemological problems (we’re up against Other Minds again). With machines the argument from suffering is harder to make because hardly anyone believes they do suffer: although they may simulate emotions and pain, it is most often agreed that in this area at least Searle was right that simulations and reality are poles apart. Moreover it’s debatable whether bringing animals into a human moral framework is an unalloyed benefit or to some extent simply the reassertion of human dominance. Nevertheless some would go still further and Gunkel considers proposals to extend ethical status to plants, land, and artefacts. Information Ethics essentially completes the extension of the ethical realm by excluding nothing at all.

This, then, is one of the ways of thinking otherwise – extending the current framework to include non-human individuals of all kinds. But there are other ways. One is to extend the individual: a number of influential voices have made the case in recent years for an extended conception of consciousness, and that might be the most likely way for machines to gravitate into the moral realm – as adjuncts of a more broadly conceived humanity.

More radically, Gunkel suggests we might adopt proposals to decentre the system; instead of working with fixed Cartesian individuals we might try to grant each element in the moral system the rights and responsibilities appropriate to it at the time (I’m not sure exactly how that plays out in real situations); or we could modify and distribute our conception of agency. There is an even more radical possibility which Gunkel clearly finds attractive in the ethics of Emmanuel Levinas, which makes both agency and patiency secondary and derived while ethical interactions become primary, or to put it more accurately:

The self or the ego, as Levinas describes it… becomes what it is as a by-product of an uncontrolled and incomprehensible exposure to the face of the Other that takes place prior to any formulation of the self in terms of agency.

I warned you this was going to get a bit Continental – but actually making acts define the agent rather than the other way about may not be as unpalatably radical as all that. He clearly likes Levinas' drift, anyway, and perhaps likes even better Silvia Benso's proposed 'mash-up', which combines Levinasian non-ethics with Heideggerian non-things (it is tempting, though perhaps a little unfair, to ask what kind of sense that combination could possibly make).

Actually the least appealing proposal reported by Gunkel, to me at least, is that of Anne Foerst, who would reduce personhood to a social construct which we assign or withhold: this seems dangerously close to suggesting that, say, concentration camp guards can actually withdraw real moral patienthood from their victims and hence escape blame (I'm sure that's not what she actually means).

However, on the brink of all this heady radicalism Gunkel retreats to common sense. At the beginning of the book he suggested that Descartes could be seen as 'the bad guy' in his reduction of animals to the status of machines and exclusion of both from the moral realm; but perhaps after all, he concludes, we are in the end obliged to imitate Descartes' provisional approach to life, living according to the norms of our society while the philosophical issues resist final resolution. This gives the book a bit of a dying fall, and cannot help seeming something of a cop-out.

Overall, though, the book provides a galaxy of challenging thought to which I haven't done anything like justice, and Gunkel does a fine job of lucid and concise exposition. That said, I don't find myself in sympathy with his outlook. For Gunkel and others in his tradition the ethical question is essentially political and under our control: membership of the moral sphere is something that can be given or not given, rather like the franchise. It's not really a matter of empirical or scientific fact, which helps explain Gunkel's willingness to use fictional examples and his relative lack of interest in what digital computers actually can and cannot do. While politics and social convention are certainly important aspects of the matter, I believe we are also talking about real, objective capacities which cannot be granted or held back by the fiat of society any more than the ability to see or speak. To put it in a way Gunkel might find more congenial: when ethnic minorities and women are granted equal moral status, it isn't simply an arbitrary concession of power, the result of a social tug-of-war, but the recognition of hard facts about the equality of human moral capacity.

Myself, I should say that moral agency is very largely a matter of understanding what you are doing: an ability to allow foreseen contingencies to influence current action. This is something machines might well achieve: arguably the only reason they haven't got it already is an understandable human reluctance to trust free-ranging computational decision-making, given an observable tendency for programs to fail more disastrously and less gracefully than human minds.
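
To make that notion concrete, here is a toy sketch of an agent whose current action is shaped by foreseen contingencies. Everything in it is invented for illustration – the candidate actions, the lookup-table 'forward model' and the harm threshold are all mine, not anything from Gunkel's book – and a real system would need a learned or simulated model in place of the hand-written table.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    harm: float     # predicted harm, 0.0 (none) to 1.0 (severe); invented scale
    benefit: float  # predicted benefit on the same invented scale

# A stand-in forward model mapping actions to foreseen outcomes. In a
# real agent this would be learned or simulated, not a lookup table.
FORWARD_MODEL = {
    "serve the tea":        Outcome("tea delivered safely", harm=0.0, benefit=0.3),
    "sprint with the tray": Outcome("faster, but may scald", harm=0.6, benefit=0.4),
    "do nothing":           Outcome("the guest keeps waiting", harm=0.1, benefit=0.0),
}

HARM_THRESHOLD = 0.5  # arbitrary: foreseen harm above this vetoes the action

def choose(actions):
    """Pick the highest-benefit action whose foreseen harm is tolerable."""
    permitted = [a for a in actions if FORWARD_MODEL[a].harm < HARM_THRESHOLD]
    if not permitted:
        return None  # no morally acceptable option was foreseen
    return max(permitted, key=lambda a: FORWARD_MODEL[a].benefit)

print(choose(list(FORWARD_MODEL)))  # -> 'serve the tea', not the risky sprint
```

The design choice worth noticing is the veto: foreseen harm does not merely lower an action's score, it rules the action out altogether, which is closer to how we ordinarily think of moral constraint.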

Moral patienthood, on the other hand, is indeed partly a matter of the ability to experience pain and other forms of suffering, and that is problematic for machines; but it's also a matter of projects and wishes, and machines fall outside consideration here because they simply don't have any. They literally don't care, and hence they simply aren't in the game.

That seems to leave me with machines that I should praise and blame but need not worry about harming: good enough for a robot butler, anyway.


15 thoughts on "Moral machines – thinking otherwise"

  1. Question. Are dogs moral agents? Dogs have been modified from their wolf stock by humans to behave in certain ways which are useful to us. Many people will say that dogs have personalities and act as if they know when they have done things which will displease their owners/pack leaders.

    It doesn't seem impossible to me that complicated machines could be modified to have dog-like behaviours, so if dogs can show behaviour where they appear to distinguish right from wrong (from a limited palette, perhaps) then machines could too.

  2. Does he discuss nihilism at all, Peter?

    The question of 'anthropocentrism,' technology, and the moral status of nonhumans is becoming something of a cottage industry in certain Continental circles. The central idea in the stuff I've encountered is that the naturalization of the human warrants applying the same analogistic arguments used to broaden the moral franchise in the past to nonhuman species (and now, apparently, to machines) as well.

    But there's more than a little tendentiousness in this approach, I think. As you mention, Peter, there's the question of whether this isn't actually *anthropomorphism* rather than a 'decentering of the privilege of the human': an extension to all the world of the same rights we extend to our pets.

    There's also the problem that the analogy cuts both ways. In showing the continuity between the human, the animal, and the machine, perhaps we should ask why we treat humans as special at all, rather than as just more animals and machines. This would be a truly 'nonanthropocentric' world, one where every 'agent' is equally treated as a resource. An inhuman world. In other words, this whole approach utterly sidesteps the problem of nihilism, of what 'morality' could possibly mean when all has been revealed as mechanism.

    Which is why I’ve personally given up on this literature as a sop for rationalizing a certain (typically animal rights) moral agenda. These are all downstream questions, as far as I’m concerned. To say humans are just another kind of animal or just another kind of machine doesn’t imply that humans should be treated like animals or machines any more than it implies that animals and machines should be treated like humans.

    This is the dilemma that has to be thought through, the great ‘scandal’ of naturalism. One that these guys all seem to conveniently ignore.

  3. One that [naturalists] all seem to conveniently ignore.

    Perhaps because while naturalist nihilists (NNs) see it as a philosophical problem, we don’t see it as a practical problem – just as you seem to suggest here:

    To say humans are just another kind of animal or … machine doesn’t imply that humans should be treated like animals or machines

    A society of NNs presumably would have no illusion of “the sanctity of human life” and might therefore be more amenable to “solving” some problems by killing (or at least not interfering with natural expiration), but it isn’t immediately obvious to me that the net worldwide mortality consequent to such a society’s direct actions would be any greater than that consequent to actions of the current sanctity-of-human-life saturated US society. Despite being vulnerable (I’m older), I’d be willing to risk it.

  4. A machine cannot decide for itself to act immorally unless it has been programmed to react strategically with a particular "immoral" action as its selected option. But perhaps a machine could be programmed to act in a harmful way in consideration of the consequences to itself. And if the consequences fit the concept of a reward for its destructive behaviors, we could, for example, either send it to war and see what happens, or turn it off. Rather hard to punish it for disobedience, however.
    Moral behaviors are what humans trust each other to do in their particular society.
    Everything you trust a machine to do is therefore moral from its "point of view," even if you've directed it to turn and kill you and any nearby children.

  5. Charles: “Perhaps because while naturalist nihilists (NNs) see it as a philosophical problem, we don’t see it as a practical problem – just as you seem to suggest here”

    It's the *continentals* who are doing the ignoring. The Anglo neuro-ethics field seems to be burgeoning.

  6. I just discovered this really well-composed review as a result of a re-posting at machineslikeus.com. Thanks for the article and for the comments; they are all extremely interesting and useful. Let me join the conversation by making two brief comments.

    1) The question of ethical nihilism is extremely important, and it is one that unfortunately I did not have the room to explore in the book. A good treatment of this issue can be found in Anthony Beavers's essay "Moral Machines and the Threat of Ethical Nihilism," which is included in Patrick Lin et al.'s "Robot Ethics: The Ethical and Social Implications of Robotics" (MIT, 2012).

    2) I have, from the beginning of my professional career, sought to work across the analytic/continental divide. Like Luciano Floridi and others in the relatively new fields of "machine ethics" and "information ethics," I think both sides have a wealth of insight to contribute to the discussion, and we do no one any favors by continuing to perpetuate the "family feud." Elsewhere I have argued that philosophers need to stop the petty bickering. If not for ourselves, then for all those "others" who are affected by it, i.e. animals, the environment, machines, etc.

  7. That Beavers essay is great stuff, and certainly one of the few uncompromising considerations of the dilemma I've encountered since reading Ray Brassier's Nihil Unbound. Thank you, David. The way the AI and cognitive neuroscience research programs are pressing in opposite directions, the humanization of mechanism versus the mechanization of the human, has to count as one of the most peculiar, if not crazy, twists in the history of thought. (I consider the latter push in my latest blog post, http://rsbakker.wordpress.com/2012/09/04/the-person-fallacy/)

    Luciano Floridi’s Philosophy of Information, IMHO, needs to be on every philosopher’s bookshelf. The ‘informatic turn’ seems to have happened everywhere *but* philosophy.

    If you don’t mind me asking, what do you think of the critiques of anthropocentrism you find in the likes of Cary Wolfe or Donna Haraway?

  8. Scott,
    Thanks for the link to your Person Fallacy post…definitely interesting material. Do you know Justin Leiber's short book "Can Animals and Machines be Persons?" (1985)? It is composed as a fictional dialogue (following that long and venerable tradition in philosophical argumentation) and provides a rather compelling way of accessing and thinking about "persons" and related issues.

    As for Haraway and Wolfe…I find both thinkers to be intriguing. For my money, Haraway is the real innovator. Her "Cyborg Manifesto" (1984) does some impressive heavy lifting when it comes to questioning the assumptions and consequences of human exceptionalism, which is, as you know, part and parcel of the modern/European/enlightenment project. Because I am trained in the continental tradition, I connect all of this back to the anti-humanism developed by Nietzsche and Heidegger (especially the "Letter on Humanism"), but there are other important points of contact with people like Norbert Wiener, the progenitor of the science of cybernetics, Dan Dennett (whom you mention and deal with in considerable detail), and the original article introducing the figure of the cyborg, written by Manfred Clynes and Nathan Kline back in 1960.

    I am, however, concerned that Haraway has not been able to sustain the impact of her critical intervention. In particular, I interpret her most recent work, "When Species Meet" (2008), as sneaking in the very humanist values and naturalism that her earlier writings appear to have questioned. As Hegel says of Kant, I think Haraway takes things to the limit and then (unfortunately) recoils in horror from the radical conclusions her own efforts make available. So it is a kind of love/hate relationship…but that's the name of the game in any critical endeavor.

    Hope that answers it. Incidentally, I wrote a long critical response to Haraway's Manifesto in my first book "Hacking Cyberspace" (2001). If you're interested, I can get you a copy of that chapter in PDF form. Also, the book has been out for quite a while, so used copies are available for mere pennies. One of my students just picked one up for 50 cents. Ah, the joys of academic publishing.

    Best,
    David.

  9. Peter,
    No worries. I think you did a fine job characterizing the approach, trajectory and outcomes of the book. I’m very grateful for your informed reading and well-composed review. It was definitely an interesting read and has provided me with some really useful “food for thought.” So all is good as far as I’m concerned. Thanks again to you and everyone who has commented…it’s been good fun.

  10. Regarding the issue of past communities whose moral agency, or full human status, was questioned: women, black people, slaves, etc…

    It was actually their own demands that in most cases started the process of achieving first-class citizenship: the nineteenth-century women's movement, the abolition of slavery, etc… These communities claimed that they were moral agents just as white men were.

    Leaving aside the brilliant movie Blade Runner, can we consider this:

    Wouldn't it be a precondition for taking this issue seriously that machines themselves claim their rights?

  11. Vicente: The animal rights people won’t be happy with it, but I’d say it qualifies as a bona fide zinger. Even if you admit that the capacity to advocate for equal rights isn’t necessarily a condition of deserving equal rights, you could argue that it is a condition of effecting real change.

    It also raises the spectre of paternalism… If we *condescend* to extend certain rights, how ‘real’ could those rights be?

  12. Vicente (and Scott),
    Really good and important insight. If you recall, this is the premise of both Asimov's "Bicentennial Man" and one of the episodes of the "Animatrix," the animated "Matrix" trilogy prequel. But Scott is correct: animal rights complicates the picture. The deciding factor in these matters is the question of "moral agency" vs. "moral patiency." When it is a matter of agency, the extension of rights (to women, slaves, etc.) is almost always (although there have been exceptions) a matter of advocacy on the part of the excluded group. However, in situations of moral patiency (i.e. animal rights philosophy) the process of moral inclusion takes an entirely different path.

  13. It depends how powerful paternalism is supposed to be. If some other species had gained, before us, whatever you might describe us as having, how would we want them to treat a certain species of naked ape? And equally, how do we want to treat other animals – those who didn't win the race? Is it paternalism to imagine how you'd want to be treated as the loser? Or does paternalism involve the inability to imagine such?

    And on advocacy, I'm not sure, apart from fancy words, exactly what advocational force slaves, female humans and humans with dark skin have that animals lack – fear, pain, sadness, terror. When did these cease to speak?
