Is there a retribution gap? In an interesting and carefully argued paper, John Danaher argues that, in respect of robots, there is.

For human beings in normal life, he argues, a fairly broad conception of responsibility works OK. Often enough we don’t even need to distinguish between causal and moral responsibility, let alone worry about the six or more different types identified by hair-splitting philosophers.

However, in the case of autonomous robots the sharing out of responsibility gets more difficult. Is the manufacturer, the programmer, or the user of the bot responsible for everything it does, or does the bot properly shoulder the blame for its own decisions? Danaher thinks that gaps may arise, cases in which we can blame neither the humans involved nor the bot. In these instances we need to draw some finer distinctions than usual, and in particular we need to separate the idea of liability into compensation liability on one hand and retributive liability on the other. The distinction is essentially that between who pays for the damage and who goes to jail; typically the difference between matters dealt with in civil and criminal courts. The gap arises because for liability we normally require that the harm must have been reasonably foreseeable. However, the behaviour of autonomous robots may not be predictable either by their designers or users on the one hand, or by the bots themselves on the other.

In the case of compensation liability Danaher thinks things can be patched up fairly readily through the use of strict and vicarious liability. These forms of liability, already well established in legal practice, give up some of the usual requirements and make people responsible for things they could not have been expected to foresee or guard against. I don’t think the principles of strict liability are philosophically uncontroversial, but they are legally established and it is at least clear that applying them to robot cases does not introduce any new issues. Danaher sees a worse problem in the case of retribution, where there is no corresponding looser concept of responsibility, and hence, no-one who can be punished.

Do we, in fact, need to punish anyone? Danaher rightly says that retribution is one of the fundamental principles behind punishment in most if not all human societies, and is upheld by many philosophers. Many, perhaps, but my impression is that the majority of moral philosophers and lay opinion actually see some difficulty in justifying retribution. Its psychological and sociological roots are strong, but the philosophical case is much more debatable. For myself I think a principle of retribution can be upheld, but it is by no means as clear or as well supported as the principle of deterrence, for example. So many people might be perfectly comfortable with a retributive gap in this area.

What about scapegoating – punishing someone who wasn’t really responsible for the crime? Couldn’t we use that to patch up the gap? Danaher mentions it in passing, but treats it as something whose unacceptability is too obvious to need examination. I think, though, that in many ways it is the natural counterpart to the strict and vicarious liability he endorses for the purposes of compensation. Why don’t we just blame the manufacturer anyway – or the bot? (Danaher describes Basil Fawlty’s memorable thrashing of his unco-operative car.)

How can you punish a bot though? It probably feels no pain or disappointment, it doesn’t mind being locked up or even switched off and destroyed. There does seem to be a strange gap if we have an entity which is capable of making complex autonomous decisions, but doesn’t really care about anything. Some might argue that in order to make truly autonomous decisions the bot must be engaged to a degree that makes the crushing of its hopes and projects a genuine punishment, but I doubt it. Even as a caring human being it seems quite easy to imagine working for an organisation on whose behalf you make complex decisions, but without ultimately caring whether things go well or not (perhaps even enjoying a certain schadenfreude in the event of disaster). How much less is a bot going to be bothered?

In that respect I think there might really be a punitive gap that we ought to learn to live with; but I expect the more likely outcome in practice is that the human most closely linked to disaster will carry the can regardless of strict culpability.


  1. Tom Clark says:

    Thanks Peter, interesting. You say “…I think there might really be a punitive gap that we ought to learn to live with.”

    Agreed: we should not indulge our retributive instincts, the desire to punish on grounds of desert, irrespective of any beneficial consequences such as deterrence, rehabilitation or public safety. The author suggests the retribution gap may be an opportunity to promote other approaches to criminal justice:

    “…the existence of a retribution gap presents a strategic opening for those who oppose retributivism. An increased amount of robot-caused harm, in the absence of retributive blame, could shock or unsettle the cultural status quo. Since that status quo seems to be dominated by retributivism (in many countries), something needs to be inserted into the gap in order to restore the equilibrium. Those who prefer and advocate for non-retributive approaches to crime and punishment could find themselves faced with a great opportunity. Their calls for a more consequentialist, harm-reductionist approach to our practices of punishment and blame could have a better hearing in light of the retribution gap. Consequently, there is something of significance in the retribution gap for those who completely reject the retributivist philosophy.”

    Music to my ears! http://www.naturalism.org/applied-naturalism/criminal-justice/repressing-revenge

  2. Peter says:

    Yes. The usual fear would be that instead of allowing our instincts to be restrained or tutored we might seek informal revenge to make up for the formal retribution ‘deficit’. But perhaps we could stretch to seeing bot crimes as accidents?

  3. Stephen says:

    There is a range of autonomy for bots. At the lowest end, a bot will be entirely predictable and do the same thing every time. A different bot of the same model will do the same thing as the first bot as well. This type of autonomous bot is akin to a tool. A more sophisticated autonomous bot will learn behaviour, so that different bots of the same model might act differently. The same bot will do essentially the same thing in response to a stimulus, though. This is still not much different than a “smart” tool. A further sophistication is to add complex behaviour. The response of a bot might be different each time, depending on a variety of circumstances, and be somewhat unpredictable. This is like owning a pet. The ultimate sophistication in autonomy would be a bot that demanded its own emancipation.

    It seems to me that before a bot is emancipated, it would need a sense of social responsibility. That would affect punishment models.

    Retribution, as opposed to deterrence, is for the victim. It doesn’t matter what the bot or the owner, if there is one, thinks of the punishment, only what the victim thinks. If the bot were emancipated, would the victim be satisfied with a “reset to factory defaults”, or would they want the bot destroyed, a recall of the defective bots, financial compensation or even free bot services? They may or may not get what they want, because society as a whole, through the court system, gets a say as to what is reasonable and just. Retribution for an unemancipated bot would necessarily be directed at the bot owner, in the same way that retribution would be directed at a dog owner, not the dog, if it misbehaved.

    Therefore, it seems ownership is key and eliminates much of the “retribution gap”.

  4. Callan S. says:

    It seems the difficulty of responsibility here comes from avoiding acknowledging the simple principle of slavery. The bot isn’t independent, it’s owned – its owner pays damages. Simple.

    Only when we try and paste our modern sensibilities onto ‘autonomous robot’ does it get confusing. I.e., although ‘robot’ means slave, we try and come at the ‘autonomous robot’ with some sense of… well, who’s in charge of what now?

  5. Peter says:

    I like the way these ugly old concepts – scapegoating, slavery – are brought back to life in this discussion, like zombies out of a freshly-opened tomb coming forward offering to do useful work for us… 🙂

  6. Callan S. says:

    And yet it’s that simple. Is someone making an autonomous robot for the robot’s own sake? For it to have its own life – somewhat like the urge to have children and give a child its own life?

    Or are they being made for labour?

    Autonomous but not actually autonomous. The preparedness to enact that contradiction has simply been waiting in a tomb, indeed. Scraping at the inside of the coffin lid. If there ever was a lid.

  7. Stephen says:

    If you don’t like the slavery paradigm, think of owning a working dog. A search and rescue dog gets lots of love and respect (as well as training), but if it bites someone, the owner is still responsible.

  8. John Davey says:

    You can substitute ‘bridge’ for computer and ‘civil engineer’ for programmer and get your answer I think.

    If a bridge ‘decides’ to collapse because the engineer didn’t use the right concrete, then presumably the engineer is responsible. If a bridge decides to collapse because of a large bomb, then the bomber is responsible.

    Bridges make tough decisions all day long, and the responsibility for keeping cars out of water is a tough one. Is it right to punish bridges when they collapse? How would you even know a bridge felt punished?
