Feeling free

Eddy Nahmias recently reported on a ground-breaking case in Japan where a care-giving robot was held responsible for agreeing to a patient’s request for a lethal dose of drugs. Such a decision surely amounts to a milestone in the recognition of non-human agency; but fittingly for a piece published on 1 April, the case was in fact wholly fictional.

However, the imaginary case serves as an introduction to some interesting results from the experimental philosophy Nahmias has been prominent in developing. The research – and I take it to be genuine – aims not at clarifying the metaphysics or logical arguments around free will and responsibility, but at discovering how people actually think about those concepts.

The results are interesting. Perhaps not surprisingly, people are more inclined to attribute free will to robots when told that the robots are conscious. More unexpectedly, they attach weight primarily to subjective and especially emotional conscious experience. Free will is apparently thought to be more a matter of having feelings than of neutral cognitive processing.

Why is that? Nahmias offers the reasonable hypothesis that people think free will involves caring about things. Entities with no emotions, it might be, don’t have the right kind of stake in the decisions they make. Making a free choice, we might say, is deciding what you want to happen; if you don’t have any emotions or feelings you don’t really want anything, and so are radically disqualified from an act of will. Nahmias goes on to suggest, again quite plausibly, that reactive emotions such as pride or guilt might have special relevance to the social circumstances in which most of our decisions are made.

I think there’s probably another factor behind these results; I suspect people see decisions based on imponderable factors as freer than others. The results suggest, let’s say, that the choice of a lover is a clearer example of free will than the choice of an insurance policy; that might be because the latter choice has a lot of clearly calculable factors to do with payments versus benefits. It’s not unreasonable to think that there might be an objectively correct choice of insurance policy for me in my particular circumstances, but you can’t really tell someone their romantic inclinations are based on erroneous calculations.

I think it’s also likely that people focus primarily on interesting cases, which are often instances of moral decisions; those in turn often involve self-control in the face of strong desires or emotions.

Another really interesting result is that while philosophers typically see freedom and responsibility as two sides of the same coin, people’s everyday understanding may separate the two. It looks as though people do not generally distinguish all that sharply between the concepts of being causally responsible (it’s because of you it happened, whatever your intentions) and morally responsible (you are blameworthy and perhaps deserve punishment). So, although people are unwilling to say that corporations or unconscious robots have free will, they are prepared to hold them responsible for their actions. It might be that people generally are happier with concepts such as strict liability than moral philosophers typically are; or of course, we shouldn’t rule out the possibility that people just tend to suffer some mild confusion over these issues.

Thought-provoking stuff, anyway, and further evidence that experimental philosophy is a tool we shouldn’t reject.