Artificial Pain

What are they, sadists? Johannes Kuehn and Sami Haddadin, at Leibniz University of Hannover, are working on giving robots the ability to feel pain: they presented their project at the recent ICRA 2016 in Stockholm. The idea is that pain systems built along the same lines as those in humans and other animals will be more useful than simple mechanisms for collision avoidance and the like.
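As reported, the prototype grades contact into something like light, moderate and severe ‘pain’ and retracts with corresponding urgency. Here is a minimal sketch of that general idea; the class, thresholds and reflex values are all invented for illustration and are not taken from Kuehn and Haddadin’s actual system.

```python
# Hypothetical sketch of a graded "artificial pain" reflex controller.
# All names, thresholds, and responses are invented for illustration;
# they are not taken from Kuehn and Haddadin's system.

from dataclasses import dataclass

@dataclass
class ContactEvent:
    force: float        # contact force in newtons
    temperature: float  # surface temperature in degrees Celsius

def pain_level(event: ContactEvent) -> str:
    """Classify a contact into graded 'pain' severities,
    loosely analogous to light/moderate/severe nociception."""
    if event.temperature > 50.0 or event.force > 40.0:
        return "severe"
    if event.force > 15.0:
        return "moderate"
    if event.force > 5.0:
        return "light"
    return "none"

def reflex(level: str) -> dict:
    """Map 'pain' severity to a retraction behaviour: the stronger
    the signal, the faster the arm withdraws and the longer it
    stays away from the stimulus."""
    return {
        "none":     {"retract_speed": 0.0, "hold_off_s": 0.0},
        "light":    {"retract_speed": 0.2, "hold_off_s": 0.5},
        "moderate": {"retract_speed": 0.6, "hold_off_s": 2.0},
        "severe":   {"retract_speed": 1.0, "hold_off_s": 10.0},
    }[level]

print(reflex(pain_level(ContactEvent(force=20.0, temperature=22.0))))
# -> {'retract_speed': 0.6, 'hold_off_s': 2.0}
```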

As a matter of fact, I think that the human pain system is one of Nature’s terrible lash-ups. I can see that pain sometimes might stop me doing bad things, but often fear or aversion would do the job equally well. If I injure myself I often go on hurting for a long time even though I can do nothing about the problem. Sometimes we feel pain because of entirely natural things the body is doing to itself – why do babies have to feel pain when their teeth are coming through? Worst of all, pain can actually be disabling: if I get a piece of grit in my eye I suddenly find it difficult to concentrate on finding my footing or spotting the sabre-tooth up ahead, things that may be crucial to my survival, whereas the pain in my eye doesn’t even help me sort out the grit. So I’m a little sceptical about whether robots really need this, at least in the normal human form.

In fact, if we take the project seriously, isn’t it unethical? In animal research we’re normally required to avoid suffering on the part of the subjects; if this really is pain, then the unavoidable conclusion seems to be that creating it is morally unacceptable.

Of course no-one is really worried about that, because it’s all too obvious that no real pain is involved. Looking at the video of the prototype robot, it’s hard to see any practical difference from one that simply avoids contact. It may have an internal assessment of what ‘pain’ it ought to be feeling, but that amounts to little more than holding up a flag that has “I’m in pain” written on it. In fact, tackling real pain is one of the most challenging projects we could take on, because it forces us to address real phenomenal experience. In working on other kinds of sensory system we can be sceptics: all that stuff about qualia of red is just so much airy-fairy nonsense, we can say; none of it is real. It’s very hard to deny the reality of pain, or its subjective nature: common sense just tells us that it isn’t really pain unless it hurts. We all know what “hurts” really means, what it’s like, even though in itself it seems impossible to say anything much about it (“bad”, maybe?).
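To make the flag point concrete, here is a toy comparison (entirely invented, nobody’s actual controller): a plain avoidance controller and a ‘pain-flagged’ one that behave identically, the flag doing no work beyond its label.

```python
# Toy illustration (invented, not any real controller): two robots that
# behave identically; one merely labels its internal state "pain".

def avoidance_controller(contact_force: float) -> str:
    """Plain collision avoidance: retract above a force threshold."""
    return "retract" if contact_force > 10.0 else "continue"

def pain_controller(contact_force: float) -> str:
    """Same behaviour, but an internal 'pain' flag is raised first.
    The flag changes nothing downstream -- it is the "I'm in pain"
    sign held up, not the hurt itself."""
    in_pain = contact_force > 10.0   # the flag
    return "retract" if in_pain else "continue"

# Identical outputs for every input: no observable difference.
for f in (0.0, 5.0, 15.0, 50.0):
    assert avoidance_controller(f) == pain_controller(f)
```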

We could still take the line that pain arises out of certain functional properties, and that if we reproduce those then pain, as an emergent phenomenon, will just happen. Perhaps in the end, if the robots reproduce our behaviour perfectly and have internal functional states that seem to be the same as the ones in the brain, it will become just absurd to deny they’re having the same experience. That might be so, but it seems likely that those functional states are going to go way beyond complex reflexes; they are going to need to be associated with other very complex brain states, and very probably with brain states that support some form of consciousness – whatever those may be. We’re still a very long way from anything like that (as I think Kuehn and Haddadin would probably agree).

So, philosophically, does the research tell us nothing? Well, there’s one interesting angle. Some people like the idea that subjective experience has evolved because it makes certain sensory inputs especially effective. I don’t really know whether that makes sense, but I can see the intuitive appeal of the idea that pain that really hurts gets your attention more effectively than pain that’s purely abstract knowledge of your own states. However, suppose researchers succeed in building robots that have a simple kind of synthetic pain that influences their behaviour in just the way real pain does for animals. We can see pretty clearly that there’s just not enough complexity for real pain to be going on, yet the behaviour of the robot is just the same as if there were. Wouldn’t that tend to disprove the hypothesis that qualia have survival value? If so, then people who like that idea should be watching this research with interest – and hoping it runs into unexpected difficulty (usually a decent bet for any ambitious AI project, it must be admitted).