Bot pain

What are they, sadists? Johannes Kuehn and Sami Haddadin at Leibniz University of Hannover are working on giving robots the ability to feel pain: they presented their project at the recent ICRA 2016 in Stockholm. The idea is that pain systems built along the same lines as those in humans and other animals will be more useful than simple mechanisms for collision avoidance and the like.

As a matter of fact I think that the human pain system is one of Nature’s terrible lash-ups. I can see that pain sometimes might stop me doing bad things, but often fear or aversion would do the job equally well. If I injure myself I often go on hurting for a long time even though I can do nothing about the problem. Sometimes we feel pain because of entirely natural things the body is doing to itself – why do babies have to feel pain when their teeth are coming through? Worst of all, pain can actually be disabling: if I get a piece of grit in my eye I suddenly find it difficult to concentrate on finding my footing or spotting the sabre-tooth up ahead – things that may be crucial to my survival – whereas the pain in my eye doesn’t even help me sort out the grit. So I’m a little sceptical about whether robots really need this, at least in the normal human form.

In fact, if we take the project seriously, isn’t it unethical? In animal research we’re normally required to avoid suffering on the part of the subjects; if this really is pain, then the unavoidable conclusion seems to be that creating it is morally unacceptable.

Of course no-one is really worried about that because it’s all too obvious that no real pain is involved. Looking at the video of the prototype robot it’s hard to see any practical difference from one that simply avoids contact. It may have an internal assessment of what ‘pain’ it ought to be feeling, but that amounts to little more than holding up a flag that has “I’m in pain” written on it. In fact tackling real pain is one of the most challenging projects we could take on, because it forces us to address real phenomenal experience. In working on other kinds of sensory system, we can be sceptics; all that stuff about qualia of red is just so much airy-fairy nonsense, we can say; none of it is real. It’s very hard to deny the reality of pain, or its subjective nature: common sense just tells us that it isn’t really pain unless it hurts. We all know what “hurts” really means, what it’s like, even though in itself it seems impossible to say anything much about it (“bad”, maybe?).

We could still take the line that pain arises out of certain functional properties, and that if we reproduce those then pain, as an emergent phenomenon, will just happen. Perhaps in the end if the robots reproduce our behaviour perfectly and have internal functional states that seem to be the same as the ones in the brain, it will become just absurd to deny they’re having the same experience. That might be so, but it seems likely that those functional states are going to go way beyond complex reflexes; they are going to need to be associated with other very complex brain states, and very probably with brain states that support some form of consciousness – whatever those may be. We’re still a very long way from anything like that (as I think Kuehn and Haddadin would probably agree).

So, philosophically, does the research tell us nothing? Well, there’s one interesting angle. Some people like the idea that subjective experience has evolved because it makes certain sensory inputs especially effective. I don’t really know whether that makes sense, but I can see the intuitive appeal of the idea that pain that really hurts gets your attention more effectively than pain that’s purely abstract knowledge of your own states. However, suppose researchers succeed in building robots that have a simple kind of synthetic pain that influences their behaviour in just the way real pain does for animals. We can see pretty clearly that there’s just not enough complexity for real pain to be going on, yet the behaviour of the robot is just the same as if there were. Wouldn’t that tend to disprove the hypothesis that qualia have survival value? If so, then people who like that idea should be watching this research with interest – and hoping it runs into unexpected difficulty (usually a decent bet for any ambitious AI project, it must be admitted).

10 Comments

  1. Tom Clark says:

    Peter:

    “Some people like the idea that subjective experience has evolved because it makes certain sensory inputs especially effective. I don’t really know whether that makes sense…”

    Indeed. The assumption behind the research seems to be that phenomenal pain is something that gets produced by sufficiently complex, e.g., human-style pain networks. Pain then plays an additional role in guiding behavior over and above what the networks themselves do, thus increasing the efficiency of the system. But of course there is as yet no account on offer of how the experience of pain gets produced and then supplements its physical/functional correlates in behavior control, and I’m not holding my breath.

    In designing damage avoidance modules in artificial systems, there’s no reason that the designer need worry about building in pain or other experience. Just make them as mechanistically efficient and sensitive as needed to avoid damage. Likewise, natural selection operated on biological systems that as a result became ever more sophisticated in their processing, but at no point do we need to suppose that experience per se was selected for in furthering survival. Rather it’s the *neural basis* of consciousness that was selected for, as well as non-conscious capacities.

    This is not to deny the subjective importance and reality of pain, and the fact that as conscious creatures we can’t help but take the experience of pain as the cause of behavior aimed at minimizing damage. But pain and other experiences don’t play a role in 3rd person, intersubjective scientific accounts of behavior for the simple reason that they aren’t observables. So the idea of building in pain for robots can’t be anything more than replicating in (say) silicon whatever processes are found to be associated with pain in biological systems. But it’s those processes, not pain, that will account for the increased efficiency of the artificial systems in avoiding damage.

    Still, it’s very likely that to replicate such processes in silicon will result in pain for the system, which should give us pause before charging ahead with such a program. As you say: “if this really is pain, then the unavoidable conclusion seems to be that creating it is morally unacceptable.”

  2. Stephen says:

    Creating a system of pain in an entity isn’t necessarily morally unethical. After all, some people are born without the ability to feel pain and doctors do what they can to treat the disorder. What is unethical is creating pain where there is no benefit to the entity. Poking a dog with a stick is unethical. Surgery to fix a problem isn’t, in spite of the pain it can cause.

    It is clear, though, that the researchers are not trying to create pain qualia. They are using the word “pain” as a metaphor for the mechanistic behaviour they are inserting into the bot. The human pain metaphor is guiding their choices of mechanisms to deal with the detection of various overload conditions.
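    Concretely, the metaphorical “pain” Stephen describes might amount to no more than this kind of overload reflex – a purely mechanistic sketch in which all the sensor names, thresholds, and numbers are invented for illustration, with no qualia anywhere:

```python
# Hypothetical sketch: robot "pain" as a mechanistic overload reflex.
# Sensor names, safe limits, and the response rule are all made up.

def pain_reflex(torque, temperature, contact_force,
                max_torque=40.0, max_temp=70.0, max_force=25.0):
    """Return a retreat strength scaled by how far any sensor
    exceeds its safe limit; 0.0 means no protective action."""
    overloads = [
        torque / max_torque,
        temperature / max_temp,
        contact_force / max_force,
    ]
    severity = max(overloads)
    if severity <= 1.0:
        return 0.0                       # all within limits: no "pain"
    return min(severity - 1.0, 1.0)      # clamp retreat strength to [0, 1]

print(pain_reflex(torque=20, temperature=60, contact_force=10))  # 0.0
print(pain_reflex(torque=50, temperature=60, contact_force=10))  # 0.25
```

    Calling this variable “pain” rather than “overload severity” is exactly the metaphor at issue: the word guides the design, but nothing here hurts.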

  3. Tom Clark says:

    Stephen:

    “It is clear, though, that the researchers are not trying to create pain qualia.”

    Perhaps not, but if they replicate something like our pain system in an AI, they might end up doing so. If we could create a system that didn’t feel pain but performed as well as we do, seems like that would be an ethically preferable solution if we’re interested in minimizing suffering. But a human-equivalent system that doesn’t feel pain may not be possible, depending on what consciousness ends up being a function of.

  4. Luís Ferreira says:

    Concerning the hypothesis that “we all know what ‘hurts’ really means”, I am forced to quote a patient suffering unbearable neural pain after having had surgery to remove it: “The pain is the same, but I feel much better now” (Damasio 1994, p. 266). Maybe not quite so clear after all…!

  5. Stephen says:

    Tom

    “if they replicate something like our pain system in an AI, they might end up doing so”

    No, I don’t think that is a possibility at all. They are creating things like reflexes and programmed responses to protect the bot. They can’t accidentally create consciousness, which would be necessary for feeling pain, by building more complex bot behaviour software.

    “If we could create a system that didn’t feel pain but performed as well as we do, seems like that would be an ethically preferable solution”

    I think that is exactly what they are trying to do, according to the article.

  6. Callan S. says:

    I think what is often invisible with pain is that it is weighted.

    For example, reaching into a box full of burning material…it’d really hurt!

    Reaching into the box to save a family pet from being burned…oh, suddenly now it’s a good idea to reach into the box?

    Because pain is a weight, not an absolute.

    Perhaps they need to build a machine that has an aversion, then evaluates whether the goal is worth it compared to the aversion effect.

    Plus the way that pain paints the world – if something scorches your hand, then you get an aversion about that thing. For others it’s just an object, for you you start to avoid it. It’s different. So the machine would need to actually change its ‘perceptions’ of things in regard to pain. MAYBE, even, be capable of false perception changes (ie, be able to develop an aversion to something that did not really cause the pain effect)
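    A minimal sketch of that weighted-aversion idea (every name, weight, and number here is hypothetical): the agent acts only when the goal outweighs the learned aversion, and aversions attach to whatever object was present – which also permits the “false” aversions mentioned above.

```python
# Hypothetical sketch: pain as a weight traded off against goals,
# with aversions learned per object. All values are illustrative.

class AversiveAgent:
    def __init__(self):
        self.aversion = {}  # object name -> accumulated aversion weight

    def experience_pain(self, obj, intensity):
        # Associate the pain with whatever object was present --
        # even one that did not actually cause it (a "false" aversion).
        self.aversion[obj] = self.aversion.get(obj, 0.0) + intensity

    def will_act(self, obj, goal_value):
        # Act only if the goal is worth more than the expected hurt.
        return goal_value > self.aversion.get(obj, 0.0)

agent = AversiveAgent()
agent.experience_pain("burning box", 5.0)
print(agent.will_act("burning box", goal_value=1.0))   # False: not worth it
print(agent.will_act("burning box", goal_value=10.0))  # True: saving the pet
```

    The point of the trade-off is that the same “painful” act can be refused or chosen depending on what is at stake, rather than being an absolute prohibition.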

  7. Ron Bar Lev says:

    It may be interesting to note that:
    a. Animals and people with pain insensitivity cannot manage to avoid severely damaging incidents, even though their other cognitive faculties are not known to be compromised. Even with high awareness their best efforts invariably fail.
    If proto-phenomenality, whatever its natural implementation might be, makes up the “foundation” of conscious environmental models (i.e. provides “grounding”, “prior” to behaviorally related computation) then its expression should relieve the computational machinery from much of the computational and communications burden (generally facilitating integration of locally bound, incommunicable, distributed information. For elaboration see my TSC2016 poster presentation). There is an article in the latest JCS (Vol.23 No 3-4) titled “Bifactualism: A new physicalist response to the knowledge argument” in which Danielle Swanepoel arrives at what I take to be a similar notion, discussing a distinction between General vs. Particular facts.
    Furthermore non-relational contents could motivate on-going perceptual and behavioral pattern evolution – beyond the range of correlations they are initially associated with – without “corrupting” the driving “proto-qualities”.
    Building on what Callan says, an agent may be able to conjure and evaluate a contextually relevant set of information-rich scenarios, for their modulatory effects on such local information-priors, without having to lean on tight behavioral coupling.

    b. Lingering or incapacitating pain may serve a brutal survival enhancing role that applies to populations, helping to relieve communities of weak individuals: Sacrificing the damaged to predators and reducing competition on scarce resources.

  8. Michael Murden says:

    Perhaps one of the functions of pain is to punish us for our mistakes (or even for our sins). The previous discussions of bad bots and bot punishment suggest that beings which can’t suffer, and therefore can’t be punished, can’t be morally responsible for their actions and therefore should not be allowed to perform actions that potentially have moral consequences. If that is the case, then it is immoral to create such bots and their creators can and should be punished for the actions of those bots. One can argue that if the capacity to suffer is essential to the capacity to be morally responsible then building the capacity for pain/suffering into bots is a moral obligation for anyone who builds bots which can perform actions that potentially have moral consequences.

  9. Mark S says:

    “As a matter of fact I think that the human pain system is one of Nature’s terrible lash-ups. I can see that pain sometimes might stop me doing bad things, but often fear or aversion would do the job equally well.”

    Wrong! People born without the ability to feel pain suffer from hundreds of injuries and fractures. Pick up a good medical pain textbook and you will see pictures of young kids with gross skeletal deformities because of the hundreds of fractures they had.

  10. ihtio says:

    Meanwhile on planet Earth…

    Discussions about the ethics of programming robots rage, while billions of sentient beings are tortured, enslaved, abused, raped, and slaughtered for the culinary pleasure of the Rational Man.
