Fish Pain

Fish don’t feel pain, says Brian Key.  How does he know? In the deep philosophical sense it remains a matter of some doubt as to whether other human beings really feel pain, and as Key notes, Nagel famously argued that we couldn’t know what it was like to be a bat at all, even though we have much more in common with them than with fish. But in practice we don’t really feel much doubt that humans with bodies and brains like ours do indeed have similar sensations, and we trust that their reports of pain are generally as reliable as our own. Key’s approach extends this kind of practical reasoning. He relies on human reports to identify the parts of the brain involved in feeling pain, and then looks for analogues in other animals.

Key’s review of the evidence is interesting; in brief he concludes that it is the cortex that ‘does’ pain; fish don’t have anything that corresponds with human cortex, or any other brain structure that plausibly carries out the same function. They have relatively hard-wired responses to help them escape  physical damage, and they have a capacity to learn about what to avoid, but they don’t have any mechanism for actually feeling pain with. It is really, he suggests, just anthropomorphism that sees simple avoidance behaviour as evidence of actual pain. Key is rightly stern about anthropomorphism, but I think he could have acknowledged the opposite danger of speciesism. The wide eyes and open mouths of fish, their rigid faces and inability to gesture or scream incline us to see them as stupid, cold, and unfeeling in a way which may not be properly objective.

Still, a careful examination of fish behaviour is a perfectly valid supplementary approach, and Key buttresses his case by noting that pain usually suppresses normal behaviour. Drilling a hole in a human’s skull tends to inhibit locomotion and social activity, but apparently doing the same thing to fish does not stop them going ahead with normal foraging and mating behaviour as though nothing had happened. Hard to believe, surely, that they are in terrible pain but getting on with a dancing display anyway?

I think Key makes a convincing case that fish don’t feel what we do, but there is a small danger of begging the question if we define pain in a way that makes it dependent on human-style consciousness to begin with. The phenomenology really needs clarification, but defining pain, other than by demonstration, is peculiarly difficult. It is almost by definition the thing we want to avoid feeling, yet we can feel pain without being bothered by it, and we can have feelings we desperately want to avoid which are, however, not pain. Pain may be a tiny twinge accompanying a reflex, an attention-grabbing surge, or something we hold in mind and explore (Dennett, I think, says somewhere that he had been told that examining pain introspectively was one way to make it bearable. On his next dentist visit, he tried it out and found that although the method worked, the effort and boredom involved in continuous close attention to the detailed qualities of his pain were such that he eventually preferred straightforward hurting.) Humans certainly understand pain and can opt to suffer it voluntarily in ways that other creatures cannot; whether on balance this higher awareness makes our pain more or less bearable is a difficult question in itself. We might claim that imagination and fear magnify our suffering, but being to some degree aware and in control can also put us in a better position than a panicking dog that cannot understand what is happening to it.

Key leans quite heavily on reportable pain; there are obvious reasons for that, but it could be that doing so skews him towards humanity and towards the cortex, which is surely deeply involved in considering and reporting. He dismisses some evidence that pain can occur without a cortex, and must therefore happen in the brain stem. His objections seem reasonable, but surely it would be odd if nothing were going on in the brain stem, that ‘old brain’ we have inherited through evolution, even if it’s only some semi-automatic avoidance stuff. The danger is that we might be paying attention to the reportable pain dealt with by the ‘talky’ part of our minds while another kind is going on elsewhere. We know from such phenomena as blindsight that we can unconsciously ‘see’ things; could we not also have unconscious pain going on in another part of the brain?

That raises another important question: does it matter? Is unconscious or forgotten pain worth considering – would fish pain be negligible even if it exists? Pain is, more or less, the feeling we all want to avoid, so in one way its ethical significance is obvious. But couldn’t those automatic damage avoidance behaviours have some ethical significance too? Isn’t damage sort of ethically charged in itself? Key rejects the argument that we should give fish the ‘benefit of the doubt’, but there is a slightly different argument that being indifferent to apparent suffering makes us worse people even if strictly speaking no pains are being felt.

Consider a small boy with a robot dog; the toy has been programmed to give displays of affection and enjoyment, but if mistreated it also performs an imitation of pain and distress. Now suppose the boy never plays nicely, but obsessively ‘tortures’ the robot, trying to make it yelp and whine as loudly as possible. Wouldn’t his parents feel some concern; wouldn’t they tell him that what he was doing was wrong, even though the robot had no real feelings whatever? Wouldn’t that be a little more than simple anthropomorphism?

Perhaps we need a bigger vocabulary; ‘pain’ is doing an awful lot of work in these discussions.

Artificial Pain

What are they, sadists? Johannes Kuehn and Sami Haddadin, at Leibniz University of Hannover, are working on giving robots the ability to feel pain: they presented their project at the recent ICRA 2016 in Stockholm. The idea is that pain systems built along the same lines as those in humans and other animals will be more useful than simple mechanisms for collision avoidance and the like.

As a matter of fact I think that the human pain system is one of Nature’s terrible lash-ups. I can see that pain sometimes might stop me doing bad things, but often fear or aversion would do the job equally well. If I injure myself I often go on hurting for a long time even though I can do nothing about the problem. Sometimes we feel pain because of entirely natural things the body is doing to itself – why do babies have to feel pain when their teeth are coming through? Worst of all, pain can actually be disabling; if I get a piece of grit in my eye I suddenly find it difficult to concentrate on finding my footing or spotting the sabre-tooth up ahead, things that may be crucial to my survival, whereas the pain in my eye doesn’t even help me sort out the grit. So I’m a little sceptical about whether robots really need this, at least in the normal human form.

In fact, if we take the project seriously, isn’t it unethical? In animal research we’re normally required to avoid suffering on the part of the subjects; if this really is pain, then the unavoidable conclusion seems to be that creating it is morally unacceptable.

Of course no-one is really worried about that because it’s all too obvious that no real pain is involved. Looking at the video of the prototype robot it’s hard to see any practical difference from one that simply avoids contact. It may have an internal assessment of what ‘pain’ it ought to be feeling, but that amounts to little more than holding up a flag that has “I’m in pain” written on it. In fact tackling real pain is one of the most challenging projects we could take on, because it forces us to address real phenomenal experience. In working on other kinds of sensory system, we can be sceptics; all that stuff about qualia of red is just so much airy-fairy nonsense, we can say; none of it is real. It’s very hard to deny the reality of pain, or its subjective nature: common sense just tells us that it isn’t really pain unless it hurts. We all know what “hurts” really means, what it’s like, even though in itself it seems impossible to say anything much about it (“bad”, maybe?).
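The point about the flag can be made concrete. Here is a minimal sketch, in Python, of why an internal ‘pain’ variable adds nothing observable: one toy controller simply avoids contact, the other routes the same sensor reading through a graded internal ‘pain’ level before acting. All names and thresholds are hypothetical illustrations, not anything from Kuehn and Haddadin’s actual system.

```python
# Toy comparison: plain collision avoidance vs. a controller with an
# internal 'pain' variable. Hypothetical names and thresholds throughout.

def avoidance_controller(contact_force: float) -> str:
    """Retract whenever contact force crosses a fixed threshold."""
    return "retract" if contact_force > 5.0 else "continue"

def pain_controller(contact_force: float) -> tuple:
    """Map contact force onto a graded internal 'pain' level, then act on it."""
    pain_level = max(0.0, contact_force - 5.0)  # the flag reading "I'm in pain"
    action = "retract" if pain_level > 0.0 else "continue"
    return action, pain_level

# Externally the two controllers are indistinguishable: same input, same action.
for force in (0.0, 4.9, 5.1, 20.0):
    assert avoidance_controller(force) == pain_controller(force)[0]
```

The internal `pain_level` number is causally idle as far as outward behaviour goes, which is exactly the sense in which it amounts to little more than a raised flag.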

We could still take the line that pain arises out of certain functional properties, and that if we reproduce those then pain, as an emergent phenomenon, will just happen. Perhaps in the end if the robots reproduce our behaviour perfectly and have internal functional states that seem to be the same as the ones in the brain, it will become just absurd to deny they’re having the same experience. That might be so, but it seems likely that those functional states are going to go way beyond complex reflexes; they are going to need to be associated with other very complex brain states, and very probably with brain states that support some form of consciousness – whatever those may be. We’re still a very long way from anything like that (as I think Kuehn and Haddadin would probably agree).

So, philosophically, does the research tell us nothing? Well, there’s one interesting angle. Some people like the idea that subjective experience has evolved because it makes certain sensory inputs especially effective. I don’t really know whether that makes sense, but I can see the intuitive appeal of the idea that pain that really hurts gets your attention more effectively than pain that’s purely abstract knowledge of your own states. However, suppose researchers succeed in building robots that have a simple kind of synthetic pain that influences their behaviour in just the way real pain does for animals. We can see pretty clearly that there’s just not enough complexity for real pain to be going on, yet the behaviour of the robot is just the same as if there were. Wouldn’t that tend to disprove the hypothesis that qualia have survival value? If so, then people who like that idea should be watching this research with interest – and hoping it runs into unexpected difficulty (usually a decent bet for any ambitious AI project, it must be admitted).

Pain without suffering

The latest issue of the JCS is all about pain.  Pain has always been tough to deal with: it’s subjective, not a thing out there in the world, and yet even the most hardline reductionist materialist can’t really dismiss it as an airy-fairy poetic delusion. We are all intensely concerned about pain, and the avoidance of it is among our most important moral and political projects. When you step back a bit, that seems remarkable: it’s easy to see more or less objective reasons why we should want to prevent disease, mitigate the effects of natural disasters, prevent wars and famines – harder to see why near or even at the top of the list of things we care about should be avoiding the occurrence of a particular kind of pattern of neuronal firing.

It’s hard even to say what it is. It seems to be a sensation, but a sensation of what? Of… pain? Our other sensations give us information, about light, sound, temperature, and so on. Pain is often accompanied by feelings of pressure or heat or whatever, but it is quite distinct and separable from those impressions. In itself, the only thing pain tells us is: ‘you’re in pain’.  It seems sensible, therefore, to regard it as not a sensation in the same way as other sensations, but as being something like a kind of deferrable reflex: instead of just automatically moving our arm away from the hot pan it tells us urgently that we ought to do so. So it turns out to be something like a change in our dispositions or a change of weightings in our current projects.  That kind of account is appealing except for the single flaw of being evident nonsense.  When I’m in the dentist’s chair, I’m not feeling a change in my dispositions or anything that abstract, I’m feeling pain – that thing, that bad thing, you know what I mean, even though words fail me.

If it’s hard to describe, then, is pain actually the most undeniable of qualia? From some angles it looks like a quale, but qualia are supposed to have no causal effects on our behaviour, and that is exceptionally difficult to believe in the case of pain: if ever anything was directly linked to motivation, pain is it.  Undeniability looks more plausible: pain is pre-eminently one of the things it seems we can’t be wrong about. I might be mistaken in my belief that my hand has just been sheared off by a saw: that’s a deduction about the state of the world based on the evidence of my senses; I don’t see how I could be wrong about the fact that I’m in agony because no reasoning is involved: I just am.

One of the contributors to the JCS might take issue with that, though. S. Benjamin Fink wants to present an approach to difficult issues of phenomenal experience, and as his example he offers a treatment of pain which suggests it isn’t the simple unanalysable primitive we might think. In Fink’s view one of the dangers we need to guard against is the assumption that elements of experience we’ve always, as it happens, had together are necessarily a single phenomenon.  In particular, he wants to argue for the independence of pain and suffering/unpleasantness. Pain, it turns out, is not really bad after all (at least, not necessarily and in itself).

Fink offers several examples where pain and unpleasantness occur separately. An itch is unpleasant but not painful; the burning sensation produced by hot chillies is painful but not unpleasant (at least, so long as it occurs in the mouths of regular chilli eaters, and not in their eyes or a neophyte’s mouth). These examples seem vulnerable to a counterargument based on mildness: itches aren’t described as pains just because they aren’t bad enough; and the same goes for spicy food in a mouth that has become accustomed to it. But Fink’s real clincher is the much more dramatic example of pain asymbolia. People with this condition still experience pain but don’t mind it. It’s not at all that they’re anaesthetised: they are aware of pain and can use it rationally to decide when some part of their body is in danger of damage, but they do so, as it were, coldly, and don’t mind needles being stuck in them for experimental purposes at all. Fink quotes a woman who underwent a lobotomy to cure continual pain: many years later she reported happily that the pain was still there: “In fact, it’s still agonising. But I don’t mind.”

These people are clearly exceptional, but it’s worth noting that even in normal people the link between nociception, the triggering of pain-sensing nerve-endings, and the actual experience of pain is by no means as invariable and straightforward as philosophers used to believe back in the days when some argued that the firing of c-fibres was identical with the occurrence of pain. Fink wants to draw a distinction between pain itself, a sensation, and suffering, the emotional response associated with it; it is the latter, in his view, which is the bad thing while pain itself is a mere colourless report. As a further argument he notes research which seems to show that when subjects are feeling compassion, some neural activity can be seen in areas which are normally active when the subjects themselves are feeling pain. The subjects, as it were, feel the pain of others, though obviously without actual nociception.

So is Fink right? I think many people’s first reaction might be that unpleasantness just defines pain, so that if you’re feeling something that isn’t unpleasant, we wouldn’t want to call it pain. We might say that people with asymbolia experience nocition (not sure that’s really a word but work with me on this) but not pain. Fink would say – he does say – that we ought to listen to what people say. Usage should determine our definition, he says; we should not make our definitions normatively control our usage.  But he’s in a weak position here. If we are to pay attention to usage, then surely we should pay attention to the usage of the vast majority of people who regard pain as a unitary phenomenon, not to a small group of people with a most unusual set of experiences which might have tutored their perceptions in unreliable ways. I’m not sure it’s clear that asymbolics, in any case, insist that what they’re aware of is proper, echt pain – if they were asked, would they perhaps agree that it’s not pain in quite the ordinary sense?

I’m also not convinced that suffering, or unpleasantness, is really a well-defined entity in the way Fink requires. Unpleasantness may be a slight lapse of manners at a tea-party; you might suffer badly on the stock exchange while happily sipping a cocktail on your sun-lounger. I’m not sure there is a distinct complex of emotional affect we can label as suffering at all. And if there is, we’re back with the sheer implausibility of saying that that’s what the bad stuff is: when I hit my thumb with a hammer it doesn’t seem like a matter of affect to me, it seems very definitely like old-fashioned simple pain.

If we’re going to take that line, though, we have to account for Fink’s admittedly persuasive examples, in particular asymbolia.  Never mind now what we call it: how is it that these people can experience something they’re willing to call pain without minding it, if it isn’t that our concept of pain needs reform?

Well, there is one other property of pain which we’ve overlooked so far.  There is one obvious kind of pain which I can perceive without being disturbed at all – yours. We may indeed feel some sympathetic twinges for the pain of others, but a key point about pain is that it’s essentially ours. It sticks to us in a way nothing else does: it’s normal in philosophy to speak of the external world, but pain, perhaps uniquely, isn’t external in that sense: it’s in here with us.  That may be why it has another property, noted by Fink, of being very difficult to ignore.

So it may be that subjects with asymbolia are not lacking emotional affect, but rather any sense of ownership. The pain they feel is external, it’s not particularly theirs: like Mrs Gradgrind they feel that

‘… there’s a pain somewhere in the room, but I couldn’t positively say that I have got it.’