Morality and Consciousness

What is the moral significance of consciousness? Jim Davies addressed the question in a short but thoughtful piece recently.

Davies quite rightly points out that although the nature of consciousness is often seen as an academic matter, remote from practical concerns, it actually bears directly on how we treat animals and each other (and of course, robots, an area that was purely theoretical not that long ago, but becomes more urgently practical by the day). In particular, the question of which entities are to be regarded as conscious is potentially decisive in many cases.

There are two main ways my consciousness affects my moral status. First, if I’m not conscious, I can’t be a moral subject, in the sense of being an agent (perhaps I can’t anyway, but if I’m not conscious it really seems I can’t get started). Second, I probably can’t be a moral object either; I don’t have any desires that can be thwarted and since I don’t have any experiences, I can’t suffer or feel pain.

Davies asks whether we need to give plants consideration. They respond to their environment and can suffer damage, but without a nervous system it seems unlikely they feel pain. However, pain is a complex business, with a mix of simple awareness of damage, actual experience of that essential bad thing that is the experiential core of pain, and in humans at least, all sorts of other distress and emotional response. This makes the task of deciding which creatures feel pain rather difficult, and in practice guidelines for animal experimentation rely heavily on the broad guess that the more like humans they are, the more we should worry. If you’re an invertebrate, then with few exceptions you’re probably not going to be treated very tenderly. As we come to understand neurology and related science better, we might have to adjust our thinking. This might let us behave better, but it might also force us to give up certain fields of research which are useful to us.

To illustrate the difference between mere awareness of harm and actual pain, Davies suggests the example of watching our arm being crushed while heavily anaesthetised (I believe there are also drugs that in effect allow you to feel the pain while not caring about it). I think that raises some additional fundamental issues about why we think things are bad. You might indeed sit by and watch while your arm was crushed without feeling pain or perhaps even concern. Perhaps we can imagine that for some reason you’re never going to need your arm again (perhaps now you have a form of high-tech psychokinesis, an ability to move and touch things with your mind that simply outclasses that old-fashioned ‘arm’ business), so you have no regrets or worries. Even so, isn’t there just something bad about watching the destruction of such a complex and well-structured limb?

Take a different example; everyone is dead and no-one is ever coming back, not even any aliens. The only agent left is a robot which feels no pleasure or pain but makes conscious plans; it’s a military robot and it spends its time blowing up fine buildings and destroying works of art, for no particular reason. Its vandalistic rampage doesn’t hurt anyone and cannot have any consequences, but doesn’t its casual destructiveness still seem bad?

I’d like to argue that there is a badness to destruction over and above its consequential impact, but it’s difficult to construct a pure example, and I know many people simply don’t share my intuition. It is admittedly difficult because there’s always the likelihood that one’s intuitions are contaminated by ingrained assumptions about things having utility. I’d like to say there’s a real moral rule that favours more things and more organisation, but without appealing to consequentialist arguments it’s hard for me to do much more than note that in fact moral codes tend to inhibit destruction and favour its opposite.

However, if my gut feeling is right, it’s quite important, because it means the largely utilitarian grounds used for rules about animal research and some human matters are not quite adequate after all; the fact that some piece of research causes no pain is not necessarily enough to stop its destructive character being bad.

It’s probably my duty to work on my intuitions and arguments a bit more, but that’s hard to do when you’re sitting in the sun with a beer in the charming streets of old Salamanca…

10 thoughts on “Morality and Consciousness”

  1. Just to help you explore your intuitions a bit, how did you feel about the destruction of the Berlin Wall? Do you have conflicting feelings when you watch the video of the swastika being blown off the roof of the Reichstag? What about the slow deterioration of the pyramids, or does an active agent need to be involved (which I guess was the point of having a lone post-humanity robot destroying stuff)? Is it the same intuition that denounces the wanton destruction of natural formations, like the guy who knocked over the balanced rock? What about the destruction of natural formations in the name of progress (e.g., strip-mining)?

    For myself, I think your intuition is a natural product of evolution, which is a natural product of the tendency to not only increase entropy, but increase the acceleration of increasing entropy.


  2. On pain, I think it helps to consider what an organism needs to have in order to experience it. It seems to need an internal self/body image (a model or neural firing pattern) built by continuous signalling from an internal network of sensors (nerves) throughout its body. It needs to have strong preferences about the state of that self/body so that when it receives signals that violate those preferences, it has powerful defensive impulses, impulses it can only inhibit with significant energy.

    We could argue about whether it needs to have some level of introspection so it knows that it’s in pain, but it’s not clear that newborn babies have that capability, yet I wouldn’t be willing to say a newborn can’t feel pain. (Although it used to be a common medical sentiment that they couldn’t, few people seem to believe that today.)

    When asking if trees feel pain, you might argue that they can be damaged, and may respond to that damage, but I can’t see any evidence that they build an internal body image anywhere, much less have preferences about it.

    Things get a little hazy with organisms with nervous systems but without any central brain, such as the worm C. elegans. Such simple worms will respond to noxious stimuli, but it’s hard to imagine they have any internal image or preferences about it in their diffuse nervous systems.

    But any vertebrate or invertebrate with distance senses has a central brain / ganglia. I often read that insects don’t feel pain, but when I spray one, it sure buzzes and convulses like it’s in serious distress, enough so that I usually try to put it out of its misery if I can. Am I just projecting? Perhaps, but I prefer to err on the side of caution (albeit not to the extent of letting the bug continue to live in my house).

    I think people resist the idea of animal consciousness because we eat them, use them for scientific research, or, in many cases, eradicate them when they cross our interests, and taking that stance avoids having to deal with difficult questions. Myself, I don’t think the research or pest control should necessarily stop, but we should be clear about what we’re doing and carefully consider the costs against any benefits.

  3. Plato thought meaning came from the forms, independent of human consciousness. With your post apocalyptic art vandal robot you’re repeating his mistake in an odd way, I think.

  4. “Its vandalistic rampage doesn’t hurt anyone and cannot have any consequences, but doesn’t its casual destructiveness still seem bad?”

    Well, of course it would _seem_ bad, whether or not it _is_ bad, because our emotional responses aren’t calibrated for a situation where no moral agents exist. Destroying valuable things is bad in nearly all situations in real life, and there’s no reason evolution or society would have given us proper intuitions about the moral status of destructive behavior in a situation that no person will ever encounter.

  5. “I often read that insects don’t feel pain”
    Peter Singer had a paper on this many years ago, with examples where the front half (or back half) keeps on with whatever it was doing despite disastrous events at the other end.

  6. Regarding plants, although not cognitively aware of the sensation of pain, plants (from 3.5-billion-year-old algae to angiosperms) not only experience suffering in the form of chemical panic felt by the entire organism via electrical impulses transmitted across the plasmodesmata, but it is now known that they live in fear of their ferociously peculiar understanding of pain.

    Located deep inside the plant genome, isolated within the first intron of MPK4, lie three ancient genes (PR1, PR2, PR5) that have revealed to researchers that MPK4 is devoted to negative regulation of PR gene expression. This gene expression is anticipatory. It is expectant. It is preparatory. It is suspicious. It is, in a word, fearful. If translated to the human experience, the PR gene expression is what a human observer would identify as a deep-rooted, physiologically hardwired anxiety; a most ancient paranoia.

  7. Peter

    Isn’t this a case of confusing first person feelings with a social construct such as morality? The two aren’t the same and one does not have a necessary relationship with the other. The idea that it’s just a matter of being able to feel pain is a bit simplistic. Dogs can feel pain but they don’t have a morality. Any dog would see the inflicting of pain as wrong, unless they were eating the victim. Human morality is all about the wider issues – beyond the pain. Human morality is about whether or not it’s always wrong to inflict pain, and deciding when it’s ‘morally’ ok. There isn’t a feeling in sight, which is why the Greeks liked the idea of it.

    There have been lots of Christian homosexuals who felt that homosexuality was wrong: it didn’t stop them from fancying men. Some men felt anguish as a consequence of this inability to fuse morality with personal feelings – but most simply changed their morality and felt just fine afterwards.

    We know that some Nazis didn’t feel good about killing Jews: a small number felt just fine. But most felt bad even though they subscribed to Nazi morality: no amount of Nazi rationale could change their feelings about killing. It doesn’t stop Nazi morality from being a morality.

    And of course whether you were a Nazi killer or not, the physical pain of being run over on the foot is the same. Pain has little to do with morality and the relationship is tangential to say the least.

    JBD

  8. Is sensing oneself – questioning and consciousness interacting instinctively, providing a feeling of oneself (even though objective value is always here) motivating us – to work with subjective values…
