Morality and Consciousness

What is the moral significance of consciousness? Jim Davies addressed the question in a short but thoughtful piece recently.

Davies quite rightly points out that although the nature of consciousness is often seen as an academic matter, remote from practical concerns, it actually bears directly on how we treat animals and each other (and of course, robots, an area that was purely theoretical not that long ago but is becoming more urgently practical by the day). In particular, the question of which entities are to be regarded as conscious is potentially decisive in many cases.

There are two main ways my consciousness affects my moral status. First, if I’m not conscious, I can’t be a moral subject, in the sense of being an agent (perhaps I can’t anyway, but if I’m not conscious it really seems I can’t get started). Second, I probably can’t be a moral object either; I don’t have any desires that can be thwarted, and since I don’t have any experiences, I can’t suffer or feel pain.

Davies asks whether we need to give plants consideration. They respond to their environment and can suffer damage, but without a nervous system it seems unlikely they feel pain. However, pain is a complex business: a mix of simple awareness of damage, the actual experience of that essentially bad thing at the experiential core of pain, and, in humans at least, all sorts of other distress and emotional response. This makes the task of deciding which creatures feel pain rather difficult, and in practice guidelines for animal experimentation rely heavily on the broad guess that the more like humans they are, the more we should worry. If you’re an invertebrate, then with few exceptions you’re probably not going to be treated very tenderly. As we come to understand neurology and related science better, we might have to adjust our thinking. This might let us behave better, but it might also force us to give up certain fields of research that are useful to us.

To illustrate the difference between mere awareness of harm and actual pain, Davies suggests the example of watching our arm being crushed while heavily anaesthetised (I believe there are also drugs that in effect allow you to feel the pain while not caring about it). I think that raises some additional fundamental issues about why we think things are bad. You might indeed sit by and watch while your arm was crushed without feeling pain or perhaps even concern. Perhaps we can imagine that for some reason you’re never going to need your arm again (perhaps now you have a form of high-tech psychokinesis, an ability to move and touch things with your mind that simply outclasses that old-fashioned ‘arm’ business), so you have no regrets or worries. Even so, isn’t there just something bad about watching the destruction of such a complex and well-structured limb?

Take a different example; everyone is dead and no-one is ever coming back, not even any aliens. The only agent left is a robot which feels no pleasure or pain but makes conscious plans; it’s a military robot and it spends its time blowing up fine buildings and destroying works of art, for no particular reason. Its vandalistic rampage doesn’t hurt anyone and cannot have any consequences, but doesn’t its casual destructiveness still seem bad?

I’d like to argue that there is a badness to destruction over and above its consequential impact, but it’s difficult to construct a pure example, and I know many people simply don’t share my intuition. It is admittedly difficult because there’s always the likelihood that one’s intuitions are contaminated by ingrained assumptions about things having utility. I’d like to say there’s a real moral rule that favours more things and more organisation, but without appealing to consequentialist arguments it’s hard for me to do much more than note that in fact moral codes tend to inhibit destruction and favour its opposite.

However, if my gut feeling is right, it’s quite important, because it means the largely utilitarian grounds used for rules about animal research and some human matters are not quite adequate after all; the fact that some piece of research causes no pain is not necessarily enough to stop its destructive character being bad.

It’s probably my duty to work on my intuitions and arguments a bit more, but that’s hard to do when you’re sitting in the sun with a beer in the charming streets of old Salamanca…