The Royal Society had a joint event with the Royal Society of Literature back in November, devoted to the question of the mental ‘gap’ between human beings and animals. An eminent panel (Nicky Clayton, Doris Lessing, Will Self and Andrew Whiten) offered thoughts and addressed a few questions. There’s a video here, though I wouldn’t bother trying it unless you have a really fast connection.
In fact, although entertaining and somewhat enlightening, the discussion never made more than glancing contact with any of the philosophical issues; a lot of the time was spent in remarking on the charm and intelligence of apes, corvids, cats, and other animals. Will Self asserted that there were no differences between apes and humans. Having read his novel Great Apes, which provides a number of interesting details about the social and sexual behaviour of chimpanzees, I thought this shed a surprising light on his own private life; as did, perhaps, his insistence that consciousness was greatly over-rated.

Anyway, I thought it might be worth trying to scratch the surface of the issues a little more deeply. What are the fundamental mental differences between animals and humans? For our purposes three areas stand out: moral, intellectual, and phenomenal.


A number of moves to upgrade the official moral status of animals have been debated in recent times: the Great Ape Project, for example, seeks to obtain recognition of three basic legal rights for our nearer relatives among the animals. Whether this is philosophically a sound approach is open to doubt; Roger Scruton, at least, has argued that since animals do not have moral duties, they cannot have rights either. I find the idea that duties and rights go together in this way very attractive: unfortunately, I can’t see any compelling reason to think it’s true. However, it could reasonably be argued that for talk about rights to be meaningful, the person with those rights has to have the ability to assert or give up the right in question, and some animals surely don’t have the intellectual capacity to assert or give up anything that abstract. How far this is true of all animals, of course, depends on your view of their intellectual capacities, and to some degree on how explicit you feel the assertion of rights needs to be – is struggling to survive an assertion of the right to live?

We don’t have to talk in terms of rights, however, and in fact the idea of natural or moral rights is a slippery concept which sometimes gives a false air of dignity to mere desires or opinions – when I say animals have a right to something, I may be doing no more than urging others to agree to giving it. Peter Singer’s line on animal morality takes a different tack, making them parties to Utilitarianism. The argument here is that animal pain or pleasure should be weighed in the scales together with human feelings. If you’re a utilitarian, this conclusion seems a logical extension of the system, and even if you’re not you may well feel that animal pain is morally relevant. However, the argument does rest on the assumption that animals feel pain, which has been denied by some, about some animals in any case; so this line of argument may rest on what view we take of the phenomenal capacities of animals and how far their version of pain resembles ours (although it would be perfectly possible, logically, to adopt a form of utilitarianism which took no account of phenomenal qualities and merely took it that pain responses had negative utility regardless of whether they ‘really hurt’).

One argument which tends to lend weight to a Scrutonish line of thinking is the plausible contention that animals are moral objects, but not moral subjects: that is, there may be moral rules about what can be done to them, but not about what they should do. It may be a moral principle that a wolf should not be caused unnecessary pain; but we don’t consider holding wolves accountable for their behaviour and subjecting them to the apparatus of the criminal justice system for any excess pain they may have caused to other animals.

Or do we? In training and working with a dog we certainly do use punishments and rewards: we talk in censorious tones of bad behaviour and admiring ones of good: not all that different from the way we might behave towards a child. There’s scope for argument here over whether we really think a ‘bad dog’ is morally bad or bad in some lesser, more pragmatic sense – whether, to get a bit Kantian, we’re dealing with categorical or merely hypothetical imperatives.

It seems reasonable to me, skipping over that discussion, that animals fall on a moral scale so far as responsibility is concerned: most animals have a negligible level of moral responsibility, but the most intelligent show at least the beginnings of a capacity for it, albeit at a level well below that of most adult human beings.

We seem, anyway, to have established a pleasingly symmetrical (but suspiciously neat) structure whereby there are two lines to take on the moral qualities of animals, each of which depends at least partly on the view we take of one of the other two areas I mentioned to begin with.

So what about the intellectual capacities of animals? The Royal Society discussion touched on the way tool use, once considered a uniquely human practice, has gradually been found to occur in a range of species and a number of different forms. (This apparently includes the nose-picking tool, something not yet achieved by human technology so far as I know. I hope the designers of the Swiss Army Knife are not reading this.) This is undoubtedly true, but it still does seem that human tool use is distinctively different, even if it’s difficult to formulate the distinction in qualitative terms.

The subject of animal linguistic competence causes strong disagreement. The work of Sue Savage-Rumbaugh and others has produced chimps which appear to manage a high degree of communication through special keyboards or through sign language; but while some find the results compelling evidence of real linguistic competence verging on the human, others remain unconvinced.

It strikes me that there is an interesting comparison to be made here with the Turing test. Both candidates for human-style cognition, the chimp and the computer, are to be judged very largely on their ability to hold a conversation (maybe we’re not so much Homo Sapiens as Homo Loquens), and in both cases the results are interpreted very differently according to the predisposition of the observer. However, it seems to me we expect much more grammatical competence from a computer than we do from a chimpanzee: the Turing test requires properly constructed human sentences, whereas the chimpanzees are provided with a simplified system where it’s much harder to go wrong. In fact, we require the machine’s text responses to be indistinguishable from those of a human being, but a few errors on the chimp’s part would not cause us to rise from the desk, satisfied that the creature was a sham.

In a different respect, though, we’re harder on the primate: we take it for granted that if the chimp can communicate at all it will be able to tell us about objects nearby and carry out simple instructions even in unfamiliar surroundings: to ask the computer to fetch a ball and give it to its sister would not be considered fair play.

I think the comparison of our attitudes in the two cases suggests that we do, almost unthinkingly, attribute to apes some of the basic mental abilities we enjoy ourselves: but the fact that we let them off the tougher linguistic tasks we expect an inanimate algorithm to be able to deal with shows that even the pro-primate school of thought is aware of a distinct gap.

What, then, about the phenomenal experience of animals? There is a pretty well established consensus for practical purposes which says the bigger your brain, the more you feel and the greater the care that must be taken to avoid unnecessary suffering. This may short-change some non-mammals a bit, but it seems an appealing rule of thumb. A minority school of thought says that humans are qualitatively different here: only humans know that they are in pain, and that’s what makes it so bad. Dennett, an enemy of qualia in all their forms who wants pain to be a matter of interpretation, seems to say it’s largely a matter of crushed hopes and projects, though I don’t think he goes on to draw the implied negative conclusion about the pain capacity of projectless animals.

The real philosophical problem is that we have no secure grounds for believing anyone feels pain except ourselves: it’s only a weak induction from the single case where we have direct evidence. Perhaps the best we can say is that the induction applies to some animals almost as well as it applies to our fellow human beings.

Where does that leave us? Are animals conscious? I don’t know, but my guess is that there is no clear qualitative gap between humans and animals – there’s nothing we’ve got that at least some of them haven’t got in at least a vestigial form. But there’s a huge quantitative gap; they’re never more than slightly conscious.

One Comment

  1. steve esser says:

    Good post, Peter. Evolution teaches us that the differences are quantitative. The thalamo-cortical system, which seems to correlate with phenomenal experience, exists in vertebrates beyond mammals as well from what I’ve read.
