I came across ‘A defense of Anthropomorphism: comparing Coetzee and Gowdy’ by Onno Oerlemans the other day. The main drift of the paper is literary: it compares the realistic approach to dog sentiment in J.M. Coetzee’s Disgrace with the strongly anthropomorphic elephants in Barbara Gowdy’s The White Bone. But it begins with a wide-ranging survey of anthropomorphism, the attribution of human-like qualities to entities that don’t actually have them. It mentions that the origin of the term has to do with representing God as human (a downgrade, unlike other cases of anthropomorphism), notes Darwin’s willingness to attribute similar emotions to humans and animals, and summarises Derrida’s essay ‘The Animal That Therefore I Am (More to Follow)’. The Derrida piece discusses the embarrassment a human being may feel about being naked in front of a pet cat (I didn’t know that the popular Ceiling Cat internet meme had such serious intellectual roots) and concludes that taking the consciousness of animals as seriously as our own threatens one of the fundamental distinctions installed in the foundations of our conception of the world.
That may be, but the attribution of human sentience to animals is rife in popular culture, especially when it comes to children. Some lists of banned books suggest that the Chinese government has cracked down on many apparently harmless children’s books; it turns out this is because at one time the Chinese decided to eliminate anthropomorphism from children’s literature, wiping out a large swathe of traditional and Western stories. I can’t help feeling a small degree of sympathy with this: a look at children’s television reveals so many characters who are either outright talking animals or (even stranger) humanoids with animal heads that you might well conclude there was some law against the depiction of human beings. It would surely seem odd to any aliens who might be watching that we were so obsessed with the fantasised doings of other species.
Or perhaps it wouldn’t seem strange at all, and they would merely make plans for a picture-book series about Hubert the Human, his body spherical with twelve tentacles just like a normal person, but his head displaying the strange features of Homo sapiens. It seems likely that our fondness for anthropomorphism has something to do with our marked tendency to see faces in random patterns: our brains are clearly set up with a strong prejudice towards recognising people, or sentience at least, even where it doesn’t really exist. Such a strong tendency must surely have evolved because of a high survival value – it seems plausible that erring on the side of caution when spotting potential enemies or predators, for example, might be a good strategy – and if that’s the case we might expect any aliens to have evolved a similar bias.
That bias is a problem for us when it comes to science, however. When considering animal behaviour, it seems natural, almost unavoidable, to assume that the same kind of feelings, intentions and plans are at work as those responsible for similar behaviour in humans. After all, humans are animals. It’s clear that other animals don’t make such complicated plans as we do; they don’t talk in the same way we do and don’t seem to have the same kinds of abstract thought. But some of them seem to have at least the beginnings or precursors of human-style consciousness.
Unfortunately, careful observation shows beyond doubt that some forms of animal behaviour which seemed purposeful are really just fantastically well-developed instincts. The Sphex wasp seems to check its burrow out of forethought before dragging its victim inside; but if you move the victim slightly while the wasp is making its check, it will drag the prey back to the entrance and check the burrow all over again, and it will go on doing so tirelessly every time the trick is repeated, in spite of the fact that it knows, or should know, that nothing could possibly have gone into the burrow since the last check.
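The wasp’s routine can be caricatured as a loop with no memory of why it restarted. Here is a minimal toy sketch of that fixed-action pattern; the action names and the `prey_moves_during_check` parameter are my own illustrative assumptions, not a model from the ethology literature.

```python
# Toy simulation of the Sphex wasp's fixed-action pattern: moving the
# prey during the burrow check restarts the whole routine from the top.
# Names and structure are illustrative assumptions, not real ethology.

def sphex_routine(prey_moves_during_check: int) -> list[str]:
    """Run the provisioning routine; return the log of actions.

    `prey_moves_during_check` is how many times an experimenter nudges
    the prey while the wasp is inside the burrow.
    """
    log = []
    interruptions = prey_moves_during_check
    while True:
        log.append("drag prey to burrow entrance")
        log.append("enter burrow and check it")
        if interruptions > 0:
            # The prey has been moved: the fixed pattern offers no way
            # to skip ahead, so the wasp starts over from the top.
            interruptions -= 1
            continue
        log.append("drag prey inside")
        return log

# With no interference the routine runs once; each nudge of the prey
# triggers one full, redundant re-check of the burrow.
print(len(sphex_routine(0)))   # 3 actions
print(len(sphex_routine(4)))   # 11 actions: 5 checks, only the last followed through
```

The point the sketch makes is that nothing in the loop consults the result of the previous check – which is exactly what distinguishes the wasp’s behaviour from forethought.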
A parsimonious approach seems called for. The methodologically correct principle to apply was eventually crystallised in Morgan’s Canon:
‘In no case may we interpret an action as the outcome of the exercise of a higher mental faculty, if it can be interpreted as the exercise of one which stands lower in the psychological scale’
In effect, this principle sets up a strong barrier against anthropomorphism: we may only attribute human-style conscious thought to an animal if nothing else – no combination of instinct, training, environment and luck – can possibly account for its behaviour. I said this was ‘methodologically correct’, but in fact it is a very strong restriction, and it could be argued that if it were rigorously applied, the attribution of human-style cognition to certain humans might begin to look doubtful. According to Oerlemans, ethologists have been asking whether, by striving too hard to avoid anthropomorphism, we haven’t sometimes denied ourselves legitimate and valuable insights.
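Morgan’s Canon can itself be caricatured as a selection rule: among all the explanations that adequately account for the behaviour, pick the one lowest on the psychological scale. A toy sketch of that rule follows; the particular scale and the example explanations are my own assumptions, not a formalisation from Morgan.

```python
# Toy rendering of Morgan's Canon as a selection rule: among candidate
# explanations that fit the observed behaviour, prefer the one standing
# lowest on the "psychological scale". Scale and examples are
# illustrative assumptions only.

PSYCHOLOGICAL_SCALE = [
    "reflex",
    "instinct",
    "associative learning",
    "insight",
    "conscious planning",   # the "higher faculty" the Canon guards against
]

def morgans_canon(adequate_explanations: list[str]) -> str:
    """Return the lowest-ranked explanation that accounts for the behaviour."""
    ranked = sorted(adequate_explanations, key=PSYCHOLOGICAL_SCALE.index)
    return ranked[0]

# The wasp's burrow-checking fits both stories, so the Canon picks instinct.
print(morgans_canon(["conscious planning", "instinct"]))   # instinct
# Only when nothing lower fits may we ascend the scale.
print(morgans_canon(["conscious planning"]))               # conscious planning
```

Seen this way, the strength of the restriction is obvious: the higher faculty wins only when the list of adequate lower explanations is empty.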
It’s interesting to reflect that in another part of the forest we are faced with a similar difficulty and have adopted different principles. Besides the question of when to attribute intelligence to animals, we have the question of when to do so for machines. The nearest thing we have to Morgan’s Canon here is the Turing Test, which says that if something seems like a conscious intelligence after ten minutes or so of conversation, we might as well assume that that’s what it is. Now as it happens, because of its linguistic bias, the Turing Test would not admit any animal species to the human level of consciousness; but it does seem to be a less demanding criterion. Perhaps this is because of the differing history in the two fields: we’ve always been surrounded by animals whose behaviour was intelligent in some degree, and perhaps need to rein in our optimism; whereas there were few machines until the nineteenth century, and the conviction that they could in principle be intelligent in any sense took time to gain acceptance – so a more encouraging test seems right.
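The contrast between the two criteria really comes down to opposite default assumptions, which can be put in a deliberately crude sketch; both predicates are my own illustrative simplifications, not anyone’s actual test.

```python
# Crude contrast of the two criteria as default rules. Entirely
# illustrative: the predicates are assumptions, not real tests.

def turing_verdict(seems_intelligent_in_conversation: bool) -> bool:
    # The Turing Test is permissive: a passing appearance after a short
    # conversation is taken at face value.
    return seems_intelligent_in_conversation

def morgan_verdict(lower_explanations_ruled_out: bool) -> bool:
    # Morgan's Canon is restrictive: appearance counts only once every
    # lower explanation (instinct, training, environment, luck) has
    # been excluded.
    return lower_explanations_ruled_out

# A chatbot that merely sounds intelligent passes Turing but not Morgan.
print(turing_verdict(True), morgan_verdict(False))   # True False
```

One criterion treats intelligent appearance as sufficient evidence; the other treats it as inadmissible until everything humbler has been ruled out.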
Perhaps, if some future genius comes up with the definitive test for consciousness, it will lie somewhere between Morgan and Turing, and be equally applicable to animals and machines?