Where did consciousness come from? A recent piece in New Scientist (paywalled, I’m afraid) reviewed a number of ideas about the evolutionary origin and biological nature of consciousness. The article obligingly offered a set of ten criteria for judging whether an organism is conscious or not…

  • Recognises itself in a mirror
  • Has insight into the minds of others
  • Displays regret after making a bad decision
  • Heart races in stressful situations
  • Has many dopamine receptors in its brain to sense reward
  • Highly flexible in making decisions
  • Has ability to focus attention (subjective experience)
  • Needs to sleep
  • Sensitive to anaesthetics
  • Displays unlimited associative learning

This is clearly a bit of a mixed bag. One or two of these have a clear theoretical base; they could be used as the basis of a plausible definition of consciousness. Having insight into the minds of others (‘theory of mind’) is one, and unlimited associative learning looks like another. But robots and aliens need not have dopamine receptors or racing hearts, yet we surely wouldn’t rule out their being conscious on that account. The list is less like notes towards a definition and more of a collection of symptoms.

They’re drawn from some quite different sources, too. The idea that self-awareness and awareness of the minds of others has something to do with consciousness is widely accepted and the piece alludes to some examples in animals. A chimp shown a mirror will touch a spot that had been covertly placed on its forehead, which is (debatably) said to prove it knows that the reflection is itself. A scrub jay will re-hide food if it was seen doing the hiding the first time – unless it was seen only by its own mate. A rat that pressed the wrong lever in an experiment will, it seems, gaze regretfully at the right one (‘What do you do for a living?’ ‘Oh, I assess the level of regret in a rat’s gaze.’) Self-awareness certainly could constitute consciousness if higher-order theories are right, but to me it looks more like a product of consciousness and hence a symptom, albeit a pretty good one.

Another possibility is hedonic variation, here championed by Jesse Prinz and Bjørn Grinde. Many animals exhibit a raised heart rate and dopamine levels when stimulated – but not amphibians or fish (who seem to be getting a bad press on the consciousness front lately). There’s a definite theoretical insight underlying this one. The idea is that assigning pleasure to some outcomes, and letting that drive behaviour instead of just running off fixed instinctive patterns, allows an extra degree of flexibility which on the whole has positive survival value. Grinde apparently thinks there are downsides too, and on that account it’s unlikely that consciousness evolved more than once. The basic idea here seems to make a lot of sense, but the dopamine stuff apparently requires us to think that lizards are conscious while newts are not. That seems a fine distinction, though I have to admit that I don’t have enough experience of newts to make the judgement (or of lizards either, if I’m being completely honest).

Bruno van Swinderen has a different view, relating consciousness to subjective experience. That, of course, is notoriously unmeasurable according to many, but luckily van Swinderen thinks it correlates with selective attention, or indeed is much the same thing. Why on earth he thinks that remains obscure, but he measures selective attention with some exquisitely designed equipment plugged into the brains of fruit flies. (‘Oh, you do rat regret? I measure how attentively flies are watching things.’)

Sleep might be a handy indicator, as van Swinderen believes it is creatures that do selective attention that need it. They also, from insects to vertebrates (fish are in this time), need comparable doses of anaesthetic to knock them out, whereas nematode worms need far more to stop them in their tracks. I don’t know whether this is enough. I think if I were shown a nematode that had finally been drugged up enough to make it keep still, I might be prepared to say it was unconscious; and if something can become unconscious, it must previously have been conscious.

Some think by contrast that we need a narrower view; Michael Graziano reckons you need a mental model, and while fish are still in, he would exclude the insects and crustaceans van Swinderen grants consciousness to. Eva Jablonka thinks you need unlimited associative learning, and she would let the insects and crustaceans back in, but hesitates over those worms. The idea behind associative learning is again that consciousness takes you away from stereotyped behaviour and allows more complex and flexible responses – in this case because you can, for example, associate complex sets of stimuli and treat them as one new stimulus, quite an appealing idea.

Really it seems to me that all these interesting efforts are going after somewhat different conceptions of consciousness. I think it was Ned Block who called it a ‘mongrel’ concept; there’s little doubt that we use it in very varied ways, from describing the property of a worm that’s still moving at one end, to the ability to hold explicit views about the beliefs of other conscious entities at the other. We don’t need one theory of consciousness, we need a dozen.

14 Comments

  1. SelfAwarePatterns says:

    I think your final paragraph hits the nail on the head. Quite simply, there is no fact of the matter on whether or not a system other than a healthy developed human being is conscious, at least in terms of the way humans are conscious. In the end, what we’re really asking is, how similar is the information processing of the other system to the information processing of a mentally complete human?

    In my view, the mirror test is a poor one. It’s too tangled up with an organism’s intelligence: can it figure out that the animal in the reflection is itself? Just because an animal can’t do that doesn’t mean it doesn’t have any sense of self.

    On associative learning, I think the important distinction is robust operant learning, where an organism has to learn by figuring out (simulating?) consequences. This is in contrast to the much simpler Pavlovian classical conditioning which many primitive animals exhibit, including some single-celled organisms. It seems like operant learning would be crucial for the flexible decision-making criterion.

    One criterion not listed is self-administering of analgesics, such as an arthritic rat pushing a button that dispenses pain relief. In other words, can the animal suffer, understand its suffering, and act purely to relieve that suffering? But does that mean that fish, who don’t appear to experience lingering pain, aren’t conscious?

    In the end, there is no sharp line separating conscious creatures from non-conscious ones, only increasing levels of sophistication and functionality. A human is more conscious than a chimpanzee, which is more conscious than a dog, mouse, lamprey, bee, or fruit fly, all of which are over a nematode.

  2. Paul Topping says:

    I agree with the first comment. The wide range of consciousness indicators in the list makes it perfectly clear that this is a fuzzy concept. I would suggest working on making it sharper, but that’s also an unfruitful pursuit, IMHO. We should simply shelve this question until we understand how the brain works. When that happens, my guess is that we’ll have identified a set of more specific capabilities that certain species have and certain others do not. Many of these will be unique to a species or, perhaps, a set of species that fill some ecological niche.

  3. James of Seattle says:

    “In the end, what we’re really asking is, how similar is the information processing of the other system to the information processing of a mentally complete human?”

    I want to second this notion. I think the basis of consciousness is simply interaction with the environment. As you combine such interactions by combining agents in different ways, various abilities “emerge”, starting with information processing, and expanding to include memory (via genes, neuronal feedback loops, neuronal reconfiguration, etc.), emotion (systemic changes which alter the information processing on a more systemic scale), and various kinds of intelligence.

    And there will not be just a single scale. For example, there should be no reason to think that a person suddenly becomes unconscious if you remove the processing which leads to emotions. Similarly with long term memory, as happens with damage to the hippocampus. An entity’s consciousness should simply be considered the collection of interactions of which that entity is capable.

    And before someone argues that some conscious activity is internal, and so not a response to the environment, I would point out that it is very important to specify the entity in question. So when we speak of a human being’s consciousness, we are usually referring to what Damasio refers to as the autobiographical self. This would be a subset of the brain which seems able to access the senses and language and memory, but does not necessarily access, for example, the executive functions of the brain. Thus, decisions can be made before the autobiographical self has access to them. So in this case the entity in question is a subsystem in the brain, and various other parts of the brain would be considered the environment with which the autobiographical self interacts.

    *

  4. David Duffy says:

    Ants looking in the mirror
    http://www.journalofscience.net/File_Folder/521-532(jos).pdf

  5. Jayarava says:

    Hmm. One could take these two “Heart races in stressful situations” and “Has many dopamine receptors in its brain to sense reward” and combine them into a criterion regarding responding to their environment.

    However, it gets tricky because plants and even bacteria have stress responses. Bacteria pursue goals of a sort – seeking out their preferred environment or food for example. I’m reading Mercier and Sperber’s new book “The Enigma of Reason” and they point out that plants and bacteria also make inferences from their interactions with the environment. So scratch the ability to make inferences off the list, since it is present in all living things, conscious or not. Or redefine consciousness in the direction of pan-psychism.

    Indeed as I write I’m struggling to identify any behavioural definition of consciousness which is not also applicable to plants.

    The thing of identifying our image in a mirror is exactly the kind of homo-centric mistake that Frans de Waal slams in his latest book “Are We Smart Enough to Know How Smart Animals Are?” What about Nagel’s bat? What constitutes a “mirror” for an animal whose main sense for navigating the world is sonar and which can barely see? Or for a blind mole, that cannot see at all, who navigates by touch and smell?

    It’s 2017 and we cannot agree what consciousness is; we cannot agree on what might constitute possessing it; and we cannot agree on how to test for it. I cannot think of another field of knowledge seeking which is so lacking in foundation, except perhaps economics.

    An analogy would be if zoology were called “unicorn studies”, and was based on comparing all living animals with a series of different (mutually exclusive) mythic accounts of the unicorn.

    “Consciousness” just seems like bad science. We’re trying to fit the data to the hypothesis – or worse we’re looking to *verify* a theory that is not generated by observing the data, but is a legacy concept from ancient Greece.

  6. Christophe Menant says:

    Yes Jayarava,
    behavioral definitions of consciousness are problematic.
    But what about a cognitive/representational definition, like ‘the representation of oneself as an existing entity, as other humans are represented as existing’?
    We can’t get into the minds of animals, but I feel that only humans are capable of that type of representation.
    True, it puts the focus on self-consciousness. But it opens a path to phenomenal consciousness, which needs some minimal form of self-consciousness (for phenomenologists, ‘a minimal form of self-consciousness is a constant structural feature of conscious experience’).
    And such a representational approach also allows evolutionary perspectives in terms of the evolution of representations.

  7. SelfAwarePatterns says:

    Hi Christophe,
    “But what about a cognitive/representational definition, like ‘the representation of oneself as an existing entity, as other humans are represented as existing’? We can’t get into the minds of animals but I feel that only humans are capable of that type of representation.”

    Actually, I think any social species is going to have that capability, perhaps not to the extent humans do, but living in a social setting of any kind seems difficult to imagine without it. When monkey-A refuses to groom monkey-B because monkey-B refused to groom monkey-A yesterday, there has to be some representation of both monkey-B and monkey-A in monkey-A’s brain.

    Backing up the evolutionary tree, what use are distance senses (sight, hearing, smell) if they aren’t used to model the environment? And what use is modelling the environment for a living system if it isn’t modelled in relation to the system itself, without at least instincts that make a distinction between the system and its environment?

    I don’t think self-awareness, any more than consciousness overall, is a binary attribute, either fully present or fully absent. It seems to have evolved in stages and levels of sophistication.

  8. Christophe Menant says:

    Hi SelfAwarePatterns,
    Yes, I agree with you that self-awareness and consciousness are not binary attributes (e.g. apes capable of mirror self-recognition). But I feel that chimps are far from being self-conscious as we are. They are not capable of identification with conspecifics as we humans are.
    And that may be the result of different evolutionary stories.
    We know that chimps are our cousins. We share with them a common ancestor that lived 6MY ago, and today’s chimps are quite similar to that common ancestor. We humans have had a rich and complex evolutionary story during these last 6MY. Chimps did not change significantly during these years.
    I feel that it is precisely the development of identification with conspecifics that has separated our evolutionary stories, because it also meant identification with suffering or endangered conspecifics.
    Our pre-human ancestors, who did not care that much about the well-being of their conspecifics, were suddenly brought to feel the fears and sufferings of those conspecifics. This may have been the source of a huge increase in anxiety in the minds of our ancestors, somehow unbearable, with two possible reactions:
    – Reject the increase of identification with conspecifics in order to keep the anxiety at bay. This led to an evolutionary status quo that has resulted in today’s chimps.
    – Develop caring and cooperation tools to limit the sufferings of conspecifics, and use these tools as evolutionary advantages to cope with the remaining anxiety. The processes of anxiety limitation combined with evolutionary advantages may have produced an evolutionary engine leading to us humans. A key point being that our human self-consciousness is interwoven with anxiety management (another difference from our chimp cousins…).
    This is how I would see an important chimp/human difference with respect to self-consciousness. More on this at https://philpapers.org/rec/MENPFA-3 (paper) and at https://philpapers.org/rec/MENCOO (TSC poster).

  9. Jayarava says:

    Hi Christophe,

    I echo what SelfAwarePatterns says. But I would add…

    ‘the representation of oneself as an existing entity, as other humans are represented as existing’.

    I’m happy enough with representationalism since reading Thomas Metzinger’s The Ego Tunnel – given the phenomenology of how the sense of self breaks, it must be a virtual model or a representation. No other kind of instantiation would allow for the phenomenology of mind malfunctions we see. The question is, why would we see this representation as a “symptom” of some abstract feature of living brains?

    In any discussion like this, introducing the abstract notion of “consciousness” moves us away from the phenomenology that underpins the idea of representationalism towards speculative metaphysics. To what end? To me it looks like trying to fit the data to the hypothesis, i.e. bad science.

    We never *experience* “consciousness” (in the abstract). We are often conscious. Experience is usually being conscious *of* something. If we use representationalism to explain this, then there is simply no need to invoke a layer of abstraction. Representationalism is sufficient to explain it. What you need for representation is a brain. You do not need an abstract consciousness.

    Metzinger makes it clear that the feeling of being a self observing experience, is *also* an experience. We know this experience of self can break or stop. I often go back to Jill Bolte Taylor’s moving account of her stroke, in which her sense of self becomes at first intermittent and then, after she recovers, optional. I also know people who report not having a sense of self as a direct effect of hardcore meditation.

    Bear with me here… My main interest is studying ancient Buddhist texts in a language called Pāli, which is a North Indian vernacular with roots similar to Sanskrit (and preceded the emergence of modern languages like Hindi by ~1500 years). Being a Metzinger/Lakoff fan, I noticed a curious feature of Pāli: the “MIND IS A CONTAINER” metaphor is absent (I’ve looked closely at this, but not yet published). The ancient North Indians do not seem to have conceived of mental activity as happening *in* the mind; there is no counterpart of the idea of consciousness as the theatre of experience. And there is no word that corresponds to the concept of consciousness we are discussing. Sure, there are many words for mental processes – in fact they had a highly developed technical vocabulary for discussing mental phenomena. But our idea, an abstract conscious-ness, just never occurred to them. Of the many words for mental processes, none, to the best of my knowledge, is in the form of an abstract noun.

  10. Christophe Menant says:

    Hi Jayarava,
    The evolutionary perspective on self-consciousness that I favor is based on an evolution of representations in the minds of our pre-human ancestors. And these initial self-representations were more as object than as subject – representations more objective than virtual. Let me try to explain why.
    When intersubjectivity evolved into identification with conspecifics two types of representations were to merge in the mind of our ancestors:
    – The representations they had of their conspecifics as being entities existing in the environment.
    – The limited representations they had of themselves (seen parts of the body, heard shoutings, perceived actions,…)
    The merger of these representations brought the representations that our ancestors had of themselves to access a characteristic of the representations of conspecifics: being about an entity existing in the environment. So the representations that our ancestors had of themselves became representations of entities existing in the environment. I take that event as having introduced an elementary version of self-consciousness in our primate evolution. (an ‘ancestral self-consciousness’)
    But our primate ancestors were living in the wild, and the representations of conspecifics were more about entities acting for survival than about entities feeling about themselves. So the first elements of self-consciousness in human evolution were more on the object side than on the subject side. At this level of evolution phenomenal consciousness was a performance still to be developed (as feelings do indeed exist in animals). I agree that this can be disturbing when considering that phenomenal consciousness should not come after self-consciousness in human minds (a strong trend in today’s philosophy of mind).
    But it is clear that the ‘ancestral self-consciousness’ that our ancestors accessed in their primate evolution had to go through many developments before reaching the performance of today’s human self-consciousness as object and as subject, where phenomenal consciousness plays a key role (the story of these developments is still to be written).
    I understand your interest in eastern philosophies, which can indeed introduce new concepts and perspectives. For the moment let me try to take a little further a western type of modelling for a possible evolutionary story of consciousness in the human mind.

  11. Jayarava says:

    “… an evolution of representations in the mind of our pre-human ancestor.”

    Well yes, but a theory that is based on speculations like this is pointless. You can’t know how pre-human ancestors thought – only their bones survive. Anything you say about their minds is pure fantasy.

    A scientific theory is based on empirical observations, tries to explain known data points, and makes predictions that can be tested for accuracy.

    Just making up stories is a work of literature, not science or philosophy. Nothing wrong with literature, as long as it is honestly advertised as fiction.

    I made no mention whatever of “eastern philosophies”. The point I made about Indic languages was based on George Lakoff and Mark Johnson’s philosophy of language. The last time I looked, Berkeley, California was in the “West”.

  12. Christophe Menant says:

    Well, stating that anything said about the minds of our ancestors is pure fantasy discredits a lot of anthropological work… Not sure that this is what you want to say.
    Many research activities are investigating how the human mind could have evolved. As minds do not fossilise, hypotheses (speculations, as you say) have to be used.
    The one positioning anxiety management as a key contributor to the nature of self-consciousness is a bit delicate to present, as it echoes many unpleasant psychological states we prefer to forget. We naturally tend to avoid thinking about mental rumination, existential questioning, nihilistic feelings, affective disorders, panic and so on.
    But I feel that positioning human anxiety as interwoven with the nature of self-consciousness could shed new light on many of our mental disorders and perhaps indicate some phylogenetic roots of human motivations (Freudian death and life drives as modes of anxiety management? – so a possible human source of evil – could Pascalian diversions be modes of anxiety management?…).
    Still a lot to do on these subjects.

  13. Peter A. says:

    Having insight into the minds of others (‘theory of mind’) is one, and unlimited associative learning looks like another.

    Well, apparently I don’t have this ability, according to a certain Dr. Simon Baron-Cohen, because I happen to be on the so-called ‘autism spectrum’ (Asperger’s Syndrome).

    It’s nonsense of course, because we (i.e. those of us who have either this specific condition, or one similar) actually DO understand that others have minds, and the vast majority of us have no trouble seeing the world from another’s perspective. Attempts like this to dehumanise us are one of the reasons why so many of us refuse to get an ‘official diagnosis’: we have had to learn the hard way that the so-called experts often get things hopelessly wrong.

    I am not a robot, and can easily solve the CAPTCHA puzzles I come across. Maybe solving these puzzles should be one of the tests used to determine consciousness as well.

  14. H. Ceon says:

    Forgive me for barging in here… but it seems to me that there is a lot of confusion about what consciousness actually is. In my opinion all those points listed above are wrong. The points describe physical and mental attributes, not consciousness. Consciousness is NOT mental activity. Consciousness has nothing to do with intelligence. Consciousness is what “experiences” our mental activity. My impression is that all living beings, including even single-celled lifeforms, are conscious. But certainly everything that has naturally acquired eyes or ears or other sensory organs requires consciousness to experience the signals from those organs. Consciousness is not something complex that requires an advanced brain; consciousness is something simple that is present in all living beings. It is the difference between a dead robot that has no experience of its own (even with extremely complex AI programming) and a living being experiencing the contents of his or her own mind (no matter how simple that mind is).
