Emotions like fear are not something inherited from our unconscious animal past. Instead they arise from the higher-order aspects that make human thought conscious. That (if I’ve got it right) is the gist of an interesting paper by LeDoux and Brown.

A mainstream view of fear (the authors discuss fear in particular as a handy example of emotion, on the assumption that similar conclusions apply to other emotions) would make it a matter of the limbic system, notably the amygdala, which is known to be associated with the detection of threats. People whose amygdalas have been destroyed become excessively trusting, for example – although as always things are more complicated than they seem at first, and the amygdala is much more than just the organ of ‘fear and loathing’. LeDoux and Brown would make fear a cortical matter, generated only in the kind of reflective consciousness possessed by human beings.

One immediate objection might be that this seems to confine fear to human beings, whereas it seems pretty obvious that animals experience fear too. It depends, though, on what we mean by ‘fear’. LeDoux and Brown would not deny that animals exhibit aversive behaviour, that they run away or emit terrified noises; what they are after is the actual feeling of fear. LeDoux and Brown situate their concept of fear in the context of philosophical discussion about phenomenal experience, which makes sense but threatens to open up a larger can of worms – nothing about phenomenal experience, including its bare existence, is altogether uncontroversial. Luckily I think that for the current purposes the deeper issues can be put to one side; whether or not fear is a matter of ineffable qualia, we can probably agree that humanly conscious fear is a distinct thing. At the risk of begging the question a bit, we might say that if you don’t know you’re afraid, you’re not feeling the kind of fear LeDoux and Brown want to talk about.

On a traditional view, again, fear might play a direct causal role in behaviour. We detect a threat, that causes the feeling of fear, and the feeling causes us to run away. For LeDoux and Brown, it doesn’t work like that. Instead, while the threat causes the running away, that process does not in itself generate the feeling of fear. Those sub-cortical processes, along with other signals, feed into a separate conscious process, and it is that process which generates the feeling.

Another immediate objection therefore might be that the authors have made fear an epiphenomenon; it doesn’t do anything. Some, of course, might embrace the idea that all conscious experience is epiphenomenal; a by-product whose influence on behaviour is illusory. Most people, though, would find it puzzling that the brain should go to the trouble of generating experiences that never affect behaviour and so contribute nothing to survival.

The answer here, I think, comes from the authors’ view of consciousness. They embrace a higher-order theory (HOT). HOTs (there are a number of variations) say that a mental state is conscious if there is another mental state in the same mind which is about it – a Higher Order Representation (HOR); or to put it another way, being conscious is being aware that you’re aware. If that is correct, then fear is a natural result of the application of conscious processes to certain situations, not a peculiar side-effect.

HOTs have been around for a long time: they would always get a mention in any round-up of the contenders for an explanation of consciousness, but somehow it seems to me they have never generated the little bursts of excitement and interest that other theories have enjoyed. LeDoux and Brown suggest that other theories of emotion and consciousness either are ‘first-order’ theories explicitly, or can be construed as such. They defend the HOT concept against one of the leading objections, which is that it seems to be possible to have HORs of non-existent states of awareness. In Charles Bonnet syndrome, for example, people who are in fact blind have vivid and complex visual hallucinations. To deal with this, the authors propose to climb one order higher; the conscious awareness, they suggest, comes not from the HOR of a visual experience but from the HOR of a HOR: a HOROR, in fact. There is clearly no theoretical limit to the number of orders we can rise to, and there’s some discussion here about when and whether we should call the process introspection.

I’m not convinced by HOTs myself. The authors suggest that single-order theory implies there can be conscious states of which we are not aware, which seems sort of weird: you can feel fear and not know you’re feeling fear? I think there’s a danger here of equivocating between two senses of ‘aware’. Conscious states are states of awareness, but not necessarily states we are aware of: if we are conscious, something is in awareness, but that something need not include our awareness itself. I would argue, contrarily, that there must be states of awareness with no HOR; otherwise, what about the HOR itself? If HORs are states of awareness themselves, each must have its own HOR, and so on indefinitely. If they’re not, I don’t see how the existence of an inert representation can endow the first-order state with the magic of consciousness.

My intuitive unease goes a bit wider than that, too. The authors have given a credible account of a likely process, but on this account fear looks very like other conscious states. What makes it different – what makes it actually fearful? It seems possible to imagine that I might perform the animal aversive behaviour, experience a conscious awareness of the threat, and enter an appropriate conscious state without actually feeling fear. I have no doubt more could be said here to make the account more plausible, and in fairness LeDoux and Brown could well reply that nobody has a knock-down account of phenomenal experience, while their version offers rather more than some.

In fact, even though I don’t sign up for a HOT, I can muster a pretty good degree of agreement nonetheless. Nobody, after all, believes that higher-order mental states don’t exist (we could hardly be discussing this subject if they didn’t). Although I think consciousness doesn’t require HORs, I think they are characteristic of its normal operation, and that ordinary consciousness is a complex meld of states of awareness at several different levels. If we define fear the way LeDoux and Brown do, I can agree that they have given a highly plausible account of how it works without having to give up my belief that simple first-order consciousness is also a thing.



  1. Paul Torek says:

    All mammals have a cortex. So it’s not like animals would be excluded from feeling fear on this theory. Which is a good thing, too, because denying fear to mammals would be as good a refutation as any theory of mind ever gets.

  2. Peter says:

    Yes: I think the kind of fear they want to talk about is restricted to humans on the basis that only humans have this self-conscious style of consciousness. But I’m not sure. I don’t see any fundamental reason why it might not also apply to, say, dogs.

  3. vicp says:

    Peter, we can think of the higher mechanisms of fear in the higher cortical areas as an extremely high state of attention: the animal is disabling all of its other awareness resources to concentrate on an immediate threat. The external sensorimotor cues caused by the fear state are also a signal to other members of the group – a basic form of language. In the case of parenting, fear states in the child are easily signalled to and acted on by older or more experienced members of the group without their going into the same HORs, but understanding those HORs. Which also ties back to language inside us: we can read and hear and understand each other’s words without experiencing the originator’s original states. Intentionality?

  4. SelfAwarePatterns says:

    It seems like a lot here depends on how we define consciousness. If we constrain it to only that which is available for introspection, then a lot of what the paper authors discuss seems plausible. I do think there’s something to be said for their approach.

    And it’s not controversial to say that the neural circuits that generate defensive impulses and dispositions aren’t the circuits that model (feel) them for consciousness. After all, a common question is why all this processing is accompanied by experience. The answer is that there are circuits to process experience (or do the processing that leads us to conclude we have experience). A Panksepp or Damasio might argue that low resolution feeling circuits may exist as low as the upper brainstem, but even they wouldn’t argue that it was anything available to introspection.

    Paul Torek makes an excellent point that all mammals have a cortex. I’d add that birds have their own version of a cortex, and all vertebrates have a forebrain. So arguing that conscious emotions exist in the human cortical system shouldn’t by itself drive us to conclude that animals don’t have emotions. And arthropod and cephalopod brains seem too radically different to make any conclusions about their experience based on where things might take place in human anatomy.

    On self-awareness, I think we make a mistake if we consider it as something that is either all there or completely absent. In tests even a goldfish appears able to evaluate various courses of action. It seems like it has to have some incipient conception of itself as something distinct from the environment in order to do so. It’s definitely not at the same level as the self-awareness of primitive mammals, which in turn isn’t at the theory-of-mind level of social species, which may not be at the level of humans; but the differences seem more of degree than sharp distinction.

  5. Callan S. says:

    If HORs are states of awareness themselves, each must have its own HOR, and so on indefinitely. I don’t see how the existence of an inert representation can endow the first-order state with the magic of consciousness.

    Simply because of mechanics, the recursive HOR states (HOR states about prior HOR states) run out eventually. But for the same reason that there is no HOR state that says ‘you have run out of HOR states!’, the subject cannot see the end of their own introspection. Without a state that says ‘your introspection ends here’, it’s an ‘all you see is all there is’ situation – or more exactly, an ‘all you introspect is all there is’ one.

    The inability to see the end of introspection makes it seem magical, in the way that not being able to see what the magician did to make that coin vanish makes it seem magical. I mean, we all accept that what magicians do does seem magical (because we lack full information). What if there’s something our brains do that we do not have full information about, so that it seems magical? In light of what magicians do, it’s not a controversial idea to suggest.

  6. Brain Molecule Marketing says:

    The dilemma with all “top-down”, higher-order concepts, most of which are just popular cultural tropes and myths, is that there is no way to measure them, or define them, independently of solipsistic self-reports in everyday language. They are also pretty much debunked by “bottom-up” neurology, physiology and biology, duh.

    Medicine and other professions rarely depend on intuition and self-reports, so why should brain science? Also, pop-culture ideas like emotions and consciousness violate findings in biology, medicine, neurology and physiology. Joe’s efforts are useful, but let’s leave semantic word play to philosophers, theologians and poets, shall we?

  7. Callan S. says:

    BMM, in what way do emotions violate those sciences?

    And you’ve got yourself quite a case to make in trying to dismiss consciousness just by saying it violates something. How do you explain the experience of the world around you, as you see and feel it? Surely you’d have to give more than that – someone putting their hand on a hotplate will give a self-report that is fairly well correlated with externally measured reality by all the sciences mentioned.

  8. Tom Clark says:

    My usual complaint: it isn’t at all clear from this paper what the causal contribution of the phenomenal experience of fear is to behavior, unless one identifies it with the various neural goings-on. And of course such an identity wouldn’t confer on consciousness any additional causal power over and above its neural instantiation. So what’s the function of consciousness per se?

    They say:

    “An important question to consider is the function of fear and other states of emotional awareness. Our proposal that emotions are cognitive states is consistent with the idea that once they are assembled in the GNC they can contribute to decision making (6, 117, 169), as well as to imaginations about one’s future self and the emotions it may experience, and about decisions and actions one’s future self might take when these emotions occur. This notion overlaps with a proposal by Mobbs and colleagues (24, 164). Emotion schema, built up by past emotions, would provide a context and set of constraints for such anticipated emotions. In the short-term, anticipated emotions might, like the GNC itself, play a role in top-down modulation of perceptual and memory processing, but also processing in subcortical survival circuits that contribute to the initial assembly of the emotional state. Considerable evidence shows that top-down cognitive modulation of survival circuits occurs (21, 167, 168), and presumably emotional schema within the GNC, could similarly modulate survival circuit activity.”

    Here they don’t disambiguate fear as an experience from the associated cognitive states that are assembled in the GNC, presumably out of neural wetware. Are there two things here, or just one? Is it fear itself that participates in the “top-down modulation of survival circuits,” or can the story be told in purely neural network, phenomenal-free terms?

    If fear *just is* a neurally-instantiated cognitive state, then we needn’t bring in talk of experience to explain behavior from a scientific perspective. However, fear (an undeniably real experience) is in fact a very convenient subjective shorthand that we can report to each other in explaining our reaction to a snake. Subjectively, then, fear isn’t epiphenomenal, even if it gets left out of scientific explanations.

    Of course the usual hard question can still be asked: why is it that HORs give rise to, or are identical to, experiences?
