Getting Emotional

Are emotions an essential part of cognition? Luiz Pessoa thinks so (or perhaps he feels it is the case). He argues that although emotions have often been regarded as something quite separate from rational thought, a kind of optional extra that can be bolted on but plays no essential role, they are in fact essential.

I don’t find his examples very convincing. He says robots on a Mars mission might have to plan alternative routes and choose between priorities, but I don’t really see why that requires them to get emotional. He says that if your car breaks down in the desert, you may need a quick fix. A sense of urgency will impel you to look for that, while a calm AI might waste time planning and implementing a proper repair. Well, I don’t know. On the one hand, it seems perfectly possible to appreciate the urgency of the situation in a calm rational way. On the other, it’s easy to imagine that panic and many other emotional responses might be very unhelpful, blocking the search for solutions or interfering with the ability to focus on implementation.

Yet there must be something in what he says, mustn’t there? Otherwise, why would we have emotions? I suppose in principle it could be that they really have no role; that they are epiphenomenal, a kind of side effect of no real importance. But they seem to influence behaviour in ways that make that implausible.

Perhaps they add motivation? In the final analysis, pure reason gives us no reason to do anything. It can say, if you want A, then the best way to get it is through X, Y, and Z. But if you ask, should I want A, pure reason merely shrugs, or at best it says, you should if you want B.

However, it doesn’t take much to provide a basic set of motivations. If we just assume that we want to survive, the need to obtain secure sources of food, shelter, and so on soon generates a whole web of subordinate motivations. Throw in a few very simple built-in drives – avoidance of pain, seeking sex, maintenance of good social relations – and we’re pretty much there in terms of human motivation. Do we need complex and distracting emotions?

Some argue that emotions add more ‘oomph’, that they intensify action or responses to stimuli. I’ve never quite understood the actual causal process there, but granting the possibility, it seems emotions must either harmonise with rational problem solving, or conflict with it. Rational problem solving is surely always best, so they must either be irrelevant or harmful?

One fairly traditional view is that emotions are a legacy of evolution, a system that developed before rational problem solving was available. So different emotional states affect the speed and intensity of certain sets of responses. If you get angry, you become more ready to fight, which may be helpful. Now, we would be better off deciding rationally on our responses, but we’re lumbered with the system our ancestors evolved. Moreover, some of the preparatory stuff, like a more rapid heartbeat, has never come under rational control, so without emotions it wouldn’t be accessible. It can be argued that emotions are really little more than certain systems getting into certain potentially useful ready states like this.
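
Purely as an illustration of that “ready state” idea, here is a minimal sketch (invented names and numbers, not anyone’s actual model) in which an emotion is just a preset bundle of parameters that biases the speed and intensity of certain responses:

    # A minimal, purely illustrative sketch of the "ready state" view: an emotion
    # is modelled as a preset bundle of parameters that biases the thresholds and
    # intensity of certain responses. All names and numbers are invented.

    from dataclasses import dataclass

    @dataclass
    class BodyState:
        heart_rate: int = 70          # beats per minute
        fight_threshold: float = 0.9  # how provoked we must be before fighting (0-1)
        flee_threshold: float = 0.7   # how threatened we must feel before fleeing (0-1)

    EMOTION_PRESETS = {
        # an emotion simply shifts the body into a potentially useful ready state
        "anger": BodyState(heart_rate=110, fight_threshold=0.4, flee_threshold=0.95),
        "fear":  BodyState(heart_rate=120, fight_threshold=0.95, flee_threshold=0.3),
        "calm":  BodyState(),
    }

    def respond(emotion: str, provocation: float) -> str:
        """Pick a response given the current emotional preset and a provocation level."""
        state = EMOTION_PRESETS[emotion]
        if provocation >= state.fight_threshold:
            return "fight"
        if provocation >= state.flee_threshold:
            return "flee"
        return "ignore"

    # the same provocation yields different responses under different presets
    print(respond("calm", 0.5))   # ignore
    print(respond("anger", 0.5))  # fight
    print(respond("fear", 0.5))   # flee

On this picture the “decision” is not reasoned at all; the preset does the work, which is why we are lumbered with it whether it helps in a given situation or not.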

That might work for anger, but I still don’t understand how grief, say, is a useful state to be in. There’s one more possible role for emotions, which is social co-ordination. Just as laughter or yawning tends to spread around the group, it can be argued that emotional displays help get everyone into a similar state of heightened or depressed responsiveness. But if that is truly useful, couldn’t it be accomplished more easily and in less disabling/distracting ways? For human beings, talking seems to be the tool for that job.

It remains a fascinating question, but I’ve never heard of a practical problem in AI that really seemed to need an emotional response.

17 thoughts on “Getting Emotional”

  1. Peter: “Some argue that emotions add more ‘oomph’, that they intensify action or responses to stimuli. I’ve never quite understood the actual causal process there”

    That’s my understanding too.
    “The word “emotion” dates back to 1579, when it was adapted from the French word émouvoir, which means “to stir up”.”
    https://en.wikipedia.org/wiki/Emotion#Etymology,_definitions,_history_and_differentiation

    As for the causal process, I would say that emotions, like all qualia, ARE the causal process. More precisely, they are what the causal process is in itself, its intrinsic identity, as opposed to its relations to other things/processes.

    Peter: “I still don’t understand how grief, say, is a useful state to be in.”

    Grief seems to be an inseparable part of our bonds/attachments. It is related to care as two sides of the same coin.

  2. “I’ve never heard of a practical problem in AI that really seemed to need an emotional response”

    How about self-driving cars responding to trolley problems? They might not need emotion (or an emotional programmer) for a utilitarian numbers-based response algorithm, but might for a deontological one?
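
    To make the contrast concrete, here is a toy sketch (hypothetical names, nothing like a real vehicle controller) of a utilitarian, numbers-based choice next to one deontological variant that forbids deliberately redirecting harm:

        # Toy contrast between a utilitarian and a deontological choice rule for a
        # trolley-style dilemma. Invented names; not a real autonomous-vehicle system.

        from dataclasses import dataclass

        @dataclass
        class Option:
            name: str
            expected_casualties: int
            requires_deliberate_harm: bool  # does this option actively redirect harm?

        def choose_utilitarian(options):
            # minimise expected casualties; nothing else matters
            return min(options, key=lambda o: o.expected_casualties)

        def choose_deontological(options):
            # forbid options that deliberately redirect harm; among what remains,
            # fall back on the casualty count (one of many possible such rules)
            permitted = [o for o in options if not o.requires_deliberate_harm] or options
            return min(permitted, key=lambda o: o.expected_casualties)

        dilemma = [
            Option("stay the course", expected_casualties=5, requires_deliberate_harm=False),
            Option("swerve", expected_casualties=1, requires_deliberate_harm=True),
        ]
        print(choose_utilitarian(dilemma).name)    # swerve
        print(choose_deontological(dilemma).name)  # stay the course

    Neither rule needs anything emotional in order to run; the open question is whether settling on one of them in the first place does.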

  3. I agree that Pessoa’s examples are unconvincing, but I also think that he’s basically on the right track, overall.

    “Do we need complex and distracting emotions?”

    I think you’ve started by answering this question. Can we have any kind of motivation that isn’t tied to something we would be inclined to classify as “emotional”?

    Where it gets confusing is that we notice emotions especially when they misfire. If fear makes me freeze in a situation where I should be running away, and I pay for the mistake but still manage to survive, I will be inclined to “blame” fear for the bad consequences I suffered. In such a situation, the fact that freezing usually helps one avoid unwanted attention will not be very visible.

    If we accept that our (primary) motivations more or less coincide with emotions, we then find that emotions need to be outside the influence of what we call rationality. If they weren’t, we would be able to hack them and change them “at will”. Yeah, but what will?
    So, to function, a motivational system within a given agent cannot be easily modifiable by the agent itself. Therefore, “intelligent machines” cannot show what we’d recognise as intelligence without having a “black-boxed” motivational system built-in.

    The motivational system we are equipped with is indeed black-boxed (why do I like pizza and hate seafood? What makes pain inherently undesirable? Introspection does not help at all!), and, since I am alive and kicking, it did serve me quite well.

    In other words, I think it’s a mistake to fixate on the (relatively rare, but socially very significant) cases where emotions (as they manifest) are dysfunctional, in the sense that they do not help us to achieve what we desire.
    Since they are mechanisms (our standard naturalist assumption), it is to be expected that they will sometimes fail systematically when operating outside the kind of environment that shaped them (i.e. pretty much every human situation since the invention of agriculture).

    My motivational system didn’t make me reproduce, though, and that’s because we have “override” abilities that are especially developed in our species. Which proves my point: give me too much ability to avoid being a slave to my primary drives and I might hack and neutralise them.

    Following this thread, you might reach the conclusion that intelligence is primarily a function of how much reasoning can be interjected between motivation and action, which makes black-box motivations (emotions) a prerequisite for intelligence.
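
    A toy sketch of that architecture (invented names, just to make the claim concrete): the motivational module is a black box the rest of the agent can query but not inspect or rewrite, and the “intelligence” is the reasoning layer interposed between those opaque drives and action.

        import random

        class BlackBoxMotivation:
            """Opaque drives: the planner sees only scores, never the weights."""
            def __init__(self):
                self._weights = {"hunger": random.uniform(0.5, 1.0),
                                 "safety": random.uniform(0.5, 1.0)}

            def desirability(self, outcome):
                # outcome maps drive names to how well an action satisfies them (0-1)
                return sum(w * outcome.get(d, 0.0) for d, w in self._weights.items())

        class Planner:
            """The reasoning interjected between motivation and action."""
            def __init__(self, motivation):
                self.motivation = motivation  # queried, never modified

            def act(self, actions):
                # actions: name -> predicted outcome; pick what the drives favour
                return max(actions, key=lambda a: self.motivation.desirability(actions[a]))

        agent = Planner(BlackBoxMotivation())
        options = {
            "forage": {"hunger": 0.9, "safety": 0.3},
            "hide":   {"hunger": 0.0, "safety": 0.9},
        }
        print(agent.act(options))  # depends on weights the planner cannot see, as intended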

  4. @ Sergio: you write that “the motivational system we are equipped with is indeed black-boxed (why do I like pizza and hate seafood? What makes pain inherently undesirable? Introspection does not help at all!)”.

    I think there are several issues bundled up here: if you like pizza but not seafood, that is a personal trait, which you might want to describe as outside of rational influence, but which, for me, doesn’t fit with your definition of black-box motivations (emotions). The question of what makes pain inherently undesirable is quite a different one. Granted, you might develop strong feelings against seafood, which fits the description of “complex and distracting emotions”, but those feelings need some kind of conscious investment, even if they seem (and probably are) involuntary.

    The way I see it, there are two levels in this discussion: primary drives and their associated emotions (anger, fear etc.) and other states that actually require us to “work” at maintaining them: resentment, grief, gratitude. That second level fits with what Metzinger would call “the emotional self-model”, as it is no longer in the realm of bodily sensations and perceptions.

  5. Emotion is a part of life, and in this case emotional life in evolution is being proposed as a passive state…

    …Could we instead propose life as an active state in evolution, in which emotional functioning, in its ‘intensities’, aids the transformation of energies so that entities progress – more in consciousness…

  6. A lot depends on how we define “emotion”. If we mean affects, basic primal responses and urges composed of valence, arousal, and reflex, then I totally agree that AI must have some version of that. If we mean the complex social and learned version of emotions, which are built on affects, then I’m not there.

    Even in the case of affects, there’s no reason for the AI version to be as messy, contradictory, and heavy-handed as the evolved programming that drives us. The question is whether an utterly pertinent and measured version of motivation is one we’d still consider emotion.

    “That might work for anger, but I still don’t understand how grief, say, is a useful state to be in.”

    I think grief is an intense urge to do something about a situation that we rationally understand can’t be fixed. But the primal, reflexive portion of our mind takes a while to catch up with the rational portion. Until it does, we experience that frustrated urge as grief.

    The only reason I could see for inflicting that on an AI is maybe we don’t want it to conclude too quickly that something is hopeless. For example, we might not want a robot nurse to give up too quickly in taking care of a critically sick patient. We might err on the side of having its urge to care for them be very slow to abate once it was no longer relevant. During that period, it might make sense to call its state one of grief. But even there, I can’t see any reason to have that state last for weeks or months as it often does in humans.
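
    A small, hypothetical sketch of that idea (invented numbers): a care urge that abates only slowly once the goal becomes unreachable, the lingering, frustrated urge being the grief-like state.

        # Care urge that decays slowly once the situation is hopeless; a long
        # half-life means the robot does not give up (or "move on") too quickly.
        def care_urge(initial, steps_since_hopeless, half_life=20):
            return initial * 0.5 ** (steps_since_hopeless / half_life)

        for step in (0, 10, 40, 100):
            level = care_urge(1.0, step)
            label = "grief-like: urge persists, goal unreachable" if level > 0.1 else "abated"
            print(f"step {step:>3}: urge={level:.2f} -> {label}")

    Tuning the half-life is then just a choice about how long we want that state to last, which in humans is not a choice at all.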

  7. @Patricia, #4.
    Yes, I have wilfully mixed up different things, specifically because their common denominator is that introspection cannot help me understand where they come from.
    The conscious side builds on what is black-boxed, so I can (and do) maintain a self-model that includes my dislike for seafood. I suppose we agree that a really interesting zone is the blurred line between things like: “I know I don’t like seafood, so I don’t order it in bars and restaurants” (explicit self-model, conscious) and “I have no idea why I dislike seafood” (black box). Gratitude is one good example: we can experience a sudden and involuntary feeling of gratitude, which sometimes catches us by surprise (black box). We can sometimes rationalise it, and partially explain to ourselves why it makes sense to be grateful (self-model); we can also cultivate it (very much a conscious effort).
    As frequently happens, the distinction between the two extremes is very blurred. That doesn’t mean we can’t use the two extremes to try and make some sense of it all.
    Depending on aims and inclinations, you could choose to call “emotion” the physical sensations associated with a given state (not sure this definition makes sense, but hopefully you get the gist), or the categories that we automatically assign to those same physical sensations (perhaps some internal state that corresponds to “recognising” a sensation), or the words/concepts we use to talk about them, etcetera. Each approach has its pros and cons, so I wasn’t too concerned with this kind of debate.
    Overall, I don’t think we disagree on much, but I might be wrong!

  8. The evolutionary function of emotion may be as a secondary bandwidth of information into conscious cognition that impinges on but doesn’t obscure rational deliberation. It could be that previously (in evolutionary time) they had a more direct control of behavior and only became “emotion” when more sophisticated processes came online.

    So through either conditioning or innate behavioral imprinting, a large hairy beast may instill fear through this secondary information channel, which disposes one to flight. A large body of subconscious, stored knowledge then directs us through emotion in this way. Contrast this with actually “speaking” to us through the rational process, i.e. voices in our head, which would disrupt thought (more than emotion does) and lead to bad outcomes.

  9. Peter: “Do we need complex and distracting emotions?”

    There is one complex emotion that is unavoidable because it is active in our everyday life: anxiety, which is more than a heritage of some animal anxiety (beyond evolutionary psychology/psychiatry arguments). Human anxiety management is a real human constraint, probably rooted in our evolutionary history and involved in many of our motivations (mostly at an unconscious level).
    Regarding emotions in AI, as you say, it is difficult to find a practical AI problem that really seems to need an emotional response. This may be true also for self-consciousness. I take these points as highlighting the fact that AI has yet to face what could make it really intelligent in the human sense. Close to that is the somewhat neglected evolutionary thread for AI. Animals feel emotions and have a self. The nature of that animal self looks to me like a key to an understanding of our human consciousness. Can we neglect ALife while pretending to strong AI? Not sure…

  10. Grief is a useful state, because love is a useful state and it is pointless to live without love. The people at Radio Shack, alas defunct, can give you technical support on love if you’d like. You can’t be programmed for grief, which is inevitable if you love, or even if you’re alive (but that’s another post), but most humans learn to love before kindergarten. I’m not sure if we can program robots to love and grieve, but I think you show promise.

  11. If you want to build a robot from the ground up without using humans or other animals as a model, that’s one thing. Although I still wonder what a “drive” is if it’s not an emotion.

    But if you want to draw serious inspiration from the human model – surely a reasonable approach – then Pessoa understates the case, if anything. Emotions provide a vital background of perspective, keeping us tuned to what’s important.

    If we just assume that we want to survive, we can rationally calculate how many calories per day we’ll need to eat, maybe – but we’re terribly likely to overlook something vital without feelings and emotions to guide us. We’d probably get hypothermia without the felt need for warmth and the dislike of cold. Or, look at the injuries sustained by people who are unable to feel pain, even if they are by and large rational and careful people. Or, look at the many practical failings of the emotionally impaired decision makers that Antonio Damasio discusses.

    Most decisions are too complex to calculate with pencil and paper in reasonable time. You can’t *just* “do the math.” For the most part, you have to go with your gut feelings – or “intuition”, to use the more “reason” oriented word for a phenomenon that plainly lies in the reason/emotion intersection. You break out the logic toolbox only when your gut feeling is that it’s needed for a specific problem.

  12. Peter: “It remains a fascinating question, but I’ve never heard of a practical problem in AI that really seemed to need an emotional response.”

    How about the problem of making two computers fall in love?

    What – incidentally – is a typical practical problem in AI that differentiates it from a typical practical problem in POC (plain old computing)?

    regs
    JBD

  13. The actions of an unemotional AI robot are completely different in nature from those of a human. This is an obvious truism. Our subtle inner emotional activity also guides our thinking in not so obvious ways, helping to ‘steer’ it in a manner that, if done by reason alone, would bog down in a myriad of calculations. Emotions are linked to the holistic seeing of one side of our bicameral brain. The instance of grief and its purpose is clarified by a little reverse emotional engineering. Grief is linked to human bonding and empathy, two essential aspects of what it means to be human. The same can be seen in the arts, in music, and even in the senses associated with scientific exploration. The experiencing of the aliveness of emotions is central to motivating our actions and our thinking, essential to our whole human experience of being.

  14. Emotions are non-physical energies and they are all controllable, including depression.

    http://thomasmpowell.com/wp-content/uploads/2016/03/psycanics_emotion-love-happiness_oct05.pdf
    Chapter 13
    Introduction to Psicanic Energy Processing. How to Discreate Realities.

    Negative identities (NIRs: Negative Identity Realities) are the cause of all negative emotions.
    Every negative emotion indicates the activation of a negative identity.

    BE causes FEEL: IDENTITIES cause EMOTIONS — and are the only cause of emotion. Every negative thought and emotion can be traced to identities. Negative identities that deny or suppress power or value, generate negative emotions. Positive identities, those that affirm power and value, trigger positive emotions. It is impossible to separate emotion (how you feel) from identity (who you are). To eliminate any negative emotion it is only necessary to discreate the negative identity.

    In the center of every moment of emotional pain and suffering there is, always, without exception, a negative identity. Changing an identity—easy to do with Psycanic Energy Processing—will change the entire sequence of emotion, thought, and behavior that is generated by that identity.

  15. emotions are certainly essential for cognition. we use them to valence everything – constantly evaluating threat and reward in our past memories, our present sensory input and our future plans.

    human-built machines run on whatever hardware and software we build them out of. in that context, the word “emotion” is a marketing one. eg. the playstation 2 cpu was called “the emotion engine”

    for more, see my pages here:

    Jaak Panksepp’s Neuroscience Of Emotional Processing (Also Rat Tickling)
    http://www.owenparachute.com/jaak-panksepp-neuroscience.html

    Neuroscience Surprises From David Rock (How To Maximize Conscious And Unconscious Cognition By Exploiting How Our Brains Work)
    http://www.owenparachute.com/david-rock-neuroscience.html

    On Emotional And Social Intelligence By Daniel Goleman (IQ, EQ, SQ)
    http://www.owenparachute.com/goleman-emotional-intelligence.html

  16. Generally I do not read posts on blogs, but I would like to say that this write-up really pushed me to check it out and do so! Your writing style surprised me. Thanks, very nice article.

  17. I think emotions are what certain endogenous sensory data feel like to a conscious entity. “Emotions” without consciousness are just physical, informational processes.

    As to why emotions seem like an unnecessarily sloppy and volatile way to regulate behavior, well, natural selection is a sloppy and volatile process. We weren’t programmed by an intelligent agent with precision in mind. Our emotions are the way they are for the same reason we have bad backs, birth defects, infertility, flat feet, cancer, cleft palates, and, of course, emotional problems.

    IMHO.
