Getting Emotional

Are emotions an essential part of cognition? Luiz Pessoa thinks so (or perhaps he feels it is the case). He argues that although emotions have often been regarded as quite separate from rational thought, a kind of optional extra that can be bolted on but plays no essential role, they are in fact essential to cognition.

I don’t find his examples very convincing. He says robots on a Mars mission might have to plan alternative routes and choose between priorities, but I don’t really see why that requires them to get emotional. He says that if your car breaks down in the desert, you may need a quick fix. A sense of urgency will impel you to look for that, while a calm AI might waste time planning and implementing a proper repair. Well, I don’t know. On the one hand, it seems perfectly possible to appreciate the urgency of the situation in a calm rational way. On the other, it’s easy to imagine that panic and many other emotional responses might be very unhelpful, blocking the search for solutions or interfering with the ability to focus on implementation.

Yet there must be something in what he says, mustn’t there? Otherwise, why would we have emotions? I suppose in principle it could be that they really have no role; that they are epiphenomenal, a kind of side effect of no real importance. But they seem to influence behaviour in ways that make that implausible.

Perhaps they add motivation? In the final analysis, pure reason gives us no reason to do anything. It can tell you that if you want A, the best way to get it is through X, Y, and Z. But if you ask whether you should want A, pure reason merely shrugs, or at best says that you should if you want B.

However, it doesn’t take much to provide a basic set of motivations. If we just assume that we want to survive, the need to obtain secure sources of food, shelter, and so on soon generates a whole web of subordinate motivations. Throw in a few very simple built-in drives – avoidance of pain, seeking sex, maintenance of good social relations – and we’re pretty much there in terms of human motivation. Do we really need complex and distracting emotions on top of that?

Some argue that emotions add more ‘oomph’, that they intensify action or responses to stimuli. I’ve never quite understood the actual causal process there, but granting the possibility, it seems emotions must either harmonise with rational problem solving or conflict with it. If they harmonise, they add nothing that reason couldn’t supply on its own; if they conflict, they make things worse. Since rational problem solving is surely always best, they must either be irrelevant or harmful?

One fairly traditional view is that emotions are a legacy of evolution, a system that developed before rational problem solving was available. On this view, different emotional states adjust the speed and intensity of certain sets of responses. If you get angry, you become more ready to fight, which may be helpful. We would now be better off deciding on our responses rationally, but we’re lumbered with the system our ancestors evolved. Moreover, some of the preparatory changes, like a more rapid heartbeat, have never come under rational control, so without emotions they wouldn’t be accessible at all. On this account, emotions are really little more than certain systems entering potentially useful ready states like these.

That might work for anger, but I still don’t understand how grief, say, is a useful state to be in.

There’s one more possible role for emotions: social co-ordination. Just as laughter or yawning tends to spread around the group, it can be argued that emotional displays help get everyone into a similar state of heightened or depressed responsiveness. But if that is truly useful, couldn’t it be accomplished more easily, and in less disabling and distracting ways? For human beings, talking seems the obvious tool for the job.

It remains a fascinating question, but I’ve never heard of a practical problem in AI that really seemed to need an emotional response.