Getting Emotional

Are emotions an essential part of cognition? Luiz Pessoa thinks so (or perhaps he feels it is the case). He argues that emotions, though often regarded as something quite separate from rational thought – a kind of optional extra that can be bolted on but plays no essential role – are in fact indispensable.

I don’t find his examples very convincing. He says robots on a Mars mission might have to plan alternative routes and choose between priorities, but I don’t really see why that requires them to get emotional. He says that if your car breaks down in the desert, you may need a quick fix. A sense of urgency will impel you to look for that, while a calm AI might waste time planning and implementing a proper repair. Well, I don’t know. On the one hand, it seems perfectly possible to appreciate the urgency of the situation in a calm rational way. On the other, it’s easy to imagine that panic and many other emotional responses might be very unhelpful, blocking the search for solutions or interfering with the ability to focus on implementation.

Yet there must be something in what he says, mustn’t there? Otherwise, why would we have emotions? I suppose in principle it could be that they really have no role; that they are epiphenomenal, a kind of side effect of no real importance. But they seem to influence behaviour in ways that make that implausible.

Perhaps they add motivation? In the final analysis, pure reason gives us no reason to do anything. It can say, if you want A, then the best way to get it is through X, Y, and Z. But if you ask, should I want A, pure reason merely shrugs, or at best it says, you should if you want B.

However, it doesn’t take much to provide a basic set of motivations. If we just assume that we want to survive, the need to obtain secure sources of food, shelter, and so on soon generates a whole web of subordinate motivations. Throw in a few very simple built-in drives – avoidance of pain, seeking sex, maintenance of good social relations – and we’re pretty much there in terms of human motivation. Do we really need complex and distracting emotions on top of that?

Some argue that emotions add more ‘oomph’, that they intensify action or responses to stimuli. I’ve never quite understood the actual causal process there, but granting the possibility, it seems emotions must either harmonise with rational problem solving, or conflict with it. Rational problem solving is surely always best, so they must either be irrelevant or harmful?

One fairly traditional view is that emotions are a legacy of evolution, a system that developed before rational problem solving was available. On this view, different emotional states affect the speed and intensity of certain sets of responses. If you get angry, you become more ready to fight, which may be helpful. Now, we would be better off deciding rationally on our responses, but we’re lumbered with the system our ancestors evolved. Moreover, some of the preparatory stuff, like a more rapid heartbeat, has never come under rational control, so without emotions it wouldn’t be accessible at all. It can be argued that emotions are really little more than certain systems getting into certain potentially useful ready states like this.

That might work for anger, but I still don’t understand how grief, say, is a useful state to be in. There’s one more possible role for emotions, which is social co-ordination. Just as laughter or yawning tends to spread around the group, it can be argued that emotional displays help get everyone into a similar state of heightened or depressed responsiveness. But if that is truly useful, couldn’t it be accomplished more easily, and in less disabling or distracting ways? For human beings, talking seems the better tool for the job.

It remains a fascinating question, but I’ve never heard of a practical problem in AI that really seemed to need an emotional response.

Inside Out

The homunculus returns? I finally saw Inside Out (possible spoilers – I seem to be talking about films a lot recently). Interestingly, it foregrounds a couple of problematic ways of thinking about the mind.

One, obviously, is the notorious homuncular fallacy. This is the tendency to explain mental faculties, say consciousness, by attributing them to a small entity within the mind – a “little man” that just has all the capacities of the whole human being. It’s almost always condemned because it appears to do no more than defer the real explanation. If it’s really a little man in your head that does consciousness, where does his consciousness come from? An even smaller man, in his head?

Inside Out, of course, does the homuncular thing very explicitly. The mind of the young girl Riley, the main character, where most of the action is set, is controlled by five primal emotions who are all fully featured cartoon people – Joy, Sadness, Anger, Fear, and Disgust – little people who walk around inside Riley’s head doing the kind of thing people do. (Is it actually inside her head? In the Beano’s Numskulls cartoon, touted as a forerunner of Inside Out, much of the humour came from the definite physicality of the way they worked; here the five emotions view the world through a screen rather than eyeholes and use a console rather than levers. They could in fact be anywhere, or in some undefined conceptual space.) It’s an odd set (aren’t Joy and Sadness the extremes of a single spectrum?), and an unexpectedly negative one too: this is technically a Disney film, and it rates anger, fear, and disgust as more important and powerful than love? If it were full-on Disney, the leading emotions would surely be Happy-go-lucky Feelin’s and Wishing on a Star.

There are some things to be said in favour of homunculi. Most people would agree that we contain a smaller entity that does all the thinking: the brain, or perhaps something even narrower than that (proponents of the Extended Mind would very much not agree, of course). Daniel Dennett has also spoken out for homunculi, suggesting that they’re fine so long as the homunculi in each layer are simpler than those in the layer above; in the end we reach ones so simple they need no explanation. That’s all right, except that I don’t think the beings in this Dennettian analysis are really homunculi – they’re more like black boxes. The true homunculus has all the capacities of a full human being, not a simpler subset.

We see the problem that arises from that in Inside Out. The emotions are too rounded: they all seem to have a full set of feelings themselves; they all show fear, and Joy gets sad. How can that work?

The other thing that seems not quite right to me is unfortunately the climactic revelation that Sadness has a legitimate role. It is, apparently, to signal for help. In my view that can’t really be the whole answer, and the film unintentionally shows us the absurdity of the idea; it asks us to believe that being joyless, angry, and withdrawn, behaving badly, and running away are not enough to evoke concern and sympathetic attention from parents; you don’t get their attention, or your hug, until they see the tears.

No doubt sadness does often evoke support, but I can’t think that’s its main function. Funnily enough, Sadness herself briefly articulates a somewhat better idea early in the film. It’s muttered so quickly I didn’t quite catch it, but it was something about providing an interval for adjustment and emotional recalibration. That sounds a bit more promising; I suspect it was something a real psychologist told Pixar at some stage, something they felt they should mention for completeness but that didn’t help the story.

Films and TV do shape our mental models; The Matrix laid down tramlines for many metaphysical discussions and Star Trek’s transporters are often invoked in serious discussions of personal identity. Worse, fears about AI have surely been helped along by Hollywood’s relentless and unimaginative use of the treacherous robot that turns on its creators. I hope Inside Out is not going to reintroduce homunculi to general thinking about the mind.