Posts tagged ‘emotion’

Is this a breakthrough in robot emotion? Zhou et al. describe the Emotional Chatting Machine (ECM), a chatbot that uses machine learning to return answers with a specified emotional tone.

The ultimate goal of such bots is a machine that can detect the emotional tone of an input and choose the optimum tone for its response, but this is very challenging. It’s not simply a matter of echoing the emotional tone of the input; the authors suggest, for example, that sympathy is not always the appropriate response to a sad story. For now, the task they address is to take two inputs, the actual content and a prescribed emotional tone, and generate a response to the content reflecting the required tone. Doing more than reflecting is going to be very challenging indeed, because the correct tone of a response ought to reflect the content as well as the tone of the input; if someone calmly tells you they’re about to die, or about to kill someone else, an equally calm response may not be emotionally appropriate (or it could be, in certain contexts; this stuff is, to put it mildly, complex).

To train the ECM, two databases were used. The NLPCC dataset has 23,105 sentences collected from Weibo, a Chinese microblogging site, and categorised by human beings into eight categories: Anger, Disgust, Fear, Happiness, Like, Sadness, Surprise and Other. Fear and Surprise turned up too rarely in the Weibo data to be usable in practice.

Rather than using the NLPCC dataset directly, the researchers used it to train a classifier which then categorised the larger STC dataset, which has 219,905 posts and 4,308,211 responses; they reckon they achieved an accuracy of 0.623, which doesn’t sound all that great, but was apparently good enough to work with; obviously this is something that could be improved in future. It was the ‘emotionalised’ STC dataset which was then used to train the ECM for its task.
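The two-stage process – train a classifier on the small hand-labelled set, then use it to (weakly) label the big one – can be sketched in a few lines of Python. The tiny Naive Bayes classifier and the toy sentences below are illustrative stand-ins, not the researchers’ actual model or data:

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Tiny multinomial Naive Bayes over whitespace tokens, with Laplace smoothing."""
    def fit(self, texts, labels):
        self.priors = Counter(labels)
        self.word_counts = defaultdict(Counter)
        for text, label in zip(texts, labels):
            self.word_counts[label].update(text.lower().split())
        self.vocab = {w for counts in self.word_counts.values() for w in counts}
        return self

    def predict(self, text):
        tokens = text.lower().split()
        total = sum(self.priors.values())
        best, best_score = None, -math.inf
        for label in self.priors:
            score = math.log(self.priors[label] / total)
            counts = self.word_counts[label]
            denom = sum(counts.values()) + len(self.vocab)
            for tok in tokens:
                score += math.log((counts[tok] + 1) / denom)
            if score > best_score:
                best, best_score = label, score
        return best

# Stage 1: train on the small hand-labelled set (toy stand-in for NLPCC).
hand_labelled = [
    ("i am so happy today", "Happiness"),
    ("this makes me furious", "Anger"),
    ("what a sad terrible loss", "Sadness"),
]
clf = NaiveBayes().fit(*zip(*hand_labelled))

# Stage 2: use the classifier to label the large unlabelled corpus (toy STC).
unlabelled = ["so happy for you", "furious about this"]
weak_labels = [(post, clf.predict(post)) for post in unlabelled]
```

The 0.623 accuracy of stage 1 propagates into the stage 2 labels, which is presumably why the authors treat it as a weak point to improve.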

Results were assessed by human beings for both naturalness (how human they seemed) and emotional accuracy; ECM improved substantially on other approaches and generally turned in a good performance, especially on emotional accuracy. Alas, the chatbot is not available to try out online.

This is encouraging but I have a number of reservations. The first is about the very idea of an emotional chatbot. Chatbots are almost by definition superficial. They don’t attempt to reproduce or even model the processes of thought that underpin real conversation, and similarly they don’t attempt to endow machines with real or even imitation emotion (the ECM has internal and external memory in which to record emotional states, but that’s as far as it goes). Their performance is always, therefore, on the level of a clever trick.

Now that may not matter, since the aim is merely to provide machines that deal better with emotional human beings. They might be able to do that without having anything like real or even model emotions themselves (we can debate the ethical implications of ‘deceiving’ human interlocutors like this another time). But there must be a worry that performance will be unreliable.

Of course, we’ve seen that by using large data sets, machines can achieve passable translations without ever addressing meanings; it is likely enough that they can achieve decent emotional results in the same sort of way without ever simulating emotions in themselves. In fact the complexity of emotional responses may make humans more forgiving than they are for translations, since an emotional response which is slightly off can always be attributed to the bot’s personality, mood, or other background factors. On the other hand, a really bad emotional misreading can be catastrophic, and the chatbot approach can never eliminate such misreading altogether.

My second reservation is about the categorisation adopted. The eight categories adopted for the NLPCC dataset, and inherited here with some omissions, seem to belong to a family of categorisations which derive ultimately from the six-part one devised by Paul Ekman: anger, disgust, fear, happiness, sadness, and surprise. The problem with this categorisation is that it doesn’t look plausibly comprehensive or systematic. Happiness and sadness look like a pair, but there’s no comparable positive counterpart of disgust or fear, for example. These problems have meant that the categories are often fiddled with. I conjecture that ‘like’ was added to the NLPCC set as a counterpart to disgust, and ‘other’ to ensure that everything could be categorised somewhere. You may remember that in the Pixar film Inside Out, Surprise didn’t make the cut; some researchers have suggested that only four categories are really solid, with fear/surprise and anger/disgust forming pairs that are not clearly distinct.

The thing is, all these categorisations are rooted in attempts to categorise facial expressions. We don’t necessarily have a distinct facial expression for every possible emotion, so that gives us an incomplete and slightly arbitrary list. It might work for a bot that pulled faces, but one that produces written output needs something better. I think a dimensional approach is better: one that defines emotions in terms of a few basic qualities set out along different axes. These might be things like attracted/repelled, active/passive, ingoing/outgoing or whatever. There are many models along these lines and they have a long history in psychology; they offer better assurance of a comprehensive account and a more hopeful prospect of a reductive explanation.
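As a toy illustration (the axis names and all the coordinates are mine, made up purely for the example, not taken from any published model), a dimensional scheme turns emotions into points in a small vector space, and recognising an emotion becomes nearest-neighbour lookup:

```python
import math

# Hypothetical coordinates on two common axes: valence (attracted/repelled)
# and arousal (active/passive), each running from -1 to 1.
EMOTION_SPACE = {
    "happiness": ( 0.8,  0.5),
    "sadness":   (-0.6, -0.4),
    "anger":     (-0.7,  0.8),
    "fear":      (-0.8,  0.6),
    "calm":      ( 0.4, -0.6),
}

def nearest_emotion(valence, arousal):
    """Map an arbitrary point in the space to the closest named emotion."""
    return min(EMOTION_SPACE,
               key=lambda e: math.dist(EMOTION_SPACE[e], (valence, arousal)))
```

One attraction over a fixed category list is that no ‘Other’ bucket is needed: every point in the space is an admissible emotional state, and borderline cases simply land between the named points.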

I suppose you also have to ask whether we want bots that respond emotionally. The introduction of cash machines reduced the banks’ staff costs, but I believe they were also popular because you could get your money without having to smile and talk. In a similar way, we may really just want bots to deliver the goods (often literally), their lack of messy humanity being their strongest selling point. I suspect, though, that in this respect we ain’t seen nothing yet…

Emotions like fear are not something inherited from our unconscious animal past. Instead they arise from the higher-order aspects that make human thought conscious. That (if I’ve got it right) is the gist of an interesting paper by LeDoux and Brown.

A mainstream view of fear (the authors discuss fear in particular as a handy example of emotion, on the assumption that similar conclusions apply to other emotions) would make it a matter of the limbic system, notably the amygdala, which is known to be associated with the detection of threats. People whose amygdalas have been destroyed become excessively trusting, for example – although as always things are more complicated than they seem at first and the amygdalas are much more than just the organs of ‘fear and loathing’. LeDoux and Brown would make fear a cortical matter, generated only in the kind of reflective consciousness possessed by human beings.

One immediate objection might be that this seems to confine fear to human beings, whereas it seems pretty obvious that animals experience fear too. It depends, though, what we mean by ‘fear’. LeDoux and Brown would not deny that animals exhibit aversive behaviour, that they run away or emit terrified noises; what they are after is the actual feeling of fear. LeDoux and Brown situate their concept of fear in the context of philosophical discussion about phenomenal experience, which makes sense but threatens to open up a larger can of worms – nothing about phenomenal experience, including its bare existence, is altogether uncontroversial. Luckily I think that for the current purposes the deeper issues can be put to one side; whether or not fear is a matter of ineffable qualia we can probably agree that humanly conscious fear is a distinct thing. At the risk of begging the question a bit we might say that if you don’t know you’re afraid, you’re not feeling the kind of fear LeDoux and Brown want to talk about.

On a traditional view, again, fear might play a direct causal role in behaviour. We detect a threat, that causes the feeling of fear, and the feeling causes us to run away. For LeDoux and Brown, it doesn’t work like that. Instead, while the threat causes the running away, that process does not in itself generate the feeling of fear. Those sub-cortical processes, along with other signals, feed into a separate conscious process, and it’s that that generates the feeling.

Another immediate objection therefore might be that the authors have made fear an epiphenomenon; it doesn’t do anything. Some, of course, might embrace the idea that all conscious experience is epiphenomenal; a by-product whose influence on behaviour is illusory. Most people, though, would find it puzzling that the brain should go to the trouble of generating experiences that never affect behaviour and so contribute nothing to survival.

The answer here, I think, comes from the authors’ view of consciousness. They embrace a higher-order theory (HOT). HOTs (there are a number of variations) say that a mental state is conscious if there is another mental state in the same mind which is about it – a Higher Order Representation (HOR); or to put it another way, being conscious is being aware that you’re aware. If that is correct, then fear is a natural result of the application of conscious processes to certain situations, not a peculiar side-effect.
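The core HOT claim is simple enough to express as a toy data structure (entirely my own sketch, not the authors’ formalism): a state is conscious just in case some other state in the same mind is about it.

```python
from dataclasses import dataclass

@dataclass
class MentalState:
    name: str
    about: "MentalState | None" = None  # None: about the world, not another state

def is_conscious(state, mind):
    """HOT in toy form: a state is conscious iff some *other* state
    in the same mind represents (is about) it."""
    return any(other.about is state for other in mind if other is not state)

fear = MentalState("fear of the dog")                        # first-order state
hor = MentalState("awareness that I am afraid", about=fear)  # its HOR
mind = [fear, hor]
```

On this toy model `is_conscious(fear, mind)` comes out true while `is_conscious(hor, mind)` comes out false: the HOR itself stays unconscious unless a further state is about it, which is exactly where the HOROR move, and the regress worry, get their grip.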

HOTs have been around for a long time: they would always get a mention in any round-up of the contenders for an explanation of consciousness, but somehow it seems to me they have never generated the little bursts of excitement and interest that other theories have enjoyed. LeDoux and Brown suggest that other theories of emotion and consciousness either are ‘first-order’ theories explicitly, or can be construed as such. They defend the HOT concept against one of the leading objections, which is that it seems to be possible to have HORs of non-existent states of awareness. In Charles Bonnet syndrome, for example, people who are in fact blind have vivid and complex visual hallucinations. To deal with this, the authors propose to climb one order higher; the conscious awareness, they suggest, comes not from the HOR of a visual experience but from the HOR of a HOR: a HOROR, in fact. There clearly is no theoretical limit to the number of orders we can rise to, and there’s some discussion here about when and whether we should call the process introspection.

I’m not convinced by HOTs myself. The authors suggest that single-order theory implies there can be conscious states of which we are not aware, which seems sort of weird: you can feel fear and not know you’re feeling fear? I think there’s a danger here of equivocating between two senses of ‘aware’. Conscious states are states of awareness, but not necessarily states we are aware of; a thing is in our awareness if we are conscious of it, but that is not to say that our awareness itself is among the things we are aware of. I would argue, contrarily, that there must be states of awareness with no HOR; otherwise, what about the HOR itself? If HORs are states of awareness themselves, each must have its own HOR, and so on indefinitely. If they’re not, I don’t see how the existence of an inert representation can endow the first-order state with the magic of consciousness.

My intuitive unease goes a bit wider than that, too. The authors have given a credible account of a likely process, but on this account fear looks very like other conscious states. What makes it different – what makes it actually fearful? It seems possible to imagine that I might perform the animal aversive behaviour, experience a conscious awareness of the threat and enter an appropriate conscious state without actually feeling fear. No doubt more could be said here to make the account more plausible, and in fairness LeDoux and Brown could well reply that nobody has a knock-down account of phenomenal experience, and that their version offers rather more than some.

In fact, even though I don’t sign up for a HOT, I can muster a pretty good degree of agreement nonetheless. Nobody, after all, believes that higher order mental states don’t exist (we could hardly be discussing this subject if they didn’t). Although I think consciousness doesn’t require HORs, I think they are characteristic of its normal operation, and that ordinary consciousness is a complex meld of states of awareness at several different levels. If we define fear the way LeDoux and Brown do, I can agree that they have given a highly plausible account of how it works without having to give up my belief that simple first-order consciousness is also a thing.

 

The recent short NYT series on robots has a dying fall. The articles were framed as an investigation of how robots are poised to change our world, but the last piece is about the obsolescence of the Aibo, Sony’s robot dog. Once apparently poised to change our world, the Aibo is no longer made and now Sony will no longer supply spare parts, meaning the remaining machines will gradually cease to function.
There is perhaps a message here about the over-selling and under-performance of many ambitious AI projects, but the piece focuses instead on the emotional impact that the ‘death’ of the robot dogs will have on some fond users. The suggestion is that the relationship these owners have with their Aibo is as strong as the one you might have with a real dog. Real dogs die, of course, so though it may be sad, that’s nothing new. Perhaps the fact that the Aibos are ‘dying’ as the result of a corporate decision, and could in principle have been immortal makes it worse? Actually I don’t know why Sony or some third party entrepreneur doesn’t offer a program to virtualise your Aibo, uploading it into software where you can join it after the Singularity (I don’t think there would really be anything to upload, but hey…).
On the face of it, the idea of having a real emotional relationship with an Aibo is a little disturbing. Aibos are neat pieces of kit, designed to display ‘emotional’ behaviour, but they are not that complex (many orders of magnitude less complex than a dog, surely), and I don’t think there is any suggestion that they have any real awareness or feelings (even if you think thermostats have vestigial consciousness, I don’t think an Aibo would score much higher). If people can have fully developed feelings for these machines, it strongly suggests that their feelings for real dogs have nothing to do with the dog’s actual mind. The relationship is essentially one-sided; the real dog provides engaging behaviour, but real empathy is entirely absent.
More alarming, it might be thought to imply that human relationships are basically the same. Our friends, our loved ones, provide stimuli which tickle us the right way; we enjoy a happy congruence of behaviour patterns, but there is no meeting of minds, no true understanding. What’s love got to do with it, indeed?
Perhaps we can hope that Aibo love is actually quite distinct from dog love. The people featured in the NYT video are Japanese, and it is often said that Japanese culture is less rigid than Western thought about the distinction between animate and inanimate. In Christianity, material things lack souls, and any object that behaves as if it had one may be possessed or enchanted in ways that are likely to be unnatural and evil. In Shinto, the concept of kami extends to anything important or salient, so there is nothing unnatural or threatening about robots. But while that might validate the idea of an Aibo funeral, it does not precisely equate Aibos and real dogs.
In fact, some of the people in the video seem mainly interested in posing their Aibos for amusing pictures or video, something they could do just as well with deactivated puppets. Perhaps in reality Japanese culture is merely more relaxed about adults amusing themselves with toys?
Be that as it may, it seems that for now the era of robot dogs is already over…

… is not really what this piece is about (sorry). It’s an idea I had years ago for a short story or a novella. ‘Lust’ here would have been interpreted broadly as any state which impels a human being towards sex. I had in mind a number of axes defining a general ‘lust space’. One of the axes, if I remember rightly, had specific attraction to one person at one end and generalised indiscriminate enthusiasm at the other; another went from sadistic to masochistic, and so on. I think I had eighty-one basic forms of lust, and the idea was to write short episodes exemplifying each one: in fact, to interweave a coherent narrative with all of them in.
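Eighty-one, incidentally, is 3⁴, which suggests four axes with three positions each. A throwaway sketch of the combinatorics (the first two axes paraphrase the ones just mentioned; the other two are pure placeholders, since the originals aren’t recorded):

```python
from itertools import product

# Three positions on each of four axes gives 3**4 = 81 basic forms.
AXES = {
    "focus": ["one specific person", "mixed", "indiscriminate"],
    "role":  ["sadistic", "neutral", "masochistic"],
    "axis3": ["low", "mid", "high"],  # placeholder: original axis not recorded
    "axis4": ["low", "mid", "high"],  # placeholder: original axis not recorded
}
forms = list(product(*AXES.values()))  # all 81 combinations
```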

My creative gifts were not up to that challenge, but I mention it here because one of the axes went from the purely intellectual to the purely physical. At the intellectual extreme you might have an elderly homosexual aristocrat who, on inheriting a title, realises it is his duty to attempt to procure an heir. At the purely physical end you might have an adolescent boy on a train who notices he has an erection which is unrelated to anything that has passed through his mind.

That axis would have made a lot of sense (perhaps) to Luca Barlassina and Albert Newen, whose paper in Philosophy and Phenomenological Research sets out an impure somatic theory of the emotions. In short, they claim that emotions are constituted by the integration of bodily perceptions with representations of external objects and states of affairs.

Somatic theories say that emotions are really just bodily states. We don’t get red in the face because we’re angry, we get angry because we’ve become red in the face. As no less an authority than William James had it:

The more rational statement is that we feel sorry because we cry, angry because we strike, afraid because we tremble, and not that we cry, strike, or tremble, because we are sorry, angry, or fearful, as the case may be. Without the bodily states following on the perception, the latter would be purely cognitive in form, pale, colorless, destitute of emotional warmth.

This view did not appeal to everyone, but the elegantly parsimonious reduction it offers has retained its appeal, and Jesse Prinz has put forward a sophisticated 21st century version. It is Prinz’s theory that Barlassina and Newen address; they think it needs adulterating, but they clearly want to build on Prinz’s foundations, not reject them.

So what does Prinz say? His view of emotions fits into the framework of his general view about perception: for him, a state is a perceptual state if it is a state of a dedicated input system – e.g. the visual system. An emotion is simply a state of the system that monitors our own bodies; in other words emotions are just perceptions of our own bodily states. Even for Prinz, that’s a little too pure: emotions, after all, are typically about something. They have intentional content. We don’t just feel angry, we feel angry about something or other. Prinz regards emotions as having dual content: they register bodily states but also represent core relational themes (as against, say, fatigue, which both registers and represents a bodily state). On top of that, they may involve propositional attitudes – thoughts about some evocative future event, for example – but the propositional attitudes only evoke the emotions, they don’t play any role in constituting them. Further still, certain higher emotions are recalibrations of lower ones: the simple emotion of sadness is recalibrated so it can be controlled by a particular set of stimuli and become guilt.

So far so good. Barlassina and Newen have four objections. First, if Prinz is right, then the neural correlates of emotion and the perception of the relevant bodily states must just be the same. Taking the example of disgust, B&N argue that the evidence suggests otherwise: interoception, the perception of bodily changes, may indeed cause disgust, but does not equate to it neurologically.

Second, they see problems with Prinz’s method of bringing in intentional content. For Prinz emotions differ from mere bodily feeling because they represent core relational themes. But, say B&N, what about ear pressure? It tells us about unhealthy levels of barometric pressure and oxygen, and so relates to survival, surely a core relational theme: and it’s certainly a perception of a bodily state – but ear pressure is not an emotion.

Third, Prinz’s account only allows emotions to be about general situations; but in fact they are about particular things. When we’re afraid of a dog, we’re afraid of that dog, we’re not just experiencing a general fear in the presence of a specific dog.

Fourth, Prinz doesn’t fully accommodate the real phenomenology of emotions. For him, fear of a lion is fear accompanied by some beliefs about a lion: but B&N maintain that the directedness of the emotion is built in, part of the inherent phenomenology.

Barlassina and Newen like Prinz’s somatic leanings, but they conclude that he simply doesn’t account sufficiently for the representational characteristics of emotions: consequently they propose an ‘impure’ theory by which emotions are cognitive states constituted when interoceptive states are integrated with perceptions of external objects or states of affairs.

This pollution, or elaboration, of the pure theory seems pretty sensible, and B&N give a clear and convincing exposition. At the end of the day it leaves me cold not because they haven’t done a good job but because I suspect that somatic theories are always going to be inadequate, for two reasons.

First, they just don’t capture the phenomenology. There’s no doubt at all that emotions are often or typically characterised or coloured by perception of distinctive bodily states, but is that what they are in essence? It doesn’t seem so. It seems possible to imagine that I might be angry or sad without a body at all: not, of course, in the same good old human way, but angry or sad nevertheless. There seems to be something almost qualic about emotions, something over and above any of the physical aspects, characteristic though they may be.

Second, surely emotions are often essentially about dispositions to behave in a certain way? An account of anger which never mentions that anger makes me more likely to hit people just doesn’t seem to cut the mustard. Even William James spoke of striking people. In fact, I think one could plausibly argue that the physical changes associated with an emotion can often be related to the underlying propensity to behave in a certain way. We begin to breathe deeply and our heart pounds because we are getting ready for violent exertion, just as parallel cognitive changes get us ready to take offence and start a fight. Not all emotions are as neat as this: we’ve talked in the past about the difficulty of explaining what grief is for. Still, these considerations seem enough to show that a somatic account, even an impure one, can’t quite cover the ground.

Still, just as Barlassina and Newen built on Prinz, it may well be that they have provided some good foundation work for an even more impure theory.