
kiss… is not really what this piece is about (sorry). It’s an idea I had years ago for a short story or a novella. ‘Lust’ here would have been interpreted broadly as any state which impels a human being towards sex. I had in mind a number of axes defining a general ‘lust space’. One of the axes, if I remember rightly, had specific attraction to one person at one end and generalised indiscriminate enthusiasm at the other; another went from sadistic to masochistic, and so on. I think I had eighty-one basic forms of lust, and the idea was to write short episodes exemplifying each one: in fact, to weave them all into a single coherent narrative.

My creative gifts were not up to that challenge, but I mention it here because one of the axes went from the purely intellectual to the purely physical. At the intellectual extreme you might have an elderly homosexual aristocrat who, on inheriting a title, realises it is his duty to attempt to procure an heir. At the purely physical end you might have an adolescent boy on a train who notices he has an erection which is unrelated to anything that has passed through his mind.

That axis would have made a lot of sense (perhaps) to Luca Barlassina and Albert Newen, whose paper in Philosophy and Phenomenological Research sets out an impure somatic theory of the emotions. In short, they claim that emotions are constituted by the integration of bodily perceptions with representations of external objects and states of affairs.

Somatic theories say that emotions are really just bodily states. We don’t get red in the face because we’re angry, we get angry because we’ve become red in the face. As no less an authority than William James had it:

The more rational statement is that we feel sorry because we cry, angry because we strike, afraid because we tremble, and not that we cry, strike, or tremble, because we are sorry, angry, or fearful, as the case may be. Without the bodily states following on the perception, the latter would be purely cognitive in form, pale, colorless, destitute of emotional warmth.

This view did not appeal to everyone, but the elegantly parsimonious reduction it offers has retained its appeal, and Jesse Prinz has put forward a sophisticated 21st century version. It is Prinz’s theory that Barlassina and Newen address; they think it needs adulterating, but they clearly want to build on Prinz’s foundations, not reject them.

So what does Prinz say? His view of emotions fits into the framework of his general view about perception: for him, a state is a perceptual state if it is a state of a dedicated input system – eg the visual system. An emotion is simply a state of the system that monitors our own bodies; in other words, emotions are just perceptions of our own bodily states. Even for Prinz, that’s a little too pure: emotions, after all, are typically about something. They have intentional content. We don’t just feel angry, we feel angry about something or other. Prinz regards emotions as having dual content: they register bodily states but also represent core relational themes (as against, say, fatigue, which both registers and represents a bodily state). On top of that, they may involve propositional attitudes, thoughts about some evocative future event, for example, but the propositional attitudes only evoke the emotions; they don’t play any role in constituting them. Further still, certain higher emotions are recalibrations of lower ones: the simple emotion of sadness is recalibrated so that it can be controlled by a particular set of stimuli and become guilt.
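Purely by way of illustration, here is a toy sketch in Python of the shape Prinz’s view seems to have; the class names and examples are mine, not Prinz’s, and nothing hangs on the details.

```python
from dataclasses import dataclass
from typing import Optional

# Toy sketch of the structure of Prinz's view (the names are mine, not his):
# an emotion is a perception of a bodily state which also represents a core
# relational theme; propositional attitudes may trigger it but are not part
# of it; 'higher' emotions are recalibrations of simpler ones.

@dataclass
class BodilyState:
    description: str                 # e.g. "racing heart, tensed muscles"

@dataclass
class Emotion:
    registers: BodilyState           # the bodily state the emotion tracks
    represents: str                  # core relational theme, e.g. "danger"
    recalibrated_from: Optional["Emotion"] = None   # e.g. guilt from sadness

fear = Emotion(BodilyState("racing heart, tensed muscles"), "danger")
sadness = Emotion(BodilyState("heaviness, lethargy"), "loss")
guilt = Emotion(sadness.registers, "moral transgression", recalibrated_from=sadness)
```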

So far so good. Barlassina and Newen have four objections. First, if Prinz is right, then the neural correlates of an emotion and those of the perception of the relevant bodily states should be one and the same. Taking the example of disgust, B&N argue that the evidence suggests otherwise: interoception, the perception of bodily changes, may indeed cause disgust, but does not equate to it neurologically.

Second, they see problems with Prinz’s method of bringing in intentional content. For Prinz emotions differ from mere bodily feeling because they represent core relational themes. But, say B&N, what about ear pressure? It tells us about unhealthy levels of barometric pressure and oxygen, and so relates to survival, surely a core relational theme: and it’s certainly a perception of a bodily state – but ear pressure is not an emotion.

Third, Prinz’s account only allows emotions to be about general situations; but in fact they are about particular things. When we’re afraid of a dog, we’re afraid of that dog, we’re not just experiencing a general fear in the presence of a specific dog.

Fourth, Prinz doesn’t fully accommodate the real phenomenology of emotions. For him, fear of a lion is fear accompanied by some beliefs about a lion: but B&N maintain that the directedness of the emotion is built in, part of the inherent phenomenology.

Barlassina and Newen like Prinz’s somatic leanings, but they conclude that he simply doesn’t account sufficiently for the representational character of emotions: consequently they propose an ‘impure’ theory on which emotions are cognitive states constituted when interoceptive states are integrated with perceptions of external objects or states of affairs.
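Again just as a toy sketch (my framing, not the authors’): on the impure view neither the interoceptive state nor the outward perception is an emotion on its own; the emotion is the integrated state, and its directedness at a particular object – the very thing objections three and four demanded – comes built in.

```python
from dataclasses import dataclass

# Toy sketch of the 'impure' proposal (my framing, not Barlassina and Newen's):
# the emotion just is the integration of an interoceptive state with a
# perception of a particular external object.

@dataclass
class InteroceptiveState:
    bodily_changes: str              # e.g. "nausea, gorge rising"

@dataclass
class ExteroceptivePerception:
    particular_object: str           # e.g. "this rotten fish"

@dataclass
class ImpureEmotion:
    interoception: InteroceptiveState
    perception: ExteroceptivePerception

    @property
    def directed_at(self) -> str:
        # the directedness is part of the integrated state itself,
        # not supplied by separate beliefs about the object
        return self.perception.particular_object

disgust = ImpureEmotion(InteroceptiveState("nausea, gorge rising"),
                        ExteroceptivePerception("this rotten fish"))
print(disgust.directed_at)           # -> this rotten fish
```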

This pollution, or elaboration, of the pure theory seems pretty sensible, and B&N give a clear and convincing exposition. At the end of the day it leaves me cold, not because they haven’t done a good job but because I suspect that somatic theories are always going to be inadequate, for two reasons.

First, they just don’t capture the phenomenology. There’s no doubt at all that emotions are often or typically characterised or coloured by perception of distinctive bodily states, but is that what they are in essence? It doesn’t seem so. It seems possible to imagine that I might be angry or sad without a body at all: not, of course, in the same good old human way, but angry or sad nevertheless. There seems to be something almost qualic about emotions, something over and above any of the physical aspects, characteristic though they may be.

Second, surely emotions are often essentially about dispositions to behave in a certain way? An account of anger which never mentions that anger makes me more likely to hit people just doesn’t seem to cut the mustard. Even William James spoke of striking people. In fact, I think one could plausibly argue that the physical changes associated with an emotion can often be related to the underlying propensity to behave in a certain way. We begin to breathe deeply and our heart pounds because we are getting ready for violent exertion, just as parallel cognitive changes get us ready to take offence and start a fight. Not all emotions are as neat as this: we’ve talked in the past about the difficulty of explaining what grief is for. Still, these considerations seem enough to show that a somatic account, even an impure one, can’t quite cover the ground.

Still, just as Barlassina and Newen built on Prinz, it may well be that they have provided some good foundation work for an even more impure theory.

 

Picture: Martin Heidegger. This paper by Dotov, Nie, and Chemero describes experiments which it says have pulled off the remarkable feat of providing empirical, experimental evidence for Heidegger’s phenomenology, or part of it; the paper has been taken by some as providing new backing for the Extended Mind theory, notably expounded by Andy Clark in his 2008 book (‘Supersizing the Mind’).

Relating the research so strongly to Heidegger puts it into a complex historical context. Some of Heidegger’s views, particularly those which suggest there can be no theory of everyday life, have been taken up by critics of artificial intelligence. Hubert Dreyfus, in particular, has offered a vigorous critique, drawing mainly on Heidegger for an idea of the limits of computation which strongly resembles the limits that arise from the broadly-conceived frame problem, as discussed here recently. The authors of the paper claim this heritage, accepting the Dreyfusard view of Heidegger as an early proto-enemy of GOFAI.

For it is GOFAI (Good Old Fashioned Artificial Intelligence) we’re dealing with. The authors of the current paper point out that the Heideggerian/Dreyfusard critique applies only to AI based on straightforward symbol manipulation (though I think a casual reader of Dreyfus  could well be forgiven for going away with the impression that he was a sceptic about all forms of AI), and that it points toward the need to give proper regard to the consequences of embodiment.

Hence their two experiments. These are designed to show objective signs of a state described by Heidegger, known in English as ‘ready-to-hand’. This seems a misleading translation, though I can’t think of a perfect alternative. If a hammer is ‘ready to hand’, I think that implies it’s laid out on the bench ready for me to pick it up when I want it;  the state Heidegger was talking about is the one when you’re using the hammer confidently and skilfully without even having to think about it. If something goes wrong with the hammering, you may be forced to start thinking about the hammer again – about exactly how it’s going to hit the nail, perhaps about how you’re holding it. You can also stop using the hammer altogether and contemplate it as a simple object. But when the hammer is ready-to-hand in the required sense, you naturally speak of your knocking in a few nails as though you were using your bare hands, or more accurately, as if the hammer had become part of you.

Both experiments were based on subjects using a mouse to play a simple game. The idea was that once the subjects had settled, the mouse would become ready-to-hand; then the relationship between mouse movement and cursor movement would be temporarily messed up; this should cause the mouse to become unready-to-hand for a while. Two different techniques were used to detect readiness-to-hand. In the first experiment the movements of the hand and mouse were analysed for signs of 1/f noise. Apparently earlier research has established that the appearance of 1/f noise is a sign of a smoothly integrated system. The second experiment used a less sophisticated method: subjects were required to perform a simple counting task at the same time as using the mouse; when their performance at this second task faltered, it was taken as a sign that attention was being transferred to cope with the onset of unreadiness-to-hand. Both experiments yielded the expected results. (Regrettably, some subjects were lost because of an unexpected problem – they weren’t good enough at the simple mouse game to keep it going for the duration of the experiment. Future experimenters should note the need to set up a game which cannot come to a sudden halt.)
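For anyone unfamiliar with it, 1/f or ‘pink’ noise is noise whose power falls off roughly in inverse proportion to frequency. The sketch below is only an illustration of how one might check a movement series for 1/f structure, by fitting the slope of its log-log power spectrum; it isn’t the authors’ actual analysis, which I haven’t reproduced, and the function is my own invention:

```python
import numpy as np

def spectral_exponent(series, dt: float = 1.0) -> float:
    """Estimate beta in a power-law fit S(f) ~ 1/f**beta for a time series.

    Rough illustration only: fit a straight line to the log-log power
    spectrum and return minus its slope. beta near 1 suggests 1/f ('pink')
    noise; beta near 0, white noise; beta near 2, Brownian noise.
    """
    series = np.asarray(series, dtype=float)
    series = series - series.mean()
    freqs = np.fft.rfftfreq(len(series), d=dt)[1:]   # drop the zero-frequency bin
    power = np.abs(np.fft.rfft(series))[1:] ** 2
    slope, _intercept = np.polyfit(np.log(freqs), np.log(power), 1)
    return -slope

# White noise should come out with an exponent near 0, i.e. no 1/f structure -
# the sort of signature you might expect from jerky, poorly integrated movement.
rng = np.random.default_rng(0)
print(spectral_exponent(rng.standard_normal(4096)))
```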

I think the first question which comes to mind is: why were the experiments even necessary? It is a common experience that tools or vehicles become extensions of our personality; in fact it has often been pointed out that even our senses get relocated. If you use a whisk to beat eggs, you sense the consistency of the egg not by monitoring the movement of the whisk against your fingers, but as though you were feeling the egg with the whisk, as though a limited kind of sensation had been transferred into the whisk. Now of course, for any phenomenological observation, there will be some diehards who deny having had any such experience; but my impression is that this sort of thing is widely accepted, enough to feature as a proposition in a discussion without further support. Nevertheless, it’s true that this remains subjective, so it’s a fair claim that empirical results are something new.

Second, though, do the results actually prove anything? Phenomenologically, it seems possible to me to think of alternative explanations which fit the bill without invoking readiness-to-hand. Does it seem to the subject that the mouse has become part of them, part of a smoothly-integrated entity – or does the mouse just drop out of consciousness altogether? Even if we accept that the presence of 1/f noise shows that integration has occurred, that doesn’t give us readiness-to-hand (or if it does, it seems the result was already achieved by the earlier research).

In the second experiment we’ve certainly got a transfer of attention – but isn’t that only natural? If a task suddenly becomes inexplicably harder, it’s not surprising that more attention is devoted to it – surely we can explain that without invoking Heidegger? The authors acknowledge this objection and, if I understand correctly, suggest that the two tasks involved were easy enough to rule out problems of excessive cognitive load, so that, I suppose, no significant switch of attention would have been necessary if not for the breakdown of readiness-to-hand. I’m not altogether convinced.

I do like the chutzpah involved in an experimental attempt to validate Heidegger, though, and I wouldn’t rule out the possibility that bold and ingenious experiments along these lines might tell us something interesting.