Whereof we cannot speak

The latest JCS features a piece by Christopher Curtis Sensei about the experience of achieving mastery in Aikido. It seems he spent fifteen years cutting bokken (an exercise with wooden swords, don’t ask me), becoming very proficient technically but never satisfying the old Sensei. Finally he despaired and stopped trying; at which point, of course, he made the required breakthrough. He needed to stop thinking about it. You do feel that his teacher could perhaps have saved him a few years if he had just said so explicitly – but of course you cannot achieve the state of not thinking about something directly and deliberately. Intending to stop thinking about a pink hippo involves thinking about a pink hippo; you have to do something else altogether.

This unreflective state of mind crops up in many places; it has something to do with the desirable state of ‘flow’ in which people are said to give their best sporting or artistic performances; it seems to me to be related to the popular notion of mindfulness, and it recalls Taoist and other mystical ideas about cleared minds and going with the stream. To me it evokes Julian Jaynes, who believed that in earlier times human consciousness manifested itself to people as divine voices; what we’re after here is getting the gods to shut up at last.

Clearly this special state of mind is a form of consciousness (we don’t pass out when we achieve it) and in fact on one level I think it is very simple. It’s just the absence of second-order consciousness, of thoughts about thoughts, in other words. Some have suggested that second-order thought is the distinctive or even the constitutive feature of human consciousness; but it seems clear to me that we can in fact do without it for extended periods.

All pretty simple then. In fact we might even be able to define it physiologically – it could be the state in which the cortex stops interfering and lets the cerebellum and other older bits of the brain do their stuff uninterrupted – we might develop a way of temporarily zapping or inhibiting cortical activity so we can all become masters of whatever we’re doing at the flick of a switch. What’s all the fuss about?

Except that arguably none of the foregoing is actually about this special state of mind at all. What we’re talking about is unconsidered thought, and I cannot report it or even refer to it without considering it; so what have I really been discussing? Some strange ghostly proxy? Nothing? Or are these worries just obfuscatory playing with words?
There’s another mental thing we shouldn’t, logically, be able to talk about – qualia. Qualia, the ineffable subjective aspect of things, are additional to the scientific account and so have no causal powers; they cannot therefore ever have caused any of the words uttered or written about them. Is there a link here? I think so. I think qualia are pure first-order experiences; we cannot talk about them because to talk about them is to move on to second-order cognition and so to slide away from the very thing we meant to address. We could say that qualia are the experiential equivalent of the pure activity which Curtis Sensei achieved when he finally cut bokken the right way. Fifteen years and I’ll understand qualia; I just won’t be able to tell anyone about it…

Implausible Materialism

Physical determinism is implausible according to Richard Swinburne in the latest JCS; he cunningly attacks via epiphenomenalism.

Swinburne defines physical events as public and mental ones as private – we could argue about that, but as a bold, broad view it seems fair enough. Mental events may be phenomenal or intentional, but for current purposes the distinction isn’t important. Physical determinism is defined as the view that each physical event is caused solely by other physical events; here again we might quibble, but the idea seems basically OK to be going on with.

Epiphenomenalism, then, is the view that while physical events may cause mental ones, mental ones never cause physical ones. Mental events are just, as they say, the whistle on the locomotive (though the much-quoted analogy is not exact: prolonged blowing of the whistle on a steam locomotive can adversely affect pressure and performance). Swinburne rightly describes epiphenomenalism as an implausible view (in my view, anyway – many people would disagree), but for him it is entailed by physical determinism, because physical events are only ever caused by other physical events. In his eyes, then, if he can prove that epiphenomenalism is wrong, he has also shown that physical determinism is ruled out. This is an unusual, perhaps even idiosyncratic perspective, but not illogical.

Swinburne offers some reasonable views about scientific justification, but what it comes down to is this: to know that epiphenomenalism is true we have to show that mental events cause no physical events; but that very fact would mean we could never register when they had occurred – so how would we prove it? In order to prove epiphenomenalism true, we must assume that what it says is false!

Swinburne takes it that epiphenomenalism means we could never speak of our private mental events – because our words would have to have been caused by the mental events, and ex hypothesi they don’t cause physical events like speech. This isn’t clearly the case – as I’ve mentioned before, we manage to speak of imaginary and non-existent things which clearly have no causal powers. Intentionality – meaning – is weirder and more powerful than Swinburne supposes.

He goes on to discuss the famous findings of Benjamin Libet, which seem to show that decisions are detectable in the brain before we are aware of having made them. These results point towards epiphenomenalism being true after all. Swinburne is not impressed; he sees no basic causal problem in the idea that a brain event precedes the mental event of the decision, which in turn precedes action. Here he seems to me to miss the point a bit, which is that if Libet is right, the mental experience of making a decision has no actual effect, since the action is already determined.

The big problem, though, is that Swinburne never engages with the normal view, i.e. that in one way or another mental events have two aspects. A single brain event is at the same time a physical event which is part of the standard physical story, and a mental event in another explanatory realm. In one way this is unproblematic; we know that a mass of molecules may also be a glob of biological structure, and an organism; we know that a pile of paper, a magnetised disc, or a reel of film may all also be “A Christmas Carol”. As Scrooge almost puts it, Marley’s ghost may be undigested gravy as well as a vision of the grave.

It would be useless to pretend there is no residual mystery about this, but it’s overwhelmingly how most people reconcile physical determinism with the mental world, so for Swinburne to ignore it is a serious weakness.

Four kinds of Hard

Not one Hard Problem, but four. Jonathan Dorsey, in the latest JCS, says that the problem is conceived in several different ways and we really ought to sort out which we’re talking about.

The four conceptions, rewritten a bit by me for what I hope is clarity, are that the problem is to explain why phenomenal consciousness:

  1. …arises from the physical (using only a physicalist ontology)
  2. …arises from the physical, using any ontology
  3. …arises at all (presumably from the non-physical)
  4. …arises at all or cannot be explained.

I don’t really see these as different conceptions of the problem (which simply seems to be the explanation of phenomenal consciousness), but rather as different conceptions of what the expected answer is to be. That may be nit-picking; useful distinctions in any case.  Dorsey offers some pros and cons for each of the four.

In favour of number one, it’s the most tightly focused. It also sits well in context, because Dorsey sees the problem as emerging under the dominance of physics. The third advantage is that it confines the problem to physicalism and so makes life easy for non-physicalists (not sure why this is held to be one of the pros, exactly). Against; well, maybe that context is dominating too much? Also the physicalist line fails to acknowledge Chalmers’ own naturalist but non-physicalist solution (it fails to acknowledge lots of other potential solutions too, so I’m not quite clear why Chalmers gets this special status at this point – though of course he did play a key role in defining the Hard Problem).

Number two’s pros and cons are mostly weaker versions of number one’s. It too is relatively well-focused. It does not identify the Hard Problem with the Explanatory Gap (that could be a con rather than a pro in my humble opinion). It fits fairly well in context and makes life relatively easy for traditional non-physicalists. It may yield a bit too much to the context of physics and it may be too narrow.

Number three has the advantage of focusing on the basics, and Dorsey thinks it gives a nice clear line between Hard and Easy problems. It provides a unifying approach – but it neglects the physical, which has always been central to discussion.

Number four provides a fully extended version of the problem, and makes sense of the literature by bringing in eliminativism. In a similar way it gives no-one a free pass; everyone has to address it. However, in doing so it may go beyond the bounds of a single problem and extend the issues to a wide swathe of philosophy of mind.

Dorsey thinks the answer is somewhere between 2 and 3; I’m more inclined to think it’s most likely between 1 and 2.

Let’s put aside the view that phenomenal consciousness cannot be explained. There are good arguments for that conclusion, but to me they amount to opting out of a game which is by no means clearly lost. So the problem is to explain how phenomenal consciousness arises. The explanation surely has to fit into some ontology, because we need to know what kind of thing phenomenal experience really is. My view is that the high-level differences between ontologies actually matter less than people have traditionally thought. Look at it this way: if we need an ontology, then it had better be comprehensive and consistent. Given those two properties, we might as well call it a monism, because it encompasses everything and provides one view, even if that one view is complex.

So we have a monism: but it might be materialism, idealism, spiritualism, neutral monism, or many others. Does it matter? The details do matter, but if we’ve got one substance it seems to me it doesn’t matter what label we apply. Given that the material world and its ontology is the one we have by far the best knowledge of, we might as well call it materialism. It might turn out that materialism is not what we think, and it might explain all sorts of things we didn’t expect it to deal with, but I can’t see any compelling reason to call our single monist ontology anything else.

So what are the details, and what ontology have we really got? I’m aware that most regulars here are pretty radical materialists, with some exceptions (hat tip to cognicious); people who have some difficulty with the idea that the cosmos has any contents besides physical objects; uncomfortable with the idea of ideas (unless they are merely conjunctions of physical objects) and even with the belief that we can think about anything that isn’t robustly physical (so much for mathematics…). That’s not my view. I’m also a long way from being a Platonist, but I do think the world includes non-physical entities, and that that doesn’t contradict a reasonable materialism. The world just is complex and in certain respects irreducible; probably because it’s real. Reduction, maybe, is essentially a technique that applies to ideas and theories: if we can come up with a simpler version that does the job, then the simpler version is to be adopted. But it’s a mistake to think that that kind of reduction applies to reality itself; the universe is not obliged to conform to a flat ontology, and it does not. At the end of the day – and I say this with the greatest regret – the apprehension of reality is not purely a matter of finding the simplest possible description.

I believe the somewhat roomier kind of materialism I tend to espouse corresponds generally with what we should recognise as the common sense view, and this yields what might be another conception of the Hard Problem…

  5. …arises from the physical (in a way consistent with common sense)

 

Do we care about where?

Do we care whether the mind is extended? The latest issue of the JCS features papers on various aspects of extended and embodied consciousness.

In some respects I think the position is indicated well in a paper by Michael Wheeler, which tackles the question of whether phenomenal experience is realised, at least in part, outside the brain. One reason I think this attempt is representative is its huge ambition. The general thesis of extension is that it makes sense to regard tools and other bodily extensions – the iPad in my hand now, but also simple notepads, and even sticks – as genuine participating contributors to mental events. This is sort of appealing if we’re talking about things like memory, or calculation, because recording data and doing sums are the kind of thing the iPad does. Even for sensory experience it’s not hard to see how the camera and Skype might reasonably be seen as extended parts of my perceptual apparatus. But phenomenal experience, the actual business of how something feels? Wheeler notes a strong intuition that this, at least, must be internal (here read as ‘neural’), and this surely comes from the feeling that while the activity of a stick or pad looks like the sort of thing that might be relevant to “easy problem” cognition, it’s hard to see what it could contribute to inner experience. Granted, as Wheeler says, we don’t really have any clear idea what the brain is contributing either, so the intuition isn’t necessarily reliable. Nevertheless it seems clear that tackling phenomenal consciousness is particularly ambitious, and well calculated to put the overall idea of extension under stress.

Wheeler actually examines two arguments, both based on experiments. The first, from Noë, relies on sensory substitution. Blind people fitted with apparatus that delivers optical data in tactile form begin to have what seems like visual experience (How do we know they really do? Plenty of scope for argument, but we’ll let that pass.) The argument is that the external apparatus has therefore transformed their phenomenal experience.

Now of course it’s uncontroversial that changing what’s around you changes the content of your experience, and changing the content changes your phenomenal experience. The claim here is that the whole modality has been transformed, and without a parallel transformation in the brain. It’s the last point that seems particularly vulnerable. Apparently the subjects adapt quickly to the new kit, too quickly for substantial ‘neural rewiring’, but what’s substantial in this context? There are always going to be some neural changes during any experience, and who’s to say that those weren’t the crucial ones?

The second argument is due to Kiverstein and Farina, who report that when macaques are trained to use rakes to retrieve food, the rakes are incorporated into their body image (as reflected in neural activity). This is easy enough to identify with – if you use a stick to probe the ground, you quickly start to experience the ‘feel’ of the ground as being at the end of the stick, not in your hand. Does it prove that your phenomenal experience is situated partly in the stick? Only in a sense that isn’t really the one required – we already felt it as being in the hand. We never experience tactile sensation as being in the brain: the anti-extension argument is merely that the brain is uniquely the substrate where the feeling is generated.

Wheeler, rather anti-climactically but I think correctly, thinks neither argument is successful; and that’s another respect in which I think his paper represents the state of the extended mind thesis; both ambitious and unproven.
Worse than that, though, it illustrates the point which kills things for me; I don’t really care one way or the other. Shall we call these non-neural processes mental? What if we do? Will we thereby get a new insight into how mental processes work? Not really, so why worry? The thesis that experience is external in a deeper sense, external to my mind, is strange and mind-boggling; the thesis that it’s external in the flatly literal sense of having some of its works outside the brain is just not that philosophically interesting.

OK, it’s true that what we know about the brain doesn’t seem to explicate phenomenal experience either, and perhaps doesn’t even look like the kind of thing that in principle might do so. But if there are ever going to be physical clues, that’s kind of where you’d bet on them being.

Is phenomenal experience extended? Well, I reckon phenomenal experience is tied to the non-phenomenal variety. Red qualia come with the objective perception of red. So if we accept the extended mind for the latter, we should probably accept it for the former. But please yourself; in the absence of any additional illumination, who cares where it is?

Time Travel Consciousness

Can you change your mind after the deed is done? Ezequiel Di Paolo thinks you can, sometimes. More specifically, he believes that acts can become intentional after they have already been performed. His theory, which seems to imply a kind of time travel, is set out in a paper in the latest JCS.

I think the normal view would be that for an act to be intentional, it must have been caused by a conscious decision on your part. Since causes come before effects, the conscious decision must have happened beforehand, and any thoughts you may have afterwards are irrelevant. There is a blurry borderline over what is conscious, of course; if you were confused or inattentive, if you were ‘on autopilot’ or you were following a hunch or a whim it may not be completely clear how consciously your action was considered.

There can, moreover, be what Di Paolo calls an epistemic change. In such a case the action was always intentional in fact, but you only realise that it was when you think about your own motives more carefully after the event. Perhaps you act in the heat of the moment without reflection; but when you think about it you realise that in fact what you did was in line with your plans and actually caused by them. Although this kind of thing raises a few issues, it is not deeply problematic in the same way as a real change. Di Paolo calls the real change an ontological one; here you definitely did not intend the action beforehand, but it becomes intentional retrospectively.

That seems disastrous on the face of it. If the intentionality of an act can change once, it can presumably change again, so it seems all intentions must become provisional and unreliable; the whole concept of responsibility looks in danger of being undermined. Luckily, Di Paolo believes that changes can only occur in very particular circumstances, and in such a way that only one revision can occur.

His view founds intentions in enactment rather than in linear causation; he has them arising in social interaction. The theory draws on Husserl and Heidegger, but probably the easiest way to get a sense of it is to consider the examples presented by Di Paolo. The first is from De Jaegher and centres, in fittingly continental style, around a cheese board.

De Jaegher is slicing himself a corner of Camembert and notices that his companion is watching in a way which suggests that he too, would like to eat cheese. DJ cuts him a slice and hands it over.
“I could see you wanted some cheese,” he remarks.
“Funny thing, that,” he replies, “actually, I wasn’t wanting cheese until you handed it to me; at that moment the desire crystallised and now I find I had been wanting cheese.”

In a corner of the room, Alice is tired of the party; the people are boring and the magnificent cheese board is being monopolised by philosophers enacting around it. She looks across at her husband and happens to scratch her wrist. He comes over.
“Saw you point at your watch,” he says, “yeah, we probably should go now. We’ve got the Stompers’ do to go to.”
Alice now realises that although she didn’t mean to point to her watch originally, she now feels the earlier intention is in place after all – she did mean to suggest they went.

At the Stompers’ there is dancing; the tango! Alice and Bill are really good, and as they dance Bill finds that his moves are being read and interpreted by Alice superbly; she conforms and shapes to match him before he has actually decided what to do; yet she has read him correctly and he realises that after the fact his intentions really were the ones she divined. (I sort of melded the examples.)

You see how it works? No, it doesn’t really convince me either. It is a viable way of looking at things, but it doesn’t compel us to agree that there was a real change of earlier intention. Around the cheese board there may always have been prior hunger, but I don’t see why we’d say the intention existed before accepting the cheese.

It is true, of course, that human beings are very inclined to confabulate, to make up stories about themselves that make their behaviour make sense, even if that involves some retrospective monkeying with the facts. It might well be that social pressure is a particularly potent source of this kind of thing; we adjust our motivations to fit with what the people around us would like to hear. In a loose sense, perhaps we could even say that our public motives have a social existence apart from the private ones lodged in the recesses of our minds; and perhaps those social ones can be adjusted retrospectively because, to put it bluntly, they are really a species of fiction.

Otherwise I don’t see how we can get more than an epistemic change. I’ve just realised that I really kind of feel like some cheese…

CEMI and meaning

Johnjoe McFadden has followed up the paper on his conscious electromagnetic information (CEMI) field which we discussed recently with another in the JCS – it’s also featured on MLU, where you can access a copy.

This time he boldly sets out to tackle the intractable enigma of meaning. Well, actually, he says his aims are more modest; he believes there is a separate binding problem which affects meaning and he wants to show how the CEMI field offers the best way of resolving it. I think the problem of meaning is one of those issues it’s difficult to sidle up to; once you’ve gone into the dragon’s lair you tend to have to fight the beast even if all you set out to do was trim its claws; and I think McFadden is perhaps drawn into offering a bit more than he promises; nothing wrong with that, of course.

Why, then, does McFadden suppose there is a binding problem for meaning? The original binding problem is to do with perception. All sorts of impulses come into our heads through different senses and get processed in different ways in different places and at different speeds. Yet somehow out of these chaotic inputs the mind binds together a beautifully coherent sense of what is going on, everything matching and running smoothly with no lags or failures of lip-synch. This smoothly co-ordinated experience is robust, too; it’s not easy to trip it up in the way optical illusions so readily derail our visual processes. How is this feat pulled off? There is a range of answers on offer, including global workspaces and suggestions that the whole thing is a misconceived pseudo-problem; but I’ve never previously come across the suggestion that meaning suffers a similar issue.

McFadden says he wants to talk about the phenomenology of meaning. After sitting quietly and thinking about it for some time, I’m not at all sure, on the basis of introspection, that meaning has any phenomenology of its own, though no doubt when we mean things there is usually some accompanying phenomenology going on. Is there something it is like to mean something? What these perplexing words seem to portend is that McFadden, in making his case for the binding problem of meaning, is actually going to stick quite closely with perception. There is clearly a risk that he will end up talking about perception; and perception and meaning are not at all the same. For one thing the ‘direction of fit’ is surely different; to put it crudely, perception is primarily about the world impinging on me, whereas meaning is about me pointing at the world.

McFadden gives five points about meaning. The first is unity; when we mean a chair, we mean the whole thing, not its parts. That’s true, but why is it problematic? McFadden talks about how the brain deals with impossible triangles and sees words rather than collections of letters, but that’s all about perception; I’m left not seeing the problem so far as meaning goes. The second point is context-dependence. McFadden quite rightly points out that meaning is highly context sensitive and that the same sequence of letters can mean different things on different occasions. That is indeed an interesting property of meaning; but he goes on to talk about how meanings are perceived, and how, for example, the meaning of “ball” influences the way we perceive the characters 3ALL. Again we’ve slid into talking about perception.

With the third point, I think we fare a bit better; this is compression, the way complex meanings can be grasped in a flash. If we think of a symphony, we think, in a sense, of thousands of notes that occur over a lengthy period, but it takes us no time at all. This is true, and it does point to some issue around parts and wholes, but I don’t think it quite establishes McFadden’s point. For there to be a binding problem, we’d need to be in a position where we had to start with meaning all the notes separately and then triumphantly bind them together in order to mean the symphony as a whole – or something of that kind, at any rate. It doesn’t work like that; I can easily mean Mahler’s eighth symphony (see, I just did it), of whose notes I know nothing, or his twelfth, which doesn’t even exist.

Fourth is emergence: the whole is more than the sum of its parts. The properties of a triangle are not just the properties of the lines that make it up. Again, it’s true, but the influence of perception is creeping in; when we see a triangle we know our brain identifies the lines, but we don’t know that in the case of meaning a triangle we need at any stage to mean the separate lines – and in fact that doesn’t seem highly plausible. The fifth and last point is interdependence: changing part of an object may change the percept of the whole, or I suppose we should be saying, the meaning. It’s quite true that changing a few letters in a text can drastically change its meaning, for example. But again I don’t see how that involves us in a binding problem. I think McFadden is typically thinking of a situation where we ask ourselves ‘what’s the meaning of this diagram?’ – but that kind of example invites us to think about perception more than meaning.

In short, I’m not convinced that there is a separate binding problem affecting meaning, though McFadden’s observations shed some interesting lights on the old original issue. He does go on to offer us a coherent view of meaning in general. He picks up a distinction between intrinsic and extrinsic information. Extrinsic information is encoded or symbolised according to arbitrary conventions – it sort of corresponds with derived intentionality – so a word, for example, is extrinsic information about the thing it names. Intrinsic information is the real root of the matter and it embodies some features of the thing represented. McFadden gives the following definition.

Intrinsic information exists whenever aspects of the physical relationships that exist between the parts of an object are preserved – either in the original object or its representation.

So the word “car” is extrinsic and tells you nothing unless you can read English. A model of a car, or a drawing, has intrinsic information because it reproduces some of the relations between parts that apply in the real thing, and even aliens would be able to tell something about a car from it (or so McFadden claims). It follows that for meaning to exist in the brain there must be ‘models’ of this kind somewhere. (McFadden allows a little bit of wiggle room; we can express dimensions as weights, say, so long as the relationships are preserved, but in essence the whole thing is grounded in what some others might call ‘iconic’ representation.) Where could that be? The obvious place to look is in the neurons, but although McFadden allows that firing rates in a pattern of neurons could carry the information, he doesn’t see how they can be brought together: step forward the CEMI field (though as I said previously I don’t really understand why the field doesn’t just smoosh everything together in an unhelpful way).
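Just to make the contrast concrete, here is a toy sketch of my own – nothing from McFadden’s paper, and the part names and numbers are invented – showing the difference between an arbitrary extrinsic token and a representation that preserves relations between parts, which is roughly what the definition above asks for.

```python
# Toy illustration (mine, not McFadden's): extrinsic vs intrinsic information.

# Extrinsic: an arbitrary token whose meaning depends entirely on convention.
extrinsic_representation = "car"

# Intrinsic: some physical relationships between parts are preserved in the
# representation itself (here, separations between invented part-pairs).
car_parts = {
    ("front_axle", "rear_axle"): 2.7,   # metres
    ("roof", "floor"): 1.5,
    ("left_side", "right_side"): 1.8,
}

def preserves_relations(original, model, tolerance=1e-6):
    """True if the model keeps the same ratios between part-relations as the
    original, i.e. it still carries intrinsic information about it."""
    pairs = list(original)
    base = original[pairs[0]] / model[pairs[0]]
    return all(abs(original[p] / model[p] - base) < tolerance for p in pairs)

# A 1:20 scale model changes every absolute value but keeps every ratio,
# so on this account it still 'says something' about the real car.
scale_model = {pair: value / 20 for pair, value in car_parts.items()}
print(preserves_relations(car_parts, scale_model))                   # True
print(preserves_relations(car_parts, {p: 1.0 for p in car_parts}))   # False
```

On this toy reading the scale model (and McFadden’s weights-for-dimensions case) counts as intrinsic because the ratios survive the change of medium, while the bare word carries nothing without the conventions of English.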

The overall framework here is sensible and it clearly fits with the rest of the theory; but there are two fatal problems for me. The first is that, as discussed above, I don’t think McFadden succeeds in making the case for a separate binding problem of meaning, getting dragged back by the gravitational pull of perception. We have the original binding problem because we know perception starts with a jigsaw kit of different elements and produces a slick unity, whereas all the worries about parts seem unmotivated when it comes to meaning. If there’s no new binding problem of meaning, then the appeal of CEMI as a means of solving it is obviously limited.

The second problem is that his account of meaning doesn’t really cut the mustard. This is unfair, because he never said he was going to solve the whole problem of meaning, but if this part of the theory is weak it inevitably damages the rest. The problem is that representations which work because they have some of the properties of the real thing don’t really work. For one thing a glance at the definition above shows it is inherently limited to things with parts that have a physical relationship. We can’t deal with abstractions at all. If I tell you I know why I’m writing this, and you ask me what I mean, I can’t tell you I mean my desire for understanding, because my desire for understanding does not have parts with a physical relationship, and there cannot therefore be intrinsic information about it.

But it doesn’t even work for physical objects. McFadden’s version of intrinsic information would require that when I think ‘car’ it’s represented as a specific shape and size. In discussing optical illusions he concedes at a late stage that it would be an ‘idealised’ car (that idealisation sounds problematic in itself); but I can mean ‘car’ without meaning anything ideal or particular at all. By ‘car’ I can in fact mean a flying vehicle with no wheels made of butter and one centimetre long  (that tiny midge is going to regret settling in my butter dish as he takes his car ride into the bin of oblivion courtesy of a flick from my butter knife), something that does not in any way share parts with physical relationships which are the same as any of those applying to the big metal thing in the garage.

Attacking that flank, as I say, probably is a little unfair. I don’t think the CEMI theory is going to get new oomph from the problems of meaning, but anyone who puts forward a new line of attack on any aspect of that intractable issue deserves our gratitude.

Pockett redux

Sue Pockett is back in the JCS with a new paper about her theory that conscious experience is identical with certain electromagnetic patterns generated by the brain. The main aim of the current paper is to offer a new hypothesis about what distinguishes conscious electromagnetic fields from non-conscious ones; basically it’s a certain layered shape, with negative charge on top followed by a positive region, a neutral one, and another positive layer.

Pockett sets the scene by suggesting that there are three main contenders when it comes to explaining consciousness: the thesis that consciousness is identical with certain patterns of neuron firing; functionalism, the view that consciousness is a process; and her own electromagnetic field theory. This doesn’t seem a very satisfactory framing. For one thing it doesn’t do much justice to the wild and mixed-up jungle which the field really is (or so it seems to me); second there isn’t really a sharp separation between the first two views she mentions – plenty of people who think consciousness is identical with patterns of neuronal activity would be happy to accept that it’s also a neuronal process. Third, it does rather elevate Pockett herself from the margins to the status of a major contender; forgiveable self-assertion, perhaps, although it may seem a little ungenerous that she doesn’t mention that others have also suggested that consciousness is an electromagnetic field (notably Johnjoe McFadden – although he believes the field is causally effective in brain processes, whereas Pockett, as we shall see, regards it as an epiphenomenon with no significant effects).

Pockett mentions some of the objections to her theory that have come up. One of the more serious ones is the point that the fields she is talking about are in fact tiny and very local: so local that there is no realistic possibility of the field from one neuron influencing the activity of another neuron. Pockett is happy to accept this; conscious experience doesn’t actually have the causal role we usually attribute to it, she says: it’s really just a kind of passenger accompanying mental processes whose course is determined elsewhere. She cites various people in support of this epiphenomenalist view, including Libet – she has some interesting things to say about his experiments (but doesn’t note that he too liked the idea of a mental field of some kind).

The new thesis about the shape of conscious fields appears to spring from observation of neuronal structure in the neocortex, and in particular the fact that in its fourth layer there are no pyramidal cell bodies. This is a feature which appears to be characteristic of the parts of the neocortex often associated with consciousness, and it implies that there is a gap or neutral region in the fields generated there, helping to yield the layered structure mentioned above. Pockett proposes a number of experiments which might support her view, some of which have already been carried out.

So what do we make of that? I see a number of problems.

First is the inherent implausibility of the overall theory. Why on earth would we identify conscious experience with tiny electromagnetic fields? I think the attraction of the theory comes from thinking about two kinds of consciousness, more or less the two kinds identified by Ned Block: the a-consciousness that does the work of useful, practical cogitation, and the p-consciousness which simply endows it with subjective feeling. Looked at from this angle it may be appealing to think that the actual firing of neurons is doing the a-consciousness while the subjectivity, the phenomenal experience, comes from the subsidiary electric buzz.

But when we look at it more closely, that doesn’t make any particular sense. The problem is that conscious experience doesn’t seem like anything in physics, whether a field or a lump of matter. If we’re going to identify it with something physical, we might as well identify it with the neurons and simplify our theory – dropping the field entity in obedience to Occam’s Razor. Nothing about the electrical fields helps to explain the nature of phenomenal experience in a way that would motivate us to adopt them.

Second, there’s the problem of Pockett’s epiphenomenalism. Epiphenomenalism is very problematic because if consciousness does not cause any of our actions, our reports and discussions about it cannot have been caused by it either. Pockett’s own account of consciousness could not have been caused by her actual conscious experience, and nothing she writes could be causally related to that actual experience: if she were a zombie with no consciousness, she would have written just the same words. This is a bit of an uncomfortable position in general, but it also means that Pockett’s ambitions to test her theory experimentally are doomed. You cannot test scientifically for the existence of a supposed entity that has no effects because it makes no observable difference whether it’s there or not. All of Pockett’s proposed experiments, on examination, test accessible aspects of her theory to do with how the fields are generated by neurons: none of them test whether consciousness is present or absent, and on her theory no such experiment is possible.

While those issues don’t technically prove that Pockett is wrong, they do seem to me to be bad problems that make her theory pretty unattractive.

 

Smelling secret harmonies

Is trilled smell possible? Ed Cooke and Erik Myin raise the question in the JCS. Why do we care? Well, for one thing smell has always tended to be the poor relation in discussions of conscious experience. The science of vision is so much better developed that seeing generally looks a more tractable area to attack, but arguably the discussion is somewhat lop-sided as a result; ‘seeing red’ isn’t necessarily a perfect epitome of all sensory experience, so a bit of clarification around smells might well be useful.

But the main point of asking the question is to test what Cooke and Myin call the independence thesis: the view that the experienced character of sensations includes a ‘something it is like’ over and above the gross physics of the business: that there’s an ultimate smelliness about smell that has nothing to do with the details of the sensory process. I would say there’s a range of possible positions here. Hardly anyone, I think, would say that the physics of perception is irrelevant to how the experience seems. We know that the wave structure of light and sound determines some of the characteristics of the experiences of vision and hearing, for example, and we know that smell is vaguer about location than vision because it depends on gases wafting around rather than sharply defined rays of light. But beyond that the consensus breaks down. Some would say that these physical characteristics are just the basics and the real excitement lies in the ineffable qualities of experience. Purple is a thing in itself, not a blank sensory token which would do equally well for the smell of coffee, they might say.

Some would go further and accept that the qualities of experience are very largely determined by the physics of the medium and sensory apparatus, but that there’s a certain something beyond that which doesn’t reduce to the simple physics.  Rigorous materialists will be tempted to go further still and take the view that however complex and indefinable our experiences may seem, they are fully determined by the qualities of the processes that give rise to them: this of course, amounts to denying the existence of ineffable qualia. (My own view, for what it’s worth, lies an infinitesimal distance short of this extreme.)

Cooke and Myin’s approach is to look at the consequences of the independence thesis. If it’s true then we ought to be able to transfer the forms of one sensory modality to another without it losing its identity. So, in sound we can have a trill, a very rapid alternation of two notes; if independence is true, we ought to be able to have trilled smells.

Before tackling the thought experiment in more detail, Cooke and Myin provide a brisk review of some of the relevant science, including some odd and interesting facts. The smell of pressure-cooked pork liver is made up of 179 different compounds; airflow is indispensable to smell (having your nose full of smelly stuff or your receptors stimulated produces no sensation unless there’s airflow); and so sniffing is more important than you may have thought. It turns out that human beings are pretty well incapable of identifying single components of a smell when there are more than three – so much for perfume designers – and perception of smell is also very heavily conditioned by previous experience (if you’ve encountered smell b together with lemony smells in the past, you’ll tend to think smell b has lemony notes even when the lemon smells are objectively absent). It looks as if we might each be working with a typical vocabulary of about 10,000 known smells, out of a theoretical 400,000 that the nose can distinguish: best estimates suggest that smell-space has a minimum of somewhere between 32 and 68 dimensions (as compared to human colour vision’s paltry 3).

Now we come to the thought-experiment itself.  It seems that Jesse Prinz has denied the possibility that a sound could become a smell merely by changing the structure of the experience (could the sound of a fire alarm ever become the smell of smoke?), so with fine daring, that is the first transition Cooke and Myin propose to anatomise in a thought-experiment.

Thought-experiments are always a little unsatisfactory because they don’t really force people to accept your conclusions in the way that a proper argument does. In this case, moreover, it seems to me there’s a particularly difficult trick to bring off because for the experiment to convince, Cooke and Myin want the transfer of properties to seem plausible; yet the more plausible it seems, the more plausible independence seems too. They want us to believe that they’re giving the best possible description of a transfer that could plausibly happen, in order to convince us that once we understand it, it’s not plausible that it’s really a transfer at all.

However, I think they do a commendable job. First, sounds have to become less distinct in their onset and direction; they have to be more like generalised hums which float around appearing and dispersing slowly (no good for rapid warnings any more). Then we have to imagine that we use our noses to detect sounds, that they only become perceptible when we breathe, and that sniffing or breathing deeply affects their intensity. We must imagine that it’s now a little more difficult to pick out single sounds when there are several at once: we might have to think about it for a few moments and take some extra sniffs.

That’s not too bad, but there are bigger problems. We’ve noted that smell space appears to be huge; Cooke and Myin suggest we could enlarge sound space the same way by imagining that the differences in sound are like the differences in timbre between musical instruments (though we have to suppose that we can readily distinguish the timbres of 10,000 or so different instruments). On the other hand, musical notes fit on an organised scale with perceptible relationships between different notes: smell doesn’t really have that, so we must drop it and assume that sounds are essentially monotonous. To round things off with behavioural factors, we should think of sound as no longer used for communication, but mainly for the evaluation of the acceptability of food, people and other biological entities; and we should imagine that sounds now have that characteristic of certain smells which allows them to evoke memories with particular potency.

If you’re still with the experiment, you’ll now have some intuitive idea of what it would be like if sounds had the structural and other characteristics of smells. But no, say Cooke and Myin: isn’t it apparent now that the sensations we’re talking about wouldn’t be sounds any more (in fact they would pretty much have become smells)? Isn’t it clear, in short, that in order to be trillable, smells would have to cease being smells? They go on to a further thought experiment in which smells become colours.

This is a valuable exercise, but as I say, thought experiments are not knock-down arguments, and I am willing to bet there will in fact be plenty of people who are prepared to go along with Cooke and Myin’s transition but insist at the end that the sensations they’re imagining are still in some way sounds, or at least have a core soundiness which makes them different from echt smells. (You notice how I criticise the weakness of thought experiments and yet here I am doing something worse – a kind of third-person thought-experiment where I invite agreement that in certain odd circumstances other people would think in a certain way.)

Personally I think some of the most interesting territory revealed here is not so much at the ends of the transition as in the middle. The experiment raises the possibility of mixed modalities never before imagined, chimerical experiences with some of the characteristics of two or more different standard senses. Not just that, either, because we can invent new physical constraints and structures and develop possible sensory modalities which have nothing whatever in common with any real ones, if our imagination permits.

This gives Cooke and Myin some possible new ammunition. Do all these imaginary new modalities get their own essence, their own qualia? If we mix smell and hearing in different ways, do we have to suppose that there are distinct qualia of, er, smearing and hell?

For that matter, what if we took a subject (all right, victim) through the transition of sound to smell; and then separately gave him back sound? Has he now got two distinct experiences of sound? Then if we move the new sound2 through the transition to smell, does he have two smells? And if we then give him back a separate sense of sound again? And so on.

I can’t help thinking it would be quite a Christmas present if we could have a sense with the spatial distinctness of vision, the structured harmonics of sound, and the immense dimensionality of smell. There would be some truly amazing symphonic odours to be painted.

Merry Christmas, all!

Pain without suffering

The latest issue of the JCS is all about pain.  Pain has always been tough to deal with: it’s subjective, not a thing out there in the world, and yet even the most hardline reductionist materialist can’t really dismiss it as an airy-fairy poetic delusion. We are all intensely concerned about pain, and the avoidance of it is among our most important moral and political projects. When you step back a bit, that seems remarkable: it’s easy to see more or less objective reasons why we should want to prevent disease, mitigate the effects of natural disasters, prevent wars and famines – harder to see why near or even at the top of the list of things we care about should be avoiding the occurrence of a particular kind of pattern of neuronal firing.

It’s hard even to say what it is. It seems to be a sensation, but a sensation of what? Of…. pain? Our other sensations give us information, about light, sound, temperature, and so on. Pain is often accompanied by feelings of pressure or heat or whatever, but it is quite distinct and separable from those impressions. In itself, the only thing pain tells us is: ‘you’re in pain’.  It seems sensible, therefore, to regard it as not a sensation in the same way as other sensations, but as being something like a kind of deferrable reflex: instead of just automatically moving our arm away from the hot pan it tells us urgently that we ought to do so. So it turns out to be something like a change in our dispositions or a change of weightings in our current projects.  That kind of account is appealing except for the single flaw of being evident nonsense.  When I’m in the dentist’s chair, I’m not feeling a change in my dispositions or anything that abstract, I’m feeling pain – that thing, that bad thing, you know what I mean, even though words fail me.

If it’s hard to describe, then, is pain actually the most undeniable of qualia? From some angles it looks like a quale, but qualia are supposed to have no causal effects on our behaviour, and that is exceptionally difficult to believe in the case of pain: if ever anything was directly linked to motivation, pain is it. Undeniability looks more plausible: pain is pre-eminently one of the things it seems we can’t be wrong about. I might be mistaken in my belief that my hand has just been sheared off by a saw: that’s a deduction about the state of the world based on the evidence of my senses; I don’t see how I could be wrong about the fact that I’m in agony because no reasoning is involved: I just am.

One of the contributors to the JCS might take issue with that, though. S. Benjamin Fink wants to present an approach to difficult issues of phenomenal experience and as his example he offers a treatment of pain which suggests it isn’t the simple unanalysable primitive we might think. In Fink’s view one of the dangers we need to guard against is the assumption that elements of experience we’ve always, as it happens, had together are necessarily a single phenomenon.  In particular, he wants to argue for the independence of pain and suffering/unpleasantness. Pain, it turns out, is not really bad after all (at least, not necessarily and in itself).

Fink offers several examples where pain and unpleasantness occur separately. An itch is unpleasant but not painful; the burning sensation produced by hot chillies is painful but not unpleasant (at least, so long as it occurs in the mouths of regular chili eaters, and not in their eyes or a neophyte’s mouth). These examples seem vulnerable to a counterargument based on mildness: itches aren’t described as pains just because they aren’t bad enough; and the same goes for spicy food in a mouth that has become accustomed to it. But Fink’s real clincher is the much more dramatic example of pain asymbolia. People with this condition still experience pain but don’t mind it. It’s not at all that they’re anaesthetised: they are aware of pain and can use it rationally to decide when some part of their body is in danger of damage, but they do so, as it were, coldly, and don’t mind needles being stuck in them for experimental purposes at all. Fink quotes a woman who underwent a lobotomy to cure continual pain: many years later she reported happily that the pain was still there: “In fact, it’s still agonising. But I don’t mind.”

These people are clearly exceptional, but it’s worth noting that even in normal people the link between nociception, the triggering of pain-sensing nerve-endings, and the actual experience of pain is by no means as invariable and straightforward as philosophers used to believe back in the days when some argued that the firing of c-fibres was identical with the occurrence of pain. Fink wants to draw a distinction between pain itself, a sensation, and suffering, the emotional response associated with it; it is the latter, in his view, which is the bad thing while pain itself is a mere colourless report. As a further argument he notes research which seems to show that when subjects are feeling compassion, some neural activity can be seen in areas which are normally active when the subjects themselves are feeling pain. The subjects, as it were, feel the pain of others, though obviously without actual nociception.

So is Fink right? I think many people’s first reaction might be that unpleasantness just defines pain, so that if you’re feeling something that isn’t unpleasant, we wouldn’t want to call it pain. We might say that people with asymbolia experience nocition (not sure that’s really a word but work with me on this) but not pain. Fink would say – he does say – that we ought to listen to what people say. Usage should determine our definition, he says; we should not make our definitions normatively control our usage. But he’s in a weak position here. If we are to pay attention to usage, then surely we should pay attention to the usage of the vast majority of people who regard pain as a unitary phenomenon, not to a small group of people with a most unusual set of experiences which might have tutored their perceptions in unreliable ways. I’m not sure it’s clear that asymbolics, in any case, insist that what they’re aware of is proper, echt pain – if they were asked, would they perhaps agree that it’s not pain in quite the ordinary sense?

I’m also not convinced that suffering, or unpleasantness, is really a well-defined entity in the way Fink requires. Unpleasantness may be a slight lapse of manners at a tea-party; you might suffer badly on the stock exchange while happily sipping a cocktail on your sun-lounger. I’m not sure there is a distinct complex of emotional affect we can label as suffering at all. And if there is, we’re back with the sheer implausibility of saying that that’s what the bad stuff is: when I hit my thumb with a hammer it doesn’t seem like a matter of affect to me, it seems very definitely like old-fashioned simple pain.

If we’re going to take that line, though, we have to account for Fink’s admittedly persuasive examples, in particular asymbolia.  Never mind now what we call it: how is it that these people can experience something they’re willing to call pain without minding it, if it isn’t that our concept of pain needs reform?

Well, there is one other property of pain which we’ve overlooked so far.  There is one obvious kind of pain which I can perceive without being disturbed at all – yours. We may indeed feel some sympathetic twinges for the pain of others, but a key point about pain is that it’s essentially ours. It sticks to us in a way nothing else does: it’s normal in philosophy to speak of the external world, but pain, perhaps uniquely, isn’t external in that sense: it’s in here with us.  That may be why it has another property, noted by Fink, of being very difficult to ignore.

So it may be that subjects with asymbolia are not lacking emotional affect, but rather any sense of ownership. The pain they feel is external, it’s not particularly theirs: like Mrs Gradgrind they feel that

‘… there’s a pain somewhere in the room, but I couldn’t positively say that I have got it.’

 

 

Unconscious free will

Does the idea of unconscious free will even make sense? Paula Droege, in the recent JCS, seems to think it might. Generally experiments like Libet’s famous ones, which seemed to show that decisions are made well before the decider is consciously aware of them, are considered fatal to free will. If the conscious activity came along only after the matter had been settled, it must surely have been powerless to affect it (there are some significant qualifications to this: Libet himself, for example, considered there was a power of last-minute veto which he called ‘free won’t’ – but still the general point is clear). If our conscious thoughts were irrelevant, it seems we didn’t have any say in the matter.

However, this view implies a narrow conception of the self in which unconscious processes are not really part of me and I only really consist of that entity that does all the talking. Yet in other contexts, notably in psychoanalysis, don’t we take the un- or sub-conscious to be more essential to our personality than the fleeting surface of consciousness, to represent more accurately what we ‘really’ want and feel? Droege, while conceding that if we take the narrow view there’s a good deal in the sceptical position, would prefer a wider view in which unconscious acts are valid examples of agency too. She would go further and bring in social influences (though it’s not entirely clear to me how the effects of social pressure can properly be transmuted into examples of my own free will), and she offers the conscious mind the consolation prize of being able to influence habits and predispositions which may in turn have a real causal influence on our actions.

I suppose there are several ways in which we exercise our agency. We perhaps tend to think of cases of conscious premeditation because they are the clearest, but in everyday life we just do stuff most of the time without thinking about it much, or very explicitly. Many of the details of our behaviour are left to ‘autopilot’, but in the great majority of cases the conscious mind would nevertheless claim these acts as its own. Did you stop at the traffic light and then move off again when it turned green? You don’t really remember doing it, but are generally ready to agree that you did. In unusual cases, we know that people sometimes even elaborate or confabulate spurious rationales for actions they didn’t really determine.

But it’s much more muddled than that. We do also at times seek to disown moral responsibility for something done when we weren’t paying proper attention, or where our rational responses were overwhelmed by a sudden torrent of emotion. Should someone who responds to the sight of a hated enemy by swerving to collide with the provoker be held responsible because the murderous act stems from emotions which are just as valid as cold calculation? Perhaps, but sometimes the opposite is taken to be the case, and the overwhelming emotion of a crime passionnel can be taken as an excuse. Then again few would accept the plea of the driver who says he shouldn’t be held responsible for an accident because he was too drunk to drive properly.

I think there may be an analogy with the responsibility held by the head of a corporation: the general rule is that the buck stops with the chief, even if the chief did not give orders for the particular action which subordinates have taken; in the same way we’re presumed by default to be responsible for what we do: but there are cases where control is genuinely and unavoidably lost, no matter what prudent precautions the chief may have put in place. There may be cases where the chief properly has full and sole responsibility; in other cases where the corporation has blundered on in pursuit of its own built-in inclinations it may be appropriate for the organization as a whole to accept blame for its corporate personality: and where confusion reigned for reasons beyond reasonable control, no responsibility may be assigned at all.

If that’s right, then Droege is on to something; but if there are two distinct grades of responsibility in play, there ought really to be two varieties of free will; the one exercised explicitly by the fully conscious me, and the other by ‘whole person’ me, in which the role of the conscious me, while perhaps not non-existent, is small and perhaps mostly indirect. This is an odd thought, but if, like Droege, we broadly accept that Libet has disproved the existence of the first variety of free will, it means we don’t have the kind we can’t help believing in, but do have another kind we never previously thought of – which seems even odder.