CEMI and meaning

Johnjoe McFadden has followed up the paper on his conscious electromagnetic information (CEMI) field, which we discussed recently, with another in the JCS – it’s also featured on MLU, where you can access a copy.

This time he boldly sets out to tackle the intractable enigma of meaning. Well, actually, he says his aims are more modest; he believes there is a separate binding problem which affects meaning and he wants to show how the CEMI field offers the best way of resolving it. I think the problem of meaning is one of those issues it’s difficult to sidle up to; once you’ve gone into the dragon’s lair you tend to have to fight the beast even if all you set out to do was trim its claws; and I think McFadden is perhaps drawn into offering a bit more than he promises; nothing wrong with that, of course.

Why, then, does McFadden suppose there is a binding problem for meaning? The original binding problem is to do with perception. All sorts of impulses come into our heads through different senses and get processed in different ways, in different places, and at different speeds. Yet somehow out of these chaotic inputs the mind binds together a beautifully coherent sense of what is going on, everything matching and running smoothly with no lags or failures of lip-synch. This smoothly co-ordinated experience is robust, too; it’s not easy to trip it up in the way optical illusions so readily derail our visual processes. How is this feat pulled off? There is a range of answers on offer, including global workspaces and suggestions that the whole thing is a misconceived pseudo-problem; but I’ve never previously come across the suggestion that meaning suffers a similar issue.

McFadden says he wants to talk about the phenomenology of meaning. After sitting quietly and thinking about it for some time, I’m not at all sure, on the basis of introspection, that meaning has any phenomenology of its own, though no doubt when we mean things there is usually some accompanying phenomenology going on. Is there something it is like to mean something? What these perplexing words seem to portend is that McFadden, in making his case for the binding problem of meaning, is actually going to stick quite closely with perception. There is clearly a risk that he will end up talking about perception; and perception and meaning are not at all the same. For one thing the ‘direction of fit’ is surely different; to put it crudely, perception is primarily about the world impinging on me, whereas meaning is about me pointing at the world.

McFadden gives five points about meaning. The first is unity; when we mean a chair, we mean the whole thing, not its parts. That’s true, but why is it problematic? McFadden talks about how the brain deals with impossible triangles and sees words rather than collections of letters, but that’s all about perception; I’m left not seeing the problem so far as meaning goes. The second point is context-dependence. McFadden quite rightly points out that meaning is highly context sensitive and that the same sequence of letters can mean different things on different occasions. That is indeed an interesting property of meaning; but he goes on to talk about how meanings are perceived, and how, for example, the meaning of “ball” influences the way we perceive the characters 3ALL. Again we’ve slid into talking about perception.

With the third point, I think we fare a bit better; this is compression, the way complex meanings can be grasped in a flash. If we think of a symphony, we think, in a sense, of thousands of notes that occur over a lengthy period, yet grasping it takes us no time at all. This is true, and it does point to some issue around parts and wholes, but I don’t think it quite establishes McFadden’s point. For there to be a binding problem, we’d need to be in a position where we had to start by meaning all the notes separately and then triumphantly bind them together in order to mean the symphony as a whole – or something of that kind, at any rate. It doesn’t work like that; I can easily mean Mahler’s eighth symphony (see, I just did it), of whose notes I know nothing, or his twelfth, which doesn’t even exist.

Fourth is emergence: the whole is more than the sum of its parts. The properties of a triangle are not just the properties of the lines that make it up. Again, it’s true, but the influence of perception is creeping in; when we see a triangle we know our brain identifies the lines, but we don’t know that in the case of meaning a triangle we need at any stage to mean the separate lines – and in fact that doesn’t seem highly plausible. The fifth and last point is interdependence: changing part of an object may change the percept of the whole, or I suppose we should be saying, the meaning. It’s quite true that changing a few letters in a text can drastically change its meaning, for example. But again I don’t see how that involves us in a binding problem. I think McFadden is typically thinking of a situation where we ask ourselves ‘what’s the meaning of this diagram?’ – but that kind of example invites us to think about perception more than meaning.

In short, I’m not convinced that there is a separate binding problem affecting meaning, though McFadden’s observations shed some interesting light on the old original issue. He does go on to offer us a coherent view of meaning in general. He picks up a distinction between intrinsic and extrinsic information. Extrinsic information is encoded or symbolised according to arbitrary conventions – it sort of corresponds with derived intentionality – so a word, for example, is extrinsic information about the thing it names. Intrinsic information is the real root of the matter and it embodies some features of the thing represented. McFadden gives the following definition.

Intrinsic information exists whenever aspects of the physical relationships that exist between the parts of an object are preserved – either in the original object or its representation.

So the word “car” is extrinsic and tells you nothing unless you can read English. A model of a car, or a drawing, has intrinsic information because it reproduces some of the relations between parts that apply in the real thing, and even aliens would be able to tell something about a car from it (or so McFadden claims). It follows that for meaning to exist in the brain there must be ‘models’ of this kind somewhere. (McFadden allows a little bit of wiggle room; we can express dimensions as weights, say, so long as the relationships are preserved, but in essence the whole thing is grounded in what some others might call ‘iconic’ representation.) Where could that be? The obvious place to look is in the neurons; but although McFadden allows that firing rates in a pattern of neurons could carry the information, he doesn’t see how they can be brought together: step forward the CEMI field (though as I said previously I don’t really understand why the field doesn’t just smoosh everything together in an unhelpful way).
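
To make the definition a little more concrete, here is a minimal illustrative sketch (my own toy example, not anything McFadden offers, and all the names in it are mine): it treats intrinsic information as the preservation of the ratios of distances between an object’s parts, so that a half-scale model passes the test while an arbitrary symbol like the word “car” would not.

    # A toy sketch, not McFadden's formalism: treat an object as named parts with
    # coordinates, and ask whether a representation preserves the pairwise distance
    # ratios between those parts -- the kind of "physical relationships" the
    # definition appeals to. All the specific names and numbers are illustrative.
    from itertools import combinations
    import math

    def distance_ratios(parts):
        """Pairwise distances between parts, normalised by the largest distance."""
        pairs = list(combinations(sorted(parts), 2))
        dists = [math.dist(parts[a], parts[b]) for a, b in pairs]
        longest = max(dists)
        return {pair: d / longest for pair, d in zip(pairs, dists)}

    def carries_intrinsic_info(original, representation, tol=0.05):
        """True if the representation preserves the original's relational structure."""
        orig, rep = distance_ratios(original), distance_ratios(representation)
        return all(abs(orig[p] - rep[p]) < tol for p in orig)

    # A crude 'car': relative positions of four parts, in arbitrary units.
    car = {"front_axle": (0.0, 0.0), "rear_axle": (3.0, 0.0),
           "roof": (1.5, 1.2), "boot": (3.2, 0.8)}
    # A half-scale model keeps all the ratios; the word "car" keeps none of them.
    model = {k: (x * 0.5, y * 0.5) for k, (x, y) in car.items()}
    print(carries_intrinsic_info(car, model))   # True

On this picture the scale model carries intrinsic information because scaling preserves every relationship – the kind of wiggle room McFadden explicitly allows – while the word carries only extrinsic information.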

The overall framework here is sensible and it clearly fits with the rest of the theory; but there are two fatal problems for me. The first is that, as discussed above, I don’t think McFadden succeeds in making the case for a separate binding problem of meaning, getting dragged back by the gravitational pull of perception. We have the original binding problem because we know perception starts with a jigsaw kit of different elements and produces a slick unity, whereas all the worries about parts seem unmotivated when it comes to meaning. If there’s no new binding problem of meaning, then the appeal of CEMI as a means of solving it is obviously limited.

The second problem is that his account of meaning doesn’t really cut the mustard. This is unfair, because he never said he was going to solve the whole problem of meaning, but if this part of the theory is weak it inevitably damages the rest. The problem is that representations which are supposed to work because they share some of the properties of the real thing don’t really do the job. For one thing, a glance at the definition above shows it is inherently limited to things with parts that stand in physical relationships to one another. It can’t deal with abstractions at all. If I tell you I know why I’m writing this, and you ask me what I mean, I can’t tell you I mean my desire for understanding, because my desire for understanding does not have parts with a physical relationship, and there cannot therefore be intrinsic information about it.

But it doesn’t even work for physical objects. McFadden’s version of intrinsic information would require that when I think ‘car’ it’s represented as a specific shape and size. In discussing optical illusions he concedes at a late stage that it would be an ‘idealised’ car (that idealisation sounds problematic in itself); but I can mean ‘car’ without meaning anything ideal or particular at all. By ‘car’ I can in fact mean a flying vehicle with no wheels, made of butter and one centimetre long (that tiny midge is going to regret settling in my butter dish as he takes his car ride into the bin of oblivion courtesy of a flick from my butter knife) – something whose parts do not stand in any of the physical relationships that hold between the parts of the big metal thing in the garage.

Attacking that flank, as I say, is probably a little unfair. I don’t think the CEMI theory is going to get new oomph from the problems of meaning, but anyone who puts forward a new line of attack on any aspect of that intractable issue deserves our gratitude.

Blind Brain

Picture: Blind Aquinas. Besides being the author of thoughtful comments here – and sophisticated novels, including the great fantasy series The Second Apocalypse – Scott Bakker has developed a theory which may dispel important parts of the mystery surrounding consciousness.

This is the Blind Brain Theory (BBT). Very briefly, the theory rests on the observation that from the torrent of information processed by the brain, only a meagre trickle makes it through to consciousness; and crucially that includes information about the processing itself. We have virtually no idea of the massive and complex processes churning away in all the unconscious functions that really make things work and the result is that consciousness is not at all what it seems to be. In fact we must draw the interesting distinction between what consciousness is and what it seems to be.

There are of course some problems about measuring the information content of consciousness, and I think it remains quite open whether in the final analysis information is what it’s all about. There’s no doubt the mind imports information, transforms it, and emits it; but whether information processing is of the essence so far as consciousness is concerned is still not completely clear. Computers input and output electricity, after all, but if you tried to work out their essential nature by concentrating on the electrical angle you would be in trouble. But let’s put that aside.

You might also at first blush want to argue that consciousness must be what it seems to be, or at any rate that the contents of consciousness must be what they seem to be: but that is really another argument. Whether or not certain kinds of conscious experience are inherently infallible (if it feels like a pain it is a pain), it’s certainly true that consciousness may appear more comprehensive and truthful than it is.

There are in fact reasons to suspect that this is actually the case, and Scott mentions three in particular; the contingent and relatively short evolutionary history of consciousness, the complexity of the operations involved, and the fact that it is so closely bound to unconscious functions. None of these prove that consciousness must be systematically unreliable, of course. We might be inclined to point out that if consciousness has got us this far it can’t be as wrong as all that. A general has only certain information about his army – he does not know the sizes of the boots worn by each of his cuirassiers, for example – but that’s no disadvantage: by limiting his information to a good enough set of strategic data he is enabled to do a good job, and perhaps that’s what consciousness is like.

But we also need to take account of the recursively self-referential nature of consciousness. Scott takes the view (others have taken a similar line) that consciousness is the product of a special kind of recursion which allows the brain to take into account its own operations and contents as well as the external world. Instead of simply providing an output action for a given stimulus, it can throw its own responses into the mix and generate output actions which are more complex, more detached, and in terms of survival, more effective. Ultimately only recursively integrated information reaches consciousness.

The limits to that information are expressed as information horizons or strangely invisible boundaries; like the edge of the visual field, the contents of conscious awareness have asymptotic limits – borders with only one side. The information always appears to be complete even though it may be radically impoverished in fact. This has various consequences, one of which is that because we can’t see the gaps, the various sensory domains appear spuriously united.

This is interesting, but I have some worries about it. The edge of the visual field is certainly phenomenologically interesting, but introspectively I don’t think the same kind of limit comes up with other senses. Vision is a special case: it has an orderly array of positions built in, so at some point the field has to stop arbitrarily; with sound the fading of farther sounds corresponds to distance in a way which seems merely natural; with smell position hardly comes into it, and with touch the built-in physical limits mean the issue of an information horizon doesn’t seem to arise. For consciousness itself spatial position seems to me at least to be irrelevant or inapplicable, so that the idea of a boundary doesn’t make sense. It’s not that I can’t see the boundary or that my consciousness seems illimitable; it’s more that the concept is radically inapplicable, perhaps even as a metaphor. Scott would probably say that’s exactly how it is bound to seem…

There are several consequences of our being marooned in an encapsulated informatic island whose impoverishment is invisible to us: I mentioned unity, and the powerful senses of a ‘now’ and of personal identity are other examples which Scott covers in more detail. It’s clear that a sense of agency and will could also be derived on this basis and the proposition that it is our built-in limitations that give rise to these powerfully persuasive but fundamentally illusory impressions makes a good deal of sense.

More worryingly, Scott proceeds to suggest that logic and even intentionality – aboutness – are affected by a similar kind of magic that likewise turns out to be mere conjuring. Again, systems we have no direct access to produce results which consciousness complacently but quite wrongly attributes to itself, and it is thereby deluded as to their reliability. It’s not exactly that they don’t work (we could again make the argument that we don’t seem to be dead yet, so something must be working); it’s more that our understanding of how or why they work is systematically flawed, and in fact, as we conceive of them, they are properly just illusions.

Most of us will, I think, want to stop the bus and get off at this point. What about logic, to begin with? Well, there’s logic and logic. There is indeed the unconscious kind we use to solve certain problems and which certainly is flawed and fallible; we know many examples where ordinary reasoning typically goes wrong in peculiar ways. But then there’s formal explicit logic, which we learn laboriously, which we use to validate or invalidate the other kind, and which surely happens in consciousness (if it doesn’t then really I don’t think anything does, and the whole matter descends into complete obscurity); it’s hard not to feel that we can see and understand how that works too clearly for it to be a misty illusion of competence.

What about intentionality? Well, for one thing to dispel intentionality is to cut off the branch on which you’re sitting: if there’s no intentionality then nothing is about anything and your theory has no meaning. There are some limits to how radically sceptical we can be. Less fundamentally, intentionality doesn’t seem to me to fit the pattern either; it’s true that in everyday use we take it for granted, but once we do start to examine it the mystery is all too apparent. According to the theory it should look as if it made sense, but on the contrary the fact that it is mysterious and we have no idea how it works is all too clear once we actually consider it. It’s as though the BBT is answering the wrong question here; it wants to explain why intentionality looks natural while actually being esoteric; what we really want to know is how the hell that esoteric stuff can possibly work.

There’s some subtle and surprising argumentation going on here and throughout which I cannot do proper justice to in a brief sketch, and I must admit there are parts of the case I may not yet have grasped correctly – no doubt through density (mine, not the exposition’s) but also I think perhaps because some of the latter conclusions here are so severely uncongenial. Even if meaning isn’t what I take it to be, I think my faulty version is going to have to do until something better comes along.

(BTW, the picture is supposed to be Thomas Aquinas, who introduced the concept of intentionality. The glasses are supposed to imply he’s blind, but somehow he’s just come out looking like a sort of cool monk dude. Sorry about that.)

 

When incredulities meet

Picture: qualintentionality. Sometimes mistakes can be more interesting than getting it right. Last week I was thinking about Pauen’s claim, reasonable enough, that belief in qualia is ultimately based on the intuitive sense that experience and physics are two separate realms. The idea that subjective stuff, the redness of red and so on, could be nothing but certain jigs danced by elementary particles, provokes a special incredulity. What’s the famous quote that sums that up, I thought? Something about…

This phenomenal quality is characteristic exclusively of mental phenomena. No physical phenomenon exhibits anything like it.

That captures the incredulity quite nicely. However, it dawned on me that it was Brentano, and he didn’t say ‘phenomenal quality’, he said ‘intentional inexistence’.

So it turns out we have two incredulities: one about qualia – subjectivity, or ‘what it is like’ – and one about intentionality – ‘aboutness’, or meaningfulness. To me, they have a very similar feel. So what do we say about that? I can see four reasonable possibilities.

  1. The resemblance is superficial: just because your mind boggles at two different things, it doesn’t mean the two things are identical.
  2. The incredulity is the same because it’s not specifically attached to qualia or intentionality, it’s just characteristic of mental phenomena of all kinds.
  3. The incredulity arises from intentionality, and qualia have it because they are intentional in nature.
  4. The incredulity arises from qualia, and intentionality has it because it arises out of qualia.

Although 1. is a very rational line to take, I can’t help feeling there is at least a little more to it than that. I don’t detect in myself a third incredulity – I don’t feel that nothing in subjectivity could possibly account for intentionality, or vice versa: that remains to be examined. And to put it no higher, it would be nice if we could tidy things up by linking the two problems, or even perhaps reducing one to the other. One inexplicable realm is bad enough.

I suppose 2. is what Brentano himself might have said. I don’t know whether we’d now be quite so quick to bestow the mystery on all mental phenomena: it doesn’t seem so implausible now that calculation or choosing a chess move might be nothing more than a special kind of physical activity. Moreover, if the problem doesn’t come from intentionality or qualia, we seem to have a third problem distinct from either, which is unwelcome, and a slight difficulty over the relationships. It doesn’t seem much of an answer to say that qualia seem strange and non-physical because they’re mental, unless we can go on to say a lot more about the spookiness of the mental and why it attaches to subjective experience the way it does.

I suppose we could go dualist here, and say that mental things exist in a separate domain in which both qualia and meanings participate. Isn’t something like that the main reason dualists are dualists, in fact? Taking that route involves the usual problems of explaining the interaction between worlds and indeed, giving some explanation of how the second world works. If we don’t give that latter explanation we seem only to have deferred the issues.

It might be easier if we said something along the lines of the mental being essentially a different level of explanation within a monist universe. For me, that looks like at least a starter so far as intentionality goes, but not for qualia. They’re not really a level of explanation – they’re not explanatory at all, quite the reverse. This brings out some interesting differences. In the case of qualia we already have a pretty full scientific account of how the senses work. We pretty much know what we’d reduce qualia to, if we’re in the market for a reduction. In a sense, the way is clear: there’s no work in the ordinary world that we need qualia to do, we just need an extra ineffable zing from somewhere, something we could arguably dispense with. For intentionality, things are much worse. There is no scientific account of meaning, we don’t really know how the brain deals with it, yet it is an essential part of our lives which can’t be dismissed as airy-fairy obscurantism. Curiously, of course, it’s qualia which are seen as the Hard Problem, while intentionality is part of the easy one. I suppose this is because when we contemplate intentionality, it doesn’t seem intractable. We may not know how it works, but it looks like the kind of thing we could get a grip on given a couple of insights; whereas there seems no way of scaling the smooth glassy wall presented by qualia.

Here’s a thought: if we’re saying that the two issues are different facets of the same problem, we ought to be able to apply the established qualia arguments to intentionality and still make sense, shouldn’t we? We can’t do it the other way because I don’t think there are any arguments for the existence of intentionality – nobody denies it.

So: the zombies go quite well, at first sight, anyway: we’d say that intentionality zombies (another kind – sorry) look and behave like us, but never actually mean what they say or understand the words they read. By some process they come out with appropriate responses, but in the same sort of sense as with the original zombies, the lights are all out.

Then instead of inverted spectra, we’d have inverted meanings. This is trickier, because there’s no tidy realm of meaning equivalent to the spectrum we can use – unless we co-opt the spectrum itself and say that when you mean red, you say blue… That doesn’t seem to work. Could we say that you actually mean the negation of everything you say, but for some reason act otherwise…? Maybe not.

Alright, let’s try Mary: Intentionality Mary was brought up without ever grasping the meaning of anything, but she understands everything there is to know about cognition… That doesn’t seem to make sense.

The problem is always that qualia have no causal effects, whereas meanings and intentions absolutely do: in fact if anything the problem with them is explaining their efficacy. Noting this, we can see that actually even the zombies didn’t really work: we can believe in people who behave like us without having real experience, but it’s surely nonsensical to say that our counterparts without desires or intentions would behave the same way as us, unless we’re really only talking about some kind of quale of desire or intention.

So if qualia and intentionality are radically different in some respects, the differences might provide at least a hint that ‘both mental’ is not a good enough explanation for the two incredulities.

What about option 3? Could it be that the incredulity we’re concerned with is basically attached to intentionality, and qualia only have it because they are intentional in nature? On the face of it, it seems quite reasonable to think that the redness we experience is about the rose, and that it’s the special magic aboutness that adds the extra ineffable quality. With other qualia, though, it’s not so clear. If you take happiness to be qualic, what is it about? We can of course be happy about particular things, but that’s distinct from just being happy. Moreover, there’s plenty of intentionality without qualia: an account book is suffused with intentionality. In fairness, that’s only the derived kind – accounts only mean what we make them mean – so perhaps it’s only the original intentionality of our thoughts that bestows qualicity? But with intentionality, we expect content. We believe and desire and think that x or y, with x or y being capable of expression in words: but it’s the whole point of qualia that there’s nothing like that available.

Option 4 says qualia are fundamental and intentionality springs from them. John Searle has actually put this view forward (in addition to his view that intentionality is the business of imposing directions of fit on directions of fit). The suggestion here is that, for example, the feeling of hunger is about food in some basic, primitive sense, and that it’s on similar qualia that all our meaningfulness is built. The example has a definite appeal, and there’s something attractive about rooting intentionality in the ‘three Fs’ of survival: making it not some celestial mystery but a particular slant that arises out of our nature as competitive and social biological creatures. But there are problems. We must remember that the quale of hunger has no causal effects: it’s only the functional counterpart that actually causes us to speak or seek food, so the connection between the quale and the expression of beliefs or desires is broken. We may suspect for other reasons that it’s not really the quale at work here: the sense in which hunger means food looks very like H. P. Grice’s natural meanings (those spots mean measles). We may suspect that this is really what makes the example seem to work, yet completely inanimate and non-qualic things can have this kind of meaning (those clouds mean rain), so although it is an excellent place to start looking for an analysis of intentionality, it doesn’t seem to be a matter of qualia.

Personally, I would reaffirm the view I’ve often set out before: I haven’t a clue what’s going on.

Thatter way to consciousness

Picture: Raymond Tallis. ‘Aping Mankind’ is a large-scale attack by Raymond Tallis on two reductive dogmas which he characterises as ‘Neuromania’ and ‘Darwinitis’. He wishes especially to refute the identification of mind and brain, and as he is an expert on the neurology of old age, his view of the scientific evidence carries a good deal of weight. He also appears to be a big fan of Parmenides, which suggests a good acquaintance with the philosophical background. It’s a vigorous, useful, and readable contribution to the debate.

Tallis persuasively denounces exaggerated claims made on the basis of brain scans, notably claims to have detected the ‘seat of wisdom’ in the brain. These experiments, it seems, rely on essentially fuzzy and ambiguous pictures, arrived at by subtraction under very simple experimental conditions, as the basis for claims of a profound and detailed understanding far beyond anything they could possibly support. This is no longer such a controversial debunking as it would have been a few years ago, but it’s still useful.

Of course, the fact that some claims to have reduced thought to neuronal activity are wrong does not mean that thought cannot nevertheless turn out to be neuronal activity, but Tallis pushes his scepticism a long way. At times he seems reluctant to concede that there is anything more than a meaningless correlation between the firing of neurons in the brain and the occurrence of thoughts in the mind. He does agree that possession of a working brain is a necessary condition for conscious thought, but he’s not prepared to go much further. Most people, I think, would accept that Wilder Penfield’s classic experiments, in which the stimulation of parts of the brain with an electrode caused an experience of remembered music in the subject, pretty much show that memories are encoded in the brain one way or another; but Tallis does not accept that neurons could constitute memories. For memory you need a history, you need to have formed the memories in the first place, he says: Penfield’s electrode was not creating but merely reactivating memories which already existed.

Tallis seems to start from a kind of Brentanoesque incredulity about the utter incompatibility of the physical and the mental. Some of his arguments have a refreshingly simple (or if you prefer, naive) quality: when we experience yellow, he points out, our nerve impulses are not yellow. True enough, but then a word need not be printed in yellow ink to encode yellowness either. Tallis quotes Searle offering a dual-aspect explanation: water is H2O, but H2O molecules do not themselves have watery properties; you cannot tell what the back of a house looks like from the front, although it is the same house. In the same way our thoughts can be neural activity without the neurons themselves resembling thoughts. Tallis utterly rejects this: he maintains that to have different aspects requires a conscious observer, so we’re smuggling in the very thing we need to explain. I think this is an odd argument. If things don’t have different aspects until an observer is present, what determines the aspects they eventually have? If it’s the observer, we seem to be slipping towards idealism or solipsism, which I’m sure Tallis would not find congenial. Based on what he says elsewhere, I think Tallis would say the thing determines its own aspects in that it has potential aspects which only get actualised when observed; but in that case didn’t it really sort of have those aspects all along? Tallis seems to be adopting the view that an appearance (say yellowness) can only properly be explained by another thing that already has that same appearance (is yellow). It must be clear that if we take this view we’re never going to get very far with our explanations of yellow or any other appearance.

But I think that’s the weakest point in a sceptical case which is otherwise fairly plausible. Tallis is Brentanoesque in another way in that he emphasises the importance of intentionality – quite rightly, I think. He suggests it has been neglected, which I think is also true, although we must not go overboard: both Searle and Dennett, for example, have published whole books about it. In Tallis’ view the capacity to think explicitly about things is a key unique feature of human mindfulness, and that too may well be correct. I’m less sure about his characterisation of intentionality as an outward arrow. Perception, he says, is usually represented purely in terms of information flowing in, but there is also a corresponding outward flow of intentionality. The rose we’re looking at hits our eye (or rather a beam of light from the rose does so), but we also, as it were, think back at the rose. Is this a useful way of thinking about intentionality? It has the merit of foregrounding it, but I think we’d need a theory of intentionality  in order to judge whether talk of an outward arrow was helpful or confusing, and no fully-developed theory is on offer.

Tallis has a very vivid evocation of a form of the binding problem, the issue of how all our different sensory inputs are brought together in the mind coherently. As normally described, the binding problem seems like lip-synch issues writ large: Tallis focuses instead on the strange fact that consciousness is united and yet composed of many small distinct elements at the same time.  He rightly points out that it’s no good having a theory which merely explains how things are all brought together: if you combine a lot of nerve impulses into one you just mash them. I think the answer may be that we can experience a complex unity because we are complex unities ourselves, but it’s an excellent and thought-provoking exposition.

Tallis’ attack on ‘Darwinitis’ takes on Cosmidoobianism, memes and the rest with predictable but entertaining vigour. Again, he presses things quite a long way. It’s one thing to doubt whether every feature of human culture is determined by evolution: Tallis seems to suggest that human culture has no survival value, or at any rate, had none until recently, too recently to account for human development. This is reminiscent of the argument put by Alfred Russel Wallace, Darwin’s co-discoverer of the principle of survival of the fittest: he later said that evolution could not account for human intelligence because a caveman could have lived his life perfectly well with a much less generous helping of it. The problem is that this leaves us needing a further explanation of why we are so brainy and cultured; Wallace, alas, ended up resorting to spiritualism to fill the gap (we can feel confident that Tallis, a notable public champion of disbelief, will never go that way). It seems better to me to draw a clear distinction between the capacity for human culture, which is wholly explicable by evolutionary pressure, and the contents of human culture, which are largely ephemeral, variable, and non-hereditary.

Tallis points out that some sleight of hand with vocabulary is not unknown in this area, in particular the tactic of the transferred epithet: a word implying full mental activity is used metaphorically – a ‘smart’ bomb is said to be ‘hunting down’ its target – and the important difference is covertly elided. He notes the particular slipperiness of the word ‘information’, something we’ve touched on before.

It is a weakness of Tallis’ position that he has no general alternative theory to offer in place of those he is attacking – consciousness remains a mystery (he sympathises with Colin McGinn’s mysterianism to some degree, incidentally, but reproves him for suggesting that our inability to understand ourselves might be biological). However, he does offer positive views of selfhood and free will, both of which he is concerned to defend. Rather than the brain, he chooses to celebrate the hand as a defining and influential human organ: opposable thumbs allow it to address itself and us: it becomes a proto-tool and this gives us a sense of ourselves as acting on the world in a tool-like manner. In this way we develop a sense of ourselves as a distinct entity and an agent, an existential intuition.  This is OK as far as it goes though it does sound in places like another theory of how we get a mere impression, or dare I say an illusion, of selfhood and agency, the very position Tallis wants to refute. We really need more solid ontological foundations. In response to critics who have pointed to the elephant’s trunk and the squid’s tentacles, Tallis grudgingly concedes that hands alone are not all you need and a human brain does have something to contribute.

Turning to free will, Tallis tackles Libet’s experiments (which seem to show that a decision to move one’s hand is actually made a measurable time before one becomes aware of it). So, he says, the decision to move the hand can be tracked back half a second? Well, that’s nothing: if you like you can track it back days, to when the experimental subject decided to volunteer; moreover, the aim of the subject was not just to move the hand, but also to help that nice Dr Libet, or to forward the cause of science. In this longer context of freely made decisions the precise timing of the RP (the readiness potential recorded in Libet’s experiments) is of no account.

To be free, according to Tallis, an act must be expressive of what the agent is, the agent must seem to be the initiator, and the act must deflect the course of events. If we are inclined to doubt that we can truly deflect the course of events, he again appeals to a wider context: look at the world around us, he says, and who can doubt that collectively we have diverted the course of events pretty substantially? I don’t think this will convert any determinists. The curious thing is that Tallis seems to be groping for a theory of different levels of description, or even a dual-aspect theory. I would have thought dual-aspect theories ought to be quite congenial to Tallis, as they represent a rejection of ‘nothing but’ reductionism in favour of an attempt to give all levels of interpretation parity of esteem, but alas it seems not.

As I say, there is no new theory of consciousness on offer here, but Tallis does review the idea that we might need to revise our basic ideas of how the world is put together in order to accommodate it. He is emphatically against traditional dualism, and he firmly rejects the idea that quantum physics might have the explanation, too. Panpsychism may have a certain logic, but it generates more problems than it solves. Instead he points again to the importance of intentionality and the need for a new view that incorporates it: in the end ‘Thatter’, his word for the indexical, intentional quality of the mental world, may be as important as matter.

Introspection

Picture: Socrates looking at himself. Introspection, the direct examination of the contents of our own minds, seems itself to be in many minds at the moment.  The latest issue of the Journal of Consciousness Studies was devoted to papers on introspection, marking the tenth anniversary of the publication of The View from Within, by Francisco Varela and Jonathan Shear (which was itself a special edition of the JCS); and now Eric Schwitzgebel has produced a new entry for the Stanford Encyclopedia of Philosophy.

The two accounts are of course quite different in some respects. The encyclopaedia entry is a careful, scholarly account, neutral and comprehensive; the JCS issue is openly a rallying-cry in support of a programme flowing from Varela’s work. The View from Within, it seems, called for an end to the ban on examination of lived experience; the JCS issue gives the impression that it was something of a milestone, though Schwitzgebel’s piece does not mention it (he does cite an earlier paper by Varela, once again in the JCS).

What’s all this about a ban? Well, back in the nineteenth century, psychologists had no fears about using introspective evidence; it was thought that a proper scientific effort would lead to an objectively verifiable kind of phenomenology. We should be able to classify the elements of mental experience and clarify how they worked together, just by examining what went on in our own heads. A great deal of work was done on all this (it was a great disappointment for me to discover, on first opening Brentano’s Psychology from an Empirical Standpoint, that it consisted almost entirely of this kind of thing, and that the only passage about intentional inexistence, the interesting issue, was the couple of paragraphs which I had already read as quotes in several other books). There was a gradual refinement of the methods involved, leading on to the great heyday of introspectionism, with Wundt and Titchener in the lead. Unfortunately, it became clear that the rival schools of introspectionism had begun to come up with results which in some respects were radically different and incompatible, and since our own introspections are by their nature private and unverifiable, all they could really do by way of settling the issues was to shout at each other.

This embarrassing impasse led to a reaction away from introspection and to the rise of behaviourism, which not only denied the usefulness of examining our inner experience, but actually went to the extreme of denying that there was any such thing as inner experience.  Behaviourism in its turn fell out of favour, but according to Varela there remained an instinctive distrust of introspection which continued to put people off it as an avenue of research. This is the ‘ban’ he wanted to see overturned.

Was there, is there, really a ban? Not exactly.  Apart from the most dogmatic of the behaviourists, no-one has ever tried to exclude introspection altogether. In recent times, introspective evidence has been widely accepted – the problem of qualia, thought by some to be the problem of consciousness, depends entirely on introspection. I think the real problem arises when we adopt special methods. In order to obtain consistent results, the old introspectionists thought extensive training was necessary. It wasn’t enough to sit and think for a bit; you had to have mastered certain skills of discrimination and perception. The methodological dangers involved in teaching your researchers what kind of thing they could legitimately look for are clear.

Unfortunately, it seems to be very much this kind of programme which the JCS authors would like to resurrect – or rather, have resurrected, and wish to gain acceptance and support for.  Once again we are going to need to learn how to introspect properly before our observations will be acceptable. What makes it worse for me is that the proposal seems to be tied up with NLP – Neuro-linguistic Programming.  I don’t know a great deal about NLP: it seems to be a protean doctrine which shares with the Holy Roman Empire the property of not really being any of the three things in its name – but for me it does nothing to render another trip down this particular blind alley more attractive.

Blandula: I don’t know about that, but aren’t they right to emphasise the potential value of introspection? Isn’t it the case that introspection is our only source of infallible information? Most of the things we perceive are subject to error and delusion, but we can’t, for example, be wrong about the fact that we are feeling pain, can we? That seems interesting to me. Our impressions of the outside world come to us through a chain of cause and effect, and at any stage errors or misinterpretations can creep in; but because introspection is direct, there’s no space for error to occur. You could well say it’s our only source of certain knowledge – isn’t that worth pursuing a little more systematically?

Bitbucket: Infallible? That is the exact reverse of the truth: in fact all introspections are false. Think about it. Introspection can only address the contents of consciousness, right? You can’t introspect the unconscious mental processes that keep you balanced, or regulate your heartbeat. But all of the contents of consciousness have intentionality – they’re all about things, yes? So to have direct experience of mental content is to be thinking about something else – not about the mental state itself, but about the thing it’s about! Now when we attempt to think directly about our own mental states, it follows that we’re not experiencing them in themselves – we’re experiencing a different mental state which is about them. In short, we’re necessarily imagining our mental states. Far from having direct contact, we are inevitably thinking about something we’ve just made up.

How wrong can I be?

Picture: Infallible. I was pondering the question of my own infallibility recently.  Not as the result of a sudden descent into megalomaniacal delusion – I was thinking only of the kinds of infallibility which, if they exist, are shared by all of us conscious beings.

Of course, I am only infallible on certain points: the question is, which? One prime candidate is my own existence. Quite a few people these days contend that the ‘self’ is an illusion, perhaps a fading shadow of the idea of the soul, or a kind of trick with smoke and mirrors. If we are to believe Descartes and his cogito, however, my own existence is the one thing I can’t doubt. Non-existent people don’t doubt anything, so to doubt my own existence is to prove it – though some would be quick to point out that a momentary doubt doesn’t amount to all that much in the way of a self, and that everything else remains to be argued for.

There are, in any case, some other issues on which I may be infallible.  I might be infallible about certain aspects of my current experience.  I could certainly be mistaken (dreaming, deluded, illuded) about the fact that this is a dagger I see before me, but I couldn’t be wrong about the fact that it seems to me there’s a dagger before me, could I? The world might be an illusion, but even illusions are things.

Or to make it even less debatable, I could be wrong about having stubbed my toe just now, but I couldn’t be wrong about the sensation of pain I’m currently feeling, surely? Perhaps I never stubbed my toe; perhaps I don’t have toes and am just a brain in a vat somewhere; perhaps none of the world really exists at all and this current thought, with all its implied memories and feelings,  is just a weird metaphysical wrinkle on the surface of universal nothingness; but that experience of pain is finally undeniable. The sheer immediacy of pain seems to mean that there just is no gap between me and it into which any misunderstanding could creep.

And yet… We’re familiar these days with the strange deformations of awareness which can result from brain injury: people who no longer recognise their arm as belonging to them; people who feel pain in an arm they haven’t got any more; people who are blind but insist, in the teeth of the evidence, that they can see. Isn’t it possible we could have people who believe themselves to be in pain when they aren’t? A competent hypnotist produces false and even absurd beliefs in subjects all the time, and could probably induce such a state without the least difficulty.

Well, a hypnotist could certainly induce someone to say they were in pain, and behave as if they were in pain; but would the subject be in real pain? Unfortunately, the only way we can get at people’s real, inner, subjective states is through their reports, so if a hypnotist has interfered with their ability to report, we’re a bit stuck. These days, it’s true, we could put someone in a scanner and have a look at their brain activity; but that would still beg some philosophical questions.

It’s tempting to say, look, I have real pain in my toe right this minute and that – that – can’t be a mistake. I grant you could fool some person into declaring themselves in pain falsely, and even believing it. We could imagine Mary the Pain Scientist, who has lived since birth in a state of analgesia; then we tickle her toes and tell her that that’s pain. Of course she believes it. But these cases of error somehow just don’t touch the infallibility of the real cases, like mine. Mary, and the other deluded pain-claimants, are simply using the wrong words – they’re calling something pain which isn’t really pain. But let’s put words aside; that thing I’m feeling now – that’s what I’m feeling, and I can’t be wrong.

If that argument succeeds, it seems to do so only by descending to a level where the concepts of truth and falsity no longer apply: of course there’s a sense in which a mere wordless sensation can’t be false. It can still be real, but if the reality of my feelings is all we’ve established, we don’t seem to have added anything of substance to the cogito. And indeed if we put ourselves back into Descartes’ shoes, it seems impossible to deny that some wicked demon could have convinced us that we were in pain when we aren’t – that’s more or less what happened to Pain Mary.

This is murky territory, but my own guess is that while we can’t feel pain without feeling pain, we can believe we’re in pain without feeling real pain, and for that matter we can feel pain while holding at some level the erroneous belief that we’re not feeling pain. The feeling itself may be veridical in some loose sense, but it can coexist with a higher-order belief about it which happens to be false.

I feel reasonably happy about that because it is clearly the case that we can have false beliefs about our own beliefs;  indeed, it’s pretty common.  I absolutely believe that bungee jumping is safe, until I step onto the platform, when I find that some part of my brain, or perhaps just my legs and stomach, hold other views entirely.

But the mention of believing something with one’s legs reveals that beliefs are slippery and polymorphous. To believe something I don’t need to do anything; I can have beliefs about things I never thought of (yesterday I believed that Kubla Khan’s smallest horse never used the St Malo ferry, and would have said so confidently and without hesitation if asked, though I never thought of the beast until now), and I can go on believing things while unconscious, perhaps indeed when dead (did Galileo stop believing that the Earth goes round the Sun when he died?).

But what about good old straightforward thoughts? Surely my thought that I am thinking that A is much less vulnerable than a belief that I believe that A? How could I be thinking about a nice cup of tea while thinking that I’m thinking about the ferry to St Malo? It’s certainly possible for the attention to wander from one topic to another by insensible degrees – but could I really be mistaken about what I’m thinking now? There seems to me to be a real problem there.

Now of course introspection is systematically unreliable in dealing with questions of this sort. Since introspective thoughts are pretty much by definition second order – ie, they’re thoughts about thoughts – introspection only gives me access to half of the comparison. I can only think about thoughts I’m thinking about. If my real thoughts were not what I thought I was thinking, how would I know any different?

It’s a fair point, but if my thoughts could be different from what I thought I was thinking, it would surely give rise to some very odd discrepancies between my behaviour and my expectations. In the case of beliefs, we’ve already noticed that discrepancies of a sort can arise, causing some minor inconsistencies in my behaviour over bungee-jumping, for example, as I stride confidently to the platform and then subside into panicky paralysis – but current thoughts seem a different and more difficult case to me.

How could that be so, though? Thoughts about my thoughts are second-order thoughts, and there is no magic connection between a thought and its target which guarantees accuracy.  It follows that there is simply no way of guaranteeing that second-order thoughts are not erroneous, and so there seems to be no way the infallibility I’m attributing to my own thoughts could arise.

The only answer I can see is that we must be wrong to assume that my knowledge of my own thoughts comes from second-order thoughts. The reason I know what I’m thinking is not that I have another true thought about what I’m thinking; instead, my knowledge just comes with the fact that this is what I’m thinking.

That radically contradicts the theory championed by some that conscious thoughts are exactly those for which there is a second order thought about the first thought. The error here, perhaps, has been to construct a model for our knowledge of our own thoughts which resembles our knowledge of the contents of a book. We know what that sentence says because we have a correct thought about it. But books have only derived intentionality (they only mean anything because someone has interpreted them as meaning something) whereas our thoughts have original intentionality (they mean stuff irrespective of what anyone thinks about it). It seems, then, that  the distinguishing property of a conscious thought is actually that the having of content and the knowing of content are inseparably the same.

I feel we’re frustratingly close to a new insight into the nature of intrinsic intentionality here. But I could be wrong.

Panpsychist Consciousness

Picture: star. The JCS has devoted its latest issue to definitions of consciousness. I thought I’d done reasonably well by quoting seventeen different views, but Ram L. P. Vimal lists forty, in what he acknowledges is not a comprehensive list. There is much to be said about all this – and Bill Faw promises a book-length treatment of the thoughts offered in his paper – but much of the ground has been trodden before.

A notable exception is David Skrbina’s panpsychist view. I have been accused in the past of being unfair to panpsychism, the belief that everything has some mental or experiential properties, and I remain unconvinced, but I was genuinely interested in hearing how a panpsychist would define consciousness. I think panpsychists, who believe awareness of some kind is a fundamental property of everything, face a particular challenge in defining exactly what consciousness is. For one thing they don’t enjoy the advantage which the rest of us have of being able to contrast the mindless stuff around us with mindful brains – for panpsychists there is no mindless stuff. But sometimes it’s coming at a problem from a strange new angle that yields useful insights.

Skrbina very briefly puts a case for panpsychism by noting that even rocks maintain their own existence with a degree of success and respond to the impacts and changes of their environment. This amounts, he suggests, to at least a simple form of experience, and hence of mind. But mind, he says, has two aspects: the inner phenomenal experience and an outward-facing intentional/relational aspect. Both of these are characteristic of the mental life of all things; he acknowledges at least a prima facie difficulty over what counts as a ‘thing’ here, but it includes such entities as atoms, rocks, tables, chairs, human beings, planets, and stars. In a footnote, Skrbina cites Plato and Aristotle as allies in thinking that stars might have a mental life, together with JBS Haldane’s view that the interior of stars might shelter minds superior to our own (perhaps not quite the same view – the existence of minds within stars doesn’t imply that the stars themselves have minds any more than the existence of minds in France suggests that France has its own mentality) and Roger Penrose, who has apparently speculated that neutron stars may sustain large quantum superpositions and thus conceivably a high intensity of consciousness.

Skrbina does not, of course, believe that rocks have minds exactly like our own, and suggests that material complexity corresponds with mental complexity, so that there is a spectrum of mental life from the feeble, unremembered glimmerings experienced by rocks all the way up to the fantastically elaborate and persistent mental evolutions hosted by human beings. This is convenient, since it allows Skrbina to find a place for subconscious and unconscious mental activity, which can be regarded as merely low-wattage mentality, whereas on the face of it panpsychism seems to make unconsciousness impossible. But, he says, there is a fundamental continuity, and this applies to consciousness as well as general mentality. Consciousness, he suggests, is the border, the interface between the inward and outward aspects of mentality, and since everything possesses both of those, everything must have at least a simple analogue of consciousness. It might be better, he suggests, if we could find a new word for this common property of consciousness and reserve the term itself for the human-style variety, since that would accord better with normal usage, but we are nevertheless talking about a spectrum of complexity, not two different things.

Skrbina’s exposition is brief, and he only claims to be providing a pointer toward a promising line of investigation. The idea of consciousness as the linkage or interface between inner and outer mentality does have some appeal. Skrbina’s distinction between inner and outer corresponds approximately to a widely popular view that there are two basic kinds of consciousness: the phenomenal, experiential variety and the rest. Famously this kind of distinction is embodied in David Chalmers’ hard/easy problem distinction and Ned Block’s a-consciousness and p-consciousness, to name only two examples; the pieces in the JCS provide other variations. Why not regard consciousness as the thing that brings them together, even if you’re not attracted by panpsychism?

Well, I don’t know. For one thing I think the non-phenomenal half of the mind is usually short-changed.  Besides phenomenal awareness, we ought also to distinguish between agency, intentionality, and understanding, all large mysteries which really deserve better than being smooshed together. We could still see consciousness as the thing that brings it all together, perhaps, but that doesn’t exactly appeal either: it seems too much like saying that the human body is the thing that holds our bones and muscles together; better to say it’s the thing they help to make up.

I must confess – and this perhaps is unfair – to being put off by Skrbina’s description of consciousness as the luminous upper layer of the mind. Apart from the slightly confusing geometry (it’s the upper layer of the mind, but between the inner and outer parts), I don’t see why it’s luminous, and that sounds a bit like the resort to poetry sometimes adopted by theologians who have run out of cogent points to make. Still, he deserves at least a couple of cheers for offering a new approach, something he rightly advocates.

Making up a mind

Picture: Walter J Freeman.

In “How Brains Make Up Their Minds”, Walter J Freeman set out to tackle the ancient issue of free will, but he also addressed many of the other fundamental issues about consciousness and thought. The book has an unusually even balance of neurology and philosophy, with similar ideas coming into play in both fields.

Freeman uses some familiar terms from philosophy in rather unusual ways. For him, intentionality does not mean “aboutness” in the way it generally does to contemporary philosophers. Instead, it means the property of being directed towards some object or goal. So in his eyes, the food-seeking behaviour of simple organisms displays intentionality even though there is no question of their having plans or acting deliberately. In his view, this is Thomas Aquinas’s original meaning, and a key foundation for consciousness. Aquinas is credited with a number of important insights which Freeman has incorporated into his own views.

‘Meaning’ also has a special sense in Freeman’s account, quite distinct from simple information. Freeman speaks of meaning in what sound at first like worryingly poetic or metaphorical terms, but the point is really a matter of context. Meaning, in Freeman’s sense, is given to mere information when it is set in the context of an individual mind, with all its multiple life experiences, history and characteristic quirks. This matches his views about the neuronal operation of the brain, where rather than discrete bits of data working their way through a program, he sees a mathematically chaotic pattern of activity in which the whole system comes to bear. Each brain has its own individual pattern of basic activity which provides a unique context in which meanings develop. It follows that meanings are, strictly, unique to particular individuals, and in stark contrast to Putnam’s famous doctrine, meanings are only in the head. Consciousness is the high-level pattern which brings the whole thing together, and emotional and moral self-control may well be a matter of how closely overall consciousness binds lower and more partial patterns of activity.
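
Freeman’s appeal to chaos can be illustrated with a standard toy example (the logistic map, which is my illustration and not anything drawn from Freeman’s own models): in a chaotic regime, two trajectories that start almost identically soon diverge completely, which is the sense in which each brain’s ongoing activity supplies a context that no other brain exactly reproduces.

    # A minimal illustration, not Freeman's model: the logistic map in its chaotic
    # regime (r = 3.9) shows how near-identical starting points yield trajectories
    # that soon become uncorrelated, so the 'same' input lands in different contexts.
    def logistic_trajectory(x0, r=3.9, steps=40):
        """Iterate x -> r * x * (1 - x) and return the whole trajectory."""
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1 - xs[-1]))
        return xs

    brain_a = logistic_trajectory(0.200000)
    brain_b = logistic_trajectory(0.200001)   # a tiny difference in starting state

    for step in (0, 10, 20, 30, 40):
        print(step, round(brain_a[step], 4), round(brain_b[step], 4))
    # The two trajectories track each other at first and then come apart entirely.

Nothing hangs on this particular map, of course; the point is just that in a chaotic regime history matters, which is one way of seeing why Freeman can say that meanings are only in the head.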

In Freeman’s view, the process of perception and action is not a two-way matter of inputs and outputs, but a one-way street of action on the world. Many people would agree that perception is an active business, not just a passive reception of impressions, but the idea that it consists entirely of action on the world sounds bonkers, and in fact Freeman does allow the outside world to influence our behaviour – the point is that all the ideas and interpretations bubble up from inside, and merely survive or fail to survive the impact of external reality. This is rather reminiscent of Edelman’s views and his analogy with the immune system, but Freeman draws from it the rather bleak conclusion that we are all, in a sense, in a state of solipsistic isolation from the world.

This creates a special problem for Freeman: how is it that we ever manage to overcome our isolation and communicate with each other? He sees social interaction as playing a mediating role, with processes rather similar to those which go on in the brain operating in the wider social sphere – though not so similar that society itself becomes a conscious entity. Freeman has a number of ideas about signalling and communication to offer, but I’m not sure he really manages to deal with the underlying problem, and it remains a weak spot in the theory.

What, then, is the answer on free will? At times Freeman seems to assert free will, while at others he seems to deny it: in fact he ultimately considers the question an ill-formed one. We see actions in terms of freedom or determinism because we are wedded to linear causality, even though we know that it does not provide an adequate view of the world, and that circular causality and more sophisticated perspectives are often more appropriate. For the swirling chaotic patterns of the brain, dynamic analysis is a more appropriate tool than those based on linear causation, and when we apply it correctly, the old opposition between free and determined is no longer an issue.

There’s something in this, undoubtedly, but it doesn’t dispel the sense of mystery which has made the old debate such a long-running philosophical staple. There does seem, intuitively at least, to be something uniquely odd about the causality of our minds, but if the problem arose entirely from a lack of dynamic analysis, we should surely find some of the causality of the normal world more mysterious than we do?