Posts tagged ‘consciousness’

Does recent research into autism suggest real differences between male and female handling of consciousness?

Traditionally, autism has been regarded as an overwhelmingly male condition. Recently, though, it has been suggested that the gender gap is not as great as it seems; it’s just that most women with autism go undiagnosed. How can that be? It is hypothesised that some sufferers are able to ‘camouflage’ the symptoms of their autism, and that this suppression of symptoms is particularly prevalent among women.

‘Camouflaging’ means learning normal social behaviours such as giving others appropriate eye contact, interpreting and using appropriate facial expressions, and so on. But surely, that’s just what normal people do? If you can learn these behaviours, doesn’t that mean you’re not autistic any more?
There’s a subtle distinction here between doing what comes naturally and doing what you’ve learned to do. Camouflaging, on this view, requires significant intellectual resources and continuous effort, so that while camouflaged sufferers may lead apparently normal lives, they are likely to suffer other symptoms arising from the sheer mental effort they have to put in – fatigue, depression, and so on.

Measuring the level of camouflaging – which is by definition intended to be undetectable – raises some obvious methodological challenges. Now a study reported in the invaluable BPS Research Digest claims to have pulled it off. The research team used scanning and other approaches, but their main tool was to contrast two well-established methods of assessing autism – the Autism Diagnostic Observation Schedule on the one hand and the Autism Spectrum Quotient on the other. While the former assesses ‘external’ qualities such as behaviour, the latter measures ‘internal’ ones. Putting it crudely, they measure what you actually do and what you’d like to do respectively. The ratio between the two scores yields a measure of how much camouflaging is going on, and in brief the results confirm that camouflaging is present to a far greater degree in women. In fact I think the results may be understated: all of the subjects were people who had already been diagnosed with autism, and that criterion may have selected women who were atypically low in camouflaging, precisely because women who do a lot of camouflaging would be more likely to escape diagnosis.
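
To make the method concrete, here is a minimal sketch of how a discrepancy measure of this kind might be computed. The post above describes a ratio of the two assessments; purely for illustration, the sketch below uses a simple difference of standardised scores instead, and every name, norm, and number in it is hypothetical rather than taken from the study.

```python
# Hypothetical sketch of a camouflaging index: the contrast between an
# 'external' behavioural assessment (ADOS-style) and an 'internal'
# self-report measure (AQ-style). All names, norms, and numbers are
# invented for illustration, not the study's actual procedure.

def standardise(score: float, mean: float, sd: float) -> float:
    """Express a raw score as a z-score against a reference sample."""
    return (score - mean) / sd

def camouflage_index(ados_raw: float, aq_raw: float,
                     ados_norms: tuple, aq_norms: tuple) -> float:
    """How much more autistic the 'internal' picture looks than the
    'external' behaviour suggests: a bigger value means more camouflaging."""
    external = standardise(ados_raw, *ados_norms)  # what you actually do
    internal = standardise(aq_raw, *aq_norms)      # what you report
    return internal - external

# Someone whose observed behaviour is near-typical but whose self-reported
# traits are strongly autistic gets a high index.
print(camouflage_index(ados_raw=8, aq_raw=38,
                       ados_norms=(10.0, 3.0), aq_norms=(20.0, 7.0)))
```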

The research is obviously welcome because it might help improve diagnosis rates for women, but also because a more equal rate of autism for men and women perhaps helps to dispel the idea, formerly popular but (to me at least) rather unpalatable, that autism is really little more than typical male behaviour exaggerated to unacceptable levels.

It does not eliminate the tricky gender issues, though. One thing that surely needs to be taken into account is the possibility that accommodating social pressures is something women do more of anyway. It is plausible (isn’t it?) that even among typical people, women devote more effort to social signals, listening and responding, laughing politely at jokes, and so on. It might be that there is a base level of activity among women devoted to ‘camouflaging’ normal irritation, impatience, and boredom which is largely absent in men, a baseline against which the findings for people with autism should properly be assessed. It might have been interesting to test a selection of non-autistic people, if that makes sense in terms of the tests. How far the general underlying difference, if it exists, might be due to genetics, socialisation, or other factors is a thorny question.

At any rate, it seems to me inescapable that what the study is really attempting to do with its distinction between outward behaviour and inward states is to measure the difference between unconscious and conscious control of behaviour. That subtle distinction, mentioned above, between natural and learned behaviour is really the distinction between things you don’t have to think about, and things that require constant, conscious attention. Perhaps we might draw a parallel of sorts with other kinds of automatic behaviour. Normally, a lot of things we do, such as walking, require no particular thought. All that stuff, once learned, is taken care of by the cerebellum and the cortex need not be troubled (disclaimer: I am not a neurologist). But people who have their cerebellum completely removed can apparently continue to function: they just have to think about every step all the time, which imposes considerable strain after a while. However, there’s no special organ analogous to the cerebellum that records our social routines, and so far as I know it’s not clear whether the blend of instinct and learning is similar either.

In one respect the study might be thought to open up a promising avenue for new therapeutic approaches. If women can, to a great extent, learn to compensate consciously for autism, and if that ability is to a great extent a result of social conditioning, then in principle one option would be to help male autism sufferers achieve the same thing through applying similar socialisation. Although camouflaging evidently has its downsides, it might still be a trick worth learning. I doubt if it is as simple as that, though; an awful lot of regimes have been tried out on male sufferers and to date I don’t believe the levels of success have been that great; on the other hand it may be that pervasive, ubiquitous social pressure is different in kind from training or special regimes and not so easily deployed therapeutically. The only way might be to bring up autistic boys as girls…

If we take the other view, that women’s ability or predisposition to camouflage is not the result of social conditioning, then we might be inclined to look for genuine ‘hard-wired’ differences in the operation of male and female consciousness. One route to take from there would be to relate the difference to the suggested ability of women (already a cornerstone of gender-related folk psychology) to multi-task more effectively, dividing conscious attention without significant loss to the efficiency of each thread. Certainly one would suppose that having to pay constant attention to detailed social cues would have an impact on the ability to pay attention to other things, but so far as I know there is no evidence that women with camouflaged autism are any worse at paying attention generally than anyone else. Perhaps this is a particular skill of the female mind, while if men pay that much attention to social cues, their ability to listen to what is actually being said is sensibly degraded?

The speculative ice out here is getting thinner than I like, so I’ll leave it there; but in all seriousness, any study that takes us forward in this area, as this one seems to do, must be very warmly welcomed.

A somewhat enigmatic report in the Daily Telegraph says that this problem has been devised by Roger Penrose, who claims that chess programs can’t solve it but humans can get a draw or even a win.

I’m not a chess buff, but it looks trivial. Although Black has an immensely powerful collection of pieces, they are all completely bottled up and immobile, apart from three bishops. Since these are all on white squares, the White king is completely safe from them if he stays on black squares. Since the white pawns fencing in Black’s pieces are all on black squares, the bishops can’t do anything about them either. It looks like a drawn position already, in fact.

I suppose Penrose believes that chess computers can’t deal with this because it’s a very weird situation which will not be in any of their reference material. If they resort to combinatorial analysis the huge number of moves available to the bishops is supposed to render the problem intractable, while the computer cannot see the obvious consequences of the position the way a human can.
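
To see why a brute-force search might struggle, a bit of back-of-envelope arithmetic helps; the numbers below are my own illustrative assumptions, not Penrose’s, but they show how quickly the tree of possible continuations explodes when there are many legal bishop moves at every turn, while the human shortcut (‘keep the king on black squares’) settles the matter at a glance.

```python
# Back-of-envelope arithmetic (assumed numbers, not Penrose's): with roughly
# thirty legal moves available each turn, most of them bishop shuffles, a
# brute-force search visits an astronomical number of positions within a
# handful of moves.

branching_factor = 30  # assumed average number of legal moves per ply

for depth in (4, 8, 12, 16):
    positions = branching_factor ** depth
    print(f"search depth {depth:2d}: ~{positions:.1e} positions")
```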

I don’t know whether it’s true that all chess programs are essentially that stupid, but it is meant to buttress Penrose’s case that computers lack some quality of insight or understanding that is an essential property of human consciousness.

This is all apparently connected with the launch of a new Penrose Institute, whose website is here, but appears to be incomplete. No doubt we’ll hear more soon.

Give up on real comprehension, says Daniel Dennett in From Bacteria to Bach and Back: commendably honest but a little discouraging to the reader? I imagine it set out like the warning above the gates of Hell: ‘Give up on comprehension, you who turn these pages’.  You might have to settle for acquiring some competences.

What have we got here? In this book, Dennett is revisiting themes he developed earlier in his career, retelling the grand story of the evolution of minds. We should not expect big new ideas or major changes of heart (but see last week’s post for one important one). It would have been good at this stage to have a distillation; a perfect, slim little volume presenting a final crystalline formulation of what Dennett is all about. This isn’t that. It’s more like a sprawling Greatest Hits album. In there somewhere are the old favourites that will always have the fans stomping and shouting (there’s some huffing and puffing from Dennett about how we should watch out because he’s coming for our deepest intuitions with scary tools that may make us flinch, but honestly by now this stuff is about as shocking and countercultural as your dad’s Heavy Metal collection); but we’ve also got unnecessary cover versions of ideas by other people, some stuff that was never really a hit in the first place, and unfortunately one or two bum notes here and there.

And, oh dear, another attempt to smear Descartes by association. First Dennett energetically promoted the phrase “Cartesian theatre” – so hard some people suppose that it actually comes from Descartes; now we have ‘Cartesian gravity’, more or less a boo-word for any vaguely dualistic tendency Dennett doesn’t like. This is surely not good intellectual manners; it wouldn’t be quite so bad if it wasn’t for the fact that Descartes actually had a theory of gravity, so that the phrase already has a meaning. Should a responsible professor be spreading new-minted misapprehensions like this? Any meme will do?

There’s a lot about evolution here that rather left me cold (but then I really, really don’t need it explained again, thanks); I don’t think Dennett’s particular gift is for popularising other people’s ideas and his take seems a bit dated. I suspect that most intelligent readers of the book will already know most of this stuff and maybe more, since they will probably have kept up with epigenetics and the various proposals for extension of the modern synthesis that have emerged in the current century (and the fascinating story of viral intervention in human DNA, surely a natural for anyone who likes the analogy of the selfish gene?), none of which gets any recognition here (I suppose in fairness this is not intended to be full of new stuff). Instead we hear again the tired and in my opinion profoundly unconvincing story about how leaping (‘stotting’) gazelles are employing a convoluted strategy of wasting valuable energy as a lion-directed display of fitness. It’s just an evasive manoeuvre, get over it.

For me it’s the most Dennettian bits of the book that are the best, unsurprisingly. The central theme that competence precedes, and may replace, comprehension is actually well developed. Dennett claims that evolution and computation both provide ‘inversions’ in which intentionless performance can give the appearance of intentional behaviour. He has been accused of equivocating over the reality of intentionality, consciousness and other concepts, but I like his attitude over this and his defence of the reality of ‘free-floating rationales’ seems good to me. It gives us permission to discuss the ‘purposes’ of things without presupposing an intelligent designer whose purposes they are, and I’m completely with Dennett when he argues that this is both necessary and acceptable. I’ve suggested elsewhere that talking about ‘the point’ of things, and in a related sense, what they point to, is a handy way of doing this. The problem for Dennett, if there is one, is that it’s not enough for competence to replace comprehension often; he needs it to happen every time by some means.

Dennett sets out a theoretical space with ‘bottom-up vs top-down’, ‘random vs directed search’, and ‘comprehension’ as its axes; at one corner of the resulting cube we have intentionless structures like a termite colony; at the other we have fully intentional design like Gaudi’s church of the Sagrada Familia, which to Dennett’s eye resembles a termite colony. Gaudi’s perhaps not the ideal choice here, given his enthusiasm for natural forms; it makes Dennett seem curiously impressed by the underwhelming fact that buildings by an architect who borrowed forms from the natural world turn out to have forms resembling those found in nature.

Still, the space suggests a real contrast between the mindless processes of evolution and deliberate design, which at first sight looks refreshingly different and unDennettian. It’s not, of course; Dennett is happy to embrace that difference so long as we recognise that the ‘deliberate design’ is simply a separate evolutionary process powered by memes rather than genes.

I’ve never thought that memes, Richard Dawkins’s proposed cultural analogue of genes, were a particularly helpful addition to Dennett’s theoretical framework, but here he mounts an extended defence of them. One of the worst flaws in the theory as it stands – and there are several – is its confused ontology. What are memes – physical items of culture or abstract ideas? Dennett, as a professional philosopher, seems more sensitive to this problem than some of the more metaphysically naive proponents of the meme. He provides a relatively coherent vision by invoking the idea that memes are ‘tokens’; they may take all sorts of physical forms – written words, pictures, patterns of neuronal firing – but each form is a token of a particular way of behaving. The problem here is that anything at all can serve as a token of any meme; we only know that a given noise or symbol tokens a specific meme because of its meaning. There may be – there certainly are – some selective effects that bite on the actual form of particular tokens. A word that is long or difficult to pronounce is more likely to be forgotten. But the really interesting selections take place at the level of meanings; that requires a much more complex level of explanation. There may still be mechanisms involved that are broadly selective if not exactly Darwinian – I think there are – but I believe any move up to this proper level of complexity inevitably edges the simplistic concept of the meme out of play.

The original Dawkinsian observation that the development of cultural items sometimes resembles evolution was sound, but it implicitly called for the development of a general theory which, in spite of some respectable attempts, has simply failed to appear. Instead, the supporters of memetics, perhaps trapped by the insistent drumbeat of the Dawkinsian circus, have tended to insist instead that it’s all Darwinian natural selection. How a genetic theory can be Darwinian when Darwin never heard of genes is just one of the lesser mysteries here (should we call it ‘Mendelian’ instead? But Darwin’s name is the hooray word here just as Descartes’ is the cue for boos). Among the many ways in which cultural selection does not resemble biological evolution, Dennett notes the cogent objection that there is nothing that corresponds to DNA; no general encoding of culture on which selection can operate. One of the worst “bum notes” in the book is Dennett’s strange suggestion that HTML might come to be our cultural DNA. This is, shall we say, an egregious misconception of the scope of a text mark-up language.

Anyway, it’s consciousness we’re interested in (check out Tom Clark’s thoughtful take here) and the intentional stance is the number the fans have been waiting for; cunningly kept till last by Dennett. When we get there, though, we get a remix instead of the classic track. Here he has a new metaphor, cunningly calculated to appeal to the youth of today; it’s all about apps. Our impression of consciousness is a user illusion created by our gift for language; it’s like the icons that activate the stuff on your phone. You may object that a user illusion already requires a user, but hang on. Your ability to talk about yourself is initially useful for other people, telling them useful stuff about your internal states and competences, but once the system is operating, you can read it too. It seems plausible to me that something like that is indeed an important part of the process of consciousness, though in this version I felt I had rather lost track of what was illusory about it.

Dennett moves on to a new attack on qualia. This time he offers an explanation of why people think they occur – it’s because of the way we project our impressions back out into the world, where they may seem unaccountable. He demonstrates the redundancy of the idea by helpfully sketching out how we could run up a theory of qualia and noting how pointless they are. I was nodding along with this. He suggests that qualia and our own sense of being the intelligent designers in our own heads are the same kind of delusion, simply applied externally or internally. I suppose that’s where the illusion is.

He goes on to defend a sort of compatibilist view of free will and responsibility; another example of what Descartes might be tempted to label Dennettian Equivocation, but as before, I like that posture and I’m with him all the way. He continues with a dismissal of mysterianism, leaning rather more than I think is necessary on the interesting concept of joint understanding, where no one person gets it all perfectly, but nothing remains to be explained, and takes a relatively sceptical view of the practical prospects for artificial general intelligence, even given recent advances in machine learning. Does Google Translate display understanding (in some appropriate sense)? No, or rather, not yet. This is not Dennett as we remember him; he speaks disparagingly of the cheerleaders for AI and says that “some of us” always discounted the hype. Hmm. Daniel Dennett, long-time AI sceptic?

What’s the verdict, then? Some good stuff in here, but as always true fans will favour the classic album; if you want Dennett at his best, the aficionado will still tell you to buy Consciousness Explained.


A blast of old-fashioned optimism from Owen Holland: let’s just build a conscious robot!

It’s a short video so Holland doesn’t get much chance to back up his prediction that if you’re under thirty you will meet a conscious robot. He voices feelings which I suspect are common on the engineering and robotics side of the house, if not usually expressed so clearly: why don’t we just get on and put a machine together to do this? Philosophy, psychology, all that airy fairy stuff is getting us nowhere; we’ll learn more from a bad robot than twenty papers on qualia.

His basic idea is that we’re essentially dealing with an internal model of the world. We can now put together robots with an increasingly good internal modelling capability (and we can peek into those models); why not do that and then add new faculties and incremental improvements till we get somewhere?

Yeah, but. The history of AI is littered with projects that started like this and ran into the sand. In particular the idea that it’s all about an internal model may be a radical mis-simplification. We don’t just picture ourselves in the world, we picture ourselves picturing ourselves. We can spend time thinking just about the concept of consciousness – how would that appear in a model? In general our conscious experience is complex and elusive, and cannot accurately be put on a screen or described on a page (though generations of novelists have tried everything they can think of).

The danger when we start building is that the first step is wrong and already commits us to a wrong path. Maybe adding new abilities won’t help. Perhaps our ability to model the world is just one aspect of a deeper and more general faculty that we haven’t really grasped yet; building in a fixed spatial modeller might turn us away from that right at the off. Instead of moving incrementally towards consciousness we might end up going nowhere (although there should be some pretty cool robots along the way).

Still, without some optimism we’ll certainly get nowhere anyway.

You can’t build experience out of mere information. Not, at any rate, the way the Integrated Information Theory (IIT) seeks to do it. So says Garrett Mindt in a forthcoming paper for the JCS.

‘Information’ is notoriously a slippery term, and much depends on how you’re using it. Commonly people distinguish the everyday meaning, which makes information a matter of meaning or semantics, and the sense defined by Shannon, which is statistical and excludes meaning, but is rigorous and tractable.

It is a fairly common sceptical claim that you cannot get consciousness, or anything like intentionality or meaning, out of Shannon-style information. Mindt describes in his paper a couple of views that attack IIT on similar grounds. One is by Cerullo, who says:

‘Only by including syntactic, and most importantly semantic, concepts can a theory of information hope to model the causal properties of the brain…’

The other is by Searle, who argues that information, correctly understood, is observer dependent. The fact that this post, for example, contains information depends on conscious entities interpreting it as such, or it would be mere digital noise. Since information, defined this way, requires consciousness, any attempt to derive consciousness from it must be circular.

Although Mindt is ultimately rather sympathetic to both these cases, he says they fail because they assume that IIT is working with a Shannonian conception of information: but that’s not right. In fact IIT invokes a distinct causal conception of information as being ‘a difference that makes a difference’. A system conveys information, in this sense, if it can induce a change in the state of another system. Mindt likens this to the concept of information introduced by Bateson.

Mindt makes the interesting point that Searle and others tend to carve the problem up by separating syntax from semantics; but it’s not clear that semantics is required for hard-problem style conscious experience (in fact I think the question of what, if any, connection there is between the two is puzzling and potentially quite interesting). Better to use the distinction favoured by Tononi in the context of IIT, between extrinsic information – which covers both syntax and semantics – and intrinsic, which covers structure, dynamics, and phenomenal aspects.

Still, Mindt finds IIT vulnerable to a slightly different attack. Even with the clarifications he has made, the theory remains one of structure and dynamics, and physicalist structure and dynamics just don’t look like the sort of thing that could ever account for the phenomenal qualities of experience. There is no theoretical bridge arising from IIT that could take us across the explanatory gap.

I think the case is well made, although unfortunately it may be a case for despair. If this objection stands for IIT then it most likely stands for all physicalist theories. This is a little depressing because, from one point of view, non-physicalist theories look unattractive. From that perspective, coming up with a physical explanation of phenomenal experience is exactly the point of the whole enquiry; if no such explanation is possible, no decent answer can ever be given.

It might still be the case that IIT is the best theory of its kind, and that it is capable of explaining many aspects of consciousness. We might even hope to squeeze the essential Hard Problem to one side. What if IIT could never explain why the integration of information gives rise to experience, but could explain everything, or most things, about the character of experience? Might we not then come to regard the Hard Problem as one of those knotty tangles that philosophers can mull over indefinitely, while the rest of us put together a perfectly good practical understanding of how mind and brain work?

I don’t know what Mindt would think about that, but he rounds out his case by addressing one claimed prediction of IIT; namely that if a large information complex is split, the attendant consciousness will also divide. This looks like what we might see in split-brain cases, although so far as I can see, nobody knows whether split-brain patients have two separate sets of phenomenal experiences, and I’m not sure there’s any way of testing the matter. Mindt points out that the prediction is really a matter of ‘Easy Problem’ issues and doesn’t help otherwise: it’s also not an especially impressive prediction, as many other possible theories would predict the same thing.

Mindt’s prescription is that we should go back and have another try at that definition of information; without attempting to do that himself, he smiles on dual aspect theories. I’m afraid I am left scowling at all of them; as always in this field the arguments against any idea seem so much better than the ones for.


Emotions like fear are not something inherited from our unconscious animal past. Instead they arise from the higher-order aspects that make human thought conscious. That (if I’ve got it right) is the gist of an interesting paper by LeDoux and Brown.

A mainstream view of fear (the authors discuss fear in particular as a handy example of emotion, on the assumption that similar conclusions apply to other emotions) would make it a matter of the limbic system, notably the amygdala, which is known to be associated with the detection of threats. People whose amygdalas have been destroyed become excessively trusting, for example – although as always things are more complicated than they seem at first and the amygdalas are much more than just the organs of ‘fear and loathing’. LeDoux and Brown would make fear a cortical matter, generated only in the kind of reflective consciousness possessed by human beings.

One immediate objection might be that this seems to confine fear to human beings, whereas it seems pretty obvious that animals experience fear too. It depends, though, what we mean by ‘fear’. LeDoux and Brown would not deny that animals exhibit aversive behaviour, that they run away or emit terrified noises; what they are after is the actual feeling of fear. LeDoux and Brown situate their concept of fear in the context of philosophical discussion about phenomenal experience, which makes sense but threatens to open up a larger can of worms – nothing about phenomenal experience, including its bare existence, is altogether uncontroversial. Luckily I think that for the current purposes the deeper issues can be put to one side; whether or not fear is a matter of ineffable qualia we can probably agree that humanly conscious fear is a distinct thing. At the risk of begging the question a bit we might say that if you don’t know you’re afraid, you’re not feeling the kind of fear LeDoux and Brown want to talk about.

On a traditional view, again, fear might play a direct causal role in behaviour. We detect a threat, that causes the feeling of fear, and the feeling causes us to run away. For LeDoux and Brown, it doesn’t work like that. Instead, while the threat causes the running away, that process does not in itself generate the feeling of fear. Those sub-cortical processes, along with other signals, feed into a separate conscious process, and it’s that that generates the feeling.

Another immediate objection therefore might be that the authors have made fear an epiphenomenon; it doesn’t do anything. Some, of course, might embrace the idea that all conscious experience is epiphenomenal; a by-product whose influence on behaviour is illusory. Most people, though, would find it puzzling that the brain should go to the trouble of generating experiences that never affect behaviour and so contribute nothing to survival.

The answer here, I think, comes from the authors’ view of consciousness. They embrace a higher-order theory (HOT). HOTs (there are a number of variations) say that a mental state is conscious if there is another mental state in the same mind which is about it – a Higher Order Representation (HOR); or to put it another way, being conscious is being aware that you’re aware. If that is correct, then fear is a natural result of the application of conscious processes to certain situations, not a peculiar side-effect.

HOTs have been around for a long time: they would always get a mention in any round-up of the contenders for an explanation of consciousness, but somehow it seems to me they have never generated the little bursts of excitement and interest that other theories have enjoyed. LeDoux and Brown suggest that other theories of emotion and consciousness either are ‘first-order’ theories explicitly, or can be construed as such. They defend the HOT concept against one of the leading objections, which is that it seems to be possible to have HORs of non-existent states of awareness. In Charles Bonnet syndrome, for example, people who are in fact blind have vivid and complex visual hallucinations. To deal with this, the authors propose to climb one order higher; the conscious awareness, they suggest, comes not from the HOR of a visual experience but from the HOR of a HOR: a HOROR, in fact. There clearly is no theoretical limit to the number of orders we can rise to, and there’s some discussion here about when and whether we should call the process introspection.

I’m not convinced by HOTs myself. The authors suggest that single-order theory implies there can be conscious states of which we are not aware, which seems sort of weird: you can feel fear and not know you’re feeling fear? I think there’s a danger here of equivocating between two senses of ‘aware’. Conscious states are states of awareness, but not necessarily states we are aware of; something is in awareness if we are conscious; but that’s not to say that the something includes our awareness itself. I would argue, contrarily, that there must be states of awareness with no HOR; otherwise, what about the HOR itself? If HORs are states of awareness themselves, each must have its own HOR, and so on indefinitely. If they’re not, I don’t see how the existence of an inert representation can endow the first-order state with the magic of consciousness.

My intuitive unease goes a bit wider than that, too. The authors have given a credible account of a likely process, but on this account fear looks very like other conscious states. What makes it different – what makes it actually fearful? It seems possible to imagine that I might perform the animal aversive behaviour, experience a conscious awareness of the threat and enter an appropriate conscious state without actually feeling fear. I have no doubt more could be said here to make the account more plausible, and in fairness LeDoux and Brown could well reply that nobody has a knock-down account of phenomenal experience, and that their version offers rather more than some.

In fact, even though I don’t sign up for a HOT I can actually muster a pretty good degree of agreement nonetheless. Nobody, after all, believes that higher order mental states don’t exist (we could hardly be discussing this subject if they didn’t). In fact, although I think consciousness doesn’t require HORs, I think they are characteristic of its normal operation and in fact ordinary consciousness is a complex meld of states of awareness at several different levels. If we define fear the way LeDoux and Brown do, I can agree that they have given a highly plausible account of how it works without having to give up my belief that simple first-order consciousness is also a thing.


Scott Bakker’s alien consciousnesses are back, and this time it’s peer-reviewed.  We talked about their earlier appearance in the Three Pound Brain a while ago, and now a paper in the JCS sets out a new version.

The new paper foregrounds the idea of using hypothetical aliens as a forensic tool for going after the truth about our own minds; perhaps we might call it xenophenomenology. That opens up a large speculative space, though it’s one which is largely closed down again here by the accompanying assumption that our aliens are humanoid, the product of convergent evolution. In fact, they are now called Convergians, instead of the Thespians of the earlier version.

In a way, this is a shame. On the one hand, one can argue that to do xenophenomenology properly is impractical; it involves consideration of every conceivable form of intelligence, which in turn requires an heroic if not god-like imaginative power which few can aspire to (and which would leave the rest of us struggling to comprehend the titanic ontologies involved anyway). But if we could show that any possible mind would have to be x, we should have a pretty strong case for xism about human beings. In the present case not much is said about the detailed nature of the Convergian convergence, and we’re pretty much left to assume that they are the same as us in every important respect. This means there can be no final reveal in which – aha! – it turns out that all this is true of humans too! Instead it’s pretty clear that we’re effectively talking about humans all along.

Of course, there’s not much doubt about the conclusion we’re heading to here, either: in effect the Blind Brain Theory (BBT). Scott argues that as products of evolution our minds are designed to deliver survival in the most efficient way possible. As a result they make do with a mere trickle of data and apply cunning heuristics that provide a model of the world which is quick and practical but misleading in certain important respects. In particular, our minds are unsuited to metacognition – thinking about thinking – and when we do apply our minds to themselves the darkness of those old heuristics breeds monsters: our sense of our selves as real, conscious agents and the hard problems of consciousness.

This seems to put Scott in a particular bind so far as xenophenomenology is concerned. The xenophenomenological strategy requires us to consider objectively what alien minds might be like; but Scott’s theory tells us we are radically incapable of doing so. If we are presented with any intelligent being, on his view those same old heuristics will kick in and tell us that the aliens are people who think much like us. This means his conclusion that Convergians would surely suffer the same mental limitations as us appears as merely another product of faulty heuristics, and the assumed truth of his conclusion undercuts the value of his evidence.

Are those heuristics really that dominant? It is undoubtedly true that through evolution the brains of mammals and other creatures took some short cuts, and quite a few survive into human cognition, including some we’re not generally aware of. That seems to short-change the human mind a bit though; in a way the whole point of it is that it isn’t the prisoner of instinct and habit. When evolution came up with the human brain, it took a sort of gamble; instead of equipping it with good fixed routines, it set it free to come up with new ones, and even over-ride old instincts. That gamble paid off, of course, and it leaves us uniquely able to identify and overcome our own limitations.

If it were true that our views of human conscious identity were built in by the quirks of our heuristics, surely those views would be universal, but they don’t seem to be. Scott suggests that, for example, the two realms of sky and earth naturally give rise to a sort of dualism, and the lack of visible detail in the distant heavens predisposes Convergians (or us) to see it as pure and spiritual. I don’t know about that as a generalisation across human cultures (didn’t the Greeks, for one thing, have three main realms, with the sea as the third?). More to the point, it’s not clear to me that modern western ways of framing the problems of the human mind are universal. Ancient Egyptians divided personhood into several souls, not just one. I’ve been told that in Hindu thought the question of dualism simply never arises. In Shinto the line between the living and the material is not drawn in quite the Western way. In Buddhism human consciousness and personhood have been taken to be illusions for many centuries. Even in the West, I don’t think the concept of consciousness as we now debate it goes back very far at all – probably no earlier than the nineteenth century, with a real boost in the mid-twentieth (in Italian and French I believe one word has to do duty for both ‘consciousness’ and ‘conscience’, although we mustn’t read too much into that). If our heuristics condemn us to seeing our own conscious existence in a particular way, I wouldn’t have expected that much variation.

Of course there’s a difference between what vividly seems true and what careful science tells us is true; indeed if the latter didn’t reveal the limitations of our original ideas this whole discussion would be impossible. I don’t think Scott would disagree about that; and his claim that our cognitive limitations have influenced the way we understand things is entirely plausible. The question is whether that’s all there is to the problems of consciousness.

As Scott mentions here, we don’t just suffer misleading perceptions when thinking of ourselves; we also have dodgy and approximate impressions of physics. But those misperceptions were not Hard problems; no-one had ever really doubted that heavier things fell faster, for example. Galileo sorted several of these basic misperceptions out simply by being a better observer than anyone previously, and paying more careful attention. We’ve been paying careful attention to consciousness for some time now, and arguably it just gets worse.

In fairness that might rather short-change Scott’s detailed hypothesising about how the appearance of deep mystery might arise for Convergians; those, I think, are the places where xenophenomenology comes close to fulfilling its potential.


Is consciousness a matter of entropy in the brain? An intriguing paper by R. Guevara Erra, D. M. Mateos, R. Wennberg, and J.L. Perez Velazquez says

normal wakeful states are characterised by the greatest number of possible configurations of interactions between brain networks, representing highest entropy values.

What the researchers did, broadly, is identify networks in the brain that were operative at a given time, and then work out the number of possible configurations these networks were capable of. In general, conscious states were associated with states with high numbers of possible configurations – high levels of entropy.

That makes me wrinkle my forehead a bit because it doesn’t fit well with my layman’s grasp of the concept of entropy. In my mind entropy is associated with low levels of available energy and an absence of large complex structure. Entropy always increases, but can decrease locally, as in the case of the complex structures of life, by paying for the decrease with a bigger increase elsewhere; typically by using up available energy. On this view, conscious states – and high levels of possible configurations – look like they ought to be low entropy; but evidently the reverse is actually the case. The researchers also used the Lempel-Ziv measure of complexity, one with strong links to information content, which is clearly an interesting angle in itself.
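
To make the ‘number of possible configurations’ idea concrete, here is a minimal sketch of the sort of calculation involved, on the assumption (mine, not necessarily the paper’s exact formula) that the entropy is the logarithm of the number of ways the observed count of synchronised pairs could be chosen from all the possible pairs of recorded signals.

```python
# A minimal sketch, assuming entropy is taken as the log of the number of
# ways the observed number of 'connected' (synchronised) pairs could be
# distributed among all possible pairs of signals. The exact formula in
# the paper may differ; the numbers here are invented.

from math import comb, log

def configuration_entropy(total_pairs: int, connected_pairs: int) -> float:
    """Entropy as the log of the number of possible network configurations."""
    return log(comb(total_pairs, connected_pairs))

# With 100 possible pairings, entropy is low when almost none (or almost all)
# of them are synchronised, and highest in the middle, which on this account
# is where wakeful consciousness lives.
for p in (1, 10, 50, 90, 99):
    print(f"{p:3d} connected pairs: entropy {configuration_entropy(100, p):6.2f}")
```

On this reading, ‘high entropy’ just means ‘many ways the network could be wired up at that moment’, which perhaps eases the apparent clash with the thermodynamic intuition above.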

Of the nine subjects, three were epileptic, which allowed comparisons to be made with seizure states as well as waking and sleeping states. Interestingly, REM sleep showed relatively high entropy levels, which intuitively squares with the idea that dreaming resembles waking a little more than fully unconscious states – though I gather the equation of REM sleep with dreaming is no longer thought to be as close as it once seemed.

One acknowledged weakness in the research is that it was not possible to establish actual connection. The assumed networks were therefore based on synchronisation instead. However, synchronisation can arise without direct connection, and absence of synchronisation is not necessarily proof of the absence of connection.

Still, overall the results look good and the picture painted is intuitively plausible. Putting all talk of entropy and Lempel-Ziv aside, what we’re really saying is that conscious states fall in the middle of a notional spectrum: at one end of this spectrum is chaos, with neurons firing randomly; at the other we have them all firing simultaneously in indissoluble lockstep.

There is an obvious resemblance here to the Integrated Information Theory (IIT), which holds that consciousness arises when the quantity of integrated information, measured by a value known as Phi, is sufficiently high. In fact, the authors of the current paper situate it explicitly within the context of earlier work which suggests that the general principle of natural phenomena is the maximisation of information transfer. The read-across from the new results into terms of information processing is quite clear. The authors do acknowledge IIT, but just barely; they may be understandably worried that their new work could end up interpreted as mere corroboration for IIT.

My main worry about both is that they are very likely true, but may not be particularly enlightening. As a rough analogy, we might discover that the running of an internal combustion engine correlates strongly with raised internal temperature states. The presence or absence of these states proves to be a pretty good practical guide to whether the engine is running, and we’re tempted to conclude that raised temperature is the same as running. Actually, though, raising the temperature artificially does not make the engine run, and there is in fact a complex story about running in which raised temperatures are not really central. So it might be that high entropy is characteristic of conscious states without that telling us anything useful about how those states really work.

But I evidently don’t really get entropy, so I might easily be missing the true significance of all this.

Further to the question of conscious vs non-conscious action, here’s a recent RSA video presenting some evidence.

Nicholas Shea presents with Barry Smith riding shotgun. There’s a mention for one piece of research also mentioned by Di Nucci; preventing expert golfers from concentrating consciously on their shot actually improves their performance (it does the opposite for non-experts). There are two pieces of audience participation; one shows that subliminal prompts can (slightly) affect behaviour; the other shows that time to think and discuss can help where explicit reasoning is involved (though it doesn’t seem to help the RSA audience much).

Perhaps in the end consciousness is not essentially private after all, but social and co-operative?

Sign of the times to see two philosophers unashamedly dabbling in experiments. I think the RSA also has to win some kind of prize in the hotly-contested ‘unconvincing brain picture’ category for using purple and yellow cauliflower.

Fish don’t feel pain, says Brian Key.  How does he know? In the deep philosophical sense it remains a matter of some doubt as to whether other human beings really feel pain, and as Key notes, Nagel famously argued that we couldn’t know what it was like to be a bat at all, even though we have much more in common with them than with fish. But in practice we don’t really feel much doubt that humans with bodies and brains like ours do indeed have similar sensations, and we trust that their reports of pain are generally as reliable as our own. Key’s approach extends this kind of practical reasoning. He relies on human reports to identify the parts of the brain involved in feeling pain, and then looks for analogues in other animals.

Key’s review of the evidence is interesting; in brief he concludes that it is the cortex that ‘does’ pain; fish don’t have anything that corresponds with human cortex, or any other brain structure that plausibly carries out the same function. They have relatively hard-wired responses to help them escape  physical damage, and they have a capacity to learn about what to avoid, but they don’t have any mechanism for actually feeling pain with. It is really, he suggests, just anthropomorphism that sees simple avoidance behaviour as evidence of actual pain. Key is rightly stern about anthropomorphism, but I think he could have acknowledged the opposite danger of speciesism. The wide eyes and open mouths of fish, their rigid faces and inability to gesture or scream incline us to see them as stupid, cold, and unfeeling in a way which may not be properly objective.

Still, a careful examination of fish behaviour is a perfectly valid supplementary approach, and Key buttresses his case by noting that pain usually suppresses normal behaviour. Drilling a hole in a human’s skull tends to inhibit locomotion and social activity, but apparently doing the same thing to fish does not stop them going ahead with normal foraging and mating behaviour as though nothing had happened. Hard to believe, surely, that they are in terrible pain but getting on with a dancing display anyway?

I think Key makes a convincing case that fish don’t feel what we do, but there is a small danger of begging the question if we define pain in a way that makes it dependent on human-style consciousness to begin with. The phenomenology really needs clarification, but defining pain, other than by demonstration, is peculiarly difficult. It is almost by definition the thing we want to avoid feeling, yet we can feel pain without being bothered by it, and we can have feelings we desperately want to avoid which are, however, not pain. Pain may be a tiny twinge accompanying a reflex, an attention-grabbing surge, or something we hold in mind and explore (Dennett, I think, says somewhere that he had been told that examining pain introspectively was one way to make it bearable. On his next dentist visit, he tried it out and found that although the method worked, the effort and boredom involved in continuous close attention to the detailed qualities of his pain was such that he eventually preferred straightforward hurting.) Humans certainly understand pain and can opt to suffer it voluntarily in ways that other creatures cannot; whether on balance this higher awareness makes our pain more or less bearable is a difficult question in itself. We might claim that imagination and fear magnify our suffering, but being to some degree aware and in control can also put us in a better position than a panicking dog that cannot understand what is happening to it.

Key leans quite heavily on reportable pain; there are obvious reasons for that, but it could be that doing so skews him towards humanity and towards the cortex, which is surely deeply involved in considering and reporting. He dismisses some evidence that pain can occur without a cortex, and therefore happens in the brain stem. His objections seem reasonable, but surely it would be odd if nothing were going on in the brain stem, that ‘old brain’ we have inherited through evolution, even if it’s only some semi-automatic avoidance stuff. The danger is that we might be paying attention to the reportable pain dealt with by the ‘talky’ part of our minds while another kind is going on elsewhere. We know from such phenomena as blindsight that we can unconsciously ‘see’ things; could we not also have unconscious pain going on in another part of the brain?

That raises another important question: does it matter? Is unconscious or forgotten pain worth considering – would fish pain be negligible even if it exists? Pain is, more or less, the feeling we all want to avoid, so in one way its ethical significance is obvious. But couldn’t those automatic damage avoidance behaviours have some ethical significance too? Isn’t damage sort of ethically charged in itself? Key rejects the argument that we should give fish the ‘benefit of the doubt’, but there is a slightly different argument that being indifferent to apparent suffering makes us worse people even if strictly speaking no pains are being felt.

Consider a small boy with a robot dog; the toy has been programmed to give displays of affection and enjoyment, but if mistreated it also performs an imitation of pain and distress. Now suppose the boy never plays nicely, but obsessively ‘tortures’ the robot, trying to make it yelp and whine as loudly as possible. Wouldn’t his parents feel some concern; wouldn’t they tell him that what he was doing was wrong, even though the robot had no real feelings whatever? Wouldn’t that be a little more than simple anthropomorphism?

Perhaps we need a bigger vocabulary; ‘pain’ is doing an awful lot of work in these discussions.