Does recent research into autism suggest real differences between male and female handling of consciousness?

Traditionally, autism has been regarded as an overwhelmingly male condition. Recently, though, it has been suggested that the gender gap is not as great as it seems; it’s just that most women with autism go undiagnosed. How can that be? It is hypothesised that some sufferers are able to ‘camouflage’ the symptoms of their autism, and that this suppression of symptoms is particularly prevalent among women.

‘Camouflaging’ means learning normal social behaviours such as giving others appropriate eye contact, interpreting and using appropriate facial expressions, and so on. But surely, that’s just what normal people do? If you can learn these behaviours, doesn’t that mean you’re not autistic any more?

There’s a subtle distinction here between doing what comes naturally and doing what you’ve learned to do. Camouflaging, on this view, requires significant intellectual resources and continuous effort, so that while camouflaged sufferers may lead apparently normal lives, they are likely to suffer other symptoms arising from the sheer mental effort they have to put in – fatigue, depression, and so on.

Measuring the level of camouflaging – which is, by design, undetectable – obviously raises some methodological challenges. Now a study reported in the invaluable BPS Research Digest claims to have pulled it off. The research team used scanning and other approaches, but their main tool was to contrast two well-established methods of assessing autism – the Autism Diagnostic Observation Schedule on the one hand and the Autism Spectrum Quotient on the other. While the former assesses ‘external’ qualities such as behaviour, the latter measures ‘internal’ ones. Putting it crudely, they measure what you actually do and what you’d like to do respectively. The ratio between the two scores yields a measure of how much camouflaging is going on, and in brief the results confirm that camouflaging is present to a far greater degree in women. In fact, I think the results may even be understated: all of the subjects were people who had already been diagnosed with autism, and that criterion may have selected women who were atypically low in camouflaging, precisely because women who camouflage heavily would be more likely to escape diagnosis.
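To make that kind of contrast concrete, here is a toy sketch in Python. The scoring ranges and the simple internal-to-external ratio are my own illustrative assumptions, not the study’s actual formula; the function names are hypothetical.

```python
# Toy sketch of a camouflaging metric: contrast an 'internal' self-report
# score (AQ) with an 'external' observed-behaviour score (ADOS). Scores
# are first rescaled to 0-1 so the two instruments are comparable; a
# higher ratio of internal to external indicators suggests more camouflaging.
# The ranges and the ratio itself are illustrative assumptions only.

def rescale(score, lo, hi):
    """Map a raw instrument score onto a 0-1 scale."""
    return (score - lo) / (hi - lo)

def camouflage_ratio(aq_raw, ados_raw, aq_range=(0, 50), ados_range=(0, 28)):
    internal = rescale(aq_raw, *aq_range)      # what you'd like to do / feel
    external = rescale(ados_raw, *ados_range)  # what you observably do
    return internal / external if external else float('inf')

# Someone whose self-reported traits far exceed their observed symptoms
# gets a high ratio, i.e. a lot of camouflaging:
print(camouflage_ratio(40, 7))   # high internal, low external
print(camouflage_ratio(40, 21))  # internal and external roughly match
```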

The research is obviously welcome because it might help improve diagnosis rates for women, but also because a more equal rate of autism for men and women perhaps helps to dispel the idea, formerly popular but (to me at least) rather unpalatable, that autism is really little more than typical male behaviour exaggerated to unacceptable levels.

It does not eliminate the tricky gender issues, though. One thing that surely needs to be taken into account is the possibility that accommodating social pressures is something women do more of anyway. It is plausible (isn’t it?) that even among typical people, women devote more effort to social signals, listening and responding, laughing politely at jokes, and so on. It might be that there is a base level of activity among women devoted to ‘camouflaging’ normal irritation, impatience, and boredom which is largely absent in men, a baseline against which the findings for people with autism should properly be assessed. It might have been interesting to test a selection of non-autistic people, if that makes sense in terms of the tests. How far the general underlying difference, if it exists, might be due to genetics, socialisation, or other factors is a thorny question.

At any rate, it seems to me inescapable that what the study is really attempting to do with its distinction between outward behaviour and inward states, is to measure the difference between unconscious and conscious control of behaviour. That subtle distinction, mentioned above, between natural and learned behaviour is really the distinction between things you don’t have to think about, and things that require constant, conscious attention. Perhaps we might draw a parallel of sorts with other kinds of automatic behaviour. Normally, a lot of things we do, such as walking, require no particular thought. All that stuff, once learned, is taken care of by the cerebellum and the cortex need not be troubled (disclaimer: I am not a neurologist). But people who have their cerebellum completely removed can apparently continue to function: they just have to think about every step all the time, which imposes considerable strain after a while. However, there’s no special organ analogous to the cerebellum that records our social routines, and so far as I know it’s not clear whether the blend of instinct and learning is similar either.

In one respect the study might be thought to open up a promising avenue for new therapeutic approaches. If women can, to a great extent, learn to compensate consciously for autism, and if that ability is to a great extent a result of social conditioning, then in principle one option would be to help male autism sufferers achieve the same thing through applying similar socialisation. Although camouflaging evidently has its downsides, it might still be a trick worth learning. I doubt if it is as simple as that, though; an awful lot of regimes have been tried out on male sufferers and to date I don’t believe the levels of success have been that great; on the other hand it may be that pervasive, ubiquitous social pressure is different in kind from training or special regimes and not so easily deployed therapeutically. The only way might be to bring up autistic boys as girls…

If we take the other view, that women’s ability or predisposition to camouflage is not the result of social conditioning, then we might be inclined to look for genuine ‘hard-wired’ differences in the operation of male and female consciousness. One route to take from there would be to relate the difference to the suggested ability of women (already a cornerstone of gender-related folk psychology) to multi-task more effectively, dividing conscious attention without significant loss to the efficiency of each thread. Certainly one would suppose that having to pay constant attention to detailed social cues would have an impact on the ability to pay attention to other things, but so far as I know there is no evidence that women with camouflaged autism are any worse at paying attention generally than anyone else. Perhaps this is a particular skill of the female mind, while if men pay that much attention to social cues, their ability to listen to what is actually being said is sensibly degraded?

The speculative ice out here is getting thinner than I like, so I’ll leave it there; but in all seriousness, any study that takes us forward in this area, as this one seems to do, must be very warmly welcomed.

Are we losing it?

Nick Bostrom’s suggestion that we’re most likely living in a simulated world continues to provoke discussion. Joelle Dahm draws an interesting parallel with multiverses. I think myself that it depends a bit on what kind of multiverse you’re going for – the ones that come from an interpretation of quantum physics usually require conservation of identity between universes – you have to exist in more than one universe – which I think is both potentially problematic and strictly speaking non-Bostromic. Dahm also briefly touches on some tricky difficulties about how we could tell whether we were simulated or not, which seem reminiscent of Descartes’ doubts about how he could be sure he wasn’t being systematically deceived by a demon – hold that thought for now.

Some of the assumptions mentioned by Dahm would probably annoy Sabine Hossenfelder, who lays into the Bostromisers with a piece about just how difficult simulating the physics of our world would actually be: a splendid combination of indignation with actually knowing what she’s talking about.

Bostrom assumes that if advanced civilisations typically have a long lifespan, most will get around to creating simulated versions of their own civilisation, perhaps re-enactments of earlier historical eras. Since each simulated world will contain a vast number of people, the odds are that any randomly selected person is in fact living in a simulated world. The probability becomes overwhelming if we assume that the simulations are good enough for the simulated people to create simulations within their own world, and so on.
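The arithmetic behind the argument can be sketched with made-up numbers; everything here (population sizes, counts of simulations, nesting depth) is purely illustrative, not Bostrom’s own figures.

```python
# Back-of-envelope version of the simulation argument: if each real
# civilisation runs many ancestor simulations, the fraction of all
# observers who are simulated approaches 1. All numbers are invented.

def fraction_simulated(real_pop, sims_per_civ, pop_per_sim):
    simulated = sims_per_civ * pop_per_sim
    return simulated / (real_pop + simulated)

# One real civilisation of 10 billion running 1000 full-scale simulations:
print(fraction_simulated(10e9, 1000, 10e9))  # ~0.999

# With nested simulations (sims inside sims), the odds become even more
# lopsided, since each level multiplies the count of simulated worlds:
def fraction_simulated_nested(real_pop, sims_per_civ, pop_per_sim, depth):
    total_sims = sum(sims_per_civ ** d for d in range(1, depth + 1))
    simulated = total_sims * pop_per_sim
    return simulated / (real_pop + simulated)

print(fraction_simulated_nested(10e9, 1000, 10e9, 3))  # closer still to 1
```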

There’s plenty of scope for argument about whether consciousness can be simulated computationally at all, whether worlds can be simulated in the required detail, and certainly about the optimistic idea of nested simulations. But recently I find myself thinking, isn’t it simpler than that? Are we simulated people in a simulated world? No, because we’re real, and people in a simulation aren’t real.

When I say that, people look at me as if I were stupid, or at least, impossibly naive. Dude, read some philosophy, they seem to say. Dontcha know that Socrates said we are all just grains of sand blowing in the wind?

But I persist – nothing in a simulation actually exists (clue’s in the name), so it follows that if we exist, we are not in a simulation. Surely no-one doubts their own existence (remember that parallel with Descartes), or if they do, only on the kind of philosophical level where you can doubt the existence of anything? If you don’t even exist, why do I even have to address your simulated arguments?

I do, though. Actually, non-existent people can have rather good arguments; dialogues between imaginary people are a long-established philosophical method (in my feckless youth I may even have indulged in the practice myself).

But I’m not entirely sure what the argument against reality is. People do quite often set out a vision of the world as powered by maths; somewhere down there the fundamental equations are working away and the world is what they’re calculating. But surely that is the wrong way round; the equations describe reality, they don’t dictate it. A system of metaphysics that assumes the laws of nature really are explicit laws set out somewhere looks tricky to me; and worse, it can never account for the arbitrary particularity of the actual world. We sort of cling to the hope that this weird specificity can eventually be reduced away by titanic backward extrapolation to a hypothetical time when the cosmos was reduced to the simplicity of a single point, or something like it; but we can’t make that story work without arbitrary constants and the result doesn’t seem like the right kind of explanation anyway. We might appeal instead to the idea that the arbitrariness of our world arises from its being an arbitrary selection out of the incalculable banquet of the multiverse, but that doesn’t really explain it.

I reckon that reality just is the thing that gets left out of the data and the theory; but we’re now so used to the supremacy of those two we find it genuinely hard to remember, and it seems to us that a simulation with enough data is automatically indistinguishable from real events – as though once your 3D printer was programmed, there was really nothing to be achieved by running it.

There’s one curious reference in Dahm’s piece which makes me wonder whether Christof Koch agrees with me. She says the Integrated Information Theory doesn’t allow for computer consciousness. I’d have thought it would; but the remarks from Koch she quotes seem to be about how you need not just the numbers about gravity but actual gravity too, which sounds like my sort of point.

Regular readers may already have noticed that I think this neglect of reality also explains the notorious problem of qualia; they’re just the reality of experience. When Mary sees red, she sees something real, which of course was never included in her perfect theoretical understanding.

I may be naive, but you can’t say I’m not consistent…

A somewhat enigmatic report in the Daily Telegraph describes a chess problem devised by Roger Penrose, who says that chess programs can’t solve it but humans can get a draw or even a win.

I’m not a chess buff, but it looks trivial. Although Black has an immensely powerful collection of pieces, they are all completely bottled up and immobile, apart from three bishops. Since these are all on white squares, the White king is completely safe from them if he stays on black squares. Since the white pawns fencing in Black’s pieces are all on black squares, the bishops can’t do anything about them either. It looks like a drawn position already, in fact.

I suppose Penrose believes that chess computers can’t deal with this because it’s a very weird situation which will not be in any of their reference material. If they resort to combinatorial analysis the huge number of moves available to the bishops is supposed to render the problem intractable, while the computer cannot see the obvious consequences of the position the way a human can.
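The ‘obvious consequence’ the human sees rests on a simple geometric invariant that is itself trivially computable. This toy sketch (not Penrose’s actual position, which isn’t reproduced in the text) just encodes that fact:

```python
# Bishops move only diagonally, so they stay on squares of one colour
# forever; a king that keeps to the other colour can never be attacked
# by them. This invariant is what makes the position 'obviously' drawn.

def square_colour(file, rank):
    """0-indexed coordinates; a1 = (0, 0) is a dark square."""
    return "light" if (file + rank) % 2 else "dark"

def bishop_can_ever_attack(bishop_sq, target_sq):
    # A bishop can only ever reach squares of its own colour.
    return square_colour(*bishop_sq) == square_colour(*target_sq)

# A light-squared bishop can never touch the dark square a1,
# no matter how many moves it is given:
print(bishop_can_ever_attack((1, 0), (0, 0)))  # False
```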

I don’t know whether it’s true that all chess programs are essentially that stupid, but it is meant to buttress Penrose’s case that computers lack some quality of insight or understanding that is an essential property of human consciousness.

This is all apparently connected with the launch of a new Penrose Institute, whose website is here, but appears to be incomplete. No doubt we’ll hear more soon.

Give up on real comprehension, says Daniel Dennett in From Bacteria to Bach and Back: commendably honest but a little discouraging to the reader? I imagine it set out like the warning above the gates of Hell: ‘Give up on comprehension, you who turn these pages’.  You might have to settle for acquiring some competences.

What have we got here? In this book, Dennett is revisiting themes he developed earlier in his career, retelling the grand story of the evolution of minds. We should not expect big new ideas or major changes of heart (but see last week’s post for one important one). It would have been good at this stage to have a distillation; a perfect, slim little volume presenting a final crystalline formulation of what Dennett is all about. This isn’t that. It’s more like a sprawling Greatest Hits album. In there somewhere are the old favourites that will always have the fans stomping and shouting (there’s some huffing and puffing from Dennett about how we should watch out because he’s coming for our deepest intuitions with scary tools that may make us flinch, but honestly by now this stuff is about as shocking and countercultural as your dad’s Heavy Metal collection); but we’ve also got unnecessary cover versions of ideas by other people, some stuff that was never really a hit in the first place, and unfortunately one or two bum notes here and there.

And, oh dear, another attempt to smear Descartes by association. First Dennett energetically promoted the phrase “Cartesian theatre” – so hard some people suppose that it actually comes from Descartes; now we have ‘Cartesian gravity’, more or less a boo-word for any vaguely dualistic tendency Dennett doesn’t like. This is surely not good intellectual manners; it wouldn’t be quite so bad if it wasn’t for the fact that Descartes actually had a theory of gravity, so that the phrase already has a meaning. Should a responsible professor be spreading new-minted misapprehensions like this? Any meme will do?

There’s a lot about evolution here that rather left me cold (but then I really, really don’t need it explained again, thanks); I don’t think Dennett’s particular gift is for popularising other people’s ideas and his take seems a bit dated. I suspect that most intelligent readers of the book will already know most of this stuff and maybe more, since they will probably have kept up with epigenetics and the various proposals for extension of the modern synthesis that have emerged in the current century (and the fascinating story of viral intervention in human DNA, surely a natural for anyone who likes the analogy of the selfish gene?), none of which gets any recognition here (I suppose in fairness this is not intended to be full of new stuff). Instead we hear again the tired and in my opinion profoundly unconvincing story about how leaping (‘stotting’) gazelles are employing a convoluted strategy of wasting valuable energy as a lion-directed display of fitness. It’s just an evasive manoeuvre, get over it.

For me it’s the most Dennettian bits of the book that are the best, unsurprisingly. The central theme that competence precedes, and may replace, comprehension is actually well developed. Dennett claims that evolution and computation both provide ‘inversions’ in which intentionless performance can give the appearance of intentional behaviour. He has been accused of equivocating over the reality of intentionality, consciousness and other concepts, but I like his attitude over this and his defence of the reality of ‘free-floating rationales’ seems good to me. It gives us permission to discuss the ‘purposes’ of things without presupposing an intelligent designer whose purposes they are, and I’m completely with Dennett when he argues that this is both necessary and acceptable. I’ve suggested elsewhere that talking about ‘the point’ of things, and in a related sense, what they point to, is a handy way of doing this. The problem for Dennett, if there is one, is that it’s not enough for competence to replace comprehension often; he needs it to happen every time by some means.

Dennett sets out a theoretical space with ‘bottom-up vs top-down’, ‘random vs directed search’, and ‘comprehension’ as its axes; at one corner of the resulting cube we have intentionless structures like a termite colony; at the other we have fully intentional design like Gaudi’s church of the Sagrada Familia, which to Dennett’s eye resembles a termite colony. Gaudi’s perhaps not the ideal choice here, given his enthusiasm for natural forms; it makes Dennett seem curiously impressed by the underwhelming fact that buildings by an architect who borrowed forms from the natural world turn out to have forms resembling those found in nature.

Still, the space suggests a real contrast between the mindless processes of evolution and deliberate design, which at first sight looks refreshingly different and unDennetian. It’s not, of course; Dennett is happy to embrace that difference so long as we recognise that the ‘deliberate design’ is simply a separate evolutionary process powered by memes rather than genes.

I’ve never thought that memes, Richard Dawkins’s proposed cultural analogue of genes, were a particularly helpful addition to Dennett’s theoretical framework, but here he mounts an extended defence of them. One of the worst flaws in the theory as it stands – and there are several – is its confused ontology. What are memes – physical items of culture or abstract ideas? Dennett, as a professional philosopher, seems more sensitive to this problem than some of the more metaphysically naive proponents of the meme. He provides a relatively coherent vision by invoking the idea that memes are ‘tokens’; they may take all sorts of physical forms – written words, pictures, patterns of neuronal firing – but each form is a token of a particular way of behaving. The problem here is that anything at all can serve as a token of any meme; we only know that a given noise or symbol tokens a specific meme because of its meaning. There may be – there certainly are – some selective effects that bite on the actual form of particular tokens. A word that is long or difficult to pronounce is more likely to be forgotten. But the really interesting selections take place at the level of meanings; that requires a much more complex level of explanation. There may still be mechanisms involved that are broadly selective if not exactly Darwinian – I think there are – but I believe any move up to this proper level of complexity inevitably edges the simplistic concept of the meme out of play.

The original Dawkinsian observation that the development of cultural items sometimes resembles evolution was sound, but it implicitly called for the development of a general theory which, in spite of some respectable attempts, has simply failed to appear. Instead, the supporters of memetics, perhaps trapped by the insistent drumbeat of the Dawkinsian circus, have tended to insist that it’s all Darwinian natural selection. How a genetic theory can be Darwinian when Darwin never heard of genes is just one of the lesser mysteries here (should we call it ‘Mendelian’ instead? But Darwin’s name is the hooray word here just as Descartes’ is the cue for boos). Among the many ways in which cultural selection does not resemble biological evolution, Dennett notes the cogent objection that there is nothing that corresponds to DNA; no general encoding of culture on which selection can operate. One of the worst “bum notes” in the book is Dennett’s strange suggestion that HTML might come to be our cultural DNA. This is, shall we say, an egregious misconception of the scope of a text mark-up language.

Anyway, it’s consciousness we’re interested in (check out Tom Clark’s thoughtful take here) and the intentional stance is the number the fans have been waiting for; cunningly kept till last by Dennett. When we get there, though, we get a remix instead of the classic track. Here he has a new metaphor, calculated to appeal to the youth of today; it’s all about apps. Our impression of consciousness is a user illusion created by our gift for language; it’s like the icons that activate the stuff on your phone. You may object that a user illusion already requires a user, but hang on. Your ability to talk about yourself is initially useful for other people, telling them useful stuff about your internal states and competences, but once the system is operating, you can read it too. It seems plausible to me that something like that is indeed an important part of the process of consciousness, though in this version I felt I had rather lost track of what was illusory about it.

Dennett moves on to a new attack on qualia. This time he offers an explanation of why people think they occur – it’s because of the way we project our impressions back out into the world, where they may seem unaccountable. He demonstrates the redundancy of the idea by helpfully sketching out how we could run up a theory of qualia and noting how pointless they are. I was nodding along with this. He suggests that qualia and our own sense of being the intelligent designers in our own heads are the same kind of delusion, simply applied externally or internally. I suppose that’s where the illusion is.

He goes on to defend a sort of compatibilist view of free will and responsibility; another example of what Descartes might be tempted to label Dennettian Equivocation, but as before, I like that posture and I’m with him all the way. He continues with a dismissal of mysterianism, leaning rather more than I think is necessary on the interesting concept of joint understanding, where no one person gets it all perfectly, but nothing remains to be explained, and takes a relatively sceptical view of the practical prospects for artificial general intelligence, even given recent advances in machine learning. Does Google Translate display understanding (in some appropriate sense)? No, or rather, not yet. This is not Dennett as we remember him; he speaks disparagingly of the cheerleaders for AI and says that “some of us” always discounted the hype. Hmm. Daniel Dennett, long-time AI sceptic?

What’s the verdict then? Some good stuff in here, but true fans will always favour the classic album; for Dennett at his best, the aficionado will still tell you to buy Consciousness Explained.



A blast of old-fashioned optimism from Owen Holland: let’s just build a conscious robot!

It’s a short video so Holland doesn’t get much chance to back up his prediction that if you’re under thirty you will meet a conscious robot. He voices feelings which I suspect are common on the engineering and robotics side of the house, if not usually expressed so clearly: why don’t we just get on and put a machine together to do this? Philosophy, psychology, all that airy fairy stuff is getting us nowhere; we’ll learn more from a bad robot than twenty papers on qualia.

His basic idea is that we’re essentially dealing with an internal model of the world. We can now put together robots with an increasingly good internal modelling capability (and we can peek into those models); why not do that and then add new faculties and incremental improvements till we get somewhere?
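For what it’s worth, the bare bones of the ‘internal model’ idea can be put in a few lines. The class below is a deliberately crude illustration of my own (a one-number world model with a naive sensor filter), not Holland’s actual architecture:

```python
# A minimal 'internal model' robot: it keeps a model of the world (here
# just its own position on a line), updates the model from sensor
# readings, and plans against the model rather than against the world.
# All names and the averaging update are illustrative assumptions.

class ModelBasedRobot:
    def __init__(self):
        self.believed_position = 0.0   # the internal model: one number

    def sense(self, reading):
        # blend a noisy reading into the model (a crude filter)
        self.believed_position = 0.5 * self.believed_position + 0.5 * reading

    def act_towards(self, goal):
        # decide using the model, not the raw world
        return "right" if goal > self.believed_position else "left"

robot = ModelBasedRobot()
robot.sense(2.0)             # model is now 1.0
print(robot.act_towards(5))  # "right"
```

One attraction of such a design, as the video suggests, is that the model is inspectable: we can ‘peek’ at `believed_position` in a way we cannot peek at a brain.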

Yeah, but. The history of AI is littered with projects that started like this and ran into the sand. In particular the idea that it’s all about an internal model may be a radical mis-simplification. We don’t just picture ourselves in the world, we picture ourselves picturing ourselves. We can spend time thinking just about the concept of consciousness – how would that appear in a model? In general our conscious experience is complex and elusive, and cannot accurately be put on a screen or described on a page (though generations of novelists have tried everything they can think of).

The danger when we start building is that the first step is wrong and already commits us to a wrong path. Maybe adding new abilities won’t help. Perhaps our ability to model the world is just one aspect of a deeper and more general faculty that we haven’t really grasped yet; building in a fixed spatial modeller might turn us away from that right at the off. Instead of moving incrementally towards consciousness we might end up going nowhere (although there should be some pretty cool robots along the way).

Still, without some optimism we’ll certainly get nowhere anyway.

Yes, Dennett has recanted. Alright, he hasn’t finally acknowledged that Jesus is his Lord and Saviour. He hasn’t declared that qualia are the real essence of consciousness after all. But his new book From Bacteria to Bach and Back does include a surprising change of heart.

The book is big and complex: to be honest it’s a bit of a ragbag (and a bit of a curate’s egg). I’ll give it the fuller review it deserves another time, but it seems to be worth addressing this interesting point separately. The recantation arises from a point on which Dennett has changed his mind once before. This is the question of homunculi. Homunculi are ‘little people’ and the term is traditionally used to criticise certain kinds of explanation, the kind that assume some module in the brain is just able to do everything a whole person could do. Those modules are effectively ‘little people in your head’, and they require just as much explanation as your brain did in the first place. At some stage many years ago, Dennett decided that homunculi were alright after all, on certain conditions. The way he thought it could work was a hierarchy of ever stupider homunculi. Your eyes deliver a picture to the visual homunculus, who sees it for you; but we don’t stop there; he delivers it to a whole group of further colleagues: line-recognising homunculi, colour-recognising homunculi, and so on. Somewhere down the line we get to a homunculus whose only job is to say whether a spot is white or not-white. At that point the function is fully computable and our explanation can be cashed out in entirely non-personal, non-mysterious, mechanical terms. So far so good, though we might argue that Dennett’s ever stupider routines are not actually homunculi in the correct sense of being complete people; they’re more like ‘black boxes’, perhaps, a stage of a process you can’t explain yet, but plan to analyse further.
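Dennett’s decomposition bottoms out where a homunculus’s job becomes trivially computable. A minimal sketch, with entirely hypothetical ‘homunculi’ of my own:

```python
# A seeing task decomposed into ever stupider routines, bottoming out in
# a function whose whole job is to say whether a spot is white or not.
# At that level there is no residual 'little person' left to explain.

def is_white(pixel):           # the stupidest homunculus: white or not-white
    return pixel > 0.5

def recognise_line(row):       # a slightly smarter colleague
    return all(is_white(p) for p in row)

def visual_homunculus(image):  # the 'seer' is nothing over and above its staff
    return [y for y, row in enumerate(image) if recognise_line(row)]

image = [
    [0.9, 0.8, 0.9],   # a white line
    [0.1, 0.2, 0.1],   # dark
    [0.7, 0.9, 0.6],   # another white line
]
print(visual_homunculus(image))  # [0, 2]
```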

Be that as it may, he now regrets taking that line. The reason is that he no longer believes that neurons work like computers! This means that even at the bottom level the reduction to pure computation doesn’t quite work. The reason for this remarkable change of heart is that Terrence Deacon and others have convinced Dennett that the nature of neurons as entities with metabolism and a lifecycle is actually relevant to the way they work. The fact that neurons, at some level, have needs and aims of their own may ground a kind of ‘nano-intentionality’ that provides a basis for human cognition.

The implications are large; if this is right then surely, computation alone cannot give rise to consciousness! You need metabolism and perhaps other stuff. That Dennett should be signing up to this is remarkable, and of course he has a get-out. This is that we could still get computer consciousness by simulating an entire brain and reproducing every quirk of every neuron. For now that is well beyond our reach – and it may always be, though Dennett speaks with misplaced optimism about Blue Brain and other projects. In fact I don’t think the get-out works even on a theoretical level; simulations always leave out some aspect of the thing simulated, and if this biological view is sound, we can never be sure that we haven’t left out something important.

But even if we allow the get-out to stand this is a startling change, and I’ve been surprised to see that no review of the book I’ve seen even acknowledges it. Does Dennett himself even appreciate quite how large the implications are? It doesn’t really look as if he does. I would guess he thinks of the change as merely taking him a bit closer to, say, the evolution-based perspective of Ruth Millikan, not at all an uncongenial direction for him. I think, however, that he’s got more work to do. He says:

The brain is certainly not a digital computer running binary code, but it is still a kind of computer…

Later on, however, he rehashes the absurd but surely digitally-computational view he put forward in Consciousness Explained:

You can simulate a virtual serial machine on a parallel architecture, and that’s what the brain does… and virtual parallel machines can be implemented on serial machines…

This looks pretty hopeless in itself, by the way. You can do those things if you don’t mind doing something really egregiously futile. You want to ‘simulate’ a serial machine on a parallel architecture? Just don’t use more than one of its processors. The fact is, parallel and serial computing do exactly the same job, run the same algorithms, and deliver the same results. Parallel processing by computers is just a practical engineering tactic, of no philosophical interest whatever. When people talk about the brain doing parallel processing they are talking about a completely different and much vaguer idea and often confusing themselves in the process. Why on earth does Dennett think the brain is simulating serial processing on a parallel architecture, a textbook example of pointlessness?
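The point is easy to demonstrate: the same algorithm run serially or across several workers computes the same function. A minimal Python illustration:

```python
# Parallel and serial execution of the same algorithm compute the same
# function; 'simulating' a serial machine on a parallel architecture
# amounts to just not using the extra workers.

from concurrent.futures import ThreadPoolExecutor

def f(x):
    return x * x + 1

data = list(range(10))

serial = [f(x) for x in data]                    # one worker, in order

with ThreadPoolExecutor(max_workers=4) as pool:  # four workers
    parallel = list(pool.map(f, data))           # map preserves input order

print(serial == parallel)  # True: same algorithm, same results
```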

It is true that the brain’s architecture is massively parallel… but many of the brain’s most spectacular activities are (roughly) serial, in the so-called stream of consciousness, in which ideas, or concepts or thoughts float by not quite in single file, but through a Von Neumann bottleneck of sorts…

It seems that Dennett supposes that only serial processing can deliver a serially coherent stream of consciousness, but that is just untrue. On display here too is Dennett’s bad habit of using ‘Von Neumann’ as a synonym for ‘serial’. As I understand it the term “Von Neumann Architecture” actually relates to a long-gone rivalry between very early computer designs. Historically the Von Neumann design used the same storage for programs and data, while the more tidy-minded Harvard Architecture provided separate storage. The competition was resolved in Von Neumann’s favour long ago and is as dead as a doornail. It simply has no relevance to the human brain: does the brain have a Von Neumann or Harvard architecture? The only tenable answer is ‘no’.

Anyway, whatever you may think of that, if Dennett now says the brain is not a digital computer, he just cannot go on saying it has a Von Neumann architecture or simulates a serial processor. Simple consistency requires him to drop all that now – and a good thing too. Dennett has to find a way of explaining the stream of consciousness that doesn’t rely on concepts from digital computing. If he’s up for it, we might get something really interesting – but retreat to the comfort zone must look awfully appealing at this stage. There is, of course, nothing shameful in changing your mind; if only he can work through the implications a bit more thoroughly, Dennett will deserve a lot of credit for doing so.

More another time.

A bit of a tribute to three people who have persisted in the face of scepticism.

How long has Doug Lenat been working on the CYC project? I described it unkindly as a kind of dinosaur in 2005, when it was already more than twenty years old. What is it about? AI systems often lack the complex background of understanding needed to deal with real life situations. One strategy, back in the day, was to tackle the problem head-on and simply build a gigantic encyclopaedia of background facts. Once you had that, rules of inference would do the rest. That seems an impossibly optimistic and old-fashioned strategy now, but CYC has apparently been working away on its encyclopaedia ever since and – it’s said – is actually beginning to deliver.

Also in 2005 I described the remarkable findings of Maurits Van Den Noort apparently showing a reaction to stimuli before the stimuli actually occurred. (That post is one of the “lost” ones from when I moved over to WordPress. I’ve just brought it back, but alas the lively discussion in comments is gone.) I’ve heard no more about the research since, but Van Den Noort is still urging the case for a new relationship between neurophysics and quantum physics.

That post mentions Huping Hu, who with his long-time colleague (and wife) Maoxin Wu, also announced some remarkable findings relating consciousness and quantum physics. He went on to found his own journal – one way to ensure your papers are properly published – which continues to publish to this day.

We may feel (I do) that these people are in differing ways on the wrong track, but their persistence surely commands respect.

Louie Savva kindly invited me to do a couple of podcasts recently, which are now accessible on his site. These are part of the ‘Existential Files’ series he and Matthew Smith have been doing on Everything is Pointless, his blog of despair (actually quite cheerful, considering). I understand Susan Blackmore is pencilled in to do one soon, which should be interesting.

This was a new departure for me, but I must say I had great fun maundering away.  A vast range of subjects got covered at high speed, from consciousness and brain preservation to the limits of reason and why the universe exists.

One interesting thing (for me) was that I don’t think I quite sound like a Londoner even after all these years. I don’t sound like John Major’s geeky nephew, as I had feared: but it turns out I’m in no danger of being mistaken for James Mason either.

Anyway if you’ve been looking for the chance to listen to a confused old git wibbling about cognition, this might be your lucky day…

You can’t build experience out of mere information. Not, at any rate, the way the Integrated Information Theory (IIT) seeks to do it. So says Garrett Mindt in a forthcoming paper for the JCS.

‘Information’ is notoriously a slippery term, and much depends on how you’re using it. Commonly people distinguish the everyday meaning, which makes information a matter of meaning or semantics, and the sense defined by Shannon, which is statistical and excludes meaning, but is rigorous and tractable.
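
The Shannon sense is easy to make concrete. A minimal sketch (my own illustration): entropy is computed purely from symbol frequencies, so two messages with identical statistics carry identical Shannon information however much their meanings differ.

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Average information per symbol, in bits, from frequencies alone."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Same letters, opposite meanings, identical entropy:
# Shannon's measure is statistical and blind to semantics.
print(shannon_entropy("dog bites man") == shannon_entropy("man bites dog"))  # True
```

This is exactly the rigour-for-meaning trade the paragraph above describes: the measure is tractable precisely because it ignores what the message is about.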

It is a fairly common sceptical claim that you cannot get consciousness, or anything like intentionality or meaning, out of Shannon-style information. Mindt describes in his paper a couple of views that attack IIT on similar grounds. One is by Cerullo, who says:

‘Only by including syntactic, and most importantly semantic, concepts can a theory of information hope to model the causal properties of the brain…’

The other is by Searle, who argues that information, correctly understood, is observer dependent. The fact that this post, for example, contains information depends on conscious entities interpreting it as such, or it would be mere digital noise. Since information, defined this way, requires consciousness, any attempt to derive consciousness from it must be circular.

Although Mindt is ultimately rather sympathetic to both these cases, he says they fail because they assume that IIT is working with a Shannonian conception of information: but that’s not right. In fact IIT invokes a distinct causal conception of information as being ‘a difference that makes a difference’. A system conveys information, in this sense, if it can induce a change in the state of another system. Mindt likens this to the concept of information introduced by Bateson.
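
The Bateson-style causal sense can also be put in toy form. This is my own sketch, with invented names, of the bare criterion: system A carries information for system B just in case varying A’s state, with everything else held fixed, varies B’s next state.

```python
# Toy sketch of causal information: a difference that makes a difference.

def next_state_b(a_state: int, b_state: int) -> int:
    # B's update rule reads A's state, so A can make a difference to B.
    return (b_state + a_state) % 2

# Hold B fixed, vary A, and check whether B's successor state changes.
outcomes = {a: next_state_b(a, b_state=0) for a in (0, 1)}
makes_a_difference = len(set(outcomes.values())) > 1
print(makes_a_difference)  # True
```

Note that nothing here is Shannonian statistics and nothing is semantics; the information consists solely in the capacity of one system’s state to induce a change in another’s, which is the conception Mindt says IIT is actually working with.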

Mindt makes the interesting point that Searle and others tend to carve the problem up by separating syntax from semantics; but it’s not clear that semantics is required for hard-problem style conscious experience (in fact I think the question of what, if any, connection there is between the two is puzzling and potentially quite interesting). Better to use the distinction favoured by Tononi in the context of IIT, between extrinsic information – which covers both syntax and semantics – and intrinsic, which covers structure, dynamics, and phenomenal aspects.

Still, Mindt finds IIT vulnerable to a slightly different attack. Even with the clarifications he has made, the theory remains one of structure and dynamics, and physicalist structure and dynamics just don’t look like the sort of thing that could ever account for the phenomenal qualities of experience. There is no theoretical bridge arising from IIT that could take us across the explanatory gap.

I think the case is well made, although unfortunately it may be a case for despair. If this objection stands for IIT then it most likely stands for all physicalist theories. This is a little depressing because on one point of view, non-physicalist theories look unattractive. From that perspective, coming up with a physical explanation of phenomenal experience is exactly the point of the whole enquiry; if no such explanation is possible, no decent answer can ever be given.

It might still be the case that IIT is the best theory of its kind, and that it is capable of explaining many aspects of consciousness. We might even hope to squeeze the essential Hard Problem to one side. What if IIT could never explain why the integration of information gives rise to experience, but could explain everything, or most things, about the character of experience? Might we not then come to regard the Hard Problem as one of those knotty tangles that philosophers can mull over indefinitely, while the rest of us put together a perfectly good practical understanding of how mind and brain work?

I don’t know what Mindt would think about that, but he rounds out his case by addressing one claimed prediction of IIT; namely that if a large information complex is split, the attendant consciousness will also divide. This looks like what we might see in split-brain cases, although so far as I can see, nobody knows whether split-brain patients have two separate sets of phenomenal experiences, and I’m not sure there’s any way of testing the matter. Mindt points out that the prediction is really a matter of ‘Easy Problem’ issues and doesn’t help otherwise: it’s also not an especially impressive prediction, as many other possible theories would predict the same thing.

Mindt’s prescription is that we should go back and have another try at that definition of information; without attempting that himself, he smiles on dual-aspect theories. I’m afraid I am left scowling at all of them; as always in this field the arguments against any idea seem so much better than the ones for.


Emotions like fear are not something inherited from our unconscious animal past. Instead they arise from the higher-order aspects that make human thought conscious. That (if I’ve got it right) is the gist of an interesting paper by LeDoux and Brown.

A mainstream view of fear (the authors discuss fear in particular as a handy example of emotion, on the assumption that similar conclusions apply to other emotions) would make it a matter of the limbic system, notably the amygdala, which is known to be associated with the detection of threats. People whose amygdalas have been destroyed become excessively trusting, for example – although as always things are more complicated than they seem at first and the amygdalas are much more than just the organs of ‘fear and loathing’. LeDoux and Brown would make fear a cortical matter, generated only in the kind of reflective consciousness possessed by human beings.

One immediate objection might be that this seems to confine fear to human beings, whereas it seems pretty obvious that animals experience fear too. It depends, though, what we mean by ‘fear’. LeDoux and Brown would not deny that animals exhibit aversive behaviour, that they run away or emit terrified noises; what they are after is the actual feeling of fear. LeDoux and Brown situate their concept of fear in the context of philosophical discussion about phenomenal experience, which makes sense but threatens to open up a larger can of worms – nothing about phenomenal experience, including its bare existence, is altogether uncontroversial. Luckily I think that for the current purposes the deeper issues can be put to one side; whether or not fear is a matter of ineffable qualia we can probably agree that humanly conscious fear is a distinct thing. At the risk of begging the question a bit we might say that if you don’t know you’re afraid, you’re not feeling the kind of fear LeDoux and Brown want to talk about.

On a traditional view, again, fear might play a direct causal role in behaviour. We detect a threat, that causes the feeling of fear, and the feeling causes us to run away. For LeDoux and Brown, it doesn’t work like that. Instead, while the threat causes the running away, that process does not in itself generate the feeling of fear. Those sub-cortical processes, along with other signals, feed into a separate conscious process, and it’s that that generates the feeling.

Another immediate objection therefore might be that the authors have made fear an epiphenomenon; it doesn’t do anything. Some, of course, might embrace the idea that all conscious experience is epiphenomenal; a by-product whose influence on behaviour is illusory. Most people, though, would find it puzzling that the brain should go to the trouble of generating experiences that never affect behaviour and so contribute nothing to survival.

The answer here, I think, comes from the authors’ view of consciousness. They embrace a higher-order theory (HOT). HOTs (there are a number of variations) say that a mental state is conscious if there is another mental state in the same mind which is about it – a Higher Order Representation (HOR); or to put it another way, being conscious is being aware that you’re aware. If that is correct, then fear is a natural result of the application of conscious processes to certain situations, not a peculiar side-effect.

HOTs have been around for a long time: they would always get a mention in any round-up of the contenders for an explanation of consciousness, but somehow it seems to me they have never generated the little bursts of excitement and interest that other theories have enjoyed. LeDoux and Brown suggest that other theories of emotion and consciousness either are ‘first-order’ theories explicitly, or can be construed as such. They defend the HOT concept against one of the leading objections, which is that it seems to be possible to have HORs of non-existent states of awareness. In Charles Bonnet syndrome, for example, people who are in fact blind have vivid and complex visual hallucinations. To deal with this, the authors propose to climb one order higher; the conscious awareness, they suggest, comes not from the HOR of a visual experience but from the HOR of a HOR: a HOROR, in fact. There clearly is no theoretical limit to the number of orders we can rise to, and there’s some discussion here about when and whether we should call the process introspection.
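
The order-counting machinery is simpler than the acronyms suggest. A toy sketch (my own, purely illustrative): a mental state is first-order if it is not about another mental state, and each representation of a representation climbs one order.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MentalState:
    content: str
    about: Optional["MentalState"] = None  # None marks a first-order state

def order(state: MentalState) -> int:
    # A state's order is one more than the order of whatever it is about.
    return 1 if state.about is None else 1 + order(state.about)

seeing = MentalState("a red patch")                      # first-order state
hor    = MentalState("I am seeing a red patch", seeing)  # HOR (second order)
horor  = MentalState("I am aware of that seeing", hor)   # HOROR (third order)
print(order(seeing), order(hor), order(horor))  # 1 2 3
```

The sketch also makes the Charles Bonnet move visible: nothing stops a HOR (or HOROR) from pointing at a state that was never actually tokened, and nothing bounds the height of the tower – which is precisely where the discussion of introspection comes in.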

I’m not convinced by HOTs myself. The authors suggest that single-order theory implies there can be conscious states of which we are not aware, which seems sort of weird: you can feel fear and not know you’re feeling fear? I think there’s a danger here of equivocating between two senses of ‘aware’. Conscious states are states of awareness, but not necessarily states we are aware of; something is in awareness if we are conscious; but that’s not to say that the something includes our awareness itself. I would argue, contrarily, that there must be states of awareness with no HOR; otherwise, what about the HOR itself? If HORs are states of awareness themselves, each must have its own HOR, and so on indefinitely. If they’re not, I don’t see how the existence of an inert representation can endow the first-order state with the magic of consciousness.

My intuitive unease goes a bit wider than that, too. The authors have given a credible account of a likely process, but on this account fear looks very like other conscious states. What makes it different – what makes it actually fearful? It seems possible to imagine that I might perform the animal aversive behaviour, experience a conscious awareness of the threat and enter an appropriate conscious state without actually feeling fear. No doubt more could be said to make the account more plausible, and in fairness LeDoux and Brown could well reply that nobody has a knock-down account of phenomenal experience, and that their version offers rather more than some.

In fact, even though I don’t sign up for a HOT I can actually muster a pretty good degree of agreement nonetheless. Nobody, after all, believes that higher order mental states don’t exist (we could hardly be discussing this subject if they didn’t). In fact, although I think consciousness doesn’t require HORs, I think they are characteristic of its normal operation and in fact ordinary consciousness is a complex meld of states of awareness at several different levels. If we define fear the way LeDoux and Brown do, I can agree that they have given a highly plausible account of how it works without having to give up my belief that simple first-order consciousness is also a thing.