Maybe hypnosis is the right state of mind and ‘normal’ is really ‘under-hypnotised’?

That’s one idea that does not appear in the comprehensive synthesis of what we know about hypnosis produced by Terhune, Cleeremans, Raz and Lynn. It is a dense, concentrated document, thick with findings and sources, but they have done a remarkably good job of keeping it as readable as possible, and it’s both a useful overview and full of interesting detail. Terhune has picked out some headlines here.

Hypnosis, it seems, has two components: the induction and one or more suggestions. The induction is what we normally think of as the process of hypnotising someone. It’s the bit that in popular culture is achieved by a swinging watch, mystic hand gestures or other theatrical stuff; in common practice it is probably just a verbal routine. It seems that although further research is needed around optimising the induction, the details are much less important than we might have been led to think, and Terhune et al don’t find it of primary interest. The truth is that hypnosis is more about the suggestibility of the subject than about the effectiveness of the induction. In fact, if you want to streamline your view, you could see the induction as simply the first suggestion. Post-hypnotic suggestions, which take effect after the formal hypnosis session has concluded, may be somewhat different and may use different mechanisms from those that serve immediate suggestions, though it seems this has yet to be fully explored.

Broadly, people fall into three groups. 10 to 15 per cent of people are very suggestible, responding strongly to the full range of suggestions; about the same proportion are weakly suggestible and respond to hypnosis poorly or not at all; the rest of us are somewhere in the middle. Suggestibility is a fairly fixed characteristic, stable over time, and seems to be heritable; but so far as we know it does not correlate strongly with many other cognitive qualities or personality traits (nor with dissociative conditions such as Dissociative Identity Disorder, formerly known as Multiple Personality Disorder). It does interestingly resemble the kind of suggestibility seen in the placebo effect – there’s good evidence of hypnosis itself being therapeutically useful for certain conditions – and both may be correlated with empathy.

Terhune et al regard the debate about whether hypnosis is an altered state of consciousness as an unproductive one; but there are certainly some points of interest here when it comes to consciousness. A key feature of hypnosis is the loss of the sense of agency; hypnotised subjects think of their arm moving, not of having moved their arm. Credible current theories attribute this to the suppression of second-order mental states, or of metacognition; amusingly, this ‘cold control theory’ seems to lend some support to the HOT (higher order theory) view of consciousness (alright, please yourselves). Typically in the literature it seems this is discussed as a derangement of the proper sense of agency, but of course elsewhere people have concluded that our sense of agency is a delusion anyway. So perhaps, to repeat my opening suggestion, it’s the hypnotised subjects who have it right, and if we want to understand our own minds properly we should all enter a hypnotic state. Or perhaps that’s too much like noticing that blind people don’t suffer from optical illusions?

There’s a useful distinction here between voluntary control and top-down control. One interesting thing about hypnosis is that it demonstrates the power of top-down control, where beliefs, suggestions, and other high-level states determine basic physiological responses, something we may be inclined to under-rate. But hypnosis also highlights strongly that top-down control does not imply agency; perhaps we sometimes mistake the former for the latter? At any rate it seems to me that some of this research ought to be highly relevant to the analysis of agency, and suggests some potentially interesting avenues.

Another area of interest is surely the ability of hypnosis to affect attention and perception. It has been shown that changes in colour perception induced by hypnosis are registered in the brain differently from mere imagined changes. If we tell someone under hypnosis to see red for green and green for red, does that change the qualia of the experience or not? Do they really see green instead of red, or merely believe that’s what is happening? If anything the facts of hypnosis seem to compound the philosophical problems rather than helping to solve them; nevertheless it does seem to me that quite a lot of the results so handily summarised here should have a bigger impact on current philosophical discussion than they have had to date.


What is it like to be a Singularity (or in a Singularity)?

You probably know the idea. At some point in the future, computers become generally cleverer than us. They become able to improve themselves faster than we can, and an accelerating loop is formed where each improvement speeds up the process of improving, so that they quickly zoom up to incalculable intelligence and speed, in a kind of explosion of intellectual growth. That’s the Singularity. Some people think that we mere humans will at some point have the opportunity of digitising and uploading ourselves, so that we too can grow vastly cleverer and join in the digital world in which these superhuman conscious entities will exist.

Just to be clear upfront, I think there are some basic flaws in the plausibility of the story which mean the Singularity is never really going to happen: could never happen, in fact. However, it’s interesting to consider what the experience would be like.

How would we digitise ourselves? One way would be to create a digital model of our actual brain, and run that. We could go the whole hog and put ourselves into a fully simulated world, where we could enjoy sweet dreams forever, but that way we should miss out on the intellectual growth which the Singularity seems to offer, and we should also remain at the mercy of the vast new digital intellects who would be running the show. Generally I think it’s believed that only by joining in the cognitive ascent of these mighty new minds can we assure our own future survival.

In that case, is a brain simulation enough? It would run much faster than a meat brain, a point we’ll come back to, but it would surely suffer some of the limitations that biological brains are heir to. We could perhaps gradually enhance our memory and other faculties and gradually improve things that way, a process which might provide a comforting degree of continuity, but it seems likely that entities based on a biological scheme like this would be second-class citizens within the digital world, falling behind the artificial intellects who endlessly redesign and improve themselves. Could we then preserve our identity while turning fully digital and adopting a radical new architecture?

The subject of what constitutes personal identity, be it memory, certain kinds of continuity, or something else, is too large to explore here, except to note a basic question: can our identity ultimately be boiled down to a set of data? If the answer is yes (I actually believe it’s ‘no’, but today we’ll allow anything), then one way or another the way is clear for uploading ourselves into an entirely new digital architecture.

The way is also clear for duplicating and splitting ourselves. Using different copies of our data we can become several people and follow different paths. Can we then re-merge? If the data that constitutes us is static, it seems we should be able to recombine it with few issues; if it is partly a description of a dynamic process we might not be able to do the merger on the fly, and might have to form a third, merged individual. Would we terminate the two contributing selves? Would we worry less about ‘death’ in such cases? If you know your data can always be brought back into action, terminating the processes using that data (for now) might seem less frightening than the irretrievable destruction of your only brain.

This opens up further strange possibilities. At the moment our conscious experience is essentially linear (it’s a bit more complex than that, with layers and threads of attention, but broadly there’s a consistent chronological stream). In the brave new world our consciousness could branch out without limit; or we could have grid experiences, where different loci of consciousness follow crossing paths, merging at each node and then splitting again, before finally reuniting in one node with (very strange) remembered composite experience.

If merging is a possibility, then we should be able to exchange bits of ourselves with other denizens of the digital world, too. When handed a copy of part of someone else we might retain it as exterior data, something we just know about, or incorporate it into a new merged self, whether as a successor to ourselves, or as a kind of child; if all our data is saved the difference perhaps ceases to be of great significance. Could we exchange data like this with the artificial entities that were never human, or would they be too different?

I’m presupposing here that both the ex-humans and the artificial consciousnesses remain multiple and distinct. Perhaps there’s an argument for generally merging into one huge consciousness? I think probably not, because it seems to me that multiple loci of consciousness would just get more done in the way of thinking and experiencing. Perhaps when we became sufficiently linked and multi-threaded, with polydimensional multi-member grid consciousnesses binding everything loosely together anyway, the question of whether we are one or many – and how many – might not seem important any more.

If we can exchange experiences, does that solve the Hard Problem? We no longer need to worry whether your experience of red is the same as mine, we just swap. Now many people (and I am one) would think that fully digitised entities wouldn’t be having real experiences anyway, so any data exchange they might indulge in would be irrelevant. There are several ways it could be done, of course. It might be a very abstract business, or entities of human descent might exchange actual neural data from their old selves. If we use data which, fed into a meat brain, definitely produces proper experience, it perhaps gets a little harder to argue that the exchange is phenomenally empty.

The strange thing is, even if we put all the doubts aside and assume that data exchanges really do transfer subjective experience, the question doesn’t go away. It might be that attachment to a particular node of consciousness conditions the experience so that it is different anyway.

Consider the example of experiences transferred within a single individual, but over time. Let’s think of acquired tastes. When you first tasted beer, it seemed unpleasant; now you like it. Does it taste the same, with you having learnt to like that same taste? Or did it in fact taste different to you back then – more bitter, more sour? I’m not sure it’s possible to answer with great confidence. In the same way, if one node within the realm of the Singularity ‘runs’ another’s experience, it may react differently, and we can’t say for sure whether the phenomenal experience generated is the same or not…

I’m assuming a sort of cyberspace where these digital entities live – but what do they do all day? At one end of the spectrum, they might play video games constantly – rather sadly reproducing the world they left behind. Or at the intellectually pure end, they might devote themselves to the study of maths and philosophy. Perhaps there will be new pursuits that we, in our stupid meaty way, cannot even imagine as yet. But it’s hard not to see a certain tedious emptiness in the pure life of the mind as it would be available to these intellectual giants. They might be tempted to go on playing a role in the real world.

The real world, though, is far too slow. Whatever else they have improved, they will surely have racked up the speed of computation to the point where thousands of years of subjective time take only a few minutes of real world time. The ordinary physical world will seem to have slowed down very close to the point of stopping altogether; the time required to achieve anything much in the real world is going to seem like millions of years.

In fact, that acceleration means that from the point of view of ordinary time, the culture within the Singularity will quickly reach a limit at which everything it could ever have hoped to achieve is done. Whatever projects or research the Singularitarians become interested in will be completed and wrapped up in the blinking of an eye. Unless you think the future course of civilisation is somehow infinite, it will be completed in no time. This might explain the Fermi Paradox, the apparently puzzling absence of advanced alien civilisations: once they invent computing, galactic cultures go into the Singularity, wrap themselves up in a total intellectual consummation, and within a few days at most, fall silent forever.

Is there a Hard Problem of physics that explains the Hard Problem of consciousness?

Hedda Hassel Mørch has a thoughtful piece in Nautilus’s interesting Consciousness issue (well worth a look generally) that raises this idea. What is the alleged Hard Problem of physics? She says it goes like this…

What is physical matter in and of itself, behind the mathematical structure described by physics?

To cut to the chase, Mørch proposes that things in themselves have a nature not touched by physics, and that nature is consciousness. This explains the original Hard Problem – we, like other things, just are by nature conscious; but because that consciousness is our inward essence rather than one of our physical properties, it is missed out in the scientific account.

I’m sympathetic to the idea that the original Hard Problem is about an aspect of the world that physics misses out, but according to me that aspect is just the reality of things. There may not, according to me, be much more that can usefully be said about it. Mørch, I think, takes two wrong turns. The first is to think that there are such things as things in themselves, apart from observable properties. The second is to think that if this were so, it would justify panpsychism, which is where she ends up.

Let’s start by looking at that Hard Problem of physics. Mørch suggests that physics is about the mathematical structure of reality, which is true enough, but the point here is that physics is also about observable properties; it’s nothing if not empirical. If things have a nature in themselves that cannot be detected directly or indirectly from observable properties, physics simply isn’t interested, because those things-in-themselves make no difference to any possible observation. No doubt some physicists would be inclined to denounce such unobservable items as absurd or vacuous, but properly speaking they are just outside the scope of physics, neither to be affirmed nor denied. It follows, I think, that this can’t be a Hard Problem of physics; it’s actually a Hard Problem of metaphysics.

This is awkward because we know that human consciousness does have physical manifestations that are readily amenable to physical investigation; all of our conscious behaviour, our speech and writing, for example. Our new Hard Problem (let’s call it the NHP) can’t help us with those; it is completely irrelevant to our physical behaviour and cannot give us any account of those manifestations of consciousness. That is puzzling and deeply problematic – but only in the same way as the old Hard Problem (OHP) – so perhaps we are on the right track after all?

The problem is that I don’t think the NHP helps us even on a metaphysical level. Since we can’t investigate the essential nature of things empirically, we can only know about it by pure reasoning; and I don’t know of any purely rational laws of metaphysics that tell us about it. Can the inward nature of things change? If so, what are the (pseudo-causal?) laws of intrinsic change that govern that process? If the inward nature doesn’t change, must we take everything to be essentially constant and eternal in itself? That Parmenidean changelessness would be particularly odd in entities we are relying on to explain the fleeting, evanescent business of subjective experience.

Of course Mørch and others who make a similar case don’t claim to present a set of a priori conclusions about the nature of things; rather they suggest that the way we know about the essence of things is through direct experience. The inner nature of things is unknowable except in that one case where the thing whose inner nature is to be known is us. We know our own nature, at least. It’s intuitively appealing – but how do we know our own real nature? Why should being a thing bring knowledge of that thing? Just because we have an essential nature, there’s no reason to suppose we are acquainted with that inner nature; again we seem to need some hefty metaphysics to explain this, which is actually lacking. All the other examples of knowledge I can think of are constructed, won through experience, not inherent. If we have to invent a new kind of knowledge to support the theory the foundations may be weak.

At the end of the day, the simplest and most parsimonious view, I think, is to say that things just are made up of their properties, with no essential nub besides. Leibniz’s Law tells us that that’s the nature of identity. To be sure, the list will include abstract properties as well as purely physical ones, but abstract properties that are amenable to empirical test, not ones that stand apart from any possible observation. Mørch disagrees:

Some have argued that there is nothing more to particles than their relations, but intuition rebels at this claim. For there to be a relation, there must be two things being related. Otherwise, the relation is empty—a show that goes on without performers, or a castle constructed out of thin air.

I think the argument is rather that the properties of a particle relate to each other, while these groups of related properties relate in turn to other such groups. Groups don’t require a definitive member, and particles don’t require a single definitive essence. Indeed, since the particle’s essential self cannot determine any of its properties (or it could be brought within the pale of physics) it’s hard to see how it can have a defined relation to any of them and what role the particle-in-itself can play in Mørch’s relational show.

The second point where I think Mørch goes wrong is in the leap to panpsychism. The argument seems to be that the NHP requires non-structural stuff (which she likens to the hardware on which the software of the laws of physics runs – though I myself wouldn’t buy unstructured hardware); the OHP gives us the non-structural essence of conscious experience (of course conscious experience does have structure, but Mørch takes it that down there somewhere is the structureless ineffable something-it-is-like); why not assume that the latter is universal and fills the gap exposed by the NHP?

Well, because other matter exhibits no signs of consciousness, and because the fact that our essence is a conscious essence just wouldn’t warrant the assumption that all essences are conscious ones. Wouldn’t it be simpler to think that only the essences of outwardly conscious beings are conscious essences? This is quite apart from the many problems of panpsychism, which we’ve discussed before, and which Mørch fairly acknowledges.

So I’m not convinced, but the case is a bold and stimulating one and more persuasively argued than it may seem from my account. I applaud the aims and spirit of the expedition even though I may regret the direction it took.

Self-discovery: fascinating journey of life or load of tosh? An IAI discussion.

On the whole, I think the vastness of the subject means we get no more than first steps here, though the directions are at least interesting. Joanna Kavenna notes the paradoxical entanglements that can arise from self-examination and makes an interesting comparison with the process of novelists finding their ‘voice’. Exploration of selves is of course the bedrock of the novel, a topic which could take up many pages in itself. She asserts that the self is experientially real, but that thought also floats away unexamined.

David Chalmers has a less misty proposition; people have traits and we are inclined to think of some as deep or essential. Identifying these is a reasonable project, but not without dangers if we settle on the wrong ones.

Ed Stafford seems to be uncomfortable with philosophy unless it comes from an ayahuasca session or a distant tribe. He likes the idea of thinking with your stomach, but does not shed any light on the interesting question of how stomach thoughts differ from brain ones. In general he seems to take the view that for well-adjusted people there is no mystery, one knows who one is and there’s no need to wibble about it. Oddly, though, he mentions being dropped on a desert island where the solitude was so severe that, even when the helicopter was still in view, he vomited. To suffer radical depersonalisation after a couple of minutes alone on a beach seems an extraordinary example of personal fragility, but I suppose we are to understand this was before he centred himself through contact with more robust cultures. Of course, those who reject theory always in fact have a theory; it’s just one that they either haven’t examined or don’t want examined. In response to Chalmers’ suggestion that a loving environment can surely lead to personal growth, he seems to begin adding qualifications to his view of the robustly settled personality, but if we are witnessing actual self-discovery here it doesn’t go far.

Myself I reckon that you don’t need to identify your essential traits to experience self-discovery; merely becoming conscious of your own traits renders them self-conscious and transforms them, an iterative process that represents a worthwhile kind of growth, both moral and psychological. But I’ve never tried ayahuasca.

Is colour the problem or the solution?

Last year we heard about a way of correcting colour blindness with glasses. It only works for certain kinds of colour blindness, but the fact that it works at all is astonishing. Human colour vision relies on three different kinds of receptor cone cells in the retina; each picks up a different wavelength and the brain extrapolates from those data to fill in the spectrum. (Actually, it’s far more complex than that, with the background and light conditions taken into account so that the brain delivers a consistent colour reading for the same object even though in different conditions the light reflected from it may be of completely different wavelengths. But let’s leave that aside for now and stick with the simplistic view.) The thing is, receptor cells actually respond to a range of wavelengths; in some people two kinds of receptors have ranges that overlap so much the brain can’t discriminate. What the glasses do is cut out most of the overlapping wavelengths; suddenly the data from the different receptor cells are very different, and the brain can do a full-colour job at last.
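The mechanism can be sketched in a toy model. Everything below – the Gaussian sensitivity curves, the peak wavelengths, the notch band, the test spectra – is invented for illustration, not real colorimetry; the point is only that cutting out the band where two overlapping receptors agree makes their relative outputs diverge again:

```python
import math

# Toy model of anomalous colour vision and notch-filter glasses.
# All curves and numbers are made up; real cone sensitivities differ.

WAVELENGTHS = range(400, 701, 5)   # visible range sampled every 5 nm

def gaussian(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# An anomalous trichromat: the two longer-wave cone peaks almost coincide,
# so their outputs to most spectra are nearly identical.
CONE_PEAKS = (440, 545, 550)

def cone_vector(spectrum, transmit=lambda w: 1.0):
    """Each cone's integrated response to a spectrum seen through a filter,
    normalised to unit sum (what matters is the relative responses)."""
    raw = [sum(spectrum(w) * transmit(w) * gaussian(w, peak, 30)
               for w in WAVELENGTHS)
           for peak in CONE_PEAKS]
    total = sum(raw)
    return [r / total for r in raw]

def notch(w):
    """The glasses, schematically: block the band where the two
    overlapping cones respond almost identically."""
    return 0.0 if 535 <= w <= 560 else 1.0

def green(w):
    return gaussian(w, 530, 40)    # a greenish reflectance spectrum

def red(w):
    return gaussian(w, 620, 40)    # a reddish reflectance spectrum

plain = math.dist(cone_vector(green), cone_vector(red))
filtered = math.dist(cone_vector(green, notch), cone_vector(red, notch))
print(f"red/green separation, no glasses:   {plain:.3f}")
print(f"red/green separation, notch filter: {filtered:.3f}")
```

Running this, the distance between the red and green response vectors grows once the notch filter is in place, even though the filter only removes light: the brain gets less signal overall, but a more discriminable one.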

Now a somewhat similar approach has been used to produce glasses that turn normal vision into super colour vision. These new lenses exploit the fact that we have two eyes; by cutting out different parts of the range of wavelengths detected by the same kind of receptor in the right and left eyes, they give the effect of four kinds of receptor rather than three. In principle the same approach could double up all three kinds of receptor, giving us the effective equivalent of six kinds of receptor, though this has not been tried yet.

This tetrachromacy or four-colour system is not unprecedented. Some animals, notably pigeons, naturally have four or even more kinds of receptor. And a significant percentage of women, benefiting from the second copy of the relevant genes that you get when you have two ‘X’ chromosomes, have four kinds of receptor, though it doesn’t always lead to enhanced colour vision because in most cases the range of the fourth receptor overlaps the range of another one too much to be useful.

There is no doubt that all three kinds of tetrachromat – pigeons, women with lucky genes, and people with special glasses – can discriminate between more colours than the rest of us. Because our trichromat eyes have only three sources of data, they have to treat mixtures of wavelengths as though they were the same as pure wavelengths with values equivalent to the average of the mixtures – though they’re not. Tetrachromats can do a bit better at this (and I conjecture that colour video and camera images, which use only the three colours needed to fool normal eyes, must sometimes look a bit strange to tetrachromats).
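The averaging point can be made concrete with a similar toy sketch. Again the Gaussian cone curves, the peaks and the wavelengths are all invented for illustration: we solve a small linear system for a green-plus-red mixture that excites the two longer-wave cones exactly as a pure ‘yellow’ light does – a metamer, indistinguishable to the trichromat – and then show that a hypothetical fourth receptor peaking between them tells the two apart:

```python
import math

# Toy metamerism demo with idealised Gaussian cones (not real data).

def cone(w, peak, sigma=30.0):
    """Idealised sensitivity of a cone with the given peak wavelength."""
    return math.exp(-((w - peak) ** 2) / (2 * sigma ** 2))

M_PEAK, L_PEAK = 530, 560    # the trichromat's two longer-wave cone types
TARGET = 590                 # pure 'yellow' light to be matched
GREEN, RED = 530, 630        # monochromatic primaries for the mixture

# Solve the 2x2 linear system so that g*GREEN + r*RED gives exactly
# the same M and L responses as the pure target wavelength.
a11, a12 = cone(GREEN, M_PEAK), cone(RED, M_PEAK)
a21, a22 = cone(GREEN, L_PEAK), cone(RED, L_PEAK)
t1, t2 = cone(TARGET, M_PEAK), cone(TARGET, L_PEAK)
det = a11 * a22 - a12 * a21
g = (t1 * a22 - a12 * t2) / det
r = (a11 * t2 - t1 * a21) / det

def mixture_response(peak):
    """Response of a cone with this peak to the green+red mixture."""
    return g * cone(GREEN, peak) + r * cone(RED, peak)

# The trichromat's M and L channels match by construction: a metamer.
print(mixture_response(M_PEAK), cone(TARGET, M_PEAK))
print(mixture_response(L_PEAK), cone(TARGET, L_PEAK))

# ...but a hypothetical fourth receptor between them is not fooled.
EXTRA_PEAK = 545
print(mixture_response(EXTRA_PEAK), cone(TARGET, EXTRA_PEAK))
```

In this model the mixture and the pure light are identical for the three-cone observer but give visibly different readings on the fourth channel, which is also why three-colour video, tuned to fool trichromat eyes, might plausibly look odd to a tetrachromat.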

Do tetrachromats see the same spectrum as we do, but in better detail, or do they actually see different colours? There’s never been a way to tell for sure. Tetrachromats can’t tell us what colours they see any more than we can tell each other whether my red is the same as yours, or instead is the same as what you experience for green. The curious fact that the ends of the spectrum join up into a complete colour wheel might support the idea that the spectrum is in some sense an objective reality, based on mathematical harmonic relationships analogous to those of sound waves; in effect we see a single octave of colour with the wavelength at one end double (or half) that at the other. I’ve sort of speculated in the past that if our eyes could see a much wider range of wavelengths we would see lower and higher octaves of colour; not wholly new colours like Terry Pratchett’s octarine, but higher and lower reds, greens and blues. I speculated further that ‘lower’ and ‘higher’ might actually be experienced as ‘cooler’ and ‘hotter’. That is of course the wildest guesswork, but the thesis that everyone – tetrachromats included – sees the same spectrum but in lesser or greater detail seems to be confirmed by the experimenters if I’m reading it right.

Of course, colour vision is not just a matter of what happens in the retina; there is also a neural colour space mapped out in the brain (which interestingly is a little more extensive than the colour space of the real world, leading to the hidden existence of ‘chimerical’ colours).  Do pigeons, human tetrachromats, and human trichromats all map colours to similar neural spaces? I haven’t been able to find out, but I’m guessing the answer is yes. If it weren’t so, there would be potential issues over neural plasticity. If your brain receives no signals from one eye during your early life, it re-purposes the relevant bits of neural real estate and you cannot get your vision back later even if the eye starts sending the right kind of signal. We might expect that people who were colour blind from birth would be affected in a similar way, yet in fact use of the new glasses seems to bring an intact colour system straight into operation for the first time. So it might be that a standard spectral colour space is hard-wired into the genes of all of us (even pigeons), or again it might be that the spectrum is a mathematical reality which any visual system must represent, albeit with varying fidelity.

All of this is skating around the classic philosophical issues. Does Mary, who never saw colours, know something new when she has seen red? Well, we can say with confidence that the redness will be registered and mapped properly; she will not have lost the ability to see colour through being brought up in a monochrome world. More importantly, the scientifically tractable aspects of colour vision have moved another step closer to the subjective experience. We have some objective reasons for supposing that Mary’s colour experience will be arranged along the same spectral structure as ours, though not necessarily graduated with the same fineness.

None of this will banish the Hard Problem, or dispel our particular sense that colours especially are subjective optional extras. For a long time some have thought of colour as a ‘secondary’ property, in the observer, not the world; not like such properties as mass or volume, which are more ‘real’. The newly-understood complexity of colour vision leads to new arguments that it is in fact artificial, a useful artefact in the brain, in some sense not really there in objective reality.  My feeling though is that if we can all experience tetrachromacy, the gap between the objective and the subjective will not be perceived as being so unbridgeable as it has been to date.


Are robots short-changing us imaginatively?

Chat-bots, it seems, might be getting their second (or perhaps their third or fourth) wind. While they’re not exactly great conversationalists, the recent wave of digital assistants demonstrates the appeal of a computer you can talk to like a human being. Some now claim that a new generation of bots using deep machine learning techniques might be way better at human conversation than their chat-bot predecessors, whose utterances often veered rapidly from the gnomic to the insane.

A straw in the wind might be the Hugging Face app (I may be showing my age, but for me that name strongly evokes a ghastly Alien parasite). This greatly impressed Rachel Metz, who apparently came to see it as a friend. It’s certainly not an assistant – it doesn’t do anything except talk to you in a kind of parody of a bubbly teen with a limping attention span. The thing itself is available for iOS and the underlying technology, without the teen angle, appears to be on show here, though I don’t really recommend spending any time on either. Actual performance, based on a small sample (I can only take so much) is disappointing; rather than a leap forward it seems distinctly inferior to some Loebner prize winners that never claimed to be doing machine learning. Perhaps it will get better. Jordan Pearson here expresses what seem reasonable reservations about an app aimed at teens that demands a selfie from users as its opening move.

Behind all this, it seems to me, is the looming presence of Spike Jonze’s film Her, in which a professional letter writer from the near future (They still have letters? They still write – with pens?) becomes devoted to his digital assistant Samantha. Samantha is just one instance of a bot which people all over are falling in love with. The AIs in the film are puzzlingly referred to as Operating Systems, a randomly inappropriate term that perhaps suggests that Jonze didn’t waste any time reading up on the technology. It’s not a bad film at all, but it isn’t really about AI; nothing much would be lost if Samantha were a fairy, a daemon, or an imaginary friend. There’s some suggestion that she learns and grows, but in fact she seems to be a fully developed human mind, if not a superlative one, right from her first words. It’s perhaps unfair to single the relatively thoughtful Her out for blame, because with some honourable exceptions the vast majority of robots in fiction are like this; humans in masks.

Fictional robots are, in fact, fakes, and so are all chat-bots. No chat-bot designer ever set out to create independent cognition first and then let it speak; instead they simply echo us back to ourselves as best they can manage. This is a shame because the different patterns of thought that a robot might have – the special mistakes it might be prone to, the unexpected insights it might generate – are potentially very interesting; indeed I should have thought they were fertile ground for imaginative writers. But perhaps ‘imaginative’ understates the amazing creative powers that would be needed to think yourself out of your own essential cognitive nature. I read a discussion the other day about human nature; it seems to me that the truth is we don’t know what human nature is like because we have nothing much to compare it with; it won’t be until we communicate with aliens or talk properly with non-fake robots that we’ll be able to form a proper conception of ourselves.

To a degree it can be argued that there are examples of this happening already. Robots that aspire to Artificial General Intelligence in real world situations suffer badly from the Frame Problem, for instance. That problem comes in several forms, but I think it can be glossed briefly as the job of picking out from the unfiltered world the things that need attention. AI is terrible at this, usually becoming mired in irrelevance (hey, the fact that something hasn’t changed might be more important than the fact that something else has). Dennett, rightly I think, described this issue as not the discovery of a new problem for robots so much as a new discovery about human nature; turns out we’re weirdly, inexplicably good at something we never even realised was difficult.

How interesting it would be to learn more about ourselves along those challenging, mind-opening lines; but so long as we keep getting robots that are really human beings, mirroring us back to ourselves reassuringly, it isn’t going to happen.

Does recent research into autism suggest real differences between male and female handling of consciousness?

Traditionally, autism has been regarded as an overwhelmingly male condition. Recently, though, it has been suggested that the gender gap is not as great as it seems; it’s just that most women with autism go undiagnosed. How can that be? It is hypothesised that some sufferers are able to ‘camouflage’ the symptoms of their autism, and that this suppression of symptoms is particularly prevalent among women.

‘Camouflaging’ means learning normal social behaviours such as giving others appropriate eye contact, interpreting and using appropriate facial expressions, and so on. But surely, that’s just what normal people do? If you can learn these behaviours, doesn’t that mean you’re not autistic any more?

There’s a subtle distinction here between doing what comes naturally and doing what you’ve learned to do. Camouflaging, on this view, requires significant intellectual resources and continuous effort, so that while camouflaged sufferers may lead apparently normal lives, they are likely to suffer other symptoms arising from the sheer mental effort they have to put in – fatigue, depression, and so on.

Measuring the level of camouflaging – which is, after all, intended to be undetectable – obviously raises some methodological challenges. Now a study reported in the invaluable BPS Research Digest claims to have pulled it off. The research team used scanning and other approaches, but their main tool was to contrast two different well-established methods of assessing autism – the Autism Diagnostic Observation Schedule on the one hand and the Autism Spectrum Quotient on the other. While the former assesses ‘external’ qualities such as behaviour, the latter measures ‘internal’ ones. Putting it crudely, they measure what you actually do and what you’d like to do respectively. The ratio between the two scores yields a measure of how much camouflaging is going on, and in brief the results confirm that camouflaging is present to a far greater degree in women. I think in fact it’s possible the results are understated; all of the subjects were people who had already been diagnosed with autism; that criterion may have selected women who were atypically low in the level of camouflaging, precisely because women who do a lot of camouflaging would be more likely to escape diagnosis.
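As a toy illustration of the scoring idea – contrasting an ‘internal’ trait score with an ‘external’ behavioural score – something like the following sketch captures the logic. The function name, the normalisation, and the example numbers are all hypothetical illustrations, not the study’s actual formulae:

```python
# Toy sketch of a camouflaging measure: how much of the internally
# reported trait level fails to show up in observed behaviour.
# All names and numbers are illustrative, not the study's own scoring.

def camouflage_score(internal_traits: float, external_behaviour: float) -> float:
    """Crude index of camouflaging. Assumes both scores are normalised
    to the 0-1 range; higher values mean more camouflaging."""
    if internal_traits == 0:
        return 0.0
    return (internal_traits - external_behaviour) / internal_traits

# A hypothetical subject with strong self-reported traits ('internal',
# AQ-style) but near-typical observed behaviour ('external', ADOS-style)
# scores high on camouflaging:
print(round(camouflage_score(0.8, 0.3), 3))  # 0.625
```

The point of the crude formula is just that a big gap between what is reported inwardly and what is observed outwardly is itself the quantity of interest.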

The research is obviously welcome because it might help improve diagnosis rates for women, but also because a more equal rate of autism for men and women perhaps helps to dispel the idea, formerly popular but (to me at least) rather unpalatable, that autism is really little more than typical male behaviour exaggerated to unacceptable levels.

It does not eliminate the tricky gender issues, though. One thing that surely needs to be taken into account is the possibility that accommodating social pressures is something women do more of anyway. It is plausible (isn’t it?) that even among typical people, women devote more effort to social signals, listening and responding, laughing politely at jokes, and so on. It might be that there is a base level of activity among women devoted to ‘camouflaging’ normal irritation, impatience, and boredom which is largely absent in men, a baseline against which the findings for people with autism should properly be assessed. It might have been interesting to test a selection of non-autistic people, if that makes sense in terms of the tests. How far the general underlying difference, if it exists, might be due to genetics, socialisation, or other factors is a thorny question.

At any rate, it seems to me inescapable that what the study is really attempting to do, with its distinction between outward behaviour and inward states, is to measure the difference between unconscious and conscious control of behaviour. That subtle distinction, mentioned above, between natural and learned behaviour is really the distinction between things you don’t have to think about, and things that require constant, conscious attention. Perhaps we might draw a parallel of sorts with other kinds of automatic behaviour. Normally, a lot of things we do, such as walking, require no particular thought. All that stuff, once learned, is taken care of by the cerebellum and the cortex need not be troubled (disclaimer: I am not a neurologist). But people who have their cerebellum completely removed can apparently continue to function: they just have to think about every step all the time, which imposes considerable strain after a while. However, there’s no special organ analogous to the cerebellum that records our social routines, and so far as I know it’s not clear whether the blend of instinct and learning is similar either.

In one respect the study might be thought to open up a promising avenue for new therapeutic approaches. If women can, to a great extent, learn to compensate consciously for autism, and if that ability is to a great extent a result of social conditioning, then in principle one option would be to help male autism sufferers achieve the same thing through applying similar socialisation. Although camouflaging evidently has its downsides, it might still be a trick worth learning. I doubt if it is as simple as that, though; an awful lot of regimes have been tried out on male sufferers and to date I don’t believe the levels of success have been that great; on the other hand it may be that pervasive social pressure is different in kind from training or special regimes and not so easily deployed therapeutically. The only way might be to bring up autistic boys as girls…

If we take the other view, that women’s ability or predisposition to camouflage is not the result of social conditioning, then we might be inclined to look for genuine ‘hard-wired’ differences in the operation of male and female consciousness. One route to take from there would be to relate the difference to the suggested ability of women (already a cornerstone of gender-related folk psychology) to multi-task more effectively, dividing conscious attention without significant loss to the efficiency of each thread. Certainly one would suppose that having to pay constant attention to detailed social cues would have an impact on the ability to pay attention to other things, but so far as I know there is no evidence that women with camouflaged autism are any worse at paying attention generally than anyone else. Perhaps this is a particular skill of the female mind, while if men pay that much attention to social cues, their ability to listen to what is actually being said is sensibly degraded?

The speculative ice out here is getting thinner than I like, so I’ll leave it there; but in all seriousness, any study that takes us forward in this area, as this one seems to do, must be very warmly welcomed.

Are we losing it?

Nick Bostrom’s suggestion that we’re most likely living in a simulated world continues to provoke discussion. Joelle Dahm draws an interesting parallel with multiverses. I think myself that it depends a bit on what kind of multiverse you’re going for – the ones that come from an interpretation of quantum physics usually require conservation of identity between universes – you have to exist in more than one universe – which I think is both potentially problematic and strictly speaking non-Bostromic. Dahm also briefly touches on some tricky difficulties about how we could tell whether we were simulated or not, which seem reminiscent of Descartes’ doubts about how he could be sure he wasn’t being systematically deceived by a demon – hold that thought for now.

Some of the assumptions mentioned by Dahm would probably annoy Sabine Hossenfelder, who lays into the Bostromisers with a piece about just how difficult simulating the physics of our world would actually be: a splendid combination of indignation with actually knowing what she’s talking about.

Bostrom assumes that if advanced civilisations typically have a long lifespan, most will get around to creating simulated versions of their own civilisation, perhaps re-enactments of earlier historical eras. Since each simulated world will contain a vast number of people, the odds are that any randomly selected person is in fact living in a simulated world. The probability becomes overwhelming if we assume that the simulations are good enough for the simulated people to create simulations within their own world, and so on.
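The counting at the heart of the argument can be sketched in a few lines. All the quantities here are illustrative assumptions of my own, not Bostrom’s figures:

```python
# Back-of-envelope version of the counting argument: if a 'real'
# population runs many ancestor simulations, simulated people swamp
# real ones. All numbers below are illustrative assumptions.

def fraction_simulated(real_people: int, sims: int, people_per_sim: int) -> float:
    """Fraction of all person-like experiences that are simulated."""
    simulated = sims * people_per_sim
    return simulated / (real_people + simulated)

# Even modest numbers make a randomly selected person overwhelmingly
# likely to be one of the simulated ones: with 1000 simulations the
# fraction is 1000/1001.
p = fraction_simulated(real_people=10**10, sims=1000, people_per_sim=10**10)
print(round(p, 4))
```

Nesting only sharpens this: if each simulation itself runs simulations, the simulated population multiplies again at every level while the real population stays fixed.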

There’s plenty of scope for argument about whether consciousness can be simulated computationally at all, whether worlds can be simulated in the required detail, and certainly about the optimistic idea of nested simulations. But recently I find myself thinking, isn’t it simpler than that? Are we simulated people in a simulated world? No, because we’re real, and people in a simulation aren’t real.

When I say that, people look at me as if I were stupid, or at least, impossibly naive. Dude, read some philosophy, they seem to say. Dontcha know that Socrates said we are all just grains of sand blowing in the wind?

But I persist – nothing in a simulation actually exists (clue’s in the name), so it follows that if we exist, we are not in a simulation. Surely no-one doubts their own existence (remember that parallel with Descartes), or if they do, only on the kind of philosophical level where you can doubt the existence of anything? If you don’t even exist, why do I even have to address your simulated arguments?

I do, though. Actually, non-existent people can have rather good arguments; dialogues between imaginary people are a long-established philosophical method (in my feckless youth I may even have indulged in the practice myself).

But I’m not entirely sure what the argument against reality is. People do quite often set out a vision of the world as powered by maths; somewhere down there the fundamental equations are working away and the world is what they’re calculating. But surely that is the wrong way round; the equations describe reality, they don’t dictate it. A system of metaphysics that assumes the laws of nature really are explicit laws set out somewhere looks tricky to me; and worse, it can never account for the arbitrary particularity of the actual world. We sort of cling to the hope that this weird specificity can eventually be reduced away by titanic backward extrapolation to a hypothetical time when the cosmos was reduced to the simplicity of a single point, or something like it; but we can’t make that story work without arbitrary constants and the result doesn’t seem like the right kind of explanation anyway. We might appeal instead to the idea that the arbitrariness of our world arises from its being an arbitrary selection out of the incalculable banquet of the multiverse, but that doesn’t really explain it.

I reckon that reality just is the thing that gets left out of the data and the theory; but we’re now so used to the supremacy of those two we find it genuinely hard to remember, and it seems to us that a simulation with enough data is automatically indistinguishable from real events – as though once your 3D printer was programmed, there was really nothing to be achieved by running it.

There’s one curious reference in Dahm’s piece which makes me wonder whether Christof Koch agrees with me. She says the Integrated Information Theory doesn’t allow for computer consciousness. I’d have thought it would; but the remarks from Koch she quotes seem to be about how you need not just the numbers about gravity but actual gravity too, which sounds like my sort of point.

Regular readers may already have noticed that I think this neglect of reality also explains the notorious problem of qualia; they’re just the reality of experience. When Mary sees red, she sees something real, which of course was never included in her perfect theoretical understanding.

I may be naive, but you can’t say I’m not consistent…

A somewhat enigmatic report in the Daily Telegraph says that this chess problem was devised by Roger Penrose, who claims that chess programs can’t solve it but humans can get a draw or even a win.

I’m not a chess buff, but it looks trivial. Although Black has an immensely powerful collection of pieces, they are all completely bottled up and immobile, apart from three bishops. Since these are all on white squares, the White king is completely safe from them if he stays on black squares. Since the white pawns fencing in Black’s pieces are all on black squares, the bishops can’t do anything about them either. It looks like a drawn position already, in fact.
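The square-colour point is easy to verify mechanically. Here is a minimal sketch (the coordinate scheme and colour convention are my own, not anything from the report): a bishop’s diagonal moves never change square colour, so a king that keeps to dark squares can never be attacked by bishops confined to light squares.

```python
# Minimal sketch of the square-colour parity argument. Coordinates are
# (file, rank), each 0-7; the colour convention is chosen so a1 is dark.

def square_colour(file: int, rank: int) -> str:
    return "light" if (file + rank) % 2 else "dark"

def bishop_moves(file: int, rank: int):
    """Yield every square a bishop on (file, rank) attacks in one move."""
    for df, dr in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
        f, r = file + df, rank + dr
        while 0 <= f <= 7 and 0 <= r <= 7:
            yield (f, r)
            f, r = f + df, r + dr

# Every square a light-squared bishop attacks is itself light, so a
# king sitting on dark squares is never in check from it:
assert all(square_colour(f, r) == "light" for f, r in bishop_moves(5, 0))
print("a light-squared bishop never attacks a dark square")
```

A human sees this parity fact at a glance; the interesting question is why a brute-force move searcher doesn’t represent it at all.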

I suppose Penrose believes that chess computers can’t deal with this because it’s a very weird situation which will not be in any of their reference material. If they resort to combinatorial analysis, the huge number of moves available to the bishops is supposed to render the problem intractable, while the computer cannot see the obvious consequences of the position the way a human can.

I don’t know whether it’s true that all chess programs are essentially that stupid, but it is meant to buttress Penrose’s case that computers lack some quality of insight or understanding that is an essential property of human consciousness.

This is all apparently connected with the launch of a new Penrose Institute, whose website is here, but appears to be incomplete. No doubt we’ll hear more soon.

Give up on real comprehension, says Daniel Dennett in From Bacteria to Bach and Back: commendably honest but a little discouraging to the reader? I imagine it set out like the warning above the gates of Hell: ‘Give up on comprehension, you who turn these pages’. You might have to settle for acquiring some competences.

What have we got here? In this book, Dennett is revisiting themes he developed earlier in his career, retelling the grand story of the evolution of minds. We should not expect big new ideas or major changes of heart (but see last week’s post for one important one). It would have been good at this stage to have a distillation: a perfect, slim little volume presenting a final crystalline formulation of what Dennett is all about. This isn’t that. It’s more like a sprawling Greatest Hits album. In there somewhere are the old favourites that will always have the fans stomping and shouting (there’s some huffing and puffing from Dennett about how we should watch out because he’s coming for our deepest intuitions with scary tools that may make us flinch, but honestly by now this stuff is about as shocking and countercultural as your dad’s Heavy Metal collection); but we’ve also got unnecessary cover versions of ideas by other people, some stuff that was never really a hit in the first place, and unfortunately one or two bum notes here and there.

And, oh dear, another attempt to smear Descartes by association. First Dennett energetically promoted the phrase “Cartesian theatre” – so hard that some people suppose it actually comes from Descartes; now we have ‘Cartesian gravity’, more or less a boo-word for any vaguely dualistic tendency Dennett doesn’t like. This is surely not good intellectual manners; it wouldn’t be quite so bad if it wasn’t for the fact that Descartes actually had a theory of gravity, so that the phrase already has a meaning. Should a responsible professor be spreading new-minted misapprehensions like this? Any meme will do?

There’s a lot about evolution here that rather left me cold (but then I really, really don’t need it explained again, thanks); I don’t think Dennett’s particular gift is for popularising other people’s ideas and his take seems a bit dated. I suspect that most intelligent readers of the book will already know most of this stuff and maybe more, since they will probably have kept up with epigenetics and the various proposals for extension of the modern synthesis that have emerged in the current century (and the fascinating story of viral intervention in human DNA, surely a natural for anyone who likes the analogy of the selfish gene?), none of which gets any recognition here (I suppose in fairness this is not intended to be full of new stuff). Instead we hear again the tired and in my opinion profoundly unconvincing story about how leaping (‘stotting’) gazelles are employing a convoluted strategy of wasting valuable energy as a lion-directed display of fitness. It’s just an evasive manoeuvre, get over it.

For me it’s the most Dennettian bits of the book that are the best, unsurprisingly. The central theme that competence precedes, and may replace, comprehension is actually well developed. Dennett claims that evolution and computation both provide ‘inversions’ in which intentionless performance can give the appearance of intentional behaviour. He has been accused of equivocating over the reality of intentionality, consciousness and other concepts, but I like his attitude over this and his defence of the reality of ‘free-floating rationales’ seems good to me. It gives us permission to discuss the ‘purposes’ of things without presupposing an intelligent designer whose purposes they are, and I’m completely with Dennett when he argues that this is both necessary and acceptable. I’ve suggested elsewhere that talking about ‘the point’ of things, and in a related sense, what they point to, is a handy way of doing this. The problem for Dennett, if there is one, is that it’s not enough for competence to replace comprehension often; he needs it to happen every time by some means.

Dennett sets out a theoretical space with ‘bottom-up vs top-down’, ‘random vs directed search’, and ‘comprehension’ as its axes; at one corner of the resulting cube we have intentionless structures like a termite colony; at the other we have fully intentional design like Gaudi’s church of the Sagrada Familia, which to Dennett’s eye resembles a termite colony. Gaudi’s perhaps not the ideal choice here, given his enthusiasm for natural forms; it makes Dennett seem curiously impressed by the underwhelming fact that buildings by an architect who borrowed forms from the natural world turn out to have forms resembling those found in nature.

Still, the space suggests a real contrast between the mindless processes of evolution and deliberate design, which at first sight looks refreshingly different and unDennetian. It’s not, of course; Dennett is happy to embrace that difference so long as we recognise that the ‘deliberate design’ is simply a separate evolutionary process powered by memes rather than genes.

I’ve never thought that memes, Richard Dawkins’s proposed cultural analogue of genes, were a particularly helpful addition to Dennett’s theoretical framework, but here he mounts an extended defence of them. One of the worst flaws in the theory as it stands – and there are several – is its confused ontology. What are memes – physical items of culture or abstract ideas? Dennett, as a professional philosopher, seems more sensitive to this problem than some of the more metaphysically naive proponents of the meme. He provides a relatively coherent vision by invoking the idea that memes are ‘tokens’; they may take all sorts of physical forms – written words, pictures, patterns of neuronal firing – but each form is a token of a particular way of behaving. The problem here is that anything at all can serve as a token of any meme; we only know that a given noise or symbol tokens a specific meme because of its meaning. There may be – there certainly are – some selective effects that bite on the actual form of particular tokens. A word that is long or difficult to pronounce is more likely to be forgotten. But the really interesting selections take place at the level of meanings; that requires a much more complex level of explanation. There may still be mechanisms involved that are broadly selective if not exactly Darwinian – I think there are – but I believe any move up to this proper level of complexity inevitably edges the simplistic concept of the meme out of play.

The original Dawkinsian observation that the development of cultural items sometimes resembles evolution was sound, but it implicitly called for the development of a general theory which, in spite of some respectable attempts, has simply failed to appear. Instead, the supporters of memetics, perhaps trapped by the insistent drumbeat of the Dawkinsian circus, have tended to insist instead that it’s all Darwinian natural selection. How a genetic theory can be Darwinian when Darwin never heard of genes is just one of the lesser mysteries here (should we call it ‘Mendelian’ instead? But Darwin’s name is the hooray word here just as Descartes’ is the cue for boos). Among the many ways in which cultural selection does not resemble biological evolution, Dennett notes the cogent objection that there is nothing that corresponds to DNA; no general encoding of culture on which selection can operate. One of the worst “bum notes” in the book is Dennett’s strange suggestion that HTML might come to be our cultural DNA. This is, shall we say, an egregious misconception of the scope of a text mark-up language.

Anyway, it’s consciousness we’re interested in (check out Tom Clark’s thoughtful take here) and the intentional stance is the number the fans have been waiting for; cunningly kept till last by Dennett. When we get there, though, we get a remix instead of the classic track. Here he has a new metaphor, shrewdly calculated to appeal to the youth of today; it’s all about apps. Our impression of consciousness is a user illusion created by our gift for language; it’s like the icons that activate the stuff on your phone. You may object that a user illusion already requires a user, but hang on. Your ability to talk about yourself is initially useful for other people, telling them useful stuff about your internal states and competences, but once the system is operating, you can read it too. It seems plausible to me that something like that is indeed an important part of the process of consciousness, though in this version I felt I had rather lost track of what was illusory about it.

Dennett moves on to a new attack on qualia. This time he offers an explanation of why people think they occur – it’s because of the way we project our impressions back out into the world, where they may seem unaccountable. He demonstrates the redundancy of the idea by helpfully sketching out how we could run up a theory of qualia and noting how pointless they are. I was nodding along with this. He suggests that qualia and our own sense of being the intelligent designers in our own heads are the same kind of delusion, simply applied externally or internally. I suppose that’s where the illusion is.

He goes on to defend a sort of compatibilist view of free will and responsibility; another example of what Descartes might be tempted to label Dennettian Equivocation, but as before, I like that posture and I’m with him all the way. He continues with a dismissal of mysterianism, leaning rather more than I think is necessary on the interesting concept of joint understanding, where no one person gets it all perfectly, but nothing remains to be explained, and takes a relatively sceptical view of the practical prospects for artificial general intelligence, even given recent advances in machine learning. Does Google Translate display understanding (in some appropriate sense)? No, or rather, not yet. This is not Dennett as we remember him; he speaks disparagingly of the cheerleaders for AI and says that “some of us” always discounted the hype. Hmm. Daniel Dennett, long-time AI sceptic?

What’s the verdict then? Some good stuff in here, but true fans will always favour the classic album; if you want Dennett at his best, the aficionado will still tell you to buy Consciousness Explained.