It’s all theology.

Consciousness: it’s all theology in the end. Or philosophy, if the distinction matters. Beyond a certain point it is, at any rate; when we start asking about the nature of consciousness, about what it really is. That, in bald summary, seems to be the view of Robert A. Burton in a resigned Nautilus piece. Burton is a neurologist, and he describes how Wilder Penfield first provided really hard and detailed evidence of close, specific correlations between neural and mental events, by showing that direct stimulation of neurons could evoke all kinds of memories, feelings, and other vivid mental experiences. Yet even Penfield, at the close of his career, did not think the mind, the spirit of man, could yet be fully explained by science, or would be any time soon; in the end we each had to adopt personal assumptions or beliefs about that.

That is more or less the conclusion Burton has come to. Of course he understands the great achievements of neuroscience, but in the end the Hard Problem, which he seems to interpret rather widely, defeats analysis and remains a matter for theology or philosophy, in which we merely adopt the outlook most congenial to us. You may feel that’s a little dismissive in relation to both fields, but I suppose we see his point. It will be no surprise that Burton dislikes Dennett’s sceptical outlook and dismissal of religion. He accuses him of falling into a certain circularity: Dennett claims consciousness is an illusion, but illusions, after all, require consciousness.

We can’t, of course, dismiss the issues of consciousness and selfhood as completely irrelevant to normal life; they bear directly on such matters as personal responsibility, moral rights, and who should be kept alive. But I think Burton could well argue that the way we deal with these issues in practice tends to vindicate his outlook; what we often see when these things are debated is a clash between differing sets of assumptions rather than a skilful forensic duel of competing reasoning.

Another line of argument that would tend to support Burton is the one that says worries about consciousness are largely confined to modern Western culture. I don’t know enough for my own opinion to be worth anything, but I’ve been told that in classical Indian and Chinese thought the issue of consciousness just never really arises, although both cultures have long and complex philosophical traditions. Indeed, much the same could be said about Ancient Greek philosophy, I think; there’s a good deal of philosophy of mind, but consciousness as we know it just doesn’t really present itself as a puzzle. Socrates never professed ignorance about what consciousness was.

A common view would be that it’s only after Descartes that the issue as we know it starts to take shape, because of his dualism; a distinction between body and spirit that certainly has its roots in the theological (and philosophical) traditions of Western Christianity. I myself would argue that the modern topic of consciousness didn’t really take shape until Turing raised the real possibility of digital computers; consciousness was recruited to play the role of the thing computers haven’t got, and our views on it have been shaped by that perspective over the last fifty years in particular. I’ve argued before that although Locke gives what might be the first recognisable version of the Hard Problem, with an inverted spectrum thought experiment, he actually doesn’t care about it much and only mentions it as a secondary argument about matters that, to him, seemed more important.

I think it is true in some respects that, as William James said, consciousness is the last remnant of the vanishing soul. Certainly, when people deny the reality of the self, it often seems to me that their main purpose is to deny the reality of the soul. But I still believe that Burton’s view cedes too much to relativism – as I think Fodor once said, I hate relativism. We got into this business – even the theologians – because we wanted the truth, and we’re not going to be fobbed off with that stuff! Scientists may become impatient when no agreed answer is forthcoming after a couple of centuries, but I cling to the idea that there is a truth of the matter about personhood, freedom, and consciousness. I recognise that there is in this, ironically, a tinge of an act of faith, but I don’t care.

Unfortunately, as always, things are probably more complicated than that. Could freedom of the will, say, be a culturally relative matter? All my instincts say no, but if people don’t believe themselves to be free, doesn’t that in fact impose some limits on how free they really are? If I absolutely do not believe I’m able to touch the sacred statue, then although the inability may be purely psychological, couldn’t it be real? It seems there are at least some fuzzy edges. Could I abolish myself by ceasing to believe in my own existence? In a way you could say that is in caricature form what Buddhists believe (though they think that, correctly understood, my existence was a delusion anyway). That’s too much for me, and not only because of the sort of circularity mentioned above; I think it’s much too pessimistic to give up on a good objective account of agency and selfhood.

There may never be a single clear answer that commands universal agreement on these issues, but then there has never been, and never will be, complete agreement about the correct form of human government, yet we have surely come on a bit in the last thousand years. To abandon hope of similar gradual consensual progress on consciousness might be to neglect our civic duty. It follows that by reading and thinking about this, you are performing a humane and important task. Well done!

[Next week I return from a long holiday; I’ll leave you to judge whether I make more or less sense.]

Morality and Consciousness

What is the moral significance of consciousness? Jim Davies addressed the question in a short but thoughtful piece recently.

Davies quite rightly points out that although the nature of consciousness is often seen as an academic matter, remote from practical concerns, it actually bears directly on how we treat animals and each other (and of course, robots, an area that was purely theoretical not that long ago, but becomes more urgently practical by the day). In particular, the question of which entities are to be regarded as conscious is potentially decisive in many cases.

There are two main ways my consciousness affects my moral status. First, if I’m not conscious, I can’t be a moral subject, in the sense of being an agent (perhaps I can’t anyway, but if I’m not conscious it really seems I can’t get started). Second, I probably can’t be a moral object either; I don’t have any desires that can be thwarted and since I don’t have any experiences, I can’t suffer or feel pain.

Davies asks whether we need to give plants consideration. They respond to their environment and can suffer damage, but without a nervous system it seems unlikely they feel pain. However, pain is a complex business, with a mix of simple awareness of damage, actual experience of that essential bad thing that is the experiential core of pain, and in humans at least, all sorts of other distress and emotional response. This makes the task of deciding which creatures feel pain rather difficult, and in practice guidelines for animal experimentation rely heavily on the broad guess that the more like humans they are, the more we should worry. If you’re an invertebrate, then with few exceptions you’re probably not going to be treated very tenderly. As we come to understand neurology and related science better, we might have to adjust our thinking. This might let us behave better, but it might also force us to give up certain fields of research which are useful to us.

To illustrate the difference between mere awareness of harm and actual pain, Davies suggests the example of watching our arm being crushed while heavily anaesthetised (I believe there are also drugs that in effect allow you to feel the pain while not caring about it). I think that raises some additional fundamental issues about why we think things are bad. You might indeed sit by and watch while your arm was crushed without feeling pain or perhaps even concern. Perhaps we can imagine that for some reason you’re never going to need your arm again (perhaps now you have a form of high-tech psychokinesis, an ability to move and touch things with your mind that simply outclasses that old-fashioned ‘arm’ business), so you have no regrets or worries. Even so, isn’t there just something bad about watching the destruction of such a complex and well-structured limb?

Take a different example; everyone is dead and no-one is ever coming back, not even any aliens. The only agent left is a robot which feels no pleasure or pain but makes conscious plans; it’s a military robot and it spends its time blowing up fine buildings and destroying works of art, for no particular reason. Its vandalistic rampage doesn’t hurt anyone and cannot have any consequences, but doesn’t its casual destructiveness still seem bad?

I’d like to argue that there is a badness to destruction over and above its consequential impact, but it’s difficult to construct a pure example, and I know many people simply don’t share my intuition. It is admittedly difficult because there’s always the likelihood that one’s intuitions are contaminated by ingrained assumptions about things having utility. I’d like to say there’s a real moral rule that favours more things and more organisation, but without appealing to consequentialist arguments it’s hard for me to do much more than note that in fact moral codes tend to inhibit destruction and favour its opposite.

However, if my gut feeling is right, it’s quite important, because it means the largely utilitarian grounds used for rules about animal research and some human matters are not quite adequate after all; the fact that some piece of research causes no pain is not necessarily enough to stop its destructive character being bad.

It’s probably my duty to work on my intuitions and arguments a bit more, but that’s hard to do when you’re sitting in the sun with a beer in the charming streets of old Salamanca…

Symptoms of consciousness

Where did consciousness come from? A recent piece in New Scientist (paywalled, I’m afraid) reviewed a number of ideas about the evolutionary origin and biological nature of consciousness. The article obligingly offered a set of ten criteria for judging whether an organism is conscious or not…

  • Recognises itself in a mirror
  • Has insight into the minds of others
  • Displays regret having made a bad decision
  • Heart races in stressful situations
  • Has many dopamine receptors in its brain to sense reward
  • Highly flexible in making decisions
  • Has ability to focus attention (subjective experience)
  • Needs to sleep
  • Sensitive to anaesthetics
  • Displays unlimited associative learning

This is clearly a bit of a mixed bag. One or two of these have a clear theoretical base; they could be used as the basis of a plausible definition of consciousness. Having insight into the minds of others (‘theory of mind’) is one, and unlimited associative learning looks like another. But robots and aliens need not have dopamine receptors or racing hearts, yet we surely wouldn’t rule out their being conscious on that account. The list is less like notes towards a definition and more of a collection of symptoms.

They’re drawn from some quite different sources, too. The idea that self-awareness and awareness of the minds of others has something to do with consciousness is widely accepted and the piece alludes to some examples in animals. A chimp shown a mirror will touch a spot that had been covertly placed on its forehead, which is (debatably) said to prove it knows that the reflection is itself. A scrub jay will re-hide food if it was seen doing the hiding the first time – unless it was seen only by its own mate. A rat that pressed the wrong lever in an experiment will, it seems, gaze regretfully at the right one (‘What do you do for a living?’ ‘Oh, I assess the level of regret in a rat’s gaze.’) Self-awareness certainly could constitute consciousness if higher-order theories are right, but to me it looks more like a product of consciousness and hence a symptom, albeit a pretty good one.

Another possibility is hedonic variation, here championed by Jesse Prinz and Bjørn Grinde. Many animals exhibit a raised heart rate and dopamine levels when stimulated – but not amphibians or fish (who seem to be getting a bad press on the consciousness front lately). There’s a definite theoretical insight underlying this one. The idea is that assigning pleasure to some outcomes, and letting that drive behaviour instead of just running off fixed patterns instinctively, allows an extra degree of flexibility which on the whole has a positive survival value. Grinde apparently thinks there are downsides too and on that account it’s unlikely that consciousness evolved more than once. The basic idea here seems to make a lot of sense, but the dopamine stuff apparently requires us to think that lizards are conscious while newts are not. That seems a fine distinction, though I have to admit that I don’t have enough experience of newts to make the judgement (or of lizards either if I’m being completely honest).

Bruno van Swinderen has a different view, relating consciousness to subjective experience. That, of course, is notoriously unmeasurable according to many, but luckily van Swinderen thinks it correlates with selective attention, or indeed is much the same thing. Why on earth he thinks that remains obscure, but he measures selective attention with some exquisitely designed equipment plugged into the brains of fruit flies. (‘Oh, you do rat regret? I measure how attentively flies are watching things.’)

Sleep might be a handy indicator, as van Swinderen believes it is creatures that do selective attention that need it. They also, from insects to vertebrates (fish are in this time), need comparable doses of anaesthetic to knock them out, whereas nematode worms need far more to stop them in their tracks. I don’t know whether this is enough. I think if I were shown a nematode that had finally been drugged up enough to make it keep still, I might be prepared to say it was unconscious; and if something can become unconscious, it must previously have been conscious.

Some think by contrast that we need a narrower view; Michael Graziano reckons you need a mental model, and while fish are still in, he would exclude the insects and crustaceans van Swinderen grants consciousness to. Eva Jablonka thinks you need unlimited associative learning, and she would let the insects and crustaceans back in, but hesitates over those worms. The idea behind associative learning is again that consciousness takes you away from stereotyped behaviour and allows more complex and flexible responses – in this case because you can, for example, associate complex sets of stimuli and treat them as one new stimulus, quite an appealing idea.

Really it seems to me that all these interesting efforts are going after somewhat different conceptions of consciousness. I think it was Ned Block who called it a ‘mongrel’ concept; there’s little doubt that we use it in very varied ways, from describing the property of a worm that’s still moving, at one end, to the ability to hold explicit views about the beliefs of other conscious entities, at the other. We don’t need one theory of consciousness, we need a dozen.

Hippocampal holodeck

Matt Faw says subjective experience is caused by a simulation in the hippocampus – a bit like a holodeck. There’s a brief description on the Brains Blog, with the full version here.

In a very brief sketch, Faw says that data from various systems is pulled together in the hippocampal system and compiled into a sort of unified report of what’s going on. This is sort of a global workspace system, whose function is to co-ordinate. The ongoing reportage here is like a rolling film or holodeck simulation, and because it’s the only unified picture available, it is mistaken for the brain’s actual interaction with the world. The actual work of cognition is done elsewhere, but this simulation is what gives rise to ‘neurotypical subjective experience’.

I’m uneasy about that; it doesn’t much resemble what I would call subjective experience. We can have a model of the self in the world without any experiencing going on (Roger Penrose suggested the example of a video camera pointed at a mirror), while the actual subjectivity of phenomenal experience seems to be nothing to do with the ‘structural’ properties of whether there’s a simulation of the world going on.

I believe the simulation or model is supposed to help us think about the world and make plans; but the actual process of thinking about things rarely involves a continuous simulation of reality. If I’m thinking of going out to buy a newspaper, I don’t have to run through imagining what the experience is going to be like; indeed, to do so would require some effort. Even if I do that, I’m the puppeteer throughout; it’s not like running a computer game and being surprised by what happens. I don’t learn much from the process.

And what would the point be? I can just think about the world. Laboriously constructing a model of the world and then thinking about that instead looks like a lot of redundant work and a terrible source of error if I get it wrong.

There’s a further problem for Faw in that there are people who manage without functioning hippocampi. Although they undoubtedly have serious memory problems, they can talk to us fairly normally and answer questions about their experiences. It seems weird to suggest that they don’t have any subjective experience; are they philosophical zombies?

Faw doesn’t want to say so. Instead he likens their thought processes to the ones that go on when we’re driving without thinking. Often we find we’ve driven somewhere but cannot remember any details of the journey. Faw suggests that what happens here is just that we don’t remember the driving. All the functions that really do the cognitive work are operating normally, but whereas in other circumstances their activity would get covered (to some extent) in the ‘news bulletin’ simulation, in this case our mind is dealing with something more interesting (a daydream, other plans, whatever), and just fails to record what we’ve done with the brake and the steering wheel. But if we were asked at the very moment of turning left what we were doing, we’d have no problem answering. People with no hippocampi are like this; constantly aware of the detail of current cognition, stuff that is normally hidden from neurotypically normal people, but lacking the broader context which for us is the source of normal conscious experience.

I broadly like this account, and it points to the probability that the apparent problems for Faw are to a degree just a matter of labelling. He’s calling it subjective experience, but if he called it reportable subjective experience it would make a lot of sense. We only ever report what we remember to have been our conscious experience: some time, even if only an instant, has to have passed. It’s entirely plausible that we rely on the hippocampus to put together these immediate, reportable memories for us.

So really what I call subjective experience is going on all the time out there; it doesn’t require a unified account or model; but it does need the hippocampal newsletter in order to become reportable. Faw and I might disagree on the fine philosophical issue of whether it is meaningful to talk about experiences that cannot, in principle, be reported; but in other ways we don’t really differ as much as it seemed.

Superfluous Consciousness?

Do we need robots to be conscious? Ryota Kanai thinks it is largely up to us whether the machines wake up – but he is working on it. I think his analysis is pretty good and in fact I think we can push it a bit further.

His opening remarks, perhaps due to over-editing, don’t clearly draw the necessary distinction between Hard and Easy problems, or between subjective p-consciousness and action-related a-consciousness (I take it to be the same distinction, though not everyone would agree). Kanai talks about the unsolved mystery of experience, which he says is not a necessary by-product of cognition, and says that nevertheless consciousness must be a product of evolution. Hm. It’s p-consciousness, the ineffable, phenomenal business of what experience is like, that is profoundly mysterious, not a necessary by-product of cognition, and quite possibly nonsense. That kind of consciousness cannot in any useful sense be a product of evolution, because it does not affect my behaviour, as the classic zombie twin thought experiment explicitly demonstrates.  A-consciousness, on the other hand, the kind involved in reflecting and deciding, absolutely does have survival value and certainly is a product of evolution, for exactly the reasons Kanai goes on to discuss.

The survival value of A-consciousness comes from the way it allows us to step back from the immediate environment; instead of responding to stimuli that are present now, we can respond to ones that were around last week, or even ones that haven’t happened yet; our behaviour can address complex future contingencies in a way that is both remarkable and powerfully useful. We can make plans, and we can work out what to do in novel situations (not always perfectly, of course, but we can do much better than just running a sequence of instinctive behaviour).

Kanai discusses what must be about the most minimal example of this; our ability to wait three seconds before responding to a stimulus. Whether this should properly be regarded as requiring full consciousness is debatable, but I think he is quite right to situate it within a continuum of detached behaviour which, further along, includes reactions to very complex counterfactuals.

The kind he focuses on particularly is self-consciousness or higher-order consciousness; thinking about ourselves. We have an emergent problem, he points out, with robots whose reasons are hidden; increasingly we cannot tell why a complex piece of machine learning produced the behaviour it did. Why not get the robot to tell us, he says; why not enable it to report its own inner states? And if it becomes able to consider and explain its own internal states, won’t that be a useful facility which is also like the kind of self-reflecting consciousness that some philosophers take to be the crucial feature of the human variety?

There’s an immediate and a more general objection we might raise here. The really bad problem with machine learning is not that we don’t have access to the internal workings of the robot mind; it’s really that in some cases there just is no explanation of the robot’s behaviour that a human being can understand. Getting the robot to report will be no better than trying to examine the state of the robot’s mind directly; in fact it’s worse, because it introduces a new step into the process, one where additional errors can creep in. Kanai describes a community of AIs, endowed with a special language that allows them to report their internal states to each other. It sounds awfully tedious, like a room full of people who, when asked ‘How are you?’ each respond with a detailed health report. Maybe that is quite human in a way after all.

The more general theoretical objection (also rather vaguer, to be honest) is that, in my opinion at least, Kanai and those Higher Order Theory philosophers just overstate the importance of being able to think about your own mental states. It is an interesting and important variety of consciousness, but I think it just comes for free with a sufficiently advanced cognitive apparatus. Once we can think about anything, then we can of course think about our thoughts.

So do we need robots to be conscious? I think conscious thought does two jobs for us that need to be considered separately although they are in fact strongly linked. I think myself that consciousness is basically recognition. When we pull off that trick of waiting for three seconds before we respond to a stimulus, it is because we recognise the wait as a thing whose beginning is present now, and can therefore be treated as another present stimulus. This one simple trick allows us to respond to future things and plan future behaviour in a way that would otherwise seem to contradict the basic principle that the cause must come before effect.

The first job that trick does is to allow the planning of effective and complex actions to achieve a given goal. We might want a robot to be able to do that so it can acquire the same kind of effectiveness in planning and dealing with new situations which we have ourselves, a facility which to date has tended to elude robots because of the Frame Problem and other issues to do with the limitations of pre-programmed routines.

The second job is more controversial. Because action motivated by future contingencies has a more complex causal back-story, it looks a bit spooky, and it is the thing that confers on us the reality (or the illusion, if you prefer) of free will and moral responsibility. Because our behaviour comes from consideration of the future, it seems to have no roots in the past, and to originate in our minds. It is what enables us to choose ‘new’ goals for ourselves that are not merely the consequence of goals we already had. Now there is an argument that we don’t want robots to have that. We’ve got enough people around already to originate basic goals and take moral responsibility; they are a dreadful pain already with all the moral and legal issues they raise, so adding a whole new category of potentially immortal electronic busybodies is arguably something best avoided. That probably means we can’t get robots to do job number one for us either; but that’s not so bad because the strategies and plans which job one yields can always be turned into procedures after the fact and fed to ‘simple’ computers to run. We can, in fact, go on doing things the way we do them now; humans work out how to deal with a task and then give the robots a set of instructions; but we retain personhood, free will, agency and moral responsibility for ourselves.

There is quite a big potential downside, though; it might be that the robots, once conscious, would be able to come up with better aims and more effective strategies than we will ever be able to devise. By not giving them consciousness we might be permanently depriving ourselves of the best possible algorithms (and possibly some superior people, but that’s a depressing thought from a human point of view). True, but then I think that’s almost what we are on the brink of doing already. Kanai mentions European initiatives which may insist that computer processes come with an explanation that humans can understand; if put into practice the effect, once the rule collides with some of those processes that simply aren’t capable of explanation, would be to make certain optimal but inscrutable algorithms permanently illegal.

We could have the best of both worlds if we could devise a form of consciousness that did job number one for us without doing job two as an unavoidable by-product, but since in my view they’re all acts of recognition of varying degrees of complexity, I don’t see at the moment how the two can be separated.

Under-hypnotised

Maybe hypnosis is the right state of mind and ‘normal’ is really ‘under-hypnotised’?

That’s one idea that does not appear in the comprehensive synthesis of what we know about hypnosis produced by Terhune, Cleeremans, Raz and Lynn. It is a dense, concentrated document, thick with findings and sources, but they have done a remarkably good job of keeping it as readable as possible, and it’s both a useful overview and full of interesting detail. Terhune has picked out some headlines here.

Hypnosis, it seems, has two components; the induction and one or more suggestions. The induction is what we normally think of as the process of hypnotising someone. It’s the bit that in popular culture is achieved by a swinging watch, mystic hand gestures or other theatrical stuff; in common practice probably just a verbal routine. It seems that although further research is needed around optimising the induction, the details are much less important than we might have been led to think, and Terhune et al don’t find it of primary interest. The truth is that hypnosis is more about the suggestibility of the subject than about the effectiveness of the induction. In fact if you want to streamline your view, you could see the induction as simply the first suggestion. Post-hypnotic suggestions, which take effect after the formal hypnosis session has concluded, may be somewhat different and may use different mechanisms from those that serve immediate suggestions, though it seems this has yet to be fully explored.

Broadly, people fall into three groups: 10 to 15 per cent of people are very suggestible, responding strongly to the full range of suggestions; about the same proportion are weakly suggestible and respond to hypnosis poorly or not at all; the rest of us are somewhere in the middle. Suggestibility is a fairly fixed characteristic which does not change over time and seems to be heritable; but so far as we know it does not correlate strongly with many other cognitive qualities or personality traits (nor with dissociative conditions such as Dissociative Identity Disorder, formerly known as Multiple Personality Disorder). It does interestingly resemble the kind of suggestibility seen in the placebo effect – there’s good evidence of hypnosis itself being therapeutically useful for certain conditions – and both may be correlated with empathy.

Terhune et al regard the debate about whether hypnosis is an altered state of consciousness as an unproductive one; but there are certainly some points of interest here when it comes to consciousness. A key feature of hypnosis is the loss of the sense of agency; hypnotised subjects think of their arm moving, not of having moved their arm. Credible current theories attribute this to the suppression of second-order mental states, or of metacognition; amusingly, this ‘cold control theory’ seems to lend some support to the HOT (higher order theory) view of consciousness (alright, please yourselves). Typically in the literature it seems this is discussed as a derangement of the proper sense of agency, but of course elsewhere people have concluded that our sense of agency is a delusion anyway. So perhaps, to repeat my opening suggestion, it’s the hypnotised subjects who have it right, and if we want to understand our own minds properly we should all enter a hypnotic state. Or perhaps that’s too much like noticing that blind people don’t suffer from optical illusions?

There’s a useful distinction here between voluntary control and top-down control. One interesting thing about hypnosis is that it demonstrates the power of top-down control, where beliefs, suggestions, and other high-level states determine basic physiological responses, something we may be inclined to under-rate. But hypnosis also highlights strongly that top-down control does not imply agency; perhaps we sometimes mistake the former for the latter? At any rate it seems to me that some of this research ought to be highly relevant to the analysis of agency, and suggests some potentially interesting avenues.

Another area of interest is surely the ability of hypnosis to affect attention and perception. It has been shown that changes in colour perception induced by hypnosis are registered in the brain differently from mere imagined changes. If we tell someone under hypnosis to see red for green and green for red, does that change the qualia of the experience or not? Do they really see green instead of red, or merely believe that’s what is happening? If anything the facts of hypnosis seem to compound the philosophical problems rather than helping to solve them; nevertheless it does seem to me that quite a lot of the results so handily summarised here should have a bigger impact on current philosophical discussion than they have had to date.


Consciousness in the Singularity

What is it like to be a Singularity (or in a Singularity)?

You probably know the idea. At some point in the future, computers become generally cleverer than us. They become able to improve themselves faster than we can do, and an accelerating loop is formed where each improvement speeds up the process of improving, so that they quickly zoom up to incalculable intelligence and speed, in a kind of explosion of intellectual growth. That’s the Singularity. Some people think that we mere humans will at some point have the opportunity of digitising and uploading ourselves, so that we too can grow vastly cleverer and join in the digital world in which these superhuman conscious entities will exist.

Just to be clear upfront, I think there are some basic flaws in the plausibility of the story which mean the Singularity is never really going to happen: could never happen, in fact. However, it’s interesting to consider what the experience would be like.

How would we digitise ourselves? One way would be to create a digital model of our actual brain, and run that. We could go the whole hog and put ourselves into a fully simulated world, where we could enjoy sweet dreams forever, but that way we should miss out on the intellectual growth which the Singularity seems to offer, and we should also remain at the mercy of the vast new digital intellects who would be running the show. Generally I think it’s believed that only by joining in the cognitive ascent of these mighty new minds can we assure our own future survival.

In that case, is a brain simulation enough? It would run much faster than a meat brain, a point we’ll come back to, but it would surely suffer some of the limitations that biological brains are heir to. We could perhaps gradually enhance our memory and other faculties and gradually improve things that way, a process which might provide a comforting degree of continuity, but it seems likely that entities based on a biological scheme like this would be second-class citizens within the digital world, falling behind the artificial intellects who endlessly redesign and improve themselves. Could we then preserve our identity while turning fully digital and adopting a radical new architecture?

The subject of what constitutes personal identity, be it memory, certain kinds of continuity, or something else, is too large to explore here, except to note a basic question; can our identity ultimately be boiled down to a set of data? If the answer is yes (I actually believe it’s ‘no’, but today we’ll allow anything), then one way or another the way is clear for uploading ourselves into an entirely new digital architecture.

The way is also clear for duplicating and splitting ourselves. Using different copies of our data we can become several people and follow different paths. Can we then re-merge? If the data that constitutes us is static, it seems we should be able to recombine it with few issues; if it is partly a description of a dynamic process we might not be able to do the merger on the fly, and might have to form a third, merged individual. Would we terminate the two contributing selves? Would we worry less about ‘death’ in such cases? If you know your data can always be brought back into action, terminating the processes using that data (for now) might seem less frightening than the irretrievable destruction of your only brain.

This opens up further strange possibilities. At the moment our conscious experience is essentially linear (it’s a bit more complex than that, with layers and threads of attention, but broadly there’s a consistent chronological stream). In the brave new world our consciousness could branch out without limit; or we could have grid experiences, where different loci of consciousness follow crossing paths, merging at each node and then splitting again, before finally reuniting in one node with (very strange) remembered composite experience.

If merging is a possibility, then we should be able to exchange bits of ourselves with other denizens of the digital world, too. When handed a copy of part of someone else we might retain it as exterior data, something we just know about, or incorporate it into a new merged self, whether as a successor to ourselves, or as a kind of child; if all our data is saved the difference perhaps ceases to be of great significance. Could we exchange data like this with the artificial entities that were never human, or would they be too different?

I’m presupposing here that both the ex-humans and the artificial consciousnesses remain multiple and distinct. Perhaps there’s an argument for generally merging into one huge consciousness? I think probably not, because it seems to me that multiple loci of consciousness would just get more done in the way of thinking and experiencing. Perhaps when we became sufficiently linked and multi-threaded, with polydimensional multi-member grid consciousnesses binding everything loosely together anyway, the question of whether we are one or many – and how many – might not seem important any more.

If we can exchange experiences, does that solve the Hard Problem? We no longer need to worry whether your experience of red is the same as mine, we just swap. Now many people (and I am one) would think that fully digitised entities wouldn’t be having real experiences anyway, so any data exchange they might indulge in would be irrelevant. There are several ways it could be done, of course. It might be a very abstract business or entities of human descent might exchange actual neural data from their old selves. If we use data which, fed into a meat brain, definitely produces proper experience, it perhaps gets a little harder to argue that the exchange is phenomenally empty.

The strange thing is, even if we put all the doubts aside and assume that data exchanges really do transfer subjective experience, the question doesn’t go away. It might be that attachment to a particular node of consciousness conditions the experience so that it is different anyway.

Consider the example of experiences transferred within a single individual, but over time. Let’s think of acquired tastes. When you first tasted beer, it seemed unpleasant; now you like it. Does it taste the same, with you having learnt to like that same taste? Or did it in fact taste different to you back then – more bitter, more sour? I’m not sure it’s possible to answer with great confidence. In the same way, if one node within the realm of the Singularity ‘runs’ another’s experience, it may react differently, and we can’t say for sure whether the phenomenal experience generated is the same or not…

I’m assuming a sort of cyberspace where these digital entities live – but what do they do all day? At one end of the spectrum, they might play video games constantly – rather sadly reproducing the world they left behind. Or at the intellectually pure end, they might devote themselves to the study of maths and philosophy. Perhaps there will be new pursuits that we, in our stupid meaty way, cannot even imagine as yet. But it’s hard not to see a certain tedious emptiness in the pure life of the mind as it would be available to these intellectual giants. They might be tempted to go on playing a role in the real world.

The real world, though, is far too slow. Whatever else they have improved, they will surely have racked up the speed of computation to the point where thousands of years of subjective time take only a few minutes of real world time. The ordinary physical world will seem to have slowed down very close to the point of stopping altogether; the time required to achieve anything much in the real world is going to seem like millions of years.

In fact, that acceleration means that from the point of view of ordinary time, the culture within the Singularity will quickly reach a limit at which everything it could ever have hoped to achieve is done. Whatever projects or research the Singularitarians become interested in will be completed and wrapped up in the blinking of an eye. Unless you think the future course of civilisation is somehow infinite, it will be completed in no time. This might explain the Fermi Paradox, the apparently puzzling absence of advanced alien civilisations: once they invent computing, galactic cultures go into the Singularity, wrap themselves up in a total intellectual consummation, and within a few days at most, fall silent forever.

The Hard Problem of physics

Is there a Hard Problem of physics that explains the Hard Problem of consciousness?

Hedda Hassel Mørch has a thoughtful piece in Nautilus’s interesting Consciousness issue (well worth a look generally) that raises this idea. What is the alleged Hard Problem of physics? She says it goes like this…

What is physical matter in and of itself, behind the mathematical structure described by physics?

To cut to the chase, Mørch proposes that things in themselves have a nature not touched by physics, and that nature is consciousness. This explains the original Hard Problem – we, like other things, just are by nature conscious; but because that consciousness is our inward essence rather than one of our physical properties, it is missed out in the scientific account.

I’m sympathetic to the idea that the original Hard Problem is about an aspect of the world that physics misses out, but according to me that aspect is just the reality of things. There may not, according to me, be much more that can usefully be said about it. Mørch, I think, takes two wrong turns. The first is to think that there are such things as things in themselves, apart from observable properties. The second is to think that if this were so, it would justify panpsychism, which is where she ends up.

Let’s start by looking at that Hard Problem of physics. Mørch suggests that physics is about the mathematical structure of reality, which is true enough, but the point here is that physics is also about observable properties; it’s nothing if not empirical. If things have a nature in themselves that cannot be detected directly or indirectly from observable properties, physics simply isn’t interested, because those things-in-themselves make no difference to any possible observation. No doubt some physicists would be inclined to denounce such unobservable items as absurd or vacuous, but properly speaking they are just outside the scope of physics, neither to be affirmed nor denied. It follows, I think, that this can’t be a Hard Problem of physics; it’s actually a Hard Problem of metaphysics.

This is awkward because we know that human consciousness does have physical manifestations that are readily amenable to physical investigation; all of our conscious behaviour, our speech and writing, for example. Our new Hard Problem (let’s call it the NHP) can’t help us with those; it is completely irrelevant to our physical behaviour and cannot give us any account of those manifestations of consciousness. That is puzzling and deeply problematic – but only in the same way as the old Hard Problem (OHP) – so perhaps we are on the right track after all?

The problem is that I don’t think the NHP helps us even on a metaphysical level. Since we can’t investigate the essential nature of things empirically, we can only know about it by pure reasoning; and I don’t know of any purely rational laws of metaphysics that tell us about it. Can the inward nature of things change? If so, what are the (pseudo-causal?) laws of intrinsic change that govern that process? If the inward nature doesn’t change, must we take everything to be essentially constant and eternal in itself? That Parmenidean changelessness would be particularly odd in entities we are relying on to explain the fleeting, evanescent business of subjective experience.

Of course Mørch and others who make a similar case don’t claim to present a set of a priori conclusions about their own nature; rather they suggest that the way we know about the essence of things is through direct experience. The inner nature of things is unknowable except in that one case where the thing whose inner nature is to be known is us. We know our own nature, at least. It’s intuitively appealing – but how do we know our own real nature? Why should being a thing bring knowledge of that thing? Just because we have an essential nature, that’s no reason to suppose we are acquainted with that inner nature; again we seem to need some hefty metaphysics to explain this, which is actually lacking. All the other examples of knowledge I can think of are constructed, won through experience, not inherent. If we have to invent a new kind of knowledge to support the theory, the foundations may be weak.

At the end of the day, the simplest and most parsimonious view, I think, is to say that things just are made up of their properties, with no essential nub besides. Leibniz’s Law tells us that that’s the nature of identity. To be sure, the list will include abstract properties as well as purely physical ones, but abstract properties that are amenable to empirical test, not ones that stand apart from any possible observation. Mørch disagrees:

Some have argued that there is nothing more to particles than their relations, but intuition rebels at this claim. For there to be a relation, there must be two things being related. Otherwise, the relation is empty—a show that goes on without performers, or a castle constructed out of thin air.

I think the argument is rather that the properties of a particle relate to each other, while these groups of related properties relate in turn to other such groups. Groups don’t require a definitive member, and particles don’t require a single definitive essence. Indeed, since the particle’s essential self cannot determine any of its properties (or it could be brought within the pale of physics) it’s hard to see how it can have a defined relation to any of them and what role the particle-in-itself can play in Mørch’s relational show.

The second point where I think Mørch goes wrong is in the leap to panpsychism. The argument seems to be that the NHP requires non-structural stuff (which she likens to the hardware on which the software of the laws of physics runs – though I myself wouldn’t buy unstructured hardware); the OHP gives us the non-structural essence of conscious experience (of course conscious experience does have structure, but Mørch takes it that down there somewhere is the structureless ineffable something-it-is-like); why not assume that the latter is universal and fills the gap exposed by the NHP?

Well, because other matter exhibits no signs of consciousness, and because the fact that our essence is a conscious essence just wouldn’t warrant the assumption that all essences are conscious ones. Wouldn’t it be simpler to think that only the essences of outwardly conscious beings are conscious essences? This is quite apart from the many problems of panpsychism, which we’ve discussed before, and which Mørch fairly acknowledges.

So I’m not convinced, but the case is a bold and stimulating one and more persuasively argued than it may seem from my account. I applaud the aims and spirit of the expedition even though I may regret the direction it took.

Autism, Camouflage, and Consciousness

Does recent research into autism suggest real differences between male and female handling of consciousness?

Traditionally, autism has been regarded as an overwhelmingly male condition. Recently, though, it has been suggested that the gender gap is not as great as it seems; it’s just that most women with autism go undiagnosed. How can that be? It is hypothesised that some sufferers are able to ‘camouflage’ the symptoms of their autism, and that this suppression of symptoms is particularly prevalent among women.

‘Camouflaging’ means learning normal social behaviours such as giving others appropriate eye contact, interpreting and using appropriate facial expressions, and so on. But surely, that’s just what normal people do? If you can learn these behaviours, doesn’t that mean you’re not autistic any more?

There’s a subtle distinction here between doing what comes naturally and doing what you’ve learned to do. Camouflaging, on this view, requires significant intellectual resources and continuous effort, so that while camouflaged sufferers may lead apparently normal lives, they are likely to suffer other symptoms arising from the sheer mental effort they have to put in – fatigue, depression, and so on.

Measuring the level of camouflaging – which is, after all, intended to be undetectable – obviously raises some methodological challenges. Now a study reported in the invaluable BPS Research Digest claims to have pulled it off. The research team used scanning and other approaches, but their main tool was to contrast two different well-established methods of assessing autism – the Autism Diagnostic Observation Schedule on the one hand and the Autism Spectrum Quotient on the other. While the former assesses ‘external’ qualities such as behaviour, the latter measures ‘internal’ ones. Putting it crudely, they measure what you actually do and what you’d like to do respectively. The ratio between the two scores yields a measure of how much camouflaging is going on, and in brief the results confirm that camouflaging is present to a far greater degree in women. I think in fact it’s possible the results are understated; all of the subjects were people who had already been diagnosed with autism; that criterion may have selected women who were atypically low in the level of camouflaging, precisely because women who do a lot of camouflaging would be more likely to escape diagnosis.
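The study itself is described here only in outline – an ‘internal’ score set against an ‘external’ one – so purely by way of illustration, here is a minimal sketch in Python of how a camouflaging ratio of that general kind might be computed. The function name, the score ranges, and the normalisation are my own assumptions for the example, not the study’s actual formula:

    def camouflage_score(aq_score, ados_score, aq_range=(0, 50), ados_range=(0, 28)):
        # Toy illustration only: camouflaging as the mismatch between 'internal'
        # traits (AQ, self-reported) and 'external' presentation (ADOS, observed).
        # The ranges and the formula here are assumptions, not the study's method.
        aq_norm = (aq_score - aq_range[0]) / (aq_range[1] - aq_range[0])
        ados_norm = (ados_score - ados_range[0]) / (ados_range[1] - ados_range[0])
        # High internal traits with low observable symptoms -> more camouflaging.
        return aq_norm / ados_norm if ados_norm > 0 else float("inf")

    # Example: strong self-reported traits but relatively mild observed symptoms.
    print(camouflage_score(aq_score=38, ados_score=8))  # roughly 2.7

On this toy picture a result well above 1 would suggest a lot of camouflaging; whatever the real formula looks like, the essential point is that the measure is derived from the gap between the two assessments rather than from either one alone.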

The research is obviously welcome because it might help improve diagnosis rates for women, but also because a more equal rate of autism for men and women perhaps helps to dispel the idea, formerly popular but (to me at least) rather unpalatable, that autism is really little more than typical male behaviour exaggerated to unacceptable levels.

It does not eliminate the tricky gender issues, though. One thing that surely needs to be taken into account is the possibility that accommodating social pressures is something women do more of anyway. It is plausible (isn’t it?) that even among typical average people, women devote more effort to social signals, listening and responding, laughing politely at jokes, and so on. It might be that there is a base level of activity among women devoted to ‘camouflaging’ normal irritation, impatience, and boredom which is largely absent in men, a baseline against which the findings for people with autism should properly be assessed. It might have been interesting to test a selection of non-autistic people, if that makes sense in terms of the tests. How far the general underlying difference, if it exists, might be due to genetics, socialisation, or other factors is a thorny question.

At any rate, it seems to me inescapable that what the study is really attempting to do with its distinction between outward behaviour and inward states, is to measure the difference between unconscious and conscious control of behaviour. That subtle distinction, mentioned above, between natural and learned behaviour is really the distinction between things you don’t have to think about, and things that require constant, conscious attention. Perhaps we might draw a parallel of sorts with other kinds of automatic behaviour. Normally, a lot of things we do, such as walking, require no particular thought. All that stuff, once learned, is taken care of by the cerebellum and the cortex need not be troubled (disclaimer: I am not a neurologist). But people who have their cerebellum completely removed can apparently continue to function: they just have to think about every step all the time, which imposes considerable strain after a while. However, there’s no special organ analogous to the cerebellum that records our social routines, and so far as I know it’s not clear whether the blend of instinct and learning is similar either.

In one respect the study might be thought to open up a promising avenue for new therapeutic approaches. If women can, to a great extent, learn to compensate consciously for autism, and if that ability is to a great extent a result of social conditioning, then in principle one option would be to help male autism sufferers achieve the same thing through applying similar socialisation. Although camouflaging evidently has its downsides, it might still be a trick worth learning. I doubt if it is as simple as that, though; an awful lot of regimes have been tried out on male sufferers and to date I don’t believe the levels of success have been that great; on the other hand it may be that pervasive, ubiquitous social pressure is different in kind from training or special regimes and not so easily deployed therapeutically. The only way might be to bring up autistic boys as girls…

If we take the other view, that women’s ability or predisposition to camouflage is not the result of social conditioning, then we might be inclined to look for genuine ‘hard-wired’ differences in the operation of male and female consciousness. One route to take from there would be to relate the difference to the suggested ability of women (already a cornerstone of gender-related folk psychology) to multi-task more effectively, dividing conscious attention without significant loss to the efficiency of each thread. Certainly one would suppose that having to pay constant attention to detailed social cues would have an impact on the ability to pay attention to other things, but so far as I know there is no evidence that women with camouflaged autism are any worse at paying attention generally than anyone else. Perhaps this is a particular skill of the female mind, while if men pay that much attention to social cues, their ability to listen to what is actually being said is sensibly degraded?

The speculative ice out here is getting thinner than I like, so I’ll leave it there; but in all seriousness, any study that takes us forward in this area, as this one seems to do, must be very warmly welcomed.

Chess problem computers can’t solve?

A somewhat enigmatic report in the Daily Telegraph says that this problem has been devised by Roger Penrose, who says that chess programs can’t solve it but humans can get a draw or even a win.

I’m not a chess buff, but it looks trivial. Although Black has an immensely powerful collection of pieces, they are all completely bottled up and immobile, apart from three bishops. Since these are all on white squares, the White king is completely safe from them if he stays on black squares. Since the white pawns fencing in Black’s pieces are all on black squares, the bishops can’t do anything about them either. It looks like a drawn position already, in fact.

I suppose Penrose believes that chess computers can’t deal with this because it’s a very weird situation which will not be in any of their reference material. If they resort to combinatorial analysis the huge number of moves available to the bishops is supposed to render the problem intractable, while the computer cannot see the obvious consequences of the position the way a human can.

I don’t know whether it’s true that all chess programs are essentially that stupid, but it is meant to buttress Penrose’s case that computers lack some quality of insight or understanding that is an essential property of human consciousness.

This is all apparently connected with the launch of a new Penrose Institute, whose website is here, but appears to be incomplete. No doubt we’ll hear more soon.