Posts tagged ‘consciousness’

Watch the full video with related content here.

The discussion following the presentations I featured last week.

Is awareness an on/off thing? Or is it on a sort of dimmer switch? A degree of variation seems to be indicated by the peculiar vagueness of some mental content (or is that just me?). Fazekas and Overgaard argue that even a dimmer switch is far too simple; they suggest that there are at least four ways in which awareness can be graduated.

First, interestingly, we can be aware of things to different degrees on different levels. Our visual system identifies dark and light, then at a higher level identifies edges, at a higher level again sees three dimensional shapes, and higher again, particular objects. We don’t have to be equally clear at all levels, though. If we’re looking at the dog, for example, we may be aware that in the background is the cat, and a chair; but we are not distinctly aware of the cat’s whiskers or the pattern on the back of the chair. We have only a high-level awareness of cat and chair. It can work the other way, too; people who suffer from face-blindness, for example, may be well able to describe the nose and other features of someone presented to them without recognising that the face belongs to a friend.

That is certainly not the only way our awareness can vary, though; it can also be of higher or lower quality. That gives us a nice two-dimensional array of possible vagueness; job done? Well no, because Fazekas and Overgaard think quality varies in at least three ways.

  • Intensity
  • Precision
  • Temporal Stability

So in fact we have a matrix of vagueness which has four dimensions, or perhaps I should say three plus one.

The authors are scrupulous about explaining how intensity, precision, and temporal stability probably relate to neuronal activity, and they are quite convincing; if anything I should have liked a bit more discussion of the phenomenal aspects – what are some of these states of partially-degraded awareness actually like?

What they do discuss is what mechanism governs or produces their three quality factors. Intensity, they think, comes from the allocation of attention. When we really focus on something, more neuronal resources are pulled in and the result is in effect to turn up the volume of the experience a bit.

Precision is also connected with attention; paying attention to a feature produces sharpening and our awareness becomes less generic (so instead of merely seeing something is red, we can tell whether it is magenta or crimson, and so on). This is fair enough, but it does raise a small worry as to whether intensity and precision are really all that distinct. Mightn’t it just be that enhancing the intensity of a particular aspect of our awareness makes it more distinct and so increases precision? The authors acknowledge some linkage.

Temporal stability is another matter. Remember we’re not talking here about whether the stimulus itself is brief or sustained but whether our awareness is constant. This kind of stability, the authors say, is a feature of conscious experience rather than unconscious responses and depends on recurrence and feedback loops.

Is there a distinct mechanism underlying our different levels of awareness? The authors think not; they reckon that it is simply a matter of what quality of awareness we have on each level. (I suppose we have to allow for the possibility, if not the certainty, that some levels will be not just poor quality but actually absent; I don’t think I’m typically aware of all possible levels of interpretation when considering something.)

So there is the model in all its glory; but beware; there are folks around who argue that in fact awareness is actually not like this, but in at least some cases is an on/off matter. Some experiments by Asplund were based on the fact that if a subject is presented with two stimuli in quick succession, the second is missed. As the gap increases, we reach a point where the second stimulus can be reported; but subjects don’t see it gradually emerging as the interval grows; rather, with one gap it’s not there, while with a slightly larger one, it is.

Fazekas and Overgaard argue that Asplund failed to appreciate the full complexity of the graduation that goes on; his case focuses too narrowly on precision alone. In that respect there may be a sharp change, but in terms of intensity or temporal stability, and hence in terms of awareness overall, they think there would be graduation.

A second rival theory, which the authors call the ‘levels of processing view’ or LPV, suggests that while awareness of low-level items is graduated, at a high level you’re either aware or not. The experiments here used colours to represent low-level features and numbers for high-level ones, and found that while there was graduated awareness of the former, with the latter you either got the number or you didn’t.

Fazekas and Overgaard argue that this is because colours and numbers are not really suitable for this kind of comparison. Red and blue are on a continuous scale and one can morph gradually into the other; the numeral ‘7’ does not gradually change into ‘9’. This line of argument made sense to me, but on reflection caused me some problems. If it is true that numerals are just distinct in this way, that seems to concede a point that makes the LPV case intuitively appealing: in some circumstances things just are on/off. It did seem true at first sight, but when I thought back to experiences in the optician’s chair, I could remember cases where the letter I was trying to read seemed genuinely indeterminate between two or even more alternatives. Now though, if my first thought was wrong and numerals are not in fact inherently distinct in this way, that seems to undercut Fazekas and Overgaard’s counter-argument.

On the whole the more complex model seems to provide better explanatory resources and I find it broadly convincing, but I wonder whether a reasonable compromise couldn’t be devised, allowing that for certain experiences there may be a relatively sharp change of awareness, with only marginal graduation? Probably I have come up with a vague and poorly graduated conclusion…

Watch the full video with related content here.

What is the problem about consciousness? A Royal Institution video with interesting presentations (part 2 another time).

Anil Seth presents a striking illusion and gives an optimistic view of the ability of science to tackle the problem; or maybe we just get on with the science anyway? The philosophers may ask good questions, but their answers have always been wrong.

Barry Smith says that’s because when the philosophers have sorted a subject out it moves over into science. One problem is that we tend to miss consciousness itself and think about its contents instead. Isn’t there a problem: being aware of your own awareness changes it? I feel pain in my body, but could consciousness be in my ankle?

Chris Frith points out that actually only a small part of our mental activity has anything to do with consciousness, and in fact there is evidence to show that many of the things we think are controlled by conscious thought really are not: a vindication of Helmholtz’s idea of unconscious inference. Thinking about your thinking messes things up? Try asking someone how happy they feel – guaranteed to make them less happy immediately…

Set the Hard Problem aside and tackle the real problem instead, says Anil K Seth in a thought-provoking piece in Aeon. I say thought-provoking; overall I like the cut of his jib and his direction of travel: most of what he says seems right. But somehow his piece kept stimulating the cognitive faculty in my brain that generates quibbles.

He starts, for example, by saying that in philosophy a Cartesian debate over mind-stuff and matter-stuff continues to rage. Well, that discussion doesn’t look particularly lively or central to me. There are still people around who would identify as dualists in some sense, no doubt, but by and large my perception is that we’ve moved on. It’s not so much that monism won, more that that entire framing of the issue was left behind. ‘Dualist’, it seems to me, is now mainly a disparaging term applied to other people, and whether he means it or not, Seth’s remark comes across as having a tinge of that.

Indeed, he proceeds to say that David Chalmers’ hard/easy problem distinction is inherited from Descartes. I think he should show his working on that. The Hard Problem does have dualist answers, but it has non-dualist ones too. It claims there are things not accounted for by physics, but even monists accept that much. Even Seth himself surely doesn’t think that people who offer non-physics accounts of narrative or society must therefore be dualists?

Anyway, quibbling aside for now, he says we’ll get on better if we stop worrying about why consciousness exists at all and try instead to relate its features to the underlying biological processes. That is perfectly sensible. It is depressingly possible that the Hard Problem will survive every advance in understanding, even beyond the hypothetical future point when we have a comprehensive account of how the mind works. After all, we’re required to find it conceivable that my zombie twin could be exactly like me without having real subjective experience, so it must be possible that we could come to understand his mind totally without having any grasp on my qualia.

How shall we set about things, then? Seth proposes distinguishing between level of consciousness, contents, and self. That feels an uncomfortable list to me; perhaps this is uncharacteristically tidy-minded, but I like all members of a list to be exclusive and similar, whereas, as Seth confirms, the self here is to be seen as part of the contents. To me, it’s a bit as if he suggested that in order to analyse a performance of a symphony we should think about loudness, the work being performed, and the tune being played. That analogy points to another issue; ‘loudness’ is a legitimate quality of orchestral music, but we need to remember that different instruments may play at different volumes and that the music can differ in quality in lots of other important ways. Equally, the level of consciousness is not really as simple as ten places on a dial.

Ah, but Seth recognises that. He distinguishes between consciousness and wakefulness. For consciousness it’s not the number of neurons involved or their level of activity that matters. It turns out to be complexity: findings by Massimini show that pulses sent into a brain in dreamless sleep produce simple echoes; sent into a conscious brain (whose overall level of activity may not be much greater) they produce complex reflected and transformed patterns. Seth hopes that this can be the basis of a ‘conscious meter’, the value of which for certain comatose patients is readily apparent. He is pretty optimistic generally about how much light this might shed on consciousness, rather as thermometers transformed…

“our physical understanding of heat (as average molecular kinetic energy)”

(Unexpectedly, a physics quibble; isn’t that temperature? Heat is transferred energy, isn’t it?)
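The kind of measure involved can be given a toy sketch. The real index compresses the brain’s electrical response to the pulse and asks how incompressible it is; everything below – the binary strings standing in for responses, the function name – is invented for illustration and is not Massimini’s actual method:

```python
import random

def lz76_complexity(s: str) -> int:
    """Count the phrases in a Lempel-Ziv (1976) parse of s: each new
    phrase is grown while it still occurs earlier in the string, so
    repetitive strings parse into very few phrases."""
    i, phrases, n = 0, 0, len(s)
    while i < n:
        length = 1
        # extend the phrase while it already appears before its last character
        while i + length <= n and s[i:i + length] in s[0:i + length - 1]:
            length += 1
        phrases += 1
        i += length
    return phrases

# A repetitive 'echo', like the simple response of a brain in dreamless sleep...
echo = "10" * 64
# ...versus an irregular pattern standing in for a waking brain's
# reflected and transformed activity.
random.seed(0)
reverberation = "".join(random.choice("01") for _ in range(128))

print(lz76_complexity(echo))           # 3 – compresses almost completely
print(lz76_complexity(reverberation))  # much higher
```

Same length of signal, very different compressibility – which is the intuition behind treating complexity, rather than sheer quantity of activity, as the marker of consciousness.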

Of course a complex noise is not necessarily a symphony and complex brain activity need not be conscious; Seth thinks it needs to be informative (whatever that may mean) and integrated. This of course links with Tononi’s Integrated Information Theory, but Seth sensibly declines to go all the way with that; to say that consciousness just is integrated information seems to him to be going too far; yielding again to the desire for deep, final yet simple answers, a search which just leads to trouble.

Instead he proposes, drawing on the ideas of Helmholtz, that we see the brain as a prediction machine. He draws attention to the importance of top-down influences on perception; that is, instead of building up a picture from the elements supplied by the senses, the brain often guesses what it is about to see and hear, and presents us with that unless contradicted by the senses – sometimes even if it is contradicted by the senses. This is hardly new (obviously not if it comes from Helmholtz (1821-1894)), but it does seem that Seth’s pursuit of the ‘real problem’ is yielding some decent research.

Finally Seth goes on to talk a little about the self. Here he distinguishes between bodily, perspectival, volitional, narrative and social selves. I feel more comfortable with this list than the other one – except that these are all deemed to be merely experienced. You can argue that volition is merely an impression we have; that we just think certain things are under our conscious control – but you have to argue for it. Just including that implicitly in your categorisation looks a bit question-begging.

Ah, but Seth does go on to present at least a small amount of evidence. He talks first about a variant of the rubber hand experiment, in which said item is made to ‘feel’ like your own hand: it seems that making a virtual hand flash in time with the subject’s pulse enhances the impression of ownership (compared with a hand that flashes out of synch), which is indeed interesting. And he mentions that the impression of agency we have is reinforced when our predictions about the result are borne out. That may be so, but the fact that our impression of agency can be influenced by other factors doesn’t mean our agency is merely an impression – any more than a delusion about a flashing hand proves we don’t really have a hand.

But honestly, quibbles aside this is sensible stuff. Maybe I should give all that Hard Problem stuff a rest…

The way we think about consciousness is just wrong, it seems.

First, says Markus Gabriel, we posit this bizarre entity the Universe, consisting of everything, and then ask whether consciousness is part of it; this is no way to proceed. In fact ‘consciousness’ covers many different things; once correctly analysed, many of them are unproblematic. (The multilingual Gabriel suggests in passing that there is no satisfactory German equivalent to ‘mind’ and, for that matter, no good English equivalent of ‘Geist’.) He believes there is more mystery about how, for example, the brain deals with truth.

Ray Brassier draws a distinction between knowing what consciousness is and knowing what it means. A long tradition suggests that because we have direct acquaintance with consciousness our impressions are authoritative and we know its nature. In fact the claims about phenomenal experience made by Chalmers and others are hard to justify. I can see, he says, that there are phenomenal qualities – being brown, or square – attached to a table, but the idea that phenomenal things are going on in my mind separate from the table seems to make no sense.

Eva Jablonka takes a biological and evolutionary view. Biological stuff is vastly more complex than non-biological stuff and requires different explanations. She defends Chalmers’s formulation of the problem, but not his answers; she is optimistic that scientific exploration can yield enlightenment. She cites the interesting case of Daniel Kish, whose eyes were removed in early infancy but who has developed echolocation skills to the point where he can ride a bike and find golf balls – it seems his visual cortex has been recruited for the purpose. Surely, says Jablonka, he must have a somewhat better idea of what it is like to be a bat?

There’s a general agreement that simplistic materialism is outdated and that a richer naturalism is required (not, of course, anything like traditional dualism).

Is downward causation the answer? Does it explain how consciousness can be a real and important part of the world without being reducible to physics? Sean Carroll had a sensible discussion of the subject recently.

What does ‘down’ even mean here? The idea rests on the observation that the world operates on many distinct levels of description. Fluidity is not a property of individual molecules but something that ‘emerges’ when certain groups of them get together. Cells together make up organisms that in turn produce ecosystems. Often enough these levels of description deal with progressively larger or smaller entities, and we typically refer to the levels that deal with larger entities as higher, though we should be careful about assuming there is one coherent set of levels of description that fit into one another like Russian dolls.

Usually we think that reality lives on the lowest level, in physics. Somewhere down there is where the real motors of the universe are driving things. Let’s say this is the level of particles, though probably it is actually about some set of entities in quantum mechanics, string theory, or whatever set of ideas eventually proves to be correct. There’s something in this view because it’s down here at the bottom that the sums really work and give precise answers, while at higher levels of description the definitions are more approximate and things tend to be more messy and statistical.

Now consciousness is quite a high-level business. Particles make proteins that make cells that make brains that generate thoughts. So one reductionist point of view would be that really the truth is the story about particles: that’s where the course of events is really decided, and the mental experiences and decisions we think are going on in consciousness are delusions, or at best a kind of poetic approximation.

It’s not really true, however, that the entities dealt with at higher levels of description are not real. Fluidity is a perfectly real phenomenon, after all. For that matter the Olympics were real, and cannot be discussed in terms of elementary particles. What if our thoughts were real and also causally effective at lower levels of description? We find it easy to think that the motion of molecules ‘caused’ the motion of the football they compose, but what if it also worked the other way? Then consciousness could be real and effectual within the framework of a sufficiently flexible version of physics.

Carroll doesn’t think that really washes, and I think he’s right. It’s a mistake to think that relations between different levels of description are causal. It isn’t that my putting the beef and potatoes on the table caused lunch to be served; they’re the same thing described differently. Now perhaps we might allow ourselves a sense in which things cause themselves, but that would be a strange and unusual sense, quite different from the normal sense in which cause and effect by definition operate over time.

So real downward causality, no: if by talk of downward causality people only mean that real effectual mental events can co-exist with the particle story but on a different level of description, that point is sound but misleadingly described.

The thing that continues to worry me slightly is the question of why the world is so messily heterogeneous in its ontology – why it needs such a profusion of levels of description in order to discuss all the entities of interest. I suppose one possibility is that we’re just not looking at things correctly. When we look for grand unifying theories we tend to look to ever lower levels of description and to the conjectured origins of the world. Perhaps that’s the wrong approach and we should instead be looking for the unimaginable mental perspective that reconciles all levels of description.

Or, and I think this might be closer to it, the fact that there are more things in heaven and earth than are dreamt of in anyone’s philosophy is actually connected with the obscure reason for there being anything. As the world gets larger it gets, ipso facto, more complex and reduction and backward extrapolation get ever more hopeless. Perhaps that is in some sense just as well.

(The picture is actually a children’s puzzle from 1921 – any solutions? You need to know it is called ‘Illustrated Central Acrostic’)  


Interesting piece here reviewing the way some modern machine learning systems are unfathomable. This is because they learn how to do what they do, rather than being set up with a program, so there is no reassuring algorithm – no set of instructions that tells us how they work. In fact the way they make their decisions may be impossible to grasp properly even if we know all the details, because it just exceeds in brute complexity what we can ever get our minds around.

This is not really new. Neural nets that learn for themselves have always been a bit inscrutable. One problem with this is brittleness: when the system fails it may not fail in ways that are safe and manageable, but disastrously. This old problem is becoming more important mainly because new approaches to deep machine learning are doing so well; all of a sudden we seem to be getting a rush of new systems that really work effectively at quite complex real world tasks. The problems are no longer academic.

Brittle behaviour may come about when the system learns its task from a limited data set. It does not understand the data and is simply good at picking out correlations, so sometimes it may pick out features of the original data set that work well within that set, and perhaps even work well on quite a lot of new real world data, but don’t really capture what’s important. The program is meant to check whether a station platform is dangerously full of people, for example; in the set of pictures provided it finds that all it needs to do is examine the white platform area and check how dark it is. The more people there are, the darker it looks. This turns out to work quite well in real life, too. Then summer comes and people start wearing light coloured clothes…
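That failure mode is easy to reproduce in miniature. Here the ‘training’ is just finding a brightness threshold – a deliberately crude stand-in for the shallow correlate a network might latch onto; the numbers, function names, and scenario are all invented for illustration:

```python
def mean_brightness(pixels):
    """Average pixel value (0 = black, 255 = white) of the platform area."""
    return sum(pixels) / len(pixels)

def learn_threshold(examples):
    """Stand-in for training: find the brightness midpoint between crowded
    and empty platforms in the training set. The 'model' never counts
    people at all; it only learns the darkness cue."""
    crowded = [mean_brightness(p) for p, full in examples if full]
    empty = [mean_brightness(p) for p, full in examples if not full]
    return (sum(crowded) / len(crowded) + sum(empty) / len(empty)) / 2

def looks_crowded(pixels, threshold):
    return mean_brightness(pixels) < threshold   # darker => 'crowded'

# Winter training data: crowds in dark coats (~60), empty platform (~200).
winter = [([60] * 100, True), ([200] * 100, False)] * 5
t = learn_threshold(winter)                      # threshold lands at 130.0

print(looks_crowded([60] * 100, t))    # True  – fine on winter-like data
# Summer: a packed platform of light clothing (~180) defeats the cue.
print(looks_crowded([180] * 100, t))   # False – dangerously wrong
```

The cue is genuinely predictive on the data it was learned from, which is exactly why the brittleness stays invisible until the world shifts.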

There are ways to cope with this. We could build in various safeguards. We could make sure we use big and realistic datasets for training or perhaps allow learning to continue in real world contexts. Or we could just decide never to use a system that doesn’t have an algorithm we can examine; but there would be a price to pay in terms of efficiency for that; it might even be that we would have to give up on certain things that can only be effectively automated with relatively sophisticated deep learning methods. We’re told that the EU contemplates a law embodying a right to explanations of how software works. To philosophers I think this must sound like a marvellous new gravy train, as there will obviously be a need to adjudicate what counts as an adequate explanation, a notoriously problematic issue. (I am available as a witness in any litigation for reasonable hourly fees.)

The article points out that the incomprehensibility of neural network-based systems is in some ways really quite like the incomprehensibility of the good old human brain. Why wouldn’t it be? After all, neural nets were based on the brain. Now it’s true that even in the beginning they were very rough approximations of real neurology and in practical modern systems the neural qualities of neural nets are little more than a polite fiction. Still, perhaps there are properties shared by all learning systems?

One reason deep learning may run into problems is the difficulty AI always has in dealing with relevance. The ability to spot relevance no doubt helps the human brain check whether it is learning about the right kind of thing, but it has always been difficult to work out quite how our brains do it, and this might mean an essential element is missing from AI approaches.

It is tempting, though, to think that this is in part another manifestation of the fact that AI systems get trained on limited data sets. Maybe the radical answer is to stop feeding them tailored data sets and let our robots live in the real world; in other words, if we want reliable deep learning perhaps our robots have to roam free and replicate the wider human experience of the world at large? To date the project of creating human-style cognition has been in some sense motivated by mere curiosity (and yes, by the feeling that it would be pretty cool to have a robot pal); are we seeing here the outline of an argument that human-style AGI might actually be the answer to genuine engineering problems?

What about those explanations? Instead of retaining philosophers and lawyers to argue the case, could we think about building in a new module to our systems, one that keeps overall track of the AI and can report the broad currents of activity within it? It wouldn’t be perfect but it might give us broad clues as to why the system was making the decisions it was, and even allow us to delicately feed in some guidance. Doesn’t such a module start to sound like, well, consciousness? Could it be that we are beginning to see the outline of the rationales behind some of God’s design choices?

We’ll never understand consciousness, says Edward Witten. Ashutosh Jogalekar’s post here features a video of the eminent physicist talking about fundamentals; the bit about consciousness starts around 1:10 if you’re not interested in string theory and cosmology. John Horgan has also weighed in with some comments; Witten’s view is congenial to him because of his belief that science may be approaching an end state in which many big issues are basically settled while others remain permanently mysterious. Witten himself thinks we might possibly get a “final theory” of physics (maybe even a form of string theory), but guesses that it would be of a tricky kind, so that understanding and exploring the theory would itself be an endless project, rather the way number theory, which looks like a simple subject at first glance, proves to be capable of endless further research.

Witten, in response to a slightly weird question from the interviewer, declines to define consciousness, saying he prefers to leave it undefined like one of the undefined terms set out at the beginning of a maths book. He feels confident that the workings of the mind will be greatly clarified by ongoing research so that we will come to understand much better how the mechanisms operate. But why these processes are accompanied by something like consciousness seems likely to remain a mystery; no extension of physics that he can imagine seems likely to do the job, including the kind of new quantum mechanics that Roger Penrose believes is needed.

Witten is merely recording his intuitions, so we shouldn’t try to represent him as committed to any strong theoretical position; but his words clearly suggest that he is an optimist on the so-called Easy Problem and a pessimist on the Hard one. The problem he thinks may be unsolvable is the one about why there is “something it is like” to have experiences; what it is that seeing a red rose has over and above the acquisition of mere data.

If so, I think his incredulity joins a long tradition of those who feel intuitively that that kind of consciousness just is radically different from anything explained or explainable by physics. Horgan mentions the Mysterians, notably Colin McGinn, who holds that our brain just isn’t adapted to understanding how subjective experience and the physical world can be reconciled; but we could also invoke Brentano’s contention that mental intentionality is just utterly unlike any physical phenomenon; and even trace the same intuition back to Leibniz’s famous analogy of the mill; no matter what wheels and levers you put in your machine, there’s never going to be anything that could explain a perception (particularly telling given Leibniz’s enthusiasm for calculating machines and his belief that one day thinkers could use them to resolve complex disputes). Indeed, couldn’t we argue that contemporary consciousness sceptics like Dennett and the Churchlands also see an unbridgeable gap between physics and subjective, qualia-having consciousness? The difference is simply that in their eyes this makes that kind of consciousness nonsense, not a mystery.

We have to be a bit wary of trusting our intuitions. The idea that subjective consciousness arises when we’ve got enough neurons firing may sound like the idea that wine comes about when we’ve added enough water to the jar; but the idea that enough ones and zeroes in data registers could ever give rise to a decent game of chess looks pretty strange too.

As those who’ve read earlier posts may know, I think the missing ingredient is simply reality. The extra thing about consciousness that the theory of physics fails to include is just the reality of the experience, the one thing a theory can never include. Of course, the nature of reality is itself a considerable mystery, it just isn’t the one people have thought they were talking about. If I’m right, then Witten’s doubts are well-founded but less worrying than they may seem. If some future genius succeeds in generating an artificial brain with human-style mental functions, then by looking at its structure we’ll only ever see solutions to the Easy Problem, just as we may do in part when looking at normal biological brains. Once we switch on the artificial brain and it starts doing real things, then experience will happen.

We don’t know what we think, according to Alex Rosenberg in the NYT. It’s a piece of two halves, in my opinion; he starts with a pretty fair summary of the sceptical case. It has often been held that we have privileged knowledge of our own thoughts and feelings, and indeed of our own decisions; but the findings of Benjamin Libet about decisions being made before we are aware of them; the phenomenon of blindsight which shows we may go on having visual knowledge we’re not aware of; and many other cases where it can be shown that motives are confabulated and mental content is inaccessible to our conscious, reporting mind; these all go to show that things are much more complex than we might have thought, and that our thoughts are not, as it were, self-illuminating. Rosenberg plausibly suggests that we use on ourselves the kind of tools we use to work out what other people are thinking; but then he seems to make a radical leap to the conclusion that there is nothing else going on.

“Our access to our own thoughts is just as indirect and fallible as our access to the thoughts of other people. We have no privileged access to our own minds. If our thoughts give the real meaning of our actions, our words, our lives, then we can’t ever be sure what we say or do, or for that matter, what we think or why we think it.”

That seems to be going too far.  How could we ever play ‘I spy’ if we didn’t have any privileged access to private thoughts?

“I spy, with my little eye, something beginning with ‘c’”
“Is it ‘chair’?”
“I don’t know – is it?”

It’s more than possible that Rosenberg’s argument has suffered badly from editing (philosophical discussion, even in a newspaper piece, seems peculiarly information-dense; often you can’t lose much of it without damaging the content badly). But it looks as if he’s done what I think of as an ‘OMG bounce’; a kind of argumentative leap which crops up elsewhere. Sometimes we experience illusions: OMG, our senses never tell us anything about the real world at all! There are problems with the justification of true belief: OMG, there is no such thing as knowledge! Or in this case: sometimes we’re wrong about why we did things: OMG, we have no direct access to our own thoughts!

There are in fact several different reasons why we might claim that our thoughts about our thoughts are immune to error. In the game of ‘I spy’, my nominating ‘chair’ just makes it my choice; the content of my thought is established by a kind of fiat. In the case of a pain in my toe, I might argue I can’t be wrong because a pain can’t be false: it has no propositional content, it just is. Or I might argue that certain of my thoughts are unmediated; there’s no gap between them and me where error could creep in, the way it creeps in during the process of interpreting sensory impressions.

Still, it’s undeniable that in some cases we can be shown to adopt false rationales for our behaviour; sometimes we think we know why we said something, but we don’t. By contrast, I think I have occasionally, when very tired, had the experience of hearing coherent and broadly relevant speech come out of my own mouth without its seeming to come from my conscious mind at all. Contemplating this kind of thing does undoubtedly promote scepticism, but what it ought to promote is a keener awareness of the complexity of human mental experience: many-layered, explicit to greater or lesser degrees, partly attended to, partly in a sort of half-light of awareness… There seem to be unconscious impulses; conscious but inexplicit thought; definite thought (which may even be in recordable words); self-conscious thought of the kind where we are aware of thinking while we think… and that is at best the broadest outline of some of the larger architecture.

All of this really needs a systematic and authoritative investigation. Of course, since Plato there have been models of the structure of the mind which separate conscious and unconscious, id, ego and superego: philosophers of mind have run up various theories, usually to suit their own needs of the moment; and modern neurology increasingly provides good clues about how various mental functions are hosted and performed. But a proper mainstream conception of the structure and phenomenology of thought itself seems sadly lacking to me. Is this an area where we could get funding for a major research effort; a Human Phenomenology Project?

It can hardly be doubted that there are things to discover. Recently we were told, if not quite for the first time, that a substantial minority of people have no mental images (although at once we notice that there even seem to be different ways of having mental images). A systematic investigation might reveal that just as we have four blood groups, there are four (or seven) different ways the human mind can work. What if it turned out that consciousness is not a single consistent phenomenon, but a family of four different ones, and that the four tribes have been talking past each other all this time…?

A digital afterlife is likely to be available one day, according to Michael Graziano, albeit not for some time; his piece re-examines the possibility of uploading consciousness, and your own personality, into a computer. I think he does a good job of briefly sketching the formidable difficulties involved in scanning your brain, and scanning so precisely that your individual selfhood could be captured. In fact, he does it so well that I don’t really understand where his ultimate optimism comes from.

To my way of thinking, ‘scan and build’ isn’t even the most promising way of duplicating your brain. One more plausible way would be some kind of future bio-engineering where your brain just grows and divides, rather in the way that single cells do. A neater way would be some sort of hyper-path through space that split you along the fourth spatial dimension and returned both slices to our normal plane. Neither of these options is exactly a feasible working project, but to me they seem closer to being practical than a total scan. Of course neither of them offers the prospect of an afterlife the way scanning does, so they’re not really relevant for Graziano here. He seems to think we don’t need to go down to an atom by atom scan, but I’m not sure why not. Granted, the loss of one atom in the middle of my brain would not destroy my identity, but not scanning to an atomic level generally seems a scarily approximate and slapdash approach to me, given the relevance of certain key molecules in the neural process – something Graziano fully recognises.

If we’re talking about actual personal identity I don’t think it really matters though, because the objection I consider strongest applies even to perfect copies. In thought experiments we can do anything, so let’s just specify that by pure chance there’s another brain nearby that is in every minute detail the same as mine. It still isn’t me, for the banal commonsensical reason that copies are not the original. Leibniz’s Law tells us that if B has exactly the same properties as A, then it is A: but among the properties of a brain is its physical location, so a brain over there is not the same as one in my skull (so in fact I cheated by saying the second brain was the same in every detail but nevertheless ‘nearby’).

Now most philosophers would say that Leibniz is far too strong a criterion of identity when it comes to persons. There have been hundreds of years of discussion of personal identity, and people generally espouse much looser criteria for a person than they would for a stone – from identity of memories to various kinds of physical, functional, or psychological continuity. After all, people are constantly changing: I am not perfectly identical in physical terms to the person who was sitting here an hour ago, but I am still that person. Graziano evidently holds that personal identity must reside in functional or informational qualities of the kind that could well be transferred into a digital form, and he speaks disparagingly of ‘mystical’ theories that see problems with the transfer of consciousness. I don’t know about that; if anyone is hanging on to residual spiritual thinking, isn’t it the people who think we can be ‘taken out of’ our bodies and live forever? The least mystical stance is surely the one that says I am a physical object, and with some allowance for change and my complex properties, my identity works the same as that of any other physical object. I’m a one-off, particular thing and copies would just be copies.

What if we only want a twin, or a conscious being somewhat like me? That might still be an attractive option after all. OK, it’s not immortality but I think without being rampant egotists most of us probably feel the world could stand a few more people like ourselves around, and we might like to have a twin continuing our good work once we’re gone.

That less demanding goal changes things. If that’s all we’re going for, then yes, we don’t need to reproduce a real brain with atomic fidelity. We’re talking about a digital simulation, and as we know, simulations do not reproduce all the features of the thing being simulated – only those that are relevant for the current purpose. There is obviously some problem about saying what the relevant properties are when it comes to consciousness; but if passing the Turing Test is any kind of standard then delivering good outputs for conversational inputs is a fair guide and that looks like the kind of thing where informational and functional properties are very much to the fore.

The problem, I think, is again with particularity. Conscious experience is a one-off thing while data structures are abstract and generic. If I have a particular experience of a beautiful sunset, and then (thought experiments again) I have an entirely identical one a year later, they are not the same experience, even though the content is exactly the same. Data about a sunset, on the other hand, is the same data whenever I read or display it.
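The contrast between generic data and particular things has a loose analogue in programming’s distinction between value equality and object identity. A minimal Python sketch (the ‘sunset’ dictionary and its fields are purely hypothetical, invented for illustration):

```python
import copy

# A hypothetical stand-in for "experience data" – the names are invented.
sunset = {"hue": "crimson", "elevation_deg": 2.5}
replica = copy.deepcopy(sunset)

# The data is generic: both structures carry exactly the same content.
assert replica == sunset
# But as concrete objects they remain distinct particulars.
assert replica is not sunset
```

Two structures can be indistinguishable in content while still being two things, not one – which is roughly the point about the two sunsets.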

We said that a simulation needs to reproduce the relevant aspects of the thing simulated; but in a brain simulation the processes are only represented symbolically, while one of the crucial aspects we need for real experience is particular reality.

Maybe though, we go one level further; instead of simulating the firing of neurons and the functional operation of the brain, we actually extract the program being run by those neurons and then transfer that. Here there are new difficulties: scanning the physical structure of the brain is one thing; working out its function and content is another thing altogether; we must not confuse information about the brain with the information in the brain. Also, of course, extracting the program assumes that the brain is running a program in the first place and not doing something altogether less scrutable and explicit.

Interestingly, Graziano goes on to touch on some practical issues; in particular he wonders how the resources to maintain all the servers are going to be found when we’re all living in computers. He suspects that as always, the rich might end up privileged.

This seems a strange failure of his technical optimism. Aren’t computers going to go on getting more powerful, and cheaper? Surely the machines of the twenty-second century will laugh at this kind of challenge (perhaps literally). If there is a capacity problem, moreover, we can all be made intermittent; if I get stopped for a thousand years and then resume, I won’t even notice. Chances are that my simulation will be able to run at blistering speed, far faster than real time, so I can probably experience a thousand years of life in a few computed minutes. If we get quantum computers, all of us will be able to have indefinitely long lives with no trouble at all, even if our simulated lives include having digital children or generating millions of digital alternates of ourselves, thereby adding to the population. Graziano, optimism kicking back in, suggests that we can grow in understanding and come to see our fleshly life as a mere larval stage before we enter on our true existence. Maybe, or perhaps we’ll find that human minds, after ten billion years (maybe less), exhaust their potential and ultimately settle into a final state; in which case we can just get the computers to calculate that and then we’ll all be finalised, like solved problems. Won’t that be great?

I think that speculations of this kind eventually expose the contrast between the abstraction of data and the reality of an actual life, and dramatise the fact, perhaps regrettable, perhaps not, that you can’t translate one into the other.