Posts tagged ‘personal identity’

In ‘What’s Next? Time Travel and Phenomenal Continuity’, Giuliano Torrengo and Valerio Buonomo argue that our personal identity is about continuity of phenomenal experience, not such psychological matters as memory (the criterion championed by John Locke). They refer to this phenomenal continuity as the ‘stream of consciousness’. I’m not sure that William James, who I believe originated the phrase, would have seen the stream of consciousness as being distinct from the series of psychological states in our minds, but it is a handy label.

To support their case, Torrengo and Buonomo have a couple of thought experiments. The first one involves a couple of imaginary machines. One machine transfers the ‘stream of consciousness’ from one person to another while leaving the psychology (memories, beliefs, intentions) behind; the other does the reverse, moving psychology but not phenomenology. Torrengo and Buonomo argue that having your opinions, beliefs and intentions changed while the stream of consciousness remained intact would be akin to a thorough brainwashing. Your politics might suddenly change, but you would still be the same person. Contrariwise, if your continuity of experience moved over to a different body, it would feel as if you had gone with it.

That is plausible enough, but there are undoubtedly people who would refuse to accept it, because they would deny that this separation of the phenomenal from the psychological is possible, or crucially, even conceivable. This might be because they think the two are essentially identical, or because they think phenomenal experience arises directly out of psychology. Some would probably deny that phenomenal experience in this sense even exists.

There is a bit of scope for clarification about what variety of phenomenal experience Torrengo and Buonomo have in mind. At one point they speak of it as including thought, which sounds sort of psychological to me. By invoking machines, their thought experiment shows that their stream of consciousness is technologically tractable, not the kind of slippery qualic experience which lies outside the realm of physics.

Still, thought experiments don’t claim to be proofs; they appeal to intuition and introspection, and with some residual reservations, Torrengo and Buonomo seem to have one that works on that level. They consider three objections. The first complains that we don’t know how rich the stream of consciousness must be in order to be the bearer of identity. Perhaps if it becomes too attenuated it will cease to work? This business of a minimum richness seems to emerge out of the blue, and in fact Torrengo and Buonomo dismiss it as a point which affects all ‘mentalist’ theories. The second objection is a clever one; it says we can only identify a stream of consciousness in relation to a person in the first place, so using it as a criterion of personal identity begs the question. Torrengo and Buonomo essentially deny that there needs to be an experiencing subject over and above the stream of consciousness. The third challenge arises from gaps; if identity depends on continuity, then what happens when we fall asleep and experience ceases? Do we acquire a new identity? Here it seems Torrengo and Buonomo fall back on a defence used by others: that strictly speaking it is the continuity of the capacity for a given stream of consciousness that matters. I think a determined opponent might press further attacks on that.

Perhaps, though, the more challenging and interesting thought experiment is the second, involving time travel. Torrengo is the founder of the Centre for Philosophy of Time in Milan, and has a substantial body of work on the experience of time and related matters, so this is his home turf in a sense. The thought experiment is quite simple; Lally invents a time machine and uses it to spend a day in sixties London. There are two ways of ordering her experience. One is the way she would see it: her earlier life, the time trip, her later life. The other is according to ‘objective’ time: she appears in old London Town and then vanishes; much later lives her early life, then is absent for a short while and finally lives her later life. These can’t both be right, suggest Torrengo and Buonomo, and so it must surely be that her experience goes off on the former course while her psychology goes the other way.

This doesn’t make much sense to me, so perhaps I have misunderstood. Certainly there are two time lines, but Lally surely follows one and remains whole? It isn’t the case that when she is in sixties London she lacks intentions or beliefs, having somehow left those behind. Torrengo and Buonomo almost seem to think that is the case; they say it is possible to imagine her in sixties London not remembering who she is. Who knows, perhaps time machines do work like that, but if so we’re running into one of the methodological weaknesses of thought experiments; if you assume something impossible like time travel to begin with, it’s hard to have strong intuitions about what follows.

At the end of the day I’m left with a sceptical feeling not about Torrengo and Buonomo’s ideas in particular but about the whole enterprise of trying to reduce or analyse the concept of personal identity. It is, after all, a particular case of identity, and wouldn’t identity be a good candidate for being one of those ‘primitive’ ideas that we just have to start with? I don’t know; or perhaps I should just say there is a person who doesn’t know, whose identity I leave unprobed.

The new Blade Runner film has generated fresh interest in the original film; over on IAI Helen Beebee considers how it nicely illustrates the concept of ‘q-memories’.

This relates to the long-established philosophical issue of personal identity; what makes me me, and what makes me the same person as the one who posted last week, or the same person as that child in Bedford years ago? One answer which has been a leading contender at least since Locke is memory; my memories together constitute my identity.

Memories are certainly used as a practical way of establishing identity, whether it be in probing the claims of a supposed long-lost relative or just testing your recall of the hundreds of passwords modern life requires. It is sort of plausible that if all your memories were erased you would become a new person with a fresh start; there have been cases of people who lost decades of memory and underwent personality change, identifying with their own children more readily than their now wrinkly-seeming spouses.

There are various problems with memory as a criterion of identity, though. One is the point that it seems to be circular. We can’t use your memories to validate your identity because in accepting them as your memories we are already implicitly taking you to be the earlier person they come from. If they didn’t come from that person, they aren’t really memories. To get round this objection Shoemaker and Parfit adopted the concept of quasi- or q-memories. Q-memories are like memories but need not relate to any experience you ever had. That, of course, is too loose, allowing delusions to be used as criteria of identity, so it is further specified that q-memories must relate to an experience someone had, and must have been acquired by you in an appropriate way. The appropriate ways are ones that causally relate to the original experience in a suitable fashion, so that it’s no good having q-memories that just happen to match some of King Charles’s. You don’t have to be King Charles, but the q-memories must somehow have got out of his head and into yours through a proper causal sequence.

This is where Blade Runner comes in, because the replicant Rachael appears to be a pretty pure case of q-memory identity. All of her memories, except the most recent ones, are someone else’s; and we presume they were duly copied and implanted in a way that provides the sort of causal connection we need.

This opens up a lot of questions, some of which are flagged up by Beebee. But what about q-memories? Do they work? We might suspect that the part about an appropriate causal connection is a weak spot. What’s appropriate? Don’t Shoemaker and Parfit have to steer a tricky course here between the Scylla of weird results if their rules are too loose, and the Charybdis of bringing back the circularity if they are too tight? Perhaps, but I think we have to remember that they don’t really want to do anything very radical with q-memories; really you could argue it’s no more than a terminological specification, giving them licence to talk of memories without some of the normal implications.

In a different way the case of Rachael actually exposes a weak part of many arguments about memory and identity; the easy assumption that memories are distinct items that can be copied from one mind to another. Philosophers, used to being able to specify whatever mad conditions they want for their thought-experiments, have been helping themselves to this assumption for a long time, and the advent of the computational metaphor for the mind has done nothing to discourage them. It is, however, almost certainly a false assumption.

At the back of our minds when we think like this is a model of memory as a list of well-formed propositions in some regular encoding. In fact, though, much of what we remember is implicit; you recall that zebras don’t wear waistcoats, though it’s completely implausible that that fact was recorded anywhere in your brain explicitly. There need be nothing magic about this. Suppose we remember a picture; how many facts does the picture contain? We can instantly come up with an endless list of facts about the relations of items in the picture, but none were encoded as propositions. Does the Mona Lisa have her right hand over her left, or vice versa? You may never have thought about it, but you can easily recall which way it is. In a computer the picture might be encoded as a bitmap; in our brain we don’t really know, but plausibly it might be encoded as a capacity to replay certain neural firing sequences, namely those that were caused by the original experience. If we replay the experience neurally, we can sort of have the experience again and draw new facts from it the way we could from summoning up a picture; indeed that might be exactly what we are doing.
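The point that a stored representation can yield facts never explicitly recorded can be made concrete with a toy sketch. Here is a minimal Python illustration (the pixel grid and the question asked of it are invented for the example): the only thing stored is a bitmap, yet a proposition like ‘the dark shape is on the left’ can be derived on demand by re-inspecting it, much as recalling a picture lets us read off facts we never stored as propositions.

```python
# Toy example: the "memory" is just a pixel grid (1 = dark, 0 = background).
# No proposition about where the dark shape sits is stored anywhere.
memory = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]

def blob_is_on_left(grid):
    """Derive an implicit fact by inspecting the grid: is the dark
    area mostly in the left half? This fact was never encoded as a
    proposition; it is computed fresh from the stored image."""
    width = len(grid[0])
    left = sum(row[c] for row in grid for c in range(width // 2))
    right = sum(row[c] for row in grid for c in range(width // 2, width))
    return left > right

print(blob_is_on_left(memory))  # True
```

Endlessly many such questions could be answered from the same small grid, which is the sense in which a picture ‘contains’ an open-ended list of facts without listing any of them.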

But my neurons are not wired up like yours, and it is vanishingly unlikely that we could identify direct equivalents of specific neurons between brains, let alone whole firing sequences. My memories are recorded in a way that is specific to my brain, and they cannot be read directly across into yours.

Of course, replicants may be quite different. It’s likely enough that their brains, however they work, are standardised and perhaps use a regular encoding which engineers can easily read off. But if they work differently from human brains, then it seems to follow that they can’t have the same memories; to have the same memories they would have to be an unbelievably perfect copy of the ‘donor’ brain.

That actually means that memories are in a way a brilliant criterion of personal identity, but only in a fairly useless sense.

However, let me briefly put forward a completely different argument, pointing in another direction. We cannot upload memories, but we know that we can generate false ones by talking to subjects or presenting fake evidence. What does that tell us about memories? I submit it suggests that memories are in essence beliefs: beliefs about what happened in the past. Now we might object that there is typically some accompanying phenomenology. We don’t just remember that we went to the mall, we remember a bit of what it looked like, and other experiential details. But I claim that our minds readily furnish that accompanying phenomenology through confabulation, given the belief, and in fact that a great deal of the phenomenological dressing of all memories, even true ones, is actually confected.

But I would further argue that the malleability of beliefs means that they are completely unsuitable as criteria of identity; it follows that memories are similarly unsuitable, so we have been on the wrong track throughout. (Regular readers may know that in fact I subscribe to a view regarded by most as intolerably crude; that human beings are physical objects like any other and have essentially the same criteria of identity.)
Not according to Keith B. Wiley and Randal A. Koene. They contrast two different approaches to mind uploading: in the slow version, neurons or some other tiny component are replaced one by one with a computational equivalent; in the quick, the brain is frozen, scanned, and reproduced in a suitable computational substrate. Many people feel that the slow way is more likely to preserve personal identity across the upload, but Wiley and Koene argue that it makes no difference. Why does the neuron replacement have to be slow? Do we have to wait a day between each neuron switch? Hard to see why – why not just do each switch as quickly as feasible? Putting aside practical issues (we have to do that a lot in this discussion), why not throw a single switch that replaces all the neurons in one go? Then if we accept that, how is it different from a destructive scan followed immediately by creation of the computational equivalent (which, if we like, can be exactly the same as the replacement we would have arrived at by the other method)? If we insist on a difference, argue Wiley and Koene, then somewhere along the spectrum of increasing speed there must be a place where preservation of identity switches abruptly to loss of identity; this is quite implausible, and there are no reasonable arguments to suggest where this maximum speed should lie.

One argument for the difference comes from non-destructive scanning. Wiley and Koene assume that the scanning process in the rapid transfer is destructive; but if it were not, the original brain would continue on its way, and there would be two versions of the original person. Equally, once the scanning is complete there seems to be no reason why multiple new copies could not be generated. How can identity be preserved if we end up with multiple versions of the original? Wiley and Koene believe that once we venture into this area we need to expand our concept of identity to include the possibility of a single identity splitting, so for them this is not a fatal objection.

Perhaps the problem is not so much the speed in itself as the physical separation? In the fast version the copy is created some distance away from the original, whereas in gradual replacement the new version occupies essentially the same space as the original – might it be this physical difference which gives rise to differing intuitions about the two methods? Wiley and Koene argue that even in the case of gradual replacement, there is a physical shift. The replacement neuron cannot occupy exactly the same space as the one it is to replace, at least not at the moment of transfer. The spatial difference may be a matter of microns rather than metres, but here again, why should that make a difference? As with speed, are we going to fix on some arbitrary limit at which identity ceases to be preserved, and why should it lie there?

I think Wiley and Koene don’t do full justice to the objection here. I don’t think it really rests on physical separation; it implicitly rests on continuity. Wiley and Koene dismiss the idea that a continuous stream of consciousness is required to preserve identity, but it can be defended. It rests on the idea that personal identity resides not in the data or the function in general, but a specific instance in particular. We might say that I as a person am not equivalent to SimPerson V – I am equivalent to a particular game of SimPerson V, played on a particular occasion. If I reproduce that game exactly on another occasion, it isn’t me, it’s a twin.

Now the gradual replacement scenario arguably maintains that kind of identity. The new, artificial neurons enter an ongoing live process and become part of it, whereas in the scan and create process the brain is necessarily stopped, translated into data, and then recreated. It’s neither the speed nor the physical separation that disrupts the preservation of the identity: it’s the interruption.

Can that be right though – is merely stopping someone enough to disrupt their identity? What if I were literally frozen in some icy accident, so that my brain flat-lined, and then restored and brought back to life? Are we forced to say that the person after the freezing is a twin, not the same? That doesn’t seem right. Perhaps brute physical continuity has some role to play after all; perhaps the fact that when I’m frozen and brought back it’s the same neurons that are firing helps somehow to sustain the identity of the process over the stoppage and preserve my identity?

Or perhaps Wiley and Koene are right after all?