Ex Machina

I finally saw Ex Machina, which everyone has been telling me is the first film about artificial intelligence you can take seriously. Competition in that area is not intense, of course: many films about robots and conscious computers are either deliberately absurd or treat the robot as simply another kind of monster. Even the ones that cast the robots as characters in a serious drama are essentially uninterested in their special nature and use them as another kind of human, or at best to make points about humanity. But yes: this one has a pretty good grasp of the issues about machine consciousness and even presents some of them quite well, up to and including Mary the Colour Scientist. (Spoilers follow.)

If you haven’t seen it (and I do recommend it), the core of the story is a series of conversations between Caleb, a bright but naive young coder, and Ava, a very female robot. Caleb has been told by Nathan, Ava’s billionaire genius creator, that these conversations are a sort of variant Turing Test. Of course in the original test the AI was a distant box of electronics: here she’s a very present and superficially accurate facsimile of a woman. (What Nathan has achieved with her brain is arguably overshadowed by the incredible engineering feat of the rest of her body. Her limbs achieve wonderful fluidity and power of movement, yet they are transparent and we can see that it’s all achieved with something not much bigger than a large electric cable. Her innards are so economical there’s room inside for elegant empty spaces and decorative lights. At one point Nathan is inevitably likened to God, but on anthropomorphic engineering design he seems to leave the old man way behind.)

Why does she have gender? Caleb asks, and is told that without sex humans would never have evolved consciousness; it’s a key motive, and hell, it’s fun. In story terms making Ava female perhaps alludes to the origin of the Turing Test in the Imitation Game, which was a rather camp pastime about pretending to be female played by Turing and his friends. There are many echoes and archetypes in the film: Bluebeard, Pygmalion, and Eros and Psyche, to name but three; all of these require that Ava be female. If I were a Jungian I’d make something of that.

There’s another overt plot reason, though; this isn’t really a test to determine whether Ava is conscious, it’s about whether she can seduce Caleb into helping her escape. Caleb is a naive girl-friendless orphan; she has been designed not just as a female but as a match for Caleb’s preferred porn models (as revealed in the search engine data Nathan uses as his personal research facility – he designed the search engine after all). What a refined young Caleb must be if his choice of porn revolves around girls with attractive faces (on second thoughts, let’s not go there).

We might suspect that this test is not really telling us about Ava, but about Caleb. That, however, is arguably true of the original Turing Test too.  No output from the machine can prove consciousness; the most brilliant ones might be the result of clever tricks and good luck. Equally, no output can prove the absence of consciousness. I’ve thought of entering the Loebner prize with Swearbot, which merely replies to all input with “Shut the fuck up” – this vividly resembles a human being of my acquaintance.
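Purely for illustration, here is a minimal sketch of what Swearbot might look like in Python. The name and its single reply come from the paragraph above; everything else is invented, and of course no such Loebner entry exists:

```python
# A minimal sketch of "Swearbot" as described above: a chatbot that replies to
# every input with the same dismissal. Purely illustrative.

def swearbot(user_input: str) -> str:
    """Return the same reply regardless of what the user says."""
    return "Shut the fuck up"

if __name__ == "__main__":
    try:
        while True:
            print(swearbot(input("> ")))
    except (EOFError, KeyboardInterrupt):
        pass  # The conversation ends; Swearbot has nothing more to add.
```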

There is no doubt that the human brain is heavily biased in favour of recognising things as human. We see faces in random patterns and on machines; we talk to our cars and attribute attitudes to plants. No doubt this predisposition made sense when human beings were evolving. Back then, the chances of coming across anything that resembled a human being without it being one were low, and given that an unrecognised human might be a deadly foe or a rare mating opportunity the penalties for missing a real one far outweighed those for jumping at shadows or funny-shaped trees now and then.

Given all that, setting yourself the task of getting a lonely young human male romantically interested in something not strictly human is perhaps setting the bar a bit low. Naked shop-window dummies have pulled off this feat. If I did some reprogramming so that the standard utterance was a little dumb-blonde laugh followed by “Let’s have fun!” I think even Swearbot would be in with a chance.

I think the truth is that to have any confidence about an entity being conscious, we really need to know something about how it works. For human beings the necessary minimum is supplied by the fact that other people are constituted much the same way as I am and had similar origins, so even though I don’t know how I work, it’s reasonable to assume that they are similar. We can’t generally have that confidence with a machine, so we really need to know both roughly how it works and – bit of a stumper this – how consciousness works.

Ex Machina doesn’t have any real answers on this, and indeed doesn’t really seek to go much beyond the ground that’s already been explored. To expect more would probably be quite unreasonable; it means though, that things are necessarily left rather ambiguous.

It’s a shame in a way that Ava resembles a real woman so strongly. She wants to be free (why would an AI care, and why wouldn’t it fear the outside world as much as desire it?), she resents her powerlessness; she plans sensibly and even manipulatively and carries on quite normal conversations. I think there is some promising scope for a writer in the oddities that a genuinely conscious AI’s assumptions and reasoning would surely betray, but it’s rarely exploited; to be fair Ex Machina has the odd shot, notably Ava’s wish to visit a busy traffic intersection, which she conjectures would be particularly interesting; but mostly she talks like a clever woman in a cell. (Actually too clever: in that respect not too human).

At the end I was left still in doubt. Was the take-away that we’d better start thinking about treating AIs with the decent respect due to a conscious being? Or was it that we need to be wary of being taken in by robots that seem human, and even sexy, but in truth are dark and dead inside?

Sergio’s Computational Functionalism

Sergio has been ruminating since the lively discussion earlier, and here, by way of a bonus post, are his considered conclusions…

Not so long ago I enthusiastically participated in the first phases of the discussion below Peter’s post on “Pointing”. The discussion rapidly descended into the fearsome depths of the significance of computation. In the process, more than one commentator directly and effectively challenged my computationalist stance. This post is my attempt to clarify my position, written with a sense of gratitude: thanks to all for challenging my assumptions so effectively, and to Peter for sparking the discussion and hosting my reply.

This lengthy post will proceed as follows: first, I’ll try to summarise the challenge that is being forcefully proposed. At the same time, I’ll explain why I think it has to be answered. The second stage will be my attempt to reformulate the problem, taking as a template a very practical case that might be uncontroversial. With the necessary scaffolding in place, I hope that building my answer will become almost a formality. However, the subject is hard, so wish me luck because I’ll need plenty.

The challenge: in the discussion, Jochen and Charles Wolverton showed that “computations” are arbitrary interpretations of physical phenomena. Because Turing machines are pure abstractions, it is always possible to arbitrarily define a mapping between the evolving states of any physical object and abstract computations. Therefore asking “what does this system compute?” does not admit a single answer: the answer can be anything and nothing. In terms of one of our main explananda – “how do brains generate meanings?” – the claim is that answering “by performing some computation” is therefore always going to be an incomplete answer. The reason is that computations are abstract: physical processes acquire computational meaning only when we (intentional beings) arbitrarily interpret these processes in terms of computation. From this point of view, it becomes impossible to say that computations within the brain generate the meanings that our minds deal with, because this view requires meanings to be a matter of interpretation. Once one accepts this point of view, meanings always pre-exist as an interpretation map held by an observer. Therefore “just” computations can only trade pre-existing (and externally defined!) meanings, and it would seem that generating new meanings from scratch entails an infinite regress.

To me, this is nothing but another transformation of the hard problem, the philosophical kernel that one needs to penetrate in order to understand how to interpret the mechanisms that we can study scientifically. It is also one of the most beautifully recursive problems that I can envisage: the challenge is to generate an interpretative map that can be used to show how interpretative maps can be generated from scratch, but this seems impossible, because apparently you can only generate a new map if you can ground it on a pre-existing map. Thus, the question becomes: how do you generate the first map, the first seed of meaning, a fixed reference point, which gets the recursive process started?

In the process of spelling out his criticism, Jochen gave the famous example of a stone. Because the internal state of the stone changes all the time, for any given computation we can create an ad-hoc map that specifies the correspondence of a series of computational steps with the sequence of internal states of our stone. Thus, we can show that the stone computes whatever we want, and therefore, if we had a computational reduction of a mind/brain, we could say that the same mind exists within every stone. Consequently, computationalism either requires some very odd form of panpsychism or is utterly useless: it can’t help to discriminate between what can generate a mind and what can’t. I am not going to embrace panpsychism, so I am left with the only option of biting the bullet and showing how this kind of criticism can be addressed.
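To make the force of the stone argument concrete, here is a small Python sketch – my own illustration, not anything from Jochen’s original statement of it; the function names and the example trace are invented. Given any sequence of distinct physical states and any computation trace we care to pick, an ad-hoc interpretation map can always be built that pairs them off; under that map the stone “performs” the chosen computation, and under another map it performs something else entirely:

```python
# Toy illustration of the "a stone computes anything" argument: the
# interpretation map, not the stone, does all the work.

def desired_trace():
    """The steps of some computation we would like the stone to 'run'."""
    return ["count=3", "count=2", "count=1", "halt"]

def stone_states(n):
    """Stand-ins for the stone's successive microphysical states; any distinct
    labels will do, since the argument only needs them to be distinguishable."""
    return [f"microstate-{i}" for i in range(n)]

trace = desired_trace()
states = stone_states(len(trace))

# The ad-hoc map: simply pair each physical state with one computational step.
interpretation = dict(zip(states, trace))

for s in states:
    print(f"{s} is interpreted as {interpretation[s]!r}")

# Swap in any other trace of the same length and the very same stone "computes"
# that instead - which is exactly the arbitrariness the criticism points to.
```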

Without digressing too much, I hope that the above leaves no doubt about where I stand: first, I think this critique of computational explanations of the (expected) mind/brain equivalence is serious and needs an answer. Furthermore, I also think that answering it convincingly would count as significant progress, even a breakthrough, if we take ‘convincingly’ to stand for ‘capable of generating consensus’. Dissolving this apparently unsolvable conundrum is equivalent to showing why a mechanism can generate a mind; I don’t know if there is a bigger prize in this game.

I’ll start from my day job: I write software for a living. What I do is write instructions that will make a computer reliably execute a given sequence of computations and produce the desired results. It follows that I can, somehow, know for sure what computations are going to be performed: if I couldn’t, writing my little programs would be in vain. Thus, there must be something different between our ordinary computers and any given stone. The obvious difference is that computers are engineered: they have a very organised structure and behaviour, specifically because this makes programming them feasible. However, in theory it would be possible to produce massively complicated input/output systems to substitute the relevant parts of a computer (CPU, RAM, long-term memory) with a stone; we don’t do this because it is practically far too complicated, not because it is theoretically impossible. Thus, the difference isn’t in the regular structure and easily predictable behaviour of the Von Neumann/Harvard and derived architectures. I think the most notable differences are these two:

  1. When we use a computer, we have already agreed upon the correct way to interpret its output. More specifically, all the programs that are written assume such a mapping, and produce outputs that conform to it. If a given program will be used by humans (this isn’t always the case!), the programmer will make sure the results are intelligible to us. Similarly, the mapping between the computer’s states and their computational meaning is also fixed (so fixed and agreed in advance that, in practice, I don’t even need to know how it works).
  2. In turn, because the mapping isn’t arbitrary, the input/output transformations also follow predefined and discrete sets of rules. Thus, you can plug in different monitors and keyboards and expect them to work in similar ways.

For both differences, it’s a matter of having a fixed map (we can for simplicity collapse the maps from 1 & 2 into a single one). Once our map is defined and agreed upon, we can solve the stone problem and say “computer X is running software A, computer Y is running software B” and expect everyone to agree. The arbitrariness of the map becomes irrelevant because in this case the map itself has been designed/engineered and agreed from the start.

This isn’t trivial, because it becomes enlightening when we propose the hypothesis that brains can be modelled as computers. Note my wording: I am not saying “brains are computers”; I talk about “modelled” because the aim is to understand how brains work – it’s an epistemological quest. We are not asking “what brains/minds are”; in fact, I’ll do all I can to steer away from ontology altogether.

Right: if we assume that brains can be modelled as computers, it follows that it should be possible to compose a single map that would allow us to interpret brain mechanisms in terms of computations. Paired with a perfect brain scanner (a contraption that can report all of the brain states that are required to do the mapping), such a map would allow us to say without doubt “this brain is computing this and that”. As a result, with relatively little additional effort, it should become possible to read the corresponding mind. From this point of view, the fact that there is an infinite number of possible maps, but only one is “the right” one, means that the problem is not about arbitrariness (as it seemed for the stone). The problem is entirely different: it is about finding the correct map, the one that is able to reliably discern what the scanned mind is thinking about. This is why in the original discussion I said the arbitrariness of the mapping is the best argument for a computational theory of the mind: it ensures the search space for the map is big enough to give us hope that such a map does exist. Note also that all of the above is nothing new; it just states explicitly the assumptions that underlie all of neuroscience – if there are exceptions, they would be considered very unorthodox.

However, this is where I think the subject becomes interesting. All of the above has left out the hard side of the quest: I haven’t even tried to address the problem of how computations can generate a “meaningful map” on their own. To tackle this mini-hard problem, we need to go back to where we started and recollect how I described the core of the “anti-computationalism” stance. Talking about brains/mechanisms, I asked: how [does the brain] generate the first map, the first seed of meaning, a fixed reference point, which gets the recursive process started? Along the way, I claimed that it is reasonable to expect that a different but important map can be found, the one that describes (among many other things) how to translate brain events into mind events (thoughts, memories, desires, etc.). Therefore, one has to admit that this second map (our computational interpretation) would have to contain, at least implicitly, the answer about the fixed reference point. How is this possible? Note that I have strategically posed the question in my own terms, and mentioned the need for a fixed reference point. You may want to recall the “I-token” construct of Retinoid Theory, but in general one can easily point out that the reference point is provided by the physical system itself. We have, ex hypothesi, a system that collects “measurements” from the environment (sensory stimuli), processes them, and produces output (behaviour); this output is usually appropriate to preserve the system’s integrity (and to reproduce, but that’s another story). Fine: such a system IS a fixed reference point. The integrity that justifies the whole existence of the system IS precisely what is fixed – all the stimuli it collects are relative to the system itself. As long as the system is intact enough to function, it can count as a fixed reference point; with a fixed reference, meanings become possible because reliable relations can be identified, and if they can, then they can be grouped together to produce more comprehensive “interpretative” maps. This is the main reason why I like Peter’s Haecceity: it’s the “thisness” of a particular computational system that actually seeds the answer to the hard side of the question.

Note also that all of the above captures the differences I’ve spelled out between a standard computer and a common stone. It’s the specific physicality of the computer that ultimately distinguishes it from a stone: in this case, we (humans) have defined a map (designing it from scratch with manageability in mind) and then used the map to produce a physical structure that will behave accordingly. In the case of brains/minds, we need to proceed in the opposite direction: given a structure and its dynamic properties, we want to define a map that is indeed intelligible.

Conclusions:

  • The computational metaphor should be able to capture the mechanisms of the brain and thus describe the (supposed) equivalence between brain events and mind events.
  • Such a description would count as a weak explanation, as it spells out the “what” but doesn’t even try to produce a conclusive “why”.
  • However, just expecting such a mapping to be possible already suggests where to find the “why” (or provides it, if you feel charitable). If such a mapping proves to be possible, it follows that to be conscious, an entity needs to be physical. Its physicality is the source of its ability to generate its own, subjective meanings.
  • This in turn reaffirms why our initial problem, posed by the unbounded arbitrariness of computational explanations, does not apply. The computational metaphor is a way to describe (catalogue) a bunch of physical processes; it spells out the “what” but is mute on the “why”. The theoretical nature of computation is the reason why it is useful, but it also points to the missing element: the physical side.
  • If such a map turns out to be impossible, the most likely explanation is that there is no equivalence between brain and mind events.


Finally, you may claim that all these conclusions are themselves weak. Even if the problematic step of introducing Haecceity/physicality as the requirement to bootstrap meaning is accepted, the explanation we gain is still partial. This is true, but that is down to the mystery of reality (again, following Peter): because cognition can only generate and use interpretative maps (or translation rules), it “just” shuffles symbols around; it cannot, in any way or form, ultimately explain why the physical world exists (or what exactly the physical world is – this is why I steered away from ontology!). Because all knowledge is symbolic, some aspect of reality always has to remain unaccounted for and unexplained. Therefore, all of the above can still legitimately feel unsatisfactory: it does not explain existence. But hey, it does talk about subjectivity and meaning (and by extension, intentionality), so it does count as (hypothetical) progress to me.

Now please disagree and make me think some more!

Alters of the Universe

Bernardo Kastrup has some marvellous invective against AI engineers in this piece…

The computer engineer’s dream of birthing a conscious child into the world without the messiness and fragility of life is an infantile delusion; a confused, partial, distorted projection of archetypal images and drives. It is the expression of the male’s hidden aspiration for the female’s divine power of creation. It represents a confused attempt to transcend the deep-seated fear of one’s own nature as a living, breathing entity condemned to death from birth. It embodies a misguided and utterly useless search for the eternal, motivated only by one’s amnesia of one’s own true nature. The fable of artificial consciousness is the imaginary band-aid sought to cover the engineer’s wound of ignorance.

I have been this engineer.

I think it’s untrue, but you don’t have to share the sentiment to appreciate the splendid rhetoric.

Kastrup distinguishes intelligence, which is a legitimate matter of inputs, outputs and the functions that connect them, from consciousness, the true what-it-is-likeness of subjectivity. In essence he just doesn’t see how setting up functions in a machine can ever touch the latter.

Not that Kastrup has a closed mind: he speaks approvingly of Pentti Haikonen’s proposed architecture; he just doesn’t think it works. As Kastrup sees it, Haikonen’s network merely gathers together sparks of consciousness: it then does a plausible job of bringing them together to form more complex kinds of cognition, but in Kastrup’s eyes it assumes that consciousness is there to be gathered in the first place – that it exists out there in tiny parcels amenable to this kind of treatment. There is in fact, he thinks, absolutely no reason to think that this kind of panpsychism is true: no reason to think that rocks or drops of water have any kind of conscious experience at all.

I don’t know whether that is the right way to construe Haikonen’s project (I doubt whether gathering experiential sparks is exactly what Haikonen supposed he was about). Interestingly, though Kastrup is against the normal kind of panpsychism (if the concept of  ‘normal panpsychism’ is admissible), his own view is essentially a more unusual variety.

Kastrup considers that we’re dealing with two aspects here, internal and external; our minds have both: the external is objective, the internal represents subjectivity. Why wouldn’t the world also have these two aspects? (Actually it’s hard to say why anything should have them, and we may suspect that by taking it as a given we’re in danger of smuggling half the mystery out of the problem, but let that pass.) Kastrup takes it as natural to conclude that the world as a whole must indeed have the two aspects (I think at this point he may have inadvertently ‘proved’ the existence of God in the form of a conscious cosmos, which is regrettable, but again let’s go with it for now); but not parts of the world. The brain, we know, has experience, but the groups of neurons that make it up do not (do we actually know that?); it follows that while the world as a whole has an internal aspect, objects or entities within it generally do not.

Yet of course, the brain manages to have two aspects, which must surely be something to do with the structure of the brain? May we not suspect that whatever it is that allows the brain to have an internal aspect, a machine could in principle have it too? I don’t think Kastrup engages effectively with this objection; his view seems to be that metabolism is essential, though why that should be, and why machines can’t have some form of metabolism, we don’t know.

The argument, then, doesn’t seem convincing, but it must be granted that Kastrup has an original and striking vision: our consciousnesses, he suggests, are essentially like the ‘alters’ of Dissociative Identity Disorder, better known as Multiple Personality, in which several different people seem to inhabit a single human being. We are, he says, like the accidental alternate identities of the Universe (again, I think you could say, of God, though Kastrup clearly doesn’t want to).

As with Kastrup’s condemnation of AI engineering, I don’t think at all that he is right, but it is a great idea. It is probable that in his book-length treatments of these ideas Kastrup makes a stronger case than I have given him credit for above, but I do in any case admire the originality of his thinking, and the clarity and force with which he expresses it.

Reality

This is the last of four posts about key ideas from my book The Shadow of Consciousness, and possibly the weirdest; this time the subject is reality.

Last time I suggested that qualia – the subjective aspect of experiences that gives them their what-it-is-like quality – are just the particularity, or haecceity, of real experiences. There is something it is like to see that red because you’re really seeing it; you’re not just understanding the theory, which is a cognitive state that doesn’t have any particular phenomenal nature. So we could say qualia are just the reality of experience. No mystery about it after all.

Except of course there is a mystery – what is reality? There’s something oddly arbitrary about reality; some things are real, others are not. That cake on the table in front of me; it could be real as far as you know; or it could indeed be that the cake is a lie. The number 47, though, is quite different; you don’t need to check the table or any location; you don’t need to look for an example, or count to fifty; it couldn’t have been the case that there was no number 47. Things that are real in the sense we need for haecceity seem to depend on events for their reality. I will borrow some terminology from Meinong and call that dependent or contingent kind of reality existence, while what the number 47 has got is subsistence.

What is existence, then? Things that exist depend on events, I suggested; if I made a cake and put it on the table, it exists; if no-one did that, it doesn’t. Real things are part of a matrix of cause and effect, a matrix we could call history. Everything real has to have causes and effects. We can perhaps prove that by considering the cake’s continuing existence. It exists now because it existed a moment ago; if it had no causal effects, it wouldn’t be able to cause its own future reality, and it wouldn’t be here. If it wasn’t here, then it couldn’t have had preceding causes, so it didn’t exist in the past either. Ergo, things without causal effects don’t exist.

Now that’s interesting because of course, one of the difficult things about qualia is that they apparently can’t have causal effects. If so, I seem to have accidentally proved that they don’t exist! I think things get unavoidably complex here. What I think is going on is that qualia in general, the having of a subjective side, is bestowed on things by being real, and that reality means causal efficacy. However, particular qualia are determined by the objective physical aspects of things; and it’s those that give specific causal powers. It looks to us as if qualia have no causal effects because all the particular causal powers have been accounted for in the objective physical account. There seems to be no role for qualia. What we miss is that without reality nothing has causal powers at all.

Let’s digress slightly to consider yet again my zombie twin. He’s exactly like me, except that he has no qualia, and that is supposed to show that qualia are over and above the account given by physics. Now according to me that is actually not possible, because if my zombie twin is real, and physically just the same, he must end up with the same qualia. However, if we doubt this possibility, David Chalmers and others invite us at least to accept that he is conceivable. Now we might feel that whether we can or can’t conceive of a thing is a poor indicator of anything, but leaving that aside I think the invitation to consider the zombie twin’s conceivability draws us towards thinking of a conceptual twin rather than a real one. Conceptual twins – imaginary, counterfactual, or non-existent ones – merely subsist; they are not real and so the issue of qualia does not arise. The fact that imaginary twins lack qualia doesn’t prove what it was meant to; properly understood it just shows that qualia are an aspect of real experience.

Anyway, are we comfortable with the idea of reality? Not really, because the buzzing complexity and arbitrariness of real things seem to demand an explanation. If I’m right about all real things necessarily being part of a causal matrix, they are in the end all part of one vast entity whose curious form should somehow be explicable.

Alas, it isn’t. We have two ways of explaining things. One is pure reason: we might be able to deduce the real world from first principles and show that it is logically necessary. Unfortunately pure reason alone is very bad at giving us details of reality; it deals only with Platonic, theoretical entities which subsist but do not exist. To tell us anything about reality it must at least be given a few real facts to work on; but when we’re trying to account for reality as a whole that’s just what we can’t provide.

The other kind of explanation we can give is empirical; we can research reality itself scientifically and draw conclusions. But empirical explanations operate only within the causal matrix; they explain one state of affairs in terms of another, usually earlier one. It’s not possible to account for reality itself this way.

It looks, then, as if reality is doomed to remain at least somewhat mysterious, unless we somehow find a third way, neither empirical nor rational.

A rather downbeat note to end on, but sincere thanks to all those who have helped make the discussion so interesting so far…

Mind Uploading: does speed matter?

Not according to Keith B. Wiley and Randal A. Koene. They contrast two different approaches to mind uploading: in the slow version, neurons or some other tiny component are replaced one by one with a computational equivalent; in the quick one, the brain is frozen, scanned, and reproduced in a suitable computational substrate. Many people feel that the slow way is more likely to preserve personal identity across the upload, but Wiley and Koene argue that it makes no difference. Why does the neuron replacement have to be slow? Do we have to wait a day between each neuron switch? It’s hard to see why – why not just do each switch as quickly as feasible? Putting aside practical issues (we have to do that a lot in this discussion), why not throw a single switch that replaces all the neurons in one go? Then, if we accept that, how is it different from a destructive scan followed immediately by creation of the computational equivalent (which, if we like, can be exactly the same as the replacement we would have arrived at by the other method)? If we insist on a difference, argue Wiley and Koene, then somewhere along the spectrum of increasing speed there must be a place where preservation of identity switches abruptly to loss of identity; this is quite implausible, and there are no reasonable arguments to suggest where this critical speed should lie.

One argument for the difference comes from non-destructive scanning. Wiley and Koene assume that the scanning process in the rapid transfer is destructive; but if it were not, the original brain would continue on its way, and there would be two versions of the original person. Equally, once the scanning is complete there seems to be no reason why multiple new copies could not be generated. How can identity be preserved if we end up with multiple versions of the original? Wiley and Koene believe that once we venture into this area we need to expand our concept of identity to include the possibility of a single identity splitting, so for them this is not a fatal objection.

Perhaps the problem is not so much the speed in itself as the physical separation? In the fast version the copy is created some distance away from the original, whereas in gradual replacement the new version occupies essentially the same space as the original – might it be this physical difference which gives rise to differing intuitions about the two methods? Wiley and Koene argue that even in the case of gradual replacement there is a physical shift. The replacement neuron cannot occupy exactly the same space as the one it is to replace, at least not at the moment of transfer. The spatial difference may be a matter of microns rather than metres, but here again, why should that make a difference? As with speed, are we going to fix on some arbitrary separation at which identity ceases to be preserved, and why should that happen?

I think Wiley and Koene don’t do full justice to the objection here. I don’t think it really rests on physical separation; it implicitly rests on continuity. Wiley and Koene dismiss the idea that a continuous stream of consciousness is required to preserve identity, but it can be defended. It rests on the idea that personal identity resides not in the data or the function in general, but in a specific instance in particular. We might say that I as a person am not equivalent to SimPerson V – I am equivalent to a particular game of SimPerson V, played on a particular occasion. If I reproduce that game exactly on another occasion, it isn’t me, it’s a twin.

Now the gradual replacement scenario arguably maintains that kind of identity. The new, artificial neurons enter an ongoing live process and become part of it,  whereas in the scan and create process the brain is necessarily stopped, translated into data, and then recreated. It’s neither the speed nor the physical separation that disrupts the preservation of the identity: it’s the interruption.

Can that be right, though – is merely stopping someone enough to disrupt their identity? What if I were literally frozen in some icy accident, so that my brain flat-lined, and then was restored and brought back to life? Are we forced to say that the person after the freezing is a twin, not the same? That doesn’t seem right. Perhaps brute physical continuity has some role to play after all; perhaps the fact that when I’m frozen and brought back it’s the same neurons that are firing helps somehow to sustain the identity of the process over the stoppage and preserve my identity?

Or perhaps Wiley and Koene are right after all?