Mind Uploading: does speed matter?

Not according to Keith B. Wiley and Randal A. Koene. They contrast two different approaches to mind uploading: in the slow version, neurons or some other tiny components are replaced one by one with a computational equivalent; in the quick version, the brain is frozen, scanned, and reproduced in a suitable computational substrate. Many people feel that the slow way is more likely to preserve personal identity across the upload, but Wiley and Koene argue that it makes no difference. Why does the neuron replacement have to be slow? Do we have to wait a day between each neuron switch? It’s hard to see why – why not just do each switch as quickly as feasible? Putting aside practical issues (we have to do that a lot in this discussion), why not throw a single switch that replaces all the neurons in one go? And if we accept that, how is it different from a destructive scan followed immediately by creation of the computational equivalent (which, if we like, can be exactly the same as the replacement we would have arrived at by the other method)? If we insist on a difference, argue Wiley and Koene, then somewhere along the spectrum of increasing speed there must be a place where preservation of identity switches abruptly to loss of identity; this is quite implausible, and there are no reasonable arguments to suggest where this maximum speed should lie.

One argument for the difference comes from non-destructive scanning. Wiley and Koene assume that the scanning process in the rapid transfer is destructive; but if it were not, the original brain would continue on its way, and there would be two versions of the original person. Equally, once the scanning is complete there seems to be no reason why multiple new copies could not be generated. How can identity be preserved if we end up with multiple versions of the original? Wiley and Koene believe that once we venture into this area we need to expand our concept of identity to include the possibility of a single identity splitting, so for them this is not a fatal objection.

Perhaps the problem is not so much the speed in itself as the physical separation? In the fast version the copy is created some distance away from the original, whereas in gradual replacement the new version occupies essentially the same space as the original – might it be this physical difference which gives rise to differing intuitions about the two methods? Wiley and Koene argue that even in the case of gradual replacement, there is a physical shift. The replacement neuron cannot occupy exactly the same space as the one it is to replace, at least not at the moment of transfer. The spatial difference may be a matter of microns rather than metres, but here again, why should that make a difference? As with speed, are we to fix on some arbitrary limit beyond which identity ceases to be preserved, and why should the change happen just there?

I think Wiley and Koene don’t do full justice to the objection here. I don’t think it really rests on physical separation; it implicitly rests on continuity. Wiley and Koene dismiss the idea that a continuous stream of consciousness is required to preserve identity, but it can be defended. It rests on the idea that personal identity resides not in the data or the function in general, but in a specific instance in particular. We might say that I as a person am not equivalent to SimPerson V – I am equivalent to a particular game of SimPerson V, played on a particular occasion. If I reproduce that game exactly on another occasion, it isn’t me, it’s a twin.

Now the gradual replacement scenario arguably maintains that kind of identity. The new, artificial neurons enter an ongoing live process and become part of it, whereas in the scan-and-create process the brain is necessarily stopped, translated into data, and then recreated. It’s neither the speed nor the physical separation that disrupts the preservation of identity: it’s the interruption.

Can that be right though – is merely stopping someone enough to disrupt their identity? What if I were literally frozen in some icy accident, so that my brain flat-lined, and were then restored and brought back to life? Are we forced to say that the person after the freezing is a twin, not the same? That doesn’t seem right. Perhaps brute physical continuity has some role to play after all; perhaps the fact that when I’m frozen and brought back it’s the same neurons that are firing helps somehow to sustain the identity of the process over the stoppage and preserve my identity?

Or perhaps Wiley and Koene are right after all?

77 thoughts on “Mind Uploading: does speed matter?”

  1. I think the worry about twins and copies is misplaced: if the mind can be conceived of as being a probability distribution over certain possible states—i.e. if there is genuine randomness, and a mind is at some point in time in a state corresponding, for instance, to ‘wanting to eat a ham sandwich’ with probability p versus ‘wanting to eat a tuna sandwich’ with probability 1-p—then there exists no physically implementable process that copies this mind, i.e. creates a twin with exactly the same probability distribution over possible actions/states.

    This is due to a result known as the classical no-cloning theorem (‘classical’ because, somewhat oddly, its quantum counterpart—the ‘no-cloning theorem’—was discovered earlier), which basically tells you that you can’t clone an arbitrary probability distribution—such that if you have a box with a red ball inside with probability p, and a green ball with probability 1-p, there’s no physical dynamics that leaves you afterwards with two boxes, each of which contains a red ball with probability p, and a green one with probability 1-p.
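
    (For concreteness, a minimal sketch of the linearity argument, on my own gloss, assuming only that classical states are probability vectors and that the allowed dynamics are stochastic, hence linear, maps:

    \[
    s = \begin{pmatrix} p \\ 1-p \end{pmatrix}, \qquad \text{cloning would require a map } C \text{ with } C(s \otimes b) = s \otimes s \text{ for every } s,
    \]
    \[
    \text{but linearity forces } C\big((p\,s_1 + (1-p)\,s_2) \otimes b\big) = p\,C(s_1 \otimes b) + (1-p)\,C(s_2 \otimes b) = p\,(s_1 \otimes s_1) + (1-p)\,(s_2 \otimes s_2),
    \]

    where s_1 and s_2 are the ‘definitely red’ and ‘definitely green’ states and b is the blank box. The right-hand side is the correlated ‘both boxes agree’ distribution, not the required product s ⊗ s of two independent boxes, so no single linear map clones every s.)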

    This of course clashes with our intuition about the in-principle simulatability of minds: once we put a mind in a computer, so to speak, it seems that we can copy it indefinitely. But there’s a difference: computers can’t come up with genuine randomness; for a mind to depend on this, it would thus have to be imported from the environment—e.g. via various sense organs. But then, the state of the computer is again a probability distribution over various possibilities, and can likewise not be cloned exactly.
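
    A toy illustration of that difference, just as a sketch (the code and numbers are mine, purely illustrative): a pseudorandom generator’s internal state can be copied bit for bit, and the copy then replays the identical stream, whereas entropy drawn in from the environment has no such copyable state.

    ```python
    import random
    import os

    # A seeded PRNG is fully determined by its internal state: copy the state,
    # and the "clone" reproduces exactly the same stream of outputs.
    rng = random.Random(1234)
    clone = random.Random()
    clone.setstate(rng.getstate())          # copy the complete internal state
    assert [rng.random() for _ in range(5)] == [clone.random() for _ in range(5)]

    # Randomness imported from the environment is different: two reads of
    # os.urandom() will (with overwhelming probability) disagree, and there is
    # no internal state we could copy to make a second machine replay them.
    print(os.urandom(8).hex())
    print(os.urandom(8).hex())
    ```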

    So here, we have a criterion for a closest-continuer theory of identity, which is simply given by having the same dispositional state, i.e. probability distribution over possible states.

  2. I’m not up on the paradigm of uploading a brain. Does it work for the whole body? So if you made a simulation of a body it would be alive? The idea of simulated intelligence sounds more plausible, as computers are thinking things and not living things.

  3. I think you’re describing a migratory system, in the slow system. Really I think rather than copying, what you want is for the artificial system to be made available to the biological system to ‘grow into’. I.e., the bio can form connections to the artificial synapses, programming them and being programmed by them. Eventually, with the passing of the biological (hopefully slowly, so as to facilitate the best migration possible), some of the momentum of the original person is carried across. But ‘transfer’ it ain’t!

    Really if you’re describing the slow replacement of neurons, that’s no different than destructive scan and replication – you are just destructive scanning one neuron at a time and replacing. I’m not sure that’s any different from whole brain destructive scanning and replication, really.

  4. @Howard: Apparently we upload our brains into a virtual environment that will enable us to feel qualia…even though it’s incredibly obvious simulated fire produces no heat. That a physical world might be necessary for qualia, and consciousness, seems to be ignored by what IMO is the obvious dualism inherent in this uploading idea.

  5. Sci, I’m with you in spirit, but I think the argument ‘a simulated fire produces no heat’ is not as persuasive as it may seem—one might argue that the reason you don’t feel the heat is that you’re causally isolated from the fire by virtue of not being part of the same simulation, with the ‘output’ part of the simulation being limited to reproducing the fire’s light. But while the simulated fire does not burn you, or a piece of wood before the computer’s monitor, it would burn a simulated log, and conceivably also a simulated you, if there were such a thing, because those would be part of the same causal structure.

  6. @Jochen: It’s possible, but it seems to me this is rather boldly extrapolating from what we currently know about consciousness as well as the physical world.

    But I recognize fear of death is a powerful motivator, so this digression will likely occupy considerable resources before either convincing the upload-atheists like myself or being declared a failure.

  7. Hi Peter,

    You haven’t raised the possibility that there is no fact of the matter, that identity is a human construct and has no basis in objective reality. We are evolved to think of ourselves as existing continuously over time because it is useful for us to do so — we care about our future successor states because not caring tends to lead to recklessness and a failure to pass on genes.

    That’s not to say there’s anything wrong with thinking about identity as we do. Just because it’s a human or social construct does not mean that it is not valid and useful, but we should not expect there to be a fact of the matter on questions such as these. There are only different ways of thinking about it with various advantages and disadvantages.

    We could choose to think of identity as having something to do with physical continuity or we could choose to think of it as having something to do with the particular pattern of information processing a brain instantiates. The former seems to rescue us from cases of diverging identities, but that’s an illusion in my view. Suppose each neuron is replaced successively by not one but two identical nano-devices, and later on the artificial brain is split continuously into two separate copies.

    Similarly we can imagine some sort of quantum experiment where your whole body is duplicated in such a way that each copy has equal claim to continuity with the original. Indeed, if the Many Worlds Interpretation is correct, this is happening all the time. And of course a human embryo can split to become two individuals, each with equal claim to the identity of the original (if embryos can be said to have identity at all).

    So I think if we want a concept of identity, we need to be open to the idea that identity can split. In my view this rather undermines the motivation for supposing that continuity is required for maintenance of identity and makes more attractive the view that identity ought to be held to be related to patterns rather than stuff.

  8. Hi Jochen,

    I think you misinterpret the classical no-cloning theorem. This theorem states that you can’t reproduce a random event with the same probabilities if all you are given is the outcome of similar previous events. In other words, there is no way to determine the precise underlying probability distribution if all you have access to is the output of that distribution.

    But if you can see inside the black box and understand how the probabilities arise, then you can certainly produce a clone. Now, I’m not saying we can do that, but it is not true to say that it is logically impossible to produce a clone of an object which involves probability.

    But I also take issue with your implication that our identity is so sensitively bound to precise probabilistic outcomes. Whatever your favoured interpretation of QM, there are uncountable approximately random events happening in my brain at any given moment. Since there is no way to know which way these events will turn out, and since these events can presumably bubble up via the butterfly effect to have real consequences, there is no possibility of predicting what I am going to do in every 50/50 situation. In other words, more than one outcome is consistent with who I am, at least with my concept of myself and with how others think of me. If I could be cloned and the clone made different choices in such cases, I don’t see how it would be any less me, because both versions of the choice are compatible with who I am.

    In other words, I cannot entertain the idea that I would cease to be me if an electron happened to swerve right rather than left.

    For similar reasons, I don’t take seriously the idea that we need real randomness to be conscious. If our identities and our minds cannot be terrifically sensitive to the precise outcome of random events, and if it is not easy to distinguish pseudorandom events from random events, then pseudorandomness will do as a proxy for randomness and nothing of importance will have changed.

    I think your answer to the “simulated fire will not burn” argument is exactly right.

  9. Hi Sci,

    “It’s amazing what delusional faith can bring out in people.”

    I don’t think that’s fair. Yudkowsky is committed to rationalism, decision theory, utilitarianism and so on. This is nothing like delusional faith. I think he over-reacted in this case and I don’t agree with his conclusion or how he handled it, but I think his willingness to take uncomfortable ideas seriously if they follow from his premises is almost commendable.

    If you think he is wrong, I think it would be better to help find the flaw in the argument rather than dismissing it because you think the conclusion is absurd.

    “the obvious dualism inherent in this uploading idea.”

    I agree that there is dualism here, but it is dualism of structure and substance, not Cartesian dualism. As such it is quite tenable and is no different than the dualism between software and hardware. There’s nothing mystical about it: for instance it does not require a magic interface in the pineal gland or whatever.

    So, while you may have good reasons to argue against my view, I dispute that the “dualism” accusation is the knockout blow you seem to think it is.

    “I recognize fear of death is a powerful motivator”

    You could be more charitable. If Yudkowsky can be led to a horrific conclusion you find absurd (the basilisk) by his commitment to certain premises, then it is also plausible that he could be led to more optimistic conclusions you find absurd. No need to appeal to wishful thinking to explain it.

    I for one have no expectation of being saved from death by mind uploading. I believe it is possible in principle, because that is where I have been led by extrapolation from my beliefs on various subjects, but I don’t think it is at all feasible in the foreseeable future and perhaps it never will be. As such I resent the implication that my belief is motivated by fear of death.

  10. Disagreeable Me:

    But if you can see inside the black box and understand how the probabilities arise, then you can certainly produce a clone. Now, I’m not saying we can do that, but it is not true to say that it is logically impossible to produce a clone of an object which involves probability.

    But if you can see inside the box, then you also don’t have genuine randomness, but merely randomness due to ignorance of the true microdynamics. The point I’m trying to make is precisely that there is genuine randomness on which our decisions may be predicated, ‘bubbling up’ from quantum mechanics (on most interpretations). This doesn’t negate the possibility of transtemporal identity: our state of mind will, at any given point, be a probability distribution (a mixture), rather than any definite, singular (pure) state, but that doesn’t prohibit lawful evolution. It’s merely the instantaneous state that can’t be duplicated, but that doesn’t mean it can’t evolve to a different state while still being me, any more than a change of the state of a rock implies that it’s no longer the same rock. And yes, I know identity can be a subtle concept—see Theseus’ ship or my grandfather’s axe—, and I don’t purport to answer these. But I don’t think that a probabilistic account makes matters any worse, at least.

  11. Hi Jochen,

    > But if you can see inside the box, then you also don’t have genuine randomness

    I don’t think that follows. Correct me if I’m wrong, but it is possible to set up a quantum mechanical experiment such that you have 50% probability of measuring spin up and 50% probability of measuring spin down. If you are privy to details about how the mechanism in the box has been constructed, you could be in a position to know that such an experiment has been set up within it. In this case, we have true probability and we also have the ability to set up an identical experiment in another box.

    Conversely, the classical no-cloning theorem just means that if you observe a sequence of up-up-down-up-down-down you can’t be sure what the underlying probabilities of up and down are.

    Now, if you did clone the box in such a way, you will of course not expect to see identical output from each box going forwards. This is to be expected since the output from each is random. But the output will be characteristically the same — it will have approximately the same frequency of ups and downs in each case. Similarly, if you cloned me, I would not expect my clone to have identical behaviour to me, but I would expect my clone’s behaviour to be perfectly characteristic of my behaviour and so the clone would have equal claim to my identity.
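
    To make the ‘characteristically the same’ point concrete, here is a quick sketch (a pseudorandom generator stands in for the genuinely random spin measurements, and the numbers are arbitrary):

    ```python
    import random

    def prepared_box(p, n, seed):
        """Simulate n up/down measurements on a box prepared with P(up) = p."""
        rng = random.Random(seed)
        return ["up" if rng.random() < p else "down" for _ in range(n)]

    original = prepared_box(p=0.5, n=10_000, seed=1)
    clone = prepared_box(p=0.5, n=10_000, seed=2)   # same preparation, independent randomness

    print(original[:6], clone[:6])                  # the two outcome sequences differ...
    print(original.count("up") / len(original),     # ...but their statistics agree
          clone.count("up") / len(clone))           # to within sampling error
    ```

    The two sequences differ outcome by outcome, but both hover around 50% up, which is all that having the same dispositional state can require.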

  12. @Disagreeable Me: “but I think his willingness to take uncomfortable ideas seriously if they follow from his premises is almost commendable.”

    We’ll have to agree to disagree because what you see as commendable I see as completely delusional.

    “I agree that there is dualism here, but it is dualism of structure and substance, not Cartesian dualism. As such it is quite tenable and is no different than the dualism between software and hardware.”

    There’s no dualism in software and hardware because hardware is all there is unless there’s a mind projecting meaning onto the physical patterns produced by our computers. Computers have only derived intentionality. Now whether we have intrinsic intentionality may be up for grabs, but there’s no mechanism for this as of yet and extraordinary claims require extraordinary evidence.

  13. Hi Sci,

    > We’ll have to agree to disagree because what you see as commendable I see as completely delusional.

    I think we can agree to disagree on whether it is commendable, but I think the assertion that it is delusional is unsupportable. Like it or not, Yudkowsky has reasons for what he believes. He can give a pretty compelling account of how he came to his conclusion. Whether you think the conclusion is warranted or not, this is not a delusion. If it were, it would be easy to point out where he goes wrong. But it isn’t.

    > There’s no dualism in software and hardware because hardware is all there is

    That is a tenable view, certainly. It’s not one I subscribe to myself. Rather than argue that point, I wanted to point out that not all dualisms are created equally. The dualism of form and substance implicit in say mathematical Platonism is a whole lot more respectable than Cartesian dualism whether you subscribe to it or not.

  14. @D. Me: “If it were, it would be easy to point out where he goes wrong. But it isn’t.”

    Perhaps you could explain why the terror caused by Roko’s basilisk is justifiable?

  15. Disagreeable Me (love the handle, by the way, but it’s a bit cumbersome to type out…):

    I don’t think that follows. Correct me if I’m wrong, but it is possible to set up a quantum mechanical experiment such that you have 50% probability of measuring spin up and 50% probability of measuring spin down. If you are privy to details about how the mechanism in the box has been constructed, you could be in a position to know that such an experiment has been set up within it. In this case, we have true probability and we also have the ability to set up an identical experiment in another box.

    Well, if you know that you have 50% probability of measuring spin up/down, then you also know the state of the particle, it’s 1/√2·(|up> + |down>). Then, of course you can clone, because you can just re-prepare that state. What the no-cloning theorem asserts is that you can’t clone an unknown state, i.e. there’s no transformation you can implement that produces from some unknown state plus some blank reference two copies of that state. That’s simply due to the linearity of the allowed transformations.

    Now, this linearity holds in classical as well as quantum mechanics, and thus, so does the theorem. What you propose to do is to essentially reduce the probabilistic classical system to its deterministic microdynamics, and then, clone that—for which there is of course no obstruction. But in a genuinely probabilistic theory, there is no such more fundamental deterministic layer you can appeal to, and hence, in general, the state an object in such a theory can be in can’t be copied.

    So no, in general, you can’t produce a copy that will have the same probability of reacting to some given stimulus in a certain way as you do; hence, we can identify you with that unique object that, say, eats a cheese sandwich with probability p, and a toast Hawaii with probability 1-p. Any purported clone of yours would have some different p’.

    If you’re interested in more detail, I’ve given a simple proof of the above here.

  16. So for some reason, I seem to have problems getting a post through right now… I’ll return later and try again.

  17. Hi Sci,

    > Perhaps you could explain why the terror caused by Roko’s basilisk is justifiable?

    I didn’t say it was justified, I said it was not deluded.

    The point is that there is an argument there to support it. That argument is predicated on a large array of premises any one of which you will probably disagree with, but each of these premises is reasonably tenable.

    I mean, you may disagree with the possibility of a future artificial superintelligence, but you probably wouldn’t go so far as to call the idea delusional. Similarly, you probably don’t think it is even remotely possible for us to be in a simulation (since you don’t seem to think that a simulated intelligence would be conscious), but you probably wouldn’t call me delusional for entertaining it as something that might be possible in principle.

    Same goes for utilitarianism and decision theory and a number of other assumptions that feed into the argument, such as various ideas about what constitutes personal identity and so on.

    The end result is a conclusion that you find so preposterous that it strikes you as delusional, despite the fact that it arises out of an argument that depends on (relatively) tenable assumptions.

    To me, a delusion is more like a conviction arrived at independently of any justification, despite evidence to the contrary, for reasons such as paranoia or wishful thinking or egotism. Often this conviction is rationalised later. That’s not what happened in this case. Roko’s basilisk arose out of honestly and fearlessly examining the consequences of prior intellectual commitments. There is no evidence to the contrary. If you want to poke holes in it, you will have to find weaknesses in the argument (of which there are many).

    If you’re interested, there’s a pretty detailed discussion of the ideas here:
    http://rationalwiki.org/wiki/Roko%27s_basilisk

    The tone of the article is about right, I think. It is pretty dismissive and incredulous of the conclusion, but it never goes quite as far as you in calling the whole thing deluded. Instead, it criticises by explaining the weaknesses in the argument. Interestingly, these criticisms mostly originated from within the “religion of computationalism” itself.

    Which illustrates the point that it’s not true that a significant portion of the Strong AI crowd believe in the conclusions of Roko’s basilisk, so it’s not a great illustration of the problems with that community.

  18. Sorry, but I find the idea that programs are “intelligent” to be delusional.

    We’ll have to agree to disagree.

  19. Disagreeable Me, in the experiment you’re proposing, you could indeed produce a clone—but only because knowing that you have 50% probability of observing spin up/down entails knowing the entire state, which you can then just re-prepare. The no-cloning theorem applies to unknown states, however. There, because of the linearity of the allowed transformations, it’s impossible to take a system in some state and some blank reference system, and apply some operation such that you end up with two copies of the original. This holds just as well for quantum mechanics as it does for (probabilistic) classical mechanics.

    What you proposed earlier, to open up the box and reverse-engineer it, is essentially just reducing the apparent probabilities to their deterministic underpinnings, and then copying that—on which there is obviously no restriction. But in a genuinely probabilistic setting, you don’t have that option, and it’s indeed the case that you can’t create a system that responds to the same stimuli in the same way as the original with the same probability. Thus, one can uniquely identify ‘you’ with that object that has a cheese sandwich with probability p, and a toast Hawaii with probability 1-p. No clone will precisely duplicate this state of affairs.

    If you’re interested in some more detail on this, I’ve given a simple proof of the above here, in comment 9: http://www.consciousentities.com/?p=1851

    So I wonder whether that’ll post…

  20. Hi Jochen,

    I agree that the no-cloning theorem applies to unknown states. My problem is that you assume that it is not possible to know the state of the brain. This may be true in quantum physics, but it is not obviously true in a classical scenario.

    I mean, you seem to think that knowing the probabilities necessarily means that the system is deterministic. I don’t know where you get this from. If you prepare an experiment so that you have 50% chance of measuring spin up and 50% of measuring spin down, you are fully aware of the probabilities *and* it is truly probabilistic.

    In any event, I do agree that various results from quantum physics imply that it does not make sense to imagine that we can read the precise state of the brain down to the position of every particle. But to me, those same results imply that identity and personality and mind are not tied precisely to such a granular state or they would be too fragile. From my perspective, the approximate state of the brain, at the level of neurons or slightly below, is probably enough detail to capture what it is that makes me me and you you.

  21. Jochen,

    Sorry you’ve been having problems – can’t see what the issue is. If something else needs restoring or there are other problems please drop me an email (hankins peter with no space at gmail dot com)

  22. Hi Peter, thanks for restoring my post; I don’t know precisely what the problem was, either, but it seems to have something to do with the url I tried to post—when I took it out, it finally worked. I hope I haven’t spammed some queue or whatever with thirty different versions of the above two posts…

    Disagreeable Me:

    I agree that the no-cloning theorem applies to unknown states. My problem is that you assume that it is not possible to know the state of the brain. This may be true in quantum physics, but it is not obviously true in a classical scenario.

    Well, I’m positing a classical scenario with irreducible randomness, which, as you said, tends to ‘bubble up’ from the quantum level. In fact, even in billiards, which one usually thinks of as being as Newtonian as it’s possible to get, after a couple of collisions (I think eight, IIRC), the dominating source of error in the prediction of the balls’ trajectories stems from quantum uncertainty. Thus, under such conditions, it is indeed not possible (fundamentally so) to give more than a probabilistic prediction of whether a given ball goes into the pocket, for instance. Given that our brains engage in dynamics vastly more complex than that, one would expect there to be similar uncertainties.
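
    A crude numerical illustration of that sort of amplification (not billiards, just a standard chaotic map, and the numbers are made up): an initial separation of about 1e-15 reaches order one within a few dozen steps, after which only probabilistic statements about the outcome remain.

    ```python
    # Two trajectories of the logistic map x -> 4x(1-x), initially differing by ~1e-15,
    # a stand-in for a quantum-scale uncertainty in a chaotic classical system.
    x, y = 0.3, 0.3 + 1e-15
    for step in range(1, 61):
        x, y = 4 * x * (1 - x), 4 * y * (1 - y)
        if step % 10 == 0:
            print(f"step {step:2d}: separation = {abs(x - y):.3e}")
    # The separation roughly doubles per step, so after ~50 steps the microscopic
    # uncertainty dominates any prediction of where the trajectory ends up.
    ```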

    I mean, you seem to think that knowing the probabilities necessarily means that the system is deterministic.

    No, I don’t. Knowing the probabilities—all of them, under the assumption that we’re dealing with irreducible randomness—entails knowing the state (I’ve given it above in post #16), and thus, entails the ability to clone, in the quantum case as well as the classical case.

    If there is a more fundamental dynamics that you can appeal to—which isn’t the case in genuinely probabilistic theories—then knowing this entails the system being deterministic, because it can be used to completely predict all outcomes, as well as constructing a clone.

    From my perspective, the approximate state of the brain, at the level of neurons or slightly below, is probably enough detail to capture what it is that makes me me and you you.

    I’m not even sure that level of detail is necessary, in general. One might coarse-grain far further, for instance to the level of ‘wanting a cheese sandwich’ versus ‘wanting a toast Hawaii’, or their respective neural correlates. But if the randomness in this description is irreducible—i.e. if it doesn’t emerge from a more fundamental deterministic dynamics we merely don’t have knowledge about, but ‘bubbles up’ from the quantum level—then you will not be able to produce a clone matching your mental state.

  23. Hi Jochen,

    I’m not sure how much we disagree then. We’re agreed that the no-cloning theorem precludes creating a precise clone if you don’t know the state. We’re agreed that you can know the precise state of a quantum system if you’ve prepared it yourself or know how it was prepared.

    I was not at any point suggesting that we could know the fundamental deterministic dynamics that give rise to apparent probability. I was talking about knowing the underlying probabilistic state, as in the example when you prepare it yourself.

    I agree with you that it’s hard to see how we could ever know this state for a biological brain. Presumably we’d have to prepare the quantum state ourselves somehow, which doesn’t seem plausible.

    I agree with you that coarser grains than the level of neurons might be sufficient detail to produce a cloned person with a very similar personality, memories, abilities and so on to the original. For me, if it is so similar that nobody can tell the difference, then I’m inclined to view it as the same identity.

    I am unsympathetic to views which regard the precise quantum state as terrifically significant for identity. As well as being too brittle a view of identity (we’re changing all the time so identity needs some flexibility to be preserved), it doesn’t acknowledge that there may be dampening effects on how much our decisions are swayed by quantum fluctuations. For instance, a computer is also composed of quantum particles, yet quantum fluctuations have very little influence on how a deterministic algorithm turns out. I actually do think quantum effects probably do bubble up to have macroscopic influences at a large scale, but I’m less sure that getting the probabilities a little wrong at the particle level in an attempt at cloning has a significant effect on the overall probabilities at a macroscopic level.

    For example (a contrived, made-up example that probably bears no relationship to actual biology), suppose that there is a set of electrons in a particular neuron, and whether the neuron fires depends on a certain proportion of these reaching a certain energy level. We could measure the state of each of these electrons only imprecisely, and yet the errors might cancel out on average, so that at a coarser grain we have a very accurate measure indeed of the average energy of the ensemble, and it is this average that has significance for the probability of whether we choose one sandwich or another.
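
    A toy version of that made-up example, just to show the cancellation at work (all numbers invented): each individual ‘electron’ measurement is quite noisy, yet the ensemble average comes out accurate.

    ```python
    import random

    random.seed(0)
    true_energies = [random.gauss(1.0, 0.2) for _ in range(10_000)]      # actual per-electron energies
    measured = [e + random.gauss(0.0, 0.5) for e in true_energies]       # each reading has large error

    true_mean = sum(true_energies) / len(true_energies)
    measured_mean = sum(measured) / len(measured)
    print(f"true average: {true_mean:.4f}   measured average: {measured_mean:.4f}")
    # Individual errors have standard deviation 0.5, but they largely cancel in the
    # average (expected error ~ 0.5 / sqrt(10000) = 0.005), so the coarse-grained
    # quantity that supposedly drives the neuron's firing is recovered accurately.
    ```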

  24. I am unsympathetic to views which regard the precise quantum state as terrifically significant for identity. As well as being too brittle a view of identity (we’re changing all the time so identity needs some flexibility to be preserved), it doesn’t acknowledge that there may be dampening effects on how much our decisions are swayed by quantum fluctuations.

    Well, it may be the case that such a damping out happens, but equally, it might not be; and actually, it doesn’t necessarily seem to me that it should be—randomness can be a valuable resource for certain tasks.

    Take the billiards example again: there, the tiny quantum uncertainty in the trajectory of one ball gets amplified such that ultimately, the probability of whether a certain ball goes in the pocket is dominated by this uncertainty; thus, you could not set up a copy of this billiards table, with all balls having the same positions and momenta, such that on the second table, the appropriate ball falls with the same probability. Thus, you can easily get arbitrarily large differences depending on irreducible randomness.

    Computers are constructed such that they are deterministic: the logic gates used to build them are stable to random fluctuations, and even if they misfire, this will generally be detected, and corrected as an error. One has to expend work in order to make a computer secure to noise; but there doesn’t seem to be an a priori reason that our brains are similarly resistant to random influences, or that they need to be.

    As for the preservation of identity, the change that occurs in us from moment to moment is governed by continuous physical dynamics; but setting up a clone, even a close one, would introduce a necessary discontinuity. So at any given time, you could point to a unique successor by simply following the continuous (unitary, if we’re keeping track of the full quantum process) change—this is the only one that will contain, in an important sense, the same information, since such dynamics is information-preserving. Any purported clone will contain different information, by virtue of being governed by a different (initial) probability distribution. So, you could use a kind of ‘principle of informational continuity’ to pick out your successor.

  25. Hi Jochen,

    > actually, it doesn’t necessarily seem to me that it should be—randomness can be a valuable resource for certain tasks.

    That’s fair enough. But that leaves the possibility that crude approximation of the state of enough fundamental particles might add up to a whole with essentially the same probabilities on average at a macroscopic level. Still random, so still useful for those tasks demanding randomness.

    > the change that occurs in us from moment to moment is governed by continuous physical dynamics

    I think this depends on which interpretation of QM you subscribe to. If you believe in objective collapse, then there is a discontinuity every time you have a collapse, with any number of distinct successor states equally possible and each equally you. If you believe in MWI, then there are numerous actual successor states, each equally you. Among this infinite number of successor states you could probably find one indistinguishable from the attempted clone. Only in some sort of hidden variable interpretation is there a unique evolving state which is identifiable with you.

    But even in this case, the argument you make seems to me to have a hidden dependency on the system being closed. Of course we both know it isn’t. Even cosmic rays could potentially have an influence, and of course so do our interactions with the macroscopic world and, indeed, with each other. If we consider identity to belong to the biological brain (or organism as a whole), largely independent of its environment, then we need to take into account all the possible interactions the environment could have on the state of the particles in the brain. We are again left with an infinite set of possible successor states, and again, I suspect that within this set you will find one the same as an attempted clone. And that is why I think a brittle concept of identity will not do.

    Well, this is how I think about it, since what is important to me is the state, which determines personality, memory, ability, beliefs and so on. If what is important to you is physical continuity then possibly my argument is not as persuasive, but then I don’t really see why physical continuity should be so important for identity. It seems a little arbitrary to me, motivated perhaps only by a desire to avoid having to worry about the splitting of identities. But as I said upthread, one can imagine scenarios where it doesn’t achieve even this, so why bother with it?

  26. Disagreeable Me, have you had a look at the discussion of the classical no-cloning theorem I linked to earlier? I think it makes clearer what I mean. Basically, you have a state, and some group of transformations taking valid states to valid states. Now, if you go with the most general such group on a state space that’s essentially just a space of properties, whose convex combinations yield probability distributions over possible properties, then one can rigorously establish that there’s no cloning of a general state.

    If we consider identity to belong to the biological brain (or organism as a whole), largely independent of its environment, then we need to take into account all the possible interactions the environment could have on the state of the particles in the brain.

    Hmm, I’ll have to think about that, but I don’t think this is true (but it’s a very good point). Basically, from any one moment to the next, the mutual information between a mind and its successor can be made arbitrarily large, while there will be a maximum it can’t exceed for any given clone; so one can just look at the correlation between purported predecessor and successor states, and single out the one that maximizes them as ‘the most you’—which I think makes good intuitive sense, since it’s the one whose memories agree most closely with yours, and who is most likely to act the same way you do in response to a given stimulus, etc. Basically, given some time resolution, I’d conjecture that there’d always be a possibility to find a successor state closer to you than the best clone, if that successor state was arrived at by the most general dynamics on some (possibly imaginary) closed system of which you are a subsystem. I’ll see if I maybe actually can prove this…

    And I’m not really taking physical continuity, per se, as being fundamental for identity; like you, I actually think more about a given system’s state (one could for instance establish some kind of ‘classical teleportation’-protocol that produces a copy of me at some distant location, while destroying the original me—I’d have no problem calling the result ‘me’). I think what’s important is usually the information contained in the state, and as I said, I believe that this is not perfectly copyable, while I’d taken as a given (but recognize that I haven’t shown) that the most general physical dynamics will always result in a ‘closer’ successor.

  27. Hi Jochen,

    > have you had a look about the discussion of the classical no-cloning theorem I linked to earlier?

    It’s certainly interesting but I’m not sure I’m competent to answer it. I’m open to it being correct, but there are a few things that seem a bit fishy to me, like why the best way to model mapping the state of one system to another is to start by representing the state of two systems as a multiplication of two matrices, or why we must do all transformations in one single matrix multiplication on both matrices. I assume there are very good reasons for this but as a layman they are not clear to me.

    Don’t get me wrong. I’m not questioning the no cloning theorem. If it’s a theorem accepted by the mathematical community I am certain it must be right. I’m more concerned about its applicability in this case.

    I am still left unconvinced that we cannot reproduce a quantum system because I don’t think you have established that we cannot know the underlying probabilistic state and produce it again. If we know that there is a green ball in the box with probability p and a red ball with probability 1-p (as we might if we prepared it so), then I don’t see why we cannot prepare a second box in the same way. So, if there is a probabilistic computer system which we have prepared, I don’t see why we cannot prepare a second identical computer system.

    For instance, you might represent the state of each system with 1 and 0 as with a classical system, but among the data encoded by the 1’s and 0’s might be probabilities. Having such a classical deterministic underpinning doesn’t rule out the system being intrinsically probabilistic. One might imagine a piece of hardware being incorporated into the computer that exploits quantum mechanics to, when given an input of p, yield a value of 1 with probability p, and this could be the basis of adding as much genuinely probabilistic behaviour to the system as you like while keeping the state as a set of ones and zeroes which do not fall victim to the matrix multiplication argument you deployed in the earlier post.
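
    A sketch of the sort of architecture I mean (the ‘quantum’ device is faked here with a pseudorandom generator, purely for illustration, and all names and numbers are invented): the system’s state is an ordinary, copyable classical record that includes the probabilities, while the actual sampling is delegated to a separate source.

    ```python
    import copy
    import random

    def quantum_bit(p):
        """Stand-in for the hypothetical quantum hardware: returns 1 with probability p.
        In the scenario described this would be genuinely random; here a PRNG fakes it."""
        return 1 if random.random() < p else 0

    class ProbabilisticAgent:
        """State is plain classical data (including probabilities) and is therefore copyable."""
        def __init__(self, preferences):
            self.preferences = preferences               # e.g. {"cheese": 0.7, "hawaii": 0.3}

        def choose(self):
            p_cheese = self.preferences["cheese"]
            return "cheese sandwich" if quantum_bit(p_cheese) else "toast Hawaii"

    me = ProbabilisticAgent({"cheese": 0.7, "hawaii": 0.3})
    clone = copy.deepcopy(me)                            # copying the classical description is trivial
    print(me.choose(), clone.choose())                   # particular outcomes differ, dispositions do not
    ```

    Copying the record of probabilities duplicates the dispositions exactly, even though the two systems will go on to make different particular choices.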

    To me, that demonstrates that there is no reason to suppose the cloning of an artificial intelligence is impossible. The cloning of the precise quantum state of a biological brain probably is impossible, but in this case I would deploy my earlier arguments against brittle concepts of identity. I think the clone could be enough like me to consider it to share my identity. It would indeed be more like the me of that moment than the me of the day before.

    > while I’d taken as a given (but recognize that I haven’t shown) that the most general physical dynamics will always result in a ‘closer’ successor.

    So, what if the original you is destroyed by the cloning process but the teleported clone is only as good as if you were not destroyed? In this case, the teleported clone is a closer successor, since the original is no longer available. Is it still you, under the ‘closest’ successor heuristic? If so, it’s good enough to be you if you are destroyed, but not good enough to be you if you are not destroyed. So whether this clone, on a distant planet, perhaps, is really you depends on what happened to the original you elsewhere. I personally don’t find this conclusion to be plausible.

    Anyway, as I’ve mentioned, by focusing on the ‘closest’ successor you seem to be trying to avoid the problem of the splitting of identities. But might there not be continuous physical dynamic scenarios where your identity really does split into two, the way an embryo splits in two? Or like happens in the MWI? Or by gradual piecewise replacement of neurons with electronic devices which are designed to be duplicated? Or if we did destructive mind-uploading onto a substrate that explicitly records probabilities as ones and zeroes and then duplicated that? If so, then what’s the point of the closest successor heuristic?

  28. So, if there is a probabilistic computer system which we have prepared, I don’t see why we cannot prepare a second identical computer system.

    Oh no, in this case, we obviously can clone! Again, only unknown states—corresponding to unknown probability tables—can’t be cloned. So if we prepare some state, then certainly we can clone it, by just re-preparing.

    But in general, we won’t have some prepared state, but some unknown state given to us by nature; and again in general, we can’t get to know this state, either (if we don’t have infinitely many copies of it), since any measurement will ‘change’ it (if you want to talk this way; I’m just using it as a shorthand for now).

    One might imagine a piece of hardware being incorporated into the computer that exploits quantum mechanics to, when given an input of p, yield a value of 1 with probability p, and this could be the basis of adding as much genuinely probabilistic behaviour to the system as you like while keeping the state as a set of ones and zeroes which do not fall victim to the matrix multiplication argument you deployed in the earlier post.

    Well, again, in that case you’d know the state, and I never claimed that you can’t clone a known state. But in general, whether in billiards or brains, we won’t know the state, and don’t even in principle have any way of getting at it.

    So, what if the original you is destroyed by the cloning process but the teleported clone is only as good as if you were not destroyed?

    This case I would consider to be my death—with ‘closest successor’, I really mean something infinitesimally different from myself; otherwise, there’d always be some successor, perhaps even in some completely different person if they happen to be the closest to my mental state at its dissolution.

    But might there not be continuous physical dynamic scenarios where your identity really does split into two, the way an embryo splits in two?

    Well, this is really the case I’m trying to guard against, here: if the no cloning argument is right, then no, there is no such dynamic.

    I’m frankly not sure how this plays out in a many-worlds scenario; but of course, that one has problems regarding personal identity no matter what, which to me is one reason I find it unattractive. But let’s not go into quantum interpretations, here; for the argument, all I really need is some source of genuine probability—if you’re a proponent of a view in which that doesn’t exist (e.g. Bohmian mechanics), then you of course won’t be prepared to grant me that, but that’s at least something on which reasonable disagreement is possible.

  29. Hi Jochen,

    > Oh no, in this case, we obviously can clone!

    OK, so it seems we agree there is nothing ruling out the cloning of an AI (your comment on the previous article seemed to disagree with this). The issue is only whether we can clone a biological brain.

    > This case I would consider to be my death—with ‘closest successor’, I really mean something infinitesimally different from myself;

    Right. In this case I’m sorry to break it to you, but I think you’re probably already dead! And many many times over. External influences are changing the state of your brain every moment, often in ways more significant than infinitesimal.

    > if the no cloning argument is right, then no, there is no such dynamic.

    I don’t think so. The no cloning argument is an argument against reproducing an exact state. It doesn’t guard against a singleton smoothly and continuously dividing into two successors, neither of which are at all the same as the original but each of which has developed gradually and continuously from the original, like a cell dividing, and so each of which has equal claim to identity. Again, the two successors are not supposed to be quantum-particle level identical, but are more like two successors that are as similar to the original as you are to the you of yesterday.

    For the record, I’m a proponent of the MWI. I don’t find its problems with personal identity to be an issue as I see personal identity as a human construct and so don’t think it should have any sway in choosing which interpretation is most plausible. And anyway, as stated, I don’t actually have a problem with identity splitting as you do.

  30. OK, so it seems we agree there is nothing ruling out the cloning of an AI (your comment on the previous article seemed to disagree with this).

    Well, there the issue is whether we could achieve the requisite control over the state, but in principle, and if we carefully shielded it against random influences, then yes, I’d consider cloning possible. Although, cards on the table, I think that ultimately randomness is essential for a conscious mind.

    Right. In this case I’m sorry to break it to you, but I think you’re probably already dead! And many many times over. External influences are changing the state of your brain every moment, often in ways more significant than infinitesimal.

    Certainly not in classical physics: every state change is a continuous trajectory through phase space. And even in quantum physics, one can trace out the environmental influence, and be left with a positive map taking states to their successors—I’d be perfectly happy to admit such a state as my future self.

    I don’t think so. The no cloning argument is an argument against reproducing an exact state. It doesn’t guard against a singleton smoothly and continuously dividing into two successors, neither of which are at all the same as the original but each of which has developed gradually and continuously from the original, like a cell dividing, and so each of which has equal claim to identity.

    If you could do this, you could also clone—in fact, this operation can be written as first some time evolution producing a successor, and then a cloning operation. So either that which gets cloned can’t be a successor under my requirements, or isn’t possible.

    Or, taken the other way, the dynamics is in general reversible, so if your process were possible, then from me in state 1 we’d produce two successors in state 2, where state 2 is such that it could be reached from state 1 using the ordinary time evolution. But then, we could simply apply the inverse of this evolution, and end up with two entities in state 1, i.e. two clones. But we know we can’t clone, so we also can’t branch.

  31. Whoopsie, I bungled that up somewhere. But I guess it’s still readable, right? I don’t want Peter to have to play cleanup after me again…

  32. Hi Jochen,

    It’s still readable.

    > Although, cards on the table, I think that ultimately randomness is essential for a conscious mind.

    And I don’t.

    But even so, I don’t see why we can’t have the best of both worlds. As I said, we can have a machine which has a state encoded entirely by classical ones and zeroes, but where this state includes descriptions of the probabilities. If the machine has a device for producing genuinely probabilistic outcomes on demand according to a specified probability, then we can have classical state and ultimate randomness at the same time.

    > Certainly not in classical physics: every state change is a continuous trajectory through phase space

    OK. So now I will posit that for every imperfect clone there is a continuous trajectory through phase space that will take you from where you are now to the state of the clone, whether or not this trajectory is realised in actuality. This clone is probably more like you than you would be to the you of five minutes hence. If what is really important to you is state rather than continuity, then I don’t understand why it is important what path the physical particles have taken to get to a state that you recognise as identical to yourself.

    Take another example. Say I perform some form of advanced brain surgery on you that completely changes your personality and beliefs and memories. In this case, there is a continuous physical change from the brain state of you now to the you post-surgery. At every Planck-time interval during the surgery, changes in successive states are infinitesimally small. Nevertheless, in my view you have lost your identity and are in effect a new person.

    To my mind, whether another brain state is me is a question chiefly of how closely it resembles me. If the me of 50 years hence has very different views and personality and no memory of the me now (as is perfectly possible) then I am quite open to considering that to be an entirely different person. Even so, to give a nod towards continuity, I would view each iteration of me at one second intervals to be the same as the previous.

    (So, red is a different colour than green, but it is possible to transition from red to green by infinitesimal and insignificant steps such that each iteration can be considered effectively the same as the last.)

    So, in one way of looking at things there is some conservation of identity, and in the other there isn’t. I think the mistake is to think that there is a fact of the matter rather than holding identity to be no more than a useful heuristic.

    > in fact, this operation can be written as first some time evolution producing a successor, and then a cloning operation

    No it couldn’t, as I didn’t state that the two clones need to be identical to each other, only that each has continuously evolved from the parent state. Again, think of the splitting of an embryo as a rough analogy.

    > then from me in state 1 we’d produce two successors in state 2

    No, one successor in state 2a and one in state 2b, each of which is a successor state to state 1, and each of which can be reached by evolution of state 1, e.g. if we subject state 1 to a cloning/splitting operation.

    > But then, we could simply apply the inverse of this evolution, and end up with two entities in state 1, i.e. two clones.

    No, we wouldn’t. The time reversal would be to merge two clones (2a and 2b) into one person in state 1. Think of two embryos coming together to form one (as can actually happen in cases of genetic mosaicism).

  33. If the machine has a device for producing genuinely probabilistic outcomes on demand according to a specified probability, then we can have classical state and ultimate randomness at the same time.

    Yes, I agreed with that possibility in my last post; I’m not sure what good it’d be, but, going with the terminology of my post from the other thread, you’d have a machine in the state (p,1-p), which, if p is known, as it is in this case, would be perfectly well copyable. My main considerations are however on the case of randomness coming ‘from the outside’, i.e. the environment, which won’t in general be of a known (or even knowable) form.

    So now I will posit that for every imperfect clone there is a continuous trajectory through phase space that will take you from where you are now to the state of the clone, whether or not this trajectory is realised in actuality. This clone is probably more like you than you would be to the you of five minutes hence.

    This may very well be possible, but I don’t see that it’s a problem. Think of me (or the succession of instantaneous ‘me’s) as a four-dimensionally extended object with proper temporal parts, in analogy to an ordinary three-dimensional object. In just the same way that I wouldn’t consider a sawed-off leg as being part of the table anymore, I wouldn’t consider a clone a discontinuous jump distant from me to be part of that four-dimensional succession of mes, since it’s not part of that same continuum.

    Otherwise, what about another person just by random chance developing to a point close to me in this sense? Would you really want to hold that there is in that situation any ambiguity about who is me, and who is them?

    Say I perform some form of advanced brain surgery on you that completely changes your personality and beliefs and memories. In this case, there is a continuous physical change from the brain state of you now to the you post-surgery. At every Planck-time interval during the surgery, changes in successive states are infinitesimally small.

    Yes, that’s a fair point: in this sense, my notion of identity is almost certainly too strong. You might come up with less benevolent scenarios—putting me through a meat grinder is also a continuous transformation, but nobody would consider the resulting pink sludge to be ‘me’ in any real sense. It’s a sort of sorites paradox: at some point, changes that individually don’t suffice to change the identity of something may nevertheless accumulate to be utterly destructive. And frankly, I don’t know how to answer that.

    But I don’t believe I’m alone in suffering this problem—your view seems just as vulnerable to it: you’d generally hold that you’re the same now as in five minutes; but you also claim it’s possible for you to be someone different in fifty years. But of course, fifty years is also just a string of five-minute (or however long you’d take this sameness interval to be) intervals.

    No, we wouldn’t. The time reversal would be to merge two clones (2a and 2b) into one person in state 1.

    I think you misunderstand me. What must hold for both 2a and 2b to be valid possible successors of 1, is that there is a dynamics A: 1 –> 2a, and a dynamics B: 1 –> 2b, such that if you had 2a and 2b, you could apply the time-reversed versions A’: 2a –> 1, B’: 2b –> 1, yielding (A’ o B’): (2a, 2b) –> (1, 1). Then, if there were some dynamics C: 1 –> (2a, 2b), you could concatenate these processes to yield 1 –> (2a, 2b) –> (1, 1), and thus, produce a clone; of course, this would not be the same as applying C’. The only requirement for this is that both 2a and 2b are possible successors (and time-reversibility of the dynamics, given in both classical and quantum mechanics).

  34. Hi Jochen,

    > I’m not sure what good it’d be

    Well, if you think that true probability is necessary for consciousness, it would allow for consciousness while at the same time allowing for cloning, which might be useful for any number of reasons.

    > Think of me (or the succession of instantaneous ‘me’s) as a four-dimensionally extended object with proper temporal parts

    I’m perfectly happy to do that. Indeed that is more or less how I do think of identity. However, this view of identity is very amenable to splits, which you don’t like. A split in identity is just a fork of a tree or a river. The two branches can be seen as part of the overall structure.

    > I wouldn’t consider a clone that lies a discontinuous jump away from me to be part of that four-dimensional succession of mes, since it’s not part of that same continuum.

    Which to me emphasises the point that you regard continuity as being more important than state for identity, while I regard state as being more important. However, where for you the physical continuity of the brain is disjoint from that of a clone, I think of identity as something that persists at a higher level (that of patterns), and I see identity as branching much as a software project can fork.

    > Otherwise, what about another person just by random chance developing to a point close to me in this sense?

    Good question. So, if there were another person who by random chance happened to have all the same knowledge, all the same memories, all the same relationships to other people as I have, then I would consider that to be the same person. That is pretty much impossible on earth, as such a person would have had to occupy the same places at the same times as I did, but it is possible that there are such instances of me if space is big enough. Indeed, if space is infinite (as I suspect it may be) then there are an infinite number of such instances at very great distances from here. Since all of these instances see and think the exact same thing at every moment, I have no way of knowing which one of them I am. Indeed I don’t think this is even a meaningful question. I am all of them. Only once any two instances start to diverge from each other can they be distinguished.

    Similarly, the MWI can be viewed either as Many Worlds constantly dividing or as Many Worlds always being distinct, evolving in lockstep and at some point starting to evolve differently. In the former view, one instance of me becomes two, and in the latter two instances of me start to diverge. I don’t see these two views as being different in any meaningful way. Whatever has precisely my mental state is simply me, because I define myself as my mental state and its evolution.

    > But of course, fifty years is also just a string of five-minute (or however long you’d take this sameness interval to be) intervals.

    Yes, I acknowledged this objection explicitly. And my response is to reject the idea that there is a fact of the matter on the question of identity. It is only a heuristic and so we should not be surprised or troubled by the fact that it falls apart in edge cases.

    If we want to persist with this idea at all (which it seems we must), we have a choice between a very precise but brittle concept which errs on the side of failing to preserve identity when it should be maintained or a more flexible, accommodating concept which errs on the side of being ready to ascribe identity where it is dubious.

    On a brittle concept, I think we are already dying and being replaced every moment. I see this as a more severe problem than a flexible concept which allows for teleportation and uploading and so on.

    Will I survive uploading? Fundamentally, the question is meaningless in itself; the answer depends only on what you mean by “I”. Do I think it is reasonable to prefer a concept of identity which allows for uploading? Yes.

    > What must hold for both 2a and 2b to be valid possible successors of 1, is that there is a dynamics A: 1 -> 2a, and a dynamics B: 1 -> 2b

    There is another way of looking at it though. Suppose there is a dynamics 1 -> 2a + 2b. So, say 2a is the state where you think “Woah, I’ve just been cloned, and I see my clone to the right of me!” and 2b is the state “Woah, I’ve just been cloned, and I see my clone to the left of me!”. So you can think of the two clones 2a and 2b as collectively forming state 2, being a state of affairs which has resulted from the smooth dynamical evolution of state 1 (as in a splitting embryo).

    However, this state 2 consists of two minds in two bodies, so from this point on it is more natural to think of these as two distinct states.

    The cloning operation itself changes you, but no more destructively (ex hypothesi) than any other interaction with the external world. So, you couldn’t do a time reversal of state 2a to get state 1 without state 2b acting on it (merging), just as you can’t get a time reversal of a ground up person to a whole person without the time-reversed action of the meat grinder (ungrinding).

  35. Indeed that is more or less how I do think of identity. However, this view of identity is very amenable to splits, which you don’t like.

    Well, I’m not sure about ‘very amenable’—to the best of my knowledge, I’ve never split, and I don’t think anybody I know has, either. So it seems that splits—at least of the observable, whoops-there’s-a-clone-of-me sort—are actually very rare, to the point of none having ever occurred that I know of. So what grounds do we have for thinking they’re possible? I’m just pointing out that, on purely physical (or maybe information-theoretical) grounds, there may not actually be a way to achieve this splitting leading to two copies of ‘you’ in any meaningful sense.

    So, if there were another person who by random chance happened to have all the same knowledge, all the same memories, all the same relationships to other people as I have, then I would consider that to be the same person.

    Except ‘all the same’ is a much too strong requirement, isn’t it? Even from now on to a moment later, it wouldn’t hold. So you need to invoke some notion of ‘close enough’ in order to tell who’s a valid successor. But that of course invites in some serious issues of vagueness—how close does it have to be? For instance, if you were blown to smithereens by a freak lab explosion, would somebody who’s almost like you, with similar memories, knowledge, etc. be enough? Somebody whose only difference is, say, that he has six fingers on his left hand? (Certainly, in a large enough universe etc. etc.)

    And my response is to reject the idea that there is a fact of the matter on the question of identity. It is only a heuristic and so we should not be surprised or troubled by the fact that it falls apart in edge cases.

    See, to me, both this and a splitting identities view invite so many problems and paradoxes that it seems worthwhile to look for an alternative, or perhaps more accurately, try and reevaluate the assumptions that make us believe in the possibility of splitting etc. For instance, if identity is nothing but a heuristic, why should I care about what happens to my distant future successor? Why would I pay into a pension fund if ‘I’ am never going to benefit from it? Is it just evolutionary expediency that makes me care about my future self? Or perhaps worse, what about punishment? How should one justify punishing somebody if there’s no sense in which it was ‘them’ that committed the crime?

    And in a splitting twins scenario, the problems grow even worse: if somebody offered to clone you, give one clone a million dollars, and kill the other—not instantaneously, but, as we might suppose, painlessly, and perhaps even without the clone knowing about it beforehand—then if it’s genuinely the case that each of these successors is equally well you, you ought to gladly take the offer. But of course, what will happen—or at least what is the only thing I can see happening—is that you either get a million dollars, or die. And if this still seems appealing to you, consider being cloned a million times, with only one clone receiving the million, and the rest being killed—let’s say an hour after the cloning process—swiftly, painlessly, and without any of them knowing beforehand. Would you truly take that offer?

    Suppose there is a dynamics 1 -> 2a + 2b. So, say 2a is the state where you think “Woah, I’ve just been cloned, and I see my clone to the right of me!” and 2b is the state “Woah, I’ve just been cloned, and I see my clone to the left of me!”. So you can think of the two clones 2a and 2b as collectively forming state 2, being a state of affairs which has resulted from the smooth dynamical evolution of state 1 (as in a splitting embryo).

    Well, one consistent possibility would be to deny that either of them is me—if neither could have been arrived at by evolving my state at a given point in time forward, then they’re simply not me. But I’m not sure one has to go that far. I don’t think (at least on a classical level) that such a process actually exists—certainly, the states you describe could also be arrived at from my original state: by some clever trickery (say, using a mirror and a bit of suggestion), I could be made to think to myself, “Woah, I’ve just been cloned, and I see my clone to the right of me!”, or the other way around. So, I’d suspect that there’d always be a transformation taking 2a to 1, and 2b to 1. (I know this is true in quantum mechanics as long as you don’t want to generate any entanglement between the two ‘copies’ of 1 you end up with; so it should be true in classical mechanics without qualification. But I have to think about it.)
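    (Roughly, the sense in which I know it: for any two pure states of the same system there is some unitary taking the one to the other, so there exist

    $$U_A\lvert 2a\rangle = \lvert 1\rangle, \qquad U_B\lvert 2b\rangle = \lvert 1\rangle,$$

    and applying them in parallel gives

    $$(U_A \otimes U_B)\,\lvert 2a\rangle\otimes\lvert 2b\rangle = \lvert 1\rangle\otimes\lvert 1\rangle,$$

    a product state, i.e. no entanglement between the two copies.)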

  36. I know this is true in quantum mechanics as long as you don’t want to generate any entanglement between the two ‘copies’ of 1 you end up with

    Of course, this would in general necessitate knowing the state 1… Hmm.

  37. Hi Jochen,

    Before I go any further I just wanted to note how much I’m enjoying the conversation and that I hold your reasoning and knowledge in very high regard.

    > to the best of my knowledge, I’ve never split, and I don’t think anybody I know has, either

    Certainly nobody has ever been teleport-cloned or mind uploaded, but if they were, then the four-dimensional view of identity would be quite useful in giving an account of what has taken place.

    Nevertheless, whether splitting has actually taken place depends on what you count as a split and which entities you consider to have meaningful identities. On the MWI, you have split. In an infinite universe where you are considered to share identity with all identical instances, you have split. If you are an identical twin and can be considered to have inherited an identity from the first zygote, you have split. Nations split. Cells split. Software projects split. Split-brain patients are sometimes (perhaps inaccurately) portrayed as having two distinct personalities or minds, each of which would have equal claim to the identity of the patient pre-callosotomy.

    The four dimensional view of identity can be used to give an account of how to deal with identity in each of these real cases as well as the hypothetical thought experiments about whole mind splits.

    > Except ‘all the same’ is a much too strong requirement, isn’t it?

    I’m not assuming the quantum state must be precisely the same, but moment to moment the things I described do hold pretty constant (a sixth finger would not be particularly significant as I do not invest much of my concept of self in the number of fingers I have). I don’t think it is possible for there to be another person on earth who is sufficiently like me to actually be me, because to be so like me that person would have had to be in the same places as me at the same times.

    > would somebody who’s almost like you, with similar memories, knowledge, etc. be enough

    Whether I have been blown to smithereens wouldn’t have much to do with it. Regardless, if there were such a person then it would be enough, but the memories and knowledge and so on would have to be very similar indeed (effectively the same) for me to count it as the same person.

    Given what I said about needing to be in the same place and the same time to reach this threshold of similarity, I don’t think such a person could exist on Earth unless they somehow materialised fully formed as an adult. If this were to happen, then yes, I would consider them to be me if I could be satisfied that they were similar enough. This would be difficult to establish in practice. And of course we would from that moment diverge so that (assuming I have not been blown to smithereens after all) the two of me after this materialisation are no longer identical to each other even though they both inherit the identity of the me prior to materialisation. By analogy: though a river may fork, and each fork be equally a part of the same river, the two forks are not identical to each other.

    > Is it just evolutionary expediency that makes me care about my future self?

    Yes, I think so.

    I think, in general, the “why should I care about X” question is often a category mistake. Forget asking why should you care about your future self — why should you care about your present self? You care about your present self (and your future self) because you are programmed to do so by evolution. And that is perfectly fine, there’s no reason to do anything or want anything except in light of some basic preprogrammed desire or motivation.

    (I have a similar view of morality, by the way. I don’t think there is any purely rational justification for morality. Morality doesn’t need to be justified. I am moral (by which I mostly mean motivated by concerns other than self-interest, i.e. compassion) because I want to be moral. I want to be moral because I’m just built that way. It’s no more irrational than wanting to be self-centred.)

    > How should one justify punishing somebody if there’s no sense in which it was ‘them’ that committed the crime?

    For me, punishment is only ever justified on consequentialist grounds. Will the punishment (or a public commitment to punish) deter undesirable behaviour in future? Then punish. The retributive notion of someone deserving punishment, so that punishment is a good end in itself, is not one I subscribe to.

    > if it’s genuinely the case that each of these successors is equally well you, you ought to gladly take the offer

    If the murder is painless and without knowledge, then perhaps I should. If I’m an emotionless Vulcan at least. In practice, I rather think the whole thing would be too traumatic and troubling to be worthwhile. Emotions are not always grounded in rationality. There’s also the question of how I could be satisfied that everything would proceed as promised. As such, in reality, I would never accept such a deal.

    > one consistent possibility would be to deny that either of them is me—if neither could have been arrived at by evolving my state at a given point in time forward, then they’re simply not me

    But, ex hypothesi, both instances *have* been arrived at by evolving your state at a given point in time forward. Granted, only by interaction with some sort of cloning apparatus, but this need be no different from your interaction with external objects every day.

    If you want to be very strict about it, then you could maintain that you are not identical to either of the clones but to the set of clones, as it is this set and not an individual clone which has evolved from the prior state. But now you’re distributing your identity over two different persons which I think is worse than just accepting that your identity has split.

    > I don’t think (at least on a classical level) that such a process actually exists

    I’m only highlighting a logical possibility. We’d probably have to go quite far into the realms of absurd science fiction thought experiment to make the process more concrete. MWI is perhaps the most plausible example. Another idea might be some kind of technology that causes all the neurons to split and then somehow separates each set from each other and reconnects them all to form two brains continuously from one. I mean, it’s stupidly absurd but I’m just trying to establish a point of principle. In a more realistic mind-uploading scenario there has been a continuous process to produce both the state of the brain post-upload and the state of the uploaded brain. The no-cloning argument doesn’t rule this out because it can be viewed as analogous to such a split — the original brain does not have to be left undisturbed but neither does it have to be completely destroyed.

    > So, I’d suspect that there’d always be a transformation taking 2a to 1, and 2b to 1

    There is probably always a transformation taking any state to any other state. For instance, there is presumably a transformation whereby my atoms could be rearranged to form a Velociraptor.

    But if you’re talking about reversibility, what you have to mean is what happens when we turn back the clock on the state of the world as a whole. That includes reversing whatever happened to cause the identity split, meaning that a time reversal of the world entails having the clones merge. If you reverse only the state of the brain without reversing the state of the world, then it’s not at all clear what you’re going to end up with. Certainly not state 1. To reverse time and rebuild someone who has gone through a meat grinder, you’re going to need to reverse the meat grinder too.

  38. Before I go any further I just wanted to note how much I’m enjoying the conversation and that I hold your reasoning and knowledge in very high regard.

    The feeling’s mutual; I’m getting a good dose of new perspective out of this exchange! Makes me question my assumptions, which is always a good thing.

    Whether I have been blown to smithereens wouldn’t have much to do with it.

    Well, the idea is to eliminate any other, closer successor—such that the only viable continuation would be somebody not-all-that-much-but-still-kinda like you (not a very precise term, I grant). The question is, would you continue your experience as that other person—or would your experience end with being blown to bits? My intuition would be the latter—that’s just it, done with, and over.

    If this were to happen, then yes, I would consider them to be me if I could be satisfied that they were similar enough.

    I’m not even sure I understand the sentence ‘I would consider them to be me’—to me, this seems like a contradiction in terms. I mean, there you stand, your particular view on the world tied to particular indexicals: it’s here and now, not over there, where your clone stands. You can ‘point at’ that clone, have them as an intentional object of your thoughts in the way you can have, say, your coffee pot in your thoughts—as something internal, something merely referred to or represented by your mental state—as opposed to the way you yourself appear in your mind: as the thing that does the representing, that has the thought, that appreciates its intentionality. You know your own subjective state of mind in a way completely unlike the way you know that of your clone, which can only be by third-personal data. And still you would draw no distinction between yourself and that clone, between the thinker of your thoughts (or perhaps more accurately, lest we get ensnared in homuncular traps, the sum of those thoughts) and an object thereof? I frankly can’t see how this is a cogent proposition.

    Forget asking why should you care about your future self — why should you care about your present self? You care about your present self (and your future self) because you are programmed to do so by evolution.

    Well, I would say I care about my present self because of past experience: hunger is unpleasant, but can be alleviated by eating, and so on. Thus, I know how to bring my present self into a pleasant state, and strive to do so; that this pleasantness is ultimately due to evolution, i.e. that eating only feels the particular way it does because if it felt bad, then my ancestors wouldn’t have made it, is secondary (of course, I’m using the teleological language here merely as shorthand).

    But in a sense, this learning from past experience is something that becomes difficult to sustain under a conception of identity-as-fiction: in order to be causally efficacious, it seems that it needs to be my personal past experience that I refer to; failing that, I can’t see how to justify any judgments based on this experience, which undermines almost everything that we take ourselves to know.

    Anyway, under a conception of sustained personal identity, I think it is simple to forge a reason why I should care about the future: past experience tells me that certain behaviours now may lead to a more pleasurable experience in the future. But without a continuing thread of identity, neither would past experience tell me that (since it wouldn’t be my past experience), nor would I be justified in experiencing any future gain of pleasure, since I wouldn’t be around in the future.

    If the murder is painless and without knowledge, then perhaps I should. If I’m an emotionless Vulcan at least.

    See, that’s kinda my question: in what sense would that be the logical option? To me, in the case of a million clones, the logical expectation would be that with an overwhelming chance, I’ll die, and then nothing. I mean, suppose that before the experiment, a random number between one and a million is chosen—say 763,289—which designates the number of the clone to get the million.

    So now say the experiment is performed. You’re anaesthetised, the cloning operation is performed, and afterwards, every version of you is brought into a room, where they are woken up. Upon waking, the first thing each of them sees is the number they were assigned written on the ceiling.

    To me, it seems irresistible to conclude that your experience would be the following: with an overwhelming probability, you close your eyes and fall asleep, and then, upon opening them again, see some number—say 1—and are subsequently killed. End of story.

    With some tiny probability, you will open your eyes, see the number 763,289, and be handed a million bucks to enjoy. One of these stories takes place; your experience will only be one of all of those possible experiences. So to me, the logical conclusion would be that with an overwhelming chance, I will wake up in one of the rooms that don’t feature the number 763,289 on the ceiling, and subsequently die; hence, I wouldn’t take the proposal, without that being (to me, at least) an emotionally-influenced decision; I take myself to be merely acting rationally here.

    Or take the following additional variant: what if, unbeknownst to you pre-cloning, the clones are told that they will die? Wouldn’t you think that the thoughts of those clones that die would be along the lines of ‘Fuck, I shouldn’t have taken the offer’? Would they be wrong in thinking this?

    But, ex hypothesi, both instances *have* been arrived at by evolving your state at a given point in time forward. Granted, only by interaction with some sort of cloning apparatus,

    You’d actually need a bit more than just the cloning apparatus: since I have a certain number of atoms, occupying a certain number of degrees of freedom in state space, you’d need additional material to produce my pseudo-clones. But then, I think, we’re getting into serious trouble regarding the continued identity of either myself, or the additional material—and probably of both.

    Consider a collision of two celestial bodies, leading to both merging into a hot ball of molten stone, and later separating again, say due to tidal gravitational or centrifugal forces: would you say that either resultant is still ‘the same as’ one of the original bodies? That both could be considered the descendants of one of them? Or would you rather be inclined to ascribe to each of them a separate, new identity?

    Now consider the case of cloning me by bringing me into contact with enough raw material and producing by some mixing and re-mixing two successors that you now claim both to be ‘me’. What happened to the identity of the chunk of matter that went into the production process?

    Indeed, since there’s no ethics commission for thought experiments, suppose that raw material was another sentient person. With as much right as you claim 2a and 2b to be my future states, they could be counted as the other person’s—so, something has to give. And I think the most reasonable option is that in such a case, neither of us continues, by symmetry—every other option seems to be merely arbitrary.

  39. Hi Jochen,

    > Well, the idea is to eliminate any other, closer successor

    Sure. But since I don’t subscribe to the closest successor theory I’m just explaining that on my view it doesn’t matter if I have been blown to smithereens.

    > I’m not even sure I understand the sentence ‘I would consider them to be me’—to me, this seems like a contradiction in terms.

    I think I anticipated this confusion and I tried to explain it with my analogy to the forking river, each branch of which is not identical to the other.

    So, when I say the other guy is ‘me’, I mean he is just as valid an inheritor of the identity of the past me as I am. But from this point forward we are not identical to each other. We are separate branches of a river that has forked.

    However, from the point of view of my past self, before the materialisation, I have two futures and should act in the best interests of both. I could, for instance, sign a contract to the effect that if another me materialises then my wealth should be divided equally between the two of me (hmm, perhaps I should find a lawyer and draw something up now just in case, eh?).

    > Well, I would say I care about my present self because of past experience: hunger is unpleasant

    OK, but then I can rephrase my question: why do you find hunger unpleasant? Or why do you care if you experience unpleasant sensations? So, certain things don’t really need rational justifications because we’re just built that way.

    I think that we need the ‘fiction’ of personal identity in order to behave adaptively. So I see the concept of selfhood as necessary for any evolved sentient being and not as something that is there in underlying reality.

    > See, that’s kinda my question: in what sense would that be the logical option?

    I agree with your analysis, but I think the problem is in how we weight the different outcomes. I more or less see instantaneous, unexpected, painless death with no suffering caused to loved ones as a neutral outcome. It’s not good or bad, so logically it is outweighed even by the tiny chance of winning the jackpot.
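    Just to put numbers on that weighting (made-up utilities, purely for illustration): call declining the offer 0, instant painless death 0 as well (the neutral outcome), and the jackpot some U > 0. Then, treating the million-clone case as a lottery in which I end up as a randomly selected clone,

    $$E[\text{take}] \;=\; \frac{1}{10^{6}}\,U \;+\; \frac{10^{6}-1}{10^{6}}\cdot 0 \;=\; \frac{U}{10^{6}} \;>\; 0 \;=\; E[\text{decline}],$$

    so on these (admittedly unusual) valuations even the tiny chance of the jackpot tips the balance.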

    This is perhaps an unusual view of the cost of death, so it’s probably best not to get into it. If we instead changed your thought experiment so the clones were shipped off to Mars against their will to seed a new colony, then I would certainly think it a bad idea to take the deal, and indeed I discussed this in a blog post linked below.

    http://disagreeableme.blogspot.co.uk/2012/06/essential-identity-meet-your-future.html

    > would you say that either resultant is still ‘the same as’ one of the original bodies?

    I think that’s one of the edge cases where identity falls apart. I would be inclined to say that you can pick whatever option you like with about as much justification.

    But consider two waves passing through each other. At a certain point, they might add up to make one big wave, but only for an instant. Then this combined wave splits and the two original waves continue on their way. In this case, since a wave is a pattern rather than a specific collection of stuff, I would say each wave preserves identity with its former self after the separation.

    > What happened to the identity of the chunk of matter that went into the production process?

    It depends what object you think of when you say “chunk of matter”. If you mean a collection of atoms, then that collection continues to exist. If you mean a specific object with a specific shape and so on, then it is destroyed.

    > With as much right as you claim 2a and 2b to be my future states, they could be counted as the other person’s

    Well, only if you consider physical continuity to be key to identity, as you do. I consider similarity of mental states to be key to identity, so this ground up matter would not be a future state of any person to me.

    So, the problem is that according to your definition, it seems that 2a and 2b either collectively or separately count as successors to state 1. If something has to give, it seems to me that the onus is on you to point out what that might be.

  40. So, when I say the other guy is ‘me’, I mean he is just as valid an inheritor of the identity of the past me as I am. But from this point forward we are not identical to each other. We are separate branches of a river that has forked.

    However, from the point of view of my past self, before the materialisation, I have two futures and should act in the best interests of both.

    I think this is what trips me up. For me, identity is a transitive, reflexive, symmetrical relation, and hence, if x is identical to a, and x is identical to b, then a is identical to b. So if a is me and b is me, then it would follow that a is b. I simply don’t understand how this could not hold, and the resulting relation still be one of ‘identity’ in any meaningful sense.
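    (Spelling the step out, since this is precisely where I get stuck:

    $$x = a \;\wedge\; x = b \;\;\Rightarrow\;\; a = x \;\wedge\; x = b \;\;\Rightarrow\;\; a = b,$$

    first by symmetry, then by transitivity—which is just what the forking picture seems to deny.)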

    I think that we need the ‘fiction’ of personal identity in order to behave adaptively.

    I don’t think this is necessary. We could care about future ‘selves’ in the same way as we care about our children, without believing that we are them; i.e. we could believe that we cease to exist every night when going to sleep, and in the morning, a different entity—albeit sharing most of the matter we’re composed of, and inheriting the greatest part of our memories and experiences—rises. We nevertheless might have a stake in that entity’s well-being, just as we have in our children; we could be interested in (or maybe even hard-wired to care about) the continuation of our knowledge, for example, our insights and experiences. Our successors would be our ‘memetic’ offspring, in the same way that our children are our genetic offspring.

    That just doesn’t happen to be the way things are. I set my alarm clock in the evening because I don’t want to wake up late; I do my work today so I don’t have to do it tomorrow; and so on. And in most cases (all cases we’ve encountered so far, I believe), there is absolutely no issue with the question of how it is that the man that rises in the morning is the same as the one that went to bed at night—it’s not any more difficult than that the keyboard on my desk is the same one today as the one I tried getting clear on my views with yesterday: it’s just physically the same thing, perhaps give or take a few dust motes and breadcrumbs.

    But then, there came clever people inventing thought experiments with teleporters and clones and brains in vats and whatnot, and it seems that this unproblematic criterion of identity suddenly goes away. I merely think that maybe it’s a bit too early to give up yet—as I do with most forms of eliminativism. Too many people are willing to give up and incur the accompanying metaphysical cost when we haven’t even really tried to get clear about things and stand our ground. Reconciling subjective experience—qualia—with physical reality is hard; finding grounds for meaningful free will in a natural world is hard; finding a good criterion of personal identity is hard. In fact, all of those may turn out to be impossible, and the notions they’re based on might turn out to be fundamentally misguided (I believed so for a long time, actually). But nobody said this was going to be easy, and the fruits you have to climb to the very top of the tree to get are sometimes the juiciest.

    OK, after this little rally-the-troops rant, back to the issue at hand… 😉

    I more or less see instantaneous, unexpected, painless death with no suffering caused to loved ones as a neutral outcome. It’s not good or bad, so logically it is outweighed even by the tiny chance of winning the jackpot.

    This is perhaps an unusual view of the cost of death

    Yeah, I’d say it probably is. If one thinks about the things people endure, the hardships they incur, the odds they overcome just to go on living a little longer, it’s probably safe to say it isn’t going to be widely shared.

    That said, I’m very open to the idea of there being something like a ‘fate worse than death’. There certainly are situations in which I think death would be preferable, and given the fact that we routinely enjoy activities that involve at least some small risk of dying, I think most people (although they might avow the opposite) are at least implicitly comfortable with the idea that some things are worth (the risk of) dying for.

    But still, to me, a neutral outcome would be nothing much happening—afterwards, I am in much the same state as I am right now. I’m not constantly elated at the prospect of going on living another second, but I’m also far from despairing. All in all, and at least at the moment, living is certainly preferable to dying.

    But I think it’s more interesting that you seem to agree with the idea that you’ll only get to experience one of the outcomes—I would have thought that’s anathema to the splitting-identities idea, because then, it seems to me that there’s a further fact that picks out you among all the clones. Or, in other words, if you wake up and see the number 12, what makes it so that it’s you who sees that number, and not one of the others? Does the universe just toss a coin regarding which clone gets to carry the torch?

    Well, only if you consider physical continuity to be key to identity, as you do.

    Well, it was you who proposed the 4d-splitting as a way to have multiple versions of me even if cloning is disallowed; I’m just pointing out that I don’t see that any of those multiple versions would in fact be me. Otherwise, I’ve already agreed that my notion of identity is too strong in some cases—re the meat grinder. But that’s really a different problem: what I’ve been claiming is that it’s a necessary condition, effectively, but fails to be sufficient in some cases. So, cloning is disallowed because the necessary condition—that there is some physical dynamics producing the clone—fails to be fulfilled; but this doesn’t imply that the forking leads to multiple copies of me.

    So it’s consistent for me to hold that in such a case, I simply cease to exist—and in view of the fact that the additional material needed could come from another person, I think it’s the only consistent stance. Logically, the four options in such a case are that 2a and 2b are both my successors, both the other person’s, one a successor of each of us, or none a successor of either. The first three would require some form of symmetry breaking that I can’t see any grounds for; so I go with the fourth, albeit acknowledging that I don’t precisely know what principle of identity is violated in this case. I can think of a few options, but I’d have to study possible splitting dynamics in more detail to say anything definite.

  41. Hi Jochen,

    > For me, identity is a transitive, reflexive, symmetrical relation, and hence, if x is identical to a, and x is identical to b, then a is identical to b

    It’s not that simple for entities that change in time. The you of right now is not strictly identical to the you of yesterday. These are two snapshots of you at different points in time, and so different parts of a four-dimensional you. The two parts are not the same *parts* as each other, but because they belong to that same four-dimensional identity we say they share an identity, even though, viewed as parts, they are not strictly identical to each other.

    Again, it’s like a river that may fork and reform and so on. Each fork is still the river, so it shares the identity of the river. But each fork is not the same as each other fork.

    So if my identity is split by some kind of cloning operation, though each copy will inherit the identity of the self before the split (so that self has two futures), after the split they can be seen as separate branches with distinct identities when viewed as branches. So I can see a clone of me and have my own thoughts about him to which he is not privy, while recognising that he is just as much a continuation of the pre-cloning me as I am.

    > I merely think that maybe it’s a bit too early to give up yet

    I’m not quite giving up. I am putting forth a particular view of identity. But I think it’s wrong to expect that view to be robust in all edge cases, because there is no reason to think it has any meaning as far as underlying reality is concerned.

    > If one thinks about the things people endure, the hardships they incur, the odds they overcome just to go on living a little longer

    I said I wasn’t going to get into this because it’s a bit off-topic, but I’ll just note that fear is a powerful motivator. People behave as they’ve been wired to behave by evolution, not because dying is intrinsically bad.

    > But I think it’s more interesting that you seem to agree with the idea that you’ll only get to experience one of the outcomes

    That’s not quite how I view it.

    Before the cloning, I think you have a future for each of the clones. But after the cloning, each of the clones experiences only one of those futures. I think the right way to make decisions in this situation is to think of it as if you will end up being a randomly selected clone (even though you will end up being all of them). This is also how we should make decisions in MWI. If the outcome of a 50/50 quantum-random experiment determines whether I get a big payout or am shipped off to Mars against my will, I won’t take the deal even though I believe both outcomes will happen to me, because I will experience this as being the same as the case of having a 50% chance of being shipped to Mars, which is unacceptable to me.

    > Does the universe just toss a coin regarding which clone gets to carry the torch?

    So, no. The appearance of randomness is a subjective illusion, as is the illusion of randomness on the MWI (sorry to keep bringing this up but it is a very convenient plausible real-world case where cloning actually happens).

    > So, cloning is disallowed because the necessary condition—that there is some physical dynamics producing the clone—fails to be fulfilled

    But I don’t see how it fails to be fulfilled. The meatgrinder is physical dynamics acting on the brain. The cloning apparatus (and additional matter, which might just come from food in the normal way — I didn’t say this process had to be fast, it could take years) is physical dynamics acting on the brain.

    To draw a distinction between these cases and the normal evolution of the brain, your concept of identity seems to require that the brain be isolated from its environment. But it isn’t. Cosmic rays and radiation are interfering with the precise quantum state of your particles every second. Material extracted from food gets into your brain from your blood stream. Signals from your nerves originating in events in the outside world affect your brain state too. And of course we have brain surgery and brain injury for more significant changes.

    The idea of the brain as a black box which evolves according to its own dynamics only is not tenable. So having an external influence such as a meat grinder or a cloning machine interfere with the brain is not a disruption of its continuous dynamics unless you can explain what it is that has destroyed the identity. In the case of the meat grinder, it is the scrambling of all the neural connections that together conspire to produce a mind that has certain memories, beliefs, desires and so on. But, ex hypothesi, the cloning machine performs no such scrambling and your mental attributes are left relatively undisturbed (say, as undisturbed as they are by the interference of cosmic rays).

    > and in view of the fact that the additional material needed could come from another person, I think it’s the only consistent stance

    I really don’t think that makes any difference. The food you eat could come from another person too. So it isn’t the material that makes you you, it’s the pattern it is arranged in.

    > The first three would require some form of symmetry breaking that I can’t see any grounds for

    Where’s the symmetry breaking in 2a and 2b being both your successors? The fourth option, that you have no successors, seems to me to be a case of symmetry breaking, since by the criteria you have outlined for having successor states, both 2a and 2b would seem to me to be valid successor states (having been produced by continuous evolution of state 1 according to some dynamics, while more or less maintaining your various mental attributes, as opposed to say meat grinding).

    > albeit acknowledging that I don’t precisely know what principle of identity is violated in this case.

    Well, fair enough.

    But this leads me to wonder if you are reasoning a little from wishful thinking. You don’t want it to be possible to split an identity, because that would pose difficulties. As such, you seem to be determined to find reasons to deny that it is possible, and where you can’t find reasons you assert that there must be a reason though you can’t see what it is.

    This attitude has led you to disbelieve in the MWI, and to think that mind uploading or teleportation would have to destroy the original. As an exercise in open-mindedness, I would encourage you to think about how you would deal with identity in hypothetical cases where your objections fail.

    What if MWI is true?
    What if (probably counter-factually) quantum mechanics plays no significant role in brain information processing and so we could be cloned as easily as a classical computer?
    What if it turns out we are all inside a computer simulation, and this simulation is forked, each instance evolving differently from that point on?

    Are each of these scenarios really totally inconceivable? Is there any possibility at all that you are wrong that identity cannot split? If there is a chance, however small, that identity could split in some possible world, then it would be useful to have a concept of identity that can give a coherent account of the splitting. And that’s what I’m trying to do.

  42. It’s not that simple for entities that change in time. The you of right now is not strictly identical to the you of yesterday.

    But there is, at least plausibly, something that is continuous with that me of yesterday; and that’s what the identity relation applies to. Take a lump of green putty: I can pound it and distort it, and nevertheless, it will remain that same piece of green putty throughout those transformations, so it is identical qua being that piece of green putty. But if I, e.g., change its color, it will cease to be a piece of green putty, and hence, will in particular cease to be that specific piece of green putty; likewise, if I freeze it, and shatter it, or if I separate it in two pieces, and so on.

    ‘Identity’ does not necessarily imply ‘the same in all respects’, merely ‘the same in all relevant aspects’—that is, contingent properties may change, but there are certain necessary properties by virtue of which a thing is the thing it is, and which, if changed, result in that thing no longer being that thing. The challenge is in identifying these properties. Thus, even for a temporally evolving object, my requirements remain reasonable—as long as only the contingent properties change, the thing remains the thing it is; and once the necessary properties change, it ceases to be what it was.

    Before the cloning, I think you have a future for each of the clones. But after the cloning, each of the clones experiences only one of those futures.

    But how is that not flatly contradictory? Either you’ll be clone1 and clone2 and clone3 and…, or you’ll end up as clone1 or clone2 or clone3 or… These are mutually exclusive states of affairs, but you seem to argue that we must consider both. In particular, you seem to be saying that you’ll have to make a decision before the cloning as if the and-case will come to pass, while actually, the or-case is going to occur.

    But I don’t see how it fails to be fulfilled. The meatgrinder is physical dynamics acting on the brain. The cloning apparatus (and additional matter, which might just come from food in the normal way — I didn’t say this process had to be fast, it could take years) is physical dynamics acting on the brain.

    Well, I meant here a cloning process in the sense of me being in some state, and there coming into being two exact copies of me in that exact state, which is what is prohibited by the no-cloning theorem; I’ve been trying to refer to this as ‘cloning’, while using the word ‘forking’ for your proposal of a process ending up with two distinct entities, also distinct from the original, that nevertheless seem to have equal claim to being me. And I’m not viewing the brain in isolation in any way.
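    (For reference, the textbook form of that prohibition, which is nothing more than linearity: suppose some unitary U copied an arbitrary unknown state onto a blank system,

    $$U\bigl(\lvert\psi\rangle\otimes\lvert 0\rangle\bigr) = \lvert\psi\rangle\otimes\lvert\psi\rangle \quad\text{for all }\lvert\psi\rangle.$$

    Then for $\lvert\psi\rangle = \alpha\lvert 0\rangle + \beta\lvert 1\rangle$, linearity yields $\alpha\lvert 00\rangle + \beta\lvert 11\rangle$, whereas cloning would require $\alpha^{2}\lvert 00\rangle + \alpha\beta\lvert 01\rangle + \alpha\beta\lvert 10\rangle + \beta^{2}\lvert 11\rangle$; the two agree only when $\alpha$ or $\beta$ vanishes, i.e. only for known, fixed basis states.)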

    I really don’t think that makes any difference. The food you eat could come from another person too.

    Well, but that wouldn’t make me them, would it? Even if I ate them whole. But in the forking-case, we have two persons going in, and two persons coming out; the question which of the two output-persons is whose successor is what I think doesn’t have any defensible answer. Why would they be my successors, when the other person underwent the exact same process? Any claim towards that could be equally well made to support the proposition that they are both the other person’s successors, with me thus being dead and gone.

    But this leads me to wonder if you are reasoning a little from wishful thinking. You don’t want it to be possible to split an identity, because that would pose difficulties.

    The thing is, to me, those seem like bona-fide paradoxes; and generally, I do tend to suppose that the occurrence of paradox is due to a fault in reasoning. So, that’s why I’m looking at whether the assumption that there is some physically realizable way to produce two copies of ‘the same’ person is actually defensible.

    This attitude has led you to disbelieve in the MWI

    No, my disbelief in the MWI stems from a lot of different issues, mostly the problems with probability, the preferred-basis problem (I know that the common gloss is that this is solved by decoherence, or its further developments such as einselection and quantum Darwinism, but that’s not in fact the case—decoherence only occurs if you have a pre-set split of the world into system and environment—a tensor product structure—but that itself requires a pre-selected structure that is ultimately equivalent to a preferred basis), and some more technical troubles.

    As for your other possibilities, yes, of course it’s possible, perhaps even plausible, that I’m wrong about all this—perhaps consciousness does not depend on nonalgorithmic resources, and could be implemented on a computer, and thus is copyable, and so on; but none of this has been demonstrated, and in fact, there are a good number of theoretical arguments against these possibilities—so I’m merely developing an alternative possibility within the constraints of what we know to be the case, in the hopes that it might be able to give a more thorough picture and dispense with a few of the paradoxes we encounter with respect to these issues. I have no delusions of grandeur that my view will ultimately turn out to be the right one, but that shouldn’t dissuade anyone from trying, IMHO—because nobody really has the right to expect that he will succeed where so many failed, yet time and again, that’s what happens, which can only happen if people try.

  43. Hi Jochen,

    > But there is, at least plausibly, something that is continuous with that me of yesterday; and that’s what the identity relation applies to

    > ‘Identity’ does not necessarily imply ‘the same in all respects’, merely ‘the same in all relevant aspects’

    Of course, yes. I know what you’re saying and I understand all that. But it’s a bit difficult to talk about these things unless we can recognise how we can speak differently of identity in different contexts.

    Let’s go back to your lump of green putty. Say you tear it in two. You can say it has ceased to exist, or you can say that it continues to exist but now in the form of two lumps rather than one. What has become of the single lump? It has become two lumps. So each of these two lumps is still in some sense a continuous successor of the same lump of green putty (or they can each be said to be half of its successor).

    But the two lumps are not identical to each other, even though, collectively, they can be said to share in the identity of the original.

    In this view, if we want to be rigorous about it, the you of this moment is not you, it is just a part of the real 4D you. The thoughts you are thinking right now are associated with this particular part of you only.

    If your identity splits, then the 2a you is a different part of the real 4D you than the 2b you. So 2a is not identical to 2b, in the sense that 2a’s thoughts belong to 2a alone and not 2b, but 2a and 2b are both parts of the real 4D you.

    At this point, it is useful to impose a conceptual distinction so as to label the branches for 2a and 2b as sub-identities in their own right, though identities that are continuous with the pre-split you. 2a’s self-interest is now concerned with the future of the 2a branch alone, and similarly for 2b. As such, 2a and 2b can from this point on be considered to be different people (just as the branches of a river can be considered to be separate rivers or two lumps of putty to be separate objects).

    But if the you of yesterday is the same as the you of today (meaning that the you of yesterday has continuously become the you of today), that same relation still holds for the pre-split you and each branch of the post-split you. If the self-interest of the you of yesterday is concerned with the you of today, then the self-interest of a pre-split you lies in the self-interest of both copies of a post-split you.

    > Either you’ll be clone1 and clone2 and clone3 and…, or you’ll end up as clone1 or clone2 or clone3

    Both states of affairs do happen because “you” refers to something different in each context.

    Whenever we talk about how things look from “your” perspective, we must locate that perspective at a point of time and space. Before the split, “you” refers to a perspective looking towards a future with many branches. Each of those branches is in “your” future. After the split, there are multiple perspectives. “you” refers to the perspective of one of these branches, which has one future and one past. Each clone will therefore perceive itself to have been chosen at random.

    So, before the split, you should understand that you will become all of the clones. The ‘and’ case needs to be considered. After the split, any particular instance of you will perceive itself to be just one of the branches, selected randomly. The ‘or’ case needs to be considered.

    You will become all of the clones in the sense that it is wrong to think that any particular clone is the “real” you, carrying the torch as you say. You will become just one of the clones in the sense that only one life story can be experienced at a time, and if you interview each of the clones afterwards this is how they will have perceived the events. Each will believe itself to be the “real” you. The difference between me and you is perhaps that if I were cloned I would recognise the claims of my clones as being as justified as my own. We are all equally continuations of past me (just as the me of today is a continuation of the me of yesterday) but now we diverge and we are not identical to each other any longer. The “me” of tomorrow is not the same as the “other me” of tomorrow.

    > Well, but that wouldn’t make me them, would it? Even if I ate them whole

    No it wouldn’t. Because eating them would necessarily destroy the pattern that makes them them. Because pattern is what matters for identity.

    > Why would they be my successors, when the other person underwent the exact same process?

    Because the pattern that makes you you was more or less preserved. The pattern that makes the other person the other person was completely obliterated.

    > No, my disbelief in the MWI stems from a lot of different issues,

    Fair enough. I misconstrued something you said earlier. I won’t get into a defense of MWI here (I would merely refer you to Sean Carroll in any case, but you’re probably already familiar with his views anyway).

    > of course it’s possible, perhaps even plausible, that I’m wrong about all this
    > but none of this has been demonstrated

    I think you’re missing the point of my challenge. Let’s just say that none of these possibilities are true in reality. But let’s imagine the counter-factual scenario where they are true (you can take your pick).

    Now, how would you give an account of identity in such a case? I submit that if you think about it enough you would have to end up somewhere like where I am now. If you can no longer simply rule out the possibility of splitting, then you’re going to have to resolve some apparent contradictions.

    You have some issues with my account of such scenarios, so my question to you is how would you deal with them? If, for instance you think that any splitting necessarily kills you, then if MWI turned out to be correct you would be dying every moment. Is that what you would actually believe if you were somehow convinced of the MWI? Or would you instead revise your ideas about identity?

  44. Let’s go back to your lump of green putty. Say you tear it in two. You can say it has ceased to exist, or you can say that it continues to exist but now in the form of two lumps rather than one. What has become of the single lump?

    And the ambiguity of that account is just what I am trying to highlight—if its unity, or maybe its size/mass, is a relevant property, then the original lump has ceased to exist, and two new lumps have come into being. Certainly, if I were to further subdivide the lump, at some point—at least once we’re getting to the realm of individual molecules, but I’d guess much earlier—everybody will agree that the original lump has ceased to exist. So in what sense could it have survived the first subdivision?

    All I want to point out is that an account of identity that denies the continuation of identity post such a split is defensible, and does not seem to me to incur any obvious paradoxes, contrary to the splitting-identity account.

    If your identity splits, then the 2a you is a different part of the real 4D you than the 2b you. So 2a is not identical to 2b, in the sense that 2a’s thoughts belong to 2a alone and not 2b, but 2a and 2b are both parts of the real 4D you.

    But even this view is, to me, highly problematic: a table and its legs are different things, and saying that 2a and 2b are me is like saying the table is its legs, it seems. They would at best be in some sense part of me; but it’s not the identity of my parts that’s in dispute, but the identity of me as a whole, taken by itself.

    So you can’t assert all of the following things: 1) The 4d-object is me, 2) the branch before the splitting is me, 3) the branches after the splitting are individually me; and it seems to me that your account hinges on some conflation between these possibilities.

    On a single identity view, however, those problems go away: The 4d-object is me up until the splitting, and what grows out of it afterwards is not me, just as a tree is not the ground it grows out of, despite being continuously connected to it.

    So, before the split, you should understand that you will become all of the clones. The ‘and’ case needs to be considered. After the split, any particular instance of you will perceive itself to be just one of the branches, selected randomly.

    But for me, this leads to a decision-theoretic inconsistency: seeing as how I think death is a generally negative outcome, this means that before the split, I should choose to take the offer—after all, I will get the million bucks. However, after the split, with overwhelming probability, I will consider this choice to be wrong—and thus, if what you describe is the rational way to make that decision, I will have rationally come to a wrong conclusion, which generally signals an inconsistency in the system of reasoning you’re using.

    So I would propose to repeal the assumption that before the split, we should believe that we’ll end up being all the clones, and thus, argue in favour of a single-identity theory.

    > Because the pattern that makes you you was more or less preserved. The pattern that makes the other person the other person was completely obliterated.

    Under the version of the thought experiment I’m describing, I don’t see how that is necessarily the case. Both of us go into the machine, have our degrees of freedom re-scrambled, and out come two persons which need in no way be any closer to one of us than to the other. In this scenario, it’s not at all clear who is whose successor; but if that’s the case, then it follows that in a splitting scenario, it’s in general unclear (since there exists a concrete example where it’s unclear). Thus, there don’t seem to be compelling grounds on which to accept those successors as either or both being me.

    Basically, maybe to get things clear, where I stand right now is that there’s a necessary condition for identity, which is that of a continuous physical evolution—everything that is derived from me may be my successor. However, in some cases (hence in general), this alone is not sufficient to determine identity—things that are derived from me via a continuous physical process, e.g. a meat grinder, may unambiguously fail to be me. Hence, there need to be additional sufficient criteria in order to settle questions of identity.

    Regarding the forking scenario, I’m merely investigating whether such a forking unambiguously leads to a splitting of identity—and I think a case exists (that where two people enter the forking machine), in which it doesn’t, without introducing arbitrariness. Hence, it’s implausible that in every case of forking, identity is preserved. But then, it’s at least possible that it isn’t preserved in any such case.

    > If, for instance, you think that any splitting necessarily kills you, then if MWI turned out to be correct you would be dying every moment.

    Actually, I’d consider the MWI a quite benevolent case of splitting, because in short, it can be viewed as introducing a new indexical, akin to time and location, tied to worlds/branches (I think Simon Saunders has been an influential proponent of this view). Roughly, I don’t have a problem with different versions of ‘me’ at different points in time; it’s where the river forks, and there are two me at the same time, that I think things start bordering on the nonsensical. However, introducing a new branch indexical ameliorates that issue—that there is a different me at the same time but on a different branch is not a priori any more difficult than that there is a different me at the same location at a different time.

    However, for other scenarios, such as teleportation/cloning etc., if those were right, then I’d be wrong, of course; either I’d have to come to grips with dying every moment (which is really just saying that there is no concept of identity), or I’d have to try hard and make sense of the concept of forking identities, since in such a case, its possibility would give me some assurance that the paradoxes I see are only apparent.

    But again, the mere counterfactual consideration of such scenarios does nothing to force me to reevaluate my stance, any more than the counterfactual consideration that the moon might be made of green cheese should influence the planning of a mission to go there. In other words, I’ll try to attack those problems once I’m convinced they actually exist.

  45. Hi Jochen,

    > if its unity, or maybe its size/mass, is a relevant property, then the original lump has ceased to exist

    Agreed. So, the question of whether identity is preserved is meaningless until you specify which properties you consider to be included in that identity. If what is important to being that green lump of putty is that it is made of putty, and that it is continuous in space, then each lump can be said to be a successor. If it is also important that it have approximately the same mass then the identity is destroyed. So we can look at it either way, depending on the properties we deem important.

    For my own identity, I deem the pattern of information processing that makes up my mind to be the most important property. I think you do too, we differ only on whether physical continuity is also important.

    > All I want to point out is that an account of identity that denies the continuation of identity post such a split is defensible,

    Sure it is. But so is an account of identity where you die every night when you go to sleep and a new person wakes up every morning. So is an account where you die and are replaced every time you are struck by a cosmic ray. What I’m trying to tease out is how you make your concept of identity robust against these disturbances to the continuity of consciousness but not fragile with regard to cloning.

    > and saying that 2a and 2b are me is like saying the table is its legs, it seems

    Saying that 2a and 2b are me is just the same thing as that there is a “different me” on a “different branch” of MWI. It is even like saying that the person who wakes up tomorrow morning is me. When we say this latter phrase we don’t mean that this snapshot of me waking up tomorrow constitutes a complete life story. When I say “I am typing” I don’t mean that the whole of my life is spent typing. We routinely use personal pronouns to refer to momentary snapshots and only occasionally use them to refer to entire lives.

    The difference is that 2a and 2b refer to specific parts of a life that has branched, and can no longer be used to refer to the overall structure. We sometimes use terms and phrases like this in the real world too. We talk of the early Wittgenstein versus the late Wittgenstein, for instance. We can even mix these holistic and partial concepts of identity in the same sentence and context makes it clear what is intended. Early Wittgenstein is not Late Wittgenstein but both are Wittgenstein. 2a is not 2b but both are me (if I am speaking before the split).

    > So you can’t assert all of the following things: 1) The 4d-object is me, 2) the branch before the splitting is me, 3) the branches after the splitting are individually me;

    I think that when we talk of the holistic “I” or “me”, we must be talking about a continuous structure extending back into the past and into the future from the current perspective (which is tied to a particular time and place). In a world with no splitting of identities, the same structure is picked out whenever and wherever this pronoun is uttered by a particular person. In a world where identities can split, this is no longer true.

    If “I” is uttered before a branch, this interpretation would pick out the whole branching 4D structure, including 2a and 2b. If “I” is uttered after a branch by 2a, it doesn’t pick out 2b’s branch, because to get to 2b’s branch from the current perspective you would have to go back in time to the split and then reverse direction to go forwards up the other branch. This branch is neither in 2a’s past nor in 2a’s future so is not part of the structure 2a refers to by “I”, and it is pretty much irrelevant to 2a’s self-interest. The speaker can neither remember nor expect to one day experience anything on 2b’s branch, so it makes sense to exclude it from the speaker’s concept of identity.

    > The 4d-object is me up until the splitting, and what grows out of it afterwards is not me

    OK, but why? Ex hypothesi, there has been no disruption of physical continuity, and the patterns of thought that make you you are left relatively undisturbed. So what is it that causes you to wink out of existence? Is it simply the cardinality of you being greater than one?

    That seems really arbitrary to me. I mean, you would still be you if there were less physical continuity with one of the branches than the other, right? Your metric would pick out a closest successor in 2a, say, and so you would continue to exist as long as there is more discontinuity with 2b (if 2b were produced by scanning and rebuilding you, say). So, assuming 2a remains very continuous, whether you live or die depends on the continuity with 2b. We can imagine variations of this thought experiment where we adjust the relative continuity of the 2a or 2b branch. If either is slightly more continuous than the other, you live. Only if they are balanced precisely the same so that you can’t pick out one as being preferred do you die. Something is wrong with this picture!

    > But for me, this leads to a decision-theoretic inconsistency: seeing as how I think death is a generally negative outcome

    Yes it does, for certain versions of the thought experiment (particularly those where you never know you made a wrong choice). However, for the version of the thought experiment you outline here there is a way out. If you can know you have made a wrong choice before you die, then you can consistently adopt a strategy which seeks to avoid states where you know you have made a wrong choice and are scared and miserable and regretful because you know you are going to die. The problem you highlight only arises if there are no such negative states, for instance if all the clones but one are instantly killed before they are even aware the cloning has happened.

    But since I don’t regard death (which is simply non-existence) as a negative outcome then I don’t have the same decision theoretic inconsistency. I think the problem is with your evaluation of death rather than with the proposed theory of identity.

    This implies that perhaps it is wrong to think we can achieve an understanding without thrashing that particular issue out. But again, I would ask you, in the hypothetical scenario where cloning were possible, what decision theoretic approach would you recommend? Perhaps you think no such approach is possible, and take this as evidence that there is some logical inconsistency in the concept of cloning, even though we don’t know what that is.

    > Under the version of the thought experiment I’m describing, I don’t see how that is necessarily the case.

    It certainly isn’t necessarily the case. But that is what I am stipulating as a thought experiment to challenge your view. In the thought experiment, we end up with two copies of brains which have approximately your mental attributes. Neither of them is in precisely the same state (which, as the no-cloning theorem tells us, is impossible), but this is normal as our brains are constantly changing state anyway. Both states have also continuously evolved from the initial state of your brain.

    However, if the material for the additional brain comes from that of another person, that person is killed as soon as their brain is scrambled, because their mental attributes are no longer instantiated anywhere.

    If you want to stipulate an alternative thought experiment where both input brains are scrambled and the two output brains bear no resemblance to anyone, I will wholeheartedly agree that entirely new identities have been created.

    > Hence, there need to be additional sufficient criteria in order to settle questions of identity.

    Right. And what might those be? I hold that the criterion of having very similar (essentially indistinguishable) mental states must hold for a proximate successor. In the thought experiment, there are two successors that meet these criteria of continuity and similarity of mental state. It seems that you also have the criterion of there being one successor which is closer than any other, which as noted before I find to be quite arbitrary.

    > Actually, I’d consider the MWI a quite benevolent case of splitting

    I’m not sure I follow why this is. For example, what about your decision theoretic argument if we replace physical cloning with quantum cloning based on the results of a quantum experiment such as “if it’s spin up you die, if it’s spin down you get a load of cash”? At the point of time you make the decision, you are not on any particular one of these branches, meaning that effectively you have no branch indexical, so I don’t see how the idea of the branch indexical helps you.

    If the point of the branch indexical is just to distinguish your copies from each other, then in the case of physical cloning we can just reuse the space indexical, since there is one clone of you at one location and another at another location.

    You say:
    > there is a different me at the same time but on a different branch

    I say (with regards to cloning or mind uploading):
    > there is a different me at the same time but at a different location

    I don’t see that there is any important difference here.

    Besides, are you not making the same error of transitivity and reflexivity of identity as I did? In what sense can it be you if it is different? Don’t get me wrong, I understand perfectly what you mean, I’m just pointing out that when I spoke in a similar way it caused consternation for you.

    > But again, the mere counterfactual consideration of such scenarios does nothing to force me to reevaluate my stance

    Of course it does nothing to argue against your view that cloning is probably not possible.

    But if you have a concept of identity which completely falls apart in certain possible or even plausible scenarios, it suggests to me that there is something wrong with your concept of identity. Ideally, we want our frameworks and concepts to be robust in even contrived scenarios. Often they are not. When this happens, it implies that these frameworks cannot be taken to be absolute, that there are some holes that need patching. This is why arguments such as the utility monster are taken to be serious criticisms of ideas such as utilitarianism.

  46. > So, the question of whether identity is preserved is meaningless until you specify which properties you consider to be included in that identity.

    Yes, you need a definition of the object in order to pinpoint its identity; this is in part why I specified a green lump of putty: a mere lump of putty would not change its identity qua lump of putty upon being re-coloured, but a green lump of putty would obviously no longer be a green lump of putty once it’s red.

    This is just to highlight that in order to decide whether some possible successor of mine actually is me in a given situation, we need a good idea of what makes me me; and in part, getting at that idea is what I’m all about here. I think one aspect of this is definitely the information contained in my state (and I think we’re pretty much agreed on that); this information is conserved upon continuous physical transformations.

    However, there are some physical transformations that lead to this information being so scrambled that it is in effect not recoverable; in theory, for example, I could be evaporated, and later on, reconstituted in the precise state I was in before the evaporation, by simply inverting the velocities of all the molecules—no information was lost at any point. But still, in between, when I’m a cloud of gas, few people would hold that that’s still me. This suggests that there is a kind of organizational principle needed in order to identify me across time. One might suggest that the entropy needs to be bounded at all times, for instance—and after all, working to lower their entropy is all that lifeforms ever do, in a manner of speaking.

    So the question is, what transformations are compatible with those (admittedly vague) principles? And one thing we’ve learned is that in general, the forking transformation isn’t, because there exist cases where there seem to be no grounds on which to consider its outcome to be me. Are there special cases where forking leads to ‘copies of me’? Well, I can’t exclude that. But I think the fact that in the only case we’ve analyzed in any depth so far (because the answer there seems relatively unambiguous) the answer is negative is at least evidence that it might be negative in general—we certainly don’t yet have any case where I’d consider the answer to be unambiguously positive.

    > Saying that 2a and 2b are me is just the same thing as that there is a “different me” on a “different branch” of MWI.

    I disagree. For one thing, the causal separation of branches means that we don’t have to face the somewhat embarrassing proposition that there is the same thing twice—i.e. that we’re asserting that there are two indiscernible (with respect to the right properties) objects, which nevertheless are differently individuated.

    Also, in a many-worlds context, both copies contain the full information about the original object; but in the forking case, only the combination of 2a and 2b suffices to ‘turn back the time’ and produce 1 (otherwise, again, we could clone). So possibly this could be another criterion: my successor is that single object which contains all my information. Thus, if we don’t have such a single object, then we don’t have a successor, and the split into 2a and 2b is just like the split into a cloud of molecules.

    > When we say this latter phrase we don’t mean that this snapshot of me waking up tomorrow constitutes a complete life story.

    But under reversible physical dynamics, that one snapshot is sufficient to infer your whole life story (taking into account whatever you’ve interacted with). In a forking-case, that isn’t so.

    > Ex hypothesi, there has been no disruption of physical continuity, and the patterns of thought that make you you are left relatively undisturbed. So what is it that causes you to wink out of existence? Is it simply the cardinality of you being greater than one?

    No, it’s merely the fact that neither 2a nor 2b individually contains the information that constitutes me (if they did, they’d be clones). So I can only be the conjunction of 2a and 2b, but not either of them. But then I as an object have ceased to exist.

    > For example, what about your decision theoretic argument if we replace physical cloning with quantum cloning based on the results of a quantum experiment such as “if it’s spin up you die, if it’s spin down you get a load of cash”?

    Well, there’s been a lot of discussion regarding the issue of personal identity in many-worlds interpretations. I think most appropriate are perhaps the many-minds theories of Albert/Loewer and Lockwood. Basically, the idea is that there is a continuum of minds associated with each physical body, and on a branching, a proportion of minds in accordance with the Born probability of the branch travels along each. Then, of course, it’s a simple matter of probability that I will experience either death or money, and I can base my decision on that.

    An alternative is the single-mind view, also due to, I think, Albert. There, only a single mind exists, which probabilistically follows one particular path through the branching structure. This has the curious consequence that most paths are populated by ‘mindless hulks’—zombies—without any accompanying mental experience, and in fact, unless for some unknown reason all minds go down the same path in each splitting, almost all of the people you meet are mindless hulks, too. But in this case, I would likewise experience either death or money. In fact, it’s exactly the same as the many-minds scenario with all the excess baggage from minds that do not constitute my experience removed.

    So no, in a many-worlds scenario, one does not necessarily incur any decision-theoretic inconsistency.

    > If the point of the branch indexical is just to distinguish your copies from each other, then in the case of physical cloning we can just reuse the space indexical, since there is one clone of you at one location and another at another location.

    Well, but again this would incur the embarrassment of one and the same thing being bilocated—if the Mona Lisa were both in the Louvre and for sale at a local garage sale, then you’d probably be inclined to think that only one of them is the genuine article, no? But no such problem exists if the Mona Lisa were in the Louvre yesterday, and somewhere else today (as long as it isn’t also in the Louvre anymore).

    Of course, this hinges on a slightly different conception of identity that I’m not sure I’m really prepared to defend here: that not only physical composition, but also, causal history is part of what makes something that very thing. This is really a different question (‘Is Swamp Man Davidson?’), and I think it would take us too far afield here.

    Nevertheless, the introduction of the branch indexical helps here because in a sense, we go from a branching 4d-object to a monolithic ‘5d’ object (although calling the additional indexical an additional dimension is ultimately misleading). An object in a superposition, as everything is in an MWI, is then simply not only extended in time, but also ‘extended’ across branches; but such an extension does not introduce any conceptually novel problems.

  47. Hi Jochen,

    > And one thing we’ve learned is that in general, the forking transformation isn’t

    I would define a forking transformation as one where one brain is split into two brains more or less identical to it. Defined as such we haven’t learned any such thing.

    To me, your argument looks a little like the following. Suppose we are wondering if identity can be conserved after a cranial impact. I say that it can, because a light tap on the head doesn’t do much to change who I am. You say that perhaps it cannot, because dropping an elephant onto one’s skull tends to cause significant disruption to mental processing, and by extrapolation perhaps the same is true for all cranial impacts.

    I’m interested only in the ambiguous case, because I want to tease out what criteria you would use to justify a suspicion that your identity has been destroyed. And you’ve expanded on that in your last post (a single object containing all relevant information for time reversal), which is great, and which I’ll get to in a bit.

    > For one thing, the causal separation of branches means that we don’t have to face the somewhat embarrassing proposition that there is the same thing twice

    No more embarrassing, I feel, than the case where a cell divides. In this case, we are left with two more or less identical cells. I would not say that any cell has died. Neither would I say that one of the cells we see is a mother and one a daughter. I would instead say that the original cell’s identity (such as it is) has divided with it and that one has become two. I don’t see what is embarrassing about this.

    > that we’re asserting that there are two indiscernible (with respect to the right properties) objects, which nevertheless are differently individuated.

    Firstly, the two objects are not absolutely indiscernible, in that they are in different locations, so that’s one thing. Secondly, they are not (for long) indiscernible with respect to the properties important for identity, because from the moment of forking on they will diverge and have different experiences and different mental states, and this is what allows us to say they are different individuals from this point on.

    > Also, in a many-worlds context, both copies contain the full information about the original object;

    I would question whether this is actually true. I am not confident I know how time reversal works in MWI or in QM in general, but my understanding is that it is the evolution of the wavefunction as a whole that is deterministic and unitary. If so, I don’t see how you can time reverse one isolated collapsed perspective (i.e. a branch) without time reversing all of them.

    > So possibly this could be another criterion: my successor is that single object which contains all my information.

    But then you never have a successor, because you’re sending information out into the environment all the time. It is also the case that your current brain state is in part caused by information from your environment.

    This means that to do a time-reversal on your state you need to do a time reversal on all other objects you may have interacted with too. For instance, you may have seen an insect buzzing past your monitor a second ago, and your brain state contains a memory of that. To do a time reversal of your brain state, you would need to do a time reversal of all the objects you have interacted with, for instance that insect. Indeed you’d probably have to do a time reversal of everything in the light cone of the event to which you want to rewind. Partial time reversals don’t make sense: it’s all or nothing.

    > But under reversible physical dynamics, that one snapshot is sufficient to infer your whole life story (taking into account whatever you’ve interacted with). In a forking-case, that isn’t so.

    It is so, because your clone is just another object you’ve interacted with.

    > So I can only be the conjunction of 2a and 2b, but not either of them. But then I as an object have ceased to exist.

    As noted, you would cease to exist any time you interacted with anything on such an account of identity. So we need something a little more flexible, such as identifying with a certain approximate conjunction of mental states. In this account you have two successors.

    I was interested in your discussion of identity in MWI. Personally, I don’t think any of the proposals are as satisfying as simply allowing identity to split.

    > many-minds theories of Albert/Loewer and Lockwood

    So, you have an infinite number of identical minds hitching along for the ride with your physical brain, branching off when the universe does. Doesn’t this mean that we “have to face the somewhat embarrassing proposition that there is the same thing twice—i.e. that we’re asserting that there are two indiscernible (with respect to the right properties) objects, which nevertheless are differently individuated.”

    > the single-mind view also due to, I think, Albert.

    Now this clearly is Cartesian dualism. You have a soul hitching along which may or may not be present, and other people may be philosophical zombies who believe themselves to be conscious but aren’t. That’s a little too absurd even for me, I’m afraid. Seems to fail Occam’s razor in that it posits an additional entity (the soul) that I don’t think we need.

    > then you’d probably be inclined to think that only one of them is the genuine article, no?

    Sure, because I know of no apparatus for replicating physical objects wholesale.

    But let us suppose that there is a way of producing an atom by atom duplicate of the Mona Lisa. Certainly, I recognise that the one in the Louvre is worth a lot more, but I see this as arising only out of the human intuition of metaphysical essentialism, which I see as a kind of benign madness.

    http://disagreeableme.blogspot.co.uk/2012/06/essential-identity-essence-of-essence.html

  48. > I would define a forking transformation as one where one brain is split into two brains more or less identical to it. Defined as such we haven’t learned any such thing.

    Defined as such, it’s impossible, because for one thing, two brains contain more mass than one. So we need at least some auxiliary matter; but then we are again in the case that this auxiliary matter may equally well be a person with their own identity, and the forking process being one that is perfectly symmetrical between both, leading again to the conclusion that identity is not conserved.

    > Suppose we are wondering if identity can be conserved after a cranial impact. I say that it can, because a light tap on the head doesn’t do much to change who I am. You say that perhaps it cannot, because dropping an elephant onto one’s skull tends to cause significant disruption to mental processing, and by extrapolation perhaps the same is true for all cranial impacts.

    That’s not a good analogy, since we’re precisely trying to find out whether there is, in fact, an identity-preserving forking. So a better analogy would be that in all cases where we know what being hit on the head does, identity isn’t conserved, which should quite rationally make anybody wary of even receiving the lightest tap to their cranium.

    > In this case, we are left with two more or less identical cells. I would not say that any cell has died.

    Neither would I, but certainly, the identity of the cell qua single cell has not been preserved. And nobody would be tempted to point to one of the daughter cells and claim it was the very same cell as the other; *that* cell is not the same as *that other* cell. But if there are two copies that supposedly equally well are me, then that’s what we’d have to do, at least with respect to the properties sufficient for me-ness.

    > Firstly, the two objects are not absolutely indiscernible, in that they are in different locations, so that’s one thing.

    Well, but spatial properties don’t individuate: if I exchanged both objects, then you couldn’t tell whether I had done so; thus, it’s part of neither object’s identity that it is located at some given point in space. It’s also no use to say that they aren’t indiscernible for a long time—whether or not it persists, the fact that this state exists is the problem.

    > If so, I don’t see how you can time reverse one isolated collapsed perspective (i.e. a branch) without time reversing all of them.

    Yes, you are right, I was speaking too sloppily. In general, the state of the object post-collapse/branching (in a single branch) does not suffice to reconstruct its state beforehand. What I had in mind is roughly that a branching enables ‘cloning’ in some fashion without thereby violating the no-cloning theorem: the same probabilistic information (that makes me me) may be present in both branches, contrary to the case of cloning ‘within’ a branch.

    > But then you never have a successor, because you’re sending information out into the environment all the time.

    Well, but classical information is copyable; if I copy some part of a newspaper, and give away the copy, the newspaper hasn’t lost anything. Neither do I necessarily lose anything in my interactions with the environment.

    > Partial time reversals don’t make sense: it’s all or nothing.

    In quantum mechanics, partial time reversal fails to make sense (in the sense of producing no valid state) only if there is entanglement between the system being reversed and some exterior system (in fact, this is one of the most common tests for entanglement, known as the ‘positive partial transpose’ criterion). Otherwise, and hence in a classical setting, it’s fine.
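
    To make that criterion concrete, here is a minimal numerical sketch (the two example states are just my own illustrative choices): transposing only one subsystem of an entangled state yields a matrix with negative eigenvalues, i.e. no valid state, whereas for an unentangled state the result is still a valid state.

    ```python
    import numpy as np

    def partial_transpose(rho, dims=(2, 2)):
        """Transpose only the second subsystem of a bipartite density matrix."""
        dA, dB = dims
        r = rho.reshape(dA, dB, dA, dB)              # indices: a, b, a', b'
        return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

    # entangled Bell state (|00> + |11>)/sqrt(2)
    psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
    rho_bell = np.outer(psi, psi)

    # unentangled product state |0>|0>
    phi = np.array([1.0, 0.0, 0.0, 0.0])
    rho_prod = np.outer(phi, phi)

    for name, rho in [("Bell state", rho_bell), ("product state", rho_prod)]:
        min_eig = np.linalg.eigvalsh(partial_transpose(rho)).min()
        print(f"{name}: smallest eigenvalue after partial transpose = {min_eig:.2f}")
    # Bell state: -0.50   -> no valid state, so the pair is entangled
    # product state: 0.00 -> still a valid state
    ```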

    > As noted, you would cease to exist any time you interacted with anything on such an account of identity.

    I frankly don’t know where you get that from. All state change is, ultimately, interaction; but not all interactions are such that any information loss is incurred.

    Perhaps think of that which makes me me as some kind of kernel of probabilistic information: this may change, and in general, will, but only when those changes are irreversible is anything lost. However, it may not be copied; therefore, for instance in the forking scenario, each of my would-be successors can inherit only some part of this information, and neither is me.

    > I was interested in your discussion of identity in MWI.

    If you’re interested in more details about that, then I can heartily recommend David Albert’s ‘Quantum Mechanics and Experience’, which is not just great content-wise, but also a joy to read. Plus, little formal familiarity is needed to follow the presentation there (in fact, it’s been criticized for being too informal, but personally, I thought it didn’t suffer for this—quite the contrary). Also, the book by Albert’s student Jeffrey Barrett, ‘The Quantum Mechanics of Minds and Worlds’ might similarly be worth checking out (it’s kind of a successor to Albert’s), and lastly perhaps Michael Lockwood’s ‘Mind, Brain, and the Quantum’.

    > Sure, because I know of no apparatus for replicating physical objects wholesale.

    I think we need to distinguish here between two instances of the same general type, i.e. two different tokens, and two copies of one and the same instance, i.e. a single token somehow replicated.

    For instance, nobody would be tempted to say that two copies of a book are in some meaningful sense the same thing, and hence, there’s no problem with both lying on the same table. But it’s more problematic if we are to insist that the left book is actually the same book as the right one. This is the situation, I think, we’re forced into when considering multiple mes.

    This doesn’t imply essentialism, but I think it does mean that we need to take some care regarding the causal history of an object when trying to pin down identity. Think about two distinct ways the situation above could have come about: in one, there’s simply two books from the same production run (which we can, for the sake of argument, suppose to be identical to the molecular level), while in the other, the second book is the first book, sent back in time and placed next to its original.

    The description of the situation as a physical state may well be identical, but still, I’d argue that there’s a relevant difference. For example, if you were to rip out a page of the first book in the second scenario, then the second book would lack that same page by virtue of that same act of vandalism, while in the first scenario, two independent such acts would be necessary to bring about the same state of affairs.

    So just having two Mona Lisas on their own might not raise any conceptual difficulties; but insisting that they are in fact the very same painting does.

  49. Hi Jochen,

    > Defined as such, it’s impossible, because for one thing, two brains contain more mass than one.

    It should be clear by now that I’m assuming there is another source of mass.

    > and the forking process being one that is perfectly symmetrical between both,

    By my definition it would not be symmetrical. One brain is copied. The source of additional matter, e.g. food (which may be a brain also) is scrambled as is always the case when we assimilate matter during growth or building or whatever.

    > a better analogy would be that in all cases where we know what being hit on the head does

    I reject the way you’re framing the question. I think we know (or have stipulated) all the pertinent facts. What remains to be decided is how we ought to think of this case. Personally, I am as confident that identity is preserved in the forking I described as when I am tapped on the head, because this follows from how I think of identity. So what we have is not some unknown information, some empirical datum we are missing, but a disagreement on how concepts are defined. Similarly, I might some day meet someone who thinks that tapping him on the head rearranges his atoms enough to imply a change of identity. I don’t think that would make this form of argument any more reasonable if such a person were to propose it.

    > if I copy some part of a newspaper, and give away the copy, the newspaper hasn’t lost anything.

    OK. Classically this is a passive interaction from the newspaper’s perspective, so it would seem so. But consider the case of an object which is acted upon, like a thrown ball in flight. If you reverse the time for the ball alone (reversing the momenta of all its particles), and not the air or the thrower, you will not find that it traces the same trajectory backwards. You will find that it decelerates along the horizontal axis rather than accelerating as it should. You will find that it ends up rolling along the ground rather than in the hand of a thrower.

    The brain is, like a ball, an object which is acted upon by its environment. If you reverse it and it alone it’s not going to end up where it started.
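
    To make this concrete, here is a rough numerical sketch, assuming a toy linear-drag model with made-up numbers (the function name and parameters are just for illustration): flipping only the ball’s velocity, while leaving the drag-exerting air untouched, does not bring it back to where it started.

    ```python
    import numpy as np

    def fly(pos, vel, steps, dt=0.01, g=9.81, drag=0.5):
        """Integrate a ball under gravity plus air drag that always opposes its current motion."""
        for _ in range(steps):
            acc = np.array([0.0, -g]) - drag * vel
            vel = vel + acc * dt
            pos = pos + vel * dt
        return pos, vel

    start = np.array([0.0, 1.0])
    p, v = fly(start, np.array([5.0, 8.0]), steps=150)

    # 'reverse time' for the ball alone: flip its velocity, but leave the environment (the drag) unreversed
    p_back, _ = fly(p, -v, steps=150)

    print("started at:  ", start)
    print("came back to:", p_back)   # well away from the start: drag dissipates energy on both legs
    ```

    With no drag at all, the reversed ball would retrace its arc exactly; it is the dissipative interaction with the unreversed air that spoils the return trip.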

    > in the sense of there being no valid state produced in this way

    So it depends on what you mean by “valid”. I’m talking about reversing actually producing an earlier state, retracing steps in other words. My contention is that unless you reverse the environment too, this is not what you get.

    > I frankly don’t know where you get that from. All state change is, ultimately, interaction; but not all interactions are such that any information loss is incurred.

    I hope you see where I get it from now. I’m not thinking about it in terms of information loss. I’m just saying that if you reverse an object alone and not the things it was interacting with then it is not going to end up where it started but somewhere else entirely.

    > those changes are irreversible is anything lost.

    Changes are irreversible only if you plan on reversing them without taking the reversals of the wider environment into account. On such a view, pretty much all changes would be irreversible.

    > So just having two Mona Lisas on their own might not raise any conceptual difficulties; but insisting that they are in fact the very same painting does.

    I think this ignores the point that I’m saying not that an identity is shared (and continues to be shared) but that the identity splits. I’ve already explained the limitations of the transitivity of identity in this scheme. Personal identity (let’s call it temporal identity), as opposed to mathematical identity, depends on the point in time and space where it is applied.

    So in the case where a Mona Lisa is duplicated by a magic quantum teleporter, I am inclined to say both copies are continuations of the original one. The object that was the original one continues to exist but in two places. But now, from a perspective after the split, the two objects have diverged and have separate identities. They are separate branches of a fork, and so what you do to one is not copied in another. Again, it’s not unlike two branches of a river. The river exists at each branch, each branch is (a part of) the river, but each branch is distinct from the other, not least because they are in neither each other’s past (upstream) nor each other’s future (downstream). It doesn’t seem to me to be a particularly troubling proposition.

    The time travelling book is a different case because they are in a future/past relationship. This is all consistent with what I outlined earlier about the meaning of “I” when uttered from different perspectives.

    If you think I’m playing fast and loose with the concept of identity here and would prefer to stick to strict mathematical atemporal identity, then, fine. Let’s talk about what that would mean. In this case, the two Mona Lisas are the same object, in the sense that two legs of a table are the same object. The apparent absurdity is reconciled by understanding that they are two parts of the same object connected 4-dimensionally. I prefer the modified temporal version of identity not because I need it to wiggle out of some problem but because it is more natural to think of two cloned objects as separate objects.

  50. > By my definition it would not be symmetrical.

    Well, I’m trying to consider forking in the general case, which includes not just your particular scenario, in order to try and tease out conclusions about what happens during the average fork.

    > Personally, I am as confident that identity is preserved in the forking I described as when I am tapped on the head, because this follows from how I think of identity.

    But the question at issue here is certainly whether either of our ways of thinking about identity is right, no? So you can’t formulate the answer to some thought experiment by just flatly assuming that it is…

    > The brain is, like a ball, an object which is acted upon by its environment. If you reverse it and it alone it’s not going to end up where it started.

    But I’m not requiring it to; I merely want it to contain the same (pertinent) information. I think that, for the moment, I’m quite happy with the formulation I used in the last post: there is a kernel of probabilistic information that makes up me; this may evolve, but only in such a way as to incur no significant information loss, that is, from one instant to the next, nothing gets lost. It can’t be cloned, none of the branches of a fork is me, and all the nasty paradoxes of multiply-instantiated selves vanish.

    > In this case, the two Mona Lisas are the same object, in the sense that two legs of a table are the same object.

    Sorry, but I don’t see that they are the same object at all. They’re different parts of one object, but are neither identical to the whole object, nor to one another.

  51. If I were to copy a mind I would first switch it off, then make a copy, and then restart the original. Not restarting the original would be equal to letting the person die. Can’t I restart a brain/mind? Of course I can. People very often “restart” after strong epileptic seizures. They “cease to be” for some time (as persons), but they “come back” after waking up. It is possible because what matters for continuity of personality is the brain’s connections, not necessarily non-stop failure-free performance.

    An uploaded copy would be “just the same”, but not “the same”. Exactly as is the case with books: one copy of The Origin of Species is “equal” to another, but they are still two separate copies.

    If we allowed the two copies to operate we would have two persons, who would be behaviorally indistinguishable at first. After a while they would develop and diverge more and more.

  52. Hi Jochen,

    > Well, I’m trying to consider forking in the general case, which includes not just your particular scenario

    That’s fine, but it’s not really forking, because in forking (as in a river branching), one becomes two or more. Two becoming two is not forking as I conceive of it, but I will grant that my forking could be seen as a special case of a more general phenomenon of a certain number of objects being scrambled and rebuilt as another number of objects.

    > But the question at issue here is certainly whether either of our ways of thinking about identity is right, no?

    I don’t think one way of thinking about it is right, because I don’t think there is a fact of the matter. I think there are more problematic and less problematic ways to think about it.

    > So you can’t formulate the answer to some thought experiment by just flatly assuming that it is…

    OK. What I’m trying to get at is that you said we know what happens in one case and we don’t know what happens in another case, but I don’t think that’s quite right. I would say rather that we agree what happens in one case and don’t agree on another case, and that this doesn’t get us much. If we brought more people into the discussion, then there may no longer be universal agreement on any case.

    > But I’m not requiring it to; I merely want it to contain the same (pertinent) information

    But it won’t. As in the ball, if you reverse it without reversing its environment, it will not end up in the state it was in. Part of the information needed to reverse it has diffused into the environment. And part of the information needed to reverse the environment has diffused into the brain. No information is lost, it’s just spread around.

    > They’re different parts of one object, but are neither identical to the whole object, nor to one another.

    True. But if you point at one corner of a table and ask an ordinary person “Is this a table?” they will say “Yes.” If you point at another corner of the same table and ask the same question, they will also say yes. Unless the context is very clear, a part of an object is assumed to represent the object as a whole. It is in this way that the two Mona Lisas, on a strict theory of identity, are the same object. Even more strictly, they are different parts of the same 4D object.

    I wonder why you don’t see the point of the ball analogy. Are you perhaps taking the ball’s location too literally? You don’t care if the brain is in a different location as long as it’s in the same internal state?

    But it won’t be in the same internal state either. Consider again the case of cosmic rays. Flying through your body every instant, and no doubt having some effect on the underlying probabilities of the particles in your brain. If you reverse time without reversing the trajectory of all the particles that have passed through and affected your brain, then you simply will not end up in the same state. Imagine the cosmic rays and your particles as billiard balls (which I know they are nothing like, but I think the principle of interaction is the same). Suppose a cosmic billiard ball deflects the path of a brain billiard ball (and is itself deflected). That particle, part of the state of your brain, is now on a new trajectory. But if you reverse time now without the influence of that cosmic billiard ball, that new trajectory will be projected backward in time beyond where it actually began, meaning you end up in a completely different state (since the same kind of thing will be happening all over the brain) than you started out in. I have to admit my imagination fails me in trying to picture what this would be like. I don’t know if your brain would disintegrate or just go haywire, firing randomly, or if you would start to experience the arrow of time in reverse. Whatever happens you won’t end up in the state you started in.

  53. > That’s fine, but it’s not really forking, because in forking (as in a river branching), one becomes two or more.

    Well, but as we both acknowledge, unlike a river branch, we need additional material to produce two copies. Hence, to me, the most general case includes all conceivable sources of additional material, and all conceivable processes that produce two outgoing copies, and thus, also includes the case of the ingoing extra material being another person, and both are treated in exactly the same way. So, in general, identity is not conserved in forking, which makes it sensible to ask if it is conserved in any fork at all.

    > As in the ball, if you reverse it without reversing its environment, it will not end up in the state it was in.

    But you will end up with a state in which it’s clearly the same ball, so evidently, the ball’s identity was preserved. If I slice the ball in half, then in general, there is no way to reverse the dynamics on one of the halves in order to get out the original ball again, so there is no sense in which the half is the same thing as the original ball. And if I bring in some additional material and construct the missing half from that, then I can only get an exact match if I can clone, that is, if I can duplicate the probabilistic information of the original second half, such that I end up with a complete copy of the original information. Here of course the analogy breaks down, because the identity of a ball is unlikely to meaningfully depend on probabilistic information, but since I’m starting out from the assumption that a mind does, we still have good grounds to reject the idea that identity is preserved in a forking.

    > If you point at another corner of the same table and ask the same question, they will also say yes.

    But this is merely habit. They could, for instance, be wrong, if all they see is a leg, and assume (falsely) that the rest of the table is attached.

    > Even more strictly, they are different parts of the same 4D object.

    In the general case of a reproduction, I don’t see how they are. One could be reproduced from a photo (or some highly detailed scan), for instance, and thus, be a 4d-object that has never been any part of the original Mona Lisa. And if you want to argue that taking the photo is enough to make both part of the same object, then there’s really only one great big 4d-object, the universe (or causally connected parts of it, anyway).

    That’s exactly the reason I introduced the temporal copy of the book on the table: there, it’s clear that they are part of the same 4d-object, due to their causal connection.

    > Whatever happens you won’t end up in the state you started in.

    I’m not requiring that it should end up in the same state, but merely that the state it is in is continuously connected to one that is unambiguously me—i.e. that that kernel of probabilistic information has not undergone a dissipative process. This is my attempt at fixing a sufficient condition for identity: if no such continuous connection exists, then there is now different information in the ‘copied’ brain; so if the original information specified me, then the new information specifies somebody else.

  54. Hi Jochen,

    > Hence, to me, the most general case includes all conceivable sources of additional material, and all conceivable processes that produces two outgoing copies

    That word “copies” is the important one. In my view, having a copy implies that it is more or less indistinguishable. It’s not clear whether you see these copies as being the same as each other or not.

    > So, in general, identity is not conserved in forking, which makes it sensible to ask if it is conserved in any fork at all.

    My position has always been that identity is about having certain mental states. So of course identity could only be conserved in processes where mental states are conserved. I think “forking” or “scrambling” processes which conserve and duplicate mental states also conserve and duplicate identity. Scrambling processes which don’t conserve mental states are irrelevant.

    > But you will end up with a state in which it’s clearly the same ball, so evidently, the ball’s identity was preserved.

    OK. Good point. The difference between the transformation on the ball here and on the transformation in a forking procedure is that we get the same matter in and out.

    So a better transformation might be the assimilation of nutrients into the body and the excretion of waste. If you want to play this process back in time, you’re going to have to play it back on all the waste too, solid, liquid and gas. If you don’t do this right, I feel that perhaps your body may disintegrate or something. You would be destroyed and so would your identity.

    But I could be wrong about this. It’s pretty hard to imagine, after all. I think that another scenario is also likely. Reversing time leads to seemingly highly improbable coincidences, leading to a fall in entropy. If things are not exactly right, then these coincidences will fail to happen. If you choose not to reverse the environment, things will not be exactly right. Entropy will cease to fall and indeed will begin to rise again. I think that this means that a conscious being would begin to experience time in reverse too, so as you continue to reverse time the being gets older rather than younger. This corresponds to how the ball would lose horizontal momentum to air resistance rather than gain it.

    So now you are no longer requiring that time reversal brings us back to the same state (this was your original view) but instead that it brings us continuously to a new state which we would still consider to be the same identity since it is similar enough and has been transformed continuously.

    However this would also hold in a forking scenario. If you time reverse either of the copies without time reversing the other or its environment, then that entity will soon be evolving with increasing entropy much as if time had not been reversed. It will not regress to the starting state but will progress to a new state, a state which is continuous with and similar to 2a, say. As such this state would still be a continuation of the identity.

    > In the general case of a reproduction, I don’t see how they are.

    In the general case of a reproduction, the copy is imperfect, so there is a meaningful distinction to be made between the original and the replica. In the case of a perfect reproduction, I don’t think there is any meaningful difference any more, save physical continuity, which I don’t regard as important for personal identity for reasons discussed.

    > it’s clear that they are part of the same 4d-object, due to their causal connection.

    Well, it’s not like the two Mona Lisas are completely without causal connection. They are both descendants of Da Vinci’s original work after all, and so causally connected to events which transpired in Renaissance Italy. If he had chosen to give the Mona Lisa a moustache, say, then they would each be a little different. It’s just that they are no longer causally connected to each other, because they are on separate branches.

    > I’m not requiring that it should end up in the same state, but merely that the state it is in is continuously connected to one that is unambiguously me

    As is the case in the forking scenario.

    > that that kernel of probabilistic information has not undergone a dissipative process.

    I don’t think you have yet managed to explain what this means. What is the difference between being affected by the forking process and being affected by the passing of cosmic rays through your skull? Is it not the case that information is dissipated in both cases?

  55. > It’s not clear whether you see these copies as being the same as each other or not.

    Well, they’re at most as similar as no-cloning allows.

    > I think “forking” or “scrambling” processes which conserve and duplicate mental states also conserve and duplicate identity.

    But how could you have such a process if you have some uncopyable probabilistic information?

    > It will not regress to the starting state but will progress to a new state, a state which is continuous with and similar to 2a, say. As such this state would still be a continuation of the identity.

    Of 2a’s, yes. But not of me: information not present in 2a is needed to get to something that could on its own be continuously connected with me.

    > In the case of a perfect reproduction, I don’t think there is any meaningful difference any more, save physical continuity, which I don’t regard as important for personal identity for reasons discussed.

    I was merely wondering about your use of ‘4d-object’. It seems to me that the creation of a different (perfect) copy is more analogous to a second 4d-object coming into existence, rather than the copy somehow being tied to the original’s worldtube.

    > Well, it’s not like the two Mona Lisas are completely without causal connection.

    But it’s also clear that that’s of a very different kind—again, in the time copy example, if I tear out a page in the earlier copy, it’s also missing in the latter. I could damage either Mona Lisa without that having an effect on the other.

    > What is the difference between being affected by the forking process and being affected by the passing of cosmic rays through your skull? Is it not the case that information is dissipated in both cases?

    To be honest, I’m not sure how precise I can become in this medium. But let’s for the sake of simplicity look at the identity of a single qubit in an unknown state. Any ‘forking’ could only lead to, at best, the original qubit, plus an imperfect copy; and most generally, one would end up with two imperfect copies. From neither of these could one, without knowing the original state, get to the state of the original. Information has been lost.

    But an interaction with a stray cosmic ray would merely set up an entangled pair (if the state of the outgoing ray depends on the state the qubit is in). There, no state change of the original qubit occurs: in particular, all measurements carried out upon it would yield the same outcomes. No information has thus been lost, and we still have the same qubit.

  56. OK, let’s scratch that last example, I don’t think it really does what I intended it to do. First of all, the QM treatment introduces some worries, so I’ll revert to classical probabilistic modelling; secondly, there are then some interactions with the stray cosmic ray under which the qubit’s identity isn’t preserved. But that last part may actually be a blessing in disguise, since of course, we know there are also real-world interactions under which identity isn’t preserved.

    For example, imagine a bit that is with probability p in the state 0, and with probability (1-p) in state 1. Now, what happens depends on the precise character of the interaction with the cosmic ray: obviously, if it doesn’t change the bit’s state, then all the information will be preserved; likewise, if it affects both possible values of the bit symmetrically (i.e. flips 1 to 0 and vice versa)—afterwards, the bit will be in state 1 with probability p and in state 0 with probability (1-p), from which the original state can be recovered by simply undoing the bit flip.

    But if it interacts asymmetrically—say, if it flips the bit only if it is 1, and leaves a 0 unchanged—then there will be information loss: afterwards, the bit will certainly be in the state 0, and no information about the original probabilities will be recoverable. In fact, from an information-theoretic point of view, it will then carry no information at all anymore—its original identity will have been obliterated.
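    (To make the symmetric/asymmetric distinction concrete, here is a minimal Python sketch; the value of p is arbitrary and purely illustrative. The Shannon entropy of the bit’s distribution is unchanged by the symmetric flip but drops to zero under the asymmetric one.)

    ```python
    from math import log2

    def entropy(p):
        """Shannon entropy (in bits) of a binary distribution (p, 1-p)."""
        if p in (0.0, 1.0):
            return 0.0
        return -p * log2(p) - (1 - p) * log2(1 - p)

    p = 0.3  # illustrative: P(bit = 0) before the interaction

    # Symmetric interaction: every 0 becomes 1 and vice versa.
    # The distribution (p, 1-p) simply becomes (1-p, p); entropy is unchanged.
    p_symmetric = 1 - p

    # Asymmetric interaction: a 1 is flipped to 0, a 0 is left alone.
    # Afterwards the bit is 0 with certainty; nothing about p is recoverable.
    p_asymmetric = 1.0

    print(entropy(p), entropy(p_symmetric), entropy(p_asymmetric))
    # roughly: 0.881  0.881  0.0
    ```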

    So there exist two kinds of interactions: information-preserving and information-destroying ones. But that’s actually just what we should expect: in the real world, there likewise exist such interactions. If I interact with a car in such a manner as to get in and drive somewhere, I wouldn’t expect my identity to suffer; however, if I interacted in a manner such as standing in front of it while it drives at a high speed, scattering my broken remains on the roadside after the crash, identity loss seems an obvious outcome.

    But then, the no-cloning theorem simply tells us that there is no information-preserving dynamics which produces two copies of the original bit, and hence, such a transformation would be intrinsically information- (and identity-) destroying.

    So, in general, whether information is preserved depends on the particulars of the interaction; but as soon as it involves cloning, it can’t be preserved.

  57. Hi Jochen,

    > But how could you have such a process if you have some uncopyable probabilistic information?

    So the idea is that we shouldn’t be too precious about precise quantum state. I argue that this state is being disrupted all the time by interactions with the environment, so if we require it to be preserved exactly then we’re dying every moment. If a future state is approximately physically identical to me (e.g. as good a clone as QM will allow, or perhaps as close as my typical successor states are to each other in an interval of a few milliseconds) it’s good enough to be considered a successor state for my personal identity.

    > Any ‘forking’ could only lead to, at best, the original qubit, plus an imperfect copy; and most generally, one would end up with two imperfect copies.

    Right. And my view is that this is also the case for general interactions with the environment, where qubits (if you want to characterise them in this way, but I’m not convinced brain state is composed of qubits per se) are disrupted all the time.

    > OK, let’s scratch that last example,

    OK.

    > QM treatment introduces some worries, so I’ll revert to classical probabilistic modelling

    Agreed. Good idea.

    > likewise, if it affects both possible values of the bit symmetrically (i.e. flips 1 to 0 and vice versa)—afterwards, the bit will be in state 1 with probability p and in state 0 with probability (1-p), from which the original state can be recovered by simply undoing the bit flip.

    Not so fast. You have to know that you have to undo the bit flip. You can only know that by reversing the cosmic ray also.

    > But if it interacts asymmetrically—say, if it flips the bit only if it is 1, and leaves a 0 unchanged—then there will be information loss:

    OK, in your example there is information loss.

    > So there exist two kinds of interactions: information-preserving and information-destroying ones.

    Hmm, OK, but I would say the only information-preserving interactions are non-interactions where both your brain and the cosmic ray are left completely unaffected. Any actual interaction would necessarily change the state of the qubit in a way which cannot be reversed without the cosmic ray.

    It’s not clear whether you consider your list of scenarios to be particularly representative or exhaustive. There are important cases you have not considered. For instance, the cosmic ray could slightly increase the probability of the qubit being in state 1 and decrease the probability of it being in state 0. I think this kind of interaction would be much more characteristic of what is actually happening every moment than anything you listed, and it is just the kind of thing that is irreversible without the cosmic ray.

    So, again, if you want identity to imply perfect reversibility from looking at the brain alone, then it means you are dying and being replaced every moment, or that identity is nonsense. This will not do, so we need a more flexible concept of identity.

  58. Not so fast. You have to know that you have to undo the bit flip.

    I don’t mean literally undoing it; I merely mean to say that such a transformation exists (among the admissible transformations taking probability vectors to probability vectors), and that hence, both pre- and post-interaction states carry the same information—for another way of looking at this, the Shannon entropy will in both cases be the same. Or, for yet another way, the bit before and the bit after the interaction will be perfectly correlated. This is what characterizes information-preserving interactions.

    There are important cases you have not considered. For instance, the cosmic ray could slightly increase the probability of the qubit being in state 1 and decrease the probability of it being in state 0.

    Interactions can be characterized fully by the way they act on the basis vectors of the system, thanks to the linearity of the dynamics. So an interaction such as the one you’re describing would be a probabilistic one, say, sending 0 to p*0 + (1-p)*1. You’re right that I didn’t consider those, because a) we know that such interactions in the real world only occur due to quantum uncertainty, and I was considering the case where we don’t take quantum effects into account, and b) they don’t complicate my account significantly—you’d simply have to check whether they conserve information or not.

    Plus, of course, in reality things are going to be more complicated than in the case of a single bit—a system could well employ redundancy and other error-correcting measures to develop some resilience even against dissipative dynamics.

  59. It’s really not terribly different from the case of a newspaper—many interactions will destroy the information within, but a lot also won’t. The only difference is that unlike a deterministic newspaper, a probabilistic one can’t be cloned.

  60. Hi Jochen,

    > Or, for yet another way, both the bit before and after the interaction will be perfectly correlated.

    I really think you’re misapplying concepts here, and I think you’ll see that too if you just think about it a little more. Suppose some qubits are flipped by cosmic rays and others are not, and the pattern of flips is essentially random and can only be understood and reversed by looking at the cosmic rays. In a computer, this would essentially erase all the data on it. In order for information to be preserved, you have to know which bits were flipped and which were not. You have to know the state of the whole system, including cosmic rays.

    > So an interaction such as the one you’re describing would be a probabilistic one
    > we know that such interactions in the real world only occur due to quantum uncertainty, and I was considering the case where we don’t take quantum effects into account

    I’m confused now, because in the original thought experiment, though you asserted you were considering the classical case, you had explicit discussion of probabilities all the same. You said “imagine a bit that is with probability p in the state 0, and with probability (1-p) in state 1”.

    Nevertheless, whether the bit flips will largely depend on the state of the cosmic ray. Thus, in the typical interaction between the notional qubits of your brain and cosmic rays, the transformation will be irreversible unless you reverse both. Thus we are dying every moment unless there is a large amount of redundancy and error correction as you suggest.

    This is the case in the newspaper and in classical computers. The information contained does not depend on any particular atom or particle. All interactions with the newspaper destroy some information (in that the interactions are not reversible with the newspaper alone), but the information they destroy is usually information we don’t care much about, such as the state of a particular electron.

    My position is that this is also the case with brains. That our identity has nothing to do with the state of individual electrons. That there is redundancy and error correction and that it is implausible that the interaction of a single cosmic ray could kill us and replace us with a new person when we are being hit by millions of them every minute. If this is so then the no-cloning theorem does not rule out forking or cloning or mind uploading because our identity is not tied to precise quantum state.

  61. In a computer, this would essentially erase all the data on it.

    But the fact that our computers don’t routinely have all their data erased shows that such interactions can’t be all that frequent, or that it’s possible to guard against their effects, no?

    If this is so then the no-cloning theorem does not rule out forking or cloning or mind uploading because our identity is not tied to precise quantum state.

    This doesn’t follow. Even if our identity doesn’t depend on the precise microstate (about which I agree), that doesn’t mean that it isn’t tied to probabilistic information—take my example from way back when, the question of whether you’ll eat a ham sandwich or a toast Hawaii. There are certainly a great number of microstates corresponding to either possibility, but still, the overall distribution might be genuinely probabilistic—with probability p, it’s one from the set of the ham sandwich-microstates, and with probability (1-p), it’s a toast Hawaii microstate. This still wouldn’t be clonable, but it would be resilient to (a large number of) cosmic ray interactions. Certainly, there is always the possibility that some unfortunate event occurs that might have an effect on these probabilities, but well, this is, again, just what we observe—sometimes, people do in fact lose their identities and die.

  62. Hi Jochen

    > But the fact that our computers don’t routinely have all their data erased shows that such interactions can’t be all that frequent, or that it’s possible to guard against their effects, no?

    It doesn’t show that they’re not frequent, I don’t think. It shows that such interactions don’t matter for most purposes. This is what I am seeking to establish, that minor perturbations of microstate don’t typically make much difference to anything even though we can conceive of situations where they might.

    However, if we deliberately want to make a system which incorporates randomness, then these perturbations might percolate up to make macroscopic differences in behaviour. So a computer that has a device which harnesses quantum randomness might actually be meaningfully influenced by cosmic rays. In my view this does not mean that the identity of the system is altered. It just means that the system could go one way or another and the cosmic rays can influence which way it goes.

    > Even if our identity doesn’t depend on the precise microstate (about which I agree)

    If it doesn’t depend on the precise microstate then I believe the no cloning theorem doesn’t apply.

    > There are certainly a great number of microstates corresponding to either possibility

    True. But even slightly altering a specific microstate will typically have a very small effect on the macroscopic aggregate probability.

    > This still wouldn’t be clonable, but it would be resilient to (a large number of) cosmic ray interactions.

    In my view, it’s either clonable or it is not resilient to cosmic ray interactions. I don’t think you can have it both ways.

    I think we’ve established that the no-cloning theorem doesn’t really prove anything. It shows that you can’t have a perfect reproduction, but we’ve already agreed (it seems to me) that perfect reproduction is not necessary for preservation of identity. Now we’re just haggling over whether cloning would be a more drastic transformation of state than we undergo every day. I don’t see any argument that this should necessarily be so.

  63. However, if we deliberately want to make a system which incorporates randomness, then these perturbations might percolate up to make macroscopic differences in behaviour.

    They might, of course. In that case, you would die—for instance, there certainly is the possibility that a stray cosmic ray strikes some neuron, producing a cascade leading to some kind of seizure that is ultimately fatal.

    But in most cases, errors introduced by cosmic rays can be simply corrected. Consider a system that doesn’t consist of a single random bit, but of three, containing exactly the same information—the change, or even deletion, of one bit won’t make a difference for the information contained in the system, as long as you’ve chosen some appropriate error correcting code (which can be done—in fact, that’s the way things are done in proposed schemes of quantum communication).
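    (A minimal sketch of the three-bit repetition idea, in Python; the encoding and the single flipped copy are made-up illustrations, not a claim about how brains store anything.)

    ```python
    def encode(bit):
        """Three-fold repetition code: store the same bit three times."""
        return [bit, bit, bit]

    def correct(bits):
        """Majority vote recovers the stored bit despite one flipped copy."""
        return 1 if sum(bits) >= 2 else 0

    codeword = encode(1)
    codeword[0] ^= 1          # a hypothetical 'cosmic ray' flips one copy
    print(correct(codeword))  # -> 1: the stored information survives
    ```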

    If it doesn’t depend on the precise microstate then I believe the no cloning theorem doesn’t apply.

    Why wouldn’t it? Whether low-level probabilities or high-level probabilities are considered makes absolutely no difference to the proof of the theorem. You could, for example, have a box—a macroscopic box, like a shoebox—with either a red ball or a green ball in it. In this case, the no-cloning theorem tells you that there is no physical dynamics such that you can produce a copy of that box in which there is a red ball or a green ball with exactly the same probability as in the original box.

    True. But even slightly altering a specific microstate will typically have a very small effect on the macroscopic aggregate probability.

    Not really. If ‘wanting a ham sandwich’ corresponds to all the bit strings from 1-5000, in binary notation, and ‘wanting a toast Hawaii’ corresponds to the bit strings from 5001-10000, then on almost all of these bit strings, a single impact by a cosmic ray won’t change anything: changing one bit in one of those strings will lead to another bit string still in that same set. The same goes for a probabilistic combination of one from the set 1-5000 with probability p, and one of the set 5001-10000 with probability 1-p. If we say that the ‘real’ microstate is 2375 with probability p, and 9873 with probability (1-p), then a cosmic ray impact could probably change this to 2503 with probability p, flipping one bit in the bit-string; but this would equate to the same macrostate.
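    (A quick Python check of this claim is easy to run; it assumes 14-bit strings and a uniformly chosen bit to flip, and simply counts how often a single flip moves a string from one food-choice set into the other.)

    ```python
    def category(n):
        """'ham' for 1-5000, 'hawaii' for 5001-10000, None otherwise."""
        if 1 <= n <= 5000:
            return "ham"
        if 5001 <= n <= 10000:
            return "hawaii"
        return None

    BITS = 14  # 10000 fits in 14 bits (an assumption of this toy count)
    changed = total = 0
    for n in range(1, 10001):
        for k in range(BITS):
            flipped = n ^ (1 << k)
            total += 1
            # count only flips that land in the *other* valid set
            if category(flipped) is not None and category(flipped) != category(n):
                changed += 1

    print(changed / total)  # fraction of single-bit flips that swap the food choice
    ```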

    In my view, it’s either clonable or it is not resilient to cosmic ray interactions. I don’t think you can have it both ways.

    I hope the example above makes it clear that you can.

  64. Perhaps think of this as transmitting a message, from a point t1 in time to a point t2: given a well-chosen code, most single-bit errors, and a lot of many-bit errors, can be corrected, and hence, the information content doesn’t change—and then, the identity stays conserved. This doesn’t change once you introduce probabilistic combinations of messages.

    Only once the message gets too corrupted to still be interpreted at the other ‘end’ is there any loss of information.

  65. Hi Jochen,

    > They might, of course. In that case, you would die—for instance, there certainly is the possibility that a stray cosmic ray strikes some neuron, producing a cascade leading to some kind of seizure that is ultimately fatal.

    I don’t think the scenario you paint is illustrative of what we are talking about. We are talking about the idea that you might choose a ham sandwich instead of a toast Hawaii. To me, that’s fine. To you, messing with these probabilities implies that you have died and an impostor taken your place.

    > But in most cases, errors introduced by cosmic rays can be simply corrected.

    Of course. So most small changes don’t matter. Which to me means the no-cloning theorem doesn’t apply.

    > You could, for example, have a box—a macroscopic box, like a shoebox—with either a red ball or a green ball in it.

    If you’re comparing it to such a classical system then it’s hard to make sense of the idea that there is an underlying probability there. In fact, the probability is 100% that the ball is red if there’s a red ball or 100% that it is green if there’s a green ball. The only probabilities we can meaningfully discuss in such a classical system are epistemic probabilities which have nothing to do with the system and everything to do with the subjective state of the observer. The physical dynamics of cloning such a system are simple. Open the box. See the colour of the ball. Close the box. Put an identical ball in an identical box.

    > then on almost all of these bit strings, a single impact by a cosmic ray won’t change anything

    That’s not quite right the way you’ve described it. Most of these bit strings would be flipped to the other category by a flip in the most significant bit. But your point would probably be made by the observation that most cosmic rays will not hit the most significant bit or change the category.

    > If we say that the ‘real’ microstate is 2375 with probability p, and 9873 with probability (1-p), then a cosmic ray impact could probably change this to 2503 with probability p

    Right. So 0100101000111 (2375) could go to 0100111000111 (2503). Or, it could go to 1100101000111 (6471). The probability that it goes to a number > 5000 is small but it’s there and that will have an overall effect on the macroscopic probability.

    But, honestly, I find your example to be confusing. You seem to be assuming both that there is real hidden variable microstate and that there are intrinsic probabilities in play. This to me is a mixture of two ways of looking at things that doesn’t help at all, much as in the case of the ball in the box which is supposed to have some intrinsic non-epistemic probability of being green or red.

    Besides, you’re basically assuming that the system can be in state x with probability p and state y with probability 1-p, and stipulating that the interaction with the cosmic ray doesn’t change this at all. This doesn’t seem to me to be meeting the challenge but to be ignoring it. You’re not considering the very plausible (in my view overwhelmingly likely) case that external influences routinely affect the probabilities of various quantum measurements one could make, the very same measurements sensitive to the argument of the no cloning theorem.

    Intuitively, I associate things such as deciding what to eat with the firing of neurons, and intuitively I can imagine cosmic rays having some marginal influence on the probability of a neuron firing and so influencing that decision in a tiny way. I totally accept that it is possible to have systems which are resilient to such manipulation; however, I don’t see what the no-cloning theorem has got to do with them, because if they can tolerate and correct for the interference of cosmic rays then I don’t see why they couldn’t tolerate and correct for whatever happens in the process of being copied. This is why we can copy newspapers after all.

    If there is some chance no matter how small that a cosmic ray could change some aspect of a system (e.g. going to 6471 rather than 2503), then on QM that probability is as real as anything else until it has been measured and it gets factored into the probability that the system is in the macroscopic state you care about. This means that the probabilities you are concerned with have to be changing all the time.

    > Perhaps think of this as transmitting a message

    Again, I’m totally on board with what you’re saying here, but if you can perform error correction then that provides a way to survive cloning too irrespective of the no cloning theorem.

  66. Open the box. See the colour of the ball. Close the box. Put an identical ball in an identical box.

    In which case you would quite obviously have failed at producing a box that contains a green ball with some probability, and a red ball with another probability, producing instead a box that contains, say, a green ball with certainty.

    But, honestly, I find your example to be confusing. You seem to be assuming both that there is real hidden variable microstate and that there are intrinsic probabilities in play.

    No. Assume the whole thing is coupled to a quantum system whose measurement would produce one outcome (a) with probability p, and another (b) with probability (1-p). If the outcome of this measurement were a, then the total system, corresponding to the whole mind/brain, would be placed in microstate 2375; if the outcome were b, it would be put in the microstate 9873. Hence, it’s in a probabilistic combination of microstates: p*(2375) + (1-p)*(9873).

    Viewed from the macroscopic point of view, this means it is in a probabilistic combination of p*(want ham) + (1-p)*(want Hawaii). This state is non-clonable, ultimately because the quantum system isn’t. But it’s nevertheless resilient to small changes: flipping a bit may produce the microstate p*(2503) + (1-p)*(9873), which, however, still reproduces the same macrostate. Recall that one gets the effect of an interaction on probabilistic combinations by combining its effects on the basis states (the bit-strings).

    Again, I’m totally on board with what you’re saying here, but if you can perform error correction then that provides a way to survive cloning too irrespective of the no cloning theorem.

    Again, I think you’re misunderstanding what the application of the no-cloning theorem means here. After an attempt at cloning, you would have some entity in the (micro-) state q*(2375) + (1-q)*(9873), with q different from and bounded away from p. Error correction, without knowledge of the original probabilities, doesn’t help at all.

  67. Hi Jochen,

    > producing instead a box that contains, say, a green ball with certainty.

    Which, as I said, is what you started out with. Any probability was in your head and not an aspect of the system being cloned.

    > Assume the whole thing is coupled to a quantum system whose measurement would produce one outcome (a) with probability p, and another (b) with probability (1-p).

    OK, but now what does the numeric microstate give us? Why not stick with a and b? Introducing microstate on top of these probabilities is pointless. The microstate you are describing seems to have no role but to allow for some kind of epiphenomenal interaction with cosmic rays which you can stipulate makes no difference to anything. You might as well simply stipulate that the cosmic ray has no effect.

    What I am suggesting is that the cosmic ray might have an effect on the system that is causally efficacious, i.e. the quantum system that has outcome a or b.

    > flipping a bit may produce the microstate p*(2503) + (1-p)*(9873)

    You’re assuming that there is real underlying probability in the quantum system but not in the interaction that flips the bit. In reality, both would be quantum, and the bit flip would have some small probability of switching the macrostate. If the probability of there being a bit flip is 1/1000, and the probability of a bit flip changing category is about 1/14 (as it would be in your example), then for each of your microstate numbers there is a probability of about 0.00007 (let’s call this k) that it has switched to the other food choice.

    So your actual macrostate is not p*(want ham) + (1-p)*(want Hawaii) but something like [(1-k)(p) + (k)(1-p)](want ham) + [(1-k)(1-p) + (k)(p)](want Hawaii). (I feel I haven’t got this math quite right but you probably get the point).
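    (For what it’s worth, the expression does come out as a proper distribution; a small Python check, with p chosen arbitrarily for illustration:)

    ```python
    p = 0.7                      # illustrative value: original P(want ham)
    k = (1 / 1000) * (1 / 14)    # P(bit flip) * P(the flip changes the category)

    p_ham    = (1 - k) * p + k * (1 - p)
    p_hawaii = (1 - k) * (1 - p) + k * p

    print(k)                  # ~0.00007
    print(p_ham, p_hawaii)    # very close to 0.7 and 0.3
    print(p_ham + p_hawaii)   # 1 (up to rounding): a proper distribution
    ```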

    > with q different from and bounded away from p.

    Right. Just as the introduction of k above changes the probabilities in your example.

    > Error correction, without knowledge of the original probabilities, doesn’t help at all.

    Right. And the same goes for cosmic ray interaction, which I hold will either change the probabilities or you must have enough redundancy and encoding to effectively reconstruct them.

  68. Any probability was in your head and not an aspect of the system being cloned.

    Which, however, doesn’t matter as far as the no-cloning theorem is concerned. Of course, if you go and look into the box, then you don’t have a probabilistic system anymore, so no-cloning no longer applies; but my stipulation from the start was that mental states are probabilistic entities.

    OK, but now what does the numeric microstate give us? Why not stick with a and b?

    I was merely looking for a simple way to illustrate error correction, by pointing out that in a typical macroscopic system, information will be redundantly encoded, and hence, unaffected by stray cosmic rays (barring some massive unlikely event).

    You could equally well consider a system that consists of n copies of one that yields a with probability p, and b with probability (1-p). There again, the loss/corruption of the probabilities on some number of sub-systems is tolerable, while no-cloning nevertheless holds.

    You’re assuming that there is real underlying probability in the quantum system but not in the interaction that flips the bit.

    You’re right, I am assuming that the cosmic ray will interact with just one of the bits. This is because after any interaction, this is what we’ll find. Any cosmic ray traversing matter will in general be sharply localized—think of the tracks in cloud chambers, for instance. Or, think about a CCD-chip: a cosmic ray impinging will in general only light up a single pixel; so simply suppose that the bits are located on those pixels, and we’ll find only one has flipped.

  69. And I mean, ultimately, one only needs to point to the fact that secure and fault-tolerant quantum communication is possible to establish that something like the scheme I’m proposing can be made to work: there, it is likewise the case that errors can be corrected, without sacrificing the security afforded by the no-cloning theorem.

  70. Hi Jochen,

    > Which, however, doesn’t matter as far as the no-cloning theorem is concerned.

    It absolutely does. If all you’re talking about is an epistemic probability, i.e. the credence that you give to a certain proposition being true, then you know the reasons you have for deriving that credence and you can reproduce that. The no cloning theorem makes no sense unless the probability has some meaning outside your head.

    > Of course, if you go and look into the box, then you don’t have a probabilistic system anymore

    The way you described it, you never had a probabilistic system to begin with.

    > You could equally well consider a system that consists of n copies of one that yields a with probability p, and b with probability (1-p).

    How would that work? Say you want to set it so you have a macroscopic probability for a of 0.1 and b of 0.9. How are you going to set the system up, say if you have 1000 subsystems? Let’s try one naive approach, where each of the subsystems yields a with probability 0.1 and b with probability 0.9. How do we generate your macroscopic result? It cannot be by majority poll of the subsystems, because if it were then you’d pretty much always get b. You could get pretty close to what you want by choosing one of the subsystems at random to be representative, but if cosmic rays have changed any of those subsystems then that will affect your overall probability slightly.
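    (A small Python sketch of the two aggregation rules just described, with made-up numbers; the majority poll essentially always returns b, while a randomly chosen representative reproduces the intended probability, and would inherit any nudges to the subsystems.)

    ```python
    import random

    N, P_A = 1000, 0.1   # 1000 subsystems, each yields 'a' with probability 0.1

    # One run of all subsystems:
    outcomes = ['a' if random.random() < P_A else 'b' for _ in range(N)]

    # Majority poll: with P_A = 0.1 this returns 'b' essentially every time.
    majority = 'a' if outcomes.count('a') > N / 2 else 'b'

    # Random representative: 'a' with probability ~0.1, as intended; if cosmic
    # rays had nudged some subsystems' probabilities, this aggregate probability
    # would be nudged by roughly the average of those changes.
    representative = random.choice(outcomes)

    print(majority, representative)
    ```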

    > There again, the loss/corruption of the probabilities on some number of sub-systems is tolerable

    You’re assuming that the redundancy always does its job and there is never a failure. In fact, there is some tiny probability that there will be more corruption than tolerable, and this affects the macroscopic probability of some result that depends on these sub-systems. That doesn’t mean that the whole thing falls apart or is corrupted, because we’re starting with the premise that more than one outcome (ham sandwich or toast Hawaii) is consistent with the stability of the system, so we’re not talking about a computer crashing or a person dying. Rather it just means that some macroscopic random event that could go one way or another with probability p and 1-p now goes one way or another with probability q and 1-q, where q is almost but not quite the same as p.

    In the same way, if there is massive redundancy, then it seems plausible that all the changes to the probabilities of the subsystems would approximately even out in a cloning operation and we would be left with only a tiny change to p and 1-p, just as in the case of cosmic rays.

    > Any cosmic ray traversing matter will in general be sharply localized

    Only after it is measured. And it’s not clear that it’s being measured here. It could end up entangled with the qubit, thereby affecting the qubit’s probability of being in a certain state.

    > And I mean, ultimately, one only needs to point to the fact that secure and fault-tolerant quantum communication is possible to establish that something like the scheme I’m proposing can be made to work:

    Good point. I think I would need to study up on those in order to give you a response. Off the top of my head, as far as I know these systems only work by insulating particles from interaction with the environment. It’s perfectly possible to disrupt or destroy the quantum communication, it’s just not possible to replicate it. They make it impossible to replicate by making it very fragile.

    I don’t see any reason to suppose that the brain is so fragile or that it relies on such delicate insulation from the environment. The brain will work just fine even if the probabilities of certain events are disturbed slightly. It is not crucial to your survival as a biological organism that you choose a ham sandwich over a toast Hawaii.

    But we’re getting out of areas I’m really comfortable with. I don’t really know what I’m talking about. I will allow that I could be wrong, that perhaps there are systems which cannot be cloned but which are robust and resilient in interaction with the environment. I find this to be implausible and wonder if someone who knew more about it could show you to be wrong, but I am not such a person. I also find it implausible that the brain is such a system, because I don’t see why it should be and the examples you have used to illustrate the idea seem to me to fail for one reason or another while being very contrived.

    So, in summary, I will concede an outside chance that the quantum (as opposed to classical) no cloning theorem is a problem for mind uploading or forking or cloning or whatever, but my credence for this is marginal at best.

  71. then you know the reasons you have for deriving that credence and you can reproduce that.

    No, in this case, you’d also know the probabilities, i.e. the whole state, and can just re-prepare. What you need is something like Knightian uncertainty: not only do you not know what’s in the box, you even don’t know with what probability either of the possible options is realized.

    Regardless, you’re right that in the case I’m considering, the probabilities involved are meant to be genuine, not reducible to underlying deterministic dynamics.

    How do we generate your macroscopic result?

    Think of it simply as a repetition of measurements: individual systems may, through say an interaction with a cosmic ray, deviate from the original probability distribution. But, on average, the original probability distribution can still be recovered from the sample, since the errors will tend to average out. Thus, we can ‘error-correct’ to a new ensemble, each of whose members will embody the original information.
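    (A small Monte Carlo sketch of this ensemble-averaging idea, with made-up numbers; the per-copy perturbations are symmetric here, so they really do average out in the estimate.)

    ```python
    import random

    p_true = 0.3     # the 'original' probability of interest (illustrative)
    N = 100_000      # size of the ensemble of copies

    def draw():
        """One copy: sampled from p_true, occasionally perturbed either way."""
        p = p_true
        if random.random() < 0.01:                  # a rare 'cosmic ray'...
            p += random.choice([-0.05, +0.05])      # ...nudges p up or down
        return 1 if random.random() < p else 0

    sample = [draw() for _ in range(N)]
    print(sum(sample) / N)   # close to p_true: the perturbations average out
    ```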

    You’re right to point out that this won’t perfectly well reproduce the original probability distribution. But recall what I said earlier: I don’t require the message to be conveyed with character-by-character precision, I merely require it to still be legible.

    So let’s consider a mental state as a series of ‘characters’, each of which will be a probability distribution over several possible ‘symbols’ (which corresponds to acknowledging that merely the question of whether one wants a ham sandwich or toast Hawaii is not sufficient to pin down the full mental state). Not all of these characters, however, will matter; there is redundancy in the overall state. So, when the probability distribution of some of them gets ‘corrupted’, then, up to a certain point—in fact, up to the compression limit of the message given by the Shannon entropy—the total state will still be preserved.

    Only after it is measured.

    Well, that depends on your interpretation. In objective collapse theories, ‘measurement’ really isn’t different from any other interaction. Also, in many-worlds models, each world will only contain a sharply located cosmic ray, with the uncertainty being manifested in the fact that it takes a different path in every world.

    They make it impossible to replicate by making it very fragile.

    Fragility is not the issue, it’s not what gives security—in such a case, you could always assume a sufficiently adroit adversary in order to read the message. The security comes from the impossibility of cloning.

    So, in summary, I will concede an outside chance that the quantum (as opposed to classical) no cloning theorem is a problem for mind uploading or forking or cloning or whatever, but my credence for this is marginal at best.

    Mind uploading (destructively) would actually still work, under any no-cloning theorem (in the quantum case, it’d basically be what’s somewhat misleadingly called ‘quantum teleportation’). And as regards to quantum processes occurring in nature, recent years have brought about exciting proposals in this regard, and many people are now sure that things like avian magnetoreception, photosynthesis, and even olfaction depend on quantum coherent processes—whose coherence is often, in fact, bolstered rather than destroyed by the noisy environments they’re immersed in.

  72. Hi Jochen,

    > Regardless, you’re right that in the case I’m considering, the probabilities involved are meant to be genuine

    Right, so I’m more amenable to arguments from the quantum no cloning theorem than the classical analogue, which I don’t think really proves anything fundamental but only about what can be done given limited epistemic access.

    > But, on average, the original probability distribution can still be recovered from the sample, since the errors will tend to average out.

    Sure! They will tend to even out. But you won’t get *exactly* the right probability, you’ll get very close. The same approach you have used can more or less get around the classical no cloning theorem. By observing the outputs of a probabilistic system for long enough, you will tend to observe results that hew very close to the underlying probability. But there is no way to get it exactly right and that’s what the theorem proves. If you can tolerate a little error then you have severely undercut the argument that cloning is impossible, because the no cloning theorem only proves you can’t make an exact replica, not that you can’t make an approximate replica.
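    (A sketch of this ‘approximate classical cloning by sampling’ point, with an arbitrary underlying probability; the estimate is typically off by something of order 1/sqrt(m), small but never guaranteed to be exactly zero.)

    ```python
    import random

    p_original = 0.3712345   # the underlying probability (illustrative)
    m = 1_000_000            # number of observed outputs

    hits = sum(random.random() < p_original for _ in range(m))
    p_clone = hits / m       # the probability the approximate 'clone' would use

    print(abs(p_clone - p_original))  # typically of order 1/sqrt(m): small, not zero
    ```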

    > And as regards to quantum processes occurring in nature

    I don’t doubt that quantum processes happen in nature. I doubt that these quantum processes depend crucially on being isolated from external interference, which could come in the form of cosmic rays or cloning/forking.

  73. But you won’t get *exactly* the right probability, you’ll get very close.

    I’ve already discussed this in my last post. And as for no cloning, there is actually a finite difference between the closest clone and the original, while with sufficient error correction, you can achieve any desired degree of approximation.

    I doubt that these quantum processes depend crucially on being isolated from external interference, which could come in the form of cosmic rays or cloning/forking.

    Yes, that’s why I brought up the above examples in order to show that even in non-isolated systems, quantum coherence can be preserved.

  74. > And as for no cloning, there is actually a finite difference between the closest clone and the original, while with sufficient error correction, you can achieve any desired degree of approximation.

    It seems to me that the error correction might allow a clone with any desired degree of approximation also.

    Anyway, in conclusion, it seems to me that the no cloning theorem is about getting absolute precision. Once you get away from that then I don’t think that the no cloning theorem really applies. I don’t think there’s much to say on it. We’re basically just trading assertions.

  75. Have you ever wondered what it would be like to have an android body? What about mind uploading? I do not believe that mind uploading will offer immortality or life extension because it will not actually be you. While it is a way to live on, you would just be able to see a new version of yourself from your old body. Even as an android, I imagine you may have difficulties. For instance, food and alcohol would not be the same. I have written a fictional story about a person who has uploaded his mind into a perfect android body. You would think he would be living in a dream. But he will find that his past will come back to haunt him. My story is called A New Self by Roy Wells. You can borrow the book for free if you have a Kindle and Amazon Prime. I hope you enjoy it!
