Downloading Hauskeller

Michael Hauskeller has an interesting and very readable paper in the International Journal of Machine Consciousness on uploading – the idea that we could transfer ourselves from this none-too-solid flesh into a cyborg body or even just into the cloud as data. There are bits I thought were very convincing and bits I thought were totally wrong, which overall is probably a good sign.

The idea of uploading is fairly familiar by now; indeed, for better or worse it resembles ideas of transmigration, possession, and transformation which have been current in human culture for thousands of years at least. Hauskeller situates it as the logical next step in man’s progressive remodelling of the environment, while also nodding to those who see it as  the next step in the evolution of humankind itself. The idea that we could transfer or copy ourselves into a computer, Hauskeller points out, rests on the idea that if we recreate the right functional relationships, the phenomenological effects of consciousness will follow; that, as Minsky put it, ‘Minds are what Brains do’. This remains for Hauskeller a speculation, an empirical question we are not yet in a position to test, since we have not as yet built a whole brain simulation (not sure how we would test phenomenology even after that, but perhaps only philosophers would be seriously worried about it…). In fact there are some difficulties, since it has been shown that identical syntax does not guarantee identical semantics (So two identical brains could contain identical thoughts but mean different things by them – or something strange like that. While I think the basic point is technically true with reference to derived intentionality, for example in the case of books – the same sentence written by different people can have different meanings – it’s not clear to me that it’s true for brains, the source of original intentionality.).

However, as Hauskeller says, uploading also requires that identity is similarly transferable, that our computer-based copy would be not just a mind, but a particular mind – our own. This is a much more demanding requirement. Hauskeller suggests the analogy of books might be brought forward; the novel Ulysses can be multiply realised in many different media, but remains the same book. Why shouldn’t we be like that? Well, he thinks readers are different. Two people might both be reading Ulysses at the same moment, meaning the contents of their minds were identical; but we wouldn’t say they had become the same person. Conceivably at least, the same mind could be ‘read’ by different selves in the same way a single book can be read by different readers.

Hauskeller’s premise there is questionable – two people reading the same book don’t have identical mental content (a point he has just touched on, oddly enough, since it would follow from the fact that syntax doesn’t guarantee semantics, even if it didn’t follow simply from the complexity of our multi-layered mental lives). I’d say the very idea of identical mental content is hard to imagine, and that by using it in thought-experiments we risk, as Dennett has warned, mistaking our own imaginative difficulties for real-world constraints. But Hauskeller’s general point, that identity need not follow from content alone, is surely sound enough.

What about Ray Kurzweil’s argument from gradualism? This points out that we might replace someone with cyborg parts bit by bit. We wouldn’t have any doubt about the continuing identity of someone with a cyborg eye; nor someone with an electronic hippocampus. If each neuron were replaced by a functional equivalent one by one, we’d be forced to accept either that the final robot, with no biological parts at all, was indeed the same continuing person, or that at some stage a single neuron made a stark binary difference between being the same person and not being the same person. If the final machine can be the same person, then uploading by less arduous methods is surely also possible, since it’s equivalent to making the final machine by another route?

Hauskeller basically bites Kurzweil’s bullet. Yes, it’s conceivable that at some stage there will come neurons whose replacement quite suddenly switches off the person being operated on. I have a lot of sympathy with the idea that some particular set of neurons might prove crucial to identity, but I don’t think we need to accept the conceivability of sudden change in order to reject Kurzweil’s argument. We can simply suppose that the subject becomes a chimera; a compound of two identically-functioning people. The new person keeps up appearances alright, but the borders of the old personality gradually shrink to destruction, though it may be very unclear when exactly that should be said to have happened.

Suppose (my example) an image of me is gradually overlaid with an image of my identical evil twin Retep, one line of pixels at a time. No one can even tell the process is happening, yet at some stage it ceases to be a picture of me and becomes one of Retep. The fact that we cannot tell when does not prove that I am identical with Retep, nor that both pictures are of me.

Hauskeller goes on to attack ‘information idealism’. The idea of uploading often rests on the view that in the final analysis we consist of information, but

Having a mind generally means being to some extent aware of the world and oneself, and this awareness is not itself information. Rather, it is a particular way in which information is processed…

Hauskeller, provocatively but perhaps not unjustly, accuses those who espouse information idealism of Cartesian substance dualism; they assume the mind can be separated from the body.

But no, it can’t: in fact Hauskeller goes on to suggest that in fact the whole body is important to our mental life: we are not just our brains. He quotes Alva Noë and goes further, saying:

That we can manipulate the mind by manipulating the brain, and that damages to our brains tend to inhibit the normal functioning of our minds, does not show that the mind is a product of what the brain does.

The brain might instead, he says, be like a window; if the window is obscured, we can’t see beyond it, but that does not mean the window causes what lies beyond it.

Who’s sounding dualist now? I don’t think that works. Suppose I am knocked unconscious by the brute physical intervention of a cosh; if the brain were merely transmitting my mind, my mental processes would continue offstage and then when normal service was resumed I should be aware that thoughts and phenomenology had been proceeding while my mere brain was disabled. But it’s not like that; knocking out the brain stops mental processes in a way that blocking a window does not stop the events taking place outside.

Although I take issue with some of his reasoning, I think Hauskeller’s objections have some force, and the limited conclusion he draws – that the possibility of uploading a mind, let alone an identity, is far from established – is true as far as it goes.

How much do we care about identity as opposed to continuity of consciousness? Suppose we had to choose between on the one hand retaining our bare identity while losing all our characteristics, our memories, our opinions and emotions, our intelligence, abilities and tastes, and getting instead some random stranger’s equivalents; or on the other losing our identity but leaving behind a new person whose behaviour, memories, and patterns of thought were exactly like ours? I suspect some people might choose the latter.

If your appetite for discussion of Hauskeller’s paper is unsatisfied, you might like to check out John Danaher’s two-parter on it.

26 thoughts on “Downloading Hauskeller”

  1. >”Suppose we had to choose between on the one hand retaining our bare identity while losing all our characteristics, our memories, our opinions and emotions, our intelligence, abilities and tastes…”

    What exactly is this ‘bare identity’ which you seem to think can be separated from those other characteristics you provide? It sounds to me like a confused notion, almost like saying let’s retain the bare identity of a red rubber ball while losing the red color, the rubber material, and the spherical shape.

  2. Fair point – after all, Leibniz’s Law defines identity as possession of a completely identical set of characteristics, doesn’t it?

    I suppose ‘bare identity’ would have to be something like the special characteristic that no-one else standing beside you could ever have, no matter how identical otherwise. Hauskeller actually refers to something like this, speaking of it as ‘numerical’ – I suppose he means something like that you still get a unique number even when standing in a row of identical n-tuplets.

    You might want to deny the existence of such a thing, and I’m not especially concerned to defend it here – I’d be content to rephrase so I was saying ‘even if such a puzzling thing as bare identity existed, how much would it be worth?’

  3. Peter and haig, I would say that ‘bare identity’ is consciousness itself, and without this primitive identity all the other characteristics to which you refer could not exist. In the retinoid theory of consciousness this bare identity is a sense of being at the origin of a spatial surround. In *Space, self, and the theater of consciousness*, I refer to this fundamental level of consciousness as C1. See the bottom of p. 327, here:

    http://people.umass.edu/trehub/YCCOG828%20copy.pdf

  4. “How much do we care about identity as opposed to continuity of consciousness? Suppose we had to choose between on the one hand retaining our bare identity while losing all our characteristics, our memories, our opinions and emotions, our intelligence, abilities and tastes, and getting instead some random stranger’s equivalents; or on the other losing our identity but leaving behind a new person whose behaviour, memories, and patterns of thought were exactly like ours? I suspect some people might choose the latter.”

    I agree with Haig (as do you perhaps to some extent) that the idea of “bare identity” – something independent of our memories, opinions, emotions, intelligence, abilities and tastes that still picks out a unique person – seems implausible. To leave behind a person with behaviour, memories, and patterns of thought exactly like mine would be for me to continue in all respects that matter, as Derek Parfit would argue (Reasons and Persons).

    Still, one’s bare *consciousness* – the experience of being a witnessing self independent of one’s characteristics – does exist, such that if all my characteristics were changed bit by bit so that I became a completely different person, that bare consciousness would be continuous across the transformation.

    Note that I’m not saying there’s an *entity* that stays the same, only that the *experience* of being a particular self which possesses various characteristics is continuous. So I as a particular person with various characteristics could disappear, to be replaced by someone else, even as the consciousness of being a particular self (being uniquely here and now) continues. I use this possibility in a thought experiment that suggests we shouldn’t anticipate the end of consciousness at death, even though *we* end of course, http://www.naturalism.org/death.htm#contintuity

  5. Hi – my first time commenting here, although I have enjoyed reading your posts, Peter, and the discussions they stimulate for several years.

    The above post reminds me of a theme used by the hard Sci-Fi author Greg Egan in many of his novels.

    One question that always plagued me concerned the instantaneous scanning and upload of a human consciousness (rather than the gradual ‘chip-head’ scenario). I imagine myself as the subject. Before being anaesthetised and placed into the scanning machine, I am given the choice as to whether my biological consciousness is ‘terminated’ (a bullet to the head) the instant the scan and upload is complete, without being revived from anaesthesia. Or, whether to allow my biological self to be awakened back into the real world. Which should I choose?

    In the latter case ‘I’ run the risk (50/50) of awakening back into my biological body and the whole exercise being pointless – the 50/50 risk of this seems obvious to me, but please anyone, feel free to poke holes in this assumption. However, if I choose to end my biological instantiation before revival, does this then guarantee ‘I’ awake in my virtual heaven to live happily EVER AFTER, or do I still run the same 50/50 risk, only this time with much higher stakes – ie virtual heaven versus annihilation?

  6. Tom: “Still, one’s bare *consciousness* – the experience of being a witnessing self independent of one’s characteristics – does exist, such that if all my characteristics were changed bit by bit so that I became a completely different person, that bare consciousness would be continuous across the transformation.”

    So ‘bare consciousness’ functions as ‘bare identity’?

    Every time I read your stuff, Tom, I always get the uncanny sense that you’re theorizing around the basic insight that informs my position. So in Blind Brain terms, your piece on death turns on what I call the ‘asymptotic structure’ of experience, the way the only boundaries that can be consciously experienced are the boundaries that can be framed within conscious experience. If one conceives of death as an absolute boundary, then it becomes impossible to consciously experience, which provides the basis for your argument: that death be conceived as ‘continuity by other means,’ insofar as it represents, for conscious experience, an interval that can never be consciously experienced.

    But here’s a question: Why doesn’t this ‘inability to experience death’ simply count as a *cognitive shortcoming,* one more example in a long list of examples where the perspectival constraints pertaining to conscious experience prevent or problematize the adequate cognition of what is actually going on? Aren’t you advocating that we embrace a peculiar and profound form of ignorance?

    It seems to me that what you’re calling ‘identity’ here is clearly an artifact of missing information and/or cognitive incapacity.

  7. Thanks for clarifying, Peter. Yes, Leibniz’s identity of indiscernibles comes in handy here, and though two separate entities can, at least for the sake of this thought experiment, be completely identical to each other by having all the same characteristics, the fact that there is more than one of them means each can have a unique ordinal value and thus be different from the other after all.

    The problem of uploading is only a paradox when looked at from the erroneous perspective of trying to ‘preserve the self’ instead of ‘transcending the self’. Asking ‘is the upload really me?’ is the wrong question, instead one should ask ‘what can uploading enable us to become?’

  8. Hi haig,

    Not sure what you mean by ‘transcending the self’? Surely the ‘self’ is all we have; the only thing we can be absolutely, 100% sure of – cogito ergo sum.

    As for ‘preserving the self’ being an erroneous perspective, isn’t that the whole point of consciousness uploading (and most religious promises of an ‘afterlife’ as well)?

    To be frank, if I pay my hard earned wonga to Cybermind Corp in order to ensure an immortal existence in the environment of my choice (my personal heaven), I certainly would be pretty miffed if I came round after the scan to find myself still in the ‘real’ world with the prospect of having to turn up for work the next day because I’d just blown my life’s savings on the upload procedure.

    Assurances from Cybermind’s CEO that ‘a version’ of ‘me’ currently resided in their servers and was having a very nice time thank you very much just wouldn’t console me I’m afraid!

    Tom: I read your paper linked in your last post and enjoyed it immensely. Your description of the subject always being present was very well presented, and I agree entirely with you – when the two absolutes are considered. I have undergone general anaesthesia for broken bones etc., and the way you describe it is exactly accurate – no subjective time elapses during the unconscious state. I would, however, disagree that sleep, or even drug-induced states, are like this; I think there is a spectrum of consciousness from fully present (awake) to totally unconscious (anaesthesia really is the only time I have ‘experienced’ total unconsciousness). This is a small point, however, and does not detract much from your main argument.

    But again I find little in your thesis to console me with regard to the prospect of my ultimate death and the destruction of my consciousness. As you carry your thought experiment through to its conclusion, when the radically altered subject awakens, this person is no longer ‘me’, as you point out. I have, in fact, died quite as thoroughly as if I had been run over by the proverbial bus! The fact that another, presumably very confused (having just popped into existence de novo) entity is now conscious means absolutely nothing to me (or, as I’m dead by this stage, more accurately, the prospect of this entity being conscious would have meant nothing to me prior to my unconscious interval).

    Again, the ‘self’ is all we have, or at least it’s all I have – I can’t be sure about anyone else after all!

  9. DD – (on your first comment) I suppose you arrive at the 50/50 chance because your self has two equally good places to end up in, so each has an equal chance.

    But it could be otherwise, couldn’t it? It could be that you automatically stay in the old body unless it’s destroyed; it could be that there are two of you with an equal claim to your identity after the scanning – or two new people, neither of whom is quite you any more.

    Personally I’m less comfortable with examples where the original body has to be destroyed as a separate operation apart from the scanning. Sheer physical continuity in those cases makes me feel that the result must surely be death for the original. If the scanning necessarily destroys the original it seems more like a transfer (although ‘seems’ is certainly the word – no strong reason why, for example, a delay or overlap – with the new you being created before the old one is destroyed – should necessarily make such a fundamental difference).

  10. To me, the critical issue is this bare identity concept. We are all transferred from our present self to our next-instant self, every instant. At each leap, probably, many elements of our brain’s configuration change. Our “character” changes. There is a “new you” at every new instant, so what remains? Memories. Memories of the events, and memories of “how we were” and how we felt and thought. Bare identity is just this record, fidelity problems aside. It is only this record that allows us to identify us as us. Now, how many of these memories, and which ones, have to be reasonably preserved in order to keep your identity integrity?

    Something that has always amazed me is how, when dreaming, even if it is one of those absurd nonsense dreams in which you pop up in a certain scenario, not knowing how (and not caring about how!?), I still know who I am. The reason, I believe, is that during the dream I always recall some stuff of my past, in one way or another… It would be interesting to ask subjects under the effect of psychotropic drugs (Aldous Huxley style) how they feel about their identity. I bet it is memories that keep it in one piece.

    Even at a subconscious level there is always a connection to your memory that enables and evokes the feeling of being you.

    What is this record? It is definitely not a database. Would it be possible to copy and transfer this record to somewhere other than its original owner? The owner and the record (at the present instant) are one single thing.

    In my opinion only a perfect instantaneous cloning could make sense, and it seems not to be allowed by physics.

  11. If we understand ‘Thing 1 being copied perfectly to Thing 2’ as follows: (i) Thing 1 is picked out as some concrete material object; (ii) Thing 2 is identical with Thing 1 down to the lowest rules of physics – then it is obvious that, being a copy, Thing 2 will never be the same as a Thing 1 that was never measured and copied. There will always be some gap into which we can stick some phenomenal theories of consciousness. However, we probably do not have to copy so perfectly to create a copy which would be a perfect copy from any measurable perspective 🙂 so perhaps this long-running discussion is beside the point. http://www.philosophyinscience.com

  12. Vicente: “Bare identity is just this record, fidelity problems aside. It is only this record that allows us to identify us as us. Now, how many of these memories, and which ones, have to be reasonably preserved in order to keep your identity integrity?”

    Your computer has a record of memories, but it has no self identity. Don’t you see that you have to have a *self* to which memories are attached before you can identify them as *your* memories? Your primitive identity must be given by the unique perspective of the self-locus (I!) within your brain’s retinoid space. Given this and the added mechanisms of your cognitive brain, all else follows.

  13. Scott in #6:

    “So ‘bare consciousness’ functions as ‘bare identity’?”

    No, what I meant was that bare consciousness (the sense of being a self Arnold is getting at in #12) is that which remains constant across *changes* in one’s identity as a particular person with particular characteristics. There is no “bare identity” that picks out a particular person, rather the sense of being a self is common to all of us.

    “Why doesn’t this ‘inability to experience death’ simply count as a *cognitive shortcoming,* one more example in a long list of examples where the perspectival constraints pertaining to conscious experience prevent or problematize the adequate cognition of what is actually going on? Aren’t you advocating that we embrace a peculiar and profound form of ignorance?”

    The cognitive shortcoming or mistake that I point out in the paper on death is that people think they are going to experience non-being, permanently. Understanding that one’s particular consciousness is bounded by death helps to correct this mistake. Seeing that we shouldn’t anticipate nothingness at death isn’t to embrace ignorance, seems to me, but the opposite. But perhaps I’ve misunderstood what you’re driving at.

  14. “The cognitive shortcoming or mistake that I point out in the paper on death is that people think they are going to experience non-being, permanently.”

    People think this? I’m not so sure. I actually think people largely assume the third-person perspective: you (whatever you are) tarry for a time, then kaput – which is quite correct, as it turns out. What you’re trying to do, it seems to me, is canonize the first-person perspective, and the way its limitations allow one to interpret away the third-person fact of non-existence.

  15. Arnold,

    My computer has no record of memories. It has information stored in arrays of bits (flip-flop states) that can be retrieved when properly addressed and processed.

    That information only becomes meaningful when it reaches a conscious mind; before that, it is just physical stuff showing a funny behaviour.

    The DNA that encodes a brain’s development (info) probably has as many memories at embryo stages as it has 90 years later, but the brain has stored a whole life’s experience. Information is part of experience, but there are many other ingredients.

    The “experience”, the sum and integration of many memories, is what I believe provides the continuum of that self.

    You cannot identify them as your memories, because those memories define you in the first place. The way in which memories are processed evolves throughout life, and memories do too, but the experience substrate is somehow preserved.

    Please note that I am referring to the feeling of being somebody… sometimes referred to as “the roots”.

    What I would like to know is what minimum of experience memories, in quantity and quality, is required to keep you being you.

    I agree with you that a point of view is needed, a perspective, but that can be quite common to all conscious entities. We are discussing what makes each of us feel like us when we look at reality from the (I!).

    It could happen that we all share a common space to a large extent. We are just variations on a unique theme. Theoretically, two twins having very similar lives could be said to be almost one single person.

  16. Tom, Scott:

    Humans cannot handle the concept of “non-existence”. “Nothingness” cannot be implemented in human minds. There is no element in our ordinary experience upon which to construct a metaphor for it. For the same reason we cannot understand quantum physics. Of course, the concepts are there, but they are not fully meaningful to us; there is no mental model for them.

    So for materialists, a sort of never-ending stay in void and obscurity is the closest image to kaput they can produce in order to model “non-being”.

    Now, the interesting thing is that when you try to make that mental exercise, to picture yourself in a state of “nothingness”, there is still something there that makes you feel you – what is it? I believe it is a permanent subconscious connection to your experience bank that constitutes a pillar of the emotional mental frame. Something similar to the retinoid space, but for feelings.

    Severe memory loss conditions are usually behind identity problems, aren’t they?

  17. Scott in 15:

    “People think this? [that they will experience non-being as nothingness] I’m not so sure.”

    Well, I took pains to present enough examples at the start of the paper to show that many do think it, at least until the mistake is pointed out to them. And I continue to run across more examples, see for instance the last chapter (“Return to Nothingness”) of Jim Holt’s book Why Does the World Exist? which is rife with references to impending nothingness at death: “…it is the prospect of nothingness that induces in me a certain queasiness.” p. 268 (It’s a well-argued and very entertaining book otherwise.)

    “What you’re trying to do, it seems to me, is canonize the first-person perspective, and the way its limitations allow one to interpret away the third-person fact of non-existence.”

    It’s an indisputable 3rd person fact that persons cease to exist, but we shouldn’t anticipate nothingness at death. Since as we agree consciousness is bounded, it can’t experience its own non-being, in which case for the first-person perspective there is *only* being, never nothing.

    So what should we anticipate at death? The thought experiment in the paper, if you follow it through, suggests that we should anticipate experience, but not in the context of the person who dies. I call this “generic subjective continuity,” as opposed to *personal* subjective continuity, which is what we have from birth to death.

  18. Vicente, the information stored in your computer is its memory. I certainly agree that its memory has no meaning for the computer, but this is a separate issue.

    Vicente: “The ‘experience’, the sum and integration of many memories, is what I believe provides the continuum of that self. …. You cannot identify them as your memories, because those memories define you in the first place.”

    The integration of many memories contributes to your phenomenal self model (Metzinger’s PSM). But your PSM is not your core self (I!) which is the fixed event providing the continuum of you, the continuing feeling of being you, to which all of your memories are attached. The PSM is built around I!.

    Vicente: “I agree with you, that a point of view is needed, a perspective, but that can be quite common to all conscious entities.”

    All conscious entities have an egocentric perspective, so some point of view is found in all, but each conscious entity has its own privileged/unique perspective within its personal retinoid space.

    The common space shared by all is the physical world in which we live, but no phenomenal space can be shared by another.

  19. Arnold:

    “The common space shared by all is the physical world in which we live, but no phenomenal space can be shared by another.”

    Nicely put! The physical world is that which is public, shared, intersubjective; it’s 3rd person space as described by science, extended and quantitative. But experience – phenomenal, 1st person space – is categorically private, unshared, and qualitative. And thus its denizens are not possible objects of scientific, intersubjective observation. I will never feel, undergo or observe your or anyone else’s pain, which I suppose is a good thing given I have my own pains to deal with 🙂

  20. Tom: But we *conceive* things we’ve never experienced all the time. Why should the conception of death be any different aside from the fact that, unlike most other everyday things, we cannot imagine ‘what it would be like’ *for us*? You seem to be confusing the inability to experience ‘first-personal nonbeing’ with the inability to *cognize* it.

    As a result, your conclusion smacks of a deflationary palliative, following through on the basic intuitions that underwrite ‘life-after-death’ convictions without the ontological extravagances offered by traditional religion.

    Arnold: “The common space shared by all is the physical world in which we live, but no phenomenal space can be shared by another.”

    It would be helpful to know what ‘phenomenal space’ meant! But given that it is something natural, I don’t see why myriads of people can’t share a given phenomenal space. ‘Shared experience’ pretty clearly seems to be a fundamental social cornerstone. After all, something keeps people coming back to McDonald’s! Is it their nonshared, perfectly private experience of eating a Big Mac?

  21. Scott:

    “You seem to be confusing the inability to experience ‘first-personal nonbeing’ with the inability to *cognize* it.”

    Well, I make clear that we can and should cognize our pending non-being, that is, understand that we will die as an objective fact. It’s an easy thing to do. But some folks, I hope you now agree, have the intuition that we will experience or undergo that non-being, for instance as a falling into permanent nothingness. So I don’t think I’m confusing these things, just pointing out that we can do the former but not the latter.

    “…your conclusion smacks of a deflationary palliative, following through on the basic intuitions that underwrite ‘life-after-death’ convictions without the ontological extravagances offered by traditional religion.”

    Whatever it smacks of, is it a reasonable conclusion, given the evidence and arguments?

  22. Scott: “It would be helpful to know what ‘phenomenal space’ meant!”

    Phenomenal space is where our conscious experience of the world around us happens. According to the retinoid theory of consciousness, phenomenal space is retinoid space which is an integral part of the retinoid system. The minimal neuronal structure and dynamics of the retinoid system have been specified and its functional properties have been empirically tested. There is a considerable amount of evidence in support of the retinoid model. Since your retinoid/phenomenal space is a part of your brain, others are unable to share this space just as they are unable to share the space of your heart.

    Scott: “‘Shared experience’ pretty clearly seems to be a fundamental social cornerstone.”

    There is a crucial difference between a ‘shared experience’ and a ‘shared phenomenal space’. The former can happen when several people are involved in the same situation (eating at McDonald’s?); the latter can not happen for the reason given above. We can infer and imagine what another’s phenomenal space/experience is like, but we cannot actually share it.

    Of course, if we grant literary license, anything is possible — and thank goodness for that.

  23. Tom: I fear that I don’t find the thesis that people think of death as an experience of nothingness as opposed to nothingness very convincing. I do think that people rely on the experiences they have to metaphorically broach experiences they haven’t – or cannot have. And I think this is a more sober conclusion to draw from the evidence you adduce (which, you have to admit, can be interpreted in several ways). Death is literally unimaginable, but we imagine nonetheless – concoct ‘permanent nothingnesses’ – *as best as we can.* People are pretty canny about caveats like this, I think.

    For me the crucial issue is one of how we should *conceptualize* this ubiquitous feature of conscious experience, these ‘Limits With One Side’ so dramatically epitomized by death. I think you’ve set your teeth in a very real issue – in some respects, *the* issue.

  24. Arnold: It all depends on how you define the parameters of ‘share’, sure. We share phenomenal experiences the way we share an appendix, as a certain component in a larger mechanism. Your appendix is your appendix, but who would suppose otherwise? And it remains fair to say I possess the same appendix. As the product of neural activity, phenomenal experiences are computational organs – one would hope! So what else would sharing be other than this?

    Because if this is what sharing amounts to, then we share phenomenal experiences as intimately as anything that can be shared. Whatever the hell they ‘are.’

    But I suspect one could toss a debate like this down the well of conceptual definition.

  25. @#8. DD:

    > “Surely the ‘self’ is all we have; the only thing we can be absolutely, 100% sure of – cogito ergo sum.”

    The ‘self’ is transient, dynamic, always changing, and always reacting to its environment. “I think therefore I am” is antiquated; it should be “I am thinking, therefore I am becoming”.

    > “As for ‘preserving the self’ being an erroneous perspective, isn’t that the whole point of consciousness uploading (and most religious promises of an ‘afterlife’ as well)?”

    Too many people think like this, unfortunately: they view preserving the self from the defensive position, to avoid death and disease, but fail to look at it from a growth perspective – that ‘uploading’ is a transition into something much more than what you are now. If you think uploading just means you will live indefinitely in some form similar to your current body/mind, then that’s just a failure of imagination. I was at a lesswrong meetup and one guy, an avid baseball player, was just concerned about whether, if he went through cryonics and had his brain uploaded without his body, he’d be able to throw a baseball the way he does now with his physical body. That means something to him, OK, I understand that, but it’s such a shortsighted understanding of what the true ramifications of uploading are.
