Scriptorium

Homo Artificialis had a post at the beginning of the month about Dmitry Itskov and his three-part project:

  • the development of a functional humanoid synthetic body manipulated through an effective brain-machine interface
  • the development of such a body, but including a life-support system for a human brain, so that the synthetic body can replace an existing organic one, and
  • the mapping of human consciousness such that it, rather than the physical brain, can be housed in the synthetic body

The daunting gradient in this progression is all too obvious. The first step is something that hasn’t been done but looks to be pretty much within the reach of current technology. The second step is something we have a broad idea of how to do – but it goes well beyond current capacity and may therefore turn out to be impossible even in the long run. The third step is one where we don’t even understand the goal properly or know whether the ambitious words “mapping of human consciousness” even denote something intelligible.

The idea of transferring your consciousness, to another person or to a machine, is often raised, but there isn’t always much discussion of the exact nature of the thing to be transferred. Generally, I suppose, the transferists reckon that consciousness arises from a physical substrate and that if we transfer the relevant properties of that substrate the consciousness will necessarily go with it. That may very well be true, but the devil is in that ‘relevant’.  If early inventors had tried to develop a flying machine by building in the relevant properties of birds, they would probably have gone long on feathers and flapping.

At least we could deal with feathers and flapping, but with consciousness it’s hard even to go wrong creatively because two of the leading features of the phenomenon as we now generally see it are simply not understood at all.  Qualia have no physical causality and are undetectable; and there is no generally accepted theory of intentionality, meaningfulness.

But let’s not despair too easily.  Perhaps we shouldn’t get too hung up on the problems of consciousness here, because the thing we’re looking to transfer is not consciousness per se but a consciousness. What we’re really talking about is personal identity. There’s a large philosophical literature about that subject, which was well established centuries before the issues relating to consciousness came into focus, and I think what we’re dealing with is essentially a modern view that personal identity equates to identity of consciousness. Who knows, maybe approaching from this angle will provide us with a new way in?

In any case, I think a popular view in this context might be that consciousness is built out of information, and that’s what we would be transferring. Unfortunately identity of information doesn’t seem to be what we need for personal identity. When we talk about the same information, we have no problem with it being in several places at once, for example: we don’t say that two copies of the same book have the same content but that the identity of the information is different; we say they contain the same information.  Speaking of books points to another problem: we can put any information we like down on paper but it doesn’t seem to me that I could exist in print form (I may be lacking in energy at times, but I’m more dynamic than that).

So perhaps it’s not that the information constitutes the identity; perhaps it simply allows us to reconstruct the identity? I can’t be a text, but perhaps my identity could persist in frozen recorded form in a truly gigantic text?

There’s a science fiction story in there somewhere about slow transfer; once dematerialised, instead of being beamed to his destination, the prophet is recorded in a huge set of books, carried by donkey across the deserts of innumerable planets, painstakingly transcribed in scriptoria, and finally reconstituted by the faithful far away in time for the end of the world. As a side observation, most transfer proposals these days speak of uploading your consciousness to a computer; that implies that whatever the essential properties of you are, they survive digitisation. Without being a Luddite, I think that if my survival depended on there being no difference between a digitised version of me and the messy analogue blob typing these words, I would get pretty damn picky about lossy codecs and the like.

Getting back to the point, then: perhaps the required identity is like the identity of a game? We could record the positions half-way through a chess game and reproduce them on any chessboard.  Although in a sense that does allow us to reconstitute the same game in another place or time, there’s an important sense in which it wouldn’t really be the same game unless it was the same players resuming after an interval. In the same way we might be able to record my data and produce any number of identical twins, but I’d be inclined to say none of them would in fact be me unless the same… what?

There’s a clear danger here of circularity if we say that the chess game is the same only if the same people are involved. That works for chess, but it will hardly help us to say that my identity is preserved if the same person is making the decisions before and after. But we might scrape past the difficulty if we say that the key thing is that the same plans and intentions are resumed. It’s the same game if the same strategies and inclinations are resumed, and in a similar way it’s the same person if the same attitudes and intentions are reconstituted.

That sounds alright at first, though it raises further questions, but it takes us back towards the quagmire of intentionality, and moreover it faces the same problem we had earlier with information. It’s even clearer in the case of a game of chess that someone else could have the same plans and intentions without being me; why should someone built to the same design as me, no matter how exquisitely faithful in the minutest detail, be me?

I confess that to me the whole idea of a transfer seems to hark back to old-fashioned dualism.  I think most transferists would consider themselves materialists, but it does look as if what we’re transferring is an ill-defined but important entity distinct from the simple material substance of our brain. But I’m not an abstraction or a set of information, I’m a brute physical entity. It is, I think, a failure to recognise that the world does not consist of theory, and that a theory of redness does not contain any actual red, which gives rise to puzzlement over qualia; and a similar failure leads us to think that I myself, in all my inexplicably specific physical haecceity, can be reduced to an ethereal set of data.


  1. Lukas says:

    “I confess that to me the whole idea of a transfer seems to hark back to old-fashioned dualism.”

    This is really compelling stuff, but to push it further: what if advocates switched from “whole-brain emulation” to “whole-body emulation”? And simply asserted that they would somehow model every particle of you?

  2. Lukas says:

    (although that wouldn’t work, obviously, for Itskov’s project – I’m thinking more of Robin Hanson.)

  3. Peter says:

    Well, to model me is to create a copy of some sort. A transfer implies that something has moved. What would that be?

    For many things it doesn’t matter. If we destroy a bag of sugar, or a molecule of salt here, and simultaneously create one somewhere else, that’s as good as transferring it, because these things don’t have an identity in any sense that matters; their relevant properties are a short list of physical/chemical properties. But it generally isn’t felt to be that way for people.

    You could deny that: you could say that someone who is identical to me in every physical and chemical respect just is me. But you have to deal with the problem of multiple copies – all me? Moreover, this isn’t what people who espouse the idea of transfers actually say. They don’t say: “it’ll be great in future when I can be destroyed while a copy of my relevant features is created”, they say “it’ll be great when I can transfer to another body and live forever”. What is it that moves that warrants this talk of a transfer?

  4. Christophe Menant says:

    We can transfer a piece of sugar from a sugar-box to a cup of tea, or to a mouth.
    We cannot transfer life to matter (we can transfer a kidney from a human to another human, but not from a human to a computer).
    Assuming consciousness exists only within living organisms, the above tells us that consciousness is not today transferable to computers, as life is not. So we have two questions: how could life be transferred to computers, and how could consciousness be dissociated from life? These questions indicate that life has to be taken into account when looking at connections between mind and matter.
    On a more general basis, I feel that complex notions related to human performances like identity, autonomy, meaning, consciousness, … should be addressed through an evolutionary approach. The importance of life cannot there be bypassed.

  5. Doru says:

    I can imagine a scenario where two identical twins who grew up in absolutely identical circumstances and share common memories are the subjects of this futuristic experiment: one of them is in a certain place having a specific recorded sensory experience (visual and auditory), and the other is in a deep trance in a lab where the same sensory experience is transmitted directly into his brain. How will the twins describe the two conscious experiences? Or is it only one conscious experience copied twice into the two bodies? Is this a good way to think of a conscious experience as being separated from the conscious experiencer?

  6. Vicente says:

    Very interesting question. From my point of view this question hides a misconception, but it could help clarify the concept of consciousness.

    In order to copy/transfer something you need a static element or a particular instance. You can copy a picture, or a film frame, but you can’t copy a film in its time extension.

    Peter, when you raise this possibility, you are implicitly assigning to consciousness a static nature. I believe it is the opposite.

    Consciousness is a flow, a dynamical process of a sentient agent interacting with the environment. You can’t copy that. You could copy certain data of the agent, maybe some coded memories, but not consciousness.

    Impermanence, that is the key factor. You would have to copy the substrate and the environment (boundary conditions), for a whole time period.

    If we accept some kind of dualism, then we could ask the question about transferring the “conscious agent” from one physical substrate to another (reincarnation), but that would be a different issue.

    I believe that a conscious flow, at least as we know it in our reality, is an extended experience process that can’t be copied or transferred.

    In this line, you could consider: what particular elements of the conscious experience (not the global experience), then, are susceptible to being copied or stored?

  7. Lloyd Rice says:

    As I have said elsewhere, I believe consciousness is the process of an organism or mechanism sensing itself in the world, together with the memories of such sensing in the past. So I agree with Vicente. It doesn’t make much sense to talk about transferring that process to another organism or mechanism. The memories would not be relevant. The current inputs would be related to a new and different host. Assuming both hosts still function, each could contemplate the other and the “new copy” (if it can be distinguished from the “original”) can remember the past experiences that took place in the other host. But the world has changed for that copy. It does not make much sense to me to talk about the new entity as being the “same individual” as the original.

  8. Lloyd Rice says:

    Basically, what’s being transferred are the memories of past sensing in the previous host. Sensing begins anew in the new host and so must, to the extent possible, build upon the old memories. If the new mechanism includes structures anything like the amygdala/thalamus/striatum midbrain system of the human brain, then I predict that the moment of startup in the new body will represent a severe and possibly unrecoverable moment of schizophrenia.

  9. john davey says:

    There is a confusion here in thinking that replication involves a transfer.

    You could physically replicate a brain, but that wouldn’t involve a transfer of anything. There is no movement of consciousness, as there is nothing to move. It’s not information. But I don’t see why you couldn’t duplicate a conscious state, although neither party – ‘old’ consciousness nor new – would be aware of it.

  10. Lloyd Rice says:

    To me, either replication or transfer would involve a massive readout of the memories of the “original” brain. That is clearly not possible given a human brain at our current state of technology and will probably not be possible for quite some time to come.

    When I say “memories” are what would get transferred, I am also including (at least) two kinds of memories other than factual knowledge. I must include skills and habits. By skills, I include the “knowledge” of how to do things with or without the guidance of conscious thought. And with habits, I also include a range of participation by conscious thought processes.

    But this raises some of the issues I alluded to in #8, because both skills and habits involve an interplay between brain contents and physical structures. We must assume that any transfer would be able to adjust this interplay appropriately to fit the new body. That is no small task.

    And all of this leaves out the issues of what it would “feel” like to suddenly have an entirely new retinue of skills and habits.

  11. Arnold Trehub says:

    Lloyd : “But this raises some of the issues I alluded to in #8, because both skills and habits involve an interplay between brain contents and physical structures.”

    Actually, *brain contents* are part of *brain structure*. All of the memories and motor routines of person A are embodied in the synaptic transfer-weight profiles of A’s synaptic matrices and motor control registers. See *The Cognitive Brain*, MIT Press, 1991. The entire biophysical structure and dynamics of A’s brain would have to be transferred or recreated in B.

  12. Vicente says:

    Consciousness is something that happens at present, as a combination of perception and intentionality.

    Consciousness is in fact transferred from one instant to the next, which is why the self doesn’t exist (impermanence). Belly size apart, are any of you the same being as you were two decades ago? The people you were do not exist any more, and were not transferred anywhere; they just gradually evolved into you all.

    There is nothing to transfer, maybe some distorted data. Consciousness is a flow. Can you transfer the pure experience of a blissful childhood moment?

    Consider how many episodes you’ve forgotten, and there “you” are.

    Are you considering passing yourself to somebody on a USB pen drive?

    There is nothing to transfer.

  13. Tom Clark says:


    “The entire biophysical structure and dynamics of A’s brain would have to be transferred or recreated in B.”

    Were the structure and dynamics recreated in a non-biophysical system, is there any reason to think the resulting system wouldn’t be conscious in the way A was?

  14. Arnold Trehub says:

    Arnold: “The entire biophysical structure and dynamics of A’s brain would have to be transferred or recreated in B.”

    Tom: “Were the structure and dynamics recreated in a non-biophysical system, is there any reason to think the resulting system wouldn’t be conscious in the way A was?”

    The question then would be whether the relevant biophysical structure and dynamics of a living organism (A) can be recreated in a non-living physical artifact (B). My guess would be (i) that if this were possible and accomplished, then B would *no longer be non-biophysical* and would be conscious; (ii) if B remained non-biophysical after the supposed “transfer”, then system B would not be conscious. If, in case ii, B were tested in the SMTT paradigm, and exhibited the same patterns of response as A, then I would have to reconsider my opinion.

  15. Tom Clark says:


    “If, in case ii, B were tested in the SMTT paradigm, and exhibited the same patterns of response as A, then I would have to reconsider my opinion.”

    So it seems that the bottom line consideration for you about a system being conscious isn’t being biophysical but exhibiting a certain pattern of responses stemming from its internal behavior control processes. Seems like the same structure and dynamics would produce the same pattern of responses (in the same environment), so can’t we conclude that replicating the structure and dynamics of a conscious system would result in the other system, artificial or not, living or not, being conscious?

  16. Arnold Trehub says:

    Tom: “Seems like the same structure and dynamics would produce the same pattern of responses (in the same environment), so can’t we conclude that replicating the structure and dynamics of a conscious system would result in the other system, artificial or not, living or not, being conscious?”

    The language you use gets tricky here, Tom. If we actually replicate the structure and dynamics of a conscious (living) brain (A), then the other system (B) would be a living conscious system, i.e., case i. Case ii stipulates that the transfer was *incomplete* in some unspecified way so that B remained non-biophysical. My guess is that in this case B would not be conscious. However, the most powerful *empirical* test for phenomenal consciousness (in my view) is the SMTT test; so if B passed the SMTT test, I would be inclined to change my mind and consider B conscious. But notice that this is on the basis of an empirical criterion, not on the basis of a prior logical argument.

  17. Tom Clark says:


    “If we actually replicate the structure and dynamics of a conscious (living) brain (A), then the other system (B) would be a living conscious system, i.e., case i.”

    If the replication is sufficiently fine grained, yes. But if our goal is replicating consciousness, we can’t assume that consciousness depends on the replication being sufficiently fine grained such that we end up with a living system. That would have to be shown on the basis of a theory of consciousness. And indeed your SMTT test for consciousness is independent of the grain of replication, since it uses the behavior that results from the operation of the structure and dynamics (whatever they are) as the criterion for consciousness.

    “However, the most powerful *empirical* test for phenomenal consciousness (in my view) is the SMTT test; so if B passed the SMTT test, I would be inclined to change my mind and consider B conscious. But notice that this is on the basis of an empirical criterion, not on the basis of a prior logical argument.”

    Right. Empirically we find consciousness associated with certain capacities, which if instantiated and expressed should lead us to suppose the system is conscious unless there’s good reason to suppose otherwise. That the system that results from replication might not be biophysical or living doesn’t seem to me a good reason.

  18. Vicente says:


    “exhibiting a certain pattern of responses stemming from its internal behavior control processes”

    Well, there is no exhibition, except for the external behaviour. If you want to call ordinary behaviour a “pattern of responses”, then every single mechanism is conscious, because every mechanism exhibits a certain pattern of responses stemming from its internal behavior control processes, mechanical or electronic. You are completely missing the important point.

    “Empirically we find consciousness associated with certain capacities”

    Associated? Which specific capacities?

  19. Lloyd Rice says:

    I believe there are actually two components of human brain memory mechanisms: the synaptic configuration, as Arnold stated, but also the cell’s internal configuration, which determines when and whether a cell will respond to the synaptic activity. This was shown long ago by Kandel’s work, which demonstrated that synaptic activity led to DNA readouts, which led to certain protein synthesis, which changed the cell’s synaptic responses.

    However complicated this might be, I still believe it is just an issue of information storage, be that digital or analog. I see no reason that Vicente’s blissful childhood moments cannot be represented as information storage.

    Extracting that info from a human brain — now that is yet another matter.

    Still, I believe that consciousness is “simply” the processing of such information as an entity senses itself in the world. No magic sauce.

  20. Vicente says:

    Ok Lloyd,

    Suppose you could record, with high fidelity, an experience. In order to play the recording back, would you need a brain identical to the one that produced the experience, or not? How are you going to “inject” the recording into that brain, intracellular molecular processes included? Extremely difficult, or impossible?

    Could it be reproduced elsewhere? In real terms, definitely not. As a “movie” for others, it could be. But that has nothing to do with the original genuine experience.

    In psychological terms, what effect would it have to repeat an already experienced experience, overwriting memories (over the distorted original ones)? New memories in conflict with previous ones? Psychotic results?

    What’s all this about? Fear of death? A new way to produce travel guides and documentaries?

  21. Vicente says:

    Lloyd, sorry, I forgot to point out that once you get down to recording and reproducing events and processes at the molecular level, quantum effects could be significant, so there might be a lot of uncertainty and noise in both directions of I/O, recording and reproducing. Considering the huge number of processes you have to record, this could imply a physical limit that impedes the whole game.

  22. Tom Clark says:


    “Associated? Which specific capacities?”

    A positive account of sensory consciousness as informational states is emerging from neuroscience and neurophilosophy (see for instance Dehaene, 2002; Metzinger, 2000a, 2003).[11] Defined methodologically, consciously available information is just that embodied in representations that participate in functions subserving the empirically discovered capacities conferred by conscious states as opposed to unconscious states (Baars, 1999). For instance, conscious states have the capacity to make information available over extended time periods in the absence of continued stimulation; they permit novel, non-automatized behavior; and they allow spontaneous generation of intentional, goal-directed behavior with respect to perceived objects (Dehaene & Naccache, 2001). Studies of neural activity which contrast conscious and unconscious capacities indicate that phenomenal experience is associated with widely distributed but highly integrated neural processes involving communication between multiple functional sub-systems in the brain, each of which plays a more or less specialized role in representing features of the world and body (Kanwisher, 2001; Dehaene & Naccache, 2001; Jack & Shallice, 2001; Parvizi & Damasio, 2001, Crick & Koch, 2003). Such processes, it is hypothesized, constitute a distributed, ever-changing, but functionally integrated ‘global workspace’ (Baars, 1988; Dehaene & Naccache, 2001).


  23. Arnold Trehub says:

    Tom: “Empirically we find consciousness associated with certain capacities, which if instantiated and expressed should lead us to suppose the system is conscious unless there’s good reason to suppose otherwise.”

    In my view, the decisive capacity for consciousness to exist is the capacity of a system to represent the volumetric surround in which it exists from a fixed perspectival origin; i.e., subjectivity. The distributed functionally integrated global workspace (GW) proposed and investigated by Baars and Dehaene and their colleagues does not account for subjectivity, even though a GW would be one important property of a subjective system. GW can be realized in a variety of electronic systems that are not conscious (e.g., a Google server center).

    The key question is how do we test for subjectivity in any sensory-motor system, either living or non-living. I suggest the SMTT test because successful performance on this test depends on a putative mechanism that can, in the absence of a corresponding sensory input, construct a phenomenal representation of an object in motion, from an egocentric perspective within its internal representation of its surround — the essential property of subjectivity.

  24. Lloyd Rice says:

    Yes, Vicente. I do believe it is “just” a matter of representing the experiences as memory traces. Keep in mind that it is not stored like a movie (exactly), but rather associatively, each trace leading to the next. And, of course, it includes tactile, taste, smell, and, not least, internal bodily sensations. It will, as Arnold puts it, “represent the volumetric surround”.

    I see the big problem as readout from the human host, primarily because we do not, at this stage of history, know how to get to the bits. I do not see an insurmountable problem in putting those bits back into another host, assuming that access pathways have been engineered into the new body. As for the quantum effects, I suspect that’s not a major hurdle. Of course, QM is at play (as we so far understand it), but it is already at play in today’s electronics. Electronic structures will continue to evolve (if I can use that to mean “be further developed”), taking QM into account. I do not think there is any special use of QM effects in the sense that the “collapse” yields or destroys information, and in the sense that Penrose warned about. However the chip makers deal with it, that’s good enough for me.

    As for how that new host will deal with the info, the issues have been well stated.

    The essence, for me, as the primary issue of this blog, is that the new host will experience the world given whatever percepts and memories are presented to it. That experiencing is one and the same as its “consciousness”. How the new host deals with that info depends on how it is constructed, how it is built to deal with intellectual and emotional content.

  25. Lloyd Rice says:

    Much has been made, here and elsewhere, of the point Arnold raises: How do we test for subjectivity. To me, that’s not really much of an issue. If you can converse with an entity and ask it about its subjective states, I am happy to accept that it has such states. Of course, this questioning must guard against the “rote reply” scenario that Searle is so afraid of. But all of the current crop of Turing Test applicants have gone well beyond that point. If the entity can talk coherently about its consciousness, then I accept that. I believe the question is not more than that because I completely reject the concept of philosophical zombies. If a mechanism has the capability of sensing itself in the world, then it is, to that extent, conscious. I believe there are degrees of consciousness, depending upon the richness of the sensing, but the bottom line is that sensing of self and being able to relate that sensed self to the world are the essential requirements.

  26. Lloyd Rice says:

    I am not a panpsychist. A rock does not have it. A thermostat does not have it. When you disturb an ant and it rears up as if to challenge the disturbance, I accept that as the beginning, in some simple way, of a state of consciousness. I have no doubt about mice, dogs, etc.

  27. Lloyd Rice says:

    Consider the IBM machine called “Watson” that played on Jeopardy last year. As I understand it, Watson’s sensory input was limited to the equivalent of a keyboard. Not much of a world view. My guess is that if you asked it about itself, the reply would perhaps include knowledge of the number of its servers, other such detail. But I doubt if it would be able to have a conversation about itself. I would put that in the category with Searle’s rote reply. It would be more like the ant that did not rear up, but just took a step back, changed course and went about its program.

    Humans do not always exhibit their consciousness, either. Sometimes they act more like the second ant than the first. Usually though, if you press the point, you can get them to rear up and challenge the disturbance.

  28. Arnold Trehub says:

    Lloyd: “I believe there are degrees of consciousness, depending upon the richness of the sensing, but the bottom line is that sensing of self and being able to relate that sensed self to the world are the essential requirements.”

    Two key problems:

    1. What is the self?
    2. What is the world to which the self relates?

  29. Vicente says:

    We are not discussing how consciousness is produced, but whether consciousness can be transferred from one material host to another, although the two questions are intimately related.

    The first point to clarify is what the identity to transfer is, which is not clear. What I say is that:

    – We can transfer data but not a process. The frames of a film are not the original episode, not in the sense of reality, but in the sense of discrete recording and accuracy. Even an analogue recording system like a tape is discrete, because of its resolution. You would need something like a Shannon–Nyquist theory to show that you can “sample” brain/consciousness events and then reproduce them, in conscious-experience terms. If going down to the intracellular molecular level were required for conscious-experience recording, then forget about it, Lloyd: you are recording a dynamical process, and the uncertainty in the time/energy pair will impede it. Electronics are irrelevant here.

    – Even if you could store data related to a process, it doesn’t mean that you could reproduce the process, not even in movie-mode.

    – If what you want to transfer is the whole system, then first clarify what the self to be transferred in that system is, and what it consists of. If, as it looks, the self consists of the whole brain, then you have to replicate the brain, defeating the whole purpose of the transference exercise.

    I am about to finish reading “The Ego Trick” by J. Baggini, it includes some interesting ideas, and real cases, relevant to this topic.

  30. Doru says:

    Processes (as sequences of finite states) are always duplicated and transferred between computers. The distinction may be that data is always informal (irrelevant to the hardware) and processes are always formal (dependent on hardware). The problem with duplicating brains and what they think is the staggering complexity of the combinatorial analogic used to describe and analyze information from the environment (and self-reflection, as suggested above).

  31. Lloyd Rice says:

    Arnold (re #28): Neither of these is a problem for me. The “self” is what I sense of myself as I sense the world. The world is the sum total of the surroundings that the entity (“this body”) has sensed since becoming able to sense. All of those sensory inputs form the memories by which I define “me”.

    My belief of all this is simply that “I” exist as a conscious entity simply because this body includes the sensory and processing capabilities of collecting and remembering information about the world in which it finds itself. To the extent that the entity senses itself (my body, my responses to others, my dreams, my thoughts, my plans, …), “I” am to that extent conscious.

    Vicente (re #29): I agree that Arnold’s comment #28 and my reply above are not the topic of this particular thread. But they certainly fall within the topic of the blog.

    I do not believe there is any process to be transferred. The process in question must be provided by the successful operation of the receiving entity (the “new host”). When the process begins in the new host, the memories transferred from the previous host define the new entity.

    There is no “self” to be transferred other than the memories from the prior host. The new host creates a new “self” as it remembers what it was, according to the transferred memories.

    I quite agree with Doru as to the staggering complexity of all of this.

  32. Lloyd Rice says:

    I am what I am because I remember what I was.

  33. Vicente says:

    I see no reason to think that the brain is a finite-state machine.

    In order to duplicate a brain, its combinatorial complexity is the easiest of all the problems to tackle, almost irrelevant. Having said this, the insurmountable complexity of the brain is stunning.

    Lloyd, my point is not that the comments were out of place; all comments are very welcome, in particular the very intelligent ones in this post. I meant that those comments were not clarifying whether we can transfer a conscious entity from one material substrate to another while preserving its “core self”. In particular, I protested because nobody has so far defined what the core self to be transferred is, a main prerequisite for starting the debate.

    I believe there is a self, ever-evolving; how to define it, or explain it, is the difficult part.

    Well, you are what you are because you remember what you were… and because of the way you process those memories, because of your character, your general psychological traits, your body, your prospects… etc. Autobiographical continuity is one factor amongst many.

  34. Lloyd Rice says:

    I guess that I have given my understanding of what this “core self” is: I believe there is no such separate “thing”, object, process, or whatever, other than the supporting background. By that I include all the content of the memories as well as the processing power needed to turn sensory inputs into a world view and then use that world view to maintain the memories up to the current moment. This processing creates the “view” that I have of myself and the world. There is no “me” other than this view that I create.

    I am not so much concerned with the fine points of finite state machines. An FSM can be designed to do almost anything. The Gödel aficionados are keen to point to the limitations, but, in my opinion, these are details that do not really get in the way. The tasks that need to be done are basically straightforward logical processing steps. Computer science is still not quite to the point of being able to integrate the inputs from multiple modalities into a single, coherent perceptual model, but it is not far from it.

    Upon this basic perceptual layer, our brains build many additional layers which serve as evaluators, amplifiers, selectors and several other such enhancers. The attention system is one such enhancement that helps to sort out the vitally important from the merely useful.

    Stunning complexity, yes. But incomprehensible? I think not.

    I was tempted to say “I remember, therefore I am.” But I thought that did not represent the processing as well as I wanted it to. This way captures the past as well as the present.

  35. Vicente says:

    Lloyd, and the future? We are always leaning towards the future. Your brain is very much designed to compute expectations…

    I did like the reasoning based on identity comparisons presented by Baggini (already known, of course). When we compare two individuals, what is there that makes them different? Probably this question paves the way to approach the problem of identity and the core self.

    The complexity of the brain can probably be understood; the product, I’m not that sure.

  36. Lloyd Rice says:

    Expectations? You are absolutely correct.

  37. Michael Baggot says:

    Let me first say that I am one of those Gödel incompleteness guys as well as a subsumption guy and a Chinese room guy. An open vs closed computation guy, an a priori (deduction) vs a posteriori (induction) guy, and an enisle experience as opposed to a hard problem or transparency guy.
    In other words, brains implement a unique computational architecture and consciousness is simply an inevitable consequence of that architecture. More importantly, however, this architecture allows brains to, in effect, program themselves, i.e., to make causal determinations about one’s personal relationship to the experienced physical world as well as make determinations (by inference) about relations between strictly external entities.
    Simply transferring an independent, conscious entity to some arbitrary computational substrate is absolute nonsense. A conscious entity can only be transferred to another compatible architecture, specifically, one with all of its learning reduced to a tabula rasa. This would not be a transfer of process but a transfer of experientially acquired, i.e., learned, data. All of the experience generating phenomenal processes needed for acquiring new learning would be inherent in the basic architecture and no different from that of any other independent conscious entity.
    Of course, there is much more to this story but I think there is at bottom here a profound, enveloping simplicity rather than an excruciatingly endless complexity. I also think that Vicente has done very well here in raising these core issues.

  38. Lloyd Rice says:

    Michael, I agree. I spoke of the past, Vicente speaks of the future. There are two other brain modules that come into play here which I shall loosely call the narrative builder and the planner. The narrative builder uses the contents of the past to construct a coherent narrative that will be (hopefully) relevant in the present. The planner uses that narrative and the predictions of the future to plot a future course of action, be that two minutes or two years ahead.

    Both of these modules put severe demands on a transfer process. Gazzaniga points out the important fact that the mind can modify the brain’s hardware. This being the case, the transfer process cannot simply move old software into (tabula rasa) new hardware, but the new hardware must be brought up to date with the current software — during, and as a crucial part of, the transfer process.

    I think that’s what you just said.

  39. Thinker says:

    To copy is not to empty or transfer. It is to copy.
