Selfless Transfers

The new film Self/less is based on the transfer of consciousness. An old man buys a new body to transfer into, and then finds that, contrary to what he was told, it wasn’t grown specially: there was an original tenant who, moreover, isn’t really gone. I understand that this is not a film that probes the metaphysics of the issue very deeply; it’s more about fight scenes; but the interesting thing is how readily we accept the idea of transferred consciousness.
In fact, it’s not at all a new idea; if memory serves, H. G. Wells wrote a short story on a similar theme, ‘The Story of the Late Mr. Elvesham’: a fit young man with no family is approached by a rich old man in poor health who apparently wants to leave him all his fortune; then he finds himself transferred unwillingly to the collapsing ancient body, with the old man making off in his fresh young one. In Wells’ version the twist is that the old man gets killed in a chance traffic accident, thereby dying before his old body does anyway.
The thing is, how could a transfer possibly work? In Wells’ story it’s apparently done with drugs, which is mysterious; more normally there’s some kind of brain-transfer helmet thing. It’s pretty much as though all you needed to do was run an EEG and then reverse the polarity. That makes no sense. I mean, scanning the brain in sufficient detail is mind-boggling to begin with, but the idea that you could then use much the same equipment to change the content of the mind is in another league of bogglement. Weather satellites record the meteorology of the world, but you cannot use them to reset it. This is why uploading your mind to a computer, while highly problematic, is far easier to entertain than transferring it to another biological body.
The big problem is that part of the content of the brain is, in effect, structural. It depends on which neurons are attached to which (and for that matter, which neurons there are), and on the strength and nature of that linkage. It’s true that neural activity is important too, and we can stimulate that electrically, even with induction gear that resembles the SF cliché; but we can’t actually restructure the brain that way.
The intuition that transfers should be possible perhaps rests on an idea that the brain is, as it were, basically hardware, and consciousness is essentially software; but it isn’t really like that. You can’t run one person’s mind on another’s brain.
There is in fact no reason to suppose that there’s much of a read-across between brains: they may all be intractably unique. We know that there tends to be a similar regionalisation of functions in the brain, but there’s no guarantee that your neural encoding of ‘grandmother’ resembles mine or is similarly placed. Worse, it’s entirely possible that the ‘meaning’ of neural assemblages differs according to context and which other structures are connected, so that even if I could identify my ‘grandmother’ neurons and graft them in, in place of yours, they would have a different significance, or none.
Perhaps we need a more sophisticated and bespoke approach. First we thoroughly decipher both brains, and learn how their own idiosyncratic encodings work. Then we work out a translator. This is a task of unimaginable complexity and particularity, but it’s not obviously impossible in principle. I think it’s likely that for each pair of brains you would need a unique translator: a universal one seems such a heroic aspiration that I really doubt its viability. A universal mind map would be an achievement of such interest and power that merely transferring minds would seem like time-wasting games by comparison.
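The scale of that contrast is easy to make vivid with a little back-of-envelope arithmetic (a toy calculation of my own, nothing to do with the actual neuroscience): with N intractably unique brains, bespoke translators multiply roughly as the square of N, while a universal mind map would need only one translator in and one out per brain.

```python
# A toy calculation (invented for illustration): how many translators
# would be needed for N intractably unique brains?

def pairwise_translators(n: int) -> int:
    """A bespoke translator for every ordered pair of distinct brains."""
    return n * (n - 1)

def universal_translators(n: int) -> int:
    """One translator into, and one out of, a shared universal mind map."""
    return 2 * n

for n in (2, 10, 1_000):
    print(f"{n:>5} brains: {pairwise_translators(n):>9} pairwise, "
          f"{universal_translators(n):>5} via a universal map")
```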
I imagine that even once a translator had been achieved, it would normally only achieve partial success. There would be a limit to how far you could go with nano-bot microsurgery, and there might be certain inherent limitations: certain ideas, certain memories, might just be impossible to accommodate in the new brain because of their incompatibility with structural or conceptual features that were too deeply embedded to change. The task you were undertaking would be like the job of turning Pride and Prejudice into Don Quixote simply by moving the words around, and perhaps in a few key cases allowing yourself one or two anagrams: the result might be recognisable, but it wouldn’t be perfect. The transfer recipient would believe themselves to be Transferee, but they would have strange memory gaps and certain cognitive deficits, perhaps not unlike Alzheimer’s, as well as artefacts: little beliefs or tendencies that existed neither in Transferee nor in Recipient, but were generated incidentally through the process of reconstruction.
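The rearrangement analogy can be run in miniature (a toy of my own devising, with short invented sentences standing in for the two novels): rebuild a target text using only the words the source text supplies, and gaps appear wherever no counterpart exists.

```python
# A miniature of the word-rearrangement analogy: reconstruct a target
# sentence using only the vocabulary of a source sentence. Anything
# with no counterpart becomes a gap. (Sentences invented for illustration.)

source = "it is a truth universally acknowledged that a single man".split()
target = "in a village of la mancha whose name i do not care to remember".split()

available = set(source)
rebuilt = [word if word in available else "[gap]" for word in target]
print(" ".join(rebuilt))
# The output is recognisable in outline but riddled with [gap] markers.
```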
It’s a much more shadowy and unappealing picture, and it makes rather clearer the real killer: that though Recipient might come to resemble Transferee, they wouldn’t really be them.
In the end, we’re not data, or a program; we’re real and particular biological entities, and as such our ontology is radically different. I said above that the plausibility of transfers comes from thinking of consciousness as data, which I think is partly true; but perhaps there’s something else going on here: a very old mental habit of thinking of the soul as detachable and transferable. This might be another case where optimists about the capacity of IT are unconsciously much less materialist than they think.

Transferring Consciousness

Homo Artificialis had a post at the beginning of the month about Dmitry Itskov and his three-part project:

  • the development of a functional humanoid synthetic body manipulated through an effective brain-machine interface
  • the development of such a body, but including a life-support system for a human brain, so that the synthetic body can replace an existing organic one, and
  • the mapping of human consciousness such that it, rather than the physical brain, can be housed in the synthetic body

The daunting gradient in this progression is all too obvious. The first step is something that hasn’t been done but looks to be pretty much within the reach of current technology. The second step is something we have a broad idea of how to do – but it goes well beyond current capacity and may therefore turn out to be impossible even in the long run. The third step is one where we don’t even understand the goal properly or know whether the ambitious words “mapping of human consciousness” even denote something intelligible.

The idea of transferring your consciousness, to another person or to a machine, is often raised, but there isn’t always much discussion of the exact nature of the thing to be transferred. Generally, I suppose, the transferists reckon that consciousness arises from a physical substrate and that if we transfer the relevant properties of that substrate the consciousness will necessarily go with it. That may very well be true, but the devil is in that ‘relevant’.  If early inventors had tried to develop a flying machine by building in the relevant properties of birds, they would probably have gone long on feathers and flapping.

At least we could deal with feathers and flapping, but with consciousness it’s hard even to go wrong creatively, because two of the leading features of the phenomenon as we now generally see it are simply not understood at all. Qualia have no physical causality and are undetectable; and there is no generally accepted theory of intentionality, that is, of meaningfulness.

But let’s not despair too easily. Perhaps we shouldn’t get too hung up on the problems of consciousness here, because the thing we’re looking to transfer is not consciousness per se but a consciousness. What we’re really talking about is personal identity. There’s a large philosophical literature on that subject, which was well established centuries before the issues relating to consciousness came into focus, and I think what we’re dealing with is essentially a modern view that personal identity equates to identity of consciousness. Who knows, maybe approaching from this angle will provide us with a new way in?

In any case, I think a popular view in this context might be that consciousness is built out of information, and that’s what we would be transferring. Unfortunately identity of information doesn’t seem to be what we need for personal identity. When we talk about the same information, we have no problem with it being in several places at once, for example: we don’t say that two copies of the same book have the same content but that the identity of the information is different; we say they contain the same information.  Speaking of books points to another problem: we can put any information we like down on paper but it doesn’t seem to me that I could exist in print form (I may be lacking in energy at times, but I’m more dynamic than that).
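For anyone who thinks in code, this is just the familiar distinction between equality of content and identity of object; a trivial sketch (the book data is invented, obviously):

```python
# Two copies of the same book: identical information, distinct objects.
# Sameness of content does not give us sameness of thing.

copy_a = {"title": "Pride and Prejudice", "first_line": "It is a truth..."}
copy_b = {"title": "Pride and Prejudice", "first_line": "It is a truth..."}

print(copy_a == copy_b)  # True:  they contain the same information
print(copy_a is copy_b)  # False: they are two different copies
```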

So perhaps it’s not that the information constitutes the identity; perhaps it simply allows us to reconstruct the identity? I can’t be a text, but perhaps my identity could persist in frozen recorded form in a truly gigantic text?

There’s a science fiction story in there somewhere about slow transfer: once dematerialised, instead of being beamed to his destination, the prophet is recorded in a huge set of books, carried by donkey across the deserts of innumerable planets, painstakingly transcribed in scriptoria, and finally reconstituted by the faithful far away in time for the end of the world.

As a side observation, most transfer proposals these days speak of uploading your consciousness to a computer; that implies that whatever the essential properties of you are, they survive digitisation. Without being a Luddite, I think that if my survival depended on there being no difference between a digitised version of me and the messy analogue blob typing these words, I would get pretty damn picky about lossy codecs and the like.
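The lossy-codec worry can be made concrete with a toy quantiser (my own example, chosen for simplicity rather than realism): digitise an analogue value, decode it again, and a small but real discrepancy remains.

```python
# Digitisation as an 8-bit quantiser: close is not the same. The value
# below is arbitrary; the point is that the round trip loses detail.

def encode_8bit(x: float) -> int:
    """Quantise a value in [0, 1] to one of 256 levels."""
    return round(x * 255)

def decode_8bit(level: int) -> float:
    """Map a level back to a value in [0, 1]."""
    return level / 255

original = 0.123456789
restored = decode_8bit(encode_8bit(original))
print(original, restored)        # 0.123456789 vs roughly 0.1216
print(abs(original - restored))  # a small but irreversible loss
```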

Getting back to the point, perhaps the required identity is like the identity of a game? We could record the position half-way through a chess game and reproduce it on any chessboard. Although in a sense that does allow us to reconstitute the same game in another place or time, there’s an important sense in which it wouldn’t really be the same game unless it was the same players resuming after an interval. In the same way we might be able to record my data and produce any number of identical twins, but I’d be inclined to say none of them would in fact be me unless the same… what?
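The chess analogy translates directly into code (using a deliberately simplified square-to-piece format of my own; real notation such as FEN would make the same point): the reconstituted board is equal to the original, but it is not the original.

```python
# Serialising a chess position lets anyone set up an equal board; it
# does not make the new board the original game. (Position format
# invented for illustration.)

position = {"e4": "white pawn", "e5": "black pawn", "f3": "white knight"}

record = dict(position)      # write the position down mid-game
restored = dict(record)      # set it up again, elsewhere or later

print(restored == position)  # True:  the same position
print(restored is position)  # False: not the same game
```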

There’s a clear danger here of circularity if we say that the chess game is the same only if the same people are involved. That works for chess, but it will hardly help us to say that my identity is preserved if the same person is making the decisions before and after. But we might scrape past the difficulty if we say that the key thing is that the same plans and intentions are resumed. It’s the same game if the same strategies and inclinations are resumed, and in a similar way it’s the same person if the same attitudes and intentions are reconstituted.

That sounds alright at first, though it raises further questions; but it takes us back towards the quagmire of intentionality, and moreover it faces the same problem we had earlier with information. It’s even clearer in the case of a game of chess that someone else could have the same plans and intentions without being me; why should someone built to the same design as me, no matter how exquisitely faithful in the minutest detail, be me?

I confess that to me the whole idea of a transfer seems to hark back to old-fashioned dualism.  I think most transferists would consider themselves materialists, but it does look as if what we’re transferring is an ill-defined but important entity distinct from the simple material substance of our brain. But I’m not an abstraction or a set of information, I’m a brute physical entity. It is, I think, a failure to recognise that the world does not consist of theory, and that a theory of redness does not contain any actual red, which gives rise to puzzlement over qualia; and a similar failure leads us to think that I myself, in all my inexplicably specific physical haecceity, can be reduced to an ethereal set of data.