Homo Artificialis had a post at the beginning of the month about Dmitry Itskov and his three-part project:
- the development of a functional humanoid synthetic body manipulated through an effective brain-machine interface
- the development of such a body, but including a life-support system for a human brain, so that the synthetic body can replace an existing organic one, and
- the mapping of human consciousness such that it, rather than the physical brain, can be housed in the synthetic body
The daunting gradient in this progression is all too obvious. The first step is something that hasn’t been done but looks to be pretty much within the reach of current technology. The second step is something we have a broad idea of how to do – but it goes well beyond current capacity and may therefore turn out to be impossible even in the long run. The third step is one where we don’t even understand the goal properly or know whether the ambitious words “mapping of human consciousness” even denote something intelligible.
The idea of transferring your consciousness, to another person or to a machine, is often raised, but there isn’t always much discussion of the exact nature of the thing to be transferred. Generally, I suppose, the transferists reckon that consciousness arises from a physical substrate and that if we transfer the relevant properties of that substrate the consciousness will necessarily go with it. That may very well be true, but the devil is in that ‘relevant’. If early inventors had tried to develop a flying machine by building in the relevant properties of birds, they would probably have gone long on feathers and flapping.
At least we could deal with feathers and flapping, but with consciousness it’s hard even to go wrong creatively, because two of the leading features of the phenomenon as we now generally see it are simply not understood at all. Qualia have no physical causality and are undetectable; and there is no generally accepted theory of intentionality, of meaningfulness.
But let’s not despair too easily. Perhaps we shouldn’t get too hung up on the problems of consciousness here, because the thing we’re looking to transfer is not consciousness per se but a consciousness. What we’re really talking about is personal identity. There’s a large philosophical literature about that subject, which was well established centuries before the issues relating to consciousness came into focus, and I think what we’re dealing with is essentially a modern view that personal identity equates to identity of consciousness. Who knows, maybe approaching from this angle will provide us with a new way in?
In any case, I think a popular view in this context might be that consciousness is built out of information, and that’s what we would be transferring. Unfortunately identity of information doesn’t seem to be what we need for personal identity. When we talk about the same information, we have no problem with it being in several places at once, for example: we don’t say that two copies of the same book have the same content but that the identity of the information is different; we say they contain the same information. Speaking of books points to another problem: we can put any information we like down on paper but it doesn’t seem to me that I could exist in print form (I may be lacking in energy at times, but I’m more dynamic than that).
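The distinction being drawn here, between two carriers holding the same information and those carriers being one and the same thing, has a familiar counterpart in programming: equality of content versus identity of object. A minimal sketch (the variable names are invented purely for illustration):

```python
# Two 'copies of the same book', modelled as lists of pages.
book_a = ["page 1 text", "page 2 text"]
book_b = ["page 1 text", "page 2 text"]

# The information is the same in both places at once...
assert book_a == book_b

# ...but the two carriers remain distinct objects.
assert book_a is not book_b
```

Sameness of information, in other words, is a relation that many distinct things can satisfy simultaneously, which is exactly what personal identity seems not to tolerate.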
So perhaps it’s not that the information constitutes the identity; perhaps it simply allows us to reconstruct the identity? I can’t be a text, but perhaps my identity could persist in frozen recorded form in a truly gigantic text?
There’s a science fiction story in there somewhere about slow transfer; once dematerialised, instead of being beamed to his destination, the prophet is recorded in a huge set of books, carried by donkey across the deserts of innumerable planets, painstakingly transcribed in scriptoria, and finally reconstituted by the faithful far away in time for the end of the world. As a side observation, most transfer proposals these days speak of uploading your consciousness to a computer; that implies that whatever the essential properties of you are, they survive digitisation. Without being a Luddite, I think that if my survival depended on there being no difference between a digitised version of me and the messy analogue blob typing these words, I would get pretty damn picky about lossy codecs and the like.
Getting back to the point: perhaps the required identity is like the identity of a game? We could record the positions half-way through a chess game and reproduce them on any chessboard. Although in a sense that does allow us to reconstitute the same game in another place or time, there’s an important sense in which it wouldn’t really be the same game unless it was the same players resuming after an interval. In the same way we might be able to record my data and produce any number of identical twins, but I’d be inclined to say none of them would in fact be me unless the same… what?
There’s a clear danger here of circularity if we say that the chess game is the same only if the same people are involved. That works for chess, but it will hardly help us to say that my identity is preserved if the same person is making the decisions before and after. But we might scrape past the difficulty if we say that the key thing is that the same plans and intentions are resumed. It’s the same game if the same strategies and inclinations are resumed, and in a similar way it’s the same person if the same attitudes and intentions are reconstituted.
That sounds alright at first, though it raises further questions; unfortunately it takes us back towards the quagmire of intentionality, and moreover it faces the same problem we had earlier with information. It’s even clearer in the case of a game of chess that someone else could have the same plans and intentions without being me; so why should someone built to the same design as me, no matter how exquisitely faithful in the minutest detail, be me?
I confess that to me the whole idea of a transfer seems to hark back to old-fashioned dualism. I think most transferists would consider themselves materialists, but it does look as if what we’re transferring is an ill-defined but important entity distinct from the simple material substance of our brain. But I’m not an abstraction or a set of information, I’m a brute physical entity. It is, I think, a failure to recognise that the world does not consist of theory, and that a theory of redness does not contain any actual red, which gives rise to puzzlement over qualia; and a similar failure leads us to think that I myself, in all my inexplicably specific physical haecceity, can be reduced to an ethereal set of data.