Archive for March, 2012

Scriptorium

Homo Artificialis had a post at the beginning of the month about Dmitry Itskov and his three-part project:

  • the development of a functional humanoid synthetic body manipulated through an effective brain-machine interface
  • the development of such a body, but including a life-support system for a human brain, so that the synthetic body can replace an existing organic one, and
  • the mapping of human consciousness such that it, rather than the physical brain, can be housed in the synthetic body

The daunting gradient in this progression is all too obvious. The first step is something that hasn’t been done but looks to be pretty much within the reach of current technology. The second step is something we have a broad idea of how to do – but it goes well beyond current capacity and may therefore turn out to be impossible even in the long run. The third step is one where we don’t even understand the goal properly or know whether the ambitious words “mapping of human consciousness” even denote something intelligible.

The idea of transferring your consciousness, to another person or to a machine, is often raised, but there isn’t always much discussion of the exact nature of the thing to be transferred. Generally, I suppose, the transferists reckon that consciousness arises from a physical substrate and that if we transfer the relevant properties of that substrate the consciousness will necessarily go with it. That may very well be true, but the devil is in that ‘relevant’.  If early inventors had tried to develop a flying machine by building in the relevant properties of birds, they would probably have gone long on feathers and flapping.

At least we could deal with feathers and flapping, but with consciousness it's hard even to go wrong creatively, because two of the leading features of the phenomenon as we now generally see it are simply not understood at all. Qualia have no physical causality and are undetectable; and there is no generally accepted theory of intentionality, of meaningfulness.

But let’s not despair too easily.  Perhaps we shouldn’t get too hung up on the problems of consciousness here, because the thing we’re looking to transfer is not consciousness per se but a consciousness. What we’re really talking about is personal identity. There’s a large philosophical literature about that subject, which was well established centuries before the issues relating to consciousness came into focus, and I think what we’re dealing with is essentially a modern view that personal identity equates to identity of consciousness. Who knows, maybe approaching from this angle will provide us with a new way in?

In any case, I think a popular view in this context might be that consciousness is built out of information, and that’s what we would be transferring. Unfortunately identity of information doesn’t seem to be what we need for personal identity. When we talk about the same information, we have no problem with it being in several places at once, for example: we don’t say that two copies of the same book have the same content but that the identity of the information is different; we say they contain the same information.  Speaking of books points to another problem: we can put any information we like down on paper but it doesn’t seem to me that I could exist in print form (I may be lacking in energy at times, but I’m more dynamic than that).
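The point that the same information can happily exist in several places at once, while the things holding it remain distinct, can be seen in miniature in any programming language. A small sketch in Python (my illustration, not anything from the original discussion; `bytearray` is used rather than plain strings because Python may silently share identical string literals):

```python
# Two "copies of the same book": equal content, distinct objects.
book_1 = bytearray(b"Call me Ishmael. Some years ago...")
book_2 = bytearray(b"Call me Ishmael. Some years ago...")

# The information is identical...
assert book_1 == book_2

# ...but there are still two separate physical instances holding it.
assert book_1 is not book_2
```

Equality of content and identity of object come apart here, which is exactly the gap the paragraph above worries about: sameness of information gives us duplication, not the one-and-only persistence we seem to want for persons.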

So perhaps it’s not that the information constitutes the identity; perhaps it simply allows us to reconstruct the identity? I can’t be a text, but perhaps my identity could persist in frozen recorded form in a truly gigantic text?

There’s a science fiction story in there somewhere about slow transfer; once dematerialised, instead of being beamed to his destination, the prophet is recorded in a huge set of books, carried by donkey across the deserts of innumerable planets, painstakingly transcribed in scriptoria, and finally reconstituted by the faithful far away in time for the end of the world. As a side observation, most transfer proposals these days speak of uploading your consciousness to a computer; that implies that whatever the essential properties of you are, they survive digitisation. Without being a Luddite, I think that if my survival depended on there being no difference between a digitised version of me and the messy analogue blob typing these words, I would get pretty damn picky about lossy codecs and the like.
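The worry about lossy digitisation can be made concrete with a toy codec (a deliberately crude sketch of my own, not a real codec): quantising a signal throws information away, and no amount of decoding brings it back.

```python
# A crude "lossy codec": quantise samples to a handful of levels,
# then decode them again.
def encode(samples, levels=8):
    # map each sample in [0, 1] to one of `levels` integer codes
    return [round(s * (levels - 1)) for s in samples]

def decode(codes, levels=8):
    # map each integer code back to a value in [0, 1]
    return [c / (levels - 1) for c in codes]

original = [0.1234, 0.5678, 0.9]
restored = decode(encode(original))

# The round trip does not return the original: the fine detail is gone.
assert restored != original

# A second round trip loses nothing further: the damage was done once.
assert decode(encode(restored)) == restored
```

If what I am is an analogue pattern, then whether a "me" survives depends entirely on whether the detail the codec discards was part of the essential pattern, and nobody can currently say which details those are.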

Getting back to the point, perhaps the required identity is like the identity of a game? We could record the positions half-way through a chess game and reproduce them on any chessboard.  Although in a sense that does allow us to reconstitute the same game in another place or time, there’s an important sense in which it wouldn’t really be the same game unless it was the same players resuming after an interval. In the same way we might be able to record my data and produce any number of identical twins, but I’d be inclined to say none of them would in fact be me unless the same… what?
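The recording-and-reproducing move can be sketched in a few lines (an ad hoc snapshot format of my own devising, not real chess notation):

```python
# Record a partial chess position as a text snapshot and
# reconstitute it on a fresh board.
position = {"e4": "WP", "e5": "BP", "f3": "WN"}

def record(pos):
    # serialise square->piece pairs in a fixed order
    return ";".join(f"{sq}={pc}" for sq, pc in sorted(pos.items()))

def reconstruct(snapshot):
    # rebuild the square->piece mapping from the snapshot
    return dict(item.split("=") for item in snapshot.split(";"))

snapshot = record(position)
new_board = reconstruct(snapshot)

assert new_board == position        # the same state of play...
assert new_board is not position    # ...set up on a different board
```

Everything the snapshot captures is reproduced perfectly; what it cannot capture, as the paragraph above notes, is whether the same players are resuming the same game, and that is just the question that matters.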

There’s a clear danger here of circularity if we say that the chess game is the same only if the same people are involved. That works for chess, but it will hardly help us to say that my identity is preserved if the same person is making the decisions before and after. But we might scrape past the difficulty if we say that the key thing is that the same plans and intentions are resumed. It’s the same game if the same strategies and inclinations are resumed, and in a similar way it’s the same person if the same attitudes and intentions are reconstituted.

That sounds all right at first, though it raises further questions of its own; but it takes us back towards the quagmire of intentionality, and moreover it faces the same problem we had earlier with information. It’s even clearer in the case of a game of chess that someone else could have the same plans and intentions without being me; why should someone built to the same design as me, no matter how exquisitely faithful in the minutest detail, be me?

I confess that to me the whole idea of a transfer seems to hark back to old-fashioned dualism.  I think most transferists would consider themselves materialists, but it does look as if what we’re transferring is an ill-defined but important entity distinct from the simple material substance of our brain. But I’m not an abstraction or a set of information, I’m a brute physical entity. It is, I think, a failure to recognise that the world does not consist of theory, and that a theory of redness does not contain any actual red, which gives rise to puzzlement over qualia; and a similar failure leads us to think that I myself, in all my inexplicably specific physical haecceity, can be reduced to an ethereal set of data.

The Nuffield Council on Bioethics is running a consultation on the ethics of new brain technologies: specifically they mention neurostimulation and neural stem cell therapy. Neurostimulation includes transcranial magnetic stimulation (TMS), which typically requires nothing more than putting on a special cap or set of electrodes, and deep brain stimulation (DBS), where the electrodes are surgically inserted into the brain.

All of these are existing technologies which are already in use to varying degrees, and the consultation is prudently geared towards gathering real experience. But of course we can range a bit more freely than that, and it raises an interesting general question: what new crimes can we now commit?

Disappointingly it actually seems that there aren’t many really new neurocrimes; most of the candidates turn out to be variations or extensions of the old ones. Even where there is an element of novelty there’s often a strong analogy which allows us to transpose an existing moral framework to the new conditions (not that that necessarily means that there are easy or uncontroversial answers to the questions, of course).

I think I’ve said before, for example, that TMS seems to hold out the prospect of something analogous to the trade in illicit drugs. An unscrupulous neurologist could surely sell wonderful experiences produced by neural stimulation and might well be able to create a dependency which could be exploited for money and general blackmail. The main difference here is that the crucial lever is control of the technology rather than control of the substance, but that is  a relatively small matter which has some blurry edges anyway.

It’s possible the new technologies might also be able to enhance your brain – if they allow better concentration or recall of information, for example. There is apparently some evidence that TMS might be capable of improving your exam scores. That clearly opens up a question as to whether enhanced performance in an exam, produced by neural stimulation, is cheating; and the wider question of whether easier access to TMS by wealthier citizens would build in a politically unacceptable advantage for those who are already privileged. So far as I know there’s no current drug or regime which automatically and reliably boosts academic performance; nevertheless, the issues are essentially the same as those which arise in the case of various other forms of exam cheating, or over access to superior educational facilities. There may be a new aspect to the problem here in that traditional approaches generally rest on the idea that each person has a genuine inherent level of ability; this may become less clear. If a quick shot of TMS through the skull boosts your performance for the next hour only, we might see things one way; whereas if wearing a set of electrodes helps you study and acquire permanently better understanding, we might be more inclined to think it is legitimate in at least some respects. Moreover a boost which can be represented as therapeutic, correcting a deficit rather than providing an enhancement, is far more likely to be deemed acceptable. All in all, we haven’t got anything much more than new twists on existing questions.

There is likely to be some scope for improperly influencing the behaviour of others through neural techniques, but this has clear parallels in hypnotism, confidence trickery, and other persuasive techniques; again there’s nothing completely novel here. Indeed, it could be argued that many con tricks and feats of conjuring rest on exploiting neurological quirks as it is.

To steal information from someone’s brain is morally not fundamentally different from stealing it out of their diary; and to injure someone by destroying a mental faculty broadly resembles physical injury – the two may indeed go together in many cases.

So what is new? I think if there is fresh scope for evil-doing it is probably to be found in the manipulation of personality and identity. Even here the path is not untrodden, with a substantial history of attempts to modify the personality through drugs or leucotomy; but there is now at least a prospect, albeit still some way off, of far better and more precise tools. As with cosmetic surgery, we might expect the modification of personality to be limited to cases where it has a therapeutic value, together with a range of elective cases over which there might be some argument. The novel thing here is that many cases would require consent; but unlike a nose job, personality modification attacks the basis of consent.

Consider an absurd example in which subject A seeks modification to achieve greater courage and maturity; having achieved both, the improved A now disapproves of the idea of personality modification and insists the changes constitute an injury which must be reversed; once they are reversed, A, with the old personality, wants them done again.

It could be worse; since personality and identity are linked, the new A might take a different line and insist that the changes made in the brain he inhabits were effectively the murder of an older self. This would be as bad a crime as old-fashioned killing, but now it’s no good reversing the changes because that amounts to a further murder, and it could be argued that the restored A is in fact not the original come back, but a third person who merely resembles the original; a kind of belated twin. A’s brother might sue all the new personalities on the basis that none of them has any more rights to the property and body of the original than a squatter in someone’s house.

In circumstances like these there might be a lobby for the view that personality modification should be subject to a blanket ban, in rather the same way that society generally bans us from editing out undesirable personalities with a gun – even our own.

Of course there is in principle another novel crime we might be able to commit: the removal of someone’s qualia, their inward subjective experience. This has often been contemplated in the philosophical literature (it is remarkable how many of the most popular thought-experiments in philosophy of mind – whose devotees generally seem the mildest and most enlightened of people – involve atrocious crimes); perhaps now it can become real. The crime would be undetectable since the ineffable qualities it removes could never be mentioned by the victim; the snag is that since there could be no way of measuring our success it’s probably impossible to devise or test the required diabolical apparatus…