Downloading Hauskeller

Michael Hauskeller has an interesting and very readable paper in the International Journal of Machine Consciousness on uploading – the idea that we could transfer ourselves from this none-too-solid flesh into a cyborg body or even just into the cloud as data. There are bits I thought were very convincing and bits I thought were totally wrong, which overall is probably a good sign.

The idea of uploading is fairly familiar by now; indeed, for better or worse it resembles ideas of transmigration, possession, and transformation which have been current in human culture for thousands of years at least. Hauskeller situates it as the logical next step in man’s progressive remodelling of the environment, while also nodding to those who see it as the next step in the evolution of humankind itself. The idea that we could transfer or copy ourselves into a computer, Hauskeller points out, rests on the assumption that if we recreate the right functional relationships, the phenomenological effects of consciousness will follow; that, as Minsky put it, ‘Minds are what Brains do’. For Hauskeller this remains a speculation, an empirical question we are not yet in a position to test, since we have not as yet built a whole brain simulation (I’m not sure how we would test the phenomenology even after that, but perhaps only philosophers would be seriously worried about it…). In fact there are some difficulties, since it has been shown that identical syntax does not guarantee identical semantics – so two identical brains could contain identical thoughts but mean different things by them, or something strange like that. While I think the basic point is technically true of derived intentionality – the same sentence written by different people can have different meanings, as in the case of books – it’s not clear to me that it’s true for brains, the source of original intentionality.

However, as Hauskeller says, uploading also requires that identity is similarly transferable, that our computer-based copy would be not just a mind, but a particular mind – our own. This is a much more demanding requirement. Hauskeller suggests the analogy of books might be brought forward: the novel Ulysses can be multiply realised in many different media, yet remains the same book. Why shouldn’t we be like that? Well, he thinks the case of readers is different. Two people might both be reading Ulysses at the same moment, so that (arguably) the contents of their minds were identical; but we wouldn’t say they had become the same person. Conceivably at least, the same mind could be ‘read’ by different selves in the same way a single book can be read by different readers.

Hauskeller’s premise there is questionable – two people reading the same book don’t have identical mental content (a point he has just touched on, oddly enough, since it would follow from the fact that syntax doesn’t guarantee semantics, even if it didn’t follow simply from the complexity of our multi-layered mental lives). I’d say the very idea of identical mental content is hard to imagine, and that by using it in thought-experiments we risk, as Dennett has warned, mistaking our own imaginative difficulties for real-world constraints. But Hauskeller’s general point, that identity need not follow from content alone, is surely sound enough.

What about Ray Kurzweil’s argument from gradualism? This points out that we might replace someone’s body with cyborg parts bit by bit. We wouldn’t have any doubt about the continuing identity of someone with a cyborg eye, nor of someone with an electronic hippocampus. If each neuron were replaced by a functional equivalent one by one, we’d be forced to accept either that the final robot, with no biological parts at all, was indeed the same continuing person, or that at some stage a single neuron made a stark binary difference between being the same person and not being the same person. If the final machine can be the same person, then uploading by less arduous methods is surely also possible, since it’s equivalent to making the final machine by another route?

Hauskeller basically bites Kurzweil’s bullet. Yes, it’s conceivable that at some stage there will come neurons whose replacement quite suddenly switches off the person being operated on. I have a lot of sympathy with the idea that some particular set of neurons might prove crucial to identity, but I don’t think we need to accept the conceivability of sudden change in order to reject Kurzweil’s argument. We can simply suppose that the subject becomes a chimera: a compound of two identically functioning people. The new person keeps up appearances all right, but the borders of the old personality gradually shrink to destruction, though it may be very unclear when exactly that should be said to have happened.

Suppose (my example) an image of me is gradually overlaid with an image of my identical evil twin Retep, one line of pixels at a time. No one can even tell the process is happening, yet at some stage it ceases to be a picture of me and becomes one of Retep. The fact that we cannot tell when does not prove that I am identical with Retep, nor that both pictures are of me.

Hauskeller goes on to attack ‘information idealism’. The idea of uploading often rests on the view that in the final analysis we consist of information, but

Having a mind generally means being to some extent aware of the world and oneself, and this awareness is not itself information. Rather, it is a particular way in which information is processed…

Hauskeller, provocatively but perhaps not unjustly, accuses those who espouse information idealism of Cartesian substance dualism; they assume the mind can be separated from the body.

But no, it can’t: Hauskeller goes on to suggest that in fact the whole body is important to our mental life; we are not just our brains. He quotes Alva Noë and goes further, saying:

That we can manipulate the mind by manipulating the brain, and that damages to our brains tend to inhibit the normal functioning of our minds, does not show that the mind is a product of what the brain does.

The brain might instead, he says, be like a window; if the window is obscured, we can’t see beyond it, but that does not mean the window causes what lies beyond it.

Who’s sounding dualist now? I don’t think that works. Suppose I am knocked unconscious by the brute physical intervention of a cosh; if the brain were merely transmitting my mind, my mental processes would continue offstage and then when normal service was resumed I should be aware that thoughts and phenomenology had been proceeding while my mere brain was disabled. But it’s not like that; knocking out the brain stops mental processes in a way that blocking a window does not stop the events taking place outside.

Although I take issue with some of his reasoning, I think Hauskeller’s objections have some force, and the limited conclusion he draws – that the possibility of uploading a mind, let alone an identity, is far from established – is true as far as it goes.

How much do we care about identity as opposed to continuity of consciousness? Suppose we had to choose between, on the one hand, retaining our bare identity while losing all our characteristics – our memories, our opinions and emotions, our intelligence, abilities and tastes – and getting instead some random stranger’s equivalents; and, on the other, losing our identity but leaving behind a new person whose behaviour, memories, and patterns of thought were exactly like ours. I suspect some people might choose the latter.

If your appetite for discussion of Hauskeller’s paper is unsatisfied, you might like to check out John Danaher’s two-parter on it.