Brain Preservation Prize

The prize offered by the Brain Preservation Foundation has been won by 21st Century Medicine (21CM) with the Aldehyde-Stabilized Cryopreservation (ASC) technique it has developed. In essence this combines chemical and cryogenic approaches and is apparently capable of preserving the whole connectome (or neural network) of a large mammalian brain (in this case a pig brain) in full detail and indefinitely. That is a remarkable achievement. A paper is here.

I am an advisor to the BPF, though I should make it clear that they don’t pay me and I haven’t given them a great deal of advice. I’ve always said I would be a critical friend, in that I doubt this research is ever going to lead to personal survival of the self whose brain is preserved. However, in my opinion it is much more realistic than a pure scan-and-upload approach, and has the potential to yield many interesting benefits even if it never yields personal immortality.

One great advantage of preserving the brain like this is that it defers some choices. When we model a brain or attempt to scan it into software, we have to pick out the features we think are salient, and concentrate on those. Since we don’t yet have any comprehensive and certain account of how the brain functions, we might easily be missing something essential. If we keep the whole of an actual brain, we don’t have to make such detailed choices, and so have a better chance of preserving features whose importance we haven’t yet recognised.

It’s still possible that we might lose something essential, of course. ASC, not unreasonably, concentrates on preserving the connectome. I’m not sure whether, for example, it also keeps the brain’s astrocytes in good condition, though I’d guess it probably does. These are the non-neuronal cells that have often been regarded as mere packing, but which may in fact have significant roles to play. Recently we’ve heard that neurons appear to signal with RNA packets; again, I don’t know whether ASC preserves any information about that – though it might. But even on a pessimistic view, ASC must in my view be a far better preservation proposition than digital models that explicitly drop the detailed structure of individual neurons in favour of an unrealistically standardised model, and struggle with many other features.

Preserving brains in fine detail is a worthy project in itself, which might yield big benefits to research in due course. But of course the project embodies the hope that the contents of a mind and even the personality of an individual could be delivered to posterity. I do not think the contents of a mind are likely to be recoverable from a preserved brain for a while yet, but in the long run, why not? On identity, I am a believer in brute physical continuity. We are our brains, I believe (I wave my hands to indicate various caveats and qualifications which need not detain us here). If we want to retain our personal identity, then, the actual physical preservation of the brain is essential.

Now, once your brain has been preserved by ASC, it really isn’t going to be started up again in its existing physical form. The Foundation looks to uploading at this stage, but because I don’t think digital uploading as we now envision it is possible in principle, I don’t see that ever working. However, there is a tiny chink of light at the end of that gloomy tunnel. My main problem is with the computational nature of uploading as currently envisaged. It is conceivable that the future will bring non-computational technologies which just might allow us to upload, not ourselves, but a kind of mental twin at least. That’s a remote speculation, but still a fascinating prospect. Is it just conceivable that ways might be found to go that little bit further and deliver some kind of actual physical interaction between these hypothetical machines and the essential slivers of a preserved brain, some echo such that identity was preserved? Honestly, I think not, but I won’t quite say it is inconceivable. You could say that in my view the huge advantage of the brain preservation strategy for achieving immortality is that unlike its rivals it falls just short of being impossible in principle.

So I suppose, to paraphrase Gimli the dwarf: certainty of death; microscopic chance of success – what are we waiting for?

Postscript: I meant by that last bit that we should continue research, but I see it is open to misinterpretation. I didn’t dream people would actually do this, but I read that Robert McIntyre, lead author of the paper linked above, is floating a startup to apply the technique to people who are not yet dead. That would surely be unethical. If suicide were legal and if you had decided that was your preferred option, you might reasonably choose a method with a tiny chance of being revived in future. But I don’t think you can ask people to pay for a technique (surely still inadequately tested and developed for human beings) where the prospects of revival are currently negligible and most likely will remain so.

Digital Afterlife

A digital afterlife is likely to be available one day, according to Michael Graziano, albeit not for some time; his piece re-examines the possibility of uploading consciousness, and your own personality, into a computer. I think he does a good job of briefly sketching the formidable difficulties involved in scanning your brain, and scanning so precisely that your individual selfhood could be captured. In fact, he does it so well that I don’t really understand where his ultimate optimism comes from.

To my way of thinking, ‘scan and build’ isn’t even the most promising way of duplicating your brain. A more plausible way would be some kind of future bio-engineering where your brain just grows and divides, rather in the way that single cells do. A neater way would be some sort of hyper-path through space that split you along the fourth spatial dimension and returned both slices to our normal plane. Neither of these options is exactly a feasible working project, but to me they seem closer to being practical than a total scan. Of course neither of them offers the prospect of an afterlife the way scanning does, so they’re not really relevant for Graziano here. He seems to think we don’t need to go down to an atom-by-atom scan, but I’m not sure why not. Granted, the loss of one atom in the middle of my brain would not destroy my identity, but not scanning to an atomic level generally seems a scarily approximate and slapdash approach to me, given the relevance of certain key molecules in the neural process – something Graziano fully recognises.

If we’re talking about actual personal identity I don’t think it really matters though, because the objection I consider strongest applies even to perfect copies. In thought experiments we can do anything, so let’s just specify that by pure chance there’s another brain nearby that is in every minute detail the same as mine. It still isn’t me, for the banal commonsensical reason that copies are not the original. Leibniz’s Law tells us that if B has exactly the same properties as A, then it is A; but among the properties of a brain is its physical location, so a brain over there is not the same as one in my skull (so in fact I cheated by saying the second brain was the same in every detail but nevertheless ‘nearby’).
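Programmers mark exactly this distinction, between having the same content and being the same thing, in their languages. Here is a minimal Python sketch of the point (the analogy and the invented values are mine, not Graziano’s):

```python
# Two objects can share every observable property (qualitative identity)
# while remaining two distinct things (numerical identity).
brain_a = {"neurons": 86_000_000_000, "memories": ["a sunset", "a first kiss"]}
brain_b = {"neurons": 86_000_000_000, "memories": ["a sunset", "a first kiss"]}

print(brain_a == brain_b)          # True  – same properties, a perfect copy
print(brain_a is brain_b)          # False – still two objects, not one
print(id(brain_a) == id(brain_b))  # False – they differ in 'location'
```

Equality of content is cheap; being the original is not a property a copy can inherit.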

Now most philosophers would say that Leibniz is far too strong a criterion of identity when it comes to persons. There have been hundreds of years of discussion of personal identity, and people generally espouse much looser criteria for a person than they would for a stone – from identity of memories to various kinds of physical, functional, or psychological continuity. After all, people are constantly changing: I am not perfectly identical in physical terms to the person who was sitting here an hour ago, but I am still that person. Graziano evidently holds that personal identity must reside in functional or informational qualities of the kind that could well be transferred into a digital form, and he speaks disparagingly of ‘mystical’ theories that see problems with the transfer of consciousness. I don’t know about that; if anyone is hanging on to residual spiritual thinking, isn’t it the people who think we can be ‘taken out of’ our bodies and live forever? The least mystical stance is surely the one that says I am a physical object, and with some allowance for change and my complex properties, my identity works the same as that of any other physical object. I’m a one-off, particular thing and copies would just be copies.

What if we only want a twin, or a conscious being somewhat like me? That might still be an attractive option after all. OK, it’s not immortality, but I think, without being rampant egotists, most of us probably feel the world could stand a few more people like ourselves around, and we might like to have a twin continuing our good work once we’re gone.

That less demanding goal changes things. If that’s all we’re going for, then yes, we don’t need to reproduce a real brain with atomic fidelity. We’re talking about a digital simulation, and as we know, simulations do not reproduce all the features of the thing being simulated – only those that are relevant for the current purpose. There is obviously some problem about saying what the relevant properties are when it comes to consciousness; but if passing the Turing Test is any kind of standard, then delivering good outputs for conversational inputs is a fair guide, and that looks like the kind of thing where informational and functional properties are very much to the fore.

The problem, I think, is again with particularity. Conscious experience is a one-off thing while data structures are abstract and generic. If I have a particular experience of a beautiful sunset, and then (thought experiments again) I have an entirely identical one a year later, they are not the same experience, even though the content is exactly the same. Data about a sunset, on the other hand, is the same data whenever I read or display it.

We said that a simulation needs to reproduce the relevant aspects of the thing simulated; but in a brain simulation the processes are only represented symbolically, while one of the crucial aspects we need for real experience is particular reality.

Maybe though, we go one level further; instead of simulating the firing of neurons and the functional operation of the brain, we actually extract the program being run by those neurons and then transfer that. Here there are new difficulties; scanning the physical structure of the brain is one thing; working out its function and content is another thing altogether; we must not confuse information about the brain with the information in the brain. Also, of course, extracting the program assumes that the brain is running a program in the first place and not doing something altogether less scrutable and explicit.

Interestingly, Graziano goes on to touch on some practical issues; in particular he wonders how the resources to maintain all the servers are going to be found when we’re all living in computers. He suspects that as always, the rich might end up privileged.

This seems a strange failure of his technical optimism. Aren’t computers going to go on getting more powerful, and cheaper? Surely the machines of the twenty-second century will laugh at this kind of challenge (perhaps literally). If there is a capacity problem, moreover, we can all be made intermittent; if I get stopped for a thousand years and then resume, I won’t even notice. Chances are that my simulation will be able to run at blistering speed, far faster than real time, so I can probably experience a thousand years of life in a few computed minutes. If we get quantum computers, all of us will be able to have indefinitely long lives with no trouble at all, even if our simulated lives include having digital children or generating millions of digital alternates of ourselves, thereby adding to the population. Graziano, optimism kicking back in, suggests that we can grow in understanding and come to see our fleshly life as a mere larval stage before we enter on our true existence. Maybe, or perhaps we’ll find that human minds, after ten billion years (maybe less), exhaust their potential and ultimately settle into a final state; in which case we can just get the computers to calculate that and then we’ll all be finalised, like solved problems. Won’t that be great?
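Just to put a number on that speculation (the arithmetic is mine, and ‘a few minutes’ is an assumption – say five): a thousand subjective years in five minutes of real time implies a speed-up factor of roughly a hundred million.

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600      # ~31.6 million seconds
subjective_time = 1000 * SECONDS_PER_YEAR  # a thousand years of experience
wall_clock_time = 5 * 60                   # 'a few computed minutes' (say five)

print(f"required speed-up: {subjective_time / wall_clock_time:,.0f}x")
# required speed-up: 105,192,000x
```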

I think that speculations of this kind eventually expose the contrast between the abstraction of data and the reality of an actual life, and dramatise the fact, perhaps regrettable, perhaps not, that you can’t translate one into the other.


Downloading Hauskeller

Michael Hauskeller has an interesting and very readable paper in the International Journal of Machine Consciousness on uploading – the idea that we could transfer ourselves from this none-too-solid flesh into a cyborg body or even just into the cloud as data. There are bits I thought were very convincing and bits I thought were totally wrong, which overall is probably a good sign.

The idea of uploading is fairly familiar by now; indeed, for better or worse it resembles ideas of transmigration, possession, and transformation which have been current in human culture for thousands of years at least. Hauskeller situates it as the logical next step in man’s progressive remodelling of the environment, while also nodding to those who see it as the next step in the evolution of humankind itself. The idea that we could transfer or copy ourselves into a computer, Hauskeller points out, rests on the idea that if we recreate the right functional relationships, the phenomenological effects of consciousness will follow; that, as Minsky put it, ‘Minds are what Brains do’. This remains for Hauskeller a speculation, an empirical question we are not yet in a position to test, since we have not as yet built a whole brain simulation (not sure how we would test phenomenology even after that, but perhaps only philosophers would be seriously worried about it…). In fact there are some difficulties, since it has been shown that identical syntax does not guarantee identical semantics (so two identical brains could contain identical thoughts but mean different things by them – or something strange like that. While I think the basic point is technically true with reference to derived intentionality, for example in the case of books – the same sentence written by different people can have different meanings – it’s not clear to me that it’s true for brains, the source of original intentionality.).

However, as Hauskeller says, uploading also requires that identity is similarly transferable, that our computer-based copy would be not just a mind, but a particular mind – our own. This is a much more demanding requirement. Hauskeller suggests the analogy of books might be brought forward; the novel Ulysses can be multiply realised in many different media, but remains the same book. Why shouldn’t we be like that? Well, he thinks readers are different. Two people might both be reading Ulysses at the same moment, meaning the contents of their minds were identical; but we wouldn’t say they had become the same person. Conceivably at least, the same mind could be ‘read’ by different selves in the same way a single book can be read by different readers.

Hauskeller’s premise there is questionable – two people reading the same book don’t have identical mental content (a point he has just touched on, oddly enough, since it would follow from the fact that syntax doesn’t guarantee semantics, even if it didn’t follow simply from the complexity of our multi-layered mental lives). I’d say the very idea of identical mental content is hard to imagine, and that by using it in thought-experiments we risk, as Dennett has warned, mistaking our own imaginative difficulties for real-world constraints. But Hauskeller’s general point, that identity need not follow from content alone, is surely sound enough.

What about Ray Kurzweil’s argument from gradualism? This points out that we might replace the parts of a person with cyborg equivalents bit by bit. We wouldn’t have any doubt about the continuing identity of someone with a cyborg eye; nor of someone with an electronic hippocampus. If each neuron were replaced by a functional equivalent one by one, we’d be forced to accept either that the final robot, with no biological parts at all, was indeed the same continuing person, or that at some stage a single neuron made a stark binary difference between being the same person and not being the same person. If the final machine can be the same person, then uploading by less arduous methods is surely also possible, since it’s equivalent to making the final machine by another route?

Hauskeller basically bites Kurzweil’s bullet. Yes, it’s conceivable that at some stage there will come neurons whose replacement quite suddenly switches off the person being operated on. I have a lot of sympathy with the idea that some particular set of neurons might prove crucial to identity, but I don’t think we need to accept the conceivability of sudden change in order to reject Kurzweil’s argument. We can simply suppose that the subject becomes a chimera; a compound of two identically-functioning people. The new person keeps up appearances alright, but the borders of the old personality gradually shrink to destruction, though it may be very unclear when exactly that should be said to have happened.

Suppose (my example) an image of me is gradually overlaid with an image of my identical evil twin Retep, one line of pixels at a time. No one can even tell the process is happening, yet at some stage it ceases to be a picture of me and becomes one of Retep. The fact that we cannot tell when does not prove that I am identical with Retep, nor that both pictures are of me.
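To make the thought experiment concrete, here is a minimal sketch of that overlay process (mine, using NumPy and a hypothetical pair of stand-in images). Each pass changes a single row of pixels; no individual step looks decisive, yet by the end the picture is wholly Retep’s:

```python
import numpy as np

# Stand-in greyscale portraits: 'me' all black, 'retep' all white.
me = np.zeros((100, 100), dtype=np.uint8)
retep = np.full((100, 100), 255, dtype=np.uint8)

picture = me.copy()
for row in range(picture.shape[0]):
    picture[row, :] = retep[row, :]  # overwrite one line of pixels
    # Each step alters only 1% of the image; at no single row is there
    # an obvious moment when it stops being a picture of 'me'.

assert np.array_equal(picture, retep)  # the overlay is now complete
```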

Hauskeller goes on to attack ‘information idealism’. The idea of uploading often rests on the view that in the final analysis we consist of information, but

Having a mind generally means being to some extent aware of the world and oneself, and this awareness is not itself information. Rather, it is a particular way in which information is processed…

Hauskeller, provocatively but perhaps not unjustly, accuses those who espouse information idealism of Cartesian substance dualism; they assume the mind can be separated from the body.

But no, it can’t: Hauskeller goes on to suggest that in fact the whole body is important to our mental life; we are not just our brains. He quotes Alva Noë and goes further, saying:

That we can manipulate the mind by manipulating the brain, and that damages to our brains tend to inhibit the normal functioning of our minds, does not show that the mind is a product of what the brain does.

The brain might instead, he says, be like a window; if the window is obscured, we can’t see beyond it, but that does not mean the window causes what lies beyond it.

Who’s sounding dualist now? I don’t think that works. Suppose I am knocked unconscious by the brute physical intervention of a cosh; if the brain were merely transmitting my mind, my mental processes would continue offstage and then when normal service was resumed I should be aware that thoughts and phenomenology had been proceeding while my mere brain was disabled. But it’s not like that; knocking out the brain stops mental processes in a way that blocking a window does not stop the events taking place outside.

Although I take issue with some of his reasoning, I think Hauskeller’s objections have some force, and the limited conclusion he draws – that the possibility of uploading a mind, let alone an identity, is far from established – is true as far as it goes.

How much do we care about identity as opposed to continuity of consciousness? Suppose we had to choose between, on the one hand, retaining our bare identity while losing all our characteristics, our memories, our opinions and emotions, our intelligence, abilities and tastes, and getting instead some random stranger’s equivalents; or, on the other, losing our identity but leaving behind a new person whose behaviour, memories, and patterns of thought were exactly like ours? I suspect some people might choose the latter.

If your appetite for discussion of Hauskeller’s paper is unsatisfied, you might like to check out John Danaher’s two-parter on it.