The Ontological Gap

There’s a fundamental ontological difference between people and programs, one that makes uploading a mind into a machine quite impossible.

I thought I’d get my view in first (hey, it’s my blog), but I was inspired to do so by Beth Elderkin’s compilation of expert views in Gizmodo, inspired in turn by Netflix’s series Altered Carbon. The question of uploading is often discussed in terms of a hypothetical Star Trek-style scanner and the puzzling thought experiments it enables. What if instead of producing a duplicate of me far away, the scanner produced two duplicates? What if my original body was not destroyed – which one is me? But let’s cut to the chase; digital data and a real person belong to different ontological realms. Digital data is a set of numbers, and so has a kind of eternal Platonic essence. A person is a messy entity bound into time and space. The former subsist, the latter exist; you cannot turn one into the other, any more than an integer can become a biscuit and get eaten.

Or look at it this way; a digitisation is a description. Descriptions, however good, do not contain the thing described (which is why the science of colour vision does not contain the colour red, as Mary found in the famous thought experiment).

OK, well, that’s that, see you next time… Oh, sorry, yes, the experts…

Actually there are many good points in the expert views. Susan Schneider makes three main ones. First, we don’t know what features of a human brain are essential, so we cannot be sure we are reproducing them; quantum physics imposes some limits on how precisely we can copy the brain anyway. Second, the person left behind by a non-destructive scanner surely is still you, so a destructive scan amounts to death. Third, we don’t know whether AI consciousness is possible at all. So no dice.

Anders Sandberg makes the philosophical point that it’s debatable whether a scanner transfers identity. He tends to agree with Parfit that there is no truth of the matter about it. He also makes the technical point that scanning a brain in sufficient detail is a vast and daunting task, well beyond current technology at least. While a digital insert controlling a biological body seems feasible in theory, reshaping a biological brain is probably out of the question. He goes on to consider ethical objections to uploading, which don’t convince him.

Randal Koene thinks uploading probably is possible. Human consciousness, according to the evidence, arises from brain processes; if we reproduce the processes, we reproduce the mind. The way forward may be through brain prostheses that replace damaged sections of brain, which might lead ultimately to a full replacement. He thinks we must pursue the possibility of uploading in order to escape from the ecological niche in which we may otherwise be trapped (I think humans have other potential ways around that problem).

Miguel A. L. Nicolelis dismisses the idea. Our minds are not digital at all, he says, and depend on information embedded in the brain tissue that cannot be extracted by digital means.

I’m basically with Nicolelis, I fear.


Mind-meld rats

This paper by Pais-Vieira, Lebedev, Kunicki, Wang, and Nicolelis has attracted a great deal of media attention. The BBC described it as ‘literally mind-boggling’. It describes a series of experiments in which the minds of two rats were apparently melded to act as one.

Or does it? One rat, the ‘encoder’, was given a choice of levers to push – left or right (in some cases a more rat-friendly nose-activated switch was used instead of a lever). If it selected the correct one when cued, it got a reward in the form of a few drops of water (it seems even lab rats are not getting the rewards they used to these days). Some of the rats learned to pick the right lever in 95% of cases, and these went on to the next stage, where the patterns of activation from their sensorimotor cortex as they pushed the right lever were picked up and transmitted.

Meanwhile ‘decoder’ rats had been fitted with similar brain implants and trained to respond to a series of impulses delivered in the same sensorimotor area by pressing the right lever. In this training stage they were not receiving impulses from another rat, just an artificially produced stream of blips. This phase of training apparently took about 45 days.

Finally, the two rats were joined up and lo: the impulses recorded from the ‘encoder’ rat, once delivered to the brain of the ‘decoder’ rat, enabled it to hit the right lever with up to 70% accuracy (you could get 50% from random pressing, of course, but it’s still a significant improvement in performance). In one pointless variation, the encoder and decoder rats were in different labs thousands of miles apart; so what? Are we still amazed that electrical signals can be transmitted over long distances?
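
To put that 70% in context, the relevant question is how unlikely such a hit rate would be if the decoder rat were just pressing at random. Here’s a minimal sketch of the arithmetic, a one-sided exact binomial test; the trial count is my own illustrative assumption, not a figure from the paper:

```python
from math import comb

# Illustrative numbers only (the paper reports its own trial counts):
# suppose the decoder rat pressed the correct lever 70 times in 100 trials.
# Under the null hypothesis of random pressing, each trial succeeds with
# probability 0.5, so the number of successes follows Binomial(n=100, p=0.5).
n, k = 100, 70

# One-sided exact tail probability: P(X >= k) for X ~ Binomial(n, 0.5)
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
print(f"P(at least {k}/{n} correct by chance) = {p_value:.2e}")  # ≈ 3.9e-05
```

Of course, a tiny p-value only shows that some information got through the channel; it says nothing about what kind of thing was transferred, which is the real issue here.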

A couple of other aspects of the experiments seem odd to me. First, they did not have a control experiment where the signals went to a different part of the decoder rat’s cortex, so we can’t tell whether the use of the particular areas they settled on was significant. Second, they provided the encoder rat with incentives: it got water only when the decoder rat got it right. What was that meant to achieve, apart from making the encoder rat’s life slightly worse than it already was? In essence, it encouraged the encoder rat to develop effective signals: to step up the clarity and strength of the neural signals it was sending out. That may have helped to make the experiment a success, but it also detracts from any claim that what was being sent was normal neural activity.

So, what have we got, overall? Really, nothing to speak of. We’re encouraged to think that the decoder rat was hearing the encoder’s thoughts, or feeling its inclinations, or something of the kind, but there’s clearly a much simpler explanation baked into the experiment: it was simply responding to electric impulses of a kind that it had already been trained to respond to (for 45 days, which must be the rat equivalent of post-doctorate levels of lever-pushing knowledge).

Given the lengthy training and selection of the rats, I don’t think a 70% success rate is that amazing: it seems clear that they could have got a better rate if, instead of inserting precise neural connections, they had simply clipped an electrode to the decoder rat’s left ear.

There’s no evidence here of direct transmission of cognitive content: the simple information transferred is delivered via the association already trained into the ‘decoder’. There’s no decoding, and no communication in rat mentalese.

The discussion in the paper ends with the following remarkable proposition.

 …in theory, channel accuracy can be increased if instead of a dyad a whole grid of multiple reciprocally interconnected brains are employed. Such a computing structure could define the first example of an organic computer capable of solving heuristic problems that would be deemed non-computable by a general Turing-machine. Future works will elucidate in detail the characteristics of this multi-brain system, its computational capabilities, and how it compares to other non-Turing computational architectures…

Well, I’m boggling now.