Mind-meld rats

This paper by Pais-Vieira, Lebedev, Kunicki, Wang, and Nicolelis has attracted a great deal of media attention. The BBC described it as ‘literally mind-boggling’. It describes a series of experiments in which the minds of two rats were apparently melded to act as one.

Or does it? One rat, the ‘encoder’, was given a choice of levers to push – left or right (in some cases a more rat-friendly nose-activated switch was used instead of a lever). If it selected the correct one when cued, it got a reward in the form of a few drops of water (it seems even lab rats are not getting the rewards they used to these days). Some of the rats learned to pick the right lever in 95% of cases, and these went on to the next stage, where the patterns of activation from their sensorimotor cortex as they pushed the right lever were picked up and transmitted.

Meanwhile ‘decoder’ rats had been fitted with similar brain implants and trained to respond to a series of impulses delivered in the same sensorimotor area by pressing the right lever. In this training stage they were not receiving impulses from another rat, just an artificially produced stream of blips. This phase of training apparently took about 45 days.

Finally, the two rats were joined up and lo: the impulses recorded from the ‘encoder’ rat, once delivered to the brain of the ‘decoder’ rat, enabled it to hit the right lever with up to 70% accuracy (you could get 50% from random pressing, of course, but it’s still a significant improvement in performance). In one pointless variation, the encoder and decoder rats were in different labs thousands of miles apart; so what? Are we still amazed that electrical signals can be transmitted over long distances?
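The claim that 70% beats the 50% you’d get from random pressing is easy to sanity-check with a binomial tail calculation. The sketch below assumes a hypothetical session of 100 trials – the paper’s actual trial counts vary per session – just to show that 70% is far outside what chance alone would plausibly produce.

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more successes."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical session of 100 trials (not taken from the paper):
# how likely is it that random pressing would hit the right lever 70+ times?
n_trials = 100
p_value = binom_tail(n_trials, 70)
print(f"P(>=70/100 correct by chance) = {p_value:.2e}")
```

With these assumed numbers the probability comes out vanishingly small, so the improvement over chance is statistically real – the question in the rest of the post is what, if anything, it tells us about minds.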

A couple of other aspects of the experiments seem odd to me. They did not have a control experiment where the signals went to a different part of the decoder rat’s cortex, so we can’t tell whether the use of the particular areas they settled on was significant. Second, they provided the encoder rat with incentives: it got water only when the decoder rat got it right. What was that meant to achieve, apart from making the encoder rat’s life slightly worse than it already was? In essence, it encouraged the encoder rat to develop effective signals: to step up the clarity and strength of the neural signals it was sending out. That may have helped to make the experiment a success, but it also detracts from any claim that what was being sent was normal neural activity.

So, what have we got, overall? Really, nothing to speak of. We’re encouraged to think that the decoder rat was hearing the encoder’s thoughts, or feeling its inclinations, or something of the kind, but there’s clearly a much simpler explanation baked into the experiment: it was simply responding to electric impulses of a kind that it had already been trained to respond to (for 45 days, which must be the rat equivalent of post-doctorate levels of lever-pushing knowledge).

Given the lengthy training and selection of the rats, I don’t think a 70% success rate is that amazing: it seems clear that they could have got a better rate if, instead of inserting precise neural connections, they had simply clipped an electrode to the decoder rat’s left ear.

There’s no evidence here of direct transmission of cognitive content: the simple information transferred is delivered via the association already trained into the ‘decoder’. There’s no decoding, and no communication in rat mentalese.

The discussion in the paper ends with the following remarkable proposition.

 …in theory, channel accuracy can be increased if instead of a dyad a whole grid of multiple reciprocally interconnected brains are employed. Such a computing structure could define the first example of an organic computer capable of solving heuristic problems that would be deemed non-computable by a general Turing-machine. Future works will elucidate in detail the characteristics of this multi-brain system, its computational capabilities, and how it compares to other non-Turing computational architectures…

Well, I’m boggling now.

Radiotelepathy

I see that at Edge they have kept up their annual custom of putting a carefully-chosen question to a group of intellectuals. This year, they asked “What game-changing scientific ideas and developments do you expect to live to see?”. There were many interesting answers. Freeman Dyson foresaw what he called ‘radiotelepathy’. The idea is that a set of small implants record the activity of your brain, which is then transmitted and delivered into someone else’s (and vice versa). Hey presto, at once your thoughts and feelings are shared.

As the Thinking Meat Project remarked,  this idea opens up a number of questions. Given our particular interests here, the first point that came to mind was that this kind of telepathy would surely resolve at last the vexed question of qualia. How do we know that the red seen by others looks the same as the red seen by us? Perhaps the experience they have when they see red is the experience we have when we see green? Or perhaps (a less-often discussed possibility) their red is a bit like our middle C on a badly-tuned piano? Or perhaps their colour experiences are nothing like any of our experiences; perhaps there are an infinite number of phenomenal experiences which go with the perception of colour, and everyone has their own unique and ineffable set.

Well, with Dyson telepathy, there would be no need for us to wonder any more; just tune in to someone else’s brain, and we can have their experiences ourselves. Or can we? Perhaps not. Even as I write, I can sense hard-liners getting ready to insist that qualia recorded, transmitted, and inserted into a new brain, are not the same as the freshly gathered original ones.  You still wouldn’t know what the real thing was like. It might be that it is our brains themselves that impart the special what-it-is-likeness to experiences, in which case even telepathy won’t help, and we can only ever have our own qualia. I think this exposes the insoluble problem at the heart of the whole qualia issue. Really the only way to know what someone’s experiences are like in themselves, is to be that person. But you can’t be someone else.

But steady on, because it seems highly unlikely to me that Dyson telepathy is feasible. I don’t see any insoluble problem with the hardware he calls for, but downloading brain content is a tricky business, and uploading is even worse. To start with, Dyson talks about the ‘entire brain’, but do we want the whole thing? Do I want the activity of someone else’s cerebellum reproduced in my own? Do I want the control routines for my cardiovascular system overwritten? No thanks. So even on the macro scale we have to be very careful about where we put our ingoing signals. Pinpointing the right neurons seems a hopeless task. It’s true that by and large the same regions of the cortex appear to deal with the same functions in different individuals, although variation is also quite possible. It’s also true that recent research has identified individual neurons with very specific responses – neurons that fire, say, in response to the sight of Freeman Dyson, but not in response to anyone else. But so far as I know, it hasn’t been demonstrated that the Dyson neuron in everyone is in the same place even approximately; it actually seems most unlikely, given that brains are wired in highly individual ways, and that indeed, most people have never had the pleasure of meeting Freeman Dyson. I don’t think it’s even been shown that the very same neuron which responds to Dyson today continues to do so next week. Because all our machines are made to have their states encoded in a readable way, we tend to expect the same of nature, but evolution has no need of legible code. So it’s very likely that the neural activity which in my brain corresponds to thinking of Freeman Dyson would, when transposed to another cranium, come out as the tantalising memory of the taste of a biscuit, or an intimation of mortality, or reservations about bicameralism.

Of course it’s worse than that. Our brains are carefully organised, and the random dumping of alien activity would be outstandingly likely to mess things up. Would the brain activity that was there before – my own mental activity – be wiped out, so that instead of sharing someone else’s thoughts, I suddenly thought I was someone else? Or could it somehow be merged? Dissolving the barriers and merging with another person can sound almost sensuously appealing (given the right person), but the sudden appearance of unforeshadowed alien thoughts might actually be terrifying, severely disorienting: a threat to the integrity of the psyche liable to end in trauma. In this respect, it’s worth noting that nothing is more disruptive to one signal than another similar signal. If you write a sentence on a piece of paper and then cross it out with two or three lines, it remains easily legible. But if you write even one other sentence over the top of it, it becomes pretty much illegible at once. In the same way, it seems likely that activity from another brain would be the most disruptive thing you could input to your own, far worse than random noise.

I think the best alternative would be to home in on sensory inputs in the brain and try to place your interface somewhere early in the system before those inputs reach the more complex functions of consciousness. The result then would be more like hearing an external voice, or seeing an external hallucination. Much easier to deal with, but of course not so extraordinary – not really different in kind from using a video phone. At the end of the day, perhaps it’s best to stick to the brain inputs provided by the designer – our normal senses.  Walter Freeman memorably lamented the cognitive isolation in which we are all ultimately confined; but perhaps that isolation is the precondition of personal identity.