This paper by Pais-Vieira, Lebedev, Kunicki, Wang, and Nicolelis has attracted a great deal of media attention. The BBC described it as ‘literally mind-boggling’. It describes a series of experiments in which the minds of two rats were apparently melded to act as one.

Or does it? One rat, the ‘encoder’, was given a choice of levers to push – left or right (in some cases a more rat-friendly nose-activated switch was used instead of a lever). If it selected the correct one when cued, it got a reward in the form of a few drops of water (it seems even lab rats are not getting the rewards they used to these days). Some of the rats learned to pick the right lever in 95% of cases, and these went on to the next stage, where the patterns of activation from their sensorimotor cortex as they pushed the right lever were picked up and transmitted.

Meanwhile ‘decoder’ rats had been fitted with similar brain implants and trained to respond to a series of impulses delivered in the same sensorimotor area by pressing the right lever. In this training stage they were not receiving impulses from another rat, just an artificially produced stream of blips. This phase of training apparently took about 45 days.

Finally, the two rats were joined up and lo: the impulses recorded from the ‘encoder’ rat, once delivered to the brain of the ‘decoder’ rat, enabled it to hit the right lever with up to 70% accuracy (you could get 50% from random pressing, of course, but it’s still a significant improvement in performance). In one pointless variation, the encoder and decoder rats were in different labs thousands of miles apart; so what? Are we still amazed that electrical signals can be transmitted over long distances?
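How much better than chance is 70%, really? The answer depends on the number of trials, which I haven't taken from the paper – the figure of 50 below is purely an illustrative assumption – but a quick binomial tail calculation shows that a 70% hit rate over even a modest session is very unlikely to arise from random pressing:

```python
# Back-of-the-envelope check (not from the paper): with two levers,
# random pressing succeeds with probability 0.5. How unlikely is a
# 70% hit rate by chance alone? The trial count of 50 is a
# hypothetical figure chosen only for illustration.
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_trials = 50           # hypothetical session length
n_correct = 35          # 70% of 50
p_value = binom_tail(n_trials, n_correct)
print(f"P(>= {n_correct}/{n_trials} correct by chance) = {p_value:.4f}")
```

So, statistically, the effect is real enough; the question is what it means.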

A couple of other aspects of the experiments seem odd to me. First, they did not have a control experiment where the signals went to a different part of the decoder rat’s cortex, so we can’t tell whether the use of the particular areas they settled on was significant. Second, they provided the encoder rat with incentives: it got water only when the decoder rat got it right. What was that meant to achieve, apart from making the encoder rat’s life slightly worse than it already was? In essence, it encourages the encoder rat to develop effective signals: to step up the clarity and strength of the neural signals it was sending out. That may have helped to make the experiment a success, but it also detracts from any claim that what was being sent was normal neural activity.

So, what have we got, overall? Really, nothing to speak of. We’re encouraged to think that the decoder rat was hearing the encoder’s thoughts, or feeling its inclinations, or something of the kind, but there’s clearly a much simpler explanation baked into the experiment: it was simply responding to electric impulses of a kind that it had already been trained to respond to (for 45 days, which must be the rat equivalent of post-doctorate levels of lever-pushing knowledge).

Given the lengthy training and selection of the rats, I don’t think a 70% success rate is that amazing: it seems clear that they could have got a better rate if, instead of inserting precise neural connections, they had simply clipped an electrode to the decoder rat’s left ear.

There’s no evidence here of direct transmission of cognitive content: the simple information transferred is delivered via the association already trained into the ‘decoder’. There’s no decoding, and no communication in rat mentalese.

The discussion in the paper ends with the following remarkable proposition.

 …in theory, channel accuracy can be increased if instead of a dyad a whole grid of multiple reciprocally interconnected brains are employed. Such a computing structure could define the first example of an organic computer capable of solving heuristic problems that would be deemed non-computable by a general Turing-machine. Future works will elucidate in detail the characteristics of this multi-brain system, its computational capabilities, and how it compares to other non-Turing computational architectures…

Well, I’m boggling now.

10 Comments

  1. Eric Thomson says:

    Note I am in the Nicolelis lab, so truth in advertising: I am biased. I’m splitting this into two posts.

    In terms of technical details: you focused on the first task mentioned in the paper, in which M1 was recorded in the encoder rat as it pushed a lever, and this determined which signal was sent to the decoder rat. If that were the whole paper, it would indeed be a lot less interesting than the one published.

    In the second task, things were more intriguing: the encoder rat had to perform a whisker-based tactile discrimination task while researchers recorded from the whisker representation in S1. The signal from S1 determined how many spikes were sent to the decoder rat, whose behavior had to match the encoder rat. So the decoder rat’s brain had indirect access to the tactile information being processed in S1 in the encoder rat’s brain (more on this below).

    In terms of the controls you suggested… First, you suggested adding stimulating electrodes to another bit of cortex to see if that would work. That is fine, and interesting, but not really a control per se because they aren’t claiming that it couldn’t be done with another piece of cortex. Indeed, based on my work with infrared-discriminating rats I would bet you could put the electrodes in any sensory area (and pretty much any place in the CNS) and it would work.

    Rather, the experiments simply are an implementation of the first real-time direct brain-to-brain interface. Obviously this is going to kindle people’s imagination, and the popular press has been going overboard with references to Borg minds and such. But the technical achievements are nontrivial, and obviously this is just the first step involving ensuring the basic hardware works and that this kind of hookup is feasible.

    Note one reason we might want to have these signals travel around the world is to make sure that the delays aren’t so long as to kill effective communication between subjects/machines, which is something we would be concerned about if we wanted to help train a paraplegic in Brazil using software contained at Duke. These are very real concerns (especially as Miguel has labs distributed all over the world each contributing different elements to different projects).

  2. Eric Thomson says:

    In reference to giving the encoder rat extra water if the decoder matched its behavior, you wrote:
    “In essence, it encourages the encoder rat to develop effective signals: to step up the clarity and strength of the neural signals it was sending out. That may have helped to make the experiment a success, but it also detracts from any claim that what was being sent was normal neural activity.”

    But their claim isn’t that the neural activity was “normal”. Rather, as you mentioned, when the behavior of the animals was coupled, this actually changed the activity in the encoder rat in a way that optimized the transaction between the rats. This was one of the key claims in the paper! To me it seems one of the more interesting, and frankly surprising, aspects of the paper. I am not sure how you think this goes against the paper.

    (Note also that the encoder rat got reward potentially twice: once for correct performance on the tactile discrimination task, and again when the decoder matched its behavior.)

    As far as the last paragraph of the paper goes? It’s the final paragraph of the discussion section, so I wouldn’t bet a grant on it. I plead Wittgenstein’s 7th.

    Finally, getting back to whether the second rat “feels” what the first rat feels, or whatever. Obviously that would go beyond the data, and beyond what the authors claim in the paper. However, in a technical sense, the decoder rat is gaining information about what is going on in the tactile environment of the first rat, and this is how it is able to perform above chance in the task. The decoder doesn’t know that it is getting such information. For all it cares, the first rat could have electrodes in auditory cortex and could be discriminating B from F sharp. Even in that case, the decoder rat would still likely feel the stimulation as a tickle on its face (though I should say in my work with infrared-sensing rats, it is much harder to interpret how they interpret microstimulation in response to novel information embedded in S1, as I discussed in the previous link).

    We should forgive the popular press for taking this basic technical achievement and running with it to Star Trek. Obviously we need to keep clear on the difference between the actual study, on the one hand, and extrapolations to 200 years in the future, on the other. But that doesn’t mean it isn’t fun to engage in such flights of fancy. A lot of the people offering alternate viewpoints in the press seem to be forgetting this important distinction, and coming off as a bit curmudgeonly.

  3. Peter says:

    Eric,

    Many thanks for these comments (and I hope I’m not being too curmudgeonly).

    The nub of the matter is that we’re sort of led to think that there’s a direct transfer of mental content going on here. Simply transferring information from one brain to another via signals is not particularly remarkable; it’s the idea that the decoder rat is decoding the mental states of the encoder rat that makes the experiment seem so extraordinary. But if we can plug the electrodes into any part of the cortex with equal success, it seems clear that the mental states of the decoder don’t particularly resemble those of the encoder. Moreover, if the encoder has been incentivised to develop patterns of neuron firing that are designed for signalling to the decoder rather than simply for pressing the lever, we seem to have messed up the very neural content we were ‘supposed’ to be transmitting.

    Fair enough, we have got a direct transfer from brain to brain in the sense that we’ve bypassed the muscles at one end and the sensory organs at the other. It does look, though, as if we could have done better by feeding signals into one of the rat’s existing inputs (through the electrode on its ear, or a light into its eye): that way we might have got the kind of 95% success the encoders achieved. Putting the signals into the brain – one of the few parts of the rat that doesn’t have sensory inputs – seems perverse, requiring an heroic reorganising effort on the part of the rat’s brain to make it work.

    So far as the grid of brains solving non-computable problems goes, I grant that some speculation is allowed – but it does make it a little harder for the lab to claim that the press has sensationalised its paper!

  4. Eric Thomson says:

    Peter: I think your comments are good at pointing out the need to separate spin from results with this paper. In this case, the distance between the two is startling.

    That said, I agree the final paragraph is a bit far out and contentious, but the media hasn’t really focused on that (the final para says nothing about sharing mental states, but is a speculation about computational consequences of tethering the brains of many rats together). This isn’t presented as a result, but as a speculative final musing. It isn’t something I would personally assert, even outside of a publication, but I think the media hasn’t picked up on that too much. However, I tend to avoid the media and I could be wrong about that. I’m responding here b/c I am a follower of your site.

    In terms of more specific claims: you are right nobody can claim that the decoder rat was sharing experiences with the encoder rat. The paper doesn’t make such outlandish (and likely false) claims. It simply talks about sharing information, which is noncontroversial.

    Regarding the change in activity in the encoder rat after their behavior was coupled: this opens up lots of questions about its significance, origins, and interpretation. The paper was largely descriptive, conveying what happened behaviorally and neurally as rats learned the task.

    One possible interpretation, that you have offered, is that the ‘content’ of the S1 signal in the encoder has fundamentally changed. This is interesting, but I have my doubts.

    By analogy, let’s say your visual cortex develops a less noisy response to oriented bars when you attend to oriented bars. Is that a shift in the content? Or is it one way V1 can increase the accuracy of its representation of orientation?

    Perhaps something similar is happening here. It’s not like the encoder has any idea why she is getting this second reward. The only thing she knows is that she needs to perform this tactile discrimination task. At least while performing the task, that is her world: “I need to determine how far things are from my face, using my whiskers.” If she represents some new variable, as you suggest, then either she should get worse at the task, or the decoder rat should get worse at the task, because performance of both rats depends on the encoder’s brain getting that one variable right.

    That the recordings are from the whisker region of primary somatosensory cortex (versus amygdala or whatever) suggests that the change is in the accuracy of the sensory representation in the encoder, not a purely attentional/motivational/arousal signal. Likely it is an interaction between one of these effects and the tactile representation in S1 (again, see the analogy with V1).

    Finally, obviously you are right that you might get better performance if you linked something else to the rat. But the goal was to link two brains, not to maximize performance of the decoder rat by any means necessary. If that was the goal, they could have just had a light turn on in the decoder rat’s chamber that depended on the behavior of the first rat. Or allowed the decoder to simply watch the encoder rat. :)

    Note also I realize this is an early preliminary study, and I am not trying to argue that it is flawless or answers all the questions it opened up. I’m just throwing in my 2¢ like everyone else here, and admittedly playing devil’s advocate and trying to present it in a more positive light than the original post.

    I am very busy right now so may not be able to respond very frequently after today, but will try to check in periodically.

  5. Peter says:

    Thanks, Eric – much appreciated.

  6. Vicente says:

    Eric, a nice piece of work. A few questions if I may:

    In the paper it is clearly claimed that there is an information transfer, and from the 20% bias, there actually seems to be. Any idea of how that information is coded?
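    (A rough way to put a number on that bias, in bits – my own sketch, not from the paper: if we treat the link as a binary symmetric channel, a 70% match rate between encoder and decoder choices transfers at most 1 − H(0.3) bits per trial, where H is the binary entropy function.)

    ```python
    # Rough sketch (my numbers, not the paper's): model the link as a
    # binary symmetric channel with crossover probability 0.3, i.e. the
    # decoder matches the encoder's choice on 70% of trials. The
    # information transferred per trial is then 1 - H(0.3) bits.
    from math import log2

    def binary_entropy(p):
        """Binary entropy H(p) in bits."""
        if p in (0.0, 1.0):
            return 0.0
        return -p * log2(p) - (1 - p) * log2(1 - p)

    accuracy = 0.70
    bits_per_trial = 1.0 - binary_entropy(1 - accuracy)
    print(f"{bits_per_trial:.3f} bits per trial")  # roughly 0.12 bits
    ```

    So whatever the coding scheme, the net channel is a thin one: a fraction of a bit per trial.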

    Peter claims that it is “the idea that the decoder rat is decoding the mental states of the encoder rat that makes the experiment seem so extraordinary”, but:

    What do we mean by this? In this case probably the mental states are subconscious. I suppose that the decoder rat just feels some kind of hunch, or impulse, or intuition, or instinctive feeling, that biases its behaviour, improving the success rate. It is not a “conscious decoding”. Probably it is like somebody suffering blindsight skipping obstacles, following some instinct, not knowing why and how.

    How is it that the stimulus area is irrelevant? Information seems to reach the decision centres irrespective of the injection area, that is surprising to me.

    Rats don’t speak, so we cannot ask them what they feel.

    I believe that in order to learn from these experiments to a full extent we need human brains (I don’t mean the researchers’ ones, to think). Of course if your fiscal-cliff rewards (drops of water…) remain, don’t expect many volunteers.

    I wish I could have a look at this “technology” evolution in 200 years: bio-internet.

  7. Eric Thomson says:

    Vicente: I didn’t mean that the area recorded in the encoding rat is irrelevant. If she were doing an auditory discrimination task, we would want to record auditory cortex. I meant that the location of stimulation in the decoder rat probably isn’t that important, that she could learn the association between behavior and stimulation levels in an arbitrary region of cortex.

    In terms of how the information is coded in the encoder rat, that is something we have looked at quite a bit in the lab, but which we need to do a lot more work on. It isn’t simple, is the answer. E.g., see Krupa’s paper “Layer-specific somatosensory cortical activation during active tactile discrimination”.

  8. Mikhail Lebedev says:

    “So, what have we got, overall? Really, nothing to speak of. We’re encouraged to think that the decoder rat was hearing the encoder’s thoughts, or feeling its inclinations, or something of the kind, but there’s clearly a much simpler explanation baked into the experiment: it was simply responding to electric impulses of a kind that it had already been trained to respond to (for 45 days, which must be the rat equivalent of post-doctorate levels of lever-pushing knowledge).”

    To be more precise, the decoder rat responded to electrical stimuli that reflected the encoder’s thoughts and inclinations. What’s wrong with this? You want the decoder to associate its sensations with the encoder more realistically? This can be easily done within the same framework. I am sure you can easily think of such an experiment. By the way, 99% of brain operations are unconscious. Neurons simply respond to electrical stimuli without understanding where they come from. And so what?

  9. Vicente says:

    This device (in press) from Brown U. looks really promising for carrying out more realistic experiments with more complex brains:

    http://nurmikko.engin.brown.edu/?q=node/1

    An externally head-mounted wireless neural recording device for laboratory animal research and possible human clinical use

    M. Yin, H. Li, C. Bull, D. A. Borton, J. Aceros, L. Larson, and A. V. Nurmikko

    I wonder if neural activity recorded during high cognitive operations (e.g. playing chess) could be transferred/induced to another subject.

  10. Mikhail Lebedev says:

    Anybody, including the Brown University team, is welcome to build more advanced brain-to-brain interfaces. Actually, we have our own multichannel wireless system. High cognitive signals can be extracted from prefrontal cortex and transmitted to a sensory cortex. This way you can, for example, extend the working memory.
