Rosehip Neurons of Consciousness

A new type of neuron is a remarkable discovery; finding one in the human cortex makes it particularly interesting, and the further fact that it cannot be found in mouse brains and might well turn out to be uniquely human – that is legitimately amazing. A paper (preprint here) in Nature Neuroscience announces the discovery of ‘rosehip neurons’, named for their large, “rosehip”-like axonal boutons.

There has already been some speculation that rosehip neurons might have a special role in distinctive aspects of human cognition, especially human consciousness, but at this stage no-one really has much idea. Rosehip neurons are inhibitory, but inhibition is itself a key function in cortical circuits and could easily play a big role in consciousness. Most of the traffic between the two hemispheres of the human brain is inhibitory, for example, possibly a matter of the right hemisphere, with its broader view, regularly ‘waking up’ the left out of excessively focused activity.

We probably shouldn’t, at any rate, expect an immediate explanatory breakthrough. One comparison which may help to set the context is the case of spindle neurons. First identified in 1929, these are a notable feature of the human cortex and at first appeared to occur only in the great apes – they, or closely analogous neurons, have since been spotted in a few other animals with large brains, such as elephants and dolphins. I believe we still really don’t know why they’re there or what their exact role is, though a good guess seems to be that it might be something to do with making larger brains work efficiently.

Another warning against over-optimism might come from remembering the immense excitement about mirror neurons some years ago. The fact that they respond to a given activity both when it is performed by the subject and when it is observed being performed by others seemed to some to hold out a possible key to empathy, theory of mind, and even more. Alas, to date that hope hasn’t come to anything much, and in retrospect it looks as if rather too much significance was read into the discovery.

The distinctive presence of rosehip neurons is definitely a blow to the usefulness of rodents as experimental animals for the exploration of the human brain, and it’s another item to add to the list of things that brain simulators probably ought to be taking into account, if only we could work out how. That touches on what might be the most basic explanatory difficulty here, namely that you cannot work out the significance of a new component in a machine whose workings you don’t really understand to begin with.

There might indeed be a deeper suspicion that a new kind of neuron is simply the wrong kind of thing to explain consciousness. We’ve learnt in recent years that the complexity of a single neuron is very much not to be under-rated; they are certainly more than the simple switching devices they have at times been portrayed as, and they may carry out quite complex processing. But even so, there is surely a limit to how much clarification of phenomenology we can expect a single cell to yield, in the absence of the kind of wider functional theory we still don’t really have.

Yet what better pointer to such a wider functional theory could we have than an item unique to humans with a role which we can hope to clarify through empirical investigation? Reverse engineering is a tricky skill, but if we can ask ourselves the right questions maybe that longed-for ‘Aha!’ moment is coming closer after all?

 

Mind Control Dust

New ways to monitor – and control – neurons are about to become practical. A paper in Neuron by Seo et al describes how researchers at Berkeley created “ultrasonic neural dust” that allowed activity in muscles and nerves to be monitored without traditional electrodes. The technique has not been applied to the brain and has been used only for monitoring, not for control, but the potential is clear, and this short piece in Aeon reviewing the development of comparable techniques concludes that it is time to take these emergent technologies seriously. The diagnostic and therapeutic potential of being able to directly monitor and intervene in the activity of nerves and systems all over the body is really quite mind-boggling; in principle it could replace and enhance all sorts of drug treatments and other interventions in immensely beneficial ways.

From a research point of view the possibility of getting single-neuron level data on an ongoing basis could leap right over the limitations of current scanning technology and tell us, really for the first time, exactly what is going on in the brain. It’s very likely that unexpected and informative discoveries would follow. Some caution is of course in order; for one thing I imagine placement techniques are going to raise big challenges. Throwing a handful of dust into a muscle to pick up its activity is one thing; placing a single mote in a particular neuron is another. If we succeed with that, I wonder whether we will actually be able to cope with the vast new sets of data that could be generated.

Still, the way ahead seems clear enough to justify a bit of speculation about mind control. The ethics are clearly problematic, but let’s start with a broad look at the practicalities. Could we control someone with neural dust?

The crudest techniques are going to be the easiest to pull off. Incapacitating or paralysing someone looks pretty achievable; it could be a technique for confining prisoners (step beyond this line and your leg muscles seize up) or perhaps serve as a secret fall-back disabling mechanism inserted into suspects and released prisoners. If they turn up in a threatening role later, you can just switch them off. Killing someone by stopping their heart looks achievable, and the threat of doing so could in theory be used to control hostages or perhaps create ‘human drones’ (I apologise for the repellent nature of some of these ideas; forewarned is forearmed).

Although reading off thoughts is probably too ambitious for the foreseeable future, we might be able to monitor the brain’s states of arousal and perhaps even identify the recognition of key objects or people. I cannot see any obvious reason why remote monitoring of neural dust implants couldn’t pick up a kind of video feed from the optic nerve. People might want that done to themselves as a superior substitute for Google Glass and the like; indeed neural dust seems to offer new scope for the kind of direct brain control of technology that many people seem keen to have. Myself, I think the output systems already built into human beings – hands, voice – are hard to beat.

Taking direct and outright control of someone’s muscles and making a kind of puppet of them seems likely to be difficult; making a muscle twitch is a long way from the kind of fluid and co-ordinated control required for effective movement. Devising the torrent of neural signals required looks like a task which is computationally feasible in principle but highly demanding; you would surely look to deep learning techniques, which in a sense were created for exactly this kind of task, since they began with the imitation of neural networks. A basic approach that might be achievable relatively early would be to record stereotyped muscular routines and then play them back like extended reflexes, though that wouldn’t work well for many basic tasks, like walking, that require a lot of feedback.
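As a toy sketch of that last, ‘extended reflex’ idea (and only a sketch: read_muscle_activity and stimulate below are invented placeholders for whatever recording and stimulation interface such implants might eventually expose, and the ‘recording’ here is just simulated data), the record-then-replay approach might look something like this:

```python
import math
import time

# Invented placeholders -- nothing here talks to real hardware.
def read_muscle_activity(t, n_channels):
    """Simulated activation level for each channel at time t (arbitrary units)."""
    return [0.5 + 0.5 * math.sin(2 * math.pi * (t + 0.1 * i)) for i in range(n_channels)]

def stimulate(channel, level):
    """Pretend to drive one muscle channel at the given activation level."""
    print(f"channel {channel}: drive at {level:.2f}")

def record_routine(n_channels=4, duration_s=1.0, sample_hz=50):
    """Record a stereotyped movement as a time series of channel activations."""
    return [read_muscle_activity(k / sample_hz, n_channels)
            for k in range(int(duration_s * sample_hz))]

def play_back(frames, sample_hz=50):
    """Replay a recorded routine open-loop, like an extended reflex.

    There is no sensory feedback here, which is exactly why tasks such as
    walking, which need continuous correction, would not work well this way.
    """
    for frame in frames:
        for channel, level in enumerate(frame):
            stimulate(channel, level)
        time.sleep(1.0 / sample_hz)

if __name__ == "__main__":
    routine = record_routine()
    play_back(routine)
```

Anything more fluid than this kind of canned playback would need the closed-loop, learned mapping from intended movement to stimulation that the paragraph above gestures at.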

Could we venture further and control someone’s own attitudes and thoughts? Again the unambitious and destructive techniques are the easiest; making someone deranged or deluded is probably the most straightforward mental change to bring about. Giving them bad dreams seems likely to be a feasible option. Perhaps we could simulate drunkenness – or turn it off; I suspect that would need massive but non-specific intervention, so it might be relatively achievable. Simulation of the effects of other drugs might be viable on similar terms, whether to impair performance, enhance it, or purely for pleasure. We might perhaps be able to stimulate paranoia, exhilaration, religiosity or depression, albeit without fully predictable results.

Indirect manipulation is the next easiest option for mind control; we might arrange, for example, to have a flood of good feelings, or of fear and aversion, every time particular political candidates are seen; it wouldn’t force the subject to vote a particular way but it might be heavily influential. I’m not sure it’s a watertight technique, as the human mind seems easily able to hold contradictory attitudes and sentiments, and widespread empirical evidence suggests that many people must be able to go on voting for someone who appears repellent.

Could we, finally, take over the person themselves, feeding in whatever thoughts we chose? I rather doubt that this is ever going to be possible. True, our mental selves must ultimately arise from the firing of neurons, and ex hypothesi we can control all those neurons; but the chances are there is no universal encoding of thoughts; we may not even think the same thought with the same neurons a second time around. The fallback of recording and playing back the activity of a broad swathe of brain tissue might work up to a point if you could be sure that you had included the relevant bits of neural activity, but the results, even if successful, would be more like some kind of malign mental episode than a smooth takeover of the personality. Easier, I suspect, to erase a person than to control one in this strong sense. As Hamlet pointed out, knowing where the holes on a flute are doesn’t make you able to play a tune. I can hardly put it better than Shakespeare…

Why, look you now, how unworthy a thing you make of
me! You would play upon me; you would seem to know
my stops; you would pluck out the heart of my
mystery; you would sound me from my lowest note to
the top of my compass: and there is much music,
excellent voice, in this little organ; yet cannot
you make it speak. ‘Sblood, do you think I am
easier to be played on than a pipe? Call me what
instrument you will, though you can fret me, yet you
cannot play upon me.

Beyond Libet

Libet’s famous experiments are among the most interesting and challenging in neuroscience; now they’ve been taken further. A paper by Fried, Mukamel and Kreiman in Neuron (with a very useful overview by Patrick Haggard) reports on experiments using a number of epilepsy patients where it was ethically possible to implant electrodes and hence to read off the activity of individual neurons, giving a vastly more precise picture than anything achievable by other means. In other respects the experiments broadly followed the design of Libet’s own, using a similar clock-face approach to measure the time when subjects felt they decided to press a button. Libet discovered that a Readiness Potential (RP) could be detected as much as half a second before the subject was conscious of deciding to move; the new experiments show that data from a population of 250 neurons in the SMA (the Supplementary Motor Area) were sufficient to predict the subject’s decision 700 ms in advance of the subject’s own awareness, with 80% accuracy.
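To make the idea of predicting a decision from population activity concrete, here is a deliberately simplified decoding exercise. It uses synthetic Poisson spike counts and an off-the-shelf logistic-regression classifier, which is not the study’s actual analysis pipeline; the point is only the shape of such an analysis, namely that if some neurons ramp up before a self-initiated movement, a classifier can tell ‘about to move’ windows from baseline windows well before the reported moment of decision.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic spike counts from a small population, in two kinds of window:
# "baseline" (well before any movement) and "pre-movement" (the few hundred
# milliseconds leading up to a self-initiated button press). A handful of
# neurons ramp up modestly before movement; the rest are uninformative.
n_trials, n_neurons = 200, 50
baseline_rate = 5.0                 # mean spikes per window
ramp = np.zeros(n_neurons)
ramp[:10] = 2.0                     # ten neurons carry the pre-movement signal

baseline = rng.poisson(baseline_rate, size=(n_trials, n_neurons))
pre_move = rng.poisson(baseline_rate + ramp, size=(n_trials, n_neurons))

X = np.vstack([baseline, pre_move]).astype(float)
y = np.concatenate([np.zeros(n_trials), np.ones(n_trials)])

# Cross-validated accuracy of a plain logistic-regression decoder: how well
# "about to move" can be read out from population activity in that window.
decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, X, y, cv=5).mean()
print(f"cross-validated decoding accuracy: {accuracy:.2f}")
```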

The more detailed picture which these experiments provide helps clarify some points about the relationship between pre-SMA and SMA proper, and suggests that the sense of decision reported by subjects is actually the point at which a growing decision starts to be converted into action, rather than the beginning of the decision-forming process, which stretches back further. This may help to explain the results from fMRI studies which have found the precursors of a decision much earlier than 500 ms beforehand. There are also indications that a lot of the activity in these areas might be more concerned with suppressing possible actions than initiating them – a finding which harmonises nicely with Libet’s own idea of ‘free won’t’ – that we might not be able to control the formation of impulses to act, but could still suppress them when we wanted.
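The ‘growing decision’ reading lends itself to a simple accumulator picture: some urge or evidence signal drifts noisily upward, the reported moment of deciding corresponds to that signal crossing a threshold, and decodable build-up exists well before the crossing. The toy simulation below is my own illustration of that picture, not anything taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_trial(drift=0.8, noise=0.5, threshold=1.0, dt=0.001, t_max=5.0):
    """Noisy accumulation toward a threshold.

    Returns the trajectory and the time of threshold crossing, which stands
    in here for the reported moment of 'deciding' to move.
    """
    steps = int(t_max / dt)
    x = np.zeros(steps)
    for k in range(1, steps):
        x[k] = x[k - 1] + drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        if x[k] >= threshold:
            return x[:k + 1], k * dt
    return x, None      # threshold never reached within t_max

trajectory, t_report = simulate_trial()
if t_report is not None:
    # The build-up is typically already under way hundreds of milliseconds
    # before the crossing, which is why a decoder looking at this signal can
    # "predict" the decision before the subject reports having made it.
    k_early = max(0, int((t_report - 0.7) / 0.001))
    print(f"threshold crossed (reported decision) at {t_report:.2f} s")
    print(f"accumulated signal 700 ms earlier: {trajectory[k_early]:.2f}")
```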

For us, though, the main point of the experiments is that they appear to provide a strong vindication of Libet and make it clear that we have to engage with his finding that our decisions are made well before we think we’re making them.

What are we to make of it all then? I’m inclined to think that the easiest and most acceptable way of interpreting the results is to note that making a decision and being aware of having made a decision are two different things (and being able to report the fact may be yet a third). On this view we first make up our minds; then the process of becoming aware of having done so naturally takes some neural processing of its own, and hence arrives a few hundred milliseconds later.

That would be fine, except that we strongly feel that our decisions flow from the conscious process, that the feelings we are aware of, and could articulate aloud if we chose, are actually decisive. Suppose I am deciding which house to buy: house A involves a longer commute while house B is in a less attractive area. Surely I would go through something like an internal argument or assessment, totting up the pros and cons, and it is this forensic process in internal consciousness which causally determines what I do? Otherwise why do I spend any time thinking about it at all – surely it’s the internal discussion that takes time?

Well, there is another way to read the process: perhaps I hold the two possibilities in mind in turn; perhaps I imagine myself on the long daily journey or staring at the unlovely factory wall. Which makes me feel worse? Eventually I get a sense of where I would be happiest, perhaps with a feeling of settling on one alternative and so of what I intend to do. On this view the explicitly conscious part of my mind is merely displaying options and waiting for some other, feeling part to send back its implicit message. The talky, explicit part of consciousness isn’t really making the decision at all, though it (or should I say ‘I’?) takes responsibility for it and is happy to offer explanations.

Perhaps both processes are involved in different decisions to different degrees. Some purely rational decisions may indeed happen in the explicit part of the mind, but in others – and Libet’s examples would be in this category – things have to feel right. The talky part of me may choose to hold up particular options and may try to nudge things one way or another, but it waits for the silent part to plump.

Is that plausible? I’m not sure. The willingness of the talky part to take responsibility for actions it didn’t decide on, and even to confect and confabulate spurious rationales, is very well established (albeit typically in cases with brain lesions), but introspectively I don’t like the idea of two agents being at work; I’d prefer it to be one agent using two approaches or two sets of tools – but I’m not sure that does the job of accounting for the delay which was the problem in the first place…

(Thanks to Dale Roberts!)