Pontifical neurons

5 August 2005


Some, it seems, believe the neuron has a particularly special role, even more important than holding the memory of Granny. A paper in the Journal of Consciousness Studies by JCW Edwards (mentioned by Steve Esser in a comment here) suggests that consciousness itself is actually a property of single neurons. The theory is clearly one of some stature; it apparently received favourable consideration from William James, and resembles the views of Leibniz – presumably the ones set out in the Monadology.

But why would anyone think that single neurons are independently conscious? In Edwards’s eyes, this is the only likely solution to the notorious binding problem. The binding problem, as you may recall, is the question of how we manage to combine unsynchronised flows of information from our different senses into a smoothly unrolling, multi-modal conscious experience of reality. The suggestion here is that in order to achieve that unity of experience, the relevant bits of information all have to come together in one unit somewhere. A brain, or the cortex, or even a neuronal column, is just too big, it seems, but a single neuron would be just about right. The idea seems to be that a neuron is a real unit in a sense which more composite things, such as brains, can never be. At the same time, however, Edwards doesn't want neurons to be too strongly unified. A traditional view would be that neurons merely add together the total of inward impulses, and fire if that total is greater than a certain value, but Edwards believes they are capable of more complex behaviour.


Stated baldly, the idea that neurons are 'real' units sounds rather dubious - cells are pretty composite things themselves, surely? Edwards seeks to translate arguments from William James into terms more suitable to contemporary physics, but I'm not sure that adds anything to their underlying philosophical plausibility. There are other obvious objections; why would we have a head full of neurons if only one is needed for consciousness? When Edwards addresses this question directly, his answer seems highly unsatisfactory. He tells us that having a large brain is actually no big deal - the behaviour of small-brained bees is vastly more complex than that of cows living peacefully in a field. That seems a bit of a slur on bovine society, but in any case it seems to make big brains even harder to explain. The real answer, implied elsewhere in the paper, seems to be that neurons have a kind of dual role: they do perform all the typical computational stuff, and that function is essential to their awareness of the world, but at the same time they each have a general, independent consciousness. It's as if they're each getting the full picture from the network around them, while at the same time unconsciously carrying out some much simpler processing which contributes to that network.

So, to raise another obvious objection, what happens when the particular neuron which provides our consciousness dies – do we die with it? It was considerations of this kind which led James and others to speak facetiously of the supposed ‘pontifical’ neuron implied by the theory; but Edwards asserts that his system is democratic. A particular group of neurons, probably in the cortex, but perhaps in the thalamus, all have full consciousness. They all independently suppose that they are the person in whose head they reside. It follows that they are all having very similar experiences and thinking very similar thoughts - otherwise, surely, at least some of them, the ones not currently in direct control, would be disconcerted by the mismatch between their thoughts and the actions of their body.


This raises an interesting question - how do we count consciousnesses? If each neuron has its own consciousness, which seems from the inside to be the consciousness of a human being, and if each of these consciousnesses has virtually the same contents in other respects (to the extent that it doesn't matter which of them is actually in the driving seat), what reason have we got for calling the whole package a set of many consciousnesses rather than a single, perhaps slightly fuzzy, one? Presumably the only reason for assuming multiplicity of consciousness is the multiplicity of the corresponding neurons, but that in turn focuses attention on an unclear part of Edwards' theory - what does he suppose the relationship between consciousness and neuron to be?

I can see two probable answers. One is that the activity of the neuron is what 'gives rise' to consciousness; the other is that the consciousness has an actual physical location within or around the neuron. The first position is a difficult one for Edwards to take because - if I've interpreted him correctly - the whole network of neurons is required to give rise to consciousness, and it seems to follow that the whole network is the thing that 'has' the consciousness. The second position is tenable, but seems to me quite implausible. Why should consciousness have a location, any more than the number 5 has a location? We don't suppose that my consciousness of income tax has to be located near the tax office, so why should it be located in my head, either? The position of my eyes, ears, nose and mouth gives the impression that 'I' am located within my skull, but a virtual reality set-up can give the impression I am located on Planet Zarg. Consciousness itself doesn't seem to be the sort of thing that, in itself, has a location in space.

It may be that Edwards has a more subtle conception of the relationship between neuron and consciousness than I realise - but I think some clarification, at least, is required.


The real problem, anyway, is that the theory doesn't actually solve the binding problem at all. That was supposed to be its motivation, but Edwards' theory seems to require that the right impulses from different sensory channels arrive at the neuron (at each neuron, actually) simultaneously. But if we could arrange for that to happen, we should already have solved the binding problem: we'd already have found a way for the brain to pick out which sensory impressions should be put together. Slam-dunking them all into a single neuron seems more like an expression of our triumph than a means of achieving it; and indeed, if we had solved the basic problem, we might ask ourselves whether compressing the results into a single neuron really achieved anything.

