Archive for December, 2009

Picture: correspondent. One of the nice things about doing Conscious Entities is that from time to time people send me links to interesting things: new papers, lectures, or ideas of their own. I regret that I have generally kept this stuff to myself in the past, although it often deserves a mention, so I’ve been thinking about how best to deal with it. I would welcome suggestions, but as an experiment I’ve decided to try occasional round-up posts: so here goes.

Jesús Olmo, to whom many thanks, recently drew my attention to this review of The Ego Tunnel; to PRISMs, Gom Jabbars, and Consciousness, and to the site Conscious Robots.

M.E. Tson has a Brief Explanation of Consciousness.

Mark Muhlestein has a thought experiment Consciousness and 2D Computation: a Curious Conundrum, and has been corresponding with David Chalmers. My own view is as follows.

I think causal relations are the crux of the matter. A computation essentially consists of a series of states of a Turing machine, doesn’t it? Normally each state is caused by the preceding state. Is that an essential feature? I think in the final analysis we’d say no, because the existence of a computation is really a matter of interpretation on the part of the observer. If the different states in the sequence are just written down on sheets of paper, we’d probably still be willing to call it a computation, or at least, we would in one sense. There’s another sense in which I personally wouldn’t: if we read ‘computation’ as meaning an actual run of a given algorithm, or an instantiation of the computation, I think the causal relationships have to be in place.
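To make the distinction concrete, here is a minimal sketch (my own hypothetical example, not anything from Muhlestein or Chalmers) of a Turing machine as a series of configurations, where each configuration is produced from the previous one by a step function. That step function is the causal link at issue: the trace it generates is just the sequence of states, and writing those states down on paper would reproduce the sequence without the causal chain.

```python
# Minimal Turing machine sketch: a unary-increment machine.
# Each configuration (tape, head position, control state) is produced
# from the previous one by step() — the causal link the argument above
# says distinguishes an actual 'run' from a written-out sequence.

def step(config, rules):
    """Produce the next machine configuration from the current one."""
    tape, head, state = config
    symbol = tape.get(head, '_')            # '_' is the blank symbol
    new_symbol, move, new_state = rules[(state, symbol)]
    new_tape = dict(tape)
    new_tape[head] = new_symbol
    return (new_tape, head + (1 if move == 'R' else -1), new_state)

# Rules for appending one '1' to a block of '1's, then halting.
rules = {
    ('scan', '1'): ('1', 'R', 'scan'),      # move right over the block
    ('scan', '_'): ('1', 'R', 'halt'),      # write one more '1', halt
}

config = ({0: '1', 1: '1'}, 0, 'scan')      # tape reads '11', head at 0
trace = [config]
while config[2] != 'halt':
    config = step(config, rules)            # each state caused by the last
    trace.append(config)

print(len(trace))                           # → 4 configurations
print(sorted(config[0].items()))            # → [(0, '1'), (1, '1'), (2, '1')]
```

In the second sense of ‘computation’ mentioned above, only a process in which each entry of `trace` was actually generated by `step` from its predecessor counts as a run; the list of tuples by itself does not.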

Now this would be even truer in the case of mental operations leading to consciousness. The causal relations in a Turing machine are to some degree artificial: the fact that we can program them in is really the point. In the human brain, by contrast, the causal relations are direct and arise from the physical constitution of the brain. To exhibit the relevant series of states (even if we assume consciousness in the brain is a matter of discrete states, which actually seems rather unlikely) would not be enough – they have to have caused each other directly in the right way for this to be an actual ‘run’ of the consciousness faculty.

It follows that your projected lights don’t give rise to a consciousness, or perhaps even to a computation. Does this mean I think zombies of some kind are possible? No, because the interesting kind of zombie is physically identical with a real person, and the projected lights are significantly different from the occurrence of the actual run of the computation. Real zombies remain impossible, and all we’re left with is a kind of puppet.

Readers of my post earlier this year about Sam Coleman’s views may be interested to see the nice comments he has provided.

You can send me links to interesting stuff at:

Contact.

Picture: Stanislas Dehaene. Edge has an interesting talk by Stanislas Dehaene. He and his team, using a range of tools, have identified ‘signatures’ of awareness; marked changes in activity in certain brain regions which accompany awareness of a particular stimulus. They made clever use of the phenomenon of ‘masking’, in which the perception of a word can be obliterated if it follows too rapidly after the presentation of an earlier one. By adjusting the relevant delay, the team could compare the effect of a stimulus which never reached consciousness with one that did. Using this and similar techniques they identified a number of clear indications of conscious awareness: increased activity in the early stages of processing and activity in new regions, including the prefrontal cortex and the inferior parietal cortex. It appears that this is accompanied by a ‘P3 wave’ which is quite easy to detect even with nothing more sophisticated than electrodes on the scalp. Interestingly, it seems that the difference between a stimulus which does not make it into consciousness and one which does emerges quite late, after as much as a quarter of a second of processing.

No-one, I suspect, is going to be amazed by the news that conscious awareness is accompanied by distinctive patterns of brain activity; but identifying the actual ‘signatures’ has direct clinical relevance in cases of coma and apparent persistent vegetative state. In principle Dehaene’s research should allow conscious reactions continuing in a paralysed patient to be identified; this possibility is being actively pursued.

More speculatively, and perhaps of deeper theoretical interest, Dehaene puts forward a theory of consciousness as a global neuronal workspace, another variation on the global workspace theory of Bernard Baars (an idea which keeps being picked up by others, which must suggest that it has something going for it). Dehaene offers the view that a particular function of the workspace is to allow inputs to hang around for an extended period instead of dissipating. Among other benefits, this allows the construction of chains of processing operations, something Dehaene likens to a Turing machine, though it sounds a little messier than that to me. Further ingenious experiments have lent support to this idea; the researchers were able to contrast subjects’ chaining ability when information was supplied subliminally or consciously (this may sound odd, but subjects can perform at better-than-chance levels even with subliminal stimuli).

Dehaene says that he is dealing only with one variety of consciousness – in the main it’s awareness, which in some respects is the basement level compared to the more high-flown self-reflective versions. But in passing the talk does clarify a question which has sometimes troubled me in the past about global workspace theories – why should they involve consciousness at all? It seems easy to understand that the brain might benefit from a kind of clearing house where information from different sources is shared – but couldn’t that happen, as it were, in the dark? What does the magic ingredient of consciousness add to the process?

Well, being in the global workspace means being accessible to several different systems (no intention here to commit to any particular view about modularity); and one of those systems is the vocal reporting system. So as a natural consequence of being in the workspace, inputs become things we can vocally report, things we can talk about. Things we can talk about are surely objects of consciousness in some quite high-level sense.
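The broadcast idea can be caricatured in a few lines of code. This is a toy sketch of my own, with every name hypothetical (it is not Dehaene’s or Baars’ model): content placed in the workspace is shared with every registered consumer system at once, and because one of those consumers is a verbal-report system, whatever enters the workspace thereby becomes something that can be talked about.

```python
# Toy sketch of global-workspace broadcast (hypothetical names throughout):
# an input entering the workspace is made available to all registered
# systems simultaneously — memory, planning, vocal reporting, and so on.

class GlobalWorkspace:
    def __init__(self):
        self.consumers = []                 # the systems with access

    def register(self, name, handler):
        self.consumers.append((name, handler))

    def broadcast(self, content):
        """Share workspace content with every registered system."""
        return {name: handler(content) for name, handler in self.consumers}

ws = GlobalWorkspace()
ws.register('memory', lambda c: f"stored '{c}'")
ws.register('report', lambda c: f"I am seeing {c}")   # vocal reporting

results = ws.broadcast('a red square')
print(results['report'])                    # → I am seeing a red square
```

The point of the sketch is only that reportability falls out of the architecture for free: nothing extra has to be done to the content to make it reportable beyond placing it in the workspace.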

Dehaene does not go down this path, but I wondered how far we could take it; is there a plausible explanation of phenomenal consciousness in terms of a global workspace? If we followed the same pattern of argument we used above, we would be looking to say that conscious experiences acquired qualia because being in the workspace made them available to the qualic system, whatever that might be. I think some people, those who tend to want to reduce qualia to flags or badges that give inputs a special weight, might find this kind of perspective congenial, but it doesn’t appeal all that much to me. I would prefer an argument that related the appearance of qualia to a sensory input’s being available to a global collection of sensory and other systems; something to do with resonances across modalities; but I happily confess I have no clear idea of how or exactly why that would work either.