Chaotic consciousness

An interesting New Scientist piece recently reviewed research suggesting that chaos plays an important part in the way the brain functions. More specifically, the suggestion is that the brain operates ‘on the edge of chaos’, in self-organised criticality: sometimes it runs in ways which are predictable at a macro level, more or less like a conventional machine, but at times it also goes into chaotic states. The behaviour of the system in these states is still fully deterministic in a wholly traditional, classical way, but depends so exquisitely on the fine detail of the starting state that in practice it is unpredictable. The analogy offered here is a growing pile of sand: you can’t tell exactly when it will suddenly go through a state shift – a collapse – although over a long period the numbers of large and small collapses are amenable to statistical treatment (actually, I have to say I’ve never noticed piles of sand behaving in this interesting way, but that just shows what a poor observer I am).
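Real sand may not cooperate, but the statistics are easy to reproduce in simulation. Below is a minimal sketch of the classic Bak–Tang–Wiesenfeld sandpile model – my own illustration, not anything from the article, with grid size and run length picked arbitrarily. Grains drop one at a time; any cell holding four grains topples onto its neighbours, sometimes setting off a cascade; tallied over a long run, small avalanches vastly outnumber large ones, in roughly the power-law fashion described.

```python
# Minimal Bak-Tang-Wiesenfeld sandpile (illustrative parameters).
import random
from collections import Counter

N = 20                                   # grid side length
grid = [[0] * N for _ in range(N)]

def topple(i, j):
    """Relax cell (i, j); return the number of topplings it triggers."""
    count, stack = 0, [(i, j)]
    while stack:
        x, y = stack.pop()
        if grid[x][y] < 4:
            continue
        grid[x][y] -= 4                  # the cell sheds four grains...
        count += 1
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < N and 0 <= ny < N:   # grains at the edge fall off
                grid[nx][ny] += 1        # ...one to each neighbour
                if grid[nx][ny] >= 4:
                    stack.append((nx, ny))
    return count

sizes = Counter()
for _ in range(100_000):                 # drop grains at random spots
    i, j = random.randrange(N), random.randrange(N)
    grid[i][j] += 1
    sizes[topple(i, j)] += 1

for s in (1, 2, 4, 8, 16, 32, 64):       # many small collapses, few big ones
    print(f"avalanches of size {s:>2}: {sizes.get(s, 0)}")
```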

The suggestion is that the occasional ‘avalanches’ of neuronal firing in the brain are useful, allowing the brain to enter new states more rapidly than it otherwise could. Being on the edge of chaos allows “maximum transmission with minimum risk of descending into chaos”. The arrival of a neuronal avalanche is related to the sudden popping-up of an idea in the mind, or perhaps the unexpected recurrence of a random memory. There is also evidence that the duration of phase shifts is related to IQ scores – perhaps because the longer shift allows the recruitment of more neurons. The recruitment of additional neurons is presumed in such cases to be a good thing (I feel there must be some caveats about that), but there are also suggestions that excess time spent in phase shifts could be a cause of schizophrenia (someone should set out a list somewhere of all the things that at one time or another have been put forward as causes of schizophrenia), while not enough phase-shifting in parts of the brain to do with social behaviour might have something to do with autism.
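The usual toy model behind talk of neuronal avalanches is a branching process, and it makes the ‘edge of chaos’ idea concrete. In this sketch (again my own illustration, with invented parameters, not the researchers’ model) each spike triggers on average sigma further spikes: below 1, activity fizzles out quickly; above 1, it tends to run away; and right at the critical value avalanches of every size occur – maximal transmission without tipping over.

```python
# Toy branching model of a neuronal avalanche (illustrative only).
import random

FANOUT = 10            # downstream contacts per neuron (made-up figure)

def avalanche_size(sigma, cap=10_000):
    """Total spikes triggered by one seed spike, branching ratio sigma."""
    p = sigma / FANOUT                   # per-contact firing probability
    active = total = 1
    while active and total < cap:        # cap stands in for runaway activity
        children = sum(1 for _ in range(active * FANOUT)
                       if random.random() < p)
        total += children
        active = children
    return total

for sigma in (0.5, 1.0, 1.1):
    sizes = sorted(avalanche_size(sigma) for _ in range(500))
    print(f"sigma={sigma}: median={sizes[len(sizes)//2]}, max={sizes[-1]}")
```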

One claim not made in the article, but one which could well be made, is that all this might account for the sensation of free will. If the brain occasionally morphs through chaos into a new state, might it not be that the conclusions which emerge would seem to have come out of nowhere? We might be led to assume that these thoughts were freely generated, distinct from the normal predictable pattern. I think the temptation would be to frame such a theory as an explanation of the illusion of free will: why we feel as if some of our decisions are free even though, in the final analysis, determinism rules. But I can also imagine a compatibilist claiming that chaotic phase shifts really were freedom. A free act is one which is not predictable, such a person might argue; but we don’t mean merely unpredictable in practice – none of us is currently able to look at a brain and predict the decisions it will make in any given circumstances. The relevant sense is predictability in principle: whether the decision could be predicted if we had all the data plus unlimited time and computing power. Now are chaotic changes predictable in principle or not? They occur within normal physical rules, so in the ultimate sense they are clearly deterministic. But the difficulties are so great that to say they’re only unpredictable in practice seems to stretch ‘practice’ a long way – we might easily need perfection of measurement to a degree which is never going to be obtainable under any imaginable real circumstances. Couldn’t we rather say, then, that we’re dealing with a third kind of unpredictability, neither quite unpredictability in mere practice nor quite unpredictability in principle, and take the view that decisions subject to this level of unpredictability deserve to be called free? I think we could, but ultimately I’m disinclined to do so, because in the final analysis that feels more like inventing a new concept of freedom than justifying the existing one.
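The point about measurement can be made concrete with any stock example of deterministic chaos – nothing brain-specific about this sketch, which just iterates the logistic map. Two starting points differing by one part in a billion agree for a while and then diverge completely; predicting further ahead demands exponentially more initial precision, which is what makes ‘unpredictable in practice’ feel like an understatement.

```python
# Deterministic chaos in one line of arithmetic: the logistic map
# x -> 4x(1 - x). Each step roughly doubles any initial discrepancy,
# so n extra steps of foresight cost about n extra bits of precision.
def step(x):
    return 4.0 * x * (1.0 - x)

a, b = 0.400000000, 0.400000001     # differ in the ninth decimal place
for n in range(61):
    if n % 10 == 0:
        print(f"step {n:2}: {a:.6f} vs {b:.6f}  (gap {abs(a - b):.2e})")
    a, b = step(a), step(b)
# By around step 30 the two trajectories bear no resemblance to each
# other, despite perfectly deterministic dynamics throughout.
```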

There’s another issue here that affects a number of the speculations in the article. We must beware of assuming too easily that features of the underlying process necessarily correspond directly with phenomenal features of experience. So, for example, it’s assumed that when the brain goes quickly into a new state in terms of its neuronal firing, that would be like a new thought popping up suddenly in our conscious minds, an idea which seemed to have come out of nowhere. It ain’t necessarily so (though it would be an interesting question to test). The fact that the brain uses chaos to achieve its results does not mean that the same chaos is directly experienced in our thoughts, any more than I experience, say, that old 40Hz buzz starting up in my right parietal, or whatever. At the moment (not having read the actual research, of course) it seems equally likely that phase shifts are wholly outside conscious experience, perhaps, for example, being required in order to allow subordinate systems to catch up rapidly with a separate conscious process which they don’t directly influence. Or perhaps they’re just the vigorous shaking which clears our mental Etch-a-Sketch, correlated with, but not constitutive of, the sophisticated complication of our conscious doodlings.

Dancing Pixies

I see that among the first papers published by the recently-launched journal Cognitive Computation, they sportingly included one arguing that we shouldn’t be seeing cognition as computational at all. The paper, by John Mark Bishop of Goldsmiths, reviews some of the anti-computational arguments and suggests we should think of cognitive processes in terms of communication and interaction instead.

The first two objections to computation are in essence those of Penrose and Searle, and both have been pretty thoroughly chewed over in previous discussions in many places: the first suggests that human cognition does not suffer the Gödelian limitations under which formal systems must labour, and so the brain cannot be operating under a formal system like a computer program; the second is the famous Chinese Room thought experiment. Neither objection is universally accepted, to put it mildly, and I’m not sure that Bishop accepts them unreservedly himself – he seems to feel that having these popular counter-arguments in play is enough of a hit against computationalism in itself to make us want to look elsewhere.

The third case against computationalism is the pixies: I believe this is an argument of Bishop’s own, dating back a few years, though he scrupulously credits some of the essential ideas to Putnam and others. A remarkable feature of the argument is that it uses panpsychism in a reductio ad absurdum (reductio ad absurdum is where you assume the truth of the thing you’re arguing against, and then show that it leads to an absurd, preferably self-contradictory, conclusion).

Very briefly, it goes something like this: if computationalism is true, then anything with the right computational properties has true consciousness (Bishop specifies Ned Block’s p-consciousness, phenomenal consciousness, real something-that-it-is-like experience). But a computation is just a given series of states, and those states can be indicated any way we choose. It follows that, on some interpretation, series of states of the required kind are all over the place all the time. If that were true, consciousness would be ubiquitous, and panpsychism would be true (a state of affairs which Bishop represents as being akin to a world full of pixies dancing everywhere). But since, says Bishop, we know that panpsychism is just ridiculous, that must be wrong, and it follows that our initial assumption was incorrect: computationalism is false after all.
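The mapping trick at the heart of the argument can be put in a few lines of code – this is my own illustration of the Putnam-style move, not Bishop’s formalism. Take any physical system that reliably passes through distinct states, even a bare tick counter, and ‘interpret’ it as running an arbitrary computation by choosing the state mapping after the fact:

```python
# Putnam-style interpretation trick (my illustration, not Bishop's).
# The "physical system" is just a counter ticking 0, 1, 2, ...

target_trace = ["S0", "S1", "S2", "S1", "HALT"]   # states of some computation

physical_states = range(len(target_trace))        # ticks of the counter
interpretation = {tick: state                     # a mapping we choose freely,
                  for tick, state in zip(physical_states, target_trace)}

# Under this interpretation the counter "computes" the trace exactly:
print([interpretation[t] for t in physical_states])
# -> ['S0', 'S1', 'S2', 'S1', 'HALT']
# If realising such a sequence of states suffices for consciousness,
# then rocks, walls and pails of water realise it too: pixies everywhere.
```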

There are of course plenty of people who would not accept this at all, and would instead see the whole argument as just another reason to think that panpsychism might be true. Bishop does not spend much time on explaining why he thinks panpsychism is unacceptable, beyond suggesting that it is incompatible with the ‘widespread’ desire to explain everything in physical terms, but he does take on some other objections more explicitly. These mostly express different kinds of uneasiness about the idea that an arbitrary selection of things could properly constitute a computation with the right properties to generate consciousness.

One of the more difficult is an objection from Hofstadter that the required sequences of states can only be established after the fact: perhaps we could copy down the states of a conscious experience and then reproduce them, but not determine them in advance. Bishop counters with an argument based on running the same consciousness program on a robot twice: the first time we didn’t know how it would turn out; the second time we did (because it’s an identical robot running an identical program), yet it seems absurd to think that one run could be conscious and the other not.

Perhaps the trickiest objection mentioned is from Chalmers; it points out that cognitive processes are not pre-ordained linear sequences of states, but at every stage have the possibility of branching off and developing differently. We could, of course, remove every conditional switch in a given sequence of conscious cognition and replace it by a non-conditional one leading on to the state which was in fact chosen next. For that given sequence the outputs are the same – but we’re not entitled to presume that conscious experience would arise in the same way, because the functional organisation is clearly different, and that is the thing which, on computationalist reasoning, needs to be the same. Bishop therefore imagines a more refined version: two robots run similar programs; one program has been put through a code optimiser which keeps all the conditional branches but removes the bits of code which follow, as it were, the unused branches of the conditionals. Now surely everything relevant is the same: are we going to say that consciousness arises in one robot by virtue of there being bits of extra code which lie there idle? That seems odd.
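A toy example may make the distinction clearer – my own illustration, not an example from the paper. The first function is the original program with a real conditional; the second is Chalmers’s worry, a replay with the conditional replaced by an unconditional jump to whatever was in fact chosen; the third is Bishop’s refinement, where the branch test survives but an optimiser has stripped the arm this run never takes:

```python
# Three versions of "the same" run (toy illustration, not from the paper).

def original(x):
    if x > 0:                     # a genuine conditional branch
        return "excite"
    else:
        return "inhibit"

def replayed_trace(x):
    return "excite"               # Chalmers: branch gone, organisation differs

def optimised(x):
    if x > 0:                     # Bishop: the test remains...
        return "excite"
    raise RuntimeError("pruned")  # ...but the untaken arm has been stripped

# For the run that actually occurred (x = 1) all three behave alike:
print(original(1), replayed_trace(1), optimised(1))
```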

That argument might work, but we must remember that Bishop’s reductio requires the basics of consciousness to be lying around all over the place, instantiated by chance in all sorts of things. While we were dealing with mere sequences of states, that might look plausible; but if we have to have conditional branches connecting the states (even ones whose unused ends have been pruned), it no longer seems plausible to me. So in patching up his case to respond to the objection, Bishop seems to me to have pulled out some of the foundations it was originally standing on. In fact, I think that consciousness requires the right kind of causal relations between mental states, so that arbitrary sets or lists of states won’t do.

The next part of the discussion is interesting. In many ways computationalism looks like a productive strategy, concedes Bishop – but there are reasons to think it has its limitations. One of the arguments he quotes here is the Searlian point that there is a difference between a simulation and reality. If we simulate a rainstorm on a computer, no-one expects to get wet; so if we simulate the brain, why should we expect consciousness? Now the distinction between a simulation and the real thing is a relevant and useful one, but the comparison of rain and consciousness begs the question too much to serve as an argument. By choosing rain as the item to be simulated, we pick something whose physical composition is (in some sense) essential; if it isn’t made of water it isn’t rain. To assume that the physical substrate is equally essential for consciousness is just to assume what computationalism explicitly denies. Take a different example: a letter. When I write a letter on my PC, I don’t regard it as a mere simulation, even though no paper need be involved until it’s printed; in fact, I have more than once written letters which were eventually sent as email attachments and never achieved physical form. This is surely because with a letter the information is more essential than the physical instantiation. Doesn’t it seem highly plausible that the same might be true, to an even greater extent, of consciousness? If it is true, then the distinction between simulation and reality ceases to apply.

To make sceptical simulation arguments work, we need a separate reason to think that a given computation is more like a simulation than the reality – and the strange thing is, I think that’s more or less what the objections from Hofstadter and Chalmers were giving us: they both draw on the intuition that a sequence of states could only simulate consciousness in the way a series of film frames simulates motion.

The ultimate point, for Bishop, is to suggest we should move on from the ‘metaphor’ of computation to another based on communication. It’s true that the idea of computation as the basis of consciousness has run into problems over recent years, and the optimism of its adherents has been qualified by experience; on the other hand it still has some remarkable strengths. For one thing, we understand computation pretty clearly and thoroughly; if we could reduce consciousness to computation, the job really would be done; whereas if we reduce consciousness to some notion of communication which still (as Bishop says) requires development and clarification, we may still have most of the explanatory job to do.

The other thing is that computation of some kind, if not the only game in town, is still far closer to offering a complete answer than any other hypothesis. I suspect many people who started out in opposing camps on this issue would agree now that the story of consciousness is more likely to be ‘computation plus plus’ (whatever that implies) than something quite unrelated.