I see that among the first papers published by the recently-launched journal Cognitive Computation, they sportingly included one arguing that we shouldn’t be seeing cognition as computational at all. The paper, by John Mark Bishop of Goldsmiths, reviews some of the anti-computationalist arguments and suggests we should think of cognitive processes in terms of communication and interaction instead.
The first two objections to computation are in essence those of Penrose and Searle, and both have been pretty thoroughly chewed over in previous discussions in many places; the first suggests that human cognition does not suffer the Gödelian limitations under which formal systems must labour, and so the brain cannot be operating under a formal system like a computer program; the second is the famous Chinese Room thought experiment. Neither objection is universally accepted, to put it mildly, and I’m not sure that Bishop is saying he accepts them unreservedly himself – he seems to feel that having these popular counter-arguments in play is enough of a hit against computationalism in itself to make us want to look elsewhere.
The third case against computationalism is the pixies: I believe this is an argument of Bishop’s own, dating back a few years, though he scrupulously credits some of the essential ideas to Putnam and others. A remarkable feature of the argument is that it uses panpsychism in a reductio ad absurdum (reductio ad absurdum is where you assume the truth of the thing you’re arguing against, and then show that it leads to an absurd, preferably self-contradictory conclusion).
Very briefly, it goes something like this: if computationalism is true, then anything with the right computational properties has true consciousness (Bishop specifies Ned Block’s p-consciousness, phenomenal consciousness, real something-that-it-is-like experience). But a computation is just a given series of states, and those states can be indicated any way we choose. It follows that on some interpretation, series of states of the required kind are all over the place all the time. If that were true, consciousness would be ubiquitous, and panpsychism would be true (a state of affairs which Bishop represents as being akin to a world full of pixies dancing everywhere). But since, says Bishop, we know that panpsychism is just ridiculous, that must be wrong, and it follows that our initial assumption was incorrect; computationalism is false after all.
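The relabelling move at the heart of the argument can be made concrete with a minimal sketch (all names here are hypothetical, chosen for illustration): take any physical process that passes through distinct states, such as a clock ticking, and build an interpretation mapping each physical state to whichever computational state occurred at the same time step.

```python
# Illustrative sketch of a Putnam-style mapping (hypothetical names):
# any physical system traversing distinct states can be "interpreted"
# as running an arbitrary target computation's state sequence.

def putnam_mapping(physical_trace, computation_trace):
    """Pair each physical state with the computational state occurring
    at the same time step, yielding an interpretation dictionary."""
    assert len(physical_trace) == len(computation_trace)
    return dict(zip(physical_trace, computation_trace))

# A trivial "physical system": a clock ticking through distinct states.
physical_trace = [f"tick_{t}" for t in range(4)]

# The state sequence of some computation we care about.
computation_trace = ["s0", "s1", "s2", "s3"]

interpretation = putnam_mapping(physical_trace, computation_trace)

# Under this interpretation, the clock "performs" the computation:
replayed = [interpretation[state] for state in physical_trace]
assert replayed == computation_trace
```

Since interpretations this cheap are always available, any sufficiently complex lump of matter "implements" the consciousness-generating computation on some reading, which is exactly the ubiquity the reductio exploits.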
There are of course plenty of people who would not accept this at all, and would instead see the whole argument as just another reason to think that panpsychism might be true after all. Bishop does not spend much time on explaining why he thinks panpsychism is unacceptable, beyond suggesting that it is incompatible with the ‘widespread’ desire to explain everything in physical terms, but he does take on some other objections more explicitly. These mostly express different kinds of uneasiness about the idea that an arbitrary selection of things could properly constitute a computation with the right properties to generate consciousness.
One of the more difficult is an objection from Hofstadter that the required sequences of states can only be established after the fact: perhaps we could copy down the states of a conscious experience and then reproduce them, but not determine them in advance. Bishop counters with an argument based on running the same consciousness program on a robot twice; the first time we didn’t know how it would turn out; the second time we did (because it’s an identical robot and identical program), but it’s absurd to think that one run could be conscious and the other not.
Perhaps the most tricky objection mentioned is from Chalmers; it points out that cognitive processes are not pre-ordained linear sequences of states, but at every stage have the possibility of branching off and developing differently. We could, of course, remove every conditional switch in a given sequence of conscious cognition and replace it by a non-conditional one leading on to the state which was in fact the next one chosen. For that given sequence, the outputs are the same – but we’re not entitled to presume that conscious experience would arise in the same way, because the functional organisation is clearly different, and that is the thing, on computationalist reasoning, which needs to be the same. Bishop therefore imagines a more refined version: two robots run similar programs; one program has been put through a code optimiser which keeps all the conditional branches but removes bits of code which follow, as it were, the unused branches of the conditionals. Now surely everything relevant is the same: are we going to say that consciousness arises in one robot by virtue of there being bits of extra code there which lie there idle? That seems odd.
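The two transformations at issue can be sketched in a few lines of code (a toy illustration under my own hypothetical names, not anything from Bishop's paper): one version keeps the full branching structure, one replaces the conditional outright with the step actually taken, and one keeps the conditional test but prunes the body of the branch the recorded run never used.

```python
# Toy contrast between a program's full functional organisation and two
# trace-specific variants, as in the Chalmers objection and Bishop's
# refinement of it. All names are illustrative.

def original(x):
    # Full organisation: both branches are genuinely possible.
    if x > 0:
        return "positive path"
    else:
        return "non-positive path"

def unrolled_for_trace():
    # Chalmers-style replacement: the conditional is gone entirely;
    # the state that was in fact next is wired in unconditionally.
    return "positive path"

def branch_pruned(x):
    # Bishop's refinement: the conditional test survives, but the code
    # on the branch the recorded run never took has been removed.
    if x > 0:
        return "positive path"
    raise RuntimeError("unused branch was pruned away")

# On the recorded input the three produce identical outputs...
assert original(1) == unrolled_for_trace() == branch_pruned(1)
# ...but only original() retains the counterfactual alternatives.
```

The question Bishop presses is then visible in the code itself: `original` and `branch_pruned` differ only in dead code that a given run never touches, so it seems odd to let consciousness hang on its presence.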
That argument might work, but we must remember that Bishop’s reductio requires the basics of consciousness to be lying around all over the place, instantiated by chance in all sorts of things. While we were dealing with mere sequences of states, that might look plausible, but if we have to have conditional branches connecting the states (even ones whose unused ends have been pruned) it no longer seems plausible to me. So in patching up his case to respond to the objection, Bishop seems to me to have pulled out some of the foundations it was originally standing on. In fact, I think that consciousness requires the right kind of causal relations between mental states, so that arbitrary sets or lists of states won’t do.
The next part of the discussion is interesting. In many ways computationalism looks like a productive strategy, concedes Bishop – but there are reasons to think it has its limitations. One of the arguments he quotes here is the Searlian point that there is a difference between a simulation and reality. If we simulate a rainstorm on a computer, no-one expects to get wet; so if we simulate the brain, why should we expect consciousness? Now the distinction between a simulation and the real thing is a relevant and useful one, but the comparison of rain and consciousness begs the question too much to serve as an argument. By choosing rain as the item to be simulated, we pick something whose physical composition is (in some sense) essential; if it isn’t made of water it isn’t rain. To assume that the physical substrate is equally essential for consciousness is just to assume what computationalism explicitly denies. Take a different example: a letter. When I write a letter on my PC, I don’t regard it as a mere simulation, even though no paper need be involved until it’s printed; in fact, I have more than once written letters which were eventually sent as email attachments and never achieved physical form. This is surely because with a letter, the information is more essential than the physical instantiation. Doesn’t it seem highly plausible that the same might be true to an even greater extent of consciousness? If it is true, then the distinction between simulation and reality ceases to apply.
To make sceptical simulation arguments work, we need a separate reason to think that some computation was more like a simulation than the reality – and the strange thing is, I think that’s more or less what the objections from Hofstadter and Chalmers were giving us; they both sort of draw on the intuition that a sequence of states could only simulate consciousness in the sort of way a series of film frames simulates motion.
The ultimate point, for Bishop, is to suggest we should move on from the ‘metaphor’ of computation to another based on communication. It’s true that the idea of computation as the basis of consciousness has run into problems over recent years, and the optimism of its adherents has been qualified by experience; on the other hand it still has some remarkable strengths. For one thing, we understand computation pretty clearly and thoroughly; if we could reduce consciousness to computation, the job really would be done; whereas if we reduce consciousness to some notion of communication which still (as Bishop says) requires development and clarification, we may still have most of the explanatory job to do.
The other thing is that computation of some kind, if not the only game in town, is still far closer to offering a complete answer than any other hypothesis. I suspect many people who started out in opposing camps on this issue would agree now that the story of consciousness is more likely to be ‘computation plus plus’ (whatever that implies) than something quite unrelated.