Dancing Pixies

I see that among the first papers published by the recently launched journal Cognitive Computation, they sportingly included one arguing that we shouldn’t be seeing cognition as computational at all. The paper, by John Mark Bishop of Goldsmiths, reviews some of the anti-computational arguments and suggests we should think of cognitive processes in terms of communication and interaction instead.

The first two objections to computation are in essence those of Penrose and Searle, and both have been pretty thoroughly chewed over in previous discussions in many places. The first suggests that human cognition does not suffer the Gödelian limitations under which formal systems must labour, and so the brain cannot be operating under a formal system like a computer program; the second is the famous Chinese Room thought experiment. Neither objection is universally accepted, to put it mildly, and I’m not sure that Bishop is saying he accepts them unreservedly himself – he seems to feel that having these popular counter-arguments in play is enough of a hit against computationalism in itself to make us want to look elsewhere.

The third case against computationalism is the pixies: I believe this is an argument of Bishop’s own, dating back a few years, though he scrupulously credits some of the essential ideas to Putnam and others. A remarkable feature of the argument is that it uses panpsychism in a reductio ad absurdum (a reductio ad absurdum is where you assume the truth of the thing you’re arguing against, and then show that it leads to an absurd, preferably self-contradictory conclusion).

Very briefly, it goes something like this: if computationalism is true, then anything with the right computational properties has true consciousness (Bishop specifies Ned Block’s p-consciousness – phenomenal consciousness, real something-that-it-is-like experience). But a computation is just a given series of states, and those states can be indicated any way we choose. It follows that on some interpretation, the required kind of series of states is all over the place all the time. If that were true, consciousness would be ubiquitous, and panpsychism would be true (a state of affairs which Bishop represents as being akin to a world full of pixies dancing everywhere). But since, says Bishop, we know that panpsychism is just ridiculous, that must be wrong, and it follows that our initial assumption was incorrect: computationalism is false after all.
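The mapping at the heart of the pixie argument can be made concrete. Here is a toy sketch of my own (the rock, its states, and the target trace are invented examples, not Bishop’s): any object passing through a sequence of distinguishable states can be read, under an interpretation chosen after the fact, as instantiating an arbitrary computation’s state trace.

```python
# Toy illustration of the Putnam-style mapping behind the pixies argument:
# any sequence of distinct physical states can be read as any computation's
# state trace, simply by choosing the interpretation freely.

# The state trace of some target computation (say, four steps of a counter).
target_trace = ["S0", "S1", "S2", "S3"]

# Arbitrary physical states: successive readings from a "rock", or any
# object that passes through distinguishable states over time.
rock_states = ["warm", "warmer", "cooling", "cold"]

# The "interpretation" is just a mapping we are free to choose after the fact.
interpretation = dict(zip(rock_states, target_trace))

# Under that interpretation, the rock's history instantiates the computation.
decoded = [interpretation[s] for s in rock_states]
assert decoded == target_trace
```

The work, of course, is all being done by the freely chosen interpretation – which is exactly the feature the objections below press on.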

There are of course plenty of people who would not accept this at all, and would instead see the whole argument as just another reason to think that panpsychism might be true after all. Bishop does not spend much time on explaining why he thinks panpsychism is unacceptable, beyond suggesting that it is incompatible with the ‘widespread’ desire to explain everything in physical terms, but he does take on some other objections more explicitly.  These mostly express different kinds of uneasiness about the idea that an arbitrary selection of things could properly constitute a computation with the right properties to generate consciousness.

One of the more difficult is an objection from Hofstadter that the required sequences of states can only be established after the fact: perhaps we could copy down the states of a conscious experience and then reproduce them, but not determine them in advance. Bishop counters with an argument based on running the same consciousness program on a robot twice: the first time we didn’t know how it would turn out; the second time we did (because it’s an identical robot and identical program) – but it’s absurd to think that one run could be conscious and the other not.

Perhaps the trickiest objection mentioned is from Chalmers; it points out that cognitive processes are not pre-ordained linear sequences of states, but at every stage have the possibility of branching off and developing differently. We could, of course, remove every conditional switch in a given sequence of conscious cognition and replace it by a non-conditional one leading on to the state which was in fact the next one chosen. For that given sequence, the outputs are the same – but we’re not entitled to presume that conscious experience would arise in the same way, because the functional organisation is clearly different, and that is the thing, on computationalist reasoning, which needs to be the same. Bishop therefore imagines a more refined version: two robots run similar programs; one program has been put through a code optimiser which keeps all the conditional branches but removes the bits of code which follow, as it were, the unused branches of the conditionals. Now surely everything relevant is the same: are we going to say that consciousness arises in one robot by virtue of there being bits of extra code which lie there idle? That seems odd.
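The contrast Chalmers is pointing at can be caricatured in a few lines of code (a toy sketch with names and details of my own invention, not from the paper): one function keeps a live conditional branch, the other has been “straight-lined” so that only the transition actually taken on a particular run survives.

```python
# Toy contrast: a live conditional branch versus a "straight-lined" program
# in which each branch point has been replaced by the transition actually
# taken on one particular run (here, a run where x > 0).

def with_conditionals(x):
    # Genuine branch: the counterfactual path exists even if never taken.
    if x > 0:
        state = "positive-path"
    else:
        state = "negative-path"   # idle code on any run where x > 0
    return state

def straight_lined(x):
    # Same trace for the run where x > 0, but no alternative path at all.
    state = "positive-path"
    return state

# For the one input actually presented, the two are indistinguishable...
assert with_conditionals(5) == straight_lined(5) == "positive-path"
# ...but their counterfactual structure differs:
assert with_conditionals(-5) != straight_lined(-5)
```

For the given run the two produce identical state sequences; they differ only in counterfactual structure – which is precisely what Chalmers says functional organisation must preserve.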

That argument might work, but we must remember that Bishop’s reductio requires the basics of consciousness to be lying around all over the place, instantiated by chance in all sorts of things. While we were dealing with mere sequences of states, that might look plausible, but if we have to have conditional branches connecting the states (even ones whose unused ends have been pruned) it no longer seems plausible to me.  So in patching up his case to respond to the objection, Bishop seems to me to have pulled out some of the foundations it was originally standing on. In fact, I think that consciousness requires the right kind of causal relations between mental states, so that arbitrary sets or lists of states won’t do.

The next part of the discussion is interesting. In many ways computationalism looks like a productive strategy, concedes Bishop – but there are reasons to think it has its limitations. One of the arguments he quotes here is the Searlian point that there is a difference between a simulation and reality. If we simulate a rainstorm on a computer, no-one expects to get wet; so if we simulate the brain, why should we expect consciousness? Now the distinction between a simulation and the real thing is a relevant and useful one, but the comparison of rain and consciousness begs the question too much to serve as an argument. By choosing rain as the item to be simulated, we pick something whose physical composition is (in some sense) essential; if it isn’t made of water it isn’t rain. To assume that the physical substrate is equally essential for consciousness is just to assume what computationalism explicitly denies. Take a different example; a letter. When I write a letter on my PC, I don’t regard it as a mere simulation, even though no paper need be involved until it’s printed; in fact, I have more than once written letters which were eventually sent as email attachments and never achieved physical form. This is surely because with a letter, the information is more essential than the physical instantiation. Doesn’t it seem highly plausible that the same might be true to an even greater extent of consciousness? If it is true, then the distinction between simulation and reality ceases to apply.

To make sceptical simulation arguments work, we need a separate reason to think that some computation was more like a simulation than the reality – and the strange thing is, I think that’s more or less what the objections from Hofstadter and Chalmers were giving us; they both sort of draw on the intuition that a sequence of states could only simulate consciousness  in the sort of way a series of film frames simulates motion.

The ultimate point, for Bishop, is to suggest we should move on from the ‘metaphor’ of computation to another based on communication. It’s true that the idea of computation as the basis of consciousness has run into problems over recent years, and the optimism of its adherents has been qualified by experience; on the other hand it still has some remarkable strengths. For one thing, we understand computation pretty clearly and thoroughly;  if we could reduce consciousness to computation, the job really would be done; whereas if we reduce consciousness to some notion of communication which still (as Bishop says) requires development and clarification, we may still have most of the explanatory job to do.

The other thing is that computation of some kind, if not the only game in town, is still far closer to offering a complete answer than any other hypothesis. I suspect many people who started out in opposing camps on this issue would agree now that the story of consciousness is more likely to be ‘computation plus plus’ (whatever that implies) than something quite unrelated.

25 thoughts on “Dancing Pixies”

  1. I’m not sure I understand exactly what Bishop means by communication, but I’d like to point out that computation does not necessarily exclude communication. Please note that I am something of an amateur when it comes to computability theory, despite having studied computer science for a few years, and I don’t have a very strong philosophy background yet either, so I am by no means qualified to discuss the details of this. In any case, it’s interesting to note that there are many different models of computation, several of them explicitly dealing with communication – for example the Actor model developed by Carl Hewitt and the Calculus of Communicating Systems developed by Robin Milner. These systems do not actually enable the description of behavior that cannot be described in more classical models of computation such as the lambda calculus – in other words, they do not escape the aforementioned Gödelian limitations – but they might provide a way to explain consciousness in terms of communication without giving up computationalism. The key is just that communication could possibly be a more convenient metaphor, not that it provides a way to explain phenomena not explainable through computation. Maybe Bishop has some sort of notion of communication that would not have these Gödelian limitations, but this seems destined to remain a vague metaphor, since any attempted formalization should automatically suffer from Gödel’s limitations, right? Am I missing and/or misunderstanding something here?
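    This point – that communication-based models remain computational – can be illustrated with a minimal actor-style sketch (a toy of my own devising, not Hewitt’s formal Actor model): the “communication” between actors is realised entirely by ordinary computation over a message queue.

```python
# Toy actor-style sketch: message-passing "communication" implemented
# entirely by ordinary computation (a queue plus a processing loop).
from collections import deque

class Actor:
    """A named mailbox plus purely computational message-handling."""

    def __init__(self, name):
        self.name = name
        self.mailbox = deque()

    def send(self, other, message):
        # "Communication": appending a message to another actor's mailbox.
        other.mailbox.append((self.name, message))

    def process(self):
        # Handling the mailbox is an ordinary computation over a queue.
        replies = []
        while self.mailbox:
            sender, msg = self.mailbox.popleft()
            replies.append(f"{self.name} got '{msg}' from {sender}")
        return replies

a, b = Actor("A"), Actor("B")
a.send(b, "ping")
print(b.process())  # prints ["B got 'ping' from A"]
```

Nothing here goes beyond what a classical model of computation can describe; the communication vocabulary is a convenience, not an escape hatch.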

  2. No, good points, Lehooo. Bishop does not set out the alternative metaphor he is proposing in any detail, but he cites this paper – I haven’t read it yet myself.

  3. Interesting post and interesting comment by Lehooo, although this is the opinion of someone who is not “something of an amateur” but a complete one instead.
    When a philosopher or scientist opposing the computational view of consciousness proposes an alternative one, I tend to expect this view to be non-computational or somehow beyond computation. Roger Penrose did it that way, but I have the feeling that many others have failed. That’s what I thought, for example, when reading Gerald Edelman’s Bright Air, Brilliant Fire some years ago. After reading the post and Lehooo’s comment, I have the same impression about Bishop’s proposal. Maybe I am unfair, though: I haven’t read his paper yet.
    Is there, maybe, some kind of narrowness about the idea of what computation-based systems are or can be? Our minds are not like our PCs, but not every computation-based device has to be a digital computer running a program in C.
    There is one more thing I would like to comment on. I don’t see how the fact that we have a pretty clear and thorough understanding of computation may serve as support for the computational approach (second-last paragraph of the post). It sounds to me like that story about the man looking for his keys under the lamp post. “Did you lose them here?” – “No, but this is the only place there is enough light to look for them.” The way I see it, the reason the computational approach is worthwhile is that we know there are computations running in our brains. I mean: we may not yet know whether our minds have a non-computational essential ingredient, but I think that we can be sure they have plenty of computational ones and that there is still a lot to study about them.

  4. Fair comment about the man with the keys, Luis – the fact that we understand computation relatively well doesn’t mean it is the answer. Perhaps the point is better just put negatively – some of the alternative theories seem to leave almost as much to be explained as there was to begin with. Computationalism may well be wrong, but in the case of some other theories (and I fear Bishop’s might fall into this category) it’s not even clear that they really have a candidate for Final Answer.

  5. I agree that the story about the man with the keys raises an interesting point, and I certainly don’t want to advocate viewing computation as the only way to look at the world, but there’s something about the seeming universality of computation and the relationship between this and determinism that deserves some discussion. I can certainly understand that an advocate of free will would view computationalism as inadequate, but is it possible to believe in determinism but still view computationalism as inadequate? What I mean is this: is there a theory of determinism that holds that there exist deterministic processes that are non-Turing-computable? Hypercomputation comes to mind, but that would still constitute a model of computation. The point I’m trying to make is that if computation is not strictly inadequate as an explanation, then it seems weird to state that computation is the wrong way to look at it. If you think communication is the right metaphor, then construct a model of computation based on communication; the same thing could probably be done with any other metaphor as well. Describing any kind of process in computational terms seems to me to be analogous to formalizing an argument in some logic, simply a way to clarify your ideas and make them more precise. If this is right, then saying that computation is the wrong way to look at any specific process would be analogous to saying that logic is the wrong way to look at any specific argument. But again, maybe I’m missing something important.

  6. I think communication has relevance because it is closer to the concept of interaction. Consciousness may be more a system of interaction between the body and the environment, with the “computational” brain acting as the nexus.

    A case in point is an artificial eye, which is computational: when it is integrated into a blind person’s brain they attain visual consciousness, even though we would not say the artificial eye itself is conscious.

  7. Yes, communication and interaction are probably very useful concepts for understanding the mind, but they do not exclude computation. There are models of interactive computation in which a computation does not follow a simple input-to-output chain of events but instead can interact with the environment during an ongoing computation. This of course also raises questions regarding whether to look at the environment in computational terms, which we can certainly do if we consider digital physics to be a viable explanation of the natural world.

  8. I find it hard to understand this debate. Surely computation is something that can be performed by a Turing Machine with a one-dimensional (or higher-dimensional) tape. A Turing Machine involves the communication of bits of information through a crude processing engine that can move the tape back and forth and perform some truly basic operations, such as writing a 0 to the tape if a 1 is present on another part of the tape, etc. All a Turing Machine does is re-arrange information, so computation = communication + rearrangement.

    I am entirely baffled why anyone would regard the rearrangement of bits of information on a one-dimensional tape as equivalent to conscious experience. I am also baffled as to why “intentionality” is introduced when the computational model only requires a world composed of separate instants of no duration. This belief in the computational nature of mind seems to be ideological rather than philosophical or scientific, and probably stems from a deep-seated naive realism about experience or an idolization of toys and tools such as computers (the Ancient Greeks used to idolize their ‘high tech’ clay statuettes in a similar way). See Materialists should read this first.
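    The “communication + rearrangement” picture above can be made concrete with a toy step function (the machine below – a bit-flipper that halts on a blank – is a minimal example of my own, not a claim about any particular formulation):

```python
# Toy Turing-machine runner: the head reads a cell, rewrites it, and moves,
# so all the machine ever does is rearrange bits of information on a tape.

def run(tape, rules, state="start", pos=0, max_steps=100):
    tape = dict(enumerate(tape))          # sparse tape; empty cells read " "
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, " ")
        write, move, state = rules[(state, symbol)]
        tape[pos] = write                 # rearrange one cell of information...
        pos += move                       # ...and move the head along the tape
    return "".join(tape[i] for i in sorted(tape)).strip()

# Rules: (state, read) -> (write, move, next_state). Flip 0<->1, halt on blank.
rules = {
    ("start", "0"): ("1", 1, "start"),
    ("start", "1"): ("0", 1, "start"),
    ("start", " "): (" ", 0, "halt"),
}
print(run("0110", rules))  # prints "1001"
```

Whether this kind of rearrangement could ever amount to conscious experience is, of course, exactly what is in dispute.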

  9. The answer could possibly be quantum computing where a branch of the program is possibly taken by a collapse of a wavefunction. It is possible that the actual process of the collapse is what leads to consciousness, so just replaying the states later will not recreate the consciousness, though the end states from a macroscopic perspective happen to be the same.

  10. Been following the blog for a short time, and I would like to say thanks, because it is most interesting!
    The paper Peter linked was interesting for its discussion of “communication”, but I find the re-branding of computation to be ultimately unnecessary. Whatever we call it, at the end of the day the solution will have to be computational, will it not?

  11. “Now the distinction between a simulation and the real thing is a relevant and useful one, but the comparison of rain and consciousness begs the question too much to serve as an argument. By choosing rain as the item to be simulated, we pick something whose physical composition is (in some sense) essential; if it isn’t made of water it isn’t rain. To assume that the physical substrate is equally essential for consciousness is just to assume what computationalism explicitly denies. Take a different example; a letter. When I write a letter on my PC, I don’t regard it as a mere simulation, even though no paper need be involved until it’s printed; in fact, I have more than once written letters which were eventually sent as email attachments and never achieved physical form.”

    I think you’re missing the point. Rain is a physical phenomenon, and according to Searle (with whom I agree) so is consciousness. The ‘substrate’ argument is a bit of a red herring in this context. The point is that you can’t generate physical phenomena from logical processes.

  12. Searle may very well be right about a lot of this. But some people (I think) would explicitly assert that consciousness is a matter of information processing, and therefore that a computational simulation of consciousness of the right kind would automatically itself be conscious. In that context, to point out that simulations in some other domains don’t generate other kinds of real phenomena is not a refutation. (In fairness I’m not sure Searle himself meant it to be anything more than characteristically vivid counter-assertion.)

    One thing that would help here, of course, would be a Searlian account of the physical properties of the brain that give rise to consciousness: that might help us see whether consciousness really is like rain in the relevant ways. But as Searle honestly says, he can’t tell us what these physical properties are.

    Perhaps this is worth a further discussion.

  13. I think Searle meant what he said. He meant that consciousness belonged to the domain of observable scientific phenomena and not to purely mathematical facts. Consciousness has an objective existence as well as a subjective nature. To that extent it is a physical phenomenon and not mere information processing. Surely this is true. For one thing, consciousness has characteristics of absolute form rather than mere symbolic content. “Red” is syntactical within the framework of a discussion about the wavelengths of light, but as a subjective experience it is pure semantics. How could you describe to a man, blind since birth, what red looks like? It’s an impossibility. But you could easily teach him about the physics of light and the fact that red corresponds to a wavelength of light in the visible spectrum.

    The simple and single fact that mental states are characterised by semantics as well as syntax is sufficient to displace the belief that conscious mental states are information processing (there are plenty of other facts besides, but this is the main one). Given that, the burden of proof is not actually on Searle to give any account whatsoever of how consciousness arises from physical processes. The fact that it does may be a fact at a “folk knowledge” level, vindicated by obvious “folk” knowledge – i.e. the effects of a bullet in the brain – but just because science hasn’t provided an answer doesn’t mean it’s not true, and it certainly doesn’t, in any way, shape or form, allow the hard AI enthusiasts to claim that the territory is theirs until science does.

    The burden of proof is on the AI people to show that ‘information’ can cross the ontological gulf into the magic world of phenomena. It’s not going to happen – by definition – but in the meantime I suspect a lot of careers built on hard AI will continue to insist that the emperor wears no clothes.

  14. I don’t doubt Searle meant what he said (whatever it was – I’m not altogether sure we’ve actually pinned down exactly what he said about rain). My point is just that people like Bishop, in quoting the rain analogy, should realise that it is really just a persuasive assertion of non-computationalism rather than an actual argument.

  15. The primary purpose of the analogy, I suppose, was to highlight the nature of simulation – to point out that a duck is not the same thing as a painting of a duck, or even a robotic, three-dimensional duck. It helps make a point.

    What was it you think he said about rain? I think it’s quite straightforward, isn’t it? A computer program of rain generates nothing phenomenal, as nothing about computer programs exists in the physical sense, whereas rain does.

    Nothing in, nothing out. Or maybe I’m missing the point somewhere.

  16. I don’t know if anyone is still following this thread, but I’ll comment again nonetheless, hoping that someone is reading as this is a very interesting discussion.

    I think Searle’s talk about rain reveals his simplistic take on the issue. There is no ground for certainty regarding the question of whether there are phenomena that can only be said to exist in a certain medium. I think I saw a quote somewhere from Searle where he talked about the impossibility of a computer simulation of a digestive system consuming a “real” pizza, which apparently in his mind is analogous to a computer simulation of a brain exhibiting consciousness. This of course misses the point entirely. The computer simulation of the digestive system can only consume a pizza in the same medium that it is represented in. However, if this representation is isomorphic to a “real” pizza, there is no reason to call it anything other than a pizza from the digestive system’s point of view. From our point of view, it would not be considered a pizza, but only because our digestive systems are represented in a different medium and are thus unable to interact with the pizza directly. For that to happen, it would have to be converted first through some sort of mechanism that can take the digital pizza and create a “real” pizza from it, atom by atom.

    The beads on an abacus represent numbers; sliding them up and down simulates arithmetic. Of course, trying to insert the beads into a computer will not cause it to perform arithmetic – they would have to be encoded in a format that is native to the computer. This, however, is not cause to believe that only the abacus performs real arithmetic while the computer only simulates it. There is no reason to simply assume that other phenomena are different from numbers only because we have so far only encountered atoms, pizzas and conscious minds in the form they have in the “real” world. This is the reason I think functionalism actually makes more sense than any alternative: it does not assume the existence of any special properties of any specific medium that prohibit an implementation in any other medium from being “real”. I wouldn’t go as far as saying that Ockham’s razor supports functionalism, but I’m leaning toward something along those lines. Functionalism seems to assume only that the essence of any phenomenon lies in its interactions with other phenomena, which is what enables it to still be the same phenomenon even if it is realized in some other medium. Alternatives to functionalism seem to inevitably assume that there is something more, something inherent in the particular realization of the phenomenon in a particular medium, which makes it what it is.

    Anyway, it’s getting late here, I’m a little bit tired and I’m far from an expert in this area so maybe I’m missing some point here or maybe I’m misrepresenting some of the issues and/or viewpoints I’ve talked about. Furthermore, if something seems unclear, please ask me to clarify.
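    The abacus comparison a couple of paragraphs up can be put in code (a toy of my own): the “same” addition realised in two different media, beads on a rod versus machine integers.

```python
# Toy illustration of medium-independence: one arithmetic fact, two media.

def add_beads(rod_a, rod_b):
    # The abacus medium: a rod is a list of beads; addition pools the beads.
    return rod_a + rod_b

def add_ints(a, b):
    # The electronic medium: the same arithmetic over machine integers.
    return a + b

beads = add_beads(["o"] * 2, ["o"] * 3)
# Same arithmetic, two different realizations:
assert len(beads) == add_ints(2, 3) == 5
```

On a functionalist reading, neither realisation is the “real” addition with the other a mere simulation; each implements the same pattern of interactions.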

  17. You make a good point, I think, Lehooo; although I think Searle would deny the validity of the idea of something being real within a medium. I can almost hear him saying “Real within a medium? Like fairies are real within the medium of fairy stories? That’s what we call ‘not real’ when we’re being serious.”

  18. Yes, and I think that raises a good point. If we dismiss the physical medium as entirely irrelevant, we do risk having to say that any description of something is really an instance of that thing. Of course we don’t want to say that the fairies of a story book are real, and perhaps arguments such as the one we imagine Searle making here are mostly meant as a reductio ad absurdum, to express the belief that functionalism leads to being unable to dismiss anything as unreal. In any case, this misses the point. The fairies of the story book are not real because they are only described (incompletely, at least in any story book I’ve read) and not actually fully realized. Fairies are a tricky example as we lack a thorough description of them, seeing as how they’re mythological creatures, so let’s use a dog as an example instead. A dog being described in a book is clearly not a real dog. A real dog is a complex system of processes, with a multitude of organs performing their functions. The dog in the book is disqualified as a real dog not because it is a realization of this system in the wrong medium, but because it is not a realization of this system at all. This, of course, is not an argument for functionalism, only against the notion that the hypothesized response from Searle is not really relevant. Not that I think you were saying it was.

  19. I meant to write “against the notion that the hypothesized response from Searle is really relevant” without the “not”. Sorry about that, I should check my comments more carefully before posting.

  20. I’m sorry if I’m clogging up this thread, and I’m also sorry if what I’m about to post seems like an act of shameless self-promotion, but I felt that it was definitely relevant to this discussion, so I decided to post it anyway. I just read a post here about the supposed recent simulation of a cat’s brain by an IBM team and how the experiment was apparently nothing of the sort, and decided to comment on it. My comment was a sort of continuation of the point I was trying to make here earlier, and I referred to Searle briefly, which is why I thought it would be relevant to this discussion. If anyone here has anything to say about my comments over there, please respond either here or in that thread.

  21. “Real within a medium? Like fairies are real within the medium of fairy stories? That’s what we call ‘not real’ when we’re being serious.”
    That’s a very good line.

  22. I’ve been making the analogy with laser light, which we can simulate with great precision. Only certain physical systems under the right conditions produce laser-light photons. So, too, I think certain physical systems under the right conditions produce consciousness.

    No simulation can ever produce photons although it can describe the system that’s lasing. My argument is that a brain simulation may show a biologically functioning system, but no consciousness, no laser light, will be produced. The simulated brain will be effectively comatose.

    The thing about the pixies requires picking ONE set of states among all the other sets of states, other pixies, WordStar, etc., and no one of these sets has any privilege over the others.

    Further, picking out any given state would be so resource- and computation-intensive that I think it invalidates the idea that there could be pixies.
