The Stove of Consciousness

I’ve been reading A.C. Grayling’s biography of Descartes: he advances the novel theory that Descartes was a spy. This is actually a rather shrewd suggestion which makes quite a lot of sense given Descartes’ wandering, secretive life. On balance I think he probably wasn’t conducting espionage missions – it’s unlikely we’ll ever know for sure, of course – but it’s certainly an idea any future biographer will have to address.

I was interested, though, to see what Grayling made of the stove. Descartes himself tells us that when held up in Germany by the advance of winter, he spent the day alone in a stove, and that was where his radical rebuilding of his own beliefs began. This famous incident has the sort of place in the history of philosophy that the apple falling on Newton’s head has in the history of science: and it has been doubted and queried in a similar way. But Descartes seems pretty clear about it: “je demeurais tout le jour enfermé seul dans un poêle, où j’avais tout le loisir de m’entretenir de mes pensées” – “I remained the whole day shut up alone in a stove, where I had full leisure to converse with my own thoughts”.

Some say it must in fact have been a bread-oven or a similarly large affair: Descartes was not a large man and he was particularly averse to cold and disturbance, but it would surely have to have been a commodious stove for him to have been comfortable in there all day. Some say that Bavarian houses of the period had large stoves, and certainly in the baroque palaces of the region one can see vast ornate ones that look as if they might have had room for a diminutive French philosopher. Some commonsensical people say that “un poêle” must simply have meant a stove-heated room; and this is in fact the view which Grayling adopts firmly and without discussion.

Personally I’m inclined to take Descartes’ words at face value; but the question of whether he actually sat in a real stove rather misses the point. Why does Descartes, a rather secretive man, even mention the matter at all? It must be because, true or not, it has metaphorical significance; it gives us additional keys to Descartes’ meaning which we ought not to discard out of literal-mindedness. (Grayling, in fairness, is writing history, not philosophy.)

First, Descartes’ isolation in the stove functions as a sort of thought-experiment. He wants to be able to doubt everything, but it’s hard to dismiss the world as a set of illusions when it’s battering away at your senses: so suppose we were in a place that was warm, dark, and silent? Second, it recalls Plato’s cave metaphor. Plato had his unfortunate exemplar chained in a cave where his only knowledge of the world outside came from flickering shadows on the wall; he wanted to suggest that what we take to be the real world is a similarly poor reflection of a majestic eternal reality. Descartes wants to work up a similar metaphor to a quite different conclusion, ultimately vindicating our senses and the physical world; perhaps this points up his rebellion against ancient authority. Third, in a way congenial to modern thinking and probably not unacceptable to Descartes, the isolation in the stove resembles and evokes the isolation of the brain in the skull.

The stove metaphor has other possible implications, but for us the most interesting thing is perhaps how it embodies, and possibly helped to consolidate, one of the most persistent metaphors about consciousness, one that has figured strongly in discussion for centuries, remains dominant, yet is really quite unwarranted. This is the idea that consciousness is internal. We routinely talk about “the external world” when discussing mental experience. The external world is what the senses are supposed to tell us about, but sometimes fail to; it is distinct from an internal world where we receive the messages and where things like emotions and intentions have their existence. The impression of consciousness being inside looking out is strongly reinforced by the way the eyes and ears seem to feed straight into the brain: but that impression of being located in the head would be the same if human anatomy actually put the brain in the stomach, so long as the eyes and ears remained where they are. In fact our discussions would make just as much sense if we described consciousness as external and the physical world as internal (or consciousness as ‘above’ and the physical world as ‘below’, or vice versa).

If we take consciousness to be a neural process there is, of course, a sense in which it is certainly in the brain; but only in the sense that my money is in the bank’s computer (though I can’t get it out with a hammer) or Pride and Prejudice is in the pages of that book over there (and not, after all, in my head). Strictly speaking, stories and totals don’t have the property of physical location, and nor, really, does consciousness.

Does it matter, if the metaphor is convenient? Well, it may well be that the traditional inside view encourages us to fall into certain errors. It has often been argued, for example (and still is), that because we’re sometimes wrong about what we’re seeing or hearing, we must in fact only ever see an intermediate representation, never the real world itself. I think this is a mistake, but it’s one that the internal/external view helps to make plausible. It may well be, in my opinion, that habitually thinking of consciousness as having a simple physical location makes it more difficult for us to understand it properly.

So perhaps we ought to make a concerted effort to stop, but to be honest I think the metaphor is just too deeply rooted. At the end of the day you can take the thinker out of the stove, but you can’t take the stove out of the thinker.

A new light on computation

A couple of years back I mentioned some ideas put forward by Mark Muhlestein; a further development of his thinking has now been published in Cognitive Computation, and an accessible version is here.

The paper addresses the question of whether a computational process could support consciousness. Muhlestein begins by briefly recounting some thought-experiments proposed by Maudlin and others. Suppose we run a computation which instantiates consciousness: then we run the same process but remove all the unused conditional branches, so that now the course of the process is fixed. In the second case the computer goes through the same set of states (does it, actually?), but we’d be disinclined to think it could be conscious; it’s just a replay. In fact, it’s hard to see why ‘replaying’ it makes any difference since the relevant states all exist in the record, so that inert record itself probably ought to be conscious. Worse than that, since we could, given a sufficiently weird and arbitrary decoding, read random patterns in rocks as recording the same states, an indefinite number of copies of that same consciousness would be occurring constantly pretty much everywhere.
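To make the branch-removal step concrete, here is a toy sketch in Python (the function names and the ‘percept’ values are my own, purely illustrative; obviously nothing this simple instantiates consciousness):

```python
def live_process(stimulus):
    # The original computation: at the branch point it could
    # genuinely have gone another way.
    if stimulus == "red square":
        percept = "seeing red"      # the branch actually taken
    else:
        percept = "seeing blue"     # never taken on this run
    return percept

def replayed_process():
    # The same run with the untaken branch stripped out: it passes
    # through the same sequence of states, but its course is fixed.
    percept = "seeing red"
    return percept

# On this run the two are indistinguishable from the outside:
assert live_process("red square") == replayed_process()
```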

That’s more or less how the argument runs, anyway, and what’s proposed to fix the problem is the idea that for consciousness to occur, the computation must have the property of counterfactual sensitivity. It must, in other words, have had the capacity to go another way. Without that property, no consciousness. The notion has a certain intuitive plausibility, I think: we can accept that in order for me to have the experience of seeing something, the fact that it was a red square and not a blue circle must be relevant; and that consciousness must perhaps be part of a stream which is, in some unobjectionable sense, free-flowing.

Muhlestein proposes a new thought experiment with a truly formidable apparatus. He sets up a physical implementation of Conway’s Game of Life and uses that in turn to implement a computer on which his processes can be run. Because his implementation of Life uses cellular automata which display and detect states using light, he can now intervene anywhere he likes by simply shining an external light into the process.
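For anyone who hasn’t met it, the Game of Life is a cellular automaton whose entire dynamics follow from one simple local rule; the rule is rich enough to build logic gates and memory, which is why a big enough Life grid can implement a general-purpose computer. A minimal Python sketch of the rule itself (these are just the standard Conway rules; nothing here is specific to Muhlestein’s light-based apparatus):

```python
from collections import Counter

def step(live_cells):
    """Advance one generation; live_cells is a set of (x, y) pairs."""
    # Count the live neighbours of every cell adjacent to a live cell.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Alive next generation: exactly three live neighbours, or
    # two live neighbours and already alive.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# A 'blinker' oscillates with period two:
blinker = {(0, 1), (1, 1), (2, 1)}
assert step(step(blinker)) == blinker
```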

If that last paragraph is unintelligible, don’t worry too much: all you need to know for the purposes of the argument is that we have a computer set up so that we have a special power to intervene arbitrarily from outside and constrain or alter the process which is running as it progresses. Muhlestein now takes a conscious computational process (in fact he proposes to scan one on a whole-brain basis from his friend Woody – don’t try this at home, kids!) and runs it on his set-up; but here’s the cunning part: he uses his light controls not to interfere, but to project the same process back onto itself. In effect, he’s over-determining the process: it runs normally by itself, but his lights are enforcing a recorded version of the same process at the same time.
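Schematically, the over-determined run looks something like this (my own toy rendering in Python, not the paper’s apparatus):

```python
def record_run(initial_state, rule, steps):
    # First, record an ordinary run of the computation.
    trace, state = [], initial_state
    for _ in range(steps):
        state = rule(state)
        trace.append(state)
    return trace

def overdetermined_run(initial_state, rule, trace):
    # The process still advances by its own rule, but the 'lights'
    # simultaneously force each state from outside. Because the
    # recording came from the same computation, the two never
    # conflict; yet the run could no longer have gone another way.
    state = initial_state
    for forced_state in trace:
        assert rule(state) == forced_state  # rule and lights agree
        state = forced_state                # the light fixes the state
    return state

rule = lambda n: 2 * n                  # stand-in for the Life computer
trace = record_run(1, rule, 5)          # [2, 4, 8, 16, 32]
assert overdetermined_run(1, rule, trace) == 32
```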

Now, the computation is taken to be conscious when running normally. It runs in exactly the same way when the lights are on; it simply loses the counterfactual sensitivity: it could no longer have gone another way. But why would extra lights on the process deprive it of consciousness? The outputs will be exactly the same, any behaviour of the conscious entity will be the same, and so on. Nothing about the fluidity or coherence of the process changes. Yet if we say it remains conscious, we have to give up the idea that counterfactual sensitivity makes a difference, and then we’re back in difficulty.

What do we say to that? Muhlestein ultimately concludes that there is no satisfactory way out short of accepting that the assumptions are wrong and that computational processes are not sufficient for consciousness.

Old Peter (the version of myself that existed a couple of years ago) thought the problem was really about appropriate causality, though I don’t think he explained himself very cogently. He would rightly have said that bringing counterfactuals into it is dangerous stuff because they are a metaphysical and logical minefield. For him the question would be: do the successive states of the computation cause each other in appropriate ways or do they not? If they do, we may have a conscious process; if the causal relations are disrupted or indirect, we’re merely waving flags or playing with puppets. So in his eyes the absence of counterfactual sensitivity in Muhlestein’s example is not decisive and his lights do not remove the consciousness unless they disrupt the underlying causality of the computation: unless they make a difference, in short. Causal over-determination of the process is irrelevant. The problem for Old Peter is in explaining what kinds of causal relations between states have to obtain for the conscious computation to work. His intuition told him that in real processes, including those in the brain, each event causes the next directly, whereas in simulations the causal path is through a model or a script. Unfortunately we could well argue that this kind of indirect causal path is also typical of computation, leading us back to Muhlestein’s conclusion by a different route. Myself, I’m no longer completely sure either that it is a matter of indirect causality or that computational processes are necessarily of that kind.

For my part, I still regard counterfactual sensitivity with suspicion, but I would be more inclined to say that what we’re missing is the element of intentionality; before we can complete the analysis of the problem we need to know what it is that endows a brain process with meaning and aboutness. The snag with that strategy is that we don’t know.

All in all, I found it interesting to look back a bit. It seems clear to me that over the years this site has been up I have made some distinct progress with consciousness: every year I know a little less about it.