Unconscious and Conscious

What if consciousness is just a product of our non-conscious brain, ask Peter Halligan and David A Oakley? But how could it be anything else? If we take consciousness to be a product of brain processes at all, it can only come from non-conscious ones. If consciousness had to come from processes that were themselves already conscious, we should have a bit of a problem on our hands. Granted, it is in one sense remarkable that the vivid light of conscious experience comes from stuff that is at some level essentially just simple physical matter – it might even be that incredulity over that is an important motivator for panpsychists, who find it easier to believe that everything is at least a bit conscious. But most of us take that gap to be the evident core fact that we’re mainly trying to explain when we debate consciousness.

Of course, Halligan and Oakley mean something more than that. Their real point is a sceptical, epiphenomenalist one; that is, they believe consciousness is causally ineffective. All the decisions are really made by unconscious processes; consciousness notices what is happening and, like an existentialist, claims the act as its own after the fact (though of course the existentialist does not do it automatically). There is of course a lot of evidence that our behaviour is frequently influenced by factors we are not consciously aware of, and that we happily make up reasons for what we have done which are not the true ones. But ‘frequently’ is not ‘invariably’, and in fact there seem to be plenty of cases where, for example, our conscious understanding of a written text really does change our behaviour and our emotional states. I would find it pretty hard to think that my understanding of an email with important news from a friend was somehow not conscious, or that my conscious comprehension was irrelevant to my subsequent behaviour. That is the kind of unlikely stuff that the behaviourists ultimately failed to sell us. Halligan and Oakley want to go quite a way down that same unpromising path, suggesting that it is actually unhelpful to draw a sharp distinction between conscious and unconscious. They will allow a kind of continuum, but to me it seems clear that there is a pretty sharp distinction between the conscious, detached plans of human beings and the instinct-driven, environment-controlled behaviour of animals, one it is unhelpful to blur or ignore.

One distinction that I think would be helpful here is between conscious and what I’ll call self-conscious states. If I make a self-conscious decision I’m aware of making it; but I can also just make the decision; in fact, I can just act. In my view, cases where I just make the decision in that unreflecting way may still be conscious; but I suspect that Halligan and Oakley (like Higher Order theorists) accept only self-conscious decisions as properly conscious ones.

It is interesting to compare the case put by Peter Carruthers in Scientific American recently; he argues that the whole idea of conscious thought is an error. He introduces the useful idea of the Global Workspace proposed by Bernard Baars and others: a place where data from different senses can be juggled and combined. To be in the workspace is to be among the contents of consciousness, but Carruthers believes only sensory items get in there. He’s happy to allow bits of ‘inner speech’ or mental visualisation, deceptive items that mislead us about our own thoughts; but he won’t allow completely abstract stuff (again, you may see some resemblance to the ‘mentalistic’ entities disallowed by the behaviourists). I don’t really know why not; if abstractions never enter consciousness or the workspace, how is it that we’re even talking about them?
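For concreteness, here is a minimal toy sketch in Python of the sort of architecture the Global Workspace idea gestures at, with Carruthers’s sensory-only restriction bolted on. The class names, modality tags, salience scores and capacity limit are all my own illustrative assumptions, not details of Baars’s theory or of Carruthers’s argument.

```python
# Toy sketch of a Global-Workspace-style architecture (illustrative only).
# Items from different modules compete for entry to a shared workspace;
# following Carruthers's restriction, only sensory-format items are admitted.

from dataclasses import dataclass, field

@dataclass
class Item:
    content: str
    modality: str      # e.g. "visual", "auditory", "inner_speech", "abstract"
    salience: float    # how strongly the item competes for access

SENSORY_MODALITIES = {"visual", "auditory", "inner_speech", "imagery"}

@dataclass
class Workspace:
    capacity: int = 3
    contents: list = field(default_factory=list)

    def broadcast(self, candidates):
        """Admit the most salient sensory-format items, up to capacity."""
        admissible = [c for c in candidates if c.modality in SENSORY_MODALITIES]
        winners = sorted(admissible, key=lambda c: c.salience, reverse=True)
        self.contents = winners[: self.capacity]
        return self.contents

ws = Workspace()
conscious_now = ws.broadcast([
    Item("red ball approaching", "visual", 0.9),
    Item("'hit it cross-court'", "inner_speech", 0.7),
    Item("the proposition that 2+2=4", "abstract", 0.8),  # excluded on Carruthers's view
])
print([i.content for i in conscious_now])
```

The filter that turns away the abstract item is, of course, precisely the stipulation I find puzzling.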

Carruthers thinks our ‘Theory Of Mind’ faculty misleads us; as a quick heuristic it’s best to assume that others know their own mental states accurately, and so it’s natural for us to think that we do too: that we have direct access and cannot be wrong about whether we, for example, feel hungry. But he thinks we know much less than we suppose about our own motives and mental states. On this he seems a little more moderate than Halligan and Oakley, allowing that conscious reflection can sometimes affect what we do.

I think the truth is that our mental, and even our conscious, processes are in fact much more complex and multi-layered than these discussions suggest. Let’s consider the causal efficacy of conscious thought in simple arithmetic. When I add two to two in my head and get four, did the conscious thoughts about the sum cause the conscious thought of the answer, or was there an underlying non-conscious process which simply waved flags at a couple of points?

Well, I certainly can do the thing epiphenomenally. I can call up a mental picture of the written characters, for example, and step through them one by one. In that case the images do not directly cause each other. If I mentally visualise two balls and then another two balls and then mentally count them, perhaps that is somewhat different? Can I think of two abstractly and then notice conceptually that its reduplication is identical with the entity ‘four’? Carruthers would deny it, I think, but I’m not quite sure. If I can, what causal chain is operating? At this point it becomes clear that I really have little idea of how I normally do arithmetic, which I suppose scores a point for the sceptics. The case of two plus two being four is perhaps a bad example, because it is so thoroughly remembered that I simply replay it, verbally, visually, implicitly, abstractly or however I do it. What if I were multiplying 364 by 5? The introspective truth seems to be that I do something akin to running an algorithm by hand. I split the calculation into separate multiplications, whose answers I mainly draw directly from memory, and then I try to hold the partial results in mind and add them, again usually relying on remembered sums. Does my thinking of four times five and recalling that the answer is twenty mean there was a causal link between the conscious thought and the conscious result? I think there may be such a link, but frustratingly, if I use my brain in a slightly different way, there may not be a direct one, or there may be a direct one which is not of the appropriate kind (because, say, the causal link is direct but irrelevant to the mathematical truth of the conclusion).
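Spelled out as an explicit procedure, what I seem to be doing looks like schoolbook long multiplication, with the single-digit products treated as remembered facts rather than fresh computations. A rough Python rendering of that hand-run procedure might look like this; it is a sketch of the routine as introspection reports it, not a claim about how the brain implements it.

```python
# Rough rendering of doing 364 x 5 "by hand": single-digit products are
# looked up in a memorised table, then the partial results are shifted and added.

TIMES_TABLE = {(a, b): a * b for a in range(10) for b in range(10)}  # "remembered" facts

def multiply_by_hand(n: int, digit: int) -> int:
    total, place = 0, 1
    while n > 0:
        n, d = divmod(n, 10)
        partial = TIMES_TABLE[(d, digit)]   # recalled, not computed afresh
        total += partial * place            # shift into the right column and add
        place *= 10
    return total

print(multiply_by_hand(364, 5))  # 1820
```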

Having done all that, I realise that since I’m multiplying by five, I could have simply multiplied by ten, which can be done by adding a zero (is that done visually? Is it necessarily done visually? Some people cannot conjure up mental images at all), and then halved the result. Where did that little tactic come from? Did I think of it consciously, and was its arrival in reportable condition causally derived from my wondering about how best to do the sum (in words, or not in words?), or was it thrust into a kind of mental in-tray (rather spookily) by an unconscious part of my brain which had been vaguely searching around for any good tricks for the last couple of minutes? Unfortunately I think it could have happened in any one of a dozen different ways, some of which probably involve causally effective conscious states while others may not.
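Whatever route the trick took into consciousness, it is easy enough to state explicitly: multiplying by five is the same as appending a zero and then halving. In Python, again purely as an illustration of the arithmetic rather than of any mental process:

```python
# The "times five" shortcut: append a zero (i.e. multiply by ten), then halve.
def times_five_shortcut(n: int) -> int:
    return (n * 10) // 2

assert times_five_shortcut(364) == 364 * 5 == 1820
print(times_five_shortcut(364))  # 1820
```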

In the end the biggest difference between me and the sceptics may come down to what we are prepared to call conscious; they will only count the states I’m calling self-conscious. Suppose we take it that there is indeed a metaphorical or functional space where mental items become ‘available’ (in some sense I leave unclarified) to influence our behaviour. It could indeed be a Global Workspace, but it need not commit us to all the details of that theory. Then they allow at most those items actually within the space to be conscious, while I would allow anything capable of entering it. My intuition here is that the true borderline falls, not between those mental items I’m aware of and those I merely have, but between those I could become aware of if my attention were suitably directed, and those I could never extract from the regions where they are processed, however I thought about it. When I play tennis, I may consciously plan a strategy, but I also consciously choose where to send the ball on the spur of the moment; that is not normally a self-conscious decision, but if I stopped to review it, it could easily become one – whereas the murky Freudian reasons why I particularly want to win the game cannot be easily accessed (without lengthy therapy at any rate) and the processes my visual cortex used to work out the ball’s position are forever denied me.

My New Year resolution is to give up introspection before I plunge into some neo-Humean abyss of self-doubt.