The analogy with a digital computer has energised and strongly influenced our thinking about the human mind for at least sixty years, beginning with Turing’s seminal paper of 1950, ‘Computing machinery and intelligence’, and gaining in influence as computers became first real, and then ubiquitous. Whether or not you like the analogy, I think you’d have to concede that it has often set the terms of the discussion over recent decades. Yet we’ve never got it quite clear, and in some respects we’ve almost always got it wrong.
In particular, I’d like to suggest: consciousness is an output, not processing.
At first sight it might seem that consciousness can’t be an output, on the simple grounds that it isn’t, well, put out. Our consciousness is internal, it goes on in our heads – how could that be an output? I don’t, of course, mean it’s an output in that literal sense of being physically emitted: rather, I mean it’s the final product of a process, in this case a mental process. It may often be retained in our heads, but in some sense it’s the end of the line, the result.
It may be worth noting in passing that consciousness is pretty strongly linked with outputs in the simpler sense, though: so much so that the Turing test is based entirely on the ability of the testee to output strings of characters which gain the approval of the judges. Quality of output is taken to be the best possible sign of the presence of consciousness.
Wait a minute, you may say: consciousness isn’t a final output, it’s surely part of the process: what goes on in our conscious mind feeds back into our further thoughts and our behaviour. That’s the whole point of it, surely: to allow more complex and detached forms of processing to take place, so that our true outputs in behaviour will eventually be better planned and targeted?
It’s true that the contents of consciousness may feed back into our mental processes, and that must be at least partly why it exists (its role in forming genuine verbal outputs is probably significant too) – I’m not suggesting consciousness is a mere epiphenomenon, like, as they say, the whistle on a train. Items from consciousness may be inputs as well as outputs. To take an unarguable example, I’ve never managed to remember how many days there are in each month: but I have managed to remember that little rhyme which contains the information. So if I need to know how many days there are in August, I recall the rhyme and repeat it to myself: in this case the contents of my consciousness are helpfully fed back into my mind. Apart from clunky manoeuvres of this kind, though, I think careful introspection suggests consciousness does not feed directly back into the underlying mental processes all that often. If we want to make a decision we may hold the alternatives in mind and present them to ourselves in sequence, but what we’re waiting for is a feeling or a salient piece of reasoning to pop into our minds from some lower, essentially inscrutable process: we’re not normally putting our own thoughts on the subject together by hand. I think Fodor once said he had no conscious access to the mental processes which produced his views on any philosophical issue: if he inspected the contents of his mind while cogitating about a particular problem, all he came up with were sub-articulate thoughts approximately like ‘Come on, Jerry!’ I feel much the same.
With apologies if I’m repeating things I’ve said before, I think it may help if I mention some of the confusions that I think arise from not recognising the output nature of consciousness. A striking example is Dennett’s odd view that consciousness might involve a serial computer simulated on a parallel machine. We know, of course, that when people speak of the brain being ‘massively parallel’ they usually mean that many different functional areas are promiscuously interconnected, something radically different from massively parallel computing in the original sense of a carefully managed set of isolated processes; but Dennett seems to be motivated by an additional misunderstanding in which it is assumed that only a serial process can give rise to a coherent serial consciousness. Not at all: the outputs from parallel and serial processing are identical (they’d better be): it’s just that the parallel approach sometimes gets there quicker.
It’s a little unfair to single out Dennett: the same assumption, that properties of the underlying process must also be properties of the output consciousness, can be discerned elsewhere; it’s just that Dennett is clearer than most. Another striking example might be Libet’s notorious finding that consciousness of a decision arrives some time after the decision itself – but of course it does! The decision is an event in the processes of which consciousness is the output.
It’s hard to see consciousness as an output, partly because it can also be an input, but also because we identify ourselves with our thoughts. We want to believe that we ourselves enjoy agency, that we have causal effects, and so we’re inclined to believe that our thoughts are what does the trick – although we know quite well that when we move our arm it’s not thinking about it that makes it happen. This supposed identity of thoughts and self (after all, it’s because I think, that I am, isn’t it?) is so strong that some, failing to find in their thoughts anything but fleeting bundles of momentary impressions, have concluded there is no self after all. I think that level of scepticism is unwarranted: it’s just that our selves remain inscrutable, hidden from direct conscious observation. ‘Know thyself’, said the inscription on the temple of the Delphic oracle – alas, ultimately we can’t.