A Human Phenomenology Project

We don’t know what we think, according to Alex Rosenberg in the NYT. It’s a piece of two halves, in my opinion; he starts with a pretty fair summary of the sceptical case. It has often been held that we have privileged knowledge of our own thoughts and feelings, and indeed of our own decisions; but the findings of Benjamin Libet about decisions being made before we are aware of them, the phenomenon of blindsight, which shows we may go on having visual knowledge we’re not aware of, and many other cases where motives are demonstrably confabulated and mental content is inaccessible to the conscious, reporting mind, all go to show that things are much more complex than we might have thought, and that our thoughts are not, as it were, self-illuminating. Rosenberg plausibly suggests that we use on ourselves the kind of tools we use to work out what other people are thinking; but then he seems to make a radical leap to the conclusion that there is nothing else going on.

Our access to our own thoughts is just as indirect and fallible as our access to the thoughts of other people. We have no privileged access to our own minds. If our thoughts give the real meaning of our actions, our words, our lives, then we can’t ever be sure what we say or do, or for that matter, what we think or why we think it.

That seems to be going too far.  How could we ever play ‘I spy’ if we didn’t have any privileged access to private thoughts?

“I spy, with my little eye, something beginning with ‘c'”
“Is it ‘chair’?”
“I don’t know – is it?”

It’s more than possible that Rosenberg’s argument has suffered badly from editing (philosophical discussion, even in a newspaper piece, seems peculiarly information-dense; often you can’t lose much of it without damaging the content badly). But it looks as if he’s done what I think of as an ‘OMG bounce’; a kind of argumentative leap which crops up elsewhere. Sometimes we experience illusions:  OMG, our senses never tell us anything about the real world at all! There are problems with the justification of true belief: OMG there is no such thing as knowledge! Or in this case: sometimes we’re wrong about why we did things: OMG, we have no direct access to our own thoughts!

There are in fact several different reasons why we might claim that our thoughts about our thoughts are immune to error. In the game of ‘I spy’, my nominating ‘chair’ just makes it my choice; the content of my thought is established by a kind of fiat. In the case of a pain in my toe, I might argue I can’t be wrong because a pain can’t be false: it has no propositional content, it just is. Or I might argue that certain of my thoughts are unmediated; there’s no gap between them and me where error could creep in, the way it creeps in during the process of interpreting sensory impressions.

Still, it’s undeniable that in some cases we can be shown to adopt false rationales for our behaviour; sometimes we think we know why we said something, but we don’t. By contrast, I think I have occasionally, when very tired, had the experience of hearing coherent and broadly relevant speech come out of my own mouth without it seeming to come from my conscious mind at all. Contemplating this kind of thing does undoubtedly promote scepticism, but what it ought to promote is a keener awareness of the complexity of human mental experience: many-layered, explicit to greater or lesser degrees, partly attended to, partly in a sort of half-light of awareness… There seem to be unconscious impulses; conscious but inexplicit thought; definite thought (which may even be in recordable words); self-conscious thought of the kind where we are aware of thinking while we think… and that is at best the broadest outline of some of the larger architecture.

All of this really needs a systematic and authoritative investigation. Of course, since Plato there have been models of the structure of the mind, separating conscious from unconscious, or id, ego and superego; philosophers of mind have run up various theories, usually to suit their own needs of the moment; and modern neurology increasingly provides good clues about how various mental functions are hosted and performed. But a proper mainstream conception of the structure and phenomenology of thought itself seems sadly lacking to me. Is this an area where we could get funding for a major research effort: a Human Phenomenology Project?

It can hardly be doubted that there are things to discover. Recently we were told, if not quite for the first time, that a substantial minority of people have no mental images (although at once we notice that there even seem to be different ways of having mental images). A systematic investigation might reveal that just as we have four blood groups, there are four (or seven) different ways the human mind can work. What if it turned out that consciousness is not a single consistent phenomenon, but a family of four different ones, and that the four tribes have been talking past each other all this time…?

The Consciousness Meter

It has been reported in various places recently that Giulio Tononi is developing a consciousness meter. I think this all stems from a New York Times article by the excellent Carl Zimmer where, to be tediously accurate, Tononi said “The theory has to be developed a bit more before I worry about what’s the best consciousness meter you could develop.” Wired discussed the ethical implications of such a meter, suggesting it could be problematic for those who espouse euthanasia but reject abortion.

I think a casual reader could be forgiven for dismissing this talk of a consciousness meter. Over the last few years there have been regular reports of scientific mind-reading: usually what it amounts to is that the subject has been asked to think of x while undergoing a scan; then, having recorded the characteristic pattern of activity, the researchers have been able to spot from scans with passable accuracy the cases where the subject is thinking of x rather than y or z. In all cases the ability to spot thoughts about x is confined to a single individual on a single occasion, with no suggestion that the researchers could identify thoughts of x in anyone else, or even in the same individual a day later. This is still a notable achievement; it resembles (I can’t remember who originally said this) working out what’s going on in town by observing the pattern of lights from an orbiting spaceship; but it falls a long way short of mind-reading.
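Just to make that limitation concrete, here is a rough sketch of what such within-subject decoding amounts to. The data is entirely synthetic, and the trial counts, voxel numbers and choice of classifier are my own assumptions rather than any study’s actual protocol: fit a classifier to one person’s recorded activity patterns and test it on held-out scans from the same session.

```python
# Illustrative sketch only: within-subject "mind-reading" as pattern classification.
# All data here is synthetic; the dimensions and classifier are assumptions, not any
# real study's protocol.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_voxels = 60, 500           # hypothetical scan trials and voxel count
labels = rng.integers(0, 2, n_trials)  # 0 = thinking of x, 1 = thinking of y

# Synthetic "activity patterns": noise plus a faint label-dependent signature,
# standing in for one subject's recorded scans on one occasion.
signature = rng.normal(0, 1, n_voxels)
scans = rng.normal(0, 1, (n_trials, n_voxels)) + 0.1 * np.outer(labels, signature)

# Train and test on the same subject's data from the same session; the accuracy
# is "passable" for x-versus-y, and says nothing about anyone else's brain.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, scans, labels, cv=5)
print(f"within-subject decoding accuracy: {scores.mean():.2f}")
```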

But in Tononi’s case we’re dealing with something far more sophisticated. A few months ago we discussed Tononi’s Integrated Information Theory (IIT), which holds that consciousness is a graduated phenomenon corresponding to Phi, the quantity of information integrated. If true, the theory would provide a reasonable basis for assessing levels of consciousness, and might indeed conceivably lead to something that could be called a consciousness meter; although it seems likely that measuring the level of integration of information would provide a good rule-of-thumb measure of consciousness even if in fact that wasn’t what constituted consciousness. There are some reasons to be doubtful about Tononi’s theory: wouldn’t contemplating a very complex object lead to a lot of integration of information? Would that mean you were more conscious? Is someone gazing at the ceiling of the Sistine Chapel necessarily more conscious than someone in a whitewashed cell?
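To give a flavour of what ‘information integrated’ means (and only a flavour: this toy uses total correlation over an invented pair of binary units, which is nowhere near IIT’s actual Phi, defined over cause-effect structures and minimum-information partitions), here is the basic idea that a coupled whole carries constraints which the parts, considered separately, do not:

```python
# A crude toy, not Tononi's actual Phi: "integration" here is just total correlation,
# the gap between the summed entropies of the parts and the entropy of the whole.
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability array (zero terms ignored)."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical joint distributions over two binary units, indexed [state_a, state_b].
independent = np.array([[0.25, 0.25],
                        [0.25, 0.25]])   # the units tell you nothing about each other
coupled     = np.array([[0.45, 0.05],
                        [0.05, 0.45]])   # the units strongly constrain each other

def total_correlation(joint):
    marg_a = joint.sum(axis=1)   # marginal distribution of unit A
    marg_b = joint.sum(axis=0)   # marginal distribution of unit B
    return entropy(marg_a) + entropy(marg_b) - entropy(joint.ravel())

print(total_correlation(independent))  # 0.0 bits: no integration at all
print(total_correlation(coupled))      # > 0 bits: the whole is not just the sum of its parts
```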

Tononi has in fact gone much further than this: in a paper with David Balduzzi he suggested the notion of qualia space. The idea here is that unique patterns of neuronal activation define unique subjective experiences. There is some sophisticated maths going on here to define qualia space, well beyond my comprehension; yet I feel confident that it’s all misguided. In the first place, qualia are not patterns of neuronal activation; the word was defined precisely to identify those aspects of experience which are over and above simple physics; the defining thought experiment of Mary the colour scientist is meant to tell us that whatever qualia are, they are not information. You may want to reject that view; you may want to say that in the end qualia are just aspects of neuron firing; but you can’t have that conclusion as an assumption. To take it as such is like writing an alchemical text which begins: “OK, so this lead is gold; now here are some really neat ways to shape it up into ingots”.

And alas, that’s not all. The idea of qualia space, if I’ve understood it correctly, rests on the idea that subjective experience can be reduced to combinations of activation along a number of different axes. We know that colour can be reduced to the combination of three independent values (though experienced colour is of course a large can of worms which I will not open here); maybe experience as a whole just needs more scales of value. Well, probably not. Many people have tried to reduce the scope of human thought to an orderly categorisation: encyclopaedias; Dewey’s decimal index; and the international customs tariff, to name but three; and it never works without capacious ‘other’ categories. I mean, read Borges, dude:

I have registered the arbitrarities of Wilkins, of the unknown (or false) Chinese encyclopaedia writer and of the Bibliographic Institute of Brussels; it is clear that there is no classification of the Universe not being arbitrary and full of conjectures. The reason for this is very simple: we do not know what thing the universe is. “The world – David Hume writes – is perhaps the rudimentary sketch of a childish god, who left it half done, ashamed by his deficient work; it is created by a subordinate god, at whom the superior gods laugh; it is the confused production of a decrepit and retiring divinity, who has already died” (‘Dialogues Concerning Natural Religion’, V. 1779). We are allowed to go further; we can suspect that there is no universe in the organic, unifying sense of this ambitious term. If there is a universe, its aim is not conjectured yet; we have not yet conjectured the words, the definitions, the etymologies, the synonyms, from the secret dictionary of God.

The metaphor of ‘x-space’ is only useful where you can guarantee that the interesting features of x are exhausted and exemplified by linear relationships; and that’s not the case with experience.  Think of a large digital TV screen: we can easily define a space of all possible pictures by simply mapping out all possible values of each pixel. Does that exhaust television? Does it even tell us anything useful about the relationship of one picture to another? Does the set of frames from Coronation Street describe an intelligible trajectory through screen space? I may be missing the point, but it seems to me it’s not that simple.
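As a back-of-the-envelope illustration (the screen dimensions and bit depth below are ordinary assumed figures, not anything from Tononi’s paper), the size of ‘screen space’ is easy enough to write down, and writing it down is about all it tells you:

```python
# "Screen space": the set of all possible frames is perfectly well defined, and that
# is roughly all we learn from defining it. Dimensions and bit depth are assumptions.
width, height, bits_per_pixel = 1920, 1080, 24

pixels = width * height
log2_frames = pixels * bits_per_pixel   # log2 of the number of possible frames
print(f"{pixels:,} pixels, 2^{log2_frames:,} possible frames")

# Nearest neighbours in this space are frames differing in a single pixel value:
# visually indistinguishable, and silent on whether either frame means anything.
```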