The Consciousness Meter

It has been reported in various places recently that Giulio Tononi is developing a consciousness meter. I think this all stems from a New York Times article by the excellent Carl Zimmer where, to be tediously accurate, Tononi said “The theory has to be developed a bit more before I worry about what’s the best consciousness meter you could develop.” Wired discussed the ethical implications of such a meter, suggesting it could be problematic for those who espouse euthanasia but reject abortion.

I think a casual reader could be forgiven for dismissing this talk of a consciousness meter. Over the last few years there have been regular reports of scientific mind-reading: usually what it amounts to is that the subject has been asked to think of x while undergoing a scan; then, having recorded the characteristic pattern of activity, the researchers have been able to spot from scans with passable accuracy the cases where the subject is thinking of x rather than y or z. In all cases the ability to spot thoughts about x is confined to a single individual on a single occasion, with no suggestion that the researchers could identify thoughts of x in anyone else, or even in the same individual a day later. This is still a notable achievement; it resembles (I can’t remember who originally said this) working out what’s going on in town by observing the pattern of lights from an orbiting spaceship; but it falls a long way short of mind-reading.

But in Tononi’s case we’re dealing with something far more sophisticated. A few months ago we discussed Tononi’s Integrated Information Theory (IIT), which holds that consciousness is a graduated phenomenon corresponding to Phi, a measure of the quantity of information a system integrates. If true, the theory would provide a reasonable basis for assessing levels of consciousness, and might indeed conceivably lead to something that could be called a consciousness meter; although it seems likely that measuring the level of integration of information would provide a good rule-of-thumb measure of consciousness even if in fact that wasn’t what constituted consciousness. There are some reasons to be doubtful about Tononi’s theory: wouldn’t contemplating a very complex object lead to a lot of integration of information? Would that mean you were more conscious? Is someone gazing at the ceiling of the Sistine Chapel necessarily more conscious than someone in a whitewashed cell?
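To make ‘quantity of information integrated’ a little more concrete, here is a toy sketch of my own devising, not Tononi’s actual measure (real Phi is defined over the cause-effect structure of a system’s dynamics and its minimum information partition, which is considerably more involved). The toy version scores a joint distribution over binary units by the minimum, across all ways of cutting the system in two, of the mutual information between the halves: how much information the worst cut would destroy.

```python
from itertools import combinations
from math import log2

def marginal(p, indices):
    """Marginal distribution over the variables at the given indices."""
    out = {}
    for state, prob in p.items():
        key = tuple(state[i] for i in indices)
        out[key] = out.get(key, 0.0) + prob
    return out

def mutual_information(p, part_a, part_b):
    """Mutual information (in bits) between two groups of variables."""
    pa, pb = marginal(p, part_a), marginal(p, part_b)
    mi = 0.0
    for state, prob in p.items():
        if prob > 0:
            ka = tuple(state[i] for i in part_a)
            kb = tuple(state[i] for i in part_b)
            mi += prob * log2(prob / (pa[ka] * pb[kb]))
    return mi

def toy_phi(p, n):
    """Minimum mutual information over all bipartitions of n variables."""
    best = float("inf")
    for r in range(1, n // 2 + 1):
        for part_a in combinations(range(n), r):
            part_b = tuple(i for i in range(n) if i not in part_a)
            best = min(best, mutual_information(p, part_a, part_b))
    return best

# Two perfectly correlated bits: any cut destroys one bit of information.
correlated = {(0, 0): 0.5, (1, 1): 0.5}
# Two independent bits: any cut destroys nothing.
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

print(toy_phi(correlated, 2))   # 1.0
print(toy_phi(independent, 2))  # 0.0
```

The toy captures only the bare intuition that integration means no part of the system can be cut off without losing information; whether any such number measures consciousness is exactly the question at issue.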

Tononi has in fact gone much further than this: in a paper with David Balduzzi he suggested the notion of qualia space. The idea here is that unique patterns of neuronal activation define unique subjective experiences. There is some sophisticated maths going on here to define qualia space, well beyond my comprehension; yet I feel confident that it’s all misguided. In the first place, qualia are not patterns of neuronal activation; the word was defined precisely to identify those aspects of experience which are over and above simple physics; the canonical thought experiment of Mary the colour scientist is meant to tell us that whatever qualia are, they are not information. You may want to reject that view; you may want to say that in the end qualia are just aspects of neuron firing; but you can’t have that conclusion as an assumption. To take it as such is like writing an alchemical text which begins: “OK, so this lead is gold; now here are some really neat ways to shape it up into ingots”.

And alas, that’s not all. The idea of qualia space, if I’ve understood it correctly, rests on the idea that subjective experience can be reduced to combinations of activation along a number of different axes. We know that colour can be reduced to the combination of three independent values (though experienced colour is of course a large can of worms which I will not open here); maybe experience as a whole just needs more scales of value. Well, probably not. Many people have tried to reduce the scope of human thought to an orderly categorisation: encyclopaedias, Dewey’s decimal classification, and the international customs tariff, to name but three; and it never works without capacious ‘other’ categories. I mean, read Borges, dude:

I have registered the arbitrarities of Wilkins, of the unknown (or false) Chinese encyclopaedia writer and of the Bibliographic Institute of Brussels; it is clear that there is no classification of the Universe not being arbitrary and full of conjectures. The reason for this is very simple: we do not know what thing the universe is. “The world – David Hume writes – is perhaps the rudimentary sketch of a childish god, who left it half done, ashamed by his deficient work; it is created by a subordinate god, at whom the superior gods laugh; it is the confused production of a decrepit and retiring divinity, who has already died” (‘Dialogues Concerning Natural Religion’, V. 1779). We are allowed to go further; we can suspect that there is no universe in the organic, unifying sense of this ambitious term. If there is a universe, its aim is not conjectured yet; we have not yet conjectured the words, the definitions, the etymologies, the synonyms, from the secret dictionary of God.

The metaphor of ‘x-space’ is only useful where you can guarantee that the interesting features of x are exhausted and exemplified by linear relationships; and that’s not the case with experience.  Think of a large digital TV screen: we can easily define a space of all possible pictures by simply mapping out all possible values of each pixel. Does that exhaust television? Does it even tell us anything useful about the relationship of one picture to another? Does the set of frames from Coronation Street describe an intelligible trajectory through screen space? I may be missing the point, but it seems to me it’s not that simple.
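For what it’s worth, the screen-space point can be made with a few lines of arithmetic (the screen dimensions and the tiny sample images below are illustrative numbers of my own):

```python
import math

# The space of all possible pictures is trivial to define and
# astronomically large: (values per pixel) ** (number of pixels).
width, height, bits_per_pixel = 1920, 1080, 24   # illustrative numbers
log10_pictures = width * height * bits_per_pixel * math.log10(2)
print(f"roughly 10^{log10_pictures:.0f} possible pictures")

# But per-pixel distance in that space tracks meaning badly.
def pixel_distance(img_a, img_b):
    """Sum of absolute per-pixel differences between two images."""
    return sum(abs(a - b) for a, b in zip(img_a, img_b))

img = [200, 10, 30, 255]                 # a tiny four-pixel greyscale 'image'
negative = [255 - v for v in img]        # same scene, tones inverted
tweaked = [max(0, v - 1) for v in img]   # visually indistinguishable

print(pixel_distance(img, negative))  # 830: huge distance, same scene
print(pixel_distance(img, tweaked))   # 4: tiny distance
```

A picture and its negative sit almost maximally far apart in pixel space while depicting the same scene, and an imperceptible tweak sits right next door; nearness in pixel values and nearness in content simply come apart, which is why the frames of Coronation Street trace no intelligible path through the space.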

Unconscious Free Will?

Here’s an interesting piece by Neil Levy from a few months back, on Does Consciousness Matter? (I only came across it because of its nomination for a 3QD prize). Actually the topic is a little narrower than the title suggests; it asks whether consciousness is essential for free will and moral responsibility (it being assumed that the two are different facets of the same thing; I’m comfortable with that, but some might challenge it). Neil notes that people typically suppose Libet’s findings – which suggest our decisions are made before we are aware of them – make free will impossible.

Neil is not actually all that worried by Libet: the impulses from the event of intention formation ought to be registered later than the event itself in any case, he says; so Libet’s results are exactly what we should have expected. Again, I’m inclined to agree: making a conscious decision is one thing;  the second-order business of being conscious of that conscious decision naturally follows slightly later.  (Some, of course, would take a very different view here too; some indeed would say that the second-order awareness is actually what constitutes consciousness.)

Neil particularly addresses two arguments. One says that consciousness is important because only those objectives that are consciously entertained reflect the quality of my will; if I’m not aware that I’m hitting you, I can’t be morally responsible for the blows.  Neil feels this is a question-begging response which just assumes that conscious awareness is essential; I think he’s perhaps a bit over-critical, but of course if we can get a more fully-worked answer, so much the better.

He prefers a slightly different argument, which says that factors we were not conscious of cannot influence our deliberations about some act, and hence we can only be held responsible for acts consciously chosen. George Sher, it seems, rejected this argument on the grounds that our actions are influenced by unconscious factors; but Neil counters that although unconscious factors certainly influence our behaviour, we have no opportunity to consider them, which is the critical point.

Personally, I would say that agency is inherently a conscious matter because it requires intentionality: in order to form an intention we have to hold in mind (in some sense) an objective, and that ‘aboutness’ is intentionality. Original intentionality is unique to consciousness – in fact you could argue that it’s constitutive of consciousness if you believe all consciousness is consciousness of something – though I myself wouldn’t go quite so far.

But what about those unconscious factors? Subconscious factors would seem to possess intentionality as well as conscious ones, if Freud is to be believed: I don’t see how I could hold a subconscious desire to kill my father and marry my mother without those desires being about my parents. Neil would argue that this isn’t relevant because we can’t endorse, question or reject motives outside consciousness – but how does he know that? If there’s unconscious endorsement, questioning and so on going on, he wouldn’t know, would he? It could be that the unconscious plays Hyde to the Jekyll of conscious thought, with plans and projects that are in its own terms no less complex or rational than conscious ones.

I think Neil is right: but it doesn’t seem securely proved in the way he was hoping for. The unconscious part of our minds never gets the chance to set down an explanation of its behaviour, after all: it could still in principle be the case that conscious rationalising is privileged in the literature and in ordinary discourse mainly because it’s the conscious part of the brain that does the talking and writes the blogs…