Illusionism

Consciousness – it’s all been a terrible mistake. In a really cracking issue of the JCS (possibly the best I’ve read) Keith Frankish sets out and defends the thesis of illusionism, with a splendid array of responses from supporters and others.

How can consciousness be an illusion? Surely an illusion is itself a conscious state – a deceptive one – so that the reality of consciousness is a precondition of anything being an illusion? Illusionism, of course, is not talking about the practical, content-bearing kind of consciousness, but about phenomenal consciousness, qualia, the subjective side, what it is like to see something. Illusionism denies that our experiences have the phenomenal aspect they seem to have; it is in essence a sceptical case about phenomenal experience. It aims to replace the question of what phenomenal experience is, with the question of why people have the illusion of phenomenal experience.

In one way I wonder whether it isn’t better to stick with raw scepticism than frame the whole thing in terms of an illusion. There is a danger that the illusion itself becomes a new topic and inadvertently builds the confusion further. One reason the whole issue is so difficult is that it’s hard to see one’s way through the dense thicket of clarifications thrown up by philosophers, all demanding to be addressed and straightened out. There’s something to be said for the bracing elegance of the two-word formulation of scepticism offered by Dennett (who provides a robustly supportive response to illusionism here, treating it as the default case) – ‘What qualia?’. Perhaps we should just listen to the ‘tales of the qualophiles’ – there is something it is like, Mary knows something new, I could have a zombie twin – and just say a plain ‘no’ to all of them. If we do that, the champions of phenomenal experience have nothing to offer; all they can do is, as Pete Mandik puts it here, gesture towards phenomenal properties. (My imagination whimpers in fear at being asked to construe the space in which one might gesture towards phenomenal qualities, let alone the ineffable limb with which the movement might be performed; it insists that we fall back on Mandik’s other description: that phenomenalists can only invite an act of inner ostension.)

Eric Schwitzgebel relies on something like this gesturing in his espousal of definition by example as a means of getting the innocent conception of phenomenal experience he wants without embracing the dubious aspects. Mandik amusingly and cogently assails the scepticism of the illusionist case from an even more radical scepticism – meta-illusionism. Sceptics argue that phenomenalism can’t be specified meaningfully (we just circle around a small group of phrases and words that provide a set of synonyms with no definition outside the loop), but if that’s true how do we even start talking about it? Whereof we cannot speak…

Introspection is certainly the name of the game, and Susan Blackmore has a nifty argument here; perhaps it’s the very act of introspecting that creates the phenomenal qualities? Her delusionism tells us we are wrong to think that there is a continuous stream of conscious experience going on in the absence of introspection, but stops short of outright scepticism about the phenomenal. I’m not sure. William James told us that introspection must be retrospection – we can only mentally examine the thought we just had, not the one we are having now – and it seems odd to me to think that a remembered state could be given a phenomenal aspect after the fact. Easier, surely, to consider that the whole business is consistently illusory?

Philip Goff is perhaps the toughest critic of illusionism; if we weren’t in the grip of scientism, he says, we should have no difficulty in seeing that the causal role of brain activity also has a categorical nature which is the inward, phenomenal aspect. If this view is incoherent or untenable in any way, we’re owed a decent argument as to why.

Myself I think Frankish is broadly on the right track. He sets out three ways we might approach phenomenal experience. One is to accept its reality and look for an explanation that significantly modifies our understanding of the world. Second, we look for an explanation that reconciles it with our current understanding, finding explanations within the world of physics of which we already have a general understanding. Third, we dismiss it as an illusion. I think we could add ‘approach zero’: we accept the reality of phenomenal experience and just regard it as inexplicable. This sounds like mysterianism – but mysterians think the world itself makes sense; we just don’t have the brains to see it. Option zero says there is actual irreducible mystery in the real world. This conclusion is surely thoroughly repugnant to most philosophers, who aspire to clear answers even if they don’t achieve them; but I think it is hard to avoid unless we take the sceptical route. Phenomenal experience is on most mainstream accounts something over and above the physical account just by definition. A physical explanation is automatically ruled out; even if good candidates are put forward, we can always retreat and say that they explain some aspects of experience, but not the ineffable one we are after. I submit that in fact this same strategy of retreat means that there cannot be any satisfactory rational account of phenomenal experience, because it can always be asserted that something ineffable is missing.

I say philosophers will find this repugnant, but I can sense some amiable theologians sidling up to me. Those light-weight would-be scientists can’t deal with mystery and the ineffable, they say, but hey, come with us for a bit…

Regular readers may possibly remember that I think that the phenomenal aspect of experience is actually just its reality; that the particularity or haecceity of real experience is puzzling to those who think that theory must accommodate everything. That reality is itself mysterious in some sense, though: not easily accounted for and not susceptible to satisfactory explanation either by induction or deduction. It may be that to understand that in full we have to give up on these more advanced mental tools and fall back on the basic faculty of recognition – in my view the basis of all our thinking, and the faculty of which both deduction and induction are specialised forms. That implies that we might have to stop applying logic and science and just contemplate reality; I suppose that might mean in turn that meditation and the mystic tradition of some religions are not exactly a rejection of philosophy as understood in the West, but a legitimate extension of the same enquiry.

Yeah, but no; I may be irredeemably Western and wedded to scientism, but rightly or wrongly, meditation doesn’t scratch my epistemic itch. Illusionism may not offer quite the right answer, but for me it is definitely asking the right questions.

Doorknobs and Intuition

‘…stupid as a doorknob…’ Just part of Luboš Motl’s vigorous attack on Scott Aaronson’s critique of IIT, the Integrated Information Theory of Giulio Tononi.

To begin at the beginning. IIT says that consciousness arises from integrated information, and proposes a mathematical approach to quantifying the level of integrated information in a system, a quantity it names Phi (actually there are several variant ways to define Phi that differ in various details, which is perhaps unfortunate). Aaronson and Motl both describe this idea as a worthy effort but both have various reservations about it – though Aaronson thinks the problems are fatal while Motl thinks IIT offers a promising direction for further work.
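
Since Phi is only described in the abstract here, a toy sketch may help fix ideas. The snippet below is emphatically not any of Tononi’s actual Phi definitions (which, as noted, come in several variants); it just computes, for a small joint distribution over binary units, the minimum mutual information across all bipartitions – a crude ‘weakest link’ measure of integration. All the function names are my own, purely for illustration.

```python
# Toy 'integration' score in the spirit of IIT's Phi: the minimum, over all
# bipartitions of a set of binary variables, of the mutual information
# between the two parts. Purely illustrative - not Tononi's formula.
from itertools import combinations, product
from math import log2

def marginal(joint, keep):
    """Marginalise a joint distribution (dict: state tuple -> prob) onto
    the variable indices listed in `keep`."""
    out = {}
    for state, p in joint.items():
        key = tuple(state[i] for i in keep)
        out[key] = out.get(key, 0.0) + p
    return out

def mutual_information(joint, part_a, part_b):
    """Mutual information between two disjoint groups of variables."""
    pa = marginal(joint, part_a)
    pb = marginal(joint, part_b)
    mi = 0.0
    for state, p in joint.items():
        if p == 0:
            continue
        ka = tuple(state[i] for i in part_a)
        kb = tuple(state[i] for i in part_b)
        mi += p * log2(p / (pa[ka] * pb[kb]))
    return mi

def toy_phi(joint, n):
    """Minimum mutual information over all bipartitions of n variables.
    The number of bipartitions grows exponentially with n, which gives
    some feel for why exact Phi looks intractable for a real brain."""
    indices = list(range(n))
    best = float('inf')
    for size in range(1, n // 2 + 1):
        for part_a in combinations(indices, size):
            part_b = tuple(i for i in indices if i not in part_a)
            best = min(best, mutual_information(joint, part_a, part_b))
    return best

# Two perfectly correlated bits: the whole carries 1 bit beyond the parts.
correlated = {(0, 0): 0.5, (1, 1): 0.5}
# Two independent bits: any partition loses nothing, so the score is 0.
independent = {s: 0.25 for s in product((0, 1), repeat=2)}

print(toy_phi(correlated, 2))   # 1.0
print(toy_phi(independent, 2))  # 0.0
```

Even this cartoon version makes the combinatorial blow-up visible, which bears on Aaronson’s NP-hardness speculation below.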

Both pieces contain a lot of interesting side discussion, including Aaronson’s speculation that approximating phi for a real brain might be an NP-hard problem. This is the digression that prompted the doorknob comment: so what if it were NP-hard, demands Motl; you think nature is barred from containing NP-hard problems?

The real crux as I understand it is Aaronson’s argument that we can give examples of systems with high scores for Phi that we know intuitively could not be conscious. Eric Schwitzgebel has given a somewhat similar argument but cast in more approachable terms; Aaronson uses a Vandermonde matrix for his example of a high-phi but intuitively non-conscious entity, whereas Schwitzgebel uses the United States.

Motl takes exception to Aaronson’s use of intuition here. How does he know that his matrix lacks consciousness? If Aaronson’s intuition is the test, what’s the point of having a theory? The whole point of a theory is to improve on and correct our intuitive judgements, isn’t it? If we’re going to fall back on our intuitions, argument is pointless.

I think appeals to intuition are rare in physics, where it is probably natural to regard them as illegitimate, but they’re not that unusual in philosophy, especially in ethics. You could argue that G.E. Moore’s approach was essentially to give up on ethical theory and rely on intuition instead. Often intuition limits what we regard as acceptable theorising, but our theories can also ‘tutor’ and change our intuitions. My impression is that real world beliefs about death, for example, have changed substantially in recent decades under the influence of utilitarian reasoning; we’re now much less likely to think that death is simply forbidden and more likely to accept calculations about the value of lives. We still, however, rule out as unintuitive (‘just obviously wrong’) such utilitarian conclusions as the propriety of sometimes punishing the innocent.

There’s an interesting question as to whether there actually is, in itself, such a thing as intuition. Myself I’d suggest the word covers any appealing pre-rational thought; we use it in several ways. One is indeed to test our conclusions where no other means is available; it’s interesting that Motl actually remarks that the absence of a reliable objective test of consciousness is one of IIT’s problems; he obviously does not accept that intuition could be a fall-back, so he is presumably left with the gap (which must surely affect all theories of consciousness). Philosophers also use an appeal to intuition to help cut to the chase, by implicitly invoking shared axioms and assumptions; and often enough ‘thought experiments’ which are not really experiments at all but in the Dennettian phrase ‘intuition pumps’ are used for persuasive effect; they’re not proofs but they may help to get people to agree.

Now as a matter of fact I think in Aaronson’s case we can actually supply a partial argument to replace pure intuition. In this discussion we are mainly talking about subjective consciousness, the ‘something it is like’ to experience things. But I think many people would argue that Hard Problem consciousness requires the Easy Problem kind to be in place first as a basis. Subjective experience, we might argue, requires the less mysterious apparatus of normal sensory or cognitive experience; and Aaronson (or Schwitzgebel) could argue that their example structures definitely don’t have the sort of structure needed for that, a conclusion we can reach through functional argument without the need for intuition.

Not everybody would agree, though; some, especially those who lean towards panpsychism and related theories of ‘consciousness everywhere’ might see nothing wrong with the idea of subjective consciousness without the ‘mechanical’ kind. The standard philosophical zombie has Easy Problem consciousness without qualia; these people would accept an inverted zombie who has qualia with no brain function. It seems a bit odd to me to pair such a view with IIT (if you don’t think functional properties are required I’d have thought you would think that integrating information was also dispensable) but there’s nothing strictly illogical about it. Perhaps the dispute over intuition really masks a different disagreement, over the plausibility of such inverted zombies, obviously impossible in Aaronson’s eyes, but potentially viable in Motl’s?

Motl goes on to offer what I think is a rather good objection to IIT as it stands: namely, that it seems to award consciousness to ‘frozen’ or static structures if they have a high enough Phi score. He thinks it’s necessary to reformulate the idea to capture the point that consciousness is a process. I agree – but how does Motl know consciousness requires a process? Could it be that it’s just… intuitively obvious?

Crimbots

Some serious moral dialogue about robots recently. Eric Schwitzgebel put forward the idea that we might have special duties in respect of robots, on the model of the duties a parent owes to children, an idea embodied in a story he wrote with Scott Bakker. He followed up with two arguments for robot rights; first, the claim that there is no relevant difference between humans and AIs, second, a Bostromic argument that we could all be sims, and if we are, then again, we’re not different from AIs.

Scott has followed up with a characteristically subtle and bleak case for the idea that we’ll be unable to cope with the whole issue anyway. Our cognitive capacities, designed for shallow information environments, are not even up to understanding ourselves properly; the advent of a whole host of new styles of cognition will radically overwhelm them. It might well be that the revelation of how threadbare our own cognition really is will be a kind of poison pill for philosophy (a well-deserved one on this account, I suppose).

I think it’s a slight mistake to suppose that morality confers a special grade of duty in respect of children. It’s more that parents want to favour their children, and our moral codes are constructed to accommodate that. It’s true society allocates responsibility for children to their parents, but that’s essentially a pragmatic matter rather than a directly moral one. In wartime Britain the state was happy to make random strangers responsible for evacuees, while those who put the interests of society above their own offspring, like Brutus (the original one, not the Caesar stabber) have sometimes been celebrated for it.

What I want to do though, is take up the challenge of showing why robots are indeed relevantly different to human beings, and not moral agents. I’m addressing only one kind of robot, the kind whose mind is provided by the running of a program on a digital computer (I know, John Searle would be turning in his grave if he wasn’t still alive, but bear with me). I will offer two related points, and the first is that such robots suffer grave problems over identity. They don’t really have personal identity, and without that they can’t be moral agents.

Suppose Crimbot 1 has done a bad thing; we power him down, download his current state, wipe the memory in his original head, and upload him into a fresh robot body of identical design.

“Oops, I confess!” he says. Do we hold him responsible; do we punish him? Surely the transfer to a new body makes no difference? It must be the program state that carries the responsibility; we surely wouldn’t punish the body that committed the crime. It’s now running the Saintbot program, which never did anything wrong.

But then neither did the copy of Crimbot 1 software which is now running in a different body – because it’s a copy, not the original. We could upload as many copies of that as we wanted; would they all deserve punishment for something only one robot actually did?

Maybe we would fall back on the idea that for moral responsibility it has to be the same copy in the same body? By downloading and wiping we destroyed the person who was guilty and merely created an innocent copy? Crimbot 1 in the new body smirks at that idea.

Suppose we had uploaded the copy back into the same body? Crimbot 1 is now identical, program and body, the same as if we had merely switched him off for a minute. Does the brief interval when his data registers had different values make such a moral difference? What if he downloaded himself to an internal store, so that those values were always kept within the original body? What if he does that routinely every three seconds? Does that mean he is no longer responsible for anything (unless we catch him really quickly), while a version that doesn’t do the regular transfer of values can be punished?

We could have Crimbot 2 and Crimbot 3; 2 downloads himself to internal data storage every second and then immediately uploads himself again. 3 merely pauses every second for the length of time that operation takes. Their behaviour is identical, the reasons for it are identical; how can we say that 2 is innocent while 3 is guilty?
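
The download-and-copy moves this thought experiment trades on are, for a program, entirely mundane – which is part of what makes the puzzle bite. A hypothetical sketch (all names invented for illustration): the copy is indistinguishable from the original state by any test of content, yet it is a numerically distinct thing.

```python
import copy

# Hypothetical illustration of the Crimbot puzzle: a robot's 'mind'
# modelled as a plain program state that can be downloaded, wiped,
# and uploaded at will.
crimbot_1 = {"program": "Crimbot", "memories": ["did the bad thing"]}

# Download the state, wipe the original body, upload into a fresh body.
snapshot = copy.deepcopy(crimbot_1)
crimbot_1 = {"program": "Saintbot", "memories": []}  # original body, new software
fresh_body = copy.deepcopy(snapshot)                 # new body, old software

# The copy matches the downloaded state in every respect...
print(fresh_body == snapshot)   # True  - identical contents
# ...yet it is a distinct object, not 'the same one'.
print(fresh_body is snapshot)   # False - a different instance
```

Equality of content without identity of object is exactly the gap the Crimbot cases exploit; nothing in the software picks out which copy ‘is’ the culprit.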

But then, as the second point, surely none of them is guilty of anything? Whatever may be true of human beings, we know for sure that Crimbot 1 had no choice over what to do; his behaviour was absolutely determined by the program. If we copy him into another body, and set him up with the same circumstances, he’ll do the same things. We might as well punish him in advance; all copies of the Crimbot program deserve punishment because the only thing that prevented them from committing the crime would be circumstances.

Now, we might accept all that and suggest that the same problems apply to human beings. If you downloaded and uploaded us, you could create the same issues; if we knew enough about ourselves our behaviour would be fully predictable too!

The difference is that in Crimbot the distinction between program and body is clear because he is an artefact, and he has been designed to work in certain ways. We were not designed, and we do not come in the form of a neat layer of software which can be peeled off the hardware. The human brain is unbelievably detailed, and no part of it is irrelevant. The position of a single molecule in a neuron, or even in the supporting astrocytes, may make the difference between firing and not firing, and one neuron firing can be decisive in our behaviour. Whereas Crimbot’s behaviour comes from a limited set of carefully designed functional properties, ours comes from the minute specifics of who we are. Crimbot embodies an abstraction, he’s actually designed to conform as closely as possible to design and program specs; we’re unresolvably particular and specific.

Couldn’t that, or something like that, be the relevant difference?

Perplexities of Consciousness

Eric Schwitzgebel, author of the excellent Splintered Mind blog (and Professor of Philosophy at the University of California, Riverside) has a new book out, Perplexities of Consciousness, which sows doubt and confusion where they have never been sown before.  Like Socrates, Schwitzgebel wants to make us wiser by showing us that we know much less than we thought. It has often been thought that while we might easily be wrong about the world and the things in it, we weren’t prone to being wrong about how the world looks to us: Schwitzgebel seeks to show that in many respects we actually suffer from unresolvable confusions about even that, and worse, in some cases we’re demonstrably wrong. Subjective experience may be a matter of there being something it is like to see red or whatever; but we actually have no clear idea of what that something is (or what it is like).

Back in 2007 Schwitzgebel published a book Describing Inner Experience? Proponent Meets Skeptic with Russell Hurlburt which examined Hurlburt’s method of Descriptive Experience Sampling (DES). In DES subjects are asked to record their inward experience at random moments (prompted by a beeper) and are subsequently put through a fairly searching interview. This earlier book, as it happens, is currently the subject of a special issue of the Journal of Consciousness Studies.  The book’s title sets up a confrontation, but in fact it’s clear that Hurlburt has a good deal in common with Schwitzgebel: the development of a special method for clarifying inner experience implicitly concedes that error is a serious possibility. The differences seem to be partly a matter of how well DES can really work, and partly a difference of agenda, with Schwitzgebel addressing the issues at a more radical and demanding philosophical level.

There is history to all this, of course: introspectionists like Wundt and Titchener once claimed that with careful training our inner experience could be described scientifically in great detail. The catastrophic collapse of that school of thought led on to the absurd over-reaction of behaviourism, which denied the very existence of inner experience. Only now, we might say, does it seem safe for Hurlburt to venture into the scorched territory of introspection and see if with a slightly different tack, a new beginning can be made.

Schwitzgebel’s new book gives us plenty of intriguing reasons to be pessimistic about that project. The book shows its origins in a series of separate papers, but it does cohere around central themes, finding ambiguity and conflicting testimony just where we might hope for certainty.

First off it asks: do you dream in black and white? This used to be a common question and there was apparently an era when a large proportion of people thought they did. Nowadays no-one seems to think they dream without colour and in the more distant past the question never seemed to come up (dost thou dream in woodcut or tapestry?). Schwitzgebel very plausibly suggests that the idea of monochrome dreams is tied to the prevalence of black and white films and television. Perhaps if you asked, many people would now say they dream in 2D?  I’ve had dreams that apparently took place at least partly in the world of some video game.

You may feel that the question is actually meaningless and that dreams don’t typically either have or lack colour. Perhaps they are pure narrative, and asking whether they are in black and white makes no more sense than asking whether Pride and Prejudice was written in colour. Certainly if we extend the questioning and begin to ask whether dreams are in Technicolor, or what screen format or resolution they use (or in my case perhaps whether they were Xbox or PS2) the discussion starts to seem absurd; but could we deny that people might dream in black and white, at least sometimes? It doesn’t seem too difficult to imagine that you might. It is possible that the prevalence of monochrome moving images actually changed the way people dreamed for a while; but it seems probable we must admit that some people were in fact mistaken about the qualities of their own dreams, and we must certainly accept that there was a degree of uncertainty. So Schwitzgebel has his wedge in position.

Second, he asks: do things look flat? He cites sources who say that a coin viewed at an angle looks elliptical: not to me, he objects (at least not unless I view it very obliquely, when I can sort of see it that way); if coins look elliptical to you does the world in general similarly look flat? Schwitzgebel addresses the idea that the coin in fact looks like its projection on to a flat surface and raises some objections: in fact, he claims there’s no way to make the geometry of ‘flattism’ make sense. He suspects that here again people’s intuitions have been captured by ‘technology’ – in this case people are thinking of how objects would be represented in a drawing or photograph. He remarks that some theorists have claimed that stereoscopic vision involves systematic doubling of perceived objects; while it’s true that if we focus on something far away we can see two images of a finger held close to our face, Schwitzgebel finds it a very odd idea that most of the objects in our field of vision are normally doubled (I agree – I also agree with him that the world doesn’t look all that much flatter when viewed with one eye rather than two).

Now we move on to academic, questionnaire-based research into our mental imagery, and it seems Galton is to blame for first spreading the idea that this sort of thing worked. Curiously, Galton’s research found that the scientists in his sample were predisposed to deny the existence of mental imagery altogether, while the other subjects were more likely to accept it; a result which no-one has been able to replicate since. Perhaps back then people thought mental imagery was an airy-fairy poetic business which hard-nosed scientists should reject.  It turns out, moreover, that there is little or no correlation between reporting vivid mental imagery and being good at tasks which apparently require mental visualisation, such as comparing rotated 3D shapes. This is odd: why would evolution give us vivid mental imagery if it doesn’t even help with tasks that require vivid mental images?  Again it seems that our own reports are all but useless as a guide to what’s really going on in there. We may claim to visualise a table, but under questioning we usually turn out to be unable to provide details, or become confused, or pause to make up some more details.

The next chapter raises the interesting question of human echolocation. Nagel famously took bats as his example of a creature whose inner experience we could not hope to imagine because it had a sense we entirely lacked. With amusing irony Schwitzgebel’s case here is based on the fact that we do have some echolocation after all; we’re just not normally aware of it. With a little practice we can learn how to stop short of a wall just by making regular noises and picking up the echo (if you’re going to try this at home, I recommend taking some precautions to ensure that your nose isn’t the first part of you to detect the wall unambiguously).  Some blind people are well aware of this echolocating ability, but (a real score for Schwitzgebel) they misperceive it.  Typically they describe the experience as being about pressure on the face, and nothing to do with hearing: but experiments with blocked ears and covered faces show clearly that they’re wrong about their own experiences.

I mentioned Titchener earlier: Schwitzgebel gives an interesting account of his methods. Whereas these days one would typically try to capture the subject’s impressions as fresh as possible, uncontaminated by the experimenter’s own prejudices, Titchener and his contemporaries took the opposite view: until you were very thoroughly trained in discrimination of your impressions, your testimony was worthless. Schwitzgebel explains how Titchener’s researchers were trained to pick out ‘difference tones’, illusory notes heard under certain conditions.  (You can try it out for yourself on Schwitzgebel’s own page here. There is other useful stuff on his home page including abstracts and draft chapters.)  Some of these tones are debatable and only a minority claim to be able to hear them: are the others failing to hear them, or hearing them and failing to discriminate? Titchener apparently has no answer.

The next chapter returns to earlier concerns about whether experience is sparse or abundant: are things outside the centre of our attention still constantly present in consciousness (abundant), or do they drop out (sparse)?  Schwitzgebel previously used the terminology ‘rich’ and ‘thin’, and we discussed some earlier Hurlburtian experiments of his. It will come as no surprise to find that Schwitzgebel, who quotes radically opposing views from a variety of sources, regards the matter as unresolved and possibly unresolvable; but I must say that this time round I couldn’t see any reason why we shouldn’t just conclude that experience is sometimes sparse and sometimes abundant. There’s no doubt that sometimes when we focus narrowly on one thing, we lose track of everything else; and it seems hard to deny that at times we also pay vigilant attention to our surroundings in general. Couldn’t it simply be that consciousness can have a wide or a narrow beam, at least partly under our own control?

Schwitzgebel now feels ready for a direct assault on the doctrine that our knowledge of our own experience is infallible.  He warms up by questioning whether we know what emotion is, conceding (rather dangerously?) that his wife can sometimes judge his emotional state better than he can himself. What neutral yardstick he uses to confirm his wife’s diagnosis is not altogether clear.

Why is it, he asks, that scarcely anyone, even the most vigorous sceptics, seriously questions the infallibility of introspection on certain points? The core argument seems to be that we can be wrong about the way things are, but we cannot be wrong about the way they appear to us. But why not? Schwitzgebel claims the argument rests on equivocation between two senses of ‘appear’, one of them epistemic as in ‘it appears to me that…’.  I don’t know whether the argument actually rests so much on the word ‘appear’, but it seems a valid and interesting claim that there are two levels at work here: our experience and our beliefs or claims about it, with no special reason to think that the latter must be magically veridical.

Now there is a kind of rock-bottom argument available here; I don’t think I’ve ever seen it used, but it may be in the back of the minds of those who argue for infallibility. This is that eventually you get below the level where truth and falsehood apply. If you pare experience down enough, you get to a point where it just is: it’s not actually that it’s infallible, more that it’s beyond the realm of fallibility or infallibility. To be fallible, to have truth values, there has to be an element of intentionality (and the right kind, too, with an appropriate ‘direction of fit’ – I don’t think desires can be false), but at the rock bottom level, this is absent. If I just experience without any thoughts about it at all, fallibility doesn’t come into it.

I dare say Schwitzgebel might accept that up to a point, although he could reasonably point out that our experience is in practice completely suffused with a kind of intentionality; unconscious parts of our mind do an awful lot of interpretation before reports from our senses reach us and arguably pretty well everything is presented to us ‘as’ something, not just as mere sense-data (the kind of intentionality involved, that lets part of our brain add implicit messages to the conscious part about ‘that there being a table’ and so on is interesting, probably very important, and totally obscure); though generally it seems we can ‘look through’ to the basic sense-impressions if we want.

The question then is perhaps whether there is some very simple level of intentionality that can be added to the rock-bottom experience without any chance of its being wrong yet without it having the kind of trivial self-validation (‘This is the sentence I wrote’) that Schwitzgebel rightly excludes. Could it be that along with the experience itself we have an accompanying belief which says something like “Yeah, that…” which can’t really be wrong?  I still find it hard to resist the idea that there’s something infallible in there.

It’s a great point though; a strong and well-founded attack on such an important and well-accepted dogma has to be of great value. It forces us into greater clarity even if in the end it isn’t accepted.

The book winds up with consideration of another engaging and interesting issue. What do you see when you turn out the light? (I can’t tell you, but I know it’s mine, as Lennon and McCartney argued in their seminal work.)  Apparently little attention has been given to what things look like when our eyes are closed in normal light, but the book unearths quite a sequence of views about what we can see when our eyes are closed in the dark. Grey bands feature strongly in the older reports and then seem to drop out of favour: specks of light turn up fairly regularly, but you won’t be surprised to hear that there is in the end no real consensus but a quantity of misplaced confidence. I think it’s perhaps a little surprising so few people seem to say that when their eyes are closed in the dark they see nothing.  Trying it myself I find I have to sort of insist on seeing and then get some very dim lines and grids, and flashes of amorphous shapes in brighter colour. This kind of thing might well have a good explanation in neurology and the more or less random firing of isolated neurons that respond to lines, grids, or patches of colour.  At times, strangely, I had to do something hard to describe to stop my imagination intervening in what I was experiencing and livening things up.  Overall, it doesn’t look too promising a field for research to me, but, as Schwitzgebel suggests, why not see what you think (or see what you see)?

So in conclusion, what’s the verdict? I think Schwitzgebel’s case is essentially successful; his contention that there’s more to deal with here than we may have realised seems hard to deny.  This is salutary and also interesting; and since the subject is engaging and much of the discussion is readily accessible, I think the book deserves a wider readership outside the philosophical village.

I think it was Russell who said that when acting on a vigorous mind, scepticism produces new energy rather than despair: so can we add any positive conclusions to the overwhelmingly negative ones in the book? I’m left with two main thoughts. First, there is a philosophical case to be answered in the central assault on infallibility. Second, I think a great deal of the ambiguity and contradiction exposed by Schwitzgebel comes from the sheer complexity of the task. We ourselves, the experiencing entities, are pretty complex, with different levels of conscious and unconscious thought interacting in a variety of ways; and the ways we can experience and think about things are limitlessly varied. I don’t think it would be difficult to list fifty significantly different ways of ‘thinking about’ a table, with and without explicit imagery. Accordingly, in many cases I think simple misunderstandings over what type of experience we’re talking about are more than half the problem. Perhaps Galton’s scientists thought ‘mental imagery’ meant what I would call ‘voluntary hallucination’; perhaps for those monochrome dreamers ‘black and white’ just meant dreams where colour wasn’t specifically salient. Taking all these problems together with a certain natural human variability, I think we might find explanations. It wouldn’t be trained subjects we need, just better terminology and a more clearly developed common understanding. I say ‘just’ – in fact it’s quite a tall order, but not, I think, hopeless.

Too thin? Too rich?

Disappearing foot. Just before you read this sentence, were you consciously aware of your left foot? Eric Schwitzgebel set out to resolve the question in the latest edition of the JCS.

In normal circumstances, we are bombarded by impressions from all directions. Our senses are constantly telling us about the sights, sounds and smells around us, and also about where our feet are, how they feel in their shoes, how hungry we currently feel, whether our sore calf muscle is any better at the moment, and what spurious reasoning some piece of text on the internet is trying to spin for us. But most of the time, most of this information is ignored. In some sense, it’s always there, but only the small subset of things which are currently receiving our attention are, as it were, brightly lit.

There’s little doubt about this basic scenario. Notoriously, when we drive along a familiar route, the details drop into the background of our mind and we start to think about something else. When we arrive at our destination, we may not remember anything much about the journey: but clearly we could see the road and hear the engine at all relevant times, or we probably shouldn’t have been able to finish the journey. On the other hand, suppose as we were driving along, the sound of a baby crying had unexpectedly drifted over from the back seat: would we have failed to notice that feature of the background?

So we have two (at least two) levels of awareness going on. Schwitzgebel poses the question: which do we regard as conscious? On the “thin” view, we’re only really conscious of the things we’re explicitly thinking about. No doubt the other stuff is in some sense available to consciousness, and no doubt bits of it can pop up into consciousness when they trigger some unconscious alarm; but it’s not actually in our consciousness. How else, the thinnists might ask, are we going to make the distinction between the two different levels? The rich view is that everything should be included: I may not be thinking about my foot at all times, but to suggest that I only know where it is subconsciously, or unconsciously, seems ridiculous.

Schwitzgebel does not think either side has particularly strong arguments. Both are inclined to provide examples, or assert their case, and expect the conclusion to seem obvious. Searle has argued that we couldn’t switch attention unless we were conscious of the thing we were switching our attention to; Mack and Rock have done experiments to prove that while paying close attention to one thing we may fail to notice other things: but neither of these lines of discussion really seems to provide what you’d call a knock-down case.

Accordingly, with many reservations, Schwitzgebel set up an experiment of his own. The subjects wore a beeper which went off at a random time up to an hour after being set: they then recorded what they were conscious of immediately beforehand (it’s important, of course, to keep the delay minimal, otherwise the issue gets entangled with problems of memory). The subjects were divided into groups and asked to focus on tactile, visual and total sensory experience.

The results supported neither the thin nor the rich position. Perhaps the most interesting finding is the degree of surprise evoked in the subjects. In a departure from normal experimental method, Schwitzgebel used as subjects philosophy postgrads who could reasonably be expected to have some established prejudices in the field: he also spent time explaining the experiment and talking over the issues, and recorded whether each subject was a thinnist or richist at the start. Although this involved some risk of skewing the results, it allowed the discovery that thinnists actually often found themselves having rich experience, and vice versa.

Where does that leave us? It seems almost as though the dilemma is merely reinforced: the research points towards some compromise, but it’s hard to see where we can find room for one. The results did seem to reinforce the existing general agreement that there really are two distinct levels or aspects of consciousness at work. Wouldn’t one solution, then, be to give both neutral labels (con-1 and con-2?) and leave it at that? That may be what one of Schwitzgebel’s subjects, who apparently dismissed the whole thing as ‘linguistic’, had in mind. But it’s not a very comfortable position to dismiss the concept of consciousness in favour of two hazy new ones. Schwitzgebel, rightly, I think, considers that the difference between thinnism and richism is real and significant.

My best guess for a neatish answer is that we’re simply dealing with pure first order consciousness and the same thing combined with second order consciousness. In other words, the dim constant awareness of everything being reported by our senses, really is conscious, but it’s a region of consciousness we’re not conscious of being conscious of. By contrast, we’re not only conscious of the things at the forefront of our minds, we’re also aware of being conscious of them. (It might well be that second-order consciousness is what animals largely or wholly lack – I wonder if thinnists also tend to be sceptics about animal consciousness?)
Alas, that’s not really a compromise: it seems to make me a kind of richist.