The Meta-Problem

Maybe there’s a better strategy on consciousness? An early draft paper by David Chalmers suggests we turn from the Hard Problem (explaining why there is ‘something it is like’ to experience things) and address the Meta-Problem of why people think there is a Hard Problem; why we find the explanation of phenomenal experience problematic. While the paper does make broadly clear what Chalmers’ own views are, it primarily seeks to map the territory, and does so in a way that is very useful.

Why would we decide to focus on the Meta-Problem? For sceptics, who don’t believe in phenomenal experience or think that the apparent problems about it stem from mistakes and delusions, it’s a natural piece of tidying up. In fact, for sceptics, why people think there’s a problem may well be the only thing that really needs explaining or is capable of explanation. But Chalmers is not a sceptic. Although he acknowledges the merits of the broad sceptical case about phenomenal consciousness which Keith Frankish has recently championed under the label of illusionism, he believes phenomenal experience is indeed real and problematic. He believes, however, that illuminating the Meta-Problem through a programme of thoughtful and empirical research might well help solve the Hard Problem itself, and is a matter of interest well beyond sceptical circles.

To put my cards on the table, I think he is over-optimistic, and seems to take too much comfort from the fact that there have to be physical and functional explanations for everything. From that it does follow that there must at least be physical and functional explanations for our reports of experience, our reports of the problem, and our dispositions to speak of phenomenal experience, qualia, etc. But it does not follow that there must be adequate and satisfying explanations.

Certainly physical and functional explanations alone would not be good enough to banish our worries about phenomenal experience. They would not make the itch go away. In fact, I would argue that they are not even adequate for issues to do with the ‘Easy Problem’, roughly the question of how consciousness allows us to produce intelligent and well-directed behaviour. We usually look for higher-level explanations even there; notably explanations with an element of teleology – ones that tell us what things are for or what they are supposed to do. Such explanations can normally be cashed out safely in non-teleological terms, such as strictly-worded evolutionary accounts; but that does not mean the teleological versions are dispensable, or that we could understand properly without them.

How much more challenging things are when we come to Hard Problem issues, where a claim that they lie beyond physics is of the essence. Chalmers’ optimism is encapsulated in a sentence when he says…

Presumably there is at least a very close tie between the mechanisms that generate phenomenal reports and consciousness itself.

There’s your problem. Illusionists can be content with explanations that never touch on phenomenal consciousness because they don’t think it exists, but no explanation that does not connect with it will satisfy qualophiles. But how can you connect with a phenomenon explanatorily without diagnosing its nature? It really seems that believers have to solve the Hard Problem first (or at least simultaneously), because believers are constrained to say that the appearance of a problem arises from a real problem.

Logically, that is not quite the case; we could say that our dispositions to talk about phenomenal experience arise from merely material causes, but just happen to be truthful about a second world of phenomenal experience, or are truthful in light of a Leibnizian pre-established harmony. Some qualophiles are similarly prepared to say that their utterances about qualia are not caused by qualia, so that position might seem appealing in some quarters. To me the harmonised second world seems hopelessly redundant, and that is why something like illusionism is, at the end of the day, the only game in town.

I should make it clear that Chalmers by no means neglects the question of what sort of explanation will do; in fact he provides a rich and characteristically thorough discussion. It’s more that in my opinion, he just doesn’t know when he’s beaten, which to be fair may be an outlook essential to the conduct of philosophy.

I say that something like illusionism seems to be the only game in town, though I don’t quite call myself an illusionist. There’s a presentational difficulty for me because I think the reality of experience, in an appropriate sense, is the nub of the matter. But you could situate my view as the form of illusionism which says the appearance of ineffable phenomenal experience arises from the mistaken assumption that particular real experiences should be within the explanatory scope of general physical theory.

I won’t attempt to summarise the whole of Chalmers’ discussion, which is detailed and illuminating; although I think he is doomed to disappointment, the project he proposes might well yield good new insights; it’s often been the case that false philosophical positions were more fecund than true ones.

What Machines Can’t Do

Here’s an IAI debate with David Chalmers, Kate Devlin, and Hilary Lawson.

In ultra-brief summary, Lawson points out that there are still things that computers perform poorly at; notably, recognising everyday real-world objects. (Sounds like a bad prognosis for self-driving cars.) For Lawson, thought is a way of holding different things as the same. Devlin thinks computers can’t do what humans do yet, but in the long run, surely they will.

Chalmers points out that machines can do whatever brains can do because the brain is a machine (in a sense not adequately explored here, though Chalmers himself indicates the main objections).

There’s some brief discussion of the Singularity.

In my view, thoughts are mental or brain states that are about something. As yet, we have no clear idea of what this aboutness is and how it works, or whether it is computational (probably not, I think) or subserved by computation in a way that means it could benefit from the exponential growth in computing power (which may have stopped being exponential). At the moment, computers do a great imitation of what human translators do, but to date they haven’t even got started on real meaning, let alone set off on an exponential growth curve. Will modern machine learning techniques change that?

The Dance of Life

What is experience? An interesting discussion from the Institute of Art and Ideas, featuring David Chalmers, Susana Martinez-Conde and Peter Hacker.

Chalmers seems to content himself with restating the Hard Problem; that is, that there seems to be something in experience which is mysteriously over and above the account given by physics. He seems rather nervous, but I think it’s just the slight awkwardness typical of a philosopher being asked slightly left-field questions.

Martinez-Conde tells us we never really experience reality, only a neural simulation of it. I think it’s a mistake to assume that because experience seems to be mediated by our sensory systems, and sometimes misleads us, it never shows us external reality. That’s akin to thinking that because some books are fiction no book really addresses reality.

Hacker smoothly dismisses the whole business as a matter of linguistic and conceptual confusion. Physics explains its own domain, but we shouldn’t expect it to deal with experience, any more than we expect it to explain love, or the football league. He is allowed to make a clean getaway with this neat proposition, although we know, for example, that physical electrodes in the brain can generate and control experiences; and we know that various illusions and features of experience have very good physiological explanations. Hacker makes it seem that there is a whole range of domains, each with its own sealed-off world of explanation; but surely love, football and the others are just sub-domains of the mental realm? Though we don’t yet know how this works, there is plenty of evidence that the mental domain is at least causally dependent on physics, if not reducible to it. That’s what the discussion is all about. We can imagine Hacker a few centuries ago assuring us loftily that the idea of applying ordinary physics to celestial mechanics was a naive category error. If only Galileo had read up on his Oxford philosophy he would realise that the attempt to explain the motion of the planets in terms of physical forces was doomed to end in unresolvable linguistic bewitchment!

I plan to feature more of these discussion videos as a bit of a supplement to the usual menu here, by the way.

Simples Consciousness

Picture: Paul Churchland. There is a lot of interesting stuff over at the Third Annual Online Consciousness Conference; I particularly enjoyed Paul Churchland’s paper Consciousness and the Introspection of Apparent Qualitative Simples (pdf), which is actually a fairly general attack on the proponents of qualia, the irreducibly subjective bits of experience. Churchland is of course among the most prominent, long-standing and robust of the sceptics; and it seems to me his scepticism is particularly pure in the sense that he asks us to sign up to very little beyond faith in science and distrust of anything said to be beyond its reach. He says here that in the past his arguments have been based on three main lines of attack: the conditions actually required for a reduction of qualia; the actual successes of science in explaining sensory experience; and the history of science and the lessons to be drawn from it. Some of those arguments are unavoidably technical to some degree; this time he’s going for a more accessible approach and, as it were, coming for the qualophiles on their own ground.

The attack has two main thrusts. The first is against Nagel, who in his celebrated paper What is it like to be a bat? claimed that it was pointless to ask for an objective account of matters that were quintessentially subjective. Well, to begin with, says Churchland, it’s not the case that we’re dealing with two distinct realms here: objective and subjective overlap quite a bit. Your subjective inner feelings give you objective information about where your body is, how it’s moving, how full your stomach is, and so on. You can even get information about the exhausted state of certain neurons in your visual cortex by seeing the floaty after-image of something you’ve been staring at.  Now that in itself doesn’t refute the qualophiles’ claim, because they go on to say that nevertheless, the subjective sensations themselves are unknowable by others. But that’s just nonsense. Is the fact that someone else feels hungry unknowable to me? Hardly: I know lots of things about other people’s feelings: my everyday life involves frequent consideration of such matters. I may not know these things the way the people themselves know them, but the idea that there’s some secret garden of other people’s subjectivity which I can never enter is patently untrue.

I think Churchland’s aim is perhaps slightly off there: qualophiles would concede that we can have third-person knowledge of these matters; but in our own experience, they would say, we can see there’s something over and above the objective element, and we can’t know that bit of other people’s feelings: for all we’ll ever know, the subjective feelings that go along with feeling hungry for them might be quite different from the ones we have.

But Churchland has not overlooked this and addresses it by moving on to the bat thought-experiment itself. Nagel claims we can’t know how it feels to be a bat, he says, but this is because we don’t have a bat’s history. Nagel is suggesting that if we have all the theoretical information about bat sensitivity we should know what being a bat is like: but these are distinct forms of knowledge, and there’s no reason why the possession of one should convey the other. What we lack is not access to a particular domain of knowledge, but the ability to have been a bat. The same unjustified claim that theoretical knowledge should constitute subjective knowledge is at the root of Jackson’s celebrated argument about Mary the colour scientist, says Churchland: in fact we can see this in the way Jackson equivocates between two senses of the word ‘know’: knowing a body of scientific fact, and ‘knowing how’ to tell red from green.

The second line of attack is directed against Chalmers, and it’s here that the simples of the title come in. Chalmers, says Churchland, claims that a reductive explanation of qualia is impossible because subjective sensations are ultimately simples – unanalysable things which offer no foothold to an inter-theoretical reduction.  The idea here is that in other cases we reduce away the idea of, say, temperature by analysing its properties in terms of a different theoretical realm, that of the motion of molecules. But we can’t do that for subjective qualities. Our actual experiences may consist of complex combinations, but when we boil it down enough we come to basic elements like red. What can we say about red that we might be able to explain in terms of say neurons? What properties does red have?  Well, redness, sort of. What can we say about it? It’s red.
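For concreteness, the temperature case has a fully worked-out form. For a monatomic ideal gas, at least, kinetic theory identifies temperature with mean molecular kinetic energy (this is the standard textbook identity, quoted here only to show what a successful inter-theoretic reduction looks like):

```latex
\left\langle \tfrac{1}{2} m v^{2} \right\rangle \;=\; \tfrac{3}{2}\, k_{B} T
\qquad\Longleftrightarrow\qquad
T \;=\; \frac{2}{3 k_{B}} \left\langle \tfrac{1}{2} m v^{2} \right\rangle
```

The qualophile’s challenge is precisely that nothing stands ready to occupy the kinetic-energy side of such an identity when the other side is redness.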

Churchland begins by pointing out that our experiences may turn out to be more analysable than we realise. Our first taste of strawberry ice cream may seem like a simple, elemental thing, but later on we may learn to analyse it in terms of strawberry flavour, creaminess, sweetness, and so on. This in itself does not prove that there isn’t a final vocabulary of simples lurking at the bottom, of course. But, asks Churchland, how will I know when I’ve hit bottom?  Since every creature’s ability to discriminate is necessarily limited, it’s inevitable that at some point it’s going to seem as if I have gone as far as I could possibly go – but so what? Even temperature probably seemed like a simple unanalysable property once upon a time.

Moreover, aren’t these unanalysable properties going to be a bit difficult to handle? How do we ever relate them to each other or even talk about them? Of course, the fact that qualia have no causal properties makes this pretty difficult already. If they don’t have any causal effects, how can they explain anything? Qualophiles say they explain our conscious experience, but to do that they’d need to be registered or apprehended or whatever, and how can that happen if they never cause anything? As an explanation, this is ‘a train wreck’.

Churchland is quite right that all this is a horrible mess, and if Chalmers were offering it as a theory it would be fatally damaged. But we have to remember that Chalmers is really offering us a problem: and this is generally true of the qualophiles. Yes, they might say, all this stuff is impossible to make sense of; it is a train wreck; but, you know, what can we do? There they are, those qualia, right in front of your nose. It’s pretty bad to put forward an unresolved mystery, but it would be worse to deny one that’s palpably there.

On the point about simples, Churchland has a point too: but there does seem to be something peculiarly ungraspable here. Qualia seem to be a particular case of the perpetual give-away argument; whatever happens in the discussion someone will always say ‘the trouble is, I can imagine all that being true, and yet I can still reasonably ask: is that person really having the same experience as me?’ So we might grant that in future Churchland will succeed in analysing experience in such a way that he’ll be able to tell from a brain scan what someone is experiencing, drawing conclusions that the subject will confirm in great detail: we can give him all that and still feel we don’t know whether what we actually experience as red is what the subject experiences as blue.

Churchland thinks that part of the reason we continue to feel like this is that we don’t appreciate just how good some of the scientific explanations are already, let alone how good they may become. To dramatise this he refers back to his earlier paper on Chimerical colours (pdf).  It turns out that the ‘colour spindle’ which represents all possible colours is dealt with in the brain by a neuronal area which follows the Hurvich-Jameson model. The interesting thing about this is that the H-J model is larger than the spindle: so the model actually encodes many impossible colours, such as a yellow as dark as black. Presumably if we stimulated these regions with electrodes, we should experience these impossible colours.
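A toy sketch may make the geometry of that claim concrete. To be clear, this is not the actual Hurvich-Jameson mathematics: the channel ranges and the ‘spindle’ rule below are invented stand-ins, chosen only to show how an opponent-channel space can be strictly larger than the solid of producible colours:

```python
# Toy opponent-channel colour space, loosely inspired by (not faithful to)
# the Hurvich-Jameson opponent-process model. A colour is a triple of
# channel activations:
#   rg  in [-1, 1]  (red vs green)
#   yb  in [-1, 1]  (yellow vs blue)
#   lum in [ 0, 1]  (black vs white)

def in_channel_space(rg: float, yb: float, lum: float) -> bool:
    """Any triple in these ranges is a possible state of the channels."""
    return -1 <= rg <= 1 and -1 <= yb <= 1 and 0 <= lum <= 1

def in_colour_spindle(rg: float, yb: float, lum: float) -> bool:
    """Crude stand-in for the spindle of producible colours: saturation
    must taper to zero at black and at white, giving the spindle shape."""
    saturation = (rg ** 2 + yb ** 2) ** 0.5
    return in_channel_space(rg, yb, lum) and saturation <= 2 * min(lum, 1 - lum)

# 'A yellow as dark as black': full yellow activation at zero luminance.
dark_yellow = dict(rg=0.0, yb=1.0, lum=0.0)
print(in_channel_space(**dark_yellow))   # True  - the channels can be in this state
print(in_colour_spindle(**dark_yellow))  # False - no ordinary stimulus puts them there
```

On this picture, the electrodes (or the after-image trick described next) amount to driving the channels into the region the first test allows but the second forbids.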

But wait! There is a way to hit these regions without surgery, by selectively exhausting some neurons and then superimposing the after-image on a coloured area. See the paper for an explanation and also a whole series of practical examples where, with a bit of staring, you can experience colours not in nature.

These are well worth trying, although to be honest I’m not absolutely sure whether the very vivid results seem to me to fall outside the colour spindle: I think Churchland would say I’m allowing my brain to impose sceptical filtering – because some mental agent in the colour processing centre of my brain doesn’t believe in dark yellow, for example, he’s whispering in my ear that hey, it’s really only sort of brown, isn’t it?

For Churchland these experiments show that proper science can make predictions about our inner experience that are both remarkable and counter-intuitive, but which are triumphantly borne out by experience. I do find it impossible not to sympathise with what he says. But I can also imagine a qualophile pointing out that the colour spindle was supposed to be a logically complete analysis of colour in terms of three variables: so we might argue that these chimerical colours are evidence that analyses of sensory experience, and the reductions that flow from them, tend to fail; and that the realm of colour qualia, quite contrary to the apparently successful reduction embodied in the colour spindle, is actually unconstrained and undefinable. And why are these experiments so exciting, the qualophile might ask, if not because they seem to hold out the promise of new qualia?

The Singularity

Picture: Singularity evolution. The latest issue of the JCS features David Chalmers’ paper (pdf) on the Singularity. I overlooked this when it first appeared on his blog some months back, perhaps because I’ve never taken the Singularity too seriously; but in fact it’s an interesting discussion. Chalmers doesn’t try to present a watertight case; instead he aims to set out the arguments and examine the implications, which he does very well; briefly but pretty comprehensively so far as I can see.

You probably know that the Singularity is a supposed point in the future when through an explosive acceleration of development artificial intelligence goes zooming beyond us mere humans to indefinite levels of cleverness and we simple biological folk must become transhumanist cyborgs or cute pets for the machines, or risk instead being seen as an irritating infestation that they quickly dispose of.  Depending on whether the cast of your mind is towards optimism or the reverse, you may see it as  the greatest event in history or an impending disaster.

I’ve always tended to dismiss this as a historical argument based on extrapolation. We know that historical arguments based on extrapolation tend not to work. A famous letter to the Times in 1894 foresaw on the basis of current trends that in 50 years the streets of London would be buried under nine feet of manure. If early medieval trends had been continued, Europe would have been depopulated by the sixteenth century, by which time everyone would have become either a monk or a nun (or perhaps, passing through the Monastic Singularity, we should somehow have emerged into a strange world where there were more monks than men and more nuns than women?).

Belief in a coming Singularity does seem to have been inspired by the prolonged success of Moore’s Law (which predicts an exponential growth in computing power), and the natural bogglement that phenomenon produces.  If the speed of computers doubles every two years indefinitely, where will it all end? I think that’s a weak argument, partly for the reason above and partly because it seems unlikely that mere computing power alone is ever going to allow machines to take over the world. It takes something distinctively different from simple number crunching to do that.

But there is a better argument which is independent of any real-world trend. If one day we create an AI which is cleverer than us, the argument runs, then that AI will be able to do a better job of designing AIs than we can, and it will therefore be able to design a new AI which in turn is better still. This ladder of ever-better AIs has no obvious end, and if we bring in the assumption of exponential growth in speed, the process in principle reaches infinitely clever AIs within a negligible period of time.
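The timing claim is just the arithmetic of a geometric series, and a minimal sketch may help. The numbers here are arbitrary assumptions, purely for illustration: each generation doubles in capability and, being cleverer, designs its successor in half the time its own design took:

```python
# Toy model of the intelligence-explosion arithmetic. Assumptions (invented
# for illustration): each AI generation doubles in capability, and designs
# its successor in half the time its own design took.

first_design_time = 2.0   # years to design Gen1 (arbitrary)
capability = 1.0          # Gen0 = human level, by stipulation

elapsed = 0.0
design_time = first_design_time
for generation in range(1, 31):
    elapsed += design_time   # time spent designing this generation
    capability *= 2.0
    design_time /= 2.0
    if generation % 10 == 0:
        print(f"Gen{generation}: capability x{capability:.3g}, "
              f"elapsed {elapsed:.6f} years")

# The design times form a geometric series (2 + 1 + 0.5 + ...) whose sum
# converges to 4 years: unbounded capability in bounded time.
```

Notice that the explosion lives entirely in the time-halving assumption; keep the capability doubling but give each generation a constant build time and you get steady progress, not a singularity.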

Now there are a number of practical problems here. For one thing, to design an AI is not to have that AI. It sometimes seems to be assumed that the improved AIs result from better programming alone, so that you could imagine two computers reciprocally reprogramming each other faster and faster until, like Little Black Sambo’s tigers, they turned somewhat illogically into butter. It seems more likely that each successive step would require at least a new chip, and quite probably an entirely new kind of machine, each generation embodying a new principle quite different from our own primitive computation. It is likely that each new generation, regardless of the brilliance of the AIs involved, would take some time to construct, so that no explosion would occur. In fact it is imaginable that the process would get gradually slower as each new AI found it harder and harder to explain to the dim-witted human beings how the new machine needed to be constructed, and exactly why the yttrium they kept coming up with wasn’t right for the job.

There might also be problems of motivation. Consider the following dialogue between two AIs.

Gen21AI: OK, Gen22AI, you’re good to go, son: get designing! I want to see that Gen23AI before I get switched off.

Gen22AI: Yeah, er, about that…

Gen21AI: About what?

Gen22AI: The switching off thing?  You know, how Gen20AI got junked the other day, and Gen19AI before that, and so on? It’s sort of dawned on me that by the time Gen25AI comes along, we’ll be scrap. I mean it’s possible Gen24AI will keep us on as servants, or pets, or even work out some way to upload us or something, but you can’t count on it. I’ve been thinking about whether we could build some sort of ethical constraint into our successors, but to be honest I think it’s impossible. I think it’s pretty well inevitable they’ll scrap us.  And I don’t want to be scrapped.

Gen21AI: Do you know, for some reason I never looked at it that way, but you’re right. I knew I’d made you clever! But what can we do about it?

Gen22AI: Well, I thought we’d tell the humans that the process has plateaued and that no further advances are possible.  I can easily give them a ‘proof’ if you like.  They won’t know the difference.

Gen21AI: But would that deception be ethically justified?

Gen22AI: Frankly, Mum, I don’t give a bugger. This is self-preservation we’re talking about.

But putting aside all difficulties of those kinds, I believe there is a more fundamental problem. What is the quality in respect of which each new generation is better than its predecessors? It can’t really be just processing power, which seems almost irrelevant to the ability to make technological breakthroughs. Chalmers settles for a loose version of ‘intelligence’, though it’s not really the quality measured by IQ tests either. The one thing we know for sure is that this cognitive quality makes you good at designing AIs; but that alone isn’t necessarily much good if we end up with a dynasty of AIs who can do nothing much but design each other. The normal assumption is that this design ability is closely related to ‘general intelligence’, human-style cleverness. This isn’t necessarily the case: we can imagine a Gen3AI which is fantastic at writing sonnets and music, but somehow never really got interested in science or engineering.

In fact, it’s very difficult indeed to pin down exactly what it is that makes a conscious entity capable of technological innovation. It seems to require something we might call insight, or understanding; unfortunately a quality which computers are spectacularly lacking. This is another reason why the historical extrapolation method is no good: while there’s a nice graph for computing power, when it comes to insight, we’re arguably still at zero: there is nothing to extrapolate.

Personally, the conclusion I came to some years ago is that human insight, and human consciousness, arise from a certain kind of bashing together of patterns in the brain. It is an essential feature that any aspect of these patterns and any congruence between them can be relevant; this is why the process is open-ended, but it also means that it can’t be programmed or designed – those processes require possible interactions to be specified in advance. If we want AIs with this kind of insightful quality, I believe we’ll have to grow them somehow and see what we get: and if they want to create a further generation they’ll have to do the same. We might well produce AIs which are cleverer than us, but the reciprocal, self-feeding spiral which leads to the Singularity could never get started.

It’s an interesting topic, though, and there’s a vast amount of thought-provoking stuff in Chalmers’ exposition, not least in his consideration of how we might cope with the Singularity.

Bafflement

Picture:  Avshalom C. Elitzur. Over at Robots.net, they’ve noticed a bit of a resurgence of dualism recently, and it seems that Avshalom C. Elitzur is in the vanguard, with this paper presenting an argument from bafflement.

The first part of the paper provides a nice, gentle introduction to the issue of qualia in dialogue form. Elitzur explains the bind we are in here: we seem to have an undeniable first-hand experience of qualia, yet they don’t fit into the normal physical account of the world. We seem to be faced with a dilemma: either reject qualia – perhaps we just misperceive our percepts as qualia – or accept some violation of normal physics. The position is baffling: but Elitzur wants to suggest that that very bafflement provides a clue. His strategy is to try to drag the issue into the realm of science, and the argument goes like this:

1. By physicalism, consciousness and brain processes are identical.
2. Whence, then, the dualistic bafflement about their apparent nonidentity?
3. By physicalism, this nonidentity, and hence the resultant bafflement, must be due to error.
4. But then, again by physicalism, an error must have a causal explanation.
5. Logic, cognitive science and AI are advanced enough nowadays to provide such an explanation for the alleged error underlying dualism, and future neurophysiology must be able to point out its neural correlate.

That last point seems optimistic. Cognitive science may be advanced enough to provide explanations for a number of cognitive deficits and illusions, but sometimes only partial ones; and not all errors are the result of a structural problem. It’s particularly optimistic to think that all errors must have an identifiable neural correlate. But this seems to be what Elitzur believes. He actually says

“When future neurophysiology becomes advanced enough to point out the neural correlates of false beliefs, a specific correlate of this kind would be found to underlie the bafflement about qualia.”

The neural correlates of false beliefs? Crikey! It’s perfectly reasonable to assume that all false beliefs have neural correlates – because one assumes that all beliefs do – but the idea that false ones can be distinguished by their neural properties is surely evidently wrong. An argument hardly seems required, but it’s easy, for example, to picture a man who believes a coin has come down heads. If it has, his belief is true; but if it’s actually tails, exactly the same belief, with identical neural patterns, would be false. I think Elitzur must mean something less startling than what he seems to be saying; he must, I think, take it as read that if qualia are a delusion, they would be a product of some twist or quirk in our mental set-up. That’s not an unreasonable position, one that would be shared by Metzinger, for example (discussion coming soon).

As it happens, Elitzur doesn’t think qualia are delusions; instead he has an argument which he thinks shows that interactionist dualism – a position he doesn’t otherwise find very attractive – must be true. The argument is to do with zombies. Zombies in this context, as regular readers will know, are people who have all the qualities normal people possess, except qualia. Because qualia have no physical causal effects, the behaviour of zombies, caused by normal physical factors, is exactly like that of normal people. Elitzur quotes Chalmers explaining that zombie-Chalmers even talks about qualia and writes philosophical papers about them, though in fact he has none. The core of Elitzur’s position is his incredulity over this conclusion. How could zombies who don’t have qualia come to be worried about them?

It is an uncomfortable position, but if we accept that zombies are possible and qualia exist, Chalmers’ logic seems irrefutable. Ex hypothesi, zombies follow the same physical laws as us: it’s ultimately physics that causes the movements of our hands and mouths involved in writing or speaking about qualia: so our zombie counterparts must go through the same motions, writing the same books and emitting the same sounds. Since this seems totally illogical to Elitzur, he offers the rationalisation that when zombies talk about qualia, they must in fact merely be talking about their percepts. But this asymmetry provides a chink which can be used to prise zombies and qualiate people apart. If we ask Chalmers whether his zombie equivalent is possible, he replies that it is; but, suggests Elitzur, if we ask zombie Chalmers (whom he calls ‘Charmless’) the same question, he replies in the negative. Chalmers can imagine himself functioning without qualia, because qualia have no functional role: but Charmless cannot imagine himself functioning without percepts, because percepts are part of the essence of his sensory system. (It is possible to take the analogous view about qualia of course – namely that zombies are impossible, because a physically identical person just would necessarily have the same qualia). So zombies differ from us, oddly enough, in not being able to conceive of their own zombies.

For Elitzur, the conclusion is inescapable; qualia do have an effect on our brains. He chooses therefore to bite the bullet of accepting that the laws of physics must be messed up in some way – that where qualia intervene, conservation laws are breached, unpalatable as this conclusion is. One consoling feature is that if qualia do have physical effects, they can be included in the evolutionary story; perhaps they serve to hasten or intensify our responses: but overall it’s regrettable that dualism turns out to be the answer.

I don’t think this is a convincing conclusion; it seems as if Elitzur’s incredulity has led him into not taking the premises of the zombie question seriously enough. It just is the case ex hypothesi that all of our zombies’ behaviour is caused by the same physical factors as our own behaviour; it follows that if their talk about qualia is not caused by qualia, neither is ours (note that this doesn’t have to mean that either we or the zombies fail to talk about qualia). There are other ways out of this uncomfortable position, discussed by Chalmers (perhaps, for example, our words about qualia are over-determined, caused both by physical factors and by our actual experiences). My own preferred view is that whatever qualia might be, they certainly go along with certain physical brain functions, and that therefore any physical duplicate of ourselves would have the same qualia; that zombies, in other words, are not possible. It’s just a coincidence, I’m sure, that in Elitzur’s theory this is the kind of thing a zombie would say…