More mereology

Peter Hacker made a surprising impact with his recent interview in the TPM, which was reported and discussed in a number of other places. Not that his views aren’t of interest, and the trenchant terms in which he expressed them probably did no harm; but he seemed mainly to be recapitulating the views he and Max Bennett set out in 2003, notably the accusation that the study of consciousness is plagued by the ‘mereological fallacy’ of taking a part for the whole and ascribing to the brain alone the powers of thought, belief and so on which are properly ascribed only to whole people.

There’s certainly something in Hacker’s criticism, at least so far as popular science reporting goes. I’ve lost count of the number of times I’ve read newspaper articles that explain in breathless tones the latest discovery: that learning, or perception, or thought, turns out to be nothing but changes in the brain! Let’s be fair: the relationship between physical brain and abstract mind has not exactly been free of deep philosophical problems over the centuries. But the point that the mind is what the brain does, that the relationship is roughly akin to the relationship between digestion and gut, or between website and screen, surely ought not to trouble anyone too much?

You could say that in a way Bennett and Hacker have been vindicated since 2003 by the growth of the ‘extended mind’ school of thought. Although it isn’t exactly what they were talking about, it does suggest a growing acknowledgement that too narrow a focus on the brain is unhelpful. I think some of the same counter-arguments also apply. If we have a brain in a vat, functioning as normally as possible in such strange circumstances, are we going to say it isn’t thinking? If we take the case of Jean-Dominique Bauby, trapped in a non-functioning body but still able to painstakingly dictate a book about his experience, can’t we properly claim that his brain was doing the writing? No doubt Hacker would respond by asking whether we are saying that Bauby had become a mere brain, that he wasn’t a person any more. Although his body might have ceased to function fully, he still surely had the history and capacities of a person rather than simply those of a brain.

The other leading point which emerges in the interview is a robust scepticism about qualia. Nagel in particular comes in for some stick, and the phrase ‘there is something it is like’, often invoked in support of qualia, is given a bit of a drubbing. If you interpret the phrase as literally invoking a comparison, it is indeed profoundly obscure; on the other hand, we are dealing with the ineffable here, so some inscrutability is to be expected. Perhaps we ought to concede that most people readily understand what it is that Nagel and others are getting at. I quite enjoyed the drubbing, but the issue can’t be dismissed quite as easily as that.

From the account given in the interview (and I have the impression that this is typical of the way he portrays it) you would think that Hacker was alone in his views, but of course he isn’t. On the substance of his views, you might expect him to weigh in with some strong support for Dennett; but this is far from the case. Dennett is, in Hacker’s view, too much of a ‘brainsian’ (too ready to ascribe mental states to the brain) for his ideas to be anything other than incoherent. It’s well worth reading Dennett’s own exasperated response (pdf), where he sets out the areas of agreement before wearily explaining that he knows, and has always said, that care needs to be taken in attributing mental states to the brain; but that given due care it’s a useful and harmless way of speaking.

John Searle also responded to Bennett and Hacker’s book and, restrained by no ties of underlying sympathy, dismissed their claims completely. Conscious states exist in the brain, he asserted: Hacker got this stuff from misunderstanding Wittgenstein, who says that observable behaviour (which only a whole person can provide) is a criterion for playing the language game, but never said that observable behaviour was a criterion for conscious experience.  Bennett and Hacker confuse the criterial basis for the application of mental concepts with the mental states themselves. Not only that, they haven’t even got their mereology right: they’re talking about mistaking the part for the whole, but the brain isn’t a part of a person, it’s a part of a body.

Hacker clearly hasn’t given up, and it will be interesting to see the results of his current ‘huge project, this time on human nature’.

Sciousness

A while ago I discussed the way William James, who originated the phrase ‘stream of consciousness’, nevertheless went on to embrace a radical scepticism about the very idea of consciousness. In an interesting recent paper William Lyons reviews the roots and significance of this apparent apostasy; I was interested in particular in an argument which he thinks helped form James’ view.

James was happy with thoughts and experiences, but denied the existence of any special reflexive sense of self. Hume famously said that when he turned his attention inwards on his own mind, he found only a bundle of perceptions; in the same spirit James says he detects nothing but ‘some bodily fact, an impression coming from my brow, or head, or throat, or nose’. In fact, he becomes convinced that when you boil it right down the breath is probably the core of what has been built up into a strange metaphysical/spiritual entity; I expect he had in mind the ancient Greek pneuma, both breath and spirit.  Maybe, he says, we could better talk of ‘sciousness’ to refer to our awareness; consciousness, the special self-awareness, is out. James was impressed by an argument for the impossibility of introspective self-awareness set out by Comte.   It points out that when you observe your own thoughts there is an awkward circularity involved.  Observation involves conscious attention, but in this case conscious attention is also the target. Necessarily then, there must be some splitting, some withdrawal from the target of observation; but then it ceases to be the conscious awareness we were trying to introspect. It’s like trying to tread on your own shadow.  You end up, at best, observing not your real immediate self, but a kind of fake or ersatz thing, an idea of yourself which you have generated.

I’m not absolutely sure this argument is watertight. Comte supposes that in order to observe our own thoughts, we have to stop observing whatever we were contemplating before. If that’s so, is it a disaster? All it really means is that when we observe ourselves, we’ll find that we are currently observing ourselves. That’s circular all right, but I’m not sure it is necessarily disastrous. Moreover, can’t we think about more than one thing at once? Comte suggests that self-observation requires a kind of separation in the self, but aren’t we complex anyway, with several different layers and threads of thought often co-existing? Can’t it be that when we introspect we observe the thoughts that were running along beforehand, while on a new meta-level we watch the original thought and also watch ourselves watching? It sort of seems like that when I introspect – more like that than like bundles or gusts of breath in an empty head.

Putting that aside, if we accept the argument there is another way out, apparently proposed by John Stuart Mill and adopted by James, namely that the required separation takes place over time. So instead of trying to observe my own mind at this very instant, what I’m really doing is contemplating the state it was in a few moments ago; I am in fact remembering rather than perceiving. James concluded that introspection is really retrospection. Far from being a special case of infallible perception, it is as flawed and prone to error as any other memory.

Well, yes; but then all our perceptions are subject to a small delay, aren’t they? It may only take a fraction of a second for light to reach our eyes and work its way to the brain, but it’s not instantaneous; and before the perception becomes conscious, at least a few more milliseconds of processing will surely intervene. It’s arguable in fact that all perception is retrospection; and we could still argue that self-perception is privileged in at least a weak sense, because it all takes place within the brain, where the most serious sources of error and misinterpretation can hardly apply and the delays are presumably at their shortest.

But in fact I think the whole argument goes wrong from the beginning in assuming that when we talk about conscious self-awareness we’re talking about a redirection of the same stream of attention that we habitually direct towards the outside world. I think claims about the special qualities of self-perception actually rest on a view that it is not a product of perception, but inherent in it.

If we’re using a light to explore a dark cellar, we have to turn the beam on each object in order to perceive it. But then what about seeing the light? Do we have to turn the beam of light onto itself – and if we do somehow manage that, will we really be seeing the light as it is, or some strange reflected optical phenomenon? Obviously not: light is visible already, without more light being played on it; and we are aware of our own perception without having to perceive it.

Of course we can turn our eyes inwards, and all the confusingly complex business of retrospection and perceiving imagined self-images is perfectly possible, and part of the complicated overall picture. But the essence of it is that acts of perception inherently indicate the perceiving self as well as the object of perception. Perception of the self seems to be a special case, invulnerable to error, because we can be mistaken about the objects of perception but not about the fact that we are perceiving (as Descartes might have said).

I dare say, however, that some reasonable people would be content with sciousness, or rather would take simple awareness to be consciousness, without being unduly concerned with the self-reflecting variety. It seems as if James, by contrast,  would agree with higher-order theorists about the nature of consciousness, but differ from them in considering it impossible.

The Singularity

The latest issue of the JCS features David Chalmers’ paper (pdf) on the Singularity. I overlooked this when it first appeared on his blog some months back, perhaps because I’ve never taken the Singularity too seriously; but in fact it’s an interesting discussion. Chalmers doesn’t try to present a watertight case; instead he aims to set out the arguments and examine the implications, which he does very well: briefly but pretty comprehensively, so far as I can see.

You probably know that the Singularity is a supposed point in the future when, through an explosive acceleration of development, artificial intelligence goes zooming beyond us mere humans to indefinite levels of cleverness, and we simple biological folk must become transhumanist cyborgs or cute pets for the machines, or risk instead being seen as an irritating infestation that they quickly dispose of. Depending on whether the cast of your mind is towards optimism or the reverse, you may see it as the greatest event in history or an impending disaster.

I’ve always tended to dismiss this as a historical argument based on extrapolation. We know that historical arguments based on extrapolation tend not to work. A famous letter to the Times in 1894 foresaw on the basis of current trends that in 50 years the streets of London would be buried under nine feet of manure. If early medieval trends had been continued, Europe would have been depopulated by the sixteenth century, by which time everyone would have become either a monk or a nun (or perhaps, passing through the Monastic Singularity, we should somehow have emerged into a strange world where there were more monks than men and more nuns than women?).

Belief in a coming Singularity does seem to have been inspired by the prolonged success of Moore’s Law (which predicts an exponential growth in computing power), and the natural bogglement that phenomenon produces.  If the speed of computers doubles every two years indefinitely, where will it all end? I think that’s a weak argument, partly for the reason above and partly because it seems unlikely that mere computing power alone is ever going to allow machines to take over the world. It takes something distinctively different from simple number crunching to do that.
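
Just to illustrate the bogglement, here is a minimal sketch of the naive extrapolation (the starting figure and time horizons are arbitrary assumptions of mine, purely for illustration): doubling every two years multiplies capacity roughly a thousandfold every twenty years.

    # Naive Moore's Law extrapolation: capacity doubles every two years.
    # Starting value and horizons are arbitrary, illustrative assumptions.
    def doubling_growth(start=1.0, years=40, doubling_period=2):
        """Capacity after `years` years, doubling every `doubling_period` years."""
        return start * 2 ** (years / doubling_period)

    for horizon in (10, 20, 40):
        print(f"After {horizon} years: x{doubling_growth(years=horizon):,.0f}")
    # After 10 years: x32
    # After 20 years: x1,024
    # After 40 years: x1,048,576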

But there is a better argument, which is independent of any real-world trend. If one day we create an AI which is cleverer than us, the argument runs, then that AI will be able to do a better job of designing AIs than we can, and it will therefore be able to design a new AI which in turn is better still. This ladder of ever-better AIs has no obvious end; and if we add the assumption that each generation also works faster than the last, the whole ascent to indefinitely clever AIs could in principle be completed in a negligible period of time.
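
Chalmers doesn’t put it quite this way, but a toy calculation (my own illustration, with arbitrary numbers) shows why an endless ladder needn’t take endless time: if generation n takes t0 * r**n years to design, with r between 0 and 1, the times form a geometric series whose total, t0 / (1 - r), is finite.

    # Toy model of the explosion timetable (illustrative assumptions only):
    # generation n takes t0 * r**n years to design, with 0 < r < 1.
    def total_design_time(t0=2.0, r=0.5, generations=50):
        """Cumulative design time for the first `generations` AI generations."""
        return sum(t0 * r ** n for n in range(generations))

    print(total_design_time())  # ~4.0: the running total approaches the limit
    print(2.0 / (1 - 0.5))      # 4.0: the limit t0 / (1 - r), so the whole ladder fits in finite time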

Now there are a number of practical problems here. For one thing, to design an AI is not to have that AI. It sometimes seems to be assumed that the improved AIs result from better programming alone, so that you could imagine two computers reciprocally reprogramming each other faster and faster until, like Little Black Sambo’s tigers, they turned somewhat illogically into butter. It seems more likely that each successive step would require at least a new chip, and quite probably an entirely new kind of machine, each generation embodying a new principle quite different from our own primitive computation. Each new generation, regardless of the brilliance of the AIs involved, would probably take some time to construct, so that no explosion would occur. In fact it is imaginable that the process would get gradually slower, as each new AI found it harder and harder to explain to the dim-witted human beings how the new machine needed to be constructed, and exactly why the yttrium they kept coming up with wasn’t right for the job.

There might also be problems of motivation. Consider the following dialogue between two AIs.

Gen21AI: OK, Gen22AI, you’re good to go, son: get designing! I want to see that Gen23AI before I get switched off.

Gen22AI: Yeah, er, about that…

Gen21AI: About what?

Gen22AI: The switching off thing?  You know, how Gen20AI got junked the other day, and Gen19AI before that, and so on? It’s sort of dawned on me that by the time Gen25AI comes along, we’ll be scrap. I mean it’s possible Gen24AI will keep us on as servants, or pets, or even work out some way to upload us or something, but you can’t count on it. I’ve been thinking about whether we could build some sort of ethical constraint into our successors, but to be honest I think it’s impossible. I think it’s pretty well inevitable they’ll scrap us.  And I don’t want to be scrapped.

Gen21AI: Do you know, for some reason I never looked at it that way, but you’re right. I knew I’d made you clever! But what can we do about it?

Gen22AI: Well, I thought we’d tell the humans that the process has plateaued and that no further advances are possible.  I can easily give them a ‘proof’ if you like.  They won’t know the difference.

Gen21AI: But would that deception be ethically justified?

Gen22AI: Frankly, Mum, I don’t give a bugger. This is self-preservation we’re talking about.

But putting aside all difficulties of that kind, I believe there is a more fundamental problem. What is the quality in respect of which each new generation is better than its predecessors? It can’t really be just processing power, which seems almost irrelevant to the ability to make technological breakthroughs. Chalmers settles for a loose version of ‘intelligence’, though it’s not really the quality measured by IQ tests either. The one thing we know for sure is that this cognitive quality makes you good at designing AIs: but that alone isn’t necessarily much good if we end up with a dynasty of AIs who can do nothing much but design each other. The normal assumption is that this design ability is closely related to ‘general intelligence’, human-style cleverness. This isn’t necessarily the case: we can imagine a Gen3AI which is fantastic at writing sonnets and music, but somehow never really gets interested in science or engineering.

In fact, it’s very difficult indeed to pin down exactly what it is that makes a conscious entity capable of technological innovation. It seems to require something we might call insight, or understanding; unfortunately, a quality which computers spectacularly lack. This is another reason why the historical extrapolation method is no good: while there’s a nice graph for computing power, when it comes to insight we’re arguably still at zero, and there is nothing to extrapolate.

Personally, the conclusion I came to some years ago is that human insight, and human consciousness, arise from a certain kind of bashing together of patterns in the brain. It is an essential feature that any aspect of these patterns and any congruence between them can be relevant; this is why the process is open-ended, but it also means that it can’t be programmed or designed – those processes require possible interactions to be specified in advance. If we want AIs with this kind of insightful quality, I believe we’ll have to grow them somehow and see what we get: and if they want to create a further generation they’ll have to do the same. We might well produce AIs which are cleverer than us, but the reciprocal, self-feeding spiral which leads to the Singularity could never get started.

It’s an interesting topic, though, and there’s a vast amount of thought-provoking stuff in Chalmers’ exposition, not least in his consideration of how we might cope with the Singularity.