Sam Harris the Mystic

I must admit to not being very familiar with Sam Harris’ work: to me he’s been primarily a member of the Four Horsemen of New Atheism: Dawkins, Dennett, Hitchens and that other one…  However, in the video here he expresses a couple of interesting views, one about the irreducible subjectivity of consciousness, the other about the illusory nature of the self. His most recent book – first chapter here – apparently seeks to reinterpret spirituality for atheists; he seems basically to be a rationalist Buddhist (there is of course no contradiction involved in becoming a Buddhist while remaining an atheist).

It’s a slight surprise to find an atheist champion who does not want to do away with subjectivity. Harris accepts that there is an interior subjective experience which cannot simply be reduced to its objective, material correlates: he likens the two aspects to two faces of a coin. If you like, you can restrict yourself to talking about one face of the coin, but you can’t go on to say that the other doesn’t really exist, or that features of the heads side are really just features of the tails side seen from another angle.  So far as it goes, this is all very sensible, and I think the majority of people would go along with it quite a way. What’s a little odd is that Harris seems content to rest there: it’s just a feature of the world that it has these two aspects, end of story. Most of us still want some explanation; if not a reduction then at least some metaphysics which allows us to go on calling ourselves monists in a respectable manner.

Harris’ second point is also one that many others would agree with, but not me. The self, he says, is an illusion: there is no consistent core which amounts to a self. In these arguments I feel the sceptics are often guilty of knocking down straw men: they set up a ridiculous version of the self and demolish that without considering more reasonable ideas. So, they deny that there is any specific part of the brain that can be identified with the self, they deny the existence of a Cartesian Theatre, or they deny any unchanging core. But whoever said the self had to be unchanging or simple?

Actually, we can give a pretty good account of the self without ever going beyond common sense. Human beings are animals, which means I’m a big live lump of meat which has a recognisable identity at the simple physical and biological level: to deny that takes a more radical kind of metaphysical scepticism than most would be willing to go in for.  The behaviour of that ape-like lump of meat is also governed by a reasonably consistent set of memories and beliefs. That’s all we need for a self, no mystery here, folks, move along please.

Now of course my memories and my beliefs change, as does the size and shape of the beast they inhabit. At 56 I’m not the same as I was at 6.  But so what? Rivers, as Heraclitus reminds us, never contain exactly the same water at two different moments: they rise and fall, they meander and change course. We don’t have big difficulties over believing in rivers, though, or even in the Nile or the Amazon in particular. There may be doubts about what to treat as the true origin of the Nile, but people don’t go round saying it’s an illusion (unless they’ve gone in for some of that more radical metaphysics). On this, I think Dennett’s conception of the self as a centre of narrative gravity is closer to the truth than most, though he has, unfairly, I think, been criticised for equivocating over its reality.

Frequently what people really want to deny is not the self so much as the soul. Often they also want to deny that there is a special inward dimension: but as we’ve seen, Harris affirms that. He seems instead almost to be denying the qualic sense of self I suggested a while back. Curiously, he thinks that we can, in fact, overcome the illusion of selfhood: in certain exalted states we can transcend ourselves and see the world as it really is.

This is strange, because you would expect the illusion of self to stem from something fundamental about conscious experience (some terrible bottleneck, some inherent limitation), not from small, adjustable details of chemistry. Can selfhood really be a mental disease caused by an ayahuasca deficiency? Harris asserts that in these exalted states we’re seeing the world as it truly is, but wouldn’t that be the best state for us to stay in? You’d think we would have evolved that way if seeing reality just needed some small tweaks to the brain.

It does look to me as if Harris’ thinking has been conditioned a little too much by Buddhism.  He speaks with great respect of the rational basis of Buddhism, pointing out that it requires you to believe its tenets merely because they can be shown to be true, whereas Christianity seems to require as an arbitrarily specified condition of salvation your belief in things that are acknowledged to be incredible. I have a lot of sympathy for that point of view; but the snag is that if you rely on reasoning your reasoning has to be watertight: and, at the risk of giving offence, Buddhism’s isn’t.

Buddhism tells us that the world is in constant change; that change inevitably involves pain, and that to avoid the pain we should avoid the world. As it happens, it adds, the mutable world and the selves we think inhabit it are mere illusions, so if we can dispel those illusions we’re good.

But that’s a bit of a one-sided outlook, surely? Change can also involve pleasure, and in most of our lives there’s probably a neutral to positive balance; so surely it makes more sense to engage and try to improve that balance than opt out? Moreover, should we seek to avoid pain? Perhaps we ought to endure it, or even seek it out? Of course, people do avoid pain, but why should we give priority to the human aversion to pain and ignore the equally strong human aversion to non-existence? And what does it mean to call the whole world an illusion: isn’t an illusion precisely something that isn’t really part of the world? Everything we see is smoke and mirrors, very well, but aren’t smoke and mirrors (and indeed, tears, as Virgil reminds us) things?

A sceptical friend once suggested to me that all the major religions were made up by frustrated old men, often monks: the one thing they all agree on is that being a contemplative like them is just so much more worthwhile than you’d think at first sight, and that the cheerful ordinary life they missed out on is really completely worthless if not sinful. That’s not altogether accurate – Taoism, for example, praises ordinary life (albeit with a bit of a smirk on its lips);  but it does seem to me that Buddhism is keener to be done with the world than reason alone can justify.

It must be said that Harris’ point of view is refreshingly sophisticated and nuanced in comparison to the Dawkinsian weltanschauung; he seems to have the rare but valuable ability to apply his critical faculties to his own beliefs as well as other people’s. I really should read some of his books.


Theorobotics

This piece (via MLU) notes how a robot is giving lectures in theology – or perhaps it would be more accurate to say that it’s being used as a prop for some theology lectures. It helps dramatise certain human issues, either the ‘strong’ ones about it lacking the immortal soul human beings are taken to have in Christian thought, or some ‘weak’ ones about more general ethical issues.

Nothing wrong with that; in fact I’ve heard it argued that all thinking robots would be theists, because to them it would seem obvious, almost self-evident, that conscious entities need a creator. No doubt D.A.V.I.D helps to raise interest, but he doesn’t seem half as provocative as the Jesus automaton described here; not a modern robot but a feature of the medieval church robot scene, apparently a far livelier business than we could ever have guessed.

It’s certainly true that those old automata had a deep impact on Western thought about the mind. Descartes describes hydraulic ones, and it’s clear that they helped form his idea of the human body as a mere machine. The study of anatomy was backing this up – Leonardo da Vinci, for example, had already concluded on the basis of anatomy alone that the brain was the centre from which the body was controlled. Together these two influences banished older ideas of volition acting throughout the body, with your arm moving simply because you wanted it to. These days, of course, some actually think we have gone too far with our brain-centrism, and need to bring in ideas of embodiment and mind extension; but rightly or wrongly the automata undoubtedly changed our minds dramatically.

The same kind of thing happened when effective computers came on the scene. Before then it had seemed obvious that though the body might be a machine, the mind categorically was not; now there was a persuasive case for thinking our minds as well as our bodies might be machines, and I think our idea of consciousness has gradually been reshaped since then, so that it can fill the role of ‘the thing machines can’t do’ for those who think there is such a thing.

It might be that this has distorted our way of looking at consciousness, which never occupied an important place in ancient thought, and does not really feature in the same way in non-western traditions (at least so far as I can tell). So perhaps robots shouldn’t be teaching us about the mind. On the other hand, they sometimes come up with interesting stuff. Dennett’s discussion of the frame problem is a nice example. Most people take the frame problem – in essence, dealing with all the small background details of real-world situations which multiply indefinitely, are probably irrelevant, but might just come back to bite you – as a problem for AI: but Dennett thoughtfully suggested that it was in fact a problem for all forms of intelligence. It was just that the human brain dealt with it so smoothly we’d never noticed it before: but to explain how the brain dealt with it was at least as problematic as building a robot that could handle it. In this way the robots had given us a new insight into human cognition.  So perhaps we should listen to them?
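To see the computational bite of the problem, here is a minimal sketch in Python – my own toy illustration with invented predicates, not anything Dennett proposed. A naive system that represents the world as a bag of explicit facts must, after each action, reconsider every fact it holds, even though almost none of them are affected:

    # A toy frame problem: the world as a set of explicit facts
    # (all predicates invented for illustration). After one small
    # action, a naive updater inspects every fact to decide whether
    # it still holds -- the work grows with the size of the world
    # description, not with the size of the change.

    facts = {
        ("on", "cup", "table"),
        ("door", "kitchen", "open"),
        ("colour", "wall", "white"),
        ("battery", "robot", "charged"),
        # ...a realistic description would run to millions
    }

    def apply_action(facts, removed, added):
        checked = 0
        survivors = set()
        for fact in facts:
            checked += 1              # every fact is reconsidered...
            if fact not in removed:   # ...though nearly all are untouched
                survivors.add(fact)
        return survivors | added, checked

    facts, checked = apply_action(
        facts,
        removed={("on", "cup", "table")},
        added={("on", "cup", "shelf")},
    )
    print(checked, "facts checked for a change that touched one of them")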

The Intuitional Problem

Mark O’Brien gives a good statement of the computationalist case here; clear, brief, and commendably mild and dispassionate. My impression is that enthusiasm for computationalism – approximately, the belief that human thought is essentially computational in nature – is not what it was. It’s not that computationalists lost the argument, it’s more that the robots failed to come through. What AI research has delivered so far is, in this respect, much less than the optimists had hoped.

Anyway O’Brien’s case rests on two assumptions:

  • Naturalism is true.
  • The laws of physics are computable, that is, any physical process can be simulated to any desired degree of precision by a computer.

It’s immediately clear where he’s going. To represent it crudely, the intuition here is that naturalism means the world ultimately consists of physical processes, any physical process can run on a computer, ergo anything in the world can run on a computer, ergo it must be possible to run consciousness on a computer.

There’s an awful lot packed into those two assumptions. O’Brien tackles one issue with the idea of simulation: namely that simulating something isn’t doing it for real. A simulated rainstorm doesn’t make us wet. His answer is that simulation doesn’t produce physical realities, but it does seem to work for abstract things. I think this is basically right. If we simulate a flight to Paris, we don’t end up there; but the route calculated by the program is the actual route; it makes no sense to say it’s only a simulated route, because it’s actually identical with the one we should use if we really went to Paris. So the power of simulation is greater for informational entities than for physical ones, and it’s not unreasonable to suggest that consciousness seems more like a matter of information than of material stuff.
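To make the asymmetry concrete, here is a minimal sketch (Python, over a toy flight network I have invented for the purpose). The search leaves everyone exactly where they were, yet the route it returns is not a ‘simulated route’ but the very sequence of legs we would actually fly:

    import heapq

    # A toy flight network; the distances are purely illustrative.
    graph = {
        "London": {"Paris": 1.0, "Amsterdam": 1.1},
        "Amsterdam": {"Paris": 1.2},
        "Paris": {},
    }

    def shortest_route(start, goal):
        # Standard Dijkstra search: (cost so far, current node, path).
        queue = [(0.0, start, [start])]
        visited = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return cost, path
            if node in visited:
                continue
            visited.add(node)
            for nxt, leg in graph[node].items():
                heapq.heappush(queue, (cost + leg, nxt, path + [nxt]))
        return None

    print(shortest_route("London", "Paris"))
    # (1.0, ['London', 'Paris']) -- a real route; nobody has moved an inch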

There’s a deeper point, though. To simulate is not to reproduce: a simulation is the reproduction of the relevant aspects of the thing simulated. It’s implied that some features of the thing simulated are left out, ones that don’t matter. That’s why we get the different results for our Parisian exercise: the simulation necessarily leaves our actual physical locations untouched; those are irrelevant when it comes to describing the route, but essential when it comes to actually visiting Paris.

The problem is we don’t know which properties are relevant to consciousness, and to assume they are the kind of thing handled by computation simply begs the question. It can’t be assumed without an argument that physical properties are irrelevant here: John Searle and Roger Penrose in different ways both assert that they are of the essence. Even if consciousness doesn’t rely quite so brutally as that on the physical nature of the brain, we need to start with a knowledge of how consciousness works. Otherwise, we can’t tell whether we’ve got the right properties in our simulation –  even if they are in principle computational.

I don’t myself think Searle or Penrose are right: but I think it’s quite likely that the causal relationships in cognitive processes are the kind of essential thing a simulation would have to incorporate. This is a serious problem because there are reasons to think computer simulations never reproduce the causal relationships intact. In my brain event A causes event B and that’s all there is to it: in a computer, there’s always a script involved. At its worst what we get is a program that holds up flag A to represent event A and then flag B to represent event B: but the causality is mediated through the program. It seems to me this might well be a real issue.
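A deliberately crude sketch of what I mean (Python; the flags and their names are of course made up): in the little program below, event B follows event A only because the script says so. Nothing about A brings B about; the causal work is done entirely by the interpreter executing the lines in order:

    # Two "events" represented as flags. There is no causal pathway
    # from A to B here: the succession is supplied by the script and
    # mediated by the program that runs it.

    events = []

    def raise_flag(name):
        events.append(name)   # merely records that the event was asserted

    raise_flag("A")           # the script holds up flag A...
    raise_flag("B")           # ...and then flag B, by fiat

    print(events)             # ['A', 'B']: a represented succession,
                              # not one event causing another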

O’Brien tackles another of Searle’s arguments: that you can’t get semantics from syntax: ie, you can’t deal with meanings just by manipulating digits. O’Brien’s strategy here is to assume a robot that behaves pretty much the way I do: does it have beliefs? It says it does, and it behaves as if it did. Perhaps we’re not willing to concede that those are real beliefs: OK, let’s call them beliefs*. On examination it turns out that the differences between beliefs and beliefs* are nugatory: so on grounds of parsimony if nothing else we should assume they are the same.

The snag here is that there are no robots that behave the way I do.  We’ve had sixty years of failure since Turing: you can’t just have it as an assumption that our robot pals are self-evidently achievable (alas).  We know that human beings, when they do translation for example, extract meanings and then put the meanings into other words, whereas the most successful translation programs avoid meanings altogether and simply swap text strings for text strings according to a kind of mighty look-up table.
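A toy caricature of that strategy (Python; the phrase table is invented, and real statistical systems were of course vastly more elaborate): strings go in, strings come out, and at no point does anything in the program stand for a meaning:

    # String-for-string substitution from a made-up phrase table.
    phrase_table = {
        "good morning": "bonjour",
        "the cat": "le chat",
        "is sleeping": "dort",
    }

    def translate(sentence):
        out = sentence.lower()
        # Substitute longer phrases first so they aren't broken up.
        for src in sorted(phrase_table, key=len, reverse=True):
            out = out.replace(src, phrase_table[src])
        return out

    print(translate("Good morning, the cat is sleeping"))
    # -> "bonjour, le chat dort": passable output, no meanings anywhere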

That kind of strategy won’t work when dealing with the formless complexity of the real world: you run into the analogues of the Frame Problem or you just don’t really get started. It doesn’t even work that well for language: we know now that human understanding of language relies on pragmatic Gricean implicatures, and no-one can formalise those.

Finally O’Brien turns to qualia, and here I agree with him on the broad picture. He describes some of the severe difficulties around qualia and says, rightly I think, that in the end it comes down to competing intuitions.  All the arguments for qualia are essentially thought experiments: if we want, we can just say ‘no’ to all of them (as Dennett and the Churchlands, for example, do). O’Brien makes a kind of zombie argument: my zombie twin, who lacks qualia but resembles me in all other respects, would claim to have qualia and would talk about them just the way we do.  So the explanation for talk about qualia is not qualia themselves: given that, there’s no reason to think we ourselves have them.

Up to a point: but we get the conclusion that my zombie twin talks about qualia purely ex hypothesi: it’s just specified. It’s not an explanation, and I think that’s what we really need to be in a position to dismiss the strong introspective sense most people have that qualia exist. If we could actually explain what makes the Twin talk about qualia, we’d be in a much better position.

So I mostly disagree, but I salute O’Brien’s exposition, which is really helpful.

Metaconsciousness

Sci provided some interesting links in his comment on the previous post, one a lecture by Raymond Tallis. Tallis offers some comfort to theists who have difficulty explaining how or why an eternal creator God should be making one-off interventions in the time-bound secular world he had created.  Tallis grants that’s a bad problem, but suggests atheists face an analogous one in working out how the eternal laws of physics relate to the local and particular world we actually live in.

These are interesting issues which bear on consciousness in at least two important ways, through human agency and the particularity of our experience; but today I want to leave the main road and run off down a dimly-lit alley that looks as if it contains some intriguing premises.

For the theists the problem is partly that God is omniscient and the creator of everything, so whatever happens, he should have foreseen it and arranged matters so that he does not need to intervene. An easy answer is that in fact his supposed interventions are actually part of how he set it up; they look like angry punishments or miraculous salvations to us, but if we could take a step back and see things from his point of view, we’d see it’s all part of the eternal plan, set up from the very beginning, and makes perfect sense.

More worrying is the point that God is eternal and unchanging; if he doesn’t change he can’t be conscious.  I’ve mentioned before that our growing understanding of the brain, imperfect as it is, is making it harder to see how God could exist, and so making agnosticism a less comfortable position. We sort of know that human cognition depends on a physical process; how could an immaterial entity even get started? Instead of asking whether God exists, we’re getting to a place where we have to ask first how we can give any coherent account of what he could be – and it doesn’t look good, unless you’re content with a non-conscious God (not necessarily absurd) or a physical old man sitting on a cloud (which to be fair is probably how most Christians saw it until fairly recent times).

So God doesn’t change, and our developing understanding is redefining consciousness in ways that make an unchanging consciousness seem to involve a direct contradiction in terms. A changeless process? At this point I imagine an old gentleman dressed in black who has been sitting patiently in the corner, leaning forward with a kindly smile and pointing out that what we’re trying to do is understand the mind of God. No mere human being can do that, he says, so no wonder you’re getting into a muddle! This is the point where faith must take over.

Well, we don’t give up so easily; but perhaps he has a point; perhaps God has another and higher form of consciousness – metaconsciousness, let’s say – which resolves all these problems, but in ways we can never really understand.  Perhaps when the Singularity comes there will be robots who attain metaconsciousness, too: they may kindly try to explain it to us, but we’ll never really be able to get our heads round it.

Now of course, computers can already sail past us in terms of certain kinds of simple capacity: they can remember far more data much more precisely than we can, and they can work quickly through a very large number of alternatives. Even this makes a difference: I’ve mentioned before that exhaustive analysis by computers has shown that certain chess positions long considered draws are actually wins for one side: the winning tactics are just so long and complicated that human beings couldn’t see them, and can’t understand them intuitively even when they see them played out.  But that’s not really any help; here we’re looking for something much more impressive. What we want to do is take the line which connects an early mammal’s level of cognition to ours and extend it until we’ve gone at least as far beyond the merely human. In facing up to this task, we’re rather like Flatlanders trying to understand the third dimension, or ordinary people trying to grasp the fourth: it isn’t really possible to get it intuitively, but we ought to be able to say some things about it by extrapolation.

So, early mammal – let’s call the beast Em (I don’t want to pick a real animal because that will derail us into consideration of how intelligent it really is) – works very largely on an instinctive or stimulus/response basis. It sees food, it pursues it and attempts to eat it. It lives in a world of concrete and immediate entities and has responses ready for some of them – fairly complex and somewhat variable responses, but fixed in the main. If we could somehow get Em to play chess with us, he would treat his men like a barbarian army, launching them towards us haphazardly en masse or one at a time, and we should have no trouble picking them off.

Human consciousness, by contrast, allows us to consider abstract entities (though we do not understand their nature well), to develop abstract general goals and to make plans and intentions which deal with future and possible events. These plans can also be of great complexity. We can even play out long-range chess strategies, provided they’re not too complex.  This kind of thing allows us to do a better job than Em of getting food, though to Em a great deal of our food-related activity is completely opaque and apparently unmotivated. A lot of the time when we’re working on activities that will bring us food it will seem to Em as if we’re doing nothing, or at any rate, nothing at all related to food.

We can take it, then, that God or a future robot which is metaconscious will have moved on from mere goals to something more sophisticated – metagoals, whatever they are. He, or it, will understand abstractions as well as we understand concrete objects, and will perhaps employ meta-abstractions about which even he, or it, might be a little shakier. God and the robot will at times have goals, just as we eat food, but their activities in respect of them will be both far more powerful and productive than the simple direct stuff we do, and in our eyes utterly unrelated to the simpler goals we can guess at. A lot of the time they may appear to be doing nothing when they are actually pressing forward with an important metaproject.

But look, you may say, we have no reason to suppose this meta stuff exists at all.  Em was not capable of abstract thought; we are. That’s the end of the sequence; you either got it or you don’t got it. We got it: our memory capacity and so on may be improvable, but there isn’t any higher realm. Perhaps God’s objectives would be longer term and more complex than ours, but that’s just a difference of degree.

It could be so, but that’s how things would seem to Em, mutatis mutandis. Rocks don’t get food, he points out; but we early mammals get it. See food, take food, eat food: we get it. Now humans may see further (nice trick, that hind legs thing); they may get bigger food. But this talk of plans means nothing; there’s nothing to your plans and your abstraction thing except getting food. You do get food on a big scale, I notice, but I guess that’s really just luck or some kind of magic. Metaconsciousness would seem similarly unimaginable to us, and its results would equally look like magic, or like miracles.

This all fits very well, of course, with Colin McGinn’s diagnosis. According to him, there’s nothing odd about consciousness in itself, we just lack the mental capacity to deal with it. The mental operations available to us confine us within a certain mental sphere: we are restricted by cognitive closure. It could be that we need metaconsciousness to understand consciousness (and then, unimaginably, metametaconsciousness in order to understand metaconsciousness).

This is an odd place to have ended up, though: we started out with the problem that God is eternal and therefore can’t be conscious: if He can’t be conscious then He certainly can’t ascend to even higher cognitive states, can He? Remember we thought metaconsciousness would probably enable him to understand Platonic abstractions in a way we can’t, and even deal with meta-Platonic entities. Perhaps at that level the apparent contradiction between being unchanging and being aware is removed or bypassed, rather the way that putting five squares together in two dimensions is absurd but a breeze in three: hell, put six together and make a cube of it!

Do I really believe in metaconsciousness? No, but excuse me; I have to go and get food.