It’s not Intelligence

The robots are (still) coming. Thanks to Jesus Olmo for this TED video of Sam Harris presenting what we could loosely say is a more sensible version of some Singularity arguments. He doesn’t require Moore’s Law to go on working, and he doesn’t need us to accept the idea of an exponential acceleration in AI self-development. He just thinks AI is bound to go on getting better; if it goes on getting better, at some stage it overtakes us; and eventually perhaps it gets to the point where we figure in its mighty projects much the way ants on some real estate feature in ours.

Getting better, overtaking us; better at what? One weakness of Harris’ case is that he talks just about intelligence, as though that single quality were an unproblematic universal yardstick for both AI and human achievement. Really though, I think we’re talking about three quite radically different things.

First, there’s computation; the capacity, roughly speaking, to move numbers around according to rules. There can be no doubt that computers keep getting faster at doing this; the question is whether it matters. One of Harris’ arguments is that computers go millions of times faster than the brain so that a thinking AI will have the equivalent of thousands of years of thinking time while the humans are still getting comfy in their chairs. No-one who has used a word processor and a spreadsheet for the last twenty years will find this at all plausible: the machines we’re using now are so much more powerful than the ones we started with that the comparison defeats metaphor, but we still sit around waiting for them to finish. OK, it’s true that for many tasks that are computationally straightforward – balancing an inherently unstable plane with minute control adjustments, perhaps – computers are so fast they can do things far beyond our range. But to assume that thinking about problems in a human sort of way is a task that scales with speed of computation just begs the question. How fast are neurons? We don’t really understand them well enough to say. It’s quite possible they are in some sense fast enough to get close to a natural optimum. Maybe we should make a robot that runs a million times faster than a cheetah first and then come back to the brain.

The second quality we’re dealing with is inventiveness; whatever capacity it is that allows us to keep on designing better machines. I doubt this is really a single capacity; in some ways I’m not sure it’s a capacity at all. For one thing, to devise the next great idea you have to be on the right page. Darwin and Wallace both came up with natural selection because both had been exposed to theories of evolution, both had studied the profusion of species in tropical environments, and both had read Malthus. You cannot devise a brilliant new chip design if you have no idea how the old chips worked. Second, the technology has to be available. Hero of Alexandria could design a steam engine, but without the metallurgy to make strong boilers, he couldn’t have gone anywhere with the idea. The basic concept of television had been around since films and telegraph came together in someone’s mind, but it took a series of distinct advances in technology to make it feasible. In short, there is a certain order in these things; you do need a certain quality of originality, but again it’s plausible that humans already have enough for something like maximum progress, given the right conditions. Of course so far as AI is concerned, there are few signs of any genuinely original thought being achieved to date, and every possibility that mere computation is not enough.

Third is the quality of agency. If AIs are going to take over, they need desires, plans, and intentions. My perception is that we’re still at zero on this; we have no idea how it works and existing AIs do nothing better than an imitation of agency (often still a poor one). Even supposing eventual success, this is not a field in which AI can overtake us; you either are or are not an agent; there’s no such thing as hyper-agency or being a million times more responsible for your actions.

So the progress of AI with computationally tractable tasks gives no particular reason to think humans are being overtaken generally, or are ever likely to be in certain important respects. But that’s only part of the argument. A point that may be more important is simply that the three capacities are detachable. So there is no reason to think that an AI with agency automatically has blistering computational speed, or original imagination beyond human capacity. If those things can be achieved by slave machines that lack agency, then they are just as readily available to human beings as to the malevolent AIs, so the rebel bots have no natural advantage over any of us.

I might be biased over this because I’ve been impatient with the corny ‘robots take over’ plot line since I was an Asimov-loving teenager. I think in some minds (not Harris’) these concerns are literal proxies for a deeper and more metaphorical worry that admiring machines might lead us to think of ourselves as mechanical in ways that affect our treatment of human beings. So the robots might sort of take over our thinking even if they don’t literally march around zapping us with ray guns.

Concerns like this are not altogether unjustified, but they rest on the idea that our personhood and agency will eventually be reduced to computation. Perhaps when we eventually come to understand them better, that understanding will actually tell us something quite different?

Sam Harris the Mystic

I must admit to not being very familiar with Sam Harris’ work: to me he’s been primarily a member of the Four Horsemen of New Atheism: Dawkins, Dennett, Hitchens and that other one… However in the video here he expresses a couple of interesting views, one about the irreducible subjectivity of consciousness, the other about the illusory nature of the self. His most recent book – first chapter here – apparently seeks to reinterpret spirituality for atheists; he seems basically to be a rationalist Buddhist (there is of course no contradiction involved in becoming a Buddhist while remaining an atheist).

It’s a slight surprise to find an atheist champion who does not want to do away with subjectivity. Harris accepts that there is an interior subjective experience which cannot simply be reduced to its objective, material correlates: he likens the two aspects to two faces of a coin. If you like, you can restrict yourself to talking about one face of the coin, but you can’t go on to say that the other doesn’t really exist, or that features of the heads side are really just features of the tails side seen from another angle.  So far as it goes, this is all very sensible, and I think the majority of people would go along with it quite a way. What’s a little odd is that Harris seems content to rest there: it’s just a feature of the world that it has these two aspects, end of story. Most of us still want some explanation; if not a reduction then at least some metaphysics which allows us to go on calling ourselves monists in a respectable manner.

Harris’ second point is also one that many others would agree with, but not me. The self, he says, is an illusion: there is no consistent core which amounts to a self. In these arguments I feel the sceptics are often guilty of knocking down straw men: they set up a ridiculous version of the self and demolish that without considering more reasonable ideas. So, they deny that there is any specific part of the brain that can be identified with the self, they deny the existence of a Cartesian Theatre, or they deny any unchanging core. But whoever said the self had to be unchanging or simple?

Actually, we can give a pretty good account of the self without ever going beyond common sense. Human beings are animals, which means I’m a big live lump of meat which has a recognisable identity at the simple physical and biological level: to deny that takes a more radical kind of metaphysical scepticism than most would be willing to go in for.  The behaviour of that ape-like lump of meat is also governed by a reasonably consistent set of memories and beliefs. That’s all we need for a self, no mystery here, folks, move along please.

Now of course my memories and my beliefs change, as does the size and shape of the beast they inhabit. At 56 I’m not the same as I was at 6.  But so what? Rivers, as Heraclitus reminds us, never contain exactly the same water at two different moments: they rise and fall, they meander and change course. We don’t have big difficulties over believing in rivers, though, or even in the Nile or the Amazon in particular. There may be doubts about what to treat as the true origin of the Nile, but people don’t go round saying it’s an illusion (unless they’ve gone in for some of that more radical metaphysics). On this, I think Dennett’s conception of the self as a centre of narrative gravity is closer to the truth than most, though he has, unfairly, I think, been criticised for equivocating over its reality.

Frequently what people really want to deny is not the self so much as the soul. Often they also want to deny that there is a special inward dimension: but as we’ve seen, Harris affirms that. He seems instead almost to be denying the qualic sense of self I suggested a while back. Curiously, he thinks that we can, in fact, overcome the illusion of selfhood: in certain exalted states he thinks we can transcend ourselves and see the world as it really is.

This is strange, because you would expect the illusion of self to stem from something fundamental about conscious experience (some terrible bottleneck, some inherent limitation), not from small, adjustable details of chemistry. Can selfhood really be a mental disease caused by an ayahuasca deficiency? Harris asserts that in these exalted states we’re seeing the world as it truly is, but wouldn’t that be the best state for us to stay in? You’d think we would have evolved that way if seeing reality just needed some small tweaks to the brain.

It does look to me as if Harris’ thinking has been conditioned a little too much by Buddhism.  He speaks with great respect of the rational basis of Buddhism, pointing out that it requires you to believe its tenets merely because they can be shown to be true, whereas Christianity seems to require as an arbitrarily specified condition of salvation your belief in things that are acknowledged to be incredible. I have a lot of sympathy for that point of view; but the snag is that if you rely on reasoning your reasoning has to be watertight: and, at the risk of giving offence, Buddhism’s isn’t.

Buddhism tells us that the world is in constant change; that change inevitably involves pain, and that to avoid the pain we should avoid the world. As it happens, it adds, the mutable world and the selves we think inhabit it are mere illusions, so if we can dispel those illusions we’re good.

But that’s a bit of a one-sided outlook, surely? Change can also involve pleasure, and in most of our lives there’s probably a neutral to positive balance; so surely it makes more sense to engage and try to improve that balance than opt out? Moreover, should we seek to avoid pain? Perhaps we ought to endure it, or even seek it out? Of course, people do avoid pain, but why should we give priority to the human aversion to pain and ignore the equally strong human aversion to non-existence? And what does it mean to call the whole world an illusion: isn’t an illusion precisely something that isn’t really part of the world? Everything we see is smoke and mirrors, very well, but aren’t smoke and mirrors (and indeed, tears, as Virgil reminds us) things?

A sceptical friend once suggested to me that all the major religions were made up by frustrated old men, often monks: the one thing they all agree on is that being a contemplative like them is just so much more worthwhile than you’d think at first sight, and that the cheerful ordinary life they missed out on is really completely worthless if not sinful. That’s not altogether accurate – Taoism, for example, praises ordinary life (albeit with a bit of a smirk on its lips);  but it does seem to me that Buddhism is keener to be done with the world than reason alone can justify.

It must be said that Harris’ point of view is refreshingly sophisticated and nuanced in comparison to the Dawkinsian weltanschauung; he seems to have the rare but valuable ability to apply his critical faculties to his own beliefs as well as other people’s. I really should read some of his books.