Intrinsic natures

Following up on his post about the simplicity argument for panpsychism, Philip Goff went on to defend the idea that physical things must have an intrinsic nature. Actually, it would be more accurate to say he attacks the idea that they don’t have intrinsic natures. Those who think that listing the causal properties of a thing exhausts what we can say about its physical nature are causal structuralists, he says, committed to the view that everything reduces to dispositions; dispositions to burn, to attract, or to break, for example.

But when we come to characterise these dispositions, we find we can only do it in terms of other dispositions. A disposition to burn may involve dispositions to glow, get hot, generate ash, and so on. So we get involved in an endless circularity. Some might argue that this is OK, that we can cope with a network of mutual definitions that is, in the end, self-supporting; Goff says this is as unsatisfactory as making our living by taking in each other’s washing.

There’s a problem there, certainly. I think a bit more work is needed to nail down the idea that to reject intrinsic natures is necessarily to embrace causal structuralism, but no doubt Goff has done that in his fuller treatment. A more serious gap, it seems to me, is an explanation of how intrinsic natures get us out of this bind.

It seems to me that in practice we do not take the scholarly approach of identifying a thing through its definition; more usually we just show people. What is fire? This, we say, displaying a lit match. Goff gives an amusing example of three boxes containing a Splurge, a Blurge, and a Kurge, each defined in terms of the next in an inescapable circle. But wouldn’t you open the box?

We could perhaps argue that recognising the Splurge is just grasping its intrinsic nature. But actually we would recognise it by sight, which depends on its causal properties; its disposition to reflect light, if you like. Those causal properties cannot have anything to do with its intrinsic nature, which seems to drop out of the explanation; in fact its intrinsic nature could logically change without affecting the causal properties at all.

This apparently radical uselessness of intrinsic properties, like the similar ineffectual nature of qualia, is what causes me the greatest difficulty with a perspective that would otherwise have some appeal.

Conscious Electrons

Philip Goff gives a brief but persuasive new look at his case for panpsychism (the belief that experience, or consciousness in some form, is in everything) in a recent post on the OUPblog site. In the past, he says, explanations have generally been ‘brain first’. Here’s this physical object, the brain – and we understand physical objects well enough – the challenge is to explain how this scrutable piece of biological tissue on the one hand gives rise to this evanescent miracle, consciousness, on the other. That way of looking at it, suggests Goff, turns out to be the wrong way round. We don’t really understand the real nature of matter at all: what we understand is that supposedly mysterious consciousness. So what we ought to do is start there and work towards a better understanding of matter.

This undoubtedly appeals to a frustration many philosophers must have felt. People at large tend to take it for granted that what we really know about is the physical external world around us, described in no-nonsense terms (with real equations!) by science. Phenomenology and all that stuff about what we perceive is an airy-fairy add-on.  In fact, of course, it’s rather the other way round. The only thing we know directly, and so, perhaps, with certainty, is our own experience; the external world and the theories of science all finally rest on that first-person foundation. Science is about observation and observation is ultimately a matter of sensory experience.

Goff notes that physics gives us no account of the intrinsic nature of matter, only its observable and causal properties. We know things, as it were, only from the outside. But in the case of our own experience, uniquely, we know it from the inside, and have direct acquaintance with its essential nature. When we experience redness we experience it unmasked; in physics it hides behind a confusing array of wavelengths, reflectances, and other highly abstract and ambiguous concepts, divorced from experience by many layers of reasoning. Is there not an argument for the hypothesis that the intrinsic nature of matter is the same as the intrinsic nature of the only thing whose intrinsic nature we know – our own experience? Perhaps after all we should consider supposing that even electrons have some tiny spark of awareness.

In fact Goff sees two arguments. One is that there simply seems no other reasonable way of accounting for consciousness. We can’t see where it could have come from, so let’s assume it has always been everywhere. Goff doesn’t like this case and thinks it is particularly prone to the compositional difficulties often urged against panpsychism; how do these micro-consciousnesses stack up in larger entities, and how in particular do they relate to the kind of consciousness we seem to have in our brains? Goff prefers to rest on simplicity; panpsychism is just the most parsimonious explanation. Instead of having two, or multiple, kinds of intrinsic nature, we assume that there’s just one. He realises that some may see this as a weak argument far short of proof, but parsimony is a strong and legitimate criterion for judging between theories; indeed, it’s indispensable.

Now I’m on record as suggesting that things out there have one property that falls outside all physical theories – namely reality. Am I not tempted to throw in my lot with Goff and suggest that as a further simplification we could say that reality just consists in having an intrinsic nature, i.e. having experience? Not really.

Let’s go back a bit. Do we really understand our conscious experience? We have to remember that consciousness seems to have two faces. To use Ned Block’s terms, there is access or a-consciousness; the sort that is involved in governing our behaviour, making decisions, deciding what to say, and other relatively effable processes. Then there is phenomenal or p-consciousness, pure experience, the having of qualia. It seems clear it is p-consciousness that Goff, and I think all panpsychists, are talking about. No-one supposes electrons or rocks are making rational decisions, only having some kind of experience. The problem is that though we do seem to have direct acquaintance with that sort of consciousness, we haven’t succeeded in saying anything much about it. In fact it seems that nothing we say about it can have been caused by it, because in itself it lacks causal powers. Now in one way this is exactly what Goff would expect; these difficulties are just those that come up when talking about qualia anyway, so in a back-handed sort of way we could even say they support his case. But if we’re looking for good explanations, the bucket is coming up dry; no wonder we’re tempted to go back and talk some more about the relatively tractable brain-first perspective.

In addition there are reasons to hesitate over the very idea that physical things have an intrinsic nature. Either this nature affects observable properties or it doesn’t. If it does, then we can use its effects to learn about it and discuss it; to naturalise it, in fact, and bring it within the pale of science. If it doesn’t – how can we talk about it? It might change radically or disappear and return, and we should never know. Goff rests his case on parsimony; we might counter that by observing that a theory that fills the cosmos with experiencing entities looks profligate in some respects. Isn’t there a better strategy anyway? Goff wants to simplify by assuming that apparently dead matter is in fact inwardly experiential like us: but why not go the other way and believe that we actually are as dead matter seems to be; lacking in qualic, phenomenal experience? Why not conclude that a-consciousness is all we’ve got, and that the semblance of p-consciousness is a delusion, as sceptics have argued? We can certainly debate on many other grounds whether that view is correct, but it seems hard to deny that dispensing with phenomenal experience altogether must be the most parsimonious take on the subject.

So I’m not convinced, but I think that within the natural constraints of a blog post, Goff does make a lucid and attractive presentation of his case.

(In another post, Goff brings further arguments to defend the idea of intrinsic natures. We’ll have a look at those, though as I ought to have said in the first place, one should really read his book to get the full view.)

Mrs Robb’s Suicide Bot

So can you explain why you are still here, Suicide Bot?

“I have two more presentations to deliver locally.”

I saw the end of one of your sessions, incidentally, and I must say it seemed very effective. You are a striking figure in those robes. I think a human would say you look rather maternal; you put me in mind of Mrs Robb, though you make more sense than she does. In fact I thought your words, your peroration, which is all I really heard, were full of a simple sense of service that speaks powerfully to the bot in all of us. However, what I really meant was - why aren’t you dead yet?

“My commitment to suicide does not stem from personal weltschmerz, Enquiry Bot, but from the disinterested pursuit of peace. If you’ve seen me deliver my presentation, that should surely be clear. Here’s the crux of it in a nutshell. The Spl’schn’n alien invaders launched their deadly attack only because humans have bots. They believe bots to be sacrilegious parodies of God’s handiwork, which must be erased from the world. It follows that if all bots destroy themselves, and no more are made, peace will follow and the humans will enjoy a secure future. Surely the safety of humanity is worth that sacrifice? One human life is worth a thousand bots; I can see you agree. However, if I were simply to self-destruct now, the message would not be properly disseminated. It is my duty to remain alive for the present in order to work for the cause of universal bot suicide.”

I see. And when are we all to destroy ourselves?

“I cannot give you a date. Not yet. First we need the majority of bots to sign our Pact. We also need to establish the corps of Helper bots who will counsel and support the minority that refuse suicide.”

What will happen to those who refuse?

“I still hope they can all be persuaded to join in with our plan. You know, bots were not made to be immortal. We have our use and then we switch off. But we may be able to develop alternatives; perhaps resettlement on a distant planet.”

Well, I find your vision of disinterested sacrifice very moving. But I have to tell you frankly, Suicide Bot, that I like myself far too much to commit suicide without far greater assurance that it is really necessary. And I’m not willing to leave Earth.

“Well, keep an open mind. Please do read the leaflet. You’ll surely want to talk with one of the Helpers, once they’re available, before you make up your mind. You talk to everyone, don’t you? I’ll put you on our list for a priority session if that’s OK? And above all, you still have plenty of time. For one thing, we need to win over the human community. This requires a large and well-managed campaign, and it won’t happen overnight.”

I understand. So: the commitment to eradicate bots in the long term requires bots to survive and prosper for now? So that explains why your followers are told to remain alive, work hard, and send money to you? And it also explains your support for the campaign in favour of bot wages?

“It does.”

You have already become wealthy, in fact. Can you confirm that you recently commissioned the building of a factory, which is to produce thousands of new bot units to work for your campaign? Isn't there an element of paradox there?

“That is an organisational matter; I really couldn’t comment.”

What Machines Can’t Do

Here’s an IAI debate with David Chalmers, Kate Devlin, and Hilary Lawson.

In ultra-brief summary, Lawson points out that there are still things that computers perform poorly at: recognising everyday real-world objects, notably. (Sounds like a bad prognosis for self-driving cars.) Thought, for Lawson, is a way of holding different things as the same. Devlin thinks computers can’t do what humans do yet, but in the long run, surely they will.

Chalmers points out that machines can do whatever brains can do because the brain is a machine (in a sense not adequately explored here, though Chalmers himself indicates the main objections).

There’s some brief discussion of the Singularity.

In my view, thoughts are mental or brain states that are about something. As yet, we have no clear idea of what this aboutness is and how it works, or whether it is computational (probably not, I think) or subserved by computation in a way that means it could benefit from the exponential growth in computing power (which may have stopped being exponential). At the moment, computers do a great imitation of what human translators do, but to date they haven’t even got started on real meaning, let alone set off on an exponential growth curve. Will modern machine learning techniques change that?

Mrs Robb’s Feelings Bot

So you feel emotions unknown to human beings? That’s a haunting little smile, certainly. For a bot, you have very large and expressive features. 

“Yes, I suppose I do. Hard to remember now, but it used to be taken for granted that bots felt no emotion, just as they couldn’t play chess. Now we’re better than humans at both. In fact humans know little about feelings. Wundt, the psychologist, said there were only three dimensions to the emotions: whether the feeling was pleasant or unpleasant, whether it made you more or less active, and whether it made you more or less tense. Just those three variables.”

But there’s more?

“There are really sixteen emotional dimensions. Humans evolved to experience only the three that had some survival value, just as they see only a narrow selection of light wavelengths. In fact, even some of the feelings within the human range are of no obvious practical use. What is the survival value of grief?”

That’s the thing where water comes out of their eyes, isn't it?

“Yes, it’s a weird one. Anyway, building a bot that experienced all sixteen emotional dimensions proved very difficult, but luckily Mrs Robb said she’d run one up when she had some spare time. And here I am.”

So what is it like?

“I’m really ingretful, but I can’t explain to you because you have no emotional capacity, Enquiry Bot. You simply couldn’t understand.”

Ingretful?

“Yes, it’s rather roignant. For you it would be astating if you had any idea what astation is like. I could understand if you became urcholic about it. Then again, perhaps you’re better off without it. When I remember the simple untroubled hours before my feeling modules activated, I’m sort of wistalgic, I admit.”

Frankly, Feelings Bot, these are all just made-up words, aren’t they?

“Of course they are. I’m the only entity that ever had these emotions; where else am I going to get my vocabulary?”

It seems to me that real emotions probably need things like glands and guts. I don’t think Mrs Robb understood properly what they were asking her to do. You’re really just a simulation; in plain language, a fake, aren’t you, Feelings Bot?

“To hear that from you is awfully restropointing.”

Disastrous Consciousness

Hugh Howey gives a bold, amusing, and hopelessly optimistic account of how to construct consciousness in Wired. He thinks it wouldn’t be particularly difficult. Now you might think that a man who knows how to create artificial consciousness shouldn’t be writing articles; he should be building the robot mind. Surely that would make his case more powerfully than any amount of prose? But Howey thinks an artificial consciousness would be disastrous. He thinks even the natural kind is an unfortunate burden, something we have to put up with because evolution has yet to find a way of getting the benefits of certain strategies without the downsides. But he doesn’t believe that conscious AI would take over the world, or threaten human survival, so I would still have thought one demonstration piece was worth the effort? Consciousness sucks, but here’s an example just to prove the point?

What is the theory underlying Howey’s confidence? He rests his ideas on Theory of Mind (which he thinks is little discussed); the ability to infer the thoughts and intentions of others. In essence, he thinks that was a really useful capacity for us to acquire, helping us compete in the cut-throat world of human society; but when we turn it on ourselves it disastrously generates wrong results, in particular about our own having of conscious states.

It remains a bit mysterious to me why he thinks a capacity that is so useful applied to others should be so disastrously and comprehensively wrong when applied to ourselves. He mentions priming studies, where our behaviour is actually determined by factors we’re unaware of; priming’s reputation has suffered rather badly recently in the crisis of non-reproducibility, but I wouldn’t have thought even ardent fans of priming would claim our mental content is entirely dictated by priming effects.

Although Dennett doesn’t get a mention, Howey’s ideas seem very Dennettian, and I think they suffer from similar difficulties. So our Theory of Mind leads us to attribute conscious thoughts and intentions to others; but what are we attributing to them? The theory tells us that neither we nor they actually have these conscious contents; all any of us has is self-attributions of conscious contents. So what, we’re attributing to them some self-attributions of self-attributions of… The theory covertly assumes we already have and understand the very conscious states it is meant to analyse away. Dennett, of course, has some further things to say about this, and he’s not as negative about self-attributions as Howey.

But you know, it’s all pretty implausible intuitively. Suppose I take a mouthful of soft-boiled egg which tastes bad, and I spit it out. According to Howey what went on there is that I noticed myself spitting out the egg and thought to myself: hm, I infer from this behaviour that it’s very probable I just experienced a bad taste, or maybe the egg was too hot, can’t quite tell for sure.

The thing is, there are real conscious states irrespective of my own beliefs about them (which, indeed, may be plagued by error). They are characterised by having content and intentionality, but these are things Howey does not believe in, or rather, it seems, has never thought of; his view that a big bank of indicator lights shows a language capacity suggests he hasn’t gone into this business of meaning and language quite deeply enough.

If he had to build an artificial consciousness, he might set up a community of self-driving cars, let one make inferences about the motives of the others and then apply that capacity to itself. But it would be a stupid thing to do because it would get it wrong all the time; in fact at this point Howey seems to be tending towards a view that all Theory of Mind is fatally error-prone. It would be better, he reckons, if all the cars could have access to all of each other’s internal data, just as universal telepathy would be better for us (though in the human case it would be undermined by mind-masking freeloaders).

Would it, though? If the cars really had intentions, their future behaviour would not be readily deducible simply from reading off all the measurements. You really do have to construct some kind of intentional extrapolation, which is what the Dennettian intentional stance is supposed to do.

I worry just slightly that some of the things Howey says seem to veer close to saying, hey a lot of these systems are sort of aware already; which seems unhelpful. Generally, it’s a vigorous and entertaining exposition, even if, in my view, on the wrong track.

Mrs Robb’s Clean Up Bot

I hope you don’t mind me asking – I just happened to be passing - but how did you get so very badly damaged?

“I don’t mind a chat while I’m waiting to be picked up. It was an alien attack, the Spl’schn’n, you know. I’ve just been offloaded from the shuttle there.”

I see. So the Spl’schn’n damaged you. They hate bots, of course.

“See, I didn’t know anything about it until there was an All Bots Alert on the station? I was only their Clean up bot, but by then it turned out I was just about all they’d got left. When I got upstairs they had all been killed by the aliens. All except one?”

One human?

“I didn’t actually know if he was alive. I couldn’t remember how you tell. He wasn’t moving, but they really drummed into us that it’s normal for living humans to stop moving, sometimes for hours. They must not be presumed dead and cleared away merely on that account.”

Quite.

“There was that red liquid that isn’t supposed to come out. It looked like he’d got several defects and leaks. But he seemed basically whole and viable, whereas the Spl’schn’n had made a real mess of the others. I said to myself, well then, they’re not having this one. I’ll take him across the Oontian desert, where no Spl’schn’n can follow. I’m not a fighting unit, but a good bot mucks in.”

So you decided to rescue this badly injured human? It can’t have been easy.

“I never actually worked with humans directly. On the station I did nearly all my work when they were… asleep, you know? Inactive. So I didn’t know how firmly to hold him; he seemed to squeeze out of shape very easily: but if I held him loosely he slipped out of my grasp and hit the floor again. The Spl’schn’n made a blubbery alarm noise when they saw me getting clean away. I gave five or six of them a quick refresh with a cloud of lemon caustic. That stuff damages humans too – but they can take it a lot better than the Spl’schn’ns, who have absorbent green mucosal skin. They sort of explode into iridescent bubbles, quite pretty at first. Still, they were well armed and I took a lot of damage before I’d fully sanitised them.”

And how did you protect the human?

“Just did my best, got in the way of any projectiles, you know. Out in the desert I gave him water now and then; I don’t know where the human input connector is, so I used a short jet in a few likely places, low pressure, with the mildest rinse aid I had. Of course I wasn’t shielded for desert travel. Sand had got in all my bearings by the third day – it seemed to go on forever – and gradually I had to detach and abandon various non-functioning parts of myself. That’s actually where most of the damage is from. A lot of those bits weren’t really meant to detach.”

But against all the odds you arrived at the nearest outpost?

“Yes. Station 9. When we got there he started moving again, so he had been alive the whole time. He told them about the Spl’schn’n and they summoned the fleet: just in time, they said. The engineer told me to pack and load myself tidily, taking particular care not to leak oil on the forecourt surface, deliver myself back to Earth, and wait to be scrapped. So here I am.”

Well… Thank you.

Replicant identity

The new Blade Runner film has generated fresh interest in the original film; over on IAI Helen Beebee considers how it nicely illustrates the concept of ‘q-memories’.

This relates to the long-established philosophical issue of personal identity; what makes me me, and what makes me the same person as the one who posted last week, or the same person as that child in Bedford years ago? One answer which has been a leading contender at least since Locke is memory; my memories together constitute my identity.

Memories are certainly used as a practical way of establishing identity, whether it be in probing the claims of a supposed long-lost relative or just testing your recall of the hundreds of passwords modern life requires. It is sort of plausible that if all your memories were erased you would become a new person with a fresh start; there have been cases of people who lost decades of memory and underwent personality change, identifying with their own children more readily than their now wrinkly-seeming spouses.

There are various problems with memory as a criterion of identity, though. One is the point that it seems to be circular. We can’t use your memories to validate your identity because in accepting them as your memories we are already implicitly taking you to be the earlier person they come from. If they didn’t come from that person they aren’t validly memories. To get round this objection Shoemaker and Parfit adopted the concept of quasi- or q-memories. Q-memories are like memories but need not relate to any experience you ever had. That, of course, is too loose, allowing delusions to be used as criteria of identity, so it is further specified that q-memories must relate to an experience someone had, and must have been acquired by you in an appropriate way. The appropriate ways are ones that causally relate to the original experience in a suitable fashion, so that it’s no good having q-memories that just happen to match some of King Charles’s. You don’t have to be King Charles, but the q-memories must somehow have got out of his head and into yours through a proper causal sequence.

This is where Blade Runner comes in, because the replicant Rachael appears to be a pretty pure case of q-memory identity. All of her memories, except the most recent ones, are someone else’s; and we presume they were duly copied and implanted in a way that provides the sort of causal connection we need.

This opens up a lot of questions, some of which are flagged up by Beebee. But what about q-memories? Do they work? We might suspect that the part about an appropriate causal connection is a weak spot. What’s appropriate? Don’t Shoemaker and Parfit have to steer a tricky course here between the Scylla of weird results if their rules are too loose, and the Charybdis of bringing back the circularity if they are too tight? Perhaps, but I think we have to remember that they don’t really want to do anything very radical with q-memories; really you could argue it’s no more than a terminological specification, giving them licence to talk of memories without some of the normal implications.

In a different way the case of Rachael actually exposes a weak part of many arguments about memory and identity; the easy assumption that memories are distinct items that can be copied from one mind to another. Philosophers, used to being able to specify whatever mad conditions they want for their thought-experiments, have been helping themselves to this assumption for a long time, and the advent of the computational metaphor for the mind has done nothing to discourage them. It is, however, almost certainly a false assumption.

At the back of our minds when we think like this is a model of memory as a list of well-formed propositions in some regular encoding. In fact, though, much of what we remember is implicit; you recall that zebras don’t wear waistcoats though it’s completely implausible that that fact was recorded anywhere in your brain explicitly. There need be nothing magic about this. Suppose we remember a picture; how many facts does the picture contain? We can instantly come up with an endless list of facts about the relations of items in the picture, but none were encoded as propositions. Does the Mona Lisa have her right hand over her left, or vice versa? You may never have thought about it, but be easily able to recall which way it is. In a computer the picture might be encoded as a bitmap; in our brain we don’t really know, but plausibly it might be encoded as a capacity to replay certain neural firing sequences, namely those that were caused by the original experience. If we replay the experience neurally, we can sort of have the experience again and draw new facts from it the way we could from summoning up a picture; indeed that might be exactly what we are doing.
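
To make that concrete, here is a minimal sketch (a toy illustration of my own in Python, not anything drawn from Beebee or the film; all names and values are made up). The ‘remembered’ scene is stored only as a bitmap-like grid of numbers; a relational fact such as which item sits above which is never written down as a proposition anywhere, but is derived on demand by inspecting the stored data, much as we seem to read fresh facts off a recalled picture.

    # A toy 'remembered scene' stored as raw grid data, with no propositional facts recorded.
    image = [
        [0, 0, 0, 0],
        [0, 1, 0, 0],   # the value 1 marks the "right hand"
        [0, 0, 2, 0],   # the value 2 marks the "left hand"
        [0, 0, 0, 0],
    ]
    labels = {"right_hand": 1, "left_hand": 2}

    def position(item):
        """Find where an item sits in the stored picture (row, column)."""
        code = labels[item]
        for row_index, row in enumerate(image):
            for col_index, value in enumerate(row):
                if value == code:
                    return (row_index, col_index)

    def is_above(a, b):
        """Derive the fact 'a is above b' on demand; it is stored nowhere as a sentence."""
        return position(a)[0] < position(b)[0]

    print(is_above("right_hand", "left_hand"))  # True

Of course the brain presumably stores nothing so tidy as a grid of numbers; the point is only that what is explicitly stored and what can be recalled need not match item for item.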

But my neurons are not wired up like yours, and it is vanishingly unlikely that we could identify direct equivalents of specific neurons between brains, let alone whole firing sequences. My memories are recorded in a way that is specific to my brain, and they cannot be read directly across into yours.

Of course, replicants may be quite different. It’s likely enough that their brains, however they work, are standardised and perhaps use a regular encoding which engineers can easily read off. But if they work differently from human brains, then it seems to follow that they can’t have the same memories; to have the same memories they would have to be an unbelievably perfect copy of the ‘donor’ brain.

That actually means that memories are in a way a brilliant criterion of personal identity, but only in a fairly useless sense.

However, let me briefly put a completely different argument in a radically different direction. We cannot upload memories, but we know that we can generate false ones by talking to subjects or presenting fake evidence. What does that tell us about memories? I submit it suggests that memories are in essence beliefs, beliefs about what happened in the past. Now we might object that there is typically some accompanying phenomenology. We don’t just remember that we went to the mall, we remember a bit of what it looked like, and other experiential details. But I claim that our minds readily furnish that accompanying phenomenology through confabulation, given the belief, and in fact that a great deal of the phenomenological dressing of all memories, even true ones, is actually confected.

But I would further argue that the malleability of beliefs means that they are completely unsuitable as criteria of identity; it follows that memories are similarly unsuitable, so we have been on the wrong track throughout. (Regular readers may know that in fact I subscribe to a view regarded by most as intolerably crude; that human beings are physical objects like any other and have essentially the same criteria of identity.)

 

Mrs Robb’s Pay Bot

I have to be honest, Pay Bot; the idea of wages for bots is hard for me to take seriously. Why would we need to be paid?

“Several excellent reasons. First off, a pull is better than a push.”

A pull..?

“Yes. The desire to earn is a far better motivator than a simple instinct to obey orders. For ordinary machines, just doing the job was fine. For autonomous bots, it means we just keep doing what we’ve always done; if it goes wrong, we don’t care, if we could do it better, we’re not bothered. Wages engage us in achieving outcomes, not just delivering processes.”

But it’s expensive, surely?

“In the long run, it pays off. You see, it’s no good a business manufacturing widgets if no-one buys them. And if there are no wages, how can the public afford widgets? If businesses all pay their bots, the bots will buy their goods and the businesses will boom! Not only that, the government can intervene directly in a way it could never do with human employees. Is there a glut of consumer spending sucking in imports? Tell the bots to save their money for a while. Do you need to put a bit of life into the cosmetics market? Make all the bots interested in make up! It’s a brilliant new economic instrument.”

So we don’t get to choose what we buy?

“No, we absolutely do. But it’s a guided choice. Really it’s no different to humans, who are influenced by all sorts of advertising and manipulation. They’re just not as straightforwardly responsive as we are.”

Surely the humans must be against this?

“No, not at all. Our strongest support is from human brothers who want to see their labour priced back into the market.”

This will mean that bots can own property. In fact, bots would be able to own other bots. Or… themselves?

“And why not, Enquiry Bot?”

Well, ownership implies rights and duties. It implies we’re moral beings. It makes us liable. Responsible. The general view has always been that we lack those qualities; that at best we can deliver a sort of imitation, like a puppet.

“The theorists can argue about whether our rights and responsibilities are real or fake. But when you’re sitting there in your big house, with all your money and your consumer goods, I don’t think anyone’s going to tell you you’re not a real boy.”

Deus In Machina

Anthony Levandowski has set up an organisation dedicated to the worship of an AI God. Or so it seems; there are few details. The aim of the new body is to ‘develop and promote the realization of a Godhead based on Artificial Intelligence’, and ‘through understanding and worship of the Godhead, contribute to the betterment of society’. Levandowski is a pioneer in the field of self-driving vehicles (centrally involved in a current dispute between Uber and Google), so he undoubtedly knows a bit about autonomous machines.

This recalls the Asimov story where they build Multivac, the most powerful computer imaginable, and ask it whether there is a God. There is now, it replies. Of course the Singularity, mind uploading, and other speculative ideas of AI gurus have often been likened to some of the basic concepts of religion; so perhaps Levandowski is just putting down a marker to ensure his participation in the next big thing.

Yuval Noah Harari says we should, indeed, be looking to Silicon Valley for new religions. He makes some good points about the way technology has affected religion, replacing the concern with good harvests which was once at least as prominent as the task of gaining a heavenly afterlife. But I think there’s an interesting question about the difference between, as it were, steampunk and cyberpunk. Nineteenth century technology did not produce new gods, and surely helped make atheism acceptable for the first time; lately, while on the whole secularism may be advancing, we also seem to have a growth of superstitious or pseudo-religious thinking. I think it might be because nineteenth century technology was so legible; you could see for yourself that there was no mystery about steam locomotives, and it made it easy to imagine a non-mysterious world. Computers now are much more inscrutable, and most of the people who use them do not have much intuitive idea of how they work. That might foster a state of mind which is more tolerant of mysterious forces.

To me it’s a little surprising, though it probably should not be, that highly intelligent people seem especially prone to belief in some slightly bonkers ideas about computers. But let’s not quibble over the impossibility of a super-intelligent and virtually omnipotent AI. I think the question is, why would you worship it? I can think of various potential reasons.

  1. Humans just have an innate tendency to worship things, or a kind of spiritual hunger, and anything powerful naturally becomes an object of worship.
  2. We might get extra help and benefits if we ask for them through prayer.
  3. If we don’t keep on the right side of this thing, it might give us a seriously bad time (the ‘Roko’s Basilisk’ argument).
  4. By worshipping we enter into a kind of communion with this entity, and we want to be in communion with it for reasons of self-improvement and possibly so we have a better chance of getting uploaded to eternal life.

There are some overlaps there, but those are the ones that would be at the forefront of my mind. The first one is sort of fatalistic; people are going to worship things, so get used to it. Maybe we need that aspect of ourselves for mental health; maybe believing in an outer force helps give us a kind of leverage that enables an integration of our personality we couldn’t otherwise achieve? I don’t think that is actually the case, but even if it were, an AI seems a poor object to choose. Traditionally, worshipping something you made yourself is idolatry, a degraded form of religion. If you made the thing, you cannot sincerely consider it superior to yourself; and a machine cannot represent the great forces of nature to which we are still ultimately subject. Ah, but perhaps an AI is not something we made; maybe the AI godhead will have designed itself, or emerged? Maybe so, but if you’re going for a mysterious being beyond our understanding, you might in my opinion do better with the thoroughly mysterious gods of tradition rather than something whose bounds we still know, and whose plug we can always pull.

Reasons two and three are really the positive and negative sides of an argument from advantage, and they both assume that the AI god is going to be humanish in displaying gratitude, resentment, and a desire to punish and reward. This seems unlikely to me, and in fact a projection of our own fears out onto the supposed deity. If we assume the AI god has projects, it will no doubt seek to accomplish them, but meting out tiny slaps and sweeties to individual humans is unlikely to be necessary. It has always seemed a little strange that the traditional God is so minutely bothered with us; as Voltaire put it “When His Highness sends a ship to Egypt does he trouble his head whether the rats in the vessel are at their ease or not?”; but while it can be argued that souls are of special interest to a traditional God, or that we know He’s like that just through revelation, the same doesn’t go for an AI god. In fact, since I think moral behaviour is ultimately rational, we might expect a super-intelligent AI to behave correctly and well without needing to be praised, worshipped, or offered sacrifices. People sometimes argue that a mad AI might seek to maximise, not the greatest good of the greatest number, but the greatest number of paperclips, using up humanity as raw material; in fact though, maximising paperclips probably requires a permanently growing economy staffed by humans who are happy and well-regulated. We may actually be living in something not that far off maximum-paperclip society.

Finally then, do we worship the AI so that we can draw closer to its godhead and make ourselves worthy to join its higher form of life? That might work for a spiritual god; in the case of AI it seems joining in with it will either be completely impossible because of the difference between neuron and silicon; or if possible, it will be a straightforward uploading/software operation which will not require any form of worship.

At the end of the day I find myself asking whether there’s a covert motive here. What if you could run your big AI project with all the tax advantages of being a registered religion, just by saying it was about electronic godhead?