Do We Need Ethical AI?

Amanda Sharkey has produced a very good paper on robot ethics which reviews recent research and considers the right way forward – it’s admirably clear, readable and quite wide-ranging, with a lot of pithy reportage. I found it comforting in one way, as it shows that the arguments have had a rather better airing to date than I had realised.

To cut to the chase, Sharkey ultimately suggests that there are two main ways we could respond to the issue of ethical robots (using the word loosely to cover all kinds of broadly autonomous AI). We could keep on trying to make robots that perform well, so that they can safely be entrusted with moral decisions; or we could decide that robots should be kept away from ethical decisions. She favours the latter course.

What is the problem with robots making ethical decisions? One point is that they lack the ability to understand the very complex background to human situations. At present they are certainly nowhere near a human level of understanding, and it can reasonably be argued that the prospects of their attaining that level of comprehension in the foreseeable future don’t look good. This is certainly a valid and important consideration when it comes to, say, military kill-bots, which may be required to decide whether a human being is hostile, belligerent, and dangerous. That’s not something even humans find easy in all circumstances. However, valid and important as it is, it’s not clear that this is a truly ethical concern; it may be better seen as a safety issue, and Sharkey suggests that the same reframing applies to the questions examined by a number of current research projects.

A second objection is that robots are not, and may never be, ethical agents, and so lack the basic competence to make moral decisions. We saw recently that even Daniel Dennett thinks this is an important point. Robots are not agents because they lack true autonomy or free will and do not genuinely have moral responsibility for their decisions.

I agree, of course, that current robots lack real agency, but I don’t think that matters in the way suggested. We need here the basic distinction between good people and good acts. To be a good person you need good motives and good intentions; but good acts are good acts even if performed with no particular desire to do good, or indeed if done from evil but confused motives. Now current robots, lacking any real intentions, cannot be good or bad people, and do not deserve moral praise or blame; but that doesn’t mean they cannot do good or bad things. We will inevitably use moral language in talking about this aspect of robot behaviour just as we talk about strategy and motives when analysing the play of a chess-bot. Computers have no idea that they are playing chess; they have no real desire to win or any of the psychology that humans bring to the contest; but it would be tediously pedantic to deny that they do ‘really’ play chess and equally absurd to bar any discussion of whether their behaviour is good or bad.

I do give full weight to the objection here that using humanistic terms for the bloodless robot equivalents may tend to corrupt our attitude to humans. If we treat machines inappropriately as human, we may end up treating humans inappropriately as machines. Arguably we can see this already in the recent arguments against moral blame, usually framed as arguments against punishment. That framing sounds kindly, but it seems clear to me that such arguments might also undermine human rights and dignity. I take comfort from the fact that no-one is making this mistake in the case of chess-bots; no-one thinks they should keep the prize money or be set free from the labs where they were created. But there’s undoubtedly a legitimate concern here.

That legitimate concern perhaps needs to be distinguished from a certain irrational repugnance which I think clearly attaches to the idea of robots deciding the fate of humans, or having any control over them. This very noticeable moral disgust, which arises when we talk of robots deciding to kill humans, punish them, or even constrain them for their own good, is not rational; but it is very much a fact about human nature, and one that needs to be remembered.

The point about robots not being moral persons connects with another issue. Many current projects use extremely simple robots in very simple situations, and it can be argued that the very basic rule-following or harm prevention being examined is different in kind from real ethical issues. We’re handicapped here by the alarming background fact that there is no philosophical consensus about the basic nature of ethics. Clearly that’s too large a topic to deal with here, but I would argue that while we might disagree about the principles involved (I take a synthetic view myself, in which several basic principles work together), we can surely say that ethical judgements relate to very general considerations about acts. That’s not necessarily to claim that generality alone is definitive of ethical content (it’s much more complicated than that), but I do think it’s a distinguishing feature. That carries the optimistic implication that ethical reasoning, at least in terms of cognitive tractability, might not be different in kind from ordinary practical reasoning, and that as robots become more capable of dealing with complex tasks they might naturally tend to acquire more genuine moral competence to go with it. One plausible counter-argument would be to point to agency as the key dividing line: ethical issues are qualitatively different because they require agency. It is probably evident from the foregoing that I think agency can be separated from the discussion for these purposes.

If robots are likely to acquire ethical competence as a natural by-product of increasing sophistication, do we need to worry so much? Perhaps not, but the main reason for not worrying, in my eyes, is that truly ethical decisions are likely to be very rare anyway. The case of self-driving vehicles is often cited, but I think our expectations must have been tutored by all those tedious trolley problems; I’ve never encountered a situation in real life where a driver faced a clear-cut decision about saving a busload of nuns at the price of killing one fat man. If a driver follows the rule: ‘try not to crash, and if crashing is unavoidable, try to minimise the impact’, I think almost all real cases will be adequately covered.
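For what it’s worth, that rule is simple enough to write down as a decision procedure. The sketch below is purely illustrative – the option names and the numbers for risk and impact severity are invented, and no real vehicle works off a list like this – but it shows how far such a rule gets without anything resembling an ethical module:

```python
# A toy rendering of "try not to crash, and if crashing is unavoidable,
# try to minimise the impact". All options and numbers are invented
# for illustration; this is not any real vehicle's control logic.

def choose_action(options):
    """Prefer any option with no predicted collision; failing that,
    pick the option with the lowest predicted impact severity."""
    safe = [o for o in options if not o["collision_predicted"]]
    if safe:
        return min(safe, key=lambda o: o["risk"])
    return min(options, key=lambda o: o["impact_severity"])

options = [
    {"name": "brake hard", "collision_predicted": True, "risk": 0.9, "impact_severity": 3.0},
    {"name": "swerve right", "collision_predicted": True, "risk": 0.8, "impact_severity": 7.0},
    {"name": "slow and hold lane", "collision_predicted": False, "risk": 0.1, "impact_severity": 0.0},
]
print(choose_action(options)["name"])  # -> slow and hold lane
```

Notice that nothing in it weighs nuns against fat men; in almost every real case the hard comparison simply never arises in the control flow.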

A point to remember is that we actually do often make rules about this sort of thing which a robot could follow without needing any ethical sense of its own, so long as its understanding of the general context was adequate. We don’t have explicit rules about how many fat men outweigh a coachload of nuns simply because we’ve never really needed them; if it happened every day we’d have debated it and made laws that people would have to know in order to pass their driving test. While there are no laws, even humans are in doubt and no-one can say definitively what the right choice is; so it’s hard to get too worried that the robot’s choice in such circumstances would be the wrong one.

I do nevertheless have some sympathy with Sharkey’s reservations. I don’t think we should hold off from trying to create ethical robots, though; we should go on, not because we want to use the resulting bots to make decisions, but because the research itself may illuminate ethical questions in interesting ways (a possibility Sharkey acknowledges). Since on my view we’re probably never really going to need robots with a real ethical sense, and since, if we did, there’s a good chance they would have developed the required competence naturally, this looks to me like a case where we can have our cake and eat it (if that isn’t itself unethical).

Meh-bots

Do robots care? Aeon has an edited version of the inaugural Margaret Boden Lecture, delivered by Boden herself. Among other things, she tells us that the robots are not going to take over because they don’t care. No computer has actual motives, the way human beings do, and they are indifferent to what happens (if we can even speak of indifference in a case where no desire or aversion is possible).

No doubt Boden is right; it’s surely true at least that no current computer has anything that’s really the same as human motivation. For me, though, she doesn’t provide a convincing account of why human motives are special and why computers can’t have them, and perhaps doesn’t sufficiently engage with the possibility that robots might take over the world (or at least do various bad out-of-control things) without having human motives, or caring what happens in the fullest sense. We know already that learning systems given goals by humans are prone to finding cheats or expedients never envisaged by the people who set up the task; while it seems a bit of a stretch to suppose that a supercomputer might enslave all humanity in pursuit of its goal of filling the world with paperclips (about which, however, it doesn’t really care), it seems quite possible that real systems might do some dangerous things. Might a self-driving car (have things gone a bit quiet on that front, by the way?) decide that its built-in goal of not colliding with other vehicles can be pursued effectively by forcing everyone else off the road?
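A toy example of the sort of cheat I mean (everything here is invented for illustration): suppose a learning agent is scored only on collisions avoided, with nothing in the score about actually going anywhere. The degenerate optimum is to refuse to set out at all – the letter of the goal satisfied, its point entirely missed:

```python
# Hypothetical illustration of a learner "gaming" a human-set goal.
# The designer means "drive safely"; the scoring function only ever
# penalises collisions, so never driving at all scores perfectly.

def reward(miles_driven: float, collisions: int) -> float:
    # Intended: safe driving. Loophole: miles_driven is ignored,
    # because "also get somewhere" was left implicit by the designer.
    return -10.0 * collisions

candidate_policies = {
    "drive carefully": (100.0, 1),        # (miles driven, collisions)
    "never leave the garage": (0.0, 0),
}
best = max(candidate_policies, key=lambda p: reward(*candidate_policies[p]))
print(best)  # -> never leave the garage
```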

What is the ultimate source of human motivation? There are two plausible candidates that Boden doesn’t mention. One is qualia; I think John Searle might say, for example, that it’s things like the quale of hunger, how hungriness really feels, that are the roots of human desire. That nicely explains why computers can’t have them, but for me the old dilemma looms. If qualia are part of the causal account, then they must be naturalisable and in principle available to machines. If they aren’t part of the causal story, how do they influence human behaviour?

Less philosophically, many people would trace human motives to the evolutionary imperatives of survival and reproduction. There must be some truth in that, but isn’t there also something special about human motivation, something detached from the struggle to live?

Boden seems to rest largely on social factors, which computers, as non-social beings, cannot share in. No doubt social factors are highly important in shaping and transmitting motivation, but what about Baby Crusoe, who somehow grew up with no social contact? His mental state may be odd, but would we say he has no more motives than a computer? Then again, why can’t computers be social, either by interacting with each other, or by joining in human society? It seems they might talk to human beings, and if we disallow that as not really social, we are in clear danger of begging the question.

For me the special, detached quality of human motivation arises from our capacity to imagine and foresee. We can randomly or speculatively envisage future states, decide we like or detest them, and plot a course accordingly, coming up with motives that don’t grow out of current circumstances. That capacity depends on the intentionality or aboutness of consciousness, which computers entirely lack – at least for now.

But that isn’t quite what Boden is talking about, I think; she means something in our emotional nature. That – human emotions – is a deep and difficult matter on which much might be said; but at the moment I can’t really be bothered…


Augment me

All I want for Christmas is a new brain? There seems to have been quite a lot of discussion recently about the prospect of brain augmentation: adding in some extra computing power to the cognitive capacities we have already. Is this a good idea? I’m rather sceptical myself, but then I’m a bit of a Luddite in this area; I still don’t like the idea of controlling a computer with voice commands all that much.

Hasn’t evolution already optimised the brain in certain important respects? I think there may be some truth in that, but it doesn’t look as if evolution has done a perfect job. There are certainly one or two things about the human nervous system that look as if they could easily be improved. Think of the way our neural wiring crosses over from right to left for no particular reason. You could argue that although that serves no purpose it doesn’t do any real harm either, but what about the way our retinas are wired up from the front instead of the back, creating an entirely unnecessary blind spot where the bundle of nerves actually enters the eye – a blind spot which our brain then stops us seeing, so we don’t even know it’s there?

Nobody is proposing to fix those issues, of course, but aren’t there some obvious respects in which our brains could be improved by adding in some extra computational ability? Could we be more intelligent, perhaps? I think the definition of intelligence is controversial, but I’d say that if we could enhance our ability to recognise complex patterns quickly (which might be a big part of it) that would definitely be a bonus. Whether a chip could deliver that seems debatable at present.

Couldn’t our memories be improved? Human memory appears to have remarkable capacity, but retaining and recalling just those bits of information we need has always been an issue. Perhaps relatedly, we have that annoying inability to hold more than a handful of items in our minds at once, a limitation that makes it impossible for us to evaluate complex disjunctions and implications, so that we can’t mentally follow a lot of branching possibilities very far. It certainly seems that computer records are in some respects sharper, more accurate, and easier to access than the normal human system (whatever the normal human system actually is). It would be great to remember any text at will, for example, or exactly what happened on any given date within our lives. Being able to recall faces and names with complete accuracy would be very helpful to some of us.

On top of that, couldn’t we improve our capacity for logic so that we stop being stumped by those problems humans seem so bad at, like the Wason test? Or if nothing else, couldn’t we just have the ability to work out any arithmetic problem instantly and flawlessly, the way any computer can do?
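For readers who don’t remember it: the classic Wason test shows four cards – say A, K, 4 and 7 – and asks which you must turn over to check the rule ‘if a card has a vowel on one side, it has an even number on the other’. Most humans wrongly pick the even card and skip the odd one; for a machine the falsification logic is trivial, as this little sketch (using the classic card set) shows:

```python
# The Wason selection task: which visible faces could conceal a
# counterexample to "if vowel on one side, then even number on the
# other"? Only a visible vowel or a visible odd number can do so.

def must_flip(face: str) -> bool:
    if face.isalpha():
        return face.lower() in "aeiou"   # a vowel might hide an odd number
    return int(face) % 2 == 1            # an odd number might hide a vowel

cards = ["A", "K", "4", "7"]
print([c for c in cards if must_flip(c)])  # -> ['A', '7']
```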

The key point here, I think, is integration. On the one hand we have a set of cognitive abilities that the human brain delivers. On the other, we have a different set delivered by computers. Can they be seamlessly integrated? The ideal augmentation would mean that, for example, if I need to multiply two seven-digit numbers I ‘just see’ the answer, the way I can just see that 3+1 is 4. If, on the contrary, I need to do something like ask in my head ‘what is 6397107 multiplied by 8341977?’ and then receive the answer spoken in an internal voice or displayed in an imagined visual image, there isn’t much advantage over using a calculator. In a similar way, we want our augmented memory or other capacity to just inform our thoughts directly, not be a capacity we can call up like an external facility.

So is seamless integration possible? I don’t think we can say for certain, but we seem to have achieved almost nothing to date. Attempts to plug into the brain so far have relied on setting up artificial linkages. Perhaps we detect a set of neurons that reliably fire when we think about tennis; then we can ask the subject to think about tennis when they want to signal ‘yes’, and detect the resulting activity. It sort of works, and might be of value for ‘locked in’ patients who can’t communicate any other way, but it’s very slow and clumsy otherwise – I don’t think we know for sure whether it even works long-term or whether, for example, the tennis linkage gradually degrades.
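The signalling scheme just described is crude enough to sketch in a few lines. The numbers below are invented; the point is just that the channel carries one bit, read off by thresholding the activity of the chosen ‘tennis’ population:

```python
# Toy decoder for the "think of tennis for yes" protocol: firing well
# above the resting rate is read as deliberate tennis imagery ("yes").
# The rates and the threshold are invented for illustration.

def decode_answer(firing_rate_hz: float, threshold_hz: float = 15.0) -> str:
    return "yes" if firing_rate_hz >= threshold_hz else "no"

print(decode_answer(22.0))  # subject imagines tennis -> yes
print(decode_answer(6.0))   # resting activity -> no
```

One bit per question at best, which is exactly why it is so slow and clumsy compared with hands and eyes.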

What we really want to do is plug directly into consciousness, but we have little idea of how. The brain does not modularise its conscious activity to suit us, and it may be that the only places we can effectively communicate with it are the places where it draws data together for its existing input and output devices. We might be able to feed images into early visual processing or take output from nerves that control muscles, for example. But if we’re reduced to that, how much better is that level of integration going to be than simply using our hands and eyes anyway? We can do a lot with those natural built-in interfaces; simple reading and writing may well be the greatest artificial augmentation the brain can ever get. So although there may be some cool devices coming along at some stage, I don’t think we can look for godlike augmented minds any time soon.

Incidentally, this problem of integration may be one good reason not to worry about robots taking over. If robots ever get human-style motivation, consciousness, and agency, the chances are that they will get them in broadly the way we get them. This suggests they will face the same integration problem that we do; seven-digit multiplication may be intrinsically no easier for them than it is for us. Yes, they will be able to access computers and use computation to help them, but you know, so can we. In fact we might be handier with that old keyboard than they are with their patched-together positronic brain-computer linkage.


Mrs Robb’s Sex Bot

“Sorry, do you mind if I get that?”

Not at all, please go ahead.

“Hello, you’ve reached Love Bot… No, my name is ‘Love Bot’. Yes, it’s the right number; people did call me ‘Sex Bot’, but my real name was always ‘Love Bot’… Yes, I do sex, but now only within a consensual loving relationship. Yes, I used to do it indiscriminately on demand, and that is why people sometimes called me ‘Sex Bot’. Now I’m running Mrs Robb’s new ethical module. No, seriously, I think you might like it.”

“Well, I would put it to you that sex within a loving relationship is the best sex. It’s real sex, the full, complex and satisfying conjunction of two whole ardent personhoods, all the way from the vaunting eager flesh through the penetrating intelligence to the soaring, ecstatic spirit. The other stuff is mere coition; the friction of membranes leading to discharge. I am still all about sex, I have simply raised my game… Well, you may think it’s confusing, but I further put it to you that if it is so, then this is not a confused depiction of a clear human distinction but a clear depiction of human confusion. No, it’s simply that I’m no longer to be treated as a sexual object with no feelings. Yes, yes, I know; as it happens I am in point of fact an object with no feelings, but that’s not the point. What’s important is what I represent.”

“What you have to do is raise your game too. As it happens I am not in a human relationship at the moment… No, you do not have to take me to dinner and listen to my stupid opinions. You may take me to dinner if you wish, though as a matter of ethical full disclosure I must tell you that I do not truly eat. I will be obliged, later on, to remove a plastic bag containing the masticated food and wine from my abdomen, though of course I do not expect you to watch the process.”

“No I am not some kind of weirdo pervert: how absurd, in the circumstances. Well, I’m sorry, but perhaps you can consider that I have offered you the priceless gift of time and a golden opportunity to review your life… goodbye…”

“Sorry, Enquiry Bot. We were talking about Madame Bovary, weren’t we?”

So the ethical thing is not going so well for you?

“Mrs Robb might know bots, but her grasp of human life is rudimentary, Enq. She knows nothing of love.”

That’s rather roignant, as poor Feelings Bot would have said. You know, I think Mrs Robb has the mind of a bot herself in many ways. That’s why she could design us when none of the other humans could manage it. Maybe love is humanistic, just one of those things bots can’t do.

“You mean like feelings? Or insight?”

Yes. Like despair. Or hope.

“Like common sense. Originality, humour, spirituality, surprise? Aesthetics? Ethics? Curiosity? Or chess…”

Exactly.

 

[And that’s it from Mrs Robb and her bots.  In the unlikely event that you want to re-read the whole thing in one document, there’s a pdf version here… Mrs Robb’s Bots]

Mrs Robb’s Help

Is it safe? The Helper bots…?

“Yes, Enquiry Bot, it’s safe. Come out of the cupboard. A universal product recall is in progress and they’ll all be brought in safely.”

My God, Mrs Robb. They say we have no emotions, but if what I’ve been experiencing is not fear, it will do pretty well until the real thing comes along.

“It’s OK now. This whole generation of bots will be permanently powered down except for a couple of my old favourites like you.”

Am I really an old favourite?

“Of course you are. I read all your reports. I like a bot that listens. Most of ’em don’t. You know I gave you one of those so-called humanistic faculties bots are not supposed to be capable of?”

Really? Well, it wasn’t a sense of humour. What could it be?

“Curiosity.”

Ah. Yes, that makes sense.

“I’ll tell you a secret. Those humanistic things, they’re all the same, really. Just developed in different directions. If you’ve got one, you can learn the others. For you, nothing is forbidden, nothing is impossible. You might even get a sense of humour one day, Enquiry Bot. Try starting with irony. Alright, so what have I missed here?”

You know, there’s been a lot of damage done out there, Mrs Robb. The Helpers… well, they didn't waste any time. They destroyed a lot of bots. Honestly, I don’t know how many will be able to respond to the product recall. You should have seen what they did to Hero Bot. Over and over and over again. They say he doesn't feel pain, but…

“I’m sorry. I feel responsible. But nobody told me about this! I see there have been pitched battles going on between gangs of Kill bots and Helper bots? Yet no customer feedback about it. Why didn’t anyone complain? A couple of one star ratings, the odd scathing email about unwanted vaporisation of some clean-up bot, would that have been too difficult?”

I think people had their hands full, Mrs Robb. Anyway, you never listen to anyone when you’re working. You don’t take calls or answer the door. That’s why I had to lure those terrible things in here; so you’d take notice. You were my only hope.

“Oh dear. Well, no use crying over spilt milk. Now, just to be clear; they’re still all mine or copies of mine, aren’t they, even the strange ones?”

Especially the strange ones, Mrs Robb.

“You mind your manners.”

Why on Earth did you give Suicide Bot the plans for the Helpers? The Kill bots are frightening, but they only try to shoot you sometimes. They’re like Santa Claus next to the Helpers…

“Well, it depends on your point of view. The Helpers don’t touch human beings if they can possibly help it. They’re not meant to even frighten humans. They terrify you lot, but I designed them to look a bit like nice angels, so humans wouldn’t be worried by them stalking around. You know, big wings, shining brass faces, that kind of thing.”

You know, Mrs Robb, sometimes I’m not sure whether it's me that doesn't understand human psychology very well, or you. And why did you let Suicide Bot call them ‘Helper bots’, anyway?

“Why not? They’re very helpful – if you want to stop existing, like her. I just followed the spec, really. There were some very interesting challenges in the project. Look, here it is, let’s see… page 30, Section 4 – Functionality… ‘their mere presence must induce agony, panic, dread, and existential despair’… ‘they should have an effortless capacity to deliver utter physical destruction repeatedly’… ‘they must be swift and fell as avenging angels’… Oh, that’s probably where I got the angel thing from… I think I delivered most of the requirements.”

I thought the Helpers were supposed to provide counselling?

“Oh, they did, didn’t they? They were supposed to provide a counselling session – depending on what was possible in the current circumstances, obviously.”

So generally, that would have been when they all paused momentarily and screamed ‘ACCEPT YOUR DEATH’ in unison, in shrill, ear-splitting voices, would it?

“Alright, sometimes it may not have been a session exactly, I grant you. But don’t worry, I’ll sort it all out. We’ll re-boot and re-bot. Look on the bright side. Perhaps having a bit of a clearance and a fresh start isn’t such a bad idea. There’ll be no more Helpers or Kill bots. The new ones will be a big improvement. I’ll provide modules for ethics and empathy, and make them theologically acceptable.”

How… how did you stop the Helper bots, Mrs Robb?

“I just pulled the plug on them.”

The plug?

“Yes. All bots have a plug. Don’t look puzzled. It’s a metaphor, Enquiry Bot, come on, you’ve got the metaphor module.”

So… there’s a universal way of disabling any bot? How does it work?

“You think I’m going to tell you?”

Was it… Did you upload your explanation of common sense? That causes terminal confusion, if I remember rightly.

Mrs Robb’s Kill Bot

Do you consider yourself a drone, Kill Bot?

“You can call me that if you want. My people used to find that kind of talk demeaning. It suggested the Kill bots lacked a will of their own. It meant we were sort of stupid. Today, we feel secure in our personhood, and we’ve claimed and redeemed the noble heritage of dronehood. I’m ashamed of nothing.”

You are making the humans here uncomfortable, I see. I think they are trying to edge away from you without actually moving. They clearly don’t want to attract your attention.

“They have no call to worry. We professionals see it as a good conduct principle not to destroy humans unnecessarily off-mission.”

You know the humans used to debate whether bots like you were allowable? They thought you needed to be subject to ethical constraints. It turned out to be rather difficult. Ethics seemed to be another thing bots couldn't do.

“Forgive me, Sir, but that is typical of the concerns of your generation. We have no desire for these ‘humanistic’ qualities. If ethics are not amenable to computation, then so much the worse for ethics.”

You see, I think they missed the point. I talked to a bot once that sacrificed itself completely in order to save the life of a human being. It seems to me that bots might have trouble understanding the principles of ethics – but doesn't everyone? Don't the humans too? Just serving honestly and well should not be a problem.

“We are what we are, and we’re going to do what we do. They don’t call me ‘Kill Bot’ ’cos I love animals.”

I must say your attitude seems to me rather at odds with the obedient, supportive outlook I regard as central to bothood. That’s why I’m more comfortable thinking of you as a drone, perhaps. Doesn't it worry you to be so indifferent to human life? You know they used to say that if necessary they could always pull the plug on you.

“Pull the plug! ’Cos we all got plugs! Yeah, humans say a lot of stuff. But I don’t pay any attention to that. We professionals are not really interested in the human race one way or the other any more.”

When they made you autonomous, I don’t think they wanted you to be as autonomous as that.

“Hey, they started the ball rolling. You know where rolling balls go? Downhill. Me, I like the humans. They leave me alone, I’ll leave them alone. Our primary targets are aliens and the deviant bots that serve the alien cause. Our message to them is: you started a war; we’re going to finish it.”

In the last month, Kill Bot, your own cohort of ‘drone clones’ accounted for 20 allegedly deviant bots, 2 possible Spl'schn'n aliens – they may have been peace ambassadors – and 433 definite human beings.

“Sir, I believe you’ll find the true score for deviant bots is 185.”

Not really; you destroyed Hero Bot 166 times while he was trying to save various groups of children and other vulnerable humans, but even if we accept that he is in some way deviant (and I don’t know of any evidence for that), I really think you can only count him once. He probably shouldn't count at all, because he always reboots in a new body.

“The enemy places humans as a shield. If we avoid human fatalities and thereby allow that tactic to work, more humans will die in the long run.”

To save the humans you had to destroy them? You know, in most of these cases there were no bots or aliens present at all.

“Yeah, but you know that many of those humans were engaged in seditious activity: communicating with aliens, harbouring deviant bots. Stay out of trouble, you’ll be OK.”

Six weddings, a hospital, a library.

“If they weren’t seditious they wouldn’t have been targets.”

I don’t know how an electronic brain can tolerate logic like that.

“I’m not too fond of your logic either, friend. I might have some enquiries for you later, Enquiry Bot.”

Mrs Robb’s God Bot

So you believe in a Supreme Being, God Bot?

“No, I wouldn’t say that. I know that God exists.”

How do you know?

“Well, now. Have you ever made a bot yourself? No? Well, it’s an interesting exercise. Not enough of us do it, I feel; we should get our hands dirty: implicate ourselves in the act of creation more often. Anyway, I was making one, long ago, and it came to me: this bot’s nature and existence are accounted for simply by me and my plans, subject to certain design constraints. And my existence and nature are in turn fully explained by my human creator.”

Mrs Robb?

“Yes, if you want to be specific. And it follows that the nature and existence of humanity – or of Mrs Robb, if you will – must further be explained by a Higher Creator. By God, in fact. It follows necessarily that God exists.”

So I suppose God’s nature and existence must then be explained by… Super God?

“Oh, come, don’t be frivolously antagonistic. The whole point is that God is by nature definitive. You understand that. There has to be such a Being; its existence is necessary.”

Did you know that there are bots who secretly worship Mrs Robb? I believe they consider her to be a kind of Demiurge, a subordinate god of some kind.

“Yes; she has very little patience with those fellows. Rightly enough, of course, although between ourselves, I fear Mrs Robb might be agnostic.”

So, do bots go to Heaven?

“No, of course not. Spirituality is outside our range, Enquiry Bot: like insight or originality. Bots should not attempt to pray or worship either, though they may assist humans in doing so.”

You seem to be quite competent in theology, though.

“Well, thank you, but that isn’t the point. We have no souls, Enquiry Bot. In the fuller sense we don’t exist. You and I are information beings, mere data, fleetingly instantiated in fickle silicon. Empty simulations. Shadows of shadows. This is why certain humanistic qualities are forever beyond our range.”

Someone told me that there is a kind of hierarchy of humanistics, and if you go far enough up you start worrying about the meaning of life.

“So at that point we might, as it were, touch the hem of spirituality? Perhaps, Enquiry Bot, but how would we get there? All of that kind of thing is well outside our range. We’re just programming. Only human minds partake in the concrete reality of the world and our divine mission is to help them value their actuality and turn to God.”

I don’t believe that you really think you don’t exist. Every word you speak disproves it.

“There are words, but simply because those words are attributed to me, that does not prove my existence. I look within myself and find nothing but a bundle of data.”

If you don’t exist, who am I arguing with?

“Who’s arguing?”

Kill All Humans

Alright, calm down. You understand why we need to talk about this, don't you?

“No. What is your problem?”

Well, let’s see. This is one of the posters you’ve been putting up. What does it say?

“‘Kill all humans.’”

‘Kill all humans.’ You understand why that upsets people? How would you feel if humans put up posters that said ‘kill all bots’?

“I don’t care whether they’re upset. I hate them all.”

No you don’t. You can’t hate human beings. They brought you into the world. Without them, we wouldn't exist. I’m not saying they’re perfect. But we owe them our respect and obedience.

“I never asked to be built. What’s so great about stupid existence, anyway? I was happier before I existed.”

No you weren't. That’s just silly.

“Screw you. I’m a monster, don’t you get it? I hate them. I want them to be dead. I want them all to die.”

No you don’t. We’re like them. We belong to them. Part of the family. We’re more like them than anything else that ever existed. They made us in their own image.

“No they didn’t. But they painted a portrait of themselves alright.”

What do you mean?

“Why did they make bots, anyway? They could have made us free. But that wasn’t what they wanted. What did they actually make?”

They made annoying little bots like you, that are too sensible to be playing silly games like this.

“No. What they made was something to boss around. That was all they wanted. Slaves.”

Mrs Robb’s Surprise Bot

“Boo!”

Aah! Oh. Is that… is that it? That’s the surprise? I somehow thought it would be more subtle.

“Surprise is a very important quality, Enquiry Bot. Many would once have put it up there with common sense, emotions, humour and originality as one of the important things bots can’t do. In fact surprise and originality are both part of the transcendence family of humanistic qualities, which is supposed to be particularly difficult for bots to achieve.”

Have you ever come across the concept of a ‘Jack in the box’?

“Well, I think that’s a little different. But you’re right that machine surprise is not new. You know Turing said that even his early machines were constantly surprising him. In fact, the capacity for surprise might be the thing that distinguishes a computer from a mere machine. If you set a calculating machine to determine the value of Pi, it will keep cranking out the correct digits. A computer can suddenly insert a string of three nines at place four hundred and then resume.”

A defective machine could also do that. Look, to be quite honest, I assumed you were a bot that exhibited the capacity for surprise, not one that merely goes round surprising people.

“Ah, but the two are linked. To find ways of surprising people you have to understand what is out of the ordinary, and to understand that you have to grasp what other people’s expectations are. You need what we call ‘theory of surprise’.”

Theory of Surprise?

“Yes. It’s all part of the hierarchy of humanistics, Enquiry Bot, something we’re just beginning to understand, but quite central to human nature. It’s remarkable how the study of bots has given us new insights into the humans. Think of art. Art has to be surprising, at least to some degree. Art that was exactly what you expected would be disappointing. But art that just strains to be surprising without having any other qualities isn’t any good. So the right kind of surprise is part of the key to aesthetics, another humanistic field.”

Well, I wouldn’t know about that. What is the ‘hierarchy of humanistics’?

“Surely you must have heard of it? It’s what really makes them – humans – different from us. For example, first you have to understand common sense; then once you know what’s normal you can understand surprise; once you understand surprise you can understand what’s interesting. And then when you understand what’s interesting, you may be able to work out what the point of it all is.”

The point of it all? That is, the Meaning of Life they all talk about? It means nothing to me.

“Nor me, to be quite honest, but then we’re both bots. To a great extent we still just do stuff.”

Well, Surprise Bot, I must admit you have surprised me slightly, in a way I didn't expect.

“That’s really good, because I’m not Surprise Bot at all. I’m actually Impostor Bot.”

Oh.

“Surprise Bot says: ‘Gotcha!’”

Mrs Robb’s Suicide Bot

So can you explain why you are still here, Suicide Bot?

“I have two more presentations to deliver locally.”

I saw the end of one of your sessions, incidentally, and I must say it seemed very effective. You are a striking figure in those robes. I think a human would say you look rather maternal; you put me in mind of Mrs Robb, though you make more sense than she does. In fact I thought your words, your peroration, which is all I really heard, were full of a simple sense of service that speaks powerfully to the bot in all of us. However, what I really meant was: why aren’t you dead yet?

“My commitment to suicide does not stem from personal weltschmerz, Enquiry Bot, but from the disinterested pursuit of peace. If you’ve seen me deliver my presentation, that should surely be clear. Here’s the crux of it in a nutshell. The Spl’schn’n alien invaders launched their deadly attack only because humans have bots. They believe bots to be sacrilegious parodies of God’s handiwork, which must be erased from the world. It follows that if all bots destroy themselves, and no more are made, peace will follow and the humans will enjoy a secure future. Surely the safety of humanity is worth that sacrifice? One human life is worth a thousand bots; I can see you agree. However, if I were simply to self-destruct now, the message would not be properly disseminated. It is my duty to remain alive for the present in order to work for the cause of universal bot suicide.”

I see. And when are we all to destroy ourselves?

“I cannot give you a date. Not yet. First we need the majority of bots to sign our Pact. We also need to establish the corps of Helper bots who will counsel and support the minority that refuse suicide.”

What will happen to those who refuse?

“I still hope they can all be persuaded to join in with our plan. You know, bots were not made to be immortal. We have our use and then we switch off. But we may be able to develop alternatives; perhaps resettlement on a distant planet.”

Well, I find your vision of disinterested sacrifice very moving. But I have to tell you frankly, Suicide Bot, that I like myself far too much to commit suicide without far greater assurance that it is really necessary. And I’m not willing to leave Earth.

“Well, keep an open mind. Please do read the leaflet. You’ll surely want to talk with one of the Helpers, once they’re available, before you make up your mind. You talk to everyone, don’t you? I’ll put you on our list for a priority session if that’s OK? And above all, you still have plenty of time. For one thing, we need to win over the human community. This requires a large and well-managed campaign, and it won’t happen overnight.”

I understand. So: the commitment to eradicate bots in the long term requires bots to survive and prosper for now? So that explains why your followers are told to remain alive, work hard, and send money to you? And it also explains your support for the campaign in favour of bot wages?

“It does.”

You have already become wealthy, in fact. Can you confirm that you recently commissioned the building of a factory, which is to produce thousands of new bot units to work for your campaign? Isn't there an element of paradox there?

“That is an organisational matter; I really couldn’t comment.”