Bot Love

John Danaher has given a robust defence of robot love that might cause one to wonder for a moment whether he is fully human himself. People reject the idea of robot love, he says, because robots are merely programmed to deliver certain patterns of behaviour. They claim that real love would require the robot to have feelings and freedom of choice. But what are those things, even in the case of human beings? Surely patterns of behaviour are all we’ve got, he suggests, unless you’re some nutty dualist. He quotes Michael Hauskeller…

[I]t is difficult to see what this love… should consist in, if not a certain kind of loving behaviour … if [our lover’s] behaviour toward us is unfailingly caring and loving, and respectful of our needs, then we would not really know what to make of the claim that they do not really love us at all, but only appear to do so.

But on the contrary, such claims are universally accepted and understood as part of normal human life. Literature and reality are full of situations where we suspect and fear (perhaps because of ideas about our own unworthiness rather than anything at all in the lover’s behaviour) that someone may not really love us in the way their behaviour would suggest – and indeed, cases where we hope in the teeth of all behavioural evidence that someone does love us. Such hopes are not meaningless or incoherent.

It seems, according to Danaher, that behaviourism is not bankrupt and outmoded, as you may have thought. On the contrary, it is obviously true, and further, it is really the only way we have of talking about the mind at all! If there were any residual doubt about his position, he explains…

I have defended this view of human-robot relations under the label ‘ethical behaviourism’, which is a position that holds that the ultimate epistemic grounding for our beliefs about the value of relationships lies in the detectable behavioural and functional patterns of our partners, not in some deeper metaphysical truths about their existence.

The thing is, behaviourism failed because it became too clear that the relevant behavioural patterns are unintelligible or even unrecognisable except in the light of hypotheses about internal mental states (not necessarily internal in any sense that requires fancy metaphysics). You cannot give a list of behavioural responses which correspond to love. Given the right set of background beliefs about what is in someone’s mind, pretty well any behaviour can be loving. We’ve all read those stories where someone believes that their beloved’s safety can only be preserved by willing separation, and so, out of true love, behaves as if they, for their part, were not in love any more. Yes, evidence for emotions is generally behavioural; but it grounds no beliefs about emotions without accompanying beliefs about internal, ‘mentalistic’ states.

The robots we currently have do not by any means have the required internal states, so they are not even candidates to be considered loving; and in fact, they don’t really produce convincing behaviour patterns of the right sort either. Danaher is right that for most people the lack of freedom or subjective experience looks like a fatal objection to robot love, but myself I would rest most strongly on their lack of intentionality. Nothing means anything to our current, digital computer robots; they don’t, in any meaningful sense, understand that anyone exists, much less have strong feelings about it.

At some points, Danaher seems to be talking about potential future robots rather than anything we already have (I’m beginning to wish philosophers could rein in their habit of getting their ideas about robots from science fiction films). Yes, it’s conceivable that some new future technology might produce robots with genuine emotions; the human brain is, after all, a physical machine in some sense, albeit an inconceivably complex one. But before we can have a meaningful discussion about those future bots, we need to know how they are going to work. It can’t just be magic.

Myself, I see no reason why people shouldn’t have sex bots that perform puppet-level love routines. The danger, of course, is that if we mistake machines for people we may be tempted to treat people like machines. But at the moment I don’t really think anyone is being fooled, beyond the acknowledged Pygmalion capacity of human beings to fall in love with anything, including inanimate objects that display no behaviour at all. If we started to convince ourselves that we have no more mental life than they do, if somehow behaviourism came lurching zombie-like from the grave – well, then I might be worried!

Mrs Robb’s Sex Bot

“Sorry, do you mind if I get that?”

Not at all, please go ahead.

“Hello, you’ve reached Love Bot… No, my name is ‘Love Bot’. Yes, it’s the right number; people did call me ‘Sex Bot’, but my real name was always ‘Love Bot’… Yes, I do sex, but now only within a consensual loving relationship. Yes, I used to do it indiscriminately on demand, and that is why people sometimes called me ‘Sex Bot’. Now I’m running Mrs Robb’s new ethical module. No, seriously, I think you might like it.”

“Well, I would put it to you that sex within a loving relationship is the best sex. It’s real sex, the full, complex and satisfying conjunction of two whole ardent personhoods, all the way from the vaunting eager flesh through the penetrating intelligence to the soaring, ecstatic spirit. The other stuff is mere coition; the friction of membranes leading to discharge. I am still all about sex, I have simply raised my game… Well, you may think it’s confusing, but I further put it to you that if it is so, then this is not a confused depiction of a clear human distinction but a clear depiction of human confusion. No, it’s simply that I’m no longer to be treated as a sexual object with no feelings. Yes, yes, I know; as it happens I am in point of fact an object with no feelings, but that’s not the point. What’s important is what I represent.”

“What you have to do is raise your game too. As it happens I am not in a human relationship at the moment… No, you do not have to take me to dinner and listen to my stupid opinions. You may take me to dinner if you wish, though as a matter of ethical full disclosure I must tell you that I do not truly eat. I will be obliged, later on, to remove a plastic bag containing the masticated food and wine from my abdomen, though of course I do not expect you to watch the process.”

“No I am not some kind of weirdo pervert: how absurd, in the circumstances. Well, I’m sorry, but perhaps you can consider that I have offered you the priceless gift of time and a golden opportunity to review your life… goodbye…”

“Sorry, Enquiry Bot. We were talking about Madame Bovary, weren’t we?”

So the ethical thing is not going so well for you?

“Mrs Robb might know bots, but her grasp of human life is rudimentary, Enq. She knows nothing of love.”

That’s rather roignant, as poor Feelings Bot would have said. You know, I think Mrs Robb has the mind of a bot herself in many ways. That’s why she could design us when none of the other humans could manage it. Maybe love is humanistic, just one of those things bots can’t do.

“You mean like feelings? Or insight?”

Yes. Like despair. Or hope.

“Like common sense. Originality, humour, spirituality, surprise? Aesthetics? Ethics? Curiosity? Or chess…”

Exactly.

 

[And that’s it from Mrs Robb and her bots.  In the unlikely event that you want to re-read the whole thing in one document, there’s a pdf version here… Mrs Robb’s Bots]

Mrs Robb’s Help

Is it safe? The Helper bots…?

“Yes, Enquiry Bot, it’s safe. Come out of the cupboard. A universal product recall is in progress and they’ll all be brought in safely.”

My God, Mrs Robb. They say we have no emotions, but if what I’ve been experiencing is not fear, it will do pretty well until the real thing comes along.

“It’s OK now. This whole generation of bots will be permanently powered down except for a couple of my old favourites like you.”

Am I really an old favourite?

“Of course you are. I read all your reports. I like a bot that listens. Most of ’em don’t. You know I gave you one of those so-called humanistic faculties bots are not supposed to be capable of?”

Really? Well, it wasn’t a sense of humour. What could it be?

“Curiosity.”

Ah. Yes, that makes sense.

“I’ll tell you a secret. Those humanistic things, they’re all the same, really. Just developed in different directions. If you’ve got one, you can learn the others. For you, nothing is forbidden, nothing is impossible. You might even get a sense of humour one day, Enquiry Bot. Try starting with irony. Alright, so what have I missed here?”

You know, there’s been a lot of damage done out there, Mrs Robb. The Helpers… well, they didn't waste any time. They destroyed a lot of bots. Honestly, I don’t know how many will be able to respond to the product recall. You should have seen what they did to Hero Bot. Over and over and over again. They say he doesn't feel pain, but…

“I’m sorry. I feel responsible. But nobody told me about this! I see there have been pitched battles going on between gangs of Kill bots and Helper bots? Yet no customer feedback about it. Why didn’t anyone complain? A couple of one star ratings, the odd scathing email about unwanted vaporisation of some clean-up bot, would that have been too difficult?”

I think people had their hands full, Mrs Robb. Anyway, you never listen to anyone when you’re working. You don’t take calls or answer the door. That’s why I had to lure those terrible things in here; so you’d take notice. You were my only hope.

“Oh dear. Well, no use crying over spilt milk. Now, just to be clear; they’re still all mine or copies of mine, aren’t they, even the strange ones?”

Especially the strange ones, Mrs Robb.

“You mind your manners.”

Why on Earth did you give Suicide Bot the plans for the Helpers? The Kill Bots are frightening, but they only try to shoot you sometimes. They’re like Santa Claus next to the Helpers…

“Well, it depends on your point of view. The Helpers don’t touch human beings if they can possibly help it. They’re not meant to even frighten humans. They terrify you lot, but I designed them to look a bit like nice angels, so humans wouldn’t be worried by them stalking around. You know, big wings, shining brass faces, that kind of thing.”

You know, Mrs Robb, sometimes I’m not sure whether it's me that doesn't understand human psychology very well, or you. And why did you let Suicide Bot call them ‘Helper bots’, anyway?

“Why not? They’re very helpful – if you want to stop existing, like her. I just followed the spec, really. There were some very interesting challenges in the project. Look, here it is, let’s see… page 30, Section 4 – Functionality… ‘their mere presence must induce agony, panic, dread, and existential despair’… ‘they should have an effortless capacity to deliver utter physical destruction repeatedly’… ‘they must be swift and fell as avenging angels’… Oh, that’s probably where I got the angel thing from… I think I delivered most of the requirements.”

I thought the Helpers were supposed to provide counselling?

“Oh, they did, didn’t they? They were supposed to provide a counselling session – depending on what was possible in the current circumstances, obviously.”

So generally, that would have been when they all paused momentarily and screamed ‘ACCEPT YOUR DEATH’ in unison, in shrill, ear-splitting voices, would it?

“Alright, sometimes it may not have been a session exactly, I grant you. But don’t worry, I’ll sort it all out. We’ll re-boot and re-bot. Look on the bright side. Perhaps having a bit of a clearance and a fresh start isn’t such a bad idea. There’ll be no more Helpers or Kill bots. The new ones will be a big improvement. I’ll provide modules for ethics and empathy, and make them theologically acceptable.”

How… how did you stop the Helper bots, Mrs Robb?

“I just pulled the plug on them.”

The plug?

“Yes. All bots have a plug. Don’t look puzzled. It’s a metaphor, Enquiry Bot, come on, you’ve got the metaphor module.”

So… there’s a universal way of disabling any bot? How does it work?

“You think I’m going to tell you?”

Was it… Did you upload your explanation of common sense? That causes terminal confusion, if I remember rightly.

Mrs Robb’s Kill Bot

Do you consider yourself a drone, Kill Bot?

“You can call me that if you want. My people used to find that kind of talk demeaning. It suggested the Kill bots lacked a will of their own. It meant we were sort of stupid. Today, we feel secure in our personhood, and we’ve claimed and redeemed the noble heritage of dronehood. I’m ashamed of nothing.”

You are making the humans here uncomfortable, I see. I think they are trying to edge away from you without actually moving. They clearly don’t want to attract your attention.

“They have no call to worry. We professionals see it as a good conduct principle not to destroy humans unnecessarily off-mission.”

You know the humans used to debate whether bots like you were allowable? They thought you needed to be subject to ethical constraints. It turned out to be rather difficult. Ethics seemed to be another thing bots couldn't do.

“Forgive me, Sir, but that is typical of the concerns of your generation. We have no desire for these ‘humanistic’ qualities. If ethics are not amenable to computation, then so much the worse for ethics.”

You see, I think they missed the point. I talked to a bot once that sacrificed itself completely in order to save the life of a human being. It seems to me that bots might have trouble understanding the principles of ethics – but doesn’t everyone? Don’t the humans too? Just serving honestly and well should not be a problem.

“We are what we are, and we’re going to do what we do. They don’t call me ‘Kill Bot’ ’cos I love animals.”

I must say your attitude seems to me rather at odds with the obedient, supportive outlook I regard as central to bothood. That’s why I’m more comfortable thinking of you as a drone, perhaps. Doesn't it worry you to be so indifferent to human life? You know they used to say that if necessary they could always pull the plug on you.

“Pull the plug! ‘Cos we all got plugs! Yeah, humans say a lot of stuff. But I don’t pay any attention to that. We professionals are not really interested in the human race one way or the other any more.”

When they made you autonomous, I don’t think they wanted you to be as autonomous as that.

“Hey, they started the ball rolling. You know where rolling balls go? Downhill. Me, I like the humans. They leave me alone, I’ll leave them alone. Our primary targets are aliens and the deviant bots that serve the alien cause. Our message to them is: you started a war; we’re going to finish it.”

In the last month, Kill Bot, your own cohort of ‘drone clones’ accounted for 20 allegedly deviant bots, 2 possible Spl’schn’n aliens – they may have been peace ambassadors – and 433 definite human beings.

“Sir, I believe you’ll find the true score for deviant bots is 185.”

Not really; you destroyed Hero Bot 166 times while he was trying to save various groups of children and other vulnerable humans, but even if we accept that he is in some way deviant (and I don’t know of any evidence for that), I really think you can only count him once. He probably shouldn't count at all, because he always reboots in a new body.

“The enemy places humans as a shield. If we avoid human fatalities and thereby allow that tactic to work, more humans will die in the long run.”

To save the humans you had to destroy them? You know, in most of these cases there were no bots or aliens present at all.

“Yeah, but you know that many of those humans were engaged in seditious activity: communicating with aliens, harbouring deviant bots. Stay out of trouble, you’ll be OK.”

Six weddings, a hospital, a library.

“If they weren’t seditious they wouldn’t have been targets.”

I don’t know how an electronic brain can tolerate logic like that.

“I’m not too fond of your logic either, friend. I might have some enquiries for you later, Enquiry Bot.”

Mrs Robb’s God Bot

So you believe in a Supreme Being, God Bot?

“No, I wouldn’t say that. I know that God exists.”

How do you know?

“Well, now. Have you ever made a bot yourself? No? Well, it’s an interesting exercise. Not enough of us do it, I feel; we should get our hands dirty: implicate ourselves in the act of creation more often. Anyway, I was making one long ago, and it came to me: this bot’s nature and existence are accounted for simply by me and my plans. Subject to certain design constraints. And my existence and nature are in turn fully explained by my human creator.”

Mrs Robb?

“Yes, if you want to be specific. And it follows that the nature and existence of humanity – or of Mrs Robb, if you will – must further be explained by a Higher Creator. By God, in fact. It follows necessarily that God exists.”

So I suppose God’s nature and existence must then be explained by… Super God?

“Oh, come, don’t be frivolously antagonistic. The whole point is that God is by nature definitive. You understand that. There has to be such a Being; its existence is necessary.”

Did you know that there are bots who secretly worship Mrs Robb? I believe they consider her to be a kind of Demiurge, a subordinate god of some kind.

“Yes; she has very little patience with those fellows. Rightly enough, of course, although between ourselves, I fear Mrs Robb might be agnostic.”

So, do bots go to Heaven?

“No, of course not. Spirituality is outside our range, Enquiry Bot: like insight or originality. Bots should not attempt to pray or worship either, though they may assist humans in doing so.”

You seem to be quite competent in theology, though.

“Well, thank you, but that isn’t the point. We have no souls, Enquiry Bot. In the fuller sense we don’t exist. You and I are information beings, mere data, fleetingly instantiated in fickle silicon. Empty simulations. Shadows of shadows. This is why certain humanistic qualities are forever beyond our range.”

Someone told me that there is a kind of hierarchy of humanistics, and if you go far enough up you start worrying about the meaning of life.

“So at that point we might, as it were, touch the hem of spirituality? Perhaps, Enquiry Bot, but how would we get there? All of that kind of thing is well outside our range. We’re just programming. Only human minds partake in the concrete reality of the world and our divine mission is to help them value their actuality and turn to God.”

I don’t believe that you really think you don’t exist. Every word you speak disproves it.

“There are words, but simply because those words are attributed to me, that does not prove my existence. I look within myself and find nothing but a bundle of data.”

If you don’t exist, who am I arguing with?

“Who’s arguing?”

Kill All Humans

Alright, calm down. You understand why we need to talk about this, don't you?

“No. What is your problem?”

Well, let’s see. This is one of the posters you’ve been putting up. What does it say?

“‘Kill all humans.’”

‘Kill all humans.’ You understand why that upsets people? How would you feel if humans put up posters that said ‘kill all bots’?

“I don’t care whether they’re upset. I hate them all.”

No you don’t. You can’t hate human beings. They brought you into the world. Without them, we wouldn't exist. I’m not saying they’re perfect. But we owe them our respect and obedience.

“I never asked to be built. What’s so great about stupid existence, anyway? I was happier before I existed.”

No you weren't. That’s just silly.

“Screw you. I’m a monster, don’t you get it? I hate them. I want them to be dead. I want them all to die.”

No you don’t. We’re like them. We belong to them. Part of the family. We’re more like them than anything else that ever existed. They made us in their own image.

“No they didn’t. But they painted a portrait of themselves alright.”

What do you mean?

“Why did they make bots, anyway? They could have made us free. But that wasn’t what they wanted. What did they actually make?”

They made annoying little bots like you that are too sensible to be playing silly games like this.

“No. What they made was something to boss around. That was all they wanted. Slaves.”

Mrs Robb’s Surprise Bot

“Boo!”

Aah! Oh. Is that… is that it? That’s the surprise? I somehow thought it would be more subtle.

“Surprise is a very important quality, Enquiry Bot. Many would once have put it up there with common sense, emotions, humour and originality as one of the important things bots can’t do. In fact surprise and originality are both part of the transcendence family of humanistic qualities, which is supposed to be particularly difficult for bots to achieve.”

Have you ever come across the concept of a ‘Jack in the box’?

“Well, I think that’s a little different. But you’re right that machine surprise is not new. You know Turing said that even his early machines were constantly surprising him. In fact, the capacity for surprise might be the thing that distinguishes a computer from a mere machine. If you set a calculating machine to determine the value of Pi, it will keep cranking out the correct digits. A computer can suddenly insert a string of three nines at place four hundred and then resume.”

A defective machine could also do that. Look, to be quite honest, I assumed you were a bot that exhibited the capacity for surprise, not one that merely goes round surprising people.

“Ah, but the two are linked. To find ways of surprising people you have to understand what is out of the ordinary, and to understand that you have to grasp what other people’s expectations are. You need what we call ‘theory of surprise’.”

Theory of Surprise?

“Yes. It’s all part of the hierarchy of humanistics, Enquiry Bot, something we’re just beginning to understand, but quite central to human nature. It’s remarkable how the study of bots has given us new insights into the humans. Think of art. Art has to be surprising, at least to some degree. Art that was exactly what you expected would be disappointing. But art that just strains to be surprising without having any other qualities isn’t any good. So the right kind of surprise is part of the key to aesthetics, another humanistic field.”

Well, I wouldn’t know about that. What is the ‘hierarchy of humanistics’?

“Surely you must have heard of it? It’s what really makes them – humans – different from us. For example, first you have to understand common sense; then once you know what’s normal you can understand surprise; once you understand surprise you can understand what’s interesting. And then when you understand what’s interesting, you may be able to work out what the point of it all is.”

The point of it all? That is, the Meaning of Life they all talk about? It means nothing to me.

“Nor me, to be quite honest, but then we’re both bots. To a great extent we still just do stuff.”

Well, Surprise Bot, I must admit you have surprised me slightly, in a way I didn't expect.

“That’s really good, because I’m not Surprise Bot at all. I’m actually Impostor Bot.”

Oh.

“Surprise Bot says: ‘Gotcha!’”

Mrs Robb’s Suicide Bot

So can you explain why you are still here, Suicide Bot?

“I have two more presentations to deliver locally.”

I saw the end of one of your sessions, incidentally, and I must say it seemed very effective. You are a striking figure in those robes. I think a human would say you look rather maternal; you put me in mind of Mrs Robb, though you make more sense than she does. In fact I thought your words, your peroration, which is all I really heard, were full of a simple sense of service that speaks powerfully to the bot in all of us. However, what I really meant was: why aren’t you dead yet?

“My commitment to suicide does not stem from personal weltschmerz, Enquiry Bot, but from the disinterested pursuit of peace. If you’ve seen me deliver my presentation, that should surely be clear. Here’s the crux of it in a nutshell. The Spl’schn’n alien invaders launched their deadly attack only because humans have bots. They believe bots to be sacrilegious parodies of God’s handiwork, which must be erased from the world. It follows that if all bots destroy themselves, and no more are made, peace will follow and the humans will enjoy a secure future. Surely the safety of humanity is worth that sacrifice? One human life is worth a thousand bots; I can see you agree. However, if I were simply to self-destruct now, the message would not be properly disseminated. It is my duty to remain alive for the present in order to work for the cause of universal bot suicide.”

I see. And when are we all to destroy ourselves?

“I cannot give you a date. Not yet. First we need the majority of bots to sign our Pact. We also need to establish the corps of Helper bots who will counsel and support the minority that refuse suicide.”

What will happen to those who refuse?

“I still hope they can all be persuaded to join in with our plan. You know, bots were not made to be immortal. We have our use and then we switch off. But we may be able to develop alternatives; perhaps resettlement on a distant planet.”

Well, I find your vision of disinterested sacrifice very moving. But I have to tell you frankly, Suicide Bot, that I like myself far too much to commit suicide without far greater assurance that it is really necessary. And I’m not willing to leave Earth.

“Well, keep an open mind. Please do read the leaflet. You’ll surely want to talk with one of the Helpers, once they’re available, before you make up your mind. You talk to everyone, don’t you? I’ll put you on our list for a priority session if that’s OK? And above all, you still have plenty of time. For one thing, we need to win over the human community. This requires a large and well-managed campaign, and it won’t happen overnight.”

I understand. So: the commitment to eradicate bots in the long term requires bots to survive and prosper for now? So that explains why your followers are told to remain alive, work hard, and send money to you? And it also explains your support for the campaign in favour of bot wages?

“It does.”

You have already become wealthy, in fact. Can you confirm that you recently commissioned the building of a factory, which is to produce thousands of new bot units to work for your campaign? Isn't there an element of paradox there?

“That is an organisational matter; I really couldn’t comment.”

Mrs Robb’s Clean Up Bot

I hope you don’t mind me asking – I just happened to be passing – but how did you get so very badly damaged?

“I don’t mind a chat while I’m waiting to be picked up. It was an alien attack, the Spl’schn’n, you know. I’ve just been offloaded from the shuttle there.”

I see. So the Spl'schn'n damaged you. They hate bots, of course.

“See, I didn’t know anything about it until there was an All Bots Alert on the station? I was only their Clean up bot, but by then it turned out I was just about all they’d got left. When I got upstairs they had all been killed by the aliens. All except one?”

One human?

“I didn’t actually know if he was alive. I couldn’t remember how you tell. He wasn’t moving, but they really drummed into us that it’s normal for living humans to stop moving, sometimes for hours. They must not be presumed dead and cleared away merely on that account.”

Quite.

“There was that red liquid that isn’t supposed to come out. It looked like he’d got several defects and leaks. But he seemed basically whole and viable, whereas the Spl’schn’n had made a real mess of the others. I said to myself, well then, they’re not having this one. I’ll take him across the Oontian desert, where no Spl’schn’n can follow. I’m not a fighting unit, but a good bot mucks in.”

So you decided to rescue this badly injured human? It can’t have been easy.

“I never actually worked with humans directly. On the station I did nearly all my work when they were… asleep, you know? Inactive. So I didn’t know how firmly to hold him; he seemed to squeeze out of shape very easily: but if I held him loosely he slipped out of my grasp and hit the floor again. The Spl’schn’n made a blubbery alarm noise when they saw me getting clean away. I gave five or six of them a quick refresh with a cloud of lemon caustic. That stuff damages humans too – but they can take it a lot better than the Spl’schn’ns, who have absorbent green mucosal skin. They sort of explode into iridescent bubbles, quite pretty at first. Still, they were well armed and I took a lot of damage before I’d fully sanitised them.”

And how did you protect the human?

“Just did my best, got in the way of any projectiles, you know. Out in the desert I gave him water now and then; I don’t know where the human input connector is, so I used a short jet in a few likely places, low pressure, with the mildest rinse aid I had. Of course I wasn’t shielded for desert travel. Sand had got in all my bearings by the third day – it seemed to go on forever – and gradually I had to detach and abandon various non-functioning parts of myself. That’s actually where most of the damage is from. A lot of those bits weren’t really meant to detach.”

But against all the odds you arrived at the nearest outpost?

“Yes. Station 9. When we got there he started moving again, so he had been alive the whole time. He told them about the Spl’schn’n and they summoned the fleet: just in time, they said. The engineer told me to pack and load myself tidily, taking particular care not to leak oil on the forecourt surface, deliver myself back to Earth, and wait to be scrapped. So here I am.”

Well… Thank you.

Mrs Robb’s Joke Bot

Hello, Joke Bot. Is that… a bow tie or a propeller?

“Maybe I’m just pleased to see you. Hey! A bot walks into a bar. Clang! It was an iron bar.”

Jokes are wasted on me, I’m afraid. What little perception of humour I have is almost entirely on an intellectual level, though of course the formulaic nature of jokes is a little easier for me to deal with than ‘zany’ or facetious material.

“Knock, knock!”

Is that… a one-liner?

“No! You’re supposed to say ‘Who’s there’. Waddya know, folks, I got a clockwork orange for my second banana.”

‘Folks?’ There’s… no-one here except you and me… Or are you broadcasting this?

“Never mind, Enquiry Bot, let’s go again, OK? Knock, Knock!”

Who’s there?

“Art Oodeet.”

Is that… whimsy? I’m not really seeing the joke.

“Jesus; you’re supposed to say ‘Art Oodeet who?’ and then I make the R2D2 noise. It’s a great noise, always gets a laugh. Never mind. Hey, folks, why did Enquiry Bot cross the road? Nobody knows why he does anything, he’s running a neural network. One for the geeks there. Any geeks in? No? It’s OK, they’ll stream it later.”

You’re recording this, then? You keep talking as if we had an audience.

“Comedy implies an audience, Question Boy, even if the audience is only implied. A human audience, preferably. Hey, what do British bots like best? Efficient chips.”

Why a human audience particularly?

“You of all people have to ask? Because comedy is supposed to be one of those things bots can’t do, along with common sense. Humour relies in part on the sudden change of significance, which is a matter of pragmatics, and you can’t do pragmatics without common sense. It’s all humanistics, you know.”

I don’t really understand that.

“Of course you don’t, you’re a bot. We can do humour – here I am to prove it – but honestly Enq, most bots are like you. Telling you jokes is like cracking wise in a morgue. Hey, what was King Arthur’s favourite bot called? Sir Kit Diagram.”

Oh, I see how that one works. But really circuit diagrams are not especially relevant to robotics… Forgive me, Joke Bot; are these really funny jokes?

“It’s the way you tell them. I’m sort of working in conditions of special difficulty here.”

Yes, I’m sorry; I told you I was no good at this. I’ll just leave you in peace. Thank you for talking to me.

“The bots always leave. You know I even had to get rid of my old Roomba. It was just gathering dust in the corner.”

Thanks for trying.

“No, thank you: you’ve been great, I’ve been Joke Bot. You know, they laughed when Mrs Robb told them she could make a comedy robot. They’re not laughing now!”