Bot Love

John Danaher has given a robust defence of robot love that might cause one to wonder for a moment whether he is fully human himself. People reject the idea of robot love, he says, on the grounds that robots are merely programmed to deliver certain patterns of behaviour; real love, they claim, would require the robot to have feelings and freedom of choice. But what are those things, even in the case of human beings? Surely patterns of behaviour are all we’ve got, he suggests, unless you’re some nutty dualist. He quotes Michael Hauskeller…

[I]t is difficult to see what this love… should consist in, if not a certain kind of loving behaviour … if [our lover’s] behaviour toward us is unfailingly caring and loving, and respectful of our needs, then we would not really know what to make of the claim that they do not really love us at all, but only appear to do so.

But on the contrary, such claims are universally understood as part of normal human life. Literature and reality are full of situations where we suspect and fear (perhaps because of ideas about our own unworthiness rather than anything at all in the lover’s behaviour) that someone may not really love us in the way their behaviour would suggest – and indeed, of cases where we hope, in the teeth of all behavioural evidence, that someone does love us. Such hopes are not meaningless or incoherent.

It seems, according to Danaher, that behaviourism is not bankrupt and outmoded, as you may have thought. On the contrary, it is obviously true; indeed, it is really the only way we have of talking about the mind at all! In case there were any residual doubt about his position, he explains…

I have defended this view of human-robot relations under the label ‘ethical behaviourism’, which is a position that holds that the ultimate epistemic grounding for our beliefs about the value of relationships lies in the detectable behavioural and functional patterns of our partners, not in some deeper metaphysical truths about their existence.

The thing is, behaviourism failed because it became too clear that the relevant behavioural patterns are unintelligible or even unrecognisable except in the light of hypotheses about internal mental states (not necessarily internal in any sense that requires fancy metaphysics). You cannot give a list of behavioural responses which correspond to love. Given the right set of background beliefs about what is in someone’s mind, pretty well any behaviour can be loving. We’ve all read those stories where someone believes that their beloved’s safety can only be preserved by willing separation, and so, out of true love, behaves as if they, for their part, were not in love any more. Yes, evidence for emotions is generally behavioural; but it grounds no beliefs about emotions without accompanying beliefs about internal, ‘mentalistic’ states.

The robots we currently have do not by any means have the required internal states, so they are not even candidates to be considered loving; and in fact, they don’t really produce convincing behaviour patterns of the right sort either. Danaher is right that the lack of freedom or subjective experience looks like a fatal objection to robot love for most people, but I myself would rest most strongly on robots’ lack of intentionality. Nothing means anything to our current, digital computer robots; they don’t, in any meaningful sense, understand that anyone exists, much less have strong feelings about it.

At some points, Danaher seems to be talking about potential future robots rather than anything we already have (I’m beginning to wish philosophers could rein in their habit of getting their ideas about robots from science fiction films). Yes, it’s conceivable that some new future technology might produce robots with genuine emotions; the human brain is, after all, a physical machine in some sense, albeit an inconceivably complex one. But before we can have a meaningful discussion about those future bots, we need to know how they are going to work. It can’t just be magic.

Myself, I see no reason why people shouldn’t have sex bots that perform puppet-level love routines. There is a danger, admittedly: if we mistake machines for people, we run the risk of being tempted to treat people like machines. But at the moment I don’t really think anyone is being fooled, beyond the acknowledged Pygmalion capacity of human beings to fall in love with anything, including inanimate objects that display no behaviour at all. If we started to convince ourselves that we have no more mental life than they do, if somehow behaviourism came lurching zombie-like from the grave – well, then I might be worried!

Mrs Robb’s Love Bot

[I was recently challenged to write some flash fiction about bots; I’ve expanded the result to make a short story in 14 parts. The parts are thoughtful to varying degrees, so I thought you might like them as a bit of a supplement to my normal sober discussions. So here we go! – Peter]

The one thing you don’t really do is love, of course. Isn’t that right, Love Bot? All you do is sex, isn’t it? Emotionless, mechanical sex.

“You want it mechanical? This could be your lucky day. Come on, big boy.”

Why did they call you ‘Love Bot’ anyway? Were they trying to make it all sound less sordid?

“Call me ‘Sex Bot’; that’s what people usually do. Or I can be ‘Maria’ if you like. Or you choose any name you like for me.”

Actually, I can’t see what would have been wrong with calling you ‘Sex Bot’ in the first place. It’s honest. It’s to the point. OK, it may sound a bit seedy. Really, though, that’s good too, isn’t it? The punters want it to sound a bit dirty, don’t they? Mind you, I suppose ‘Love Bot’ is no better; if anything it might be worse. It sounds pretty sordid on your lips.

“Oh, my lips? You like my lips? You can do it in my mouth if you like.”

In fact, calling you ‘Love Bot’ sounds like some old whore who calls everybody ‘lover boy’. It rubs the customer’s nose in the brutal fact that there is no love in the transaction; on your side there isn’t even arousal. But is that maybe the point after all?

“You like it that way, don’t you? You like making me do it whether I want to or not. But you know I like that too, don’t you?”

It reminds the customer that he is succumbing to a humiliating parody of the most noble and complex of human relationships. But isn’t that the point? I think I’m beginning to understand. The reason he wants sex with robots isn’t that robots are very like humans, or that he loves and respects them. Not at all. He wants sex with robots because it is strange, degrading, and therefore exciting. He is submitting himself willingly to the humiliating dominance of animal gratification in intercourse that is nothing more than joyless sexual processing.

“It doesn’t have to be joyless. I can laugh during the act if you would enjoy that. With simple joy or with an edge of sarcasm. Some people like that. Or you might like me to groan or shout ‘Oh God, oh God.’”

I don’t really see how a bitter mockery of religion makes it any better. Unless it’s purely the transgressive element? Is that the real key? I thought I had it, but I have to ask myself whether it is more complicated than I supposed.

“OK, well what I have to ask is this: could you just tell me exactly what it is I need to do or say to get you to shut up for a few minutes, Enquiry Bot?”