Bot Love

John Danaher has given a robust defence of robot love that might cause one to wonder for a moment whether he is fully human himself. People reject the idea of robot love because they say robots are merely programmed to deliver certain patterns of behaviour, he says. They claim that real love would require the robot to have feelings, and freedom of choice. But what are those things, even in the case of human beings? Surely patterns of behaviour are all we’ve got, he suggests, unless you’re some nutty dualist. He quotes Michael Hauskeller…

[I]t is difficult to see what this love… should consist in, if not a certain kind of loving behaviour … if [our lover’s] behaviour toward us is unfailingly caring and loving, and respectful of our needs, then we would not really know what to make of the claim that they do not really love us at all, but only appear to do so.

But on the contrary, such claims are universally accepted and understood as part of normal human life. Literature and reality are full of situations where we suspect and fear (perhaps because of ideas about our own unworthiness rather than anything at all in the lover’s behaviour) that someone may not really love us in the way their behaviour would suggest – and indeed, cases where we hope in the teeth of all behavioural evidence that someone does love us. Such hopes are not meaningless or incoherent.

It seems, according to Danaher, that behaviourism is not bankrupt and outmoded, as you may have thought. On the contrary, it is obviously true, and further, it is really the only way we have of talking about the mind at all! If there were any residual doubt about his position, he explains…

I have defended this view of human-robot relations under the label ‘ethical behaviourism’, which is a position that holds that the ultimate epistemic grounding for our beliefs about the value of relationships lies in the detectable behavioural and functional patterns of our partners, not in some deeper metaphysical truths about their existence.

The thing is, behaviourism failed because it became too clear that the relevant behavioural patterns are unintelligible or even unrecognisable except in the light of hypotheses about internal mental states (not necessarily internal in any sense that requires fancy metaphysics). You cannot give a list of behavioural responses which correspond to love. Given the right set of background beliefs about what is in someone’s mind, pretty well any behaviour can be loving. We’ve all read those stories where someone believes that their beloved’s safety can only be preserved by willing separation, and so, out of true love, behaves as if they, for their part, were not in love any more. Yes, evidence for emotions is generally behavioural; but it grounds no beliefs about emotions without accompanying beliefs about internal, ‘mentalistic’ states.

The robots we currently have do not by any means have the required internal states, so they are not even candidates to be considered loving; and in fact, they don’t really produce convincing behaviour patterns of the right sort either. Danaher is right that the lack of freedom or subjective experience looks like a fatal objection to robot love for most people, but myself I would rest most strongly on their lack of intentionality. Nothing means anything to our current, digital computer robots; they don’t, in any meaningful sense, understand that anyone exists, much less have strong feelings about it.

At some points, Danaher seems to be talking about potential future robots rather than anything we already have (I’m beginning to wish philosophers could rein in their habit of getting their ideas about robots from science fiction films). Yes, it’s conceivable that some new future technology might produce robots with genuine emotions; the human brain is, after all, a physical machine in some sense, albeit an inconceivably complex one. But before we can have a meaningful discussion about those future bots, we need to know how they are going to work. It can’t just be magic.

Myself, I see no reason why people shouldn’t have sex bots that perform puppet-level love routines. Admittedly, if we mistake machines for people we run the risk of being tempted to treat people like machines. But at the moment I don’t really think anyone is being fooled, beyond the acknowledged Pygmalion capacity of human beings to fall in love with anything, including inanimate objects that display no behaviour at all. If we started to convince ourselves that we have no more mental life than they do, if somehow behaviourism came lurching zombie-like from the grave – well, then I might be worried!

On the phone or in the phone?

At Aeon, Karina Vold asks whether our smartphones have truly become part of us, and if so whether they deserve new legal protections. She quotes grisly examples of the authorities using a dead man’s finger to try to activate fingerprint recognition on protected devices.

There are several parts to the argument here. One is derived fairly straightforwardly from the extended mind theory. According to this point of view, we are not simply our brains, nor even our bodies. When we use virtual reality devices we may feel ourselves to be elsewhere; a computer can give us cognitive abilities that we can use naturally but would not have been available from our simple biological nervous system. Even in the case of simpler technologies we may feel we are extended. Driving, I sometimes think of the car as ‘me’ in at least a limited sense. If I feel my way with a stick, I feel the ground through the stick, rather than feeling the movement of the stick and making conscious inferences about the ground. Our mind goes out further than we might have thought.

We can probably accept that there is at least some truth in that outlook. But we should also note an important qualification, namely that these things are a matter of degree. A stick in my hand may temporarily become like an extension of my limbs, but it remains temporary and liminal. It never becomes a core part of me in the way that my frontal lobes are. The argument for an extended mind is for a looser and more ambivalent border to the self, not just a wider one.

The second part of the argument is that while the authorities can legitimately seize our property, our minds are legally protected. Vold cites the right to silence, as well as restrictions on the use of drugs and lie detectors. She also quotes a judge to the effect that we are secure in the sanctum of our minds anyway, because there simply isn’t any way the authorities can intervene in there. They can control our behaviour, but not our thoughts.

One problem is that the ethical rationale for the right to remain silent is completely opaque to me. I have no idea what justifies our letting people remain silent in cases where they have information that is legitimately needed. A duty to disclose makes a lot more sense to me. Perhaps this principle is just a strongly-reinforced protection against the possibility of torture, in that removing the right of the authorities to have the information at all cuts off at the root any right to use pain as a means of prising it out? If so, it seems a disproportionate safeguard to me.

I also think the distinction between the ability to control behaviour and the ability to control thoughts is less absolute than might appear. True, we cannot read or implant thoughts themselves. But then it’s extremely difficult to control every action, too. The power of brainwashing techniques has often been overestimated, but the authorities can control information, use persuasive methods and even those forbidden drugs to get what they want. The Stoics, enjoying a bit of a revival in popularity these days, thought that in a broken world you could at least stay in control of your own mind; but it ain’t necessarily so; if they really want to, they can make you love Big Brother.

Still, let’s broadly accept that attempts at direct intervention in the mind are repugnant in ways that restraint of the body is not, and let’s also accept that my smart phone can in some extended sense be regarded as part of my mind. Does it then follow that my phone needs new legal protections in order to preserve the integrity of my personal boundaries?

The word ‘new’ in there is the one that gives me the final problem. Mind extension is not a new thing; if sticks can be part of it, then it’s nearly as old as humanity. Notebooks and encyclopaedias add to our minds, and have been around for a long time. Virtual reality has a special power, but films and even oil paintings sort of do the same job. What’s really new?

I think there is an implicit claim here that phones and other devices are special, because what they do is computation, and that’s what your brain does too. So they become one with our minds in a way that nothing else does. I think that’s just false. Regulars will know I don’t think computation is the essence of thought anyway. But even if it were, the computations done in a phone are completely disconnected from those going on in the brain. Virtual reality may merge with our experience, but what it gives our mind is the outputs of the computation; we never experience the computations themselves. It may hypothetically be the case that future technology will do this, and genuinely merge our thoughts into the data of some advanced machine (I think not, of course); but the idea that we are already at that point and that in fact smartphones already do this is a radical overstatement.

So although existing law may well be improvable, I don’t see a case in principle for any new protections.


Secret Harmonies

We need a richer idea of consciousness and of our minds: Jenny Judge suggests that our experience of music in particular points to a need for an expanded conception of the mind to include visceral apprehension.  Many who have championed the idea of an extended mind that isn’t just identifiable with the brain alone will be nodding along, perhaps rhythmically.

For those of us who are a little entrenched in the limited idea of the mind as a matter of the brain doing computations on representations, Judge cunningly offers a couple of pieces of bait in the form of solid cognitive insights her wider perspective can yield. One is that we perceive and respond to rhythm in important ways, even when we are not consciously aware of doing so. The timing of our utterances can actually change the way they are interpreted and carry significant information. A delay in giving assent can qualify the level of agreement, and apparently this is even culturally variable; the Japanese expect a snappy response, while in Denmark you can take your time without the risk of seeming grudging.

A second insight concerns entrainment, the tendency of connected vibrating systems to synchronise their rhythms. Judge presents plausible evidence that a form of entrainment plays an important role in governing the activity of our minds and even of the neurons in the brain (so it ought not to be ignored, even by those who are initially happy with a narrower conception of cognition).
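For anyone who wants to see the mechanism at its barest, here is a minimal sketch of entrainment in the textbook two-oscillator (Kuramoto-style) form: two oscillators with mismatched natural frequencies drift apart when uncoupled, but settle into a fixed phase relationship once even a modest coupling is switched on. The numbers are arbitrary and the sketch is purely illustrative – it is not a model of anything Judge herself describes.

```python
# A minimal sketch of entrainment: two coupled phase oscillators (Kuramoto-style).
# With no coupling their phase difference keeps drifting; with modest coupling they lock.
# All numbers here are arbitrary and purely illustrative.

import math

def phase_gap(coupling, steps=20000, dt=0.001):
    """Integrate two coupled oscillators and return their final phase difference."""
    w1, w2 = 1.0, 1.3          # natural frequencies (rad/s), deliberately mismatched
    theta1, theta2 = 0.0, 2.0  # arbitrary starting phases
    for _ in range(steps):
        # Each oscillator is nudged towards the other in proportion to the coupling.
        d1 = w1 + coupling * math.sin(theta2 - theta1)
        d2 = w2 + coupling * math.sin(theta1 - theta2)
        theta1 += d1 * dt
        theta2 += d2 * dt
    # Fold the difference into [-pi, pi] for readability.
    return (theta2 - theta1 + math.pi) % (2 * math.pi) - math.pi

print("uncoupled gap:", round(phase_gap(0.0), 3))  # ends up wherever the drift leaves it
print("coupled gap:  ", round(phase_gap(0.5), 3))  # settles near a fixed value (about 0.3 rad)
```

The point of the toy is only that stable synchrony can emerge from nothing more than weak mutual nudging; whether and how the brain exploits anything like this is exactly the empirical question Judge points to.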

Judge discusses the challenges in perceiving music, with its complexity and its inherent sequentiality. The way we perceive time and motion is complex (we could add that sometimes the visual system just seems to label some things as ‘moving’ even though they are not perceived as changing place). But, wisely I think, she does not quite make the further case that the phenomenology of musical experience is peculiarly intractable. It’s true that great music can cause us to experience emotional and cognitive states that we could never otherwise explore, and it would certainly be possible to base an argument from incredulity on this. Just as Leibniz professed disbelief about the possibility of a mill-type mechanism, however complex, producing awareness, or Brentano declared that intentionality was something else altogether, we could claim that musical experience just is not the kind of thing that physical processes could create. Such arguments are powerfully persuasive, but without some further explanation as to exactly why consciousness cannot arise from physical processing, they don’t prove anything.

It would be hard to disagree, however, with the suggestion that our phenomenological experience really needs to be properly charted in a way that does justice to its complexity. I’d have a go myself if I had any idea of how to set about it.

But is it Art?

Is computer art the same as human art? This piece argues that there is no real distinction; I don’t agree about that, but I sort of agree that in some respects the difference may not matter as much as it seems to. Oliver Roeder seems to end up by arguing that since humans write the programs, all computer art is ultimately human art too. Surely that isn’t quite right; you wouldn’t credit a team that wrote architectural design software with authorship of all the buildings it was used to create.

It is clearly true that we can design software tools that artists may use for their own creative purposes – who now, after all, creates graphic work with an airbrush? It’s also true that a lot of supposedly creative software is actually rather limited; it really only distorts or reshuffles standard elements or patterns within very limited parameters. I had to smile when I found that Roeder’s first example was a program generating jazz improvisation; surely the most forgiving musical genre, or as someone ruder once put it, the form of music from which even the concept of a wrong note has been eliminated. But let’s not be nasty to jazz; there have also been successful programs that generated melodies like early Mozart by recombining typically Mozartian motifs; they worked quite well but at best they inevitably resembled the composer on a really bad day when he was ten years old.
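For the curious, here is a toy sketch of what that kind of recombination amounts to at its barest: a handful of invented motifs (mine, not Mozart’s, and not taken from any real system) stitched together so that each fragment begins on the note the previous one ended on. Even this trivial version suggests why the results tend to sound locally plausible and globally aimless.

```python
# A toy illustration of melody generation by recombining stock motifs.
# The motifs and the joining rule are invented for this sketch; no real
# Mozart-imitation program is being reproduced here.

import random

# Each motif is a short run of scale degrees (1 = tonic, 2 = second, and so on).
MOTIFS = [
    [1, 3, 5, 3],
    [3, 2, 1],
    [5, 6, 5, 4, 3],
    [1, 5, 1],
    [3, 4, 5],
]

def generate(fragments=6, seed=None):
    """Chain motifs together, preferring ones that start where the last one ended."""
    rng = random.Random(seed)
    melody = list(rng.choice(MOTIFS))
    for _ in range(fragments - 1):
        last_note = melody[-1]
        # Smooth joins: pick among motifs that begin on the note we just reached.
        candidates = [m for m in MOTIFS if m[0] == last_note] or MOTIFS
        melody.extend(rng.choice(candidates))
    return melody

print(generate(seed=1))
```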

Surely though, there are (or if not, there soon will be, what with all the exciting progress we’re seeing) programs which do a much more sophisticated job of imitating human creativity, ones that generate from scratch genuinely interesting new forms in whatever medium they are designed for? What about those – are their products to be regarded as art? Myself I think not, for two reasons. First, art requires intentionality and computers don’t do intentionality. Art is essentially tied up with meanings and intentions, or with being about something. I should make it clear that I don’t by any means have in mind the naive idea that all art must have a meaning, in the sense of having some moral or message; but in a much looser sense art conveys, evokes or, yes, represents. Even the most abstract forms of visual art or music flow from willed and significant acts by their creator.

Second, there is a creator; art is generated by a person. A person, as I’ve argued before, is a one-off real physical phenomenon in the world; a computer program, by contrast, is a sort of Platonic abstraction like a piece of maths: exactly specified and in some sense eternal. This particularity is reflected in the individual status of works of art, sometimes puzzling to rational folk; a perfect copy of the Mona Lisa is not valued as highly as La Gioconda herself, even though it provides exactly the same visual experience (actually a better one in the case of the copy, since you don’t have to fight the herds of tourists in the Louvre and peer through bullet-proof glass). You might argue that a work of computer art is the product, not of a program in the abstract, but of a particular run of that program on a particular computer (itself necessarily only approximating the ideal of a Turing machine), and so the analogy with human creators can be preserved; but in my view simply being an instance of a program is not enough; the operation of the human brain is bound up in its detailed particularity in a way a program can never be.

Now those considerations, if you accept them, might make you question my initial optimism; perhaps these two objections mean that computers will never in fact produce anything better than shallow, sterile permutations? I don’t think that’s true. I draw an analogy here with Nature. The natural world produces a torrent of forms that are artistically interesting or inspiring, and it does so without needing intentionality or a creator (theists, my apologies, but work with me if you can). I don’t see why a computer program couldn’t generate products that were similarly worthy of our attention. They wouldn’t be art, but in a sense it doesn’t matter: we don’t despise a sunset because nobody made it, and we need not undervalue computer “art” either. (Interesting to reflect in passing that nature often seems to use the same kind of repetition we see in computer-generated fractal art to produce elegant complexity from essentially simple procedures.)
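As a toy illustration of how little machinery that kind of repetition needs, here is the stock textbook case, the Koch curve written as an L-system: one rewrite rule applied over and over to a single segment yields an elaborately self-similar path within a few iterations. It is offered only as an example of complexity from simple repeated procedures, not as a claim about how any particular piece of generative art is made.

```python
# The Koch-curve L-system: elaborate self-similar structure from one simple rule.
# Purely a stock example of "complexity from repetition", nothing more.

def koch(iterations):
    """Expand the Koch L-system, starting from a single segment 'F'."""
    s = "F"
    for _ in range(iterations):
        # The whole generative procedure is this one substitution, repeated.
        s = s.replace("F", "F+F--F+F")
    return s

for n in range(5):
    path = koch(n)
    # Read as turtle graphics: F = draw forward, + and - = turn 60 degrees left/right.
    print(f"iteration {n}: {len(path):5d} symbols")
```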

The relationship between art and nature is of course a long one. Artists have often borrowed natural forms, and different ages have seen whatever most suited their temperament in the natural world, whether a harmonious mathematical regularity or the tortured spirituality of the sublime and the terrible. I think it is quite conceivable that computer “art” (we need a new word – what about “creanda”?) might eventually come to play a similar role. Perhaps people will go out of their way to witness remarkable creanda in much the way they visit the Grand Canyon, and perhaps those evocative items will play an inspiring and fertilising role for human creativity, without anyone ever mistaking the creanda for art.

Covert Consciousness

Are we being watched? Over at Aeon, George Musser asks whether some AI could quietly become conscious without our realising it. After all, it might not feel the need to stop whatever it was doing and announce itself. If it thought about the matter at all, it might think it was prudent to remain unobserved. It might have another form of consciousness, not readily recognisable to us. For that matter, we might not be readily recognisable to it, so that perhaps it would seem to itself to be alone in a solipsistic universe, with no need to try to communicate with anyone.

There have been various scenarios about this kind of thing in the past which I think we can dismiss without too much hesitation. I don’t think the internet is going to attain self-awareness because however complex it may become, it simply isn’t organised in the right kind of way. I don’t think any conventional digital computer is going to become conscious either, for similar reasons.

I think consciousness is basically an expanded development of the faculty of recognition. Animals have gradually evolved the ability to recognise very complex extended stimuli; human beings have taken this a massive leap further, so that we can recognise abstractions and generalities. This makes a qualitative change because we are no longer reliant on what is coming in through our senses from the immediate environment; we can think about anything, even imaginary or nonsensical things.

I think this kind of recognition has an open-ended quality which means it can’t be directly written into a functional system; you can’t just code it up or design the mechanism. So no machines have been really good candidates – until recently. These days I think some AI systems are moving into a space where they learn for themselves in a way which may be supported by their form and the algorithms that back them up, but which does have some of the open-ended qualities of real cognition. My perception is that we’re still a long way from any artificial entity growing into consciousness; but it’s no longer a possibility which can be dismissed without consideration, so it’s a good time for George to be asking the question.

How would it happen? I think we have to imagine that a very advanced AI system has been set to deal with a very complex problem. The system begins to evolve approaches which yield results and it turns out that conscious thought – the kind of detachment from immediate inputs I referred to above – is essential. Bit by bit (ha) the system moves towards it.

I would not absolutely rule out something like that; but I think it is extremely unlikely that the researchers would fail to notice what was happening.

First, I doubt whether there can be forms of consciousness which are unrecognisable to us. If I’m right, consciousness is a kind of function which yields purposeful output behaviour, and purposefulness implies intelligibility. We would just be able to see what it was up to. Some qualifications to this conclusion are necessary. We’ve already had chess AIs that play certain end-games in ways that don’t make much sense to human observers, even chess masters, and look like random flailing. We might get some patterns of behaviour like that. But the chess ‘flailing’ leads reliably to mate, which ultimately is surely noticeable. Another point to bear in mind is that our consciousness was shaped by evolution, and by the competition for food, safety, and reproduction. The supposed AI would have evolved its consciousness in response to completely different imperatives, which might well make some qualitative difference. The thoughts of the AI might not look quite like human cognition. Nevertheless I still think the intentionality of the AI’s outputs could not help but be recognisable. In fact the researchers who set the thing up would presumably have the advantage of knowing the goals which had been set.

Second, we are really strongly predisposed to recognising minds. Meaningless whistling of the wind sounds like voices to us; random marks look like faces; anything that is vaguely humanoid in form or moves around like a sentient creature is quickly anthropomorphised by us and awarded an imaginary personality. We are far more likely to attribute personhood to a dumb robot than dumbness to one with true consciousness. So I don’t think it is particularly likely that a conscious entity could evolve without our knowing it and keep a covert, wary eye on us. It’s much more likely to be the other way around: that the new consciousness doesn’t notice us at first.

I still think in practice that that’s a long way off; but perhaps the time to think seriously about robot rights and morality has come.