Bot Love

John Danaher has given a robust defence of robot love that might cause one to wonder for a moment whether he is fully human himself. People reject the idea of robot love because they say robots are merely programmed to deliver certain patterns of behaviour, he says. They claim that real love would require the robot to have feelings, and freedom of choice. But what are those things, even in the case of human beings? Surely patterns of behaviour are all we’ve got, he suggests, unless you’re some nutty dualist. He quotes Michael Hauskeller…

[I]t is difficult to see what this love… should consist in, if not a certain kind of loving behaviour … if [our lover’s] behaviour toward us is unfailingly caring and loving, and respectful of our needs, then we would not really know what to make of the claim that they do not really love us at all, but only appear to do so.

But on the contrary, such claims are universally accepted and understood as part of normal human life. Literature and reality are full of situations where we suspect and fear (perhaps because of ideas about our own unworthiness rather than anything at all in the lover’s behaviour) that someone may not really love us in the way their behaviour would suggest – and indeed, cases where we hope in the teeth of all behavioural evidence that someone does love us. Such hopes are not meaninglessly incoherent.

It seems, according to Danaher, that behaviourism is not bankrupt and outmoded, as you may have thought. On the contrary, it is obviously true, and further, it is really the only way we have of talking about the mind at all! If there were any residual doubt about his position, he explains…

I have defended this view of human-robot relations under the label ‘ethical behaviourism’, which is a position that holds that the ultimate epistemic grounding for our beliefs about the value of relationships lies in the detectable behavioural and functional patterns of our partners, not in some deeper metaphysical truths about their existence.

The thing is, behaviourism failed because it became too clear that the relevant behavioural patterns are unintelligible or even unrecognisable except in the light of hypotheses about internal mental states (not necessarily internal in any sense that requires fancy metaphysics). You cannot give a list of behavioural responses which correspond to love. Given the right set of background beliefs about what is in someone’s mind, pretty well any behaviour can be loving. We’ve all read those stories where someone believes that their beloved’s safety can only be preserved by willing separation, and so, out of true love, behaves as if they, for their part, were not in love any more. Yes, evidence for emotions is generally behavioural; but it grounds no beliefs about emotions without accompanying beliefs about internal, ‘mentalistic’ states.

The robots we currently have do not by any means have the required internal states, so they are not even candidates to be considered loving; and in fact, they don’t really produce convincing behaviour patterns of the right sort either. Danaher is right that the lack of freedom or subjective experience looks like a fatal objection to robot love for most people, but myself I would rest most strongly on their lack of intentionality. Nothing means anything to our current, digital computer robots; they don’t, in any meaningful sense, understand that anyone exists, much less have strong feelings about it.

At some points, Danaher seems to be talking about potential future robots rather than anything we already have (I’m beginning to wish philosophers could rein in their habit of getting their ideas about robots from science fiction films). Yes, it’s conceivable that some new future technology might produce robots with genuine emotions; the human brain is, after all, a physical machine in some sense, albeit an inconceivably complex one. But before we can have a meaningful discussion about those future bots, we need to know how they are going to work. It can’t just be magic.

Myself, I see no reason why people shouldn’t have sex bots that perform puppet-level love routines. If we mistake machines for people we run the risk of being tempted to treat people like machines. But at the moment I don’t really think anyone is being fooled, beyond the acknowledged Pygmalion capacity of human beings to fall in love with anything, including inanimate objects that display no behaviour at all. If we started to convince ourselves that we have no more mental life than they do, if somehow behaviourism came lurching zombie-like from the grave – well, then I might be worried!

An AI driving test?

The first case of a pedestrian death caused by a self-driving vehicle has provoked an understandably strong reaction. Are we witnessing the questioning of the Emperor’s new clothes? Have we been living through the latest and greatest wave of hype around AI and its performance? Will self-driving cars come off the road for a generation, or even forever? Or, on the contrary, will we quickly come to accept that fatal accidents are unavoidable, as we have done for so long in the case of human drivers, after all?

It does seem to me that the dialogue around self-driving cars has been a bit unsatisfactory to date. There’s been a surprising amount of discussion about the supposed ethical issues; should the car save the lives of the occupants if doing so involves killing a greater number of bystanders? I think some people have spent too long on trolley problems; these situations never really come up in practice, and ‘try not to have an accident at all’ is probably a perfectly adequate strategy.

More remarkable is the way the cars have been allowed to move on to public roads quite quickly. No doubt the desire of the relevant authorities in various places to stay ahead of the technological curve has something to do with this. But so far as I know there has been almost no sceptical examination of the technology by genuinely impartial observers. The designers have generally retained quite tight control, and by and large their claims have been accepted rather uncritically. I’ve pointed out before that the idea that self-driving machines are safer than humans sits oddly with the fact that current versions all use the human driver as the emergency fall-back. There may in fact have been some tendency to treat the human as a handy repository for all blame; if they don’t intervene, they should have done; if they do intervene, then any accident is their responsibility because the AI was not in control at the moment of disaster.

Our confidence in autonomous vehicles is the stranger because it is quite well known that AI tends to have problems dealing with the unrestricted and unformalised domain of the real world. In the controlled environment of rail systems, AI works fine, and even in the more demanding case of aviation, autopilots have an excellent record – although a plane is out in the world, it normally only deals with predictable conditions and carefully designed runways. To a degree roads can be considered similarly standardised and predictable, of course, but not all roads and certainly not the human beings that frequent them.

It can be argued that AI does not need human levels of understanding to function well; machine translation now turns in a useful performance without even attempting to fathom what the words are about, after all. But even now it has a significant failure rate, and while an egregious mistranslation here and there probably does little harm, a driving mistake is another matter.

Do we therefore need a driving test for AI? Should autonomous vehicles be put through rigorous tests designed to exploit likely weaknesses and administered by neutral or even hostile examiners? I would have thought something like that would be a natural requirement.

The problem might be whether effective tests are possible. Human drivers have recognisable patterns of failure that can be addressed with more training. That may or may not be the case with AIs. I don’t know how advanced the recognition technology being used actually is, but we know that some of the best available systems can behave in ways that are weirdly unpredictable to human beings, with a few pixels in an image exerting unexpected influence. It might be very difficult to test an AI in ways that give good assurance that performance will degrade, if at all, only gradually and manageably, rather than suddenly and catastrophically.
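To make the ‘few pixels’ point concrete, here is a minimal sketch, assuming PyTorch, torchvision and a pretrained ImageNet classifier, of the fast gradient sign method – one standard demonstration that a perturbation too small for a human to notice can change a network’s output. It illustrates the general phenomenon only; it says nothing about whatever recognition system any particular vehicle actually uses.

import torch
import torch.nn.functional as F
import torchvision.models as models

# A standard pretrained classifier, standing in for any vision model.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_perturb(image, label, epsilon=0.01):
    """Nudge every pixel of `image` by at most `epsilon` in the direction
    that most increases the classification loss for the true `label`.
    Surprisingly small values of epsilon often change the prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    return (image + epsilon * image.grad.sign()).detach()

# Hypothetical usage: `image` is a normalised (1, 3, 224, 224) batch and
# `label` a (1,) tensor holding the correct class index.
# adversarial = fgsm_perturb(image, label)
# model(adversarial).argmax(dim=1)  # frequently differs from the true label

The unsettling part for testing purposes is not that such perturbations exist, but that nothing in ordinary road trials would ever surface them.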

In this context, footage of the recent accident is disturbing. The car simply ploughs right into a clearly visible pedestrian wheeling a bike across the road. It’s hard to see how this can be explained or how it can be consistent with a predictably safe system. I hope we’re not just going to be told it was the fault of the human co-pilot (who reacted too slowly, but surely isn’t supposed to have to be ready for an emergency stop at any moment?) – or worse, of the victim.

Anthropic Consciousness

Stephen Hawking’s recent death caused many to glance regretfully at the unread copies of A Brief History Of Time on their bookshelves. I don’t even own one, but I did read The Grand Design, written with Leonard Mlodinow, and discussed it here. It’s a bold attempt to answer the big questions about why the universe even exists, and I suggested back then that it showed signs of an impatience for answers which is characteristic of scientists, at least as compared with philosophers. One sign of Hawking’s impatience was his readiness to embrace a version of the rather dodgy Anthropic Principle as part of the foundations of his case.

In fact there are many flavours of the Anthropic Principle. The mild but relatively uninteresting version merely says we shouldn’t be all that surprised about being here, because if we hadn’t been here we wouldn’t have been thinking about it at all. Is it an amazing piece of luck that from among all the millions of potential children our parents were capable of engendering, we were the ones who got born? In a way, yes, but then whoever did get born would have had the same perspective. In a similar way, it’s not that surprising that the universe seems designed to accommodate human beings, because if it hadn’t been that way, no-one would be worrying about it.

That’s alright, but the stronger versions of the Principle make much more dubious claims, implying that our existence as observers really might have called the world into existence in some stronger sense. If I understood them correctly, Hawking and Mlodinow pitched their camp in this difficult territory.

Here at Conscious Entities we do sometimes glance at the cosmic questions, but our core subject is of course consciousness. So for us the natural question is, could there be an Anthropic-style explanation of consciousness? Well, we could certainly have a mild version of the argument, which would simply say that we shouldn’t be surprised that consciousness exists, because if it didn’t no-one would be thinking about it. That’s fine but unsatisfying.

Is there a stronger version in which our conscious experience creates the preconditions for itself? I can think of one argument which is a bit like that. Let me begin by proposing an analogy in the supposed Problem of Focus.

The Problem of Focus notes that the human eye has the extraordinary power of drawing in beams of light from all the objects around it. Somehow every surface around us is impelled to send rays right in to that weirdly powerful metaphysical entity which resides in our eyes, the Focus. Some philosophers deny that there is a single Focus in each eye, suggesting it changes constantly. Some say the whole idea of a Focus with special powers is an illusion, a misconception of perfectly normal physical processes. Others respond that the facts of optometry and vision just show that denying the existence of Focus is in practice impossible; even the sceptics wear glasses!

I don’t suppose anyone will be detained for long by worries about the Problem of Focus; but what if we remove the beams of light and substitute instead the power of intentionality, i.e. our mental ability to think about things? Being able to focus on an item mentally is clearly a useful ability, allowing us to target our behaviour more effectively. We can think of intentionality as a system of pointers, or lines connecting us to the object being thought of. Lines, however, have two ends, so the back end of these ones must converge in a single point. Isn’t it remarkable that this single focus point is able to draw together the contents of consciousness in a way which in fact generates that very state of awareness?

Alright, I’m no Hawking…

Brain Preservation Prize

The prize offered by the Brain Preservation Foundation has been won by 21st Century Medicine (21CM) with the Aldehyde-Stabilized Cryopreservation (ASC) technique it has developed. In essence this combines chemical and cryogenic approaches and is apparently capable of preserving the whole connectome (or neural network) of a large mammalian brain (here, a pig brain) in full detail and indefinitely. That is a remarkable achievement. A paper is here.

I am an advisor to the BPF, though I should make it clear that they don’t pay me and I haven’t given them a great deal of advice. I’ve always said I would be a critical friend, in that I doubt this research is ever going to lead to personal survival of the self whose brain is preserved. However, in my opinion it is much more realistic than a pure scan-and-upload approach, and has the potential to yield many interesting benefits even if it never yields personal immortality.

One great advantage of preserving the brain like this is that it defers some choices. When we model a brain or attempt to scan it into software, we have to pick out the features we think are salient, and concentrate on those. Since we don’t yet have any comprehensive and certain account of how the brain functions, we might easily be missing something essential. If we keep the whole of an actual brain, we don’t have to make such detailed choices and have a better chance of preserving features whose importance we haven’t yet recognised.

It’s still possible that we might lose something essential, of course. ASC, not unreasonably, concentrates on preserving the connectome. I’m not sure whether, for example, it also keeps the brain’s astrocytes in good condition, though I’d guess it probably does. These are the non-neural cells which have often been regarded as mere packing, but which may in fact have significant roles to play. Recently we’ve heard that neurons appear to signal with RNA packets; again, I don’t know whether ASC preserves any information about that – though it might. But even on a pessimistic view, ASC must in my view be a far better preservation proposition than digital models that explicitly drop the detailed structure of individual neurons in favour of an unrealistically standardised model, and struggle with many other features.

Preserving brains in fine detail is a worthy project in itself, which might yield big benefits to research in due course. But of course the project embodies the hope that the contents of a mind and even the personality of an individual could be delivered to posterity. I do not think the contents of a mind are likely to be recoverable from a preserved brain yet awhile, but in the long run, why not? On identity, I am a believer in brute physical continuity. We are our brains, I believe (I wave my hands to indicate various caveats and qualifications which need not sideline us here). If we want to retain our personal identity, then, the actual physical preservation of the brain is essential.

Now, once your brain has been preserved by ASC, it really isn’t going to be started up again in its existing physical form. The Foundation looks to uploading at this stage, but because I don’t think digital uploading as we now envision it is possible in principle, I don’t see that ever working. However, there is a tiny chink of light at the end of that gloomy tunnel. My main problem is with the computational nature of uploading as currently envisaged. It is conceivable that the future will bring non-computational technologies which just might allow us to upload, not ourselves, but a kind of mental twin at least. That’s a remote speculation, but still a fascinating prospect. Is it just conceivable that ways might be found to go that little bit further and deliver some kind of actual physical interaction between these hypothetical machines and the essential slivers of a preserved brain, some echo such that identity was preserved? Honestly, I think not, but I won’t quite say it is inconceivable. You could say that in my view the huge advantage of the brain preservation strategy for achieving immortality is that unlike its rivals it falls just short of being impossible in principle.

So I suppose, to paraphrase Gimli the dwarf: certainty of death; microscopic chance of success – what are we waiting for?

Postscript: I meant by that last bit that we should continue research, but I see it is open to misinterpretation. I didn’t dream people would actually do this, but I read that Robert McIntyre, lead author of the paper linked above, is floating a startup to apply the technique to people who are not yet dead. That would surely be unethical. If suicide were legal and if you had decided that was your preferred option, you might reasonably choose a method with a tiny chance of being revived in future. But I don’t think you can ask people to pay for a technique (surely still inadequately tested and developed for human beings) where the prospects of revival are currently negligible and most likely will remain so.

On the phone or in the phone?

At Aeon, Karina Vold asks whether our smartphones have truly become part of us, and if so whether they deserve new legal protections. She quotes grisly examples of the authorities using a dead man’s finger to try to activate fingerprint recognition on protected devices.

There are several parts to the argument here. One is derived fairly straightforwardly from the extended mind theory. According to this point of view, we are not simply our brains, nor even our bodies. When we use virtual reality devices we may feel ourselves to be elsewhere; a computer can give us cognitive abilities that we can use naturally but would not have been available from our simple biological nervous system. Even in the case of simpler technologies we may feel we are extended. Driving, I sometimes think of the car as ‘me’ in at least a limited sense. If I feel my way with a stick, I feel the ground through the stick, rather than feeling the movement of the stick and making conscious inferences about the ground. Our mind goes out further than we might have thought.

We can probably accept that there is at least some truth in that outlook. But we should also note an important qualification, namely that these things are a matter of degree. A stick in my hand may temporarily become like an extension of my limbs, but it remains temporary and liminal. It never becomes a core part of me in the way that my frontal lobes are. The argument for an extended mind is for a looser and more ambivalent border to the self, not just a wider one.

The second part of the argument is that while the authorities can legitimately seize our property, our minds are legally protected. Vold cites the right to silence, as well as restrictions on the use of drugs and lie detectors. She also quotes a judge to the effect that we are secure in the sanctum of our minds anyway, because there simply isn’t any way the authorities can intervene in there. They can control our behaviour, but not our thoughts.

One problem is that the ethical rationale for the right to remain silent is completely opaque to me. I have no idea what justifies letting people remain silent in cases where they have information that is legitimately needed; a duty to disclose makes a lot more sense. Perhaps the principle is just a strongly reinforced protection against the possibility of torture, in that removing the authorities’ right to have the information at all cuts off at the root any right to use pain as a means of prising it out? If so, it seems a disproportionate remedy to me.

I also think the distinction between the ability to control behaviour and the ability to control thoughts is less absolute than might appear. True, we cannot read or implant thoughts themselves. But then it’s extremely difficult to control every action, too. The power of brainwashing techniques has often been overestimated, but the authorities can control information, use persuasive methods and even those forbidden drugs to get what they want. The Stoics, enjoying a bit of a revival in popularity these days, thought that in a broken world you could at least stay in control of your own mind; but it ain’t necessarily so; if they really want to, they can make you love Big Brother.

Still, let’s broadly accept that attempts at direct intervention in the mind are repugnant in ways that restraint of the body is not, and let’s also accept that my smart phone can in some extended sense be regarded as part of my mind. Does it then follow that my phone needs new legal protections in order to preserve the integrity of my personal boundaries?

The word ‘new’ in there is the one that gives me the final problem. Mind extension is not a new thing; if sticks can be part of it, then it’s nearly as old as humanity. Notebooks and encyclopaedias add to our minds, and have been around for a long time. Virtual reality has a special power, but films and even oil paintings sort of do the same job. What’s really new?

I think there is an implicit claim here that phones and other devices are special, because what they do is computation, and that’s what your brain does too. So they become one with our minds in a way that nothing else does. I think that’s just false. Regulars will know I don’t think computation is the essence of thought anyway. But even if it were, the computations done in a phone are completely disconnected from those going on in the brain. Virtual reality may merge with our experience, but what it gives our mind is the outputs of the computation; we never experience the computations themselves. It may hypothetically be the case that future technology will do this, and genuinely merge our thoughts into the data of some advanced machine (I think not, of course); but the idea that we are already at that point and that in fact smartphones already do this is a radical overstatement.

So although existing law may well be improvable, I don’t see a case in principle for any new protections.