Consciousness and agony

This piece in the Atlantic raises a question we have touched on before: anaesthesia. How do we know it works, how do we distinguish it from amnesia, and how much does it matter? What is supposed to happen is that when we get an anaesthetic injection, or gas, we simply stop feeling pain; possibly we stop feeling anything. It is certainly more complicated than that: the drugs used by doctors may simply stop us feeling pain – we more or less know with a local anaesthetic that that’s sort of what happens – but they may also, or alternatively, stop us remembering, or caring about, the pain; they may also merely stop us moving or complaining. Some of the drugs used by anaesthetists apparently do only one of the three latter things, with no direct influence on the pain experience itself.

This matters most obviously when things go wrong, and in the middle of a surgical operation a patient resumes consciousness while remaining paralysed, experiencing all the terrible pain of being cut open without being able to give any sign of it. Because of the memory-erasing effects of some of the drugs used we cannot be sure how often this happens, but it is accepted that sometimes it does.

This is one of those areas where the apparently airy-fairy disputes of philosophy suddenly get real.  A number of people are sceptical in principle about the reality of the self, of consciousness, and of subjective experience: but I think you need to be quite bold to stick to scepticism when it involves dismissing pain on the operating table as a mere conceptual confusion, something we need not be too concerned about.

There is, however, little agreement about some fundamental issues. Does it matter if I feel terrible pain, but then forget about it completely? I’d say yes, but a friend of mine takes the opposite view: if he doesn’t remember it he considers that as good as not having had the experience in the first place. Of course I would have minded at the time, he says, but that’s not what we’re talking about; we’re talking about whether I should mind now. Ex hypothesi, if I’ve forgotten, my mind is in exactly the same state as if I never had the pain, and it’s incoherent to say I should worry about a non-existent difference.

I have to concede that my point of view opens up many more problems. Since memory is fallible, there might be lots of forgotten pains I should be worried about; but if I have a false memory of a pain that never happened, is that also a cause for concern? It’s possible to be in a state of mind in which one feels a pain but somehow does not mind it: but what if I start minding about it later? What if I forget that I didn’t mind? What if I did mind but the pain was actually illusory? What if I felt the pain unconsciously? And what if the memory then became conscious later? Or what if I felt it consciously but have only an unconscious memory? Do animals feel pain in the same way as we do? Do plants? If a drug dials my level of consciousness back to monkey level, dog level, chicken level, lizard level, ant level – does pain still matter? If I feel pain while operating on a protozoan mental level, does it matter more when I remember it on a fully human level? Do imaginative people suffer more (as, apparently, red-headed people tend to do)?

This may all sound stupidly speculative, but the questions are genuine and we’re talking about whether I’m in agony or not. It would be great if we could bring all this out of the philosophy zone and into science, but there are problems there too. The Atlantic article recounts difficulties with the BIS monitor, which is supposed to provide a simple numerical reading of the level of consciousness derived from electroencephalograph (EEG) signals. Perhaps it’s not surprising that the BIS has been questioned: an electroencephalograph is hardly cutting-edge brain technology. A more fundamental problem is that it has no known theoretical basis: the algorithm is secret and the procedure is based entirely on empirical evidence, with no underlying theory of conscious experience. Providing sound empirical evidence of subjective experience is obviously fraught with complications.
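To make the flavour of the thing concrete, here is a minimal sketch (in Python, and emphatically not the proprietary BIS algorithm) of the same general approach: boiling an EEG trace down to a single number. It computes the spectral edge frequency, a real but crude summary statistic sometimes used in anaesthesia monitoring; the signal and thresholds here are invented purely for illustration.

```python
import numpy as np

def spectral_edge_frequency(eeg, fs, edge=0.95, fmax=30.0):
    """Frequency (Hz) below which `edge` of the EEG power up to fmax lies.

    A crude, theory-free summary of the kind depth-of-anaesthesia monitors
    condense into one number; this is NOT the proprietary BIS algorithm.
    """
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    power = np.abs(np.fft.rfft(eeg)) ** 2
    band = (freqs > 0) & (freqs <= fmax)          # ignore DC and very high frequencies
    cum = np.cumsum(power[band]) / power[band].sum()
    return float(freqs[band][np.searchsorted(cum, edge)])

# Toy usage: slow (delta-band) activity plus a little noise, roughly the
# pattern deep anaesthesia produces on an EEG.
fs = 256
t = np.arange(0, 10, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 2 * t) + 0.2 * rng.standard_normal(len(t))
print(spectral_edge_frequency(eeg, fs))           # a low edge frequency
```

The point is that nothing in a calculation like this mentions experience at all; it is pattern-matching on brain waves, which is exactly why a purely empirical index leaves the philosophical question untouched.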

Step forward our old friend Giulio Tononi, whose theory of Phi, the measure of integrated information and hence, perhaps, of consciousness, is ideally suited to fill the conceptual gap. With Phi in one hand and modern scanning technology in the other, surely we can crack this one? I don’t know whether the Phi theory is really up to the job: if it’s true it might tell me why consciousness occurs in the brain and how much of it is going on, but it doesn’t seem to explicate the actual nature of the pain experience, and that leaves us vulnerable because we’re still reliant on the fallible reports and memories of the patients to establish our correlations. Could we ever be in a position where the patient complains of terrible pain and the doctor with the Tononi monitor tells them that actually he can prove they’re not feeling any such thing?
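For anyone who likes to see the arithmetic, here is a toy illustration of the basic idea behind integrated information: that a whole can carry information about itself which its parts, taken separately, do not. This is not Tononi’s actual Φ, which involves perturbing the system and searching over all its partitions; the two-unit system and the crude “whole minus parts” measure are my own simplifications for the sake of the example.

```python
from itertools import product
from math import log2

# Two binary units with coupled dynamics: A' = B, B' = A XOR B.
def step(a, b):
    return b, a ^ b

def entropy(dist):
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def mutual_info(pairs):
    """Mutual information between the two halves of a joint distribution."""
    px, py = {}, {}
    for (x, y), p in pairs.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return entropy(px) + entropy(py) - entropy(pairs)

# Joint distribution p(current state, next state), uniform over current states.
states = list(product([0, 1], repeat=2))
joint = {}
for s in states:
    joint[(s, step(*s))] = joint.get((s, step(*s)), 0) + 1 / len(states)

# How well the whole system predicts its own next state...
mi_whole = mutual_info(joint)

# ...versus how well each unit, viewed in isolation, predicts its own next state.
def part_mi(i):
    marg = {}
    for (cur, nxt), p in joint.items():
        key = (cur[i], nxt[i])
        marg[key] = marg.get(key, 0) + p
    return mutual_info(marg)

integration = mi_whole - part_mi(0) - part_mi(1)
print(mi_whole, part_mi(0), part_mi(1), integration)   # 2.0 0.0 0.0 2.0
```

The whole predicts its own future perfectly while each unit on its own predicts nothing; the difference is a crude measure of integration. But even granting that something like this tracks how much integration is going on, it still tells us nothing about what the integrated state feels like, which is precisely the gap worried about above.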

Perhaps we can imagine a world where the doctor goes further. Great news, he says, we’ve established that you’re a philosophical zombie: although you talk and behave as if you have feelings, you’ve actually never felt real pain at all! So no injection for you – we’ll just strap you down and get on with it…

If guns could kill

Back in November Human Rights Watch (HRW) published a report – Losing Humanity – which essentially called for a ban on killer robots, or more precisely on the development, production, and use of fully autonomous weapons, backing it up with a piece in the Washington Post. The argument was in essence that fully autonomous weapons are most probably not compatible with international conventions on responsible ethical military decision-making, and that robots or machines lack (and perhaps always will lack) the qualities of emotional empathy and ethical judgement required to make decisions about human lives.

You might think that in certain respects that should be fairly uncontroversial. Even if you’re optimistic about the future potential of robotic autonomy, the precautionary principle should dictate that we move with the greatest of caution when it comes to handing over lethal weapons. However, the New Yorker followed up with a piece which linked HRW’s report with the emergence of driverless cars and argued that a ban was ‘wildly unrealistic’. Instead, it said, we simply need to make machines ethical.

I found this quite annoying; not so much the suggestion as the idea that we are anywhere near being in a position to endow machines with ethical awareness. In the first place, actual autonomy for robots is still a remote prospect (which I suppose ought to be comforting in a way). Machines that don’t have a specified function and are left around to do whatever they decide is best are not remotely viable at the moment, nor desirable. We don’t let driverless cars argue with us about whether we should really go to the beach, and we don’t let military machines decide to give up fighting and go into the lumber business.

Nor, for that matter, do we have a clear and uncontroversial theory of ethics of the kind we would need in order to simulate ethical awareness. So the New Yorker is proposing we start building something when we don’t know with any clarity how it works, or even what it is. The danger here, to my way of thinking, is that we might run up some simplistic gizmo and then convince ourselves we now have ethical machines, thereby bypassing the real dangers highlighted by HRW.

Funnily enough I agree with you that the proposal to endow machines with ethics is premature, but for completely different reasons. You think the project is impossible; I think it’s irrelevant. Robots don’t actually need the kind of ethics discussed here.

The New Yorker talks about cases where a driving robot might have to decide to sacrifice its own passengers to save a bus-load of orphans or something. That kind of thing never happens outside philosophers’ thought experiments. In the real world you never know that you’re inevitably going to kill either three bankers or twenty orphans – in every real driving situation you merely need to continue avoiding and minimising impact as much as you possibly can. The problems are practical, not ethical.

In the military sphere your intelligent missile robot isn’t morally any different to a simpler one. People talk about autonomous weapons as though they are inherently dangerous. OK, a robot drone can go wrong and kill the wrong people, but so can a ballistic missile. There’s never certainty about what you’re going to hit. A WWII bomber had to go by the probability that most of its bombs would hit a proper target, not a bus full of orphans (although of course in the later stages of WWII they were targeting civilians too).  Are the people who get killed by a conventional bomb that bounces the wrong way supposed to be comforted by the fact that they were killed by an accident rather than a mistaken decision? It’s about probabilities, and we can get the probabilities of error by autonomous robots down to very low levels.  In the long run intelligent autonomous weapons are going to be less likely to hit the wrong target than a missile simply lobbed in the general direction of the enemy.

Then we have HRW’s extraordinary claim that autonomous weapons are wrong because they lack emotions! They suggest that impulses of mercy and empathy, and an unwillingness to shoot at one’s own people, sometimes intervene in human conflict, but could never do so if robots had the guns. This completely ignores the obvious fact that the emotions of hatred, fear, anger and greed are almost certainly what caused the conflict in the first place and what keep it going! Which soldier is more likely to behave ethically: one who is calm and rational, or one who is in the grip of strong emotions? Who will more probably observe the correct codes of military ethics, Mr Spock or a Viking berserker?

We know what war is good for (absolutely nothing). The costs of a war are always so high that a purely rational party would almost always choose not to fight. Even a bad bargain will nearly always be better than even a good war. We end up fighting for reasons that are emotional, and crucially because we know or fear that the enemy will react emotionally.

I think if you analyse the HRW statement enough it becomes clear that the real reason for wanting to ban autonomous weapons is simply fear; a sense that machines can’t be trusted. There are two facets to this. The first and more reasonable is a fear that when machines fail, disaster may follow. A human being may hit the odd wrong target, but it goes no further: a little bug in some program might cause a robot to go on an endless killing spree. This is basically a fear of brittleness in machine behaviour, and there is a small amount of justification for it. It is true that some relatively unsophisticated linear programs rely on the assumptions built into them, and when those assumptions slip out of sync with reality things may go disastrously and unrecoverably wrong. But that’s because they’re bad programs, not a necessary feature of all autonomous systems, and it is only cause for due caution and appropriate design and testing standards, not a ban.
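Here is a hypothetical sketch of what that brittleness amounts to in code, using an invented navigation example and made-up function names rather than anything from a real system: the first routine silently assumes its sensor data is valid; the second checks the assumption and fails safe.

```python
import math

def compute_heading(lat, lon, target=(51.5, -0.1)):
    """Crude bearing (degrees) from the current position to a fixed target."""
    dy, dx = target[0] - lat, target[1] - lon
    return math.degrees(math.atan2(dx, dy)) % 360

def brittle_heading(gps_fix):
    # Built-in assumption: gps_fix is always a valid (lat, lon) pair.
    # A dropped fix (None) or garbage reading blows up deep in the control
    # loop, or quietly steers on nonsense.
    lat, lon = gps_fix
    return compute_heading(lat, lon)

def guarded_heading(gps_fix, last_good_heading):
    # Same job, but the assumption is checked; when it fails, the system
    # degrades to a safe, recoverable state instead of ploughing on.
    valid = (isinstance(gps_fix, tuple) and len(gps_fix) == 2
             and -90 <= gps_fix[0] <= 90 and -180 <= gps_fix[1] <= 180)
    if not valid:
        return last_good_heading, "hold course and alert operator"
    return compute_heading(*gps_fix), "ok"

print(guarded_heading((50.0, 0.0), 90.0))   # normal operation
print(guarded_heading(None, 90.0))          # assumption broken: safe fallback
# brittle_heading(None) would raise a TypeError mid-mission
```

The difference between the two is ordinary engineering discipline, not metaphysics, which is why it argues for design and testing standards rather than a ban.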

The second facet, I suggest, is really a kind of primitive repugnance for the idea of a human’s being killed by a lesser being; a secret sense that it is worse, somehow more grotesque, for twenty people to be killed by a thrashing robot than by a hysterical bank robber. Simply to describe this impulse is to show its absurdity.

It seems ethics are not important to robots because, for you, they’re not important to anyone. But I’m pleased you agree that robots are outside the moral sphere.

Oh no, I don’t say that. They don’t currently need the kind of utilitarian calculus the New Yorker is on about, but I think it’s inevitable that robots will eventually end up developing not one but two separate codes of ethics. Neither of these will come from some sudden top-down philosophical insight – typical of you to propose that we suspend everything until the philosophy has been sorted out in a few thousand years or so – they’ll be built up from rules of thumb and practical necessity.

First, there’ll be rules of best practice governing their interaction with humans. There may be some that will have to do with safety and the avoidance of brittleness, and many, as Asimov foresaw, will essentially be about deferring to human beings. My guess is that they’ll be in large part about remaining comprehensible to humans; there may be a duty to report, to provide rationales in terms that human beings can understand, and there may be a convention that when robots and humans work together, robots do things the human way, not using procedures too complex for the humans to follow, for example.

More interestingly, when there’s a real community of autonomous robots they are bound to evolve an ethics of their own. This is going to develop in the same sort of way as human ethics, but the conditions are going to be radically different. Human ethics were always dominated by the struggle for food and reproduction and the avoidance of death: those things won’t matter as much in the robot system. But robots will be happy dealing with very complex rules and a high level of game-theoretical understanding, whereas human beings have always tried to simplify things. They won’t really be able to teach us their ethics; we may be able to deal with it intellectually, but we’ll never get it intuitively.

But for once, yes, I agree: we don’t need to worry about that yet.