Are robots people or people robots?

I must admit I generally think of the argument over human-style artificial intelligence as a two-sided fight. There are those who think it’s possible, and those who think it isn’t. But a chat I had recently made it clear that there are really more differences than that, in particular among those who believe we shall one day have robot chums.

The key difference I have in mind is over whether there really is consciousness at all, or at least whether there’s anything special about it.

One school of thought says that there is indeed a special faculty of consciousness; but eventually machines of sufficient complexity will have it too. We may not yet have all the details of how this thing works; maybe we even need some special new secret. But one thing is perfectly clear: there’s no magic involved, nothing outside the normal physical account, and in fact nothing that isn’t ultimately computable. One day we will be able to build into a machine all the relevant qualities of a human mind. Perhaps we’ll do it by producing an actual direct simulation of a human brain, perhaps not; the point is, when we switch on that ultimate robot, it will have feelings and qualia, it will have moral rights and duties, and it will have the same perception of itself as a real existing personality that we do.

The second school of thought agrees that we shall be able to produce a robot that looks and behaves exactly like a human being. But that robot will not have qualia or feelings or free will or any of the rest of it, because in reality human beings don’t have them either! That’s one of the truths about ourselves that has been helpfully revealed by the progress of AI: all those things are delusions and always have been. Our feelings that we have a real self, that there is phenomenal experience, and that somehow we have a special kind of agency, those things are just complicated by-products of the way we’re organised.

Of course we could split the sceptics too, between those who think that consciousness requires a special spiritual explanation, or is inexplicable altogether, and those who think it is a natural feature of the world, just not computational or not explained by any properties of the physical world known so far. There is clearly some scope for discussion between the former kind of believer and the latter kind of sceptic because they both think that consciousness is a real and interesting feature of the world that needs more explanation, though they differ in their assumptions about how that will turn out. Although there’s less scope for discussion, there’s also some common ground between the two other groups because both basically believe that the only kind of discussion worth having about consciousness is one that clarifies the reasons it should be taken off the table (whether because it’s too much for the human mind or because it isn’t worthy of intelligent consideration).

Clearly it’s possible to take different views on particular issues. Dennett, for example, thinks qualia are just nonsense and the best possible thing would be to stop even talking about them, while he thinks the ability of human beings to deal with the Frame Problem is a real and interesting ability that robots don’t have but could and will once it’s clarified sufficiently.

I find it interesting to speculate about which camp Alan Turing would have joined; did he think that humans had a special capacity which computers could one day share, or did he think that the vaunted consciousness of humans turned out to be nothing more than the mechanical computational abilities of his machines? It’s not altogether clear, but I suspect he was of the latter school of thought. He notes that the specialness of human beings has never really been proved; and a disbelief in the specialness of consciousness might help explain his caginess about answering the question “can machines think?”. He preferred to put the question aside: perhaps that was because he would have preferred to answer: yes, machines can think, but only so long as you realise that ‘thinking’ is not the magic nonsense you take it to be…

Third consciousness

There have been a number of reports of a speech or presentation by Dr. Jaideep Pandit to the Annual Congress of the Association of Anaesthetists of Great Britain and Ireland (AAGBI) in Dublin, in which he apparently proposed the existence of a ‘third’ state of consciousness.

Dr Pandit led the fifth annual survey (NAP5) by the AAGBI on the subject of accidental awareness, in which patients who have been anaesthetised for an operation become conscious during the procedure but are unable to do anything because some of the drugs they are normally given induce paralysis. Anaesthetists used to believe that even though patients couldn’t move, it was generally possible to tell when they were becoming aware through signs of distress such as a raised heart rate and sweating, allowing the anaesthetist to give further doses to correct the problem; but it has become clear that this is not always the case, and that significant numbers of patients do go through severe pain, or are at least aware of what is going on, while on the operating table.

There are extremely difficult methodological issues involved in assessing how often this happens. Besides drugs to paralyse patients and remove the pain, they are typically given drugs which erase any memory. Of those who do become conscious during surgery, only a minority report it afterwards. On the other side, many patients may dream or confabulate an awareness that never really existed. Nevertheless the AAGBI’s annual surveys are a valuable and creditable effort to provide some data.

The point about the ‘third state’ does not arise so much from the survey, however, as from separate research carried out by Dr Ian F Russell: I haven’t been able to track down the particular paper which I think was referred to in Dublin, but there’s a useful general article which covers the same experiments here. The technique used in this case was to keep one forearm unparalysed and then ask the patient at repeated intervals to squeeze with that hand. Russell found that 44% of patients would respond even though they were otherwise apparently unconscious and had no recollection of the episode.

How aware were these patients? They were able to hear and understand the surgeon and respond appropriately, yet we can, with some caveats, presume that they were not in pain. If they had been in agony, these patients, unlike most, could have waved their hand and would surely have done so – unless the state they were in somehow allowed them to respond to requests but not to initiate any action; or possibly left them too confused even to formulate the simple plan of attracting attention, but not so confused that they could not respond to a simple request. At any rate, it is this curiously ambivalent condition which has prompted Dr Pandit’s suggestion of a third state of consciousness.

We might observe that it probably isn’t really the third, but perhaps the seventh (or the seventeenth).  We already have dreaming, hypnosis, sleepwalking, and meditation to take into account, and very likely we could come up with more. The normal mental state of human beings appears to be complex and layered, and capable of breaking down or degrading in ways you wouldn’t have expected.

There’s a moral there for artificial general intelligence, I suppose: if you really want to model human thought, you’re going to need it to operate on a number of levels at once, rather than having a single ‘theatre’ or workspace where mental content does its stuff. It may of course be that we don’t especially want to model specifically human styles of thought. Perhaps a single level cognitive structure will prove perfectly good for most practical tasks; perhaps it will even be an improvement. That raises the interesting possibility of sitting down to have a conversation with a robot friend who seems to be pretty much like us, but has no unconscious. What would people with no unconscious be like? An outside possibility is that they would be fine at ‘easy problem’ stuff, but lack phenomenal depth; perhaps they would be the philosophical zombies whose lack of qualia has featured in so many papers.

More generally, the existence of a state in which we remember nothing and experience no pain, but comply helpfully with requests made to us in normal language suggests that either the self as normally envisaged is not as much in executive control as it thinks – a conclusion which would be supported by various other bits of evidence – or that it is less unitary than we generally suppose. Indeed, we might see this as at least persuasive evidence for the existence of a slightly unexpected degree of modularisation. The ‘third state’ and hypnotic states suggest that there might be a kind of ‘implementer’ function in the brain. This implementer is responsible for actually getting actions carried out; normally it reacts so fast to the least indication from our self-conscious interior monologue that we don’t even notice it; to us it just seems that what we wanted happened. But in certain unusual states the talky, remembering, self-conscious bit of our mind can be turned off while the butler-like implementer is still around and, in the absence of the usual boss, quite happy to respond to instructions from outside (and quite capable of using all the language-processing functions of the brain to understand them).
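To make the speculation concrete, here is a minimal sketch in Python – a toy illustration only, not a cognitive model, and every class and method name is my own invention – of an agent split into a remembering, self-conscious ‘narrator’ and a butler-like ‘implementer’ that still answers simple outside requests when the narrator is switched off:

```python
# Toy illustration only: a two-layer agent in which a butler-like 'implementer'
# keeps handling simple requests even when the remembering, self-conscious
# 'narrator' is offline. All names here are invented for the sketch.

class Implementer:
    """Carries out concrete actions on request; never initiates anything."""
    def comply(self, request: str) -> str:
        return f"performed: {request}"

class Narrator:
    """The talky, remembering, self-conscious layer."""
    def __init__(self) -> None:
        self.memory: list[str] = []

    def decide(self, request: str, implementer: Implementer) -> str:
        result = implementer.comply(request)
        self.memory.append(f"I chose to {request}")  # lays down a recollection
        return result

class Agent:
    def __init__(self) -> None:
        self.implementer = Implementer()
        self.narrator = Narrator()
        self.narrator_online = True

    def hear(self, request: str) -> str:
        if self.narrator_online:
            return self.narrator.decide(request, self.implementer)
        # 'Third state': the narrator is switched off, yet the implementer
        # still responds to outside instructions, and nothing is remembered.
        return self.implementer.comply(request)

patient = Agent()
patient.narrator_online = False            # anaesthesia, very crudely modelled
print(patient.hear("squeeze my hand"))     # performed: squeeze my hand
print(patient.narrator.memory)             # [] -- no recollection afterwards
```

The point of the sketch is only that compliance and recollection come apart cleanly once you allow more than one layer; a single-workspace design would have nowhere for the request to go once the workspace was dark.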

In its way I find that possibility almost as unsettling as the AAGBI’s conclusions about the surprisingly frequent ineffectiveness of anaesthesia.

Conscious Yogurt

Can machines be moral agents? There’s a bold attempt to clarify the position in the IJMC, by Parthemore and Whitby (draft version). In essence they conclude that the factors that go to make a moral agent are the same whether the entity in question is biological or an artefact. On the face of it that seems like common sense – although the opposite conclusion would be more interesting; and there is at least one goodish reason to think that there’s a special problem for artefacts.

But let’s start at the beginning. Parthemore and Whitby propose three building blocks which any successful candidate for moral agency must have: the concept of self, the concept of morality, and the concept of concept.

By the concept of self, they mean not simply a basic awareness of oneself as an object in the landscape, but a self-reflective awareness along the lines of Damasio’s autobiographical self. Their rationale is not set out very explicitly, but I take it they think that without such a sense of self, your acts would seem to be no different from other events in the world; just things you notice happening, and that therefore you couldn’t be seen as responsible for them. It’s a reasonable claim, but I think it properly requires more discussion. A sceptic might make the case that there’s a difference between feeling you’re not responsible and actually not being responsible; that someone could have the cheerily floaty feeling of being a mere observer while actually carrying out acts that were premeditated in some level of the mind. There’s scope then for an extensive argument about whether conscious awareness of making a decision is absolutely necessary to moral ownership of the decision. We’ll save that for another day, and of course Parthemore and Whitby couldn’t hope to deal with every implication in a single paper. I’m happy to take their views on the self as reasonable working assumptions.

The second building block is the concept of morality. Broadly, Parthemore and Whitby want their moral agent to understand and engage with the moral realm; it’s not enough, they say, to memorise by rote learning a set of simple rules. Now of course many people have seemed to believe that learning and obeying a set of rules such as the Ten Commandments was necessary or even sufficient for being a morally good person. What’s going on here? I think this becomes clearer when we move on to the concept of concept, which roughly means that the agent must understand what they’re doing. Parthemore and Whitby do not mean that a moral agent must have a philosophical grasp of the nature of concepts, only that they must be able to deal with them in practical situations, generalising appropriately where necessary. So I think what they’re getting at is not that moral rules are invalid in themselves, merely that a moral agent has to have sufficient conceptual dexterity to apply them properly. A rule about not coveting your neighbour’s goods may be fine, but you need to be able to recognise neighbours and goods and instances of their being coveted, without needing, say, a list of the people to be considered neighbours.
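The distinction can be caricatured in a few lines of Python. This is a toy contrast of my own, not anything from the paper, and all the names and details (streets, house numbers) are hypothetical: a rote version of the rule only works for a memorised list of neighbours, while a concept-level version recognises new instances of ‘neighbour’ from features of the situation.

```python
# Toy contrast (all names hypothetical): rote rule-following versus applying a
# rule through concepts that generalise to cases never listed in advance.

# Rote version: "do not covet thy neighbour's goods" memorised as a lookup table.
KNOWN_NEIGHBOURS = {"Alice", "Bob"}

def rote_violation(owner: str) -> bool:
    # Silently misses any neighbour who was never put on the list.
    return owner in KNOWN_NEIGHBOURS

# Concept version: 'neighbour' is recognised from features of the situation,
# so the rule extends to people the agent has never been told about.
def is_neighbour(owner_street: str, owner_number: int,
                 my_street: str = "Elm Street", my_number: int = 10) -> bool:
    return owner_street == my_street and abs(owner_number - my_number) <= 1

def concept_violation(owner_street: str, owner_number: int) -> bool:
    return is_neighbour(owner_street, owner_number)

print(rote_violation("Carol"))            # False -- Carol was never on the list
print(concept_violation("Elm Street", 11))  # True -- recognised as a neighbour
```

A real agent would of course need something far richer than an address check, but the shape of the requirement is the same: apply the rule by recognising instances of its concepts, not by consulting a finite table.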

So far, so good, but we seem to be missing one item normally considered fundamental: a capacity for free action. I can be fully self-aware, understand and appreciate that stealing is wrong, and be aware that by picking up a chocolate bar without paying I am in fact stealing; but it won’t generally be considered a crime if I have a gun to my head, or have been credibly told that if I don’t steal the chocolate several innocent people will be massacred. More fundamentally, I won’t be held responsible if it turns out that because of internal factors I have no ability to choose otherwise: yet the story told by physics seems to suggest I never really have the ability to choose otherwise. I can’t have real responsibility without real free will (or can I?).

Parthemore and Whitby don’t really acknowledge this issue directly; but they do go on to add what is effectively a fourth requirement for moral agency; you have to be able to act against your own interests. It may be that this is in effect their answer; instead of a magic-seeming capacity for free will they call for a remarkable but fully natural ability to act unselfishly. They refer to this as akrasia; consciously doing the worse thing: normally I think the term refers to the inability to do what you can see is the morally right thing; here Parthemore and Whitby seem to reinterpret it as the ability to do morally good things which run counter to your own selfish interests.

There are a couple of issues with that principle. First, it’s not actually the case that we only act morally when going against our own interests; it’s just that those are the morally interesting cases because we can be sure in those instances that morality alone is the motivation. Worse than that, someone like Socrates would argue that moral action is always in your own best interests, because being a good person is vastly more important than being rich or successful; so no rational person who understood the situation properly would ever choose to do the wrong thing. Probably though, Parthemore and Whitby are really going for something a little different. They link their idea to personal boundaries, citing Andy Clark, so I think they have in mind an ability to extend sympathy or a feeling of empathy to others. The ability they’re after is not so much that of acting against your own interests as that of construing your own interests to include those of other entities.

Anyway, with that conception of moral agency established, are artefacts capable of qualifying? Parthemore and Whitby cite a thought-experiment of Zlatev: suppose someone who lived a normal life were found after death to have had no brain but a small mechanism in their skull: would we on that account disavow the respect and friendship we might have felt for the person during life? Zlatev suggests not; and Parthemore and Whitby, agreeing, propose that it would make no difference if the skull were found to be full of yogurt; indeed, supposing someone who had been found to have yogurt instead of brains were able to continue their life, they would see no reason to treat them any differently on account of their acephaly (galactocephaly?). They set this against John Searle’s view that it is some as-yet-unidentified property of nervous tissue that generates consciousness, and that a mind made out of beer cans is a patent absurdity. Their view, which certainly has its appeal, is that it is performance that matters; if an entity displays all the signs of moral sense, then let’s treat it as a moral being.

Here again Parthemore and Whitby make a reasonable case but seem to neglect a significant point. The main case against artefacts being agents is not the Searlian view, but a claim that in the case of artefacts responsibility devolves backwards to the person who designed them, who either did or should have foreseen how they would behave; is at any rate responsible for their behaving as they do; and therefore bears any blame. My mother is not responsible for my behaviour because she did not design me or program my brain (well, only up to a point), but the creator of Robot Peter would not have the same defence; he should have known what he was bringing into the world. It may be that in Parthemore and Whitby’s view akrasia takes care of this too, but if so it needs explaining.

If Parthemore and Whitby think performance is what matters, you might think they would be well-disposed towards a Moral Turing Test; one in which the candidate’s ability to discuss ethical issues coherently determines whether we should see it as a moral agent or not. Just such a test was proposed by Allen et al, but in fact Parthemore and Whitby are not keen on it. For one thing, as they point out, it requires linguistic ability, whereas they want moral agency to extend to at least some entities with no language competence. Perhaps it would be possible to devise pre-linguistic tests, but they foresee difficulties: rightly, I think. One other snag with a Moral Turing Test would be the difficulty of spotting cases where the test subject had a valid system of ethics which nevertheless differed from our own; we might easily end up looking for virtuous candidates and ignoring those who consciously followed the ethics of Caligula.

The paper goes on to describe conceptual space theory and its universal variant: an ambitious proposal to map the whole space of ideas in a way which the authors think might ground practical models of moral agency. I admire the optimism of the project, but doubt whether any such mapping is possible. Tellingly, the case of colour space, which does lend itself beautifully to a simple 3d mapping, is quoted: I think other parts of the conceptual repertoire are likely to be much more challenging. Interestingly I thought the general drift suggested that the idea was a cousin of Arnold Trehub’s retinoid theory, though more conceptual and perhaps not as well rooted in neurology.
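For what it’s worth, the colour case really can be sketched in a few lines: treat each colour concept as a prototype point in a three-dimensional space and assign any new sample to the nearest prototype. This is only a minimal illustration in the general spirit of conceptual space theory, not the authors’ own proposal, and the choice of axes and prototype values below is arbitrary.

```python
# Minimal illustration of the colour-space case: colour concepts as prototype
# points in a 3-D space, with a new sample classified by nearest prototype.
import math

# Prototype colours as (R, G, B) coordinates; purely illustrative values.
PROTOTYPES = {
    "red":    (255, 0, 0),
    "green":  (0, 255, 0),
    "blue":   (0, 0, 255),
    "yellow": (255, 255, 0),
}

def classify(colour: tuple[int, int, int]) -> str:
    """Assign a colour sample to the concept whose prototype is nearest."""
    return min(PROTOTYPES, key=lambda name: math.dist(colour, PROTOTYPES[name]))

print(classify((200, 40, 30)))    # 'red'
print(classify((240, 210, 20)))   # 'yellow'
```

The trouble, as suggested above, is that ‘neighbour’, ‘promise’ or ‘harm’ give us no obvious axes to measure along, which is why I doubt the mapping generalises as neatly as the colour example implies.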

Overall it’s an interesting paper: Parthemore and Whitby very reasonably say at several points that they’re not out to solve all the problems of philosophy; but I think if they want their points to stick they will unavoidably need to delve more deeply in a couple of places.