Minds Within Minds

Can there be minds within minds? I think not.

The train of thought I’m pursuing here started in a conversation with a friend (let’s call him Fidel) who somehow manages to remain not only an orthodox member of the Church of England, but one who is apparently quite untroubled by any reservations, doubts, or issues about the theology involved. Now of course we don’t all see Christianity the same way. Maybe Fidel sees it differently from me. For many people (I think) religion seems to be primarily a social organisation of people with a broadly similar vision of what is good, derived mainly from the teachings of Jesus. To me, and I suspect to most people who are likely to read this, it’s primarily a set of propositions, whose truth, falsity, and consistency are the really important matters. To them it’s a club, to us it’s a theory. I reckon the martyrs and inquisitors who formed the religion, who were prepared to die or kill over formal assent to a point of doctrine, were actually closer to my way of thinking on this, but there we are.

Be that as it may, my friend cunningly used the problems (or mysteries) of his religion as a weapon against me. You atheists are so complacent, he said, you think you’ve got it all sorted out with your little clockwork science universe, but you don’t appreciate the deep mysteries, beyond human understanding. There are more things in heaven and earth…

But that isn’t true at all, I said. If you think current physics works like clockwork, you haven’t been paying attention. And there are lots of philosophical problems where I have only reasonable guesses at the answer, or sometimes, even on some fundamental points, little idea at all. Why, I said injudiciously, I don’t understand at all what reality itself even is. I can sort of see that to be real is to be part of a historical process characterised by causality, but what that really means and why there is anything like that, what the hell is really going on with it…? Ah, said Fidel, what a confession! Well, when you’re ready to learn about reality, you know where to come…

I don’t, though. The trouble is, I don’t think Christianity really has any answers for me on this or many other metaphysical points. Maybe it’s just my ignorance of theology talking here, but it seems to me that, just as Christianity tells us that people are souls and then falls largely silent on how souls and spirits work and what they are, so it tells us that God made the world while withholding any useful details of how and what. I know that Buddhism and Taoism tell us pretty clearly that reality is an illusion; that seems to raise other issues but it’s a respectable start. The clearest Christian answer I can come up with is Berkeley’s idealism; that is, that to be real is to be within the mind of God; the world is whatever God imagines or believes it to be.

That means that we ourselves exist only because we are among the contents of God’s mind. Yet we ourselves are minds, so that requires it to be true that minds can exist within minds (yes, at last I am getting to the point). I don’t think a mind can exist within another mind. The simplest way to explain is perhaps as follows: a thought that exists within a mind, that was generated by that mind, belongs to that mind. So if I am sustaining another mind by my thoughts, all of its thoughts are really generated by me, and of course they are within my mind. So they remain my thoughts; the secondary mind has none that are truly its own – and it doesn’t really exist. In the same way, either God is thinking my thoughts for me – in which case I’m just a puppet – or my thoughts are outside his mind, in which case my reality is grounded in something other than the Divine mind.

That might help explain why God would give us free will, and so on; it looks as if Berkeley must have been perfectly wrong: in fact reality is exactly the quality possessed by those things that are outside God’s mind. Anyway, my grasp of theology is too weak for my thoughts on the matter to be really worth reading (so I owe you an apology); but the idea of minds within minds arises in AI-related philosophy, too; perhaps in relation to Nick Bostrom’s argument that we are almost certainly part of a computer simulation. That argument rests on the idea that future folk with advanced computing tech will produce perfect simulations of societies like their own, which will themselves go on to generate similar simulations, so that most minds, statistically, are likely to be simulated ones. If minds can’t exist within other minds, might we be inclined to doubt that they could arise in mind-like simulations?
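To make the statistical shape of that argument concrete, here is a toy calculation of my own (not Bostrom’s formulation, and the numbers are invented): if some fraction of real civilisations each run a number of ancestor simulations, the proportion of minds that are simulated climbs towards one very quickly.

```python
# Toy illustration (my own sketch, not Bostrom's formulation) of the
# statistical core of the simulation argument: if some real civilisations
# run ancestor simulations, simulated minds quickly outnumber real ones.

def simulated_fraction(frac_simulating: float, sims_per_civ: float) -> float:
    """Fraction of all minds that are simulated, assuming each simulation
    hosts about as many minds as a real civilisation does."""
    simulated_per_real_civ = frac_simulating * sims_per_civ
    return simulated_per_real_civ / (simulated_per_real_civ + 1)

for n in (1, 10, 1000):
    print(f"10% of civilisations running {n} simulations each -> "
          f"{simulated_fraction(0.1, n):.3f} of minds are simulated")
```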

Suppose for the sake of argument that we have a conscious entity that is purely computational; its mind arises from the computations it performs. Why should such a program not contain, as a subroutine of some kind, a distinct process that has the same mind-generating properties? I don’t think the answer is obvious, and it will depend on your view of consciousness. For me it’s all about recognition; a conscious mind is a process whose outputs are conditioned by the recognition of future and imagined entities. So I would see two alternatives: either the computational mind we have supposed has one locus of recognition, or two. If it has one, the secondary mind can only be a puppet; if there are two, then whatever the computational relationship, the secondary process is independent in a way that means it isn’t truly within the primary mind.
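Purely as a structural sketch of those two alternatives, here is some toy code of my own; ‘recognition’ is reduced to a placeholder function, so nothing here is meant as a theory of consciousness, only an illustration of where the locus sits.

```python
# A purely structural sketch (toy code of my own) of the two alternatives:
# does a computational mind containing a mind-like subprocess have one
# locus of recognition, or two? 'Recognition' is just a placeholder here.

from typing import Callable

def make_recogniser(name: str) -> Callable[[str], str]:
    """Stand-in for a locus of recognition: turns a stimulus into a judgement."""
    def recognise(stimulus: str) -> str:
        return f"{name} recognises '{stimulus}'"
    return recognise

host_recognise = make_recogniser("host")

# Alternative 1: one locus. The 'inner mind' is a routine whose outputs are
# all produced by the host's own recogniser -- a puppet, on my account.
def puppet_inner(stimulus: str) -> str:
    return host_recognise(stimulus)

# Alternative 2: two loci. The inner process has its own recogniser, so
# whatever the computational relationship, its judgements are not the host's.
inner_recognise = make_recogniser("inner")
def independent_inner(stimulus: str) -> str:
    return inner_recognise(stimulus)

print(puppet_inner("an imagined future"))       # judged by the host
print(independent_inner("an imagined future"))  # judged by the inner process
```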

That doesn’t seem to give me the anti-Bostrom argument I thought might be there, and let’s be honest, the notion of a ‘locus of recognition’ could possibly be attacked. If God were doing my thinking, I feel it would be a bit sharper than this…

Do We Need Ethical AI?

Amanda Sharkey has produced a very good paper on robot ethics which reviews recent research and considers the right way forward – it’s admirably clear, readable and quite wide-ranging, with a lot of pithy reportage. I found it comforting in one way, as it shows that the arguments have had a rather better airing to date than I had realised.

To cut to the chase, Sharkey ultimately suggests that there are two main ways we could respond to the issue of ethical robots (using the word loosely to cover all kinds of broadly autonomous AI). We could keep on trying to make robots that perform well, so that they can safely be entrusted with moral decisions; or we could decide that robots should be kept away from ethical decisions. She favours the latter course.

What is the problem with robots making ethical decisions? One point is that they lack the ability to understand the very complex background to human situations. At present they are certainly nowhere near a human level of understanding, and it can reasonably be argued that the prospects of their attaining that level of comprehension in the foreseeable future are not good. This is certainly a valid and important consideration when it comes to, say, military kill-bots, which may be required to decide whether a human being is hostile, belligerent, and dangerous. That’s not something even humans find easy in all circumstances. However, while absolutely valid and important, it’s not clear that this is a truly ethical concern; it may be better seen as a safety issue, and Sharkey suggests that this applies to the questions examined by a number of current research projects.

A second objection is that robots are not, and may never be, ethical agents, and so lack the basic competence to make moral decisions. We saw recently that even Daniel Dennett thinks this is an important point. Robots are not agents because they lack true autonomy or free will and do not genuinely have moral responsibility for their decisions.

I agree, of course, that current robots lack real agency, but I don’t think that matters in the way suggested. We need here the basic distinction between good people and good acts. To be a good person you need good motives and good intentions; but good acts are good acts even if performed with no particular desire to do good, or indeed if done from evil but confused motives. Now current robots, lacking any real intentions, cannot be good or bad people, and do not deserve moral praise or blame; but that doesn’t mean they cannot do good or bad things. We will inevitably use moral language in talking about this aspect of robot behaviour just as we talk about strategy and motives when analysing the play of a chess-bot. Computers have no idea that they are playing chess; they have no real desire to win or any of the psychology that humans bring to the contest; but it would be tediously pedantic to deny that they do ‘really’ play chess and equally absurd to bar any discussion of whether their behaviour is good or bad.

I do give full weight to the objection here that using humanistic terms for the bloodless robot equivalents may tend to corrupt our attitude to humans. If we treat machines inappropriately as human, we may end up treating humans inappropriately as machines. Arguably we can see this already in the arguments that have come forward recently against moral blame, usually framed as being against punishment, which sounds kindly, though it seems clear to me that they might also undermine human rights and dignity. I take comfort from the fact that no-one is making this mistake in the case of chess-bots; no-one thinks they should keep the prize money or be set free from the labs where they were created. But there’s undoubtedly a legitimate concern here.

That legitimate concern perhaps needs to be distinguished from a certain irrational repugnance which I think clearly attaches to the idea of robots deciding the fate of humans, or having any control over them. To me this very noticeable moral disgust which arises when we talk of robots deciding to kill humans, punish them, or even constrain them for their own good, is not at all rational, but very much a fact about human nature which needs to be remembered.

The point about robots not being moral persons is interesting in connection with another question. Many current projects use extremely simple robots in very simple situations, and it can be argued that the very basic rule-following or harm prevention being examined is different in kind from real ethical issues. We’re handicapped here by the alarming background fact that there is no philosophical consensus about the basic nature of ethics. Clearly that’s too large a topic to deal with here, but I would argue that while we might disagree about the principles involved (I take a synthetic view myself, in which several basic principles work together) we can surely say that ethical judgements relate to very general considerations about acts. That’s not necessarily to claim that generality alone is in itself definitive of ethical content (it’s much more complicated than that), but I do think it’s a distinguishing feature. That carries the optimistic implication that ethical reasoning, at least in terms of cognitive tractability, might not otherwise be different in kind from ordinary practical reasoning, and that as robots become more capable of dealing with complex tasks they might naturally tend to acquire more genuine moral competence to go with it. One of the plausible arguments against this would be to point to agency as the key dividing line; ethical issues are qualitatively different because they require agency. It is probably evident from the foregoing that I think agency can be separated from the discussion for these purposes.

If robots are likely to acquire ethical competence as a natural by-product of increasing sophistication, then do we need to worry so much? Perhaps not, but the main reason for not worrying, in my eyes, is that truly ethical decisions are likely to be very rare anyway. The case of self-driving vehicles is often cited, but I think our expectations must have been tutored by all those tedious trolley problems; I’ve never encountered a situation in real life where a driver faced a clear-cut decision about saving a busload of nuns at the price of killing one fat man. If a driver follows the rule: ‘try not to crash, and if crashing is unavoidable, try to minimise the impact’, I think almost all real cases will be adequately covered.
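For what it’s worth, here is a minimal sketch of the kind of rule I mean; it is entirely my own illustration, with invented option names and numbers, not anything from Sharkey’s paper or a real driving system.

```python
# Minimal sketch (my own illustration, invented names and numbers) of the
# rule: try not to crash; if crashing is unavoidable, minimise the impact.

from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    crash_probability: float  # estimated chance this manoeuvre ends in a crash
    predicted_impact: float   # estimated severity if it does (arbitrary units)

def choose(options: list[Manoeuvre]) -> Manoeuvre:
    safe = [o for o in options if o.crash_probability < 0.01]
    if safe:
        # Part 1 of the rule: if any option avoids a crash, take the safest.
        return min(safe, key=lambda o: o.crash_probability)
    # Part 2: a crash is unavoidable, so minimise the expected impact.
    return min(options, key=lambda o: o.crash_probability * o.predicted_impact)

options = [
    Manoeuvre("brake hard", 0.9, 3.0),
    Manoeuvre("swerve left", 0.4, 8.0),
    Manoeuvre("swerve right", 0.6, 2.0),
]
print(choose(options).name)  # -> 'swerve right' (lowest expected impact)
```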

A point to remember is that we actually do often make rules about this sort of thing which a robot could follow without needing any ethical sense of its own, so long as its understanding of the general context is adequate. We don’t have explicit rules about how many fat men outweigh a coachload of nuns just because we’ve never really needed them; if it happened every day we’d have debated it and made laws that people would have to know in order to pass their driving test. While there are no laws, even humans are in doubt and no-one can say definitively what the right choice is; so we can hardly complain that the robot’s choice in such circumstances would be the wrong one.

I do nevertheless have some sympathy with Sharkey’s reservations. I don’t think we should hold off from trying to create ethical robots, though; we should go on, not because we want to use the resulting bots to make decisions, but because the research itself may illuminate ethical questions in ways that are interesting (a possibility Sharkey acknowledges). Since on my view we’re probably never really going to need robots with a real ethical sense, and since, if we ever did, there’s a good chance they would have developed the required competence naturally, this looks to me like a case where we can have our cake and eat it (if that isn’t itself unethical).