Picture: Percy – Brains he has nix.

Ages ago (gosh, it was nearly five years ago) I had a piece where Blandula remarked that any robot clever enough to understand Isaac Asimov’s Three Laws of Robotics would surely be clever enough to circumvent them. At the time I think all I had in mind was the ease with which a clever robot would be able to devise some rationalisation of the harm or disobedience it was contemplating. Asimov himself was of course well aware of the possibility of this kind of thing in a general way. Somewhere (working from memory) I think he explains that it was necessary to specify that robots may not, through inaction, allow a human to come to harm, or they would be able to work round the ban on outright harming by, for example, dropping a heavy weight on a human’s head. Dropping the weight would not amount to harming the human, because the robot was more than capable of catching it again before the moment of contact. But once the weight was falling, a robot without the additional specification would be under no obligation to do the actual catching.

That does not actually wrap up the problem altogether. Even in the case of robots with the additional specification, we can imagine that ways to drop the fatal weight might be found. Suppose, for example, that three robots, who in this case are incapable of catching the weight once dropped, all hold on to it and agree to let go at the same moment. Each individual can feel guiltless because if the other two had held on, the weight would not have dropped. Reasoning of this kind is not at all alien to the human mind;  compare the planned dispersal of responsibility embodied in a firing squad.
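The firing-squad reasoning can be made precise as a counterfactual test. A loose sketch (my own framing, not Asimov's; all names are hypothetical) in which each robot asks "had the other two held on while I released, would the weight have dropped?" and on that basis acquits itself:

```python
# Toy model of diffused responsibility: the weight drops only if
# every robot lets go.
def weight_drops(released):
    return all(released)

robots = ["A", "B", "C"]

for i, name in enumerate(robots):
    # The counterfactual each robot consults: suppose only I release
    # and the other two hold on. Then the weight would not have dropped,
    # so my own release "caused no harm".
    only_i_release = [j == i for j in range(len(robots))]
    print(name, "caused harm:", weight_drops(only_i_release))  # False each time

# Yet the actual, agreed plan has everyone release at once:
print("actual outcome:", weight_drops([True, True, True]))  # True
```

Each robot's individual counterfactual comes out clean, while the joint action kills: the harm lives in the plan, not in any single release.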

Anyway, that’s all very well, but I suspect there may be a deeper argument here: perhaps the cognitive capacity required to understand and apply the Three Laws is actually incompatible with a cognitive set-up that guarantees obedience.

There are two problems for our Asimovian robot: first it has to understand the Laws; second, it has to be able to work out what actions will deliver results compatible with them.  Understanding, to begin with, is an intractable problem.  We know from Quine that every sentence has an endless number of possible interpretations; humans effortlessly pick out the one that makes sense, or at least a small set of alternatives; but there doesn’t seem to be any viable algorithm for picking through the list of interpretations. We can build in hard-wired input-output responses, but when we’re talking about concepts as general and debatable as ‘harm’, that’s really not enough. If we have a robot in a factory, we can ensure that if it detects an unexpected source of heat and movement of the kind a human would generate, it should stop thrashing its painting arm around – but that’s nothing like intelligent obedience of a general law against harm.
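The hard-wired factory rule mentioned above can be sketched to make the contrast vivid. A minimal sketch, with entirely hypothetical sensor readings and thresholds, of a narrow input-output interlock that is nothing like intelligent obedience of a general law against harm:

```python
# A crude hard-wired rule: halt the painting arm when the sensors report
# a heat-and-movement signature of the kind a human body would generate.
# The thresholds are illustrative guesses, not real safety engineering.

def human_probably_present(heat_c, movement_hz):
    # Warm like a body, moving at roughly human speeds.
    return 30.0 <= heat_c <= 40.0 and 0.1 <= movement_hz <= 10.0

def control_step(heat_c, movement_hz):
    if human_probably_present(heat_c, movement_hz):
        return "HALT_ARM"          # stop thrashing the painting arm
    return "CONTINUE_PAINTING"

print(control_step(36.5, 2.0))     # human-like reading -> HALT_ARM
print(control_step(180.0, 50.0))   # drying oven -> CONTINUE_PAINTING
```

The rule never interprets 'harm' at all; it just maps two numbers to a stop signal, which is exactly why it cannot generalise to the open-ended concept the First Law actually invokes.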

But even if we can get the robot to understand the Laws, there’s an equally grave problem involved in making it choose what to do.  We run into the frame problem (in its wider, Dennettian form). This is, very briefly, the problem that arises from tracking changes in the real world. For a robot to keep track of everything that changes (and everything which stays the same, which is also necessary) involves an unmanageable explosion of data. Humans somehow pick out just relevant changes; but again a robot can only pick out what’s relevant by sorting through everything that might be relevant, which leads straight back to the same kind of problem with indefinitely large amounts of data.
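The scale of the explosion is easy to exhibit with toy numbers (the figures below are hypothetical, chosen only to show the growth):

```python
# Why naive relevance-checking explodes: after every action, a robot with
# no notion of relevance re-checks each fact in its world model, including
# all the facts the action left untouched.
import math

n_facts = 1_000           # a very modest world model
n_actions = 10_000        # actions taken over the course of a task

# Re-verifying every fact after every action:
naive_checks = n_facts * n_actions
print(naive_checks)       # -> 10000000

# And that ignores interactions between facts. Even pairwise relevance
# questions ("does moving the weight change what the shelf supports?")
# add a quadratic term per action:
pairwise = math.comb(n_facts, 2)
print(pairwise)           # -> 499500
```

Linear growth per fact is bad enough; once interactions between facts matter, the robot drowns long before it reaches the three-way and four-way dependencies a real kitchen or workshop contains.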

I don’t think it’s a huge leap to see something in common between the two problems; I think we could say that they both arise from an underlying difficulty in dealing with relevance in the face of the buzzing complexity of reality. My own view is that humans get round this problem through recognition; roughly speaking, instead of looking at every object individually to determine whether it’s square, we throw everything into a sort of sieve with holes that only let square things drop through. But whether or not that’s right, and putting aside the question of how you would go about building such a faculty into a robot, I suggest that both understanding and obedience involve the ability to pick out a cogent, non-random option from an infinite range of possibilities. We could call this free will if we were so inclined, but let’s just call it a faculty of choice.
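The sieve picture can be sketched loosely in code (the shapes and the squareness test are hypothetical stand-ins): rather than deliberating over each object in turn, the whole collection is thrown at a single bulk filter whose "holes" encode the property.

```python
# Each shape is (name, number_of_sides, has_right_angles) -- a toy
# representation, not a serious geometry.
shapes = [
    ("box", 4, True),
    ("coin", 0, False),
    ("tile", 4, True),
    ("ball", 0, False),
]

def square_hole(shape):
    # The hole's geometry: only four-sided, right-angled things fit through.
    _, sides, right_angles = shape
    return sides == 4 and right_angles

# The sieve: everything goes in at once; only squares drop through.
dropped_through = list(filter(square_hole, shapes))
print([name for name, _, _ in dropped_through])  # -> ['box', 'tile']
```

Of course, any code sketch still examines each item under the hood; the philosophical claim is about the phenomenology and architecture of recognition, which a serial filter can illustrate but not reproduce.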

Now I think that faculty, which the robot is going to have to exercise in order to obey the Laws, would also unavoidably give it the ability to choose whether to obey them or not. To have the faculty of choice, it has to be able to range over an unlimited set of options, whereas constraining it to any given set of outcomes involves setting limits. I suppose we could put this in a more old-fashioned mentalistic kind of way by observing that obedience, properly understood, does not eliminate the individual will but on the contrary requires it to be exercised in the right way.

If that’s true (and I do realise that the above is hardly a tight knock-down argument) it would give Christians a neat explanation of why God could not have made us all good in the first place – though it would not help with the related problem of why we are exposed to widely varying levels of temptation and opportunity.  To the rest of us it offers, if we want it, another possible compatibilist formulation of the nature of free will.

Comments

  1. Brandon says:

    Somewhere (working from memory) I think he explains that it was necessary to specify that robots may not, through inaction, allow a human to come to harm, or they would be able to work round the ban on outright harming by, for example, dropping a heavy weight on a human’s head.

    The story you were thinking of is “Little Lost Robot”, where the problem of robots without the specification is considered at length (this is one of the examples used by Susan Calvin, the ‘robopsychologist’, to explain that it is a problem).

  2. Cibbuano says:

    Or was it “The Naked Sun”? The detective postulates that a robot could be in the same room as a murdered man…

  3. Kar says:

    Peter,

    We all know that what the former New York governor Eliot Spitzer did (getting himself involved in a high-priced prostitution ring and ending up getting caught) was quite stupid. At the same time, we also know (or assume) that he had in his possession this “faculty of choice”. This is particularly troubling given his past experience as a prosecutor who could prosecute exactly this kind of thing. And somehow, some part of human nature is just capable of overriding this “faculty of choice”….

    Understanding is one thing, emotion is another. People know a high-fat diet is not the healthiest diet, yet many of us crave one more piece of bacon in the morning… nature takes over.

    So, I am inclined to claim that “emotion” overrides knowledge. Perhaps cleverness cannot cause disobedience, if obedience is hardwired.

    So, it may just be possible to engineer robots to be absolutely incapable of disobedience, or of just standing by when a piece of rock is about to fall on a human head – i.e. “emotionally” incapable of staying passive even if there are other robots around that may do the job, because doing so would make them “feel very bad”. All three robots will probably rush to the rescue after dropping the weight (even though they may end up crashing into each other).

    However, it gets complicated with the bacon. It would be quite difficult to engineer a robot that knows when to take the piece of bacon away from its human master in the morning – directly disobeying the master’s order – because, after the master has already had so many pieces, the robot reasons that one more piece of bacon will harm the human and it cannot stay in a state of inaction.

    Perhaps the Asimov laws of Robotics are indeed incompatible with each other.

  4. Peter says:

    Thanks, yes it was “Little Lost Robot”, though there may well be some similar reasoning in “The Naked Sun”, which I read years ago but don’t own.

    I take the point about the over-riding effect of emotions (though I doubt whether they could be 100% reliable in governing behaviour). And I can see the bacon issue being the basis of an interesting story!

  5. Michael Baggot says:

    It seems to me that if you implement a human neural architecture on a robot you will never be able to also make it truly obedient, because humans are in fact only obedient when it suits their purposes. IOW, humans are continually weighing needs vs consequences and acting accordingly, i.e., in pursuit of their own complex interests. Such a robot would have to be cajoled into doing its “owner’s” bidding. Of course, we do not want to modify the cognitive architecture, for then we would lose the desirable cognitive features that humans have to offer. The answer, it seems to me, is this: build a humanoid robot but imbue it with a set of memories, e.g., the life history or cumulative experience of a soldier or perhaps a saint, that would make it think that obeying orders was in its best interest.

  6. Doru says:

    I just stumbled upon your blog and I instantly resonated with it.

    I believe that one day computers will be able to produce the illusion of consciousness.
    This will be achieved through massively parallel computing, when accurate predictive models of reality can be simulated 200-300 ms ahead of the perceived event, much as in our neocortex.

    Hmmm… fascinating…
    I’ll be back!

  7. Peter says:

    Thanks, Doru – I look forward to some good discussion, and I was pleased to be introduced to your own blog, a thoughtful place.

  8. Logan says:

    Hmm… intriguing topic. I would think that at the current moment in time (2009) this would be impossible, simply because we still don’t have a way to make all those processes work synchronously, or within close to the same amount of processing time. Maybe IBM’s quantum computer will be improved to calculate more quickly than it can now. And jelly is much more weight- and space-friendly. But you still have the monumental problem of writing the code for such a program. Because you can’t just have:
    if: problem
    solve it.
    (For any developers looking at this I am sorry for my failing with C++. I only learned a little of it in 5th grade.)

    Even if we want it to be that easy. But I believe that at some point there will be a major breakthrough in the development of a conscious artificial being, even if it’s just a block at first, like these guys have done: http://www.a-i.com/default.asp http://www.a-i.com/show_tree.asp?id=115 – who knows, maybe their way of thinking will get us past that monumental barrier.

    Thanks for the post,
    Logan

  9. Karim says:

    Very thoughtful post on the three laws revisited. It should be very helpful.

    Thanks,
    Karim – Positive thinking

  10. Lloyd Rice says:

    Peter, can you give a more explicit reference on the “Dennettian form” of the frame problem? I have most of his books, so if you can just name one of them, it would help. Sounds like something early, maybe Brainchildren?? Thanks.

  11. Lloyd Rice says:

    Michael (re your comment 5), My earlier take on this was pretty much as you describe: that a machine smart enough to deal with the issues could easily work around the dictum. But I have changed my views on that. I now believe that the real forces that drive us are mostly the underlying substrate, emotions, morality, etc., and that the cognitive layer contributes rather little to the end result. If that is the case, then it might be possible to build in some sort of “gut response” mechanism that would fire off a wave of repulsion at the thought of disobeying any of the “three laws”. It would likely not be 100% effective, but it could steer things in the right direction.

  12. Peter says:

    Lloyd, the “Dennettian form” I have in mind is the one discussed in ‘Cognitive Wheels: the Frame Problem of AI’, which is indeed in ‘Brainchildren’. It was first published in ‘Minds, Machines and Evolution’, edited by C. Hookway, and it’s also in Margaret Boden’s collection ‘The Philosophy of Artificial Intelligence’.

  13. Lloyd Rice says:

    Peter, I’ve been nagging you from time to time about this frame problem. I just dug out the copy of Brainchildren and will get up to date on frames before I say more. I also keep running into references to Minsky’s chapter in Winston’s 1975 “Psychology of Computer Vision”, which I do not have. It’s about $20 on Amazon and I almost went for it last week. We’ll see…

  14. Lloyd Rice says:

    I just checked on the Ford and Hayes book that Dennett mentioned in the Brainchildren article. It must be a real collector’s item, $600 used. And the Pylyshyn books are $80 each. No thanks. I may have to pass on reaching Dennett’s criterion of understanding “just about everything … about the frame problem.”


