Clockwork Orange syndrome?

Picture: Clockwork Orange.

Over at the Institute for Ethics and Emerging Technologies, Martine Rothblatt offers to solve the Hard Problem for us in her piece “Can Consciousness be created in Software?”. The Hard Problem, as regulars here will know, is how to explain subjective experience: the redness of red, the ineffable inner sense of what experience is like. Rothblatt’s account is quite short, and on my first reading I slipped past the crucial sentences and found her claiming victory before I had quite realised what her answer was, so that I had to go back and read it again more carefully.

She says: “Redness is part of the gestalt impression obtained in a second or less from the immense pattern of neural connections we have built up about red things… With each neuron able to make as many as 10,000 connections, and with 100 billion neurons, there is ample possibility for each person to have subjective experiences through idiosyncratic patterns of connectivity.” The point seems to be that the huge connectivity of the human brain makes it easy for everyone’s experience of redness to be unique.
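Just to get a feel for the scale being invoked here, a rough back-of-envelope sketch (the two quoted figures are Rothblatt’s; everything else is purely illustrative):

```python
import math

NEURONS = 100_000_000_000        # the 100 billion neurons Rothblatt cites
CONNECTIONS_PER_NEURON = 10_000  # "as many as 10,000 connections" each

# Total connections if every neuron used its full quota: about 10^15.
total = NEURONS * CONNECTIONS_PER_NEURON
print(f"total connections ~ 10^{round(math.log10(total))}")   # 10^15

# Ways a SINGLE neuron could choose its 10,000 partners from the other
# neurons: C(10^11, 10^4), a number with tens of thousands of digits.
log10_ways = (math.lgamma(NEURONS + 1)
              - math.lgamma(CONNECTIONS_PER_NEURON + 1)
              - math.lgamma(NEURONS - CONNECTIONS_PER_NEURON + 1)) / math.log(10)
print(f"wiring choices for one neuron ~ 10^{log10_ways:,.0f}")
```

So the wiring really is capacious enough for everyone’s pattern, and hence on Rothblatt’s account everyone’s redness, to be one of a kind.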

This is an interesting and rather novel point of view. One traditional way of framing the Hard Problem is to ask whether the blue that I see is really the same as the blue that you see – or could it be equivalent to my orange? How would we know?  On Rothblatt’s view it would seem the answer must be that my blue is definitely not the same as yours, nor the same as anyone else’s; and nor is it the same as your orange, or Fred’s green for that matter, everyone having an experience of blue which is unique to them and their particular pattern of neuronal connection. I find this a slightly scary perspective, but not illogical. Presumably from Rothblatt’s point of view the only thing the different experiences have in common is that they all refer to the same blue things in the real world (or something like that).

Is it necessarily so, though?  Does the fact that our neuronal connections are different mean our experiences are different? I don’t think so. After all, I can write a particular sentence in different fonts and different sizes and colours of text;  I can model it in clay or project it on a screen, yet it remains exactly the same sentence. Differences in the substrate don’t matter.  We can go a step further: when people think of that same sentence, we must presume that the neuronal connections supporting the thought are different in each individual – yet it’s the same sentence they’re thinking. So why can’t different sets of neural connections support the same experience of blue in different heads?

Rothblatt’s account is a brief one, and perhaps she has some further theoretical points which explain the relationship between neuronal structures and subjective experiences in a way which would illuminate this. But hang on – wasn’t the relationship between neuronal structures and subjective experience the original problem? Slapping our foreheads, we realise that Rothblatt has palmed off on us an idea about uniqueness which actually doesn’t address the Hard Problem at all. The Hard Problem is not about how brains can accommodate a million unique experiences: it’s about how brains can accommodate subjectivity, how they make it true that there is something it is like to see red.

I fear this may be another example of the noticeable tendency of some theorists, especially those with a strong scientific background, to give us clockwork oranges, to borrow the metaphor Anthony Burgess used in a slightly different context: accounts that model some of the physical features quite nicely (see, it’s round, it hangs from trees effectively) while missing the essential juice which is really the whole point. According to Burgess, or at least to F. Alexander, the ill-fated character who expresses this view in the novel, the point of a man is not orderly compliance with social requirements but, as it were, his juice, his inner reality. I think the answer to the Hard Problem is indeed something to do with our reality (I should say that I mean ‘reality’ in a perfectly everyday sense: Rothblatt, like some purveyors of mechanical fruit, is a little too quick to dismiss anything not strictly reducible to physics as mysticism): we’re fine when we’re dealing with abstractions; it’s the concrete and particular which remains stubbornly inexplicable in any but superficial terms. It’s my real experience of redness that causes the difficulty. Perhaps Rothblatt’s ideas about uniqueness are an attempt to capture the one-off quality of that experience; but uniqueness isn’t quite the point, and I think we need something deeper.

The Three Laws revisited

Picture: Percy - Brains he has nix.

Ages ago (gosh, it was nearly five years ago) I had a piece where Blandula remarked that any robot clever enough to understand Isaac Asimov’s Three Laws of Robotics would surely be clever enough to circumvent them. At the time I think all I had in mind was the ease with which a clever robot would be able to devise some rationalisation of the harm or disobedience it was contemplating. Asimov himself was of course well aware of the possibility of this kind of thing in a general way. Somewhere (working from memory) I think he explains that it was necessary to specify that robots may not, through inaction, allow a human to come to harm, or they would be able to work round the ban on outright harming by, for example, dropping a heavy weight on a human’s head. Dropping the weight would not amount to harming the human because the robot was more than capable of catching it again before the moment of contact. But once the weight was falling, a robot without the additional specification would be under no obligation to do the actual catching.

That does not actually wrap up the problem altogether. Even in the case of robots with the additional specification, we can imagine that ways to drop the fatal weight might be found. Suppose, for example, that three robots, who in this case are incapable of catching the weight once dropped, all hold on to it and agree to let go at the same moment. Each individual can feel guiltless because if the other two had held on, the weight would not have dropped. Reasoning of this kind is not at all alien to the human mind;  compare the planned dispersal of responsibility embodied in a firing squad.

Anyway, that’s all very well, but I think there may be a deeper argument here: perhaps the cognitive capacity required to understand and apply the Three Laws is actually incompatible with a cognitive set-up that guarantees obedience.

There are two problems for our Asimovian robot: first, it has to understand the Laws; second, it has to be able to work out what actions will deliver results compatible with them. Understanding, to begin with, is an intractable problem. We know from Quine that every sentence has an endless number of possible interpretations; humans effortlessly pick out the one that makes sense, or at least a small set of alternatives, but there doesn’t seem to be any viable algorithm for picking through the list of interpretations. We can build in hard-wired input-output responses, but when we’re talking about concepts as general and debatable as ‘harm’, that’s really not enough. If we have a robot in a factory, we can ensure that when it detects an unexpected source of heat and movement of the kind a human would generate, it stops thrashing its painting arm around – but that’s nothing like intelligent obedience to a general law against harm.
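To make the contrast concrete, something like the sketch below is all a hard-wired response amounts to (the sensor fields and threshold are invented purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    heat_signature: float   # degrees C detected in the work envelope
    movement: bool          # unexpected motion detected near the arm

HUMAN_LIKE_HEAT = 30.0      # hypothetical threshold for 'warm like a person'

def should_halt_painting_arm(reading: SensorReading) -> bool:
    """Hard-wired rule: stop if something warm and moving is in the cell.

    This encodes one narrow reflex, not a grasp of 'harm': a person standing
    perfectly still behind a cold screen would not trigger it, while a warm,
    moving machine part would.
    """
    return reading.movement and reading.heat_signature >= HUMAN_LIKE_HEAT

print(should_halt_painting_arm(SensorReading(36.5, True)))   # True: halt the arm
print(should_halt_painting_arm(SensorReading(22.0, True)))   # False: keep painting
```

However many rules of this kind we piled up, we would still have a bundle of reflexes, not intelligent obedience.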

But even if we can get the robot to understand the Laws, there’s an equally grave problem involved in making it choose what to do. We run into the frame problem (in its wider, Dennettian form). This is, very briefly, the problem that arises from keeping track of changes in the real world. For a robot, keeping track of everything that changes (and everything which stays the same, which is also necessary) involves an unmanageable explosion of data. Humans somehow pick out just the relevant changes; but again a robot can only pick out what’s relevant by sorting through everything that might be relevant, which leads straight back to the same kind of problem with indefinitely large amounts of data.
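A toy sketch (not a serious model of anything) of why the naive approach explodes: if the robot represents the world as a set of facts and, after each action, reconsiders every one of them, the work grows with everything it knows rather than with what actually matters.

```python
import itertools

# Toy world model: one boolean fact per (object, property) pair.
objects = [f"object_{i}" for i in range(1000)]
properties = ["position", "colour", "temperature", "owner", "fragile"]
facts = {pair: True for pair in itertools.product(objects, properties)}

def naive_update(facts, action):
    """After any action, reconsider EVERY fact, including all the ones the
    action obviously cannot have touched: cost is proportional to the whole
    knowledge base, for every single action."""
    checks = 0
    for fact in facts:
        checks += 1     # 'is this fact still true after the action?'
        # ...real persistence reasoning would go here...
    return checks

print(naive_update(facts, "move object_7"))   # 5000 checks for one trivial action
```

Multiply that by a lifetime of actions and a realistically rich knowledge base, and the checking dwarfs the acting; what is missing is a cheap way of picking out relevance.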

I don’t think it’s a huge leap to see something in common between the two problems; I think we could say that they both arise from an underlying difficulty in dealing with relevance in the face of  the buzzing complexity of reality. My own view is that humans get round this problem through recognition; roughly speaking, instead of looking at every object individually to determine whether it’s square, we throw everything into a sort of sieve with holes that only let square things drop through. But whether or not that’s right, and putting aside the question of how you would go about building such a faculty into a robot, I suggest that both understanding and obedience involve the ability to pick out a cogent, non-random option from an infinite range of possibilities.  We could call this free will if we were so inclined, but let’s just call it a faculty of choice.
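The sieve idea can be caricatured in code (and a caricature is all this is, since the hard part is saying where the holes come from):

```python
def square_sieve(thing):
    """The 'holes' of the sieve: a fixed recognition test, applied blindly.
    Nothing is deliberated over; a thing either drops through or it doesn't."""
    width, height = thing
    return width == height

stream_of_things = [(3, 3), (4, 7), (2, 2), (9, 1), (5, 5)]

# Recognition as filtering: the relevant items simply fall out of the stream,
# with no item-by-item reasoning about why they might be relevant.
squares = [t for t in stream_of_things if square_sieve(t)]
print(squares)   # [(3, 3), (2, 2), (5, 5)]
```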

Now I think that faculty, which the robot is going to have to exercise in order to obey the Laws, would also unavoidably give it the ability to choose whether to obey them or not. To have the faculty of choice, the robot has to be able to range over an unlimited set of options, whereas constraining it to any given set of outcomes means setting limits on that range. I suppose we could put this in a more old-fashioned, mentalistic kind of way by observing that obedience, properly understood, does not eliminate the individual will but on the contrary requires it to be exercised in the right way.

If that’s true (and I do realise that the above is hardly a tight knock-down argument) it would give Christians a neat explanation of why God could not have made us all good in the first place – though it would not help with the related problem of why we are exposed to widely varying levels of temptation and opportunity.  To the rest of us it offers, if we want it, another possible compatibilist formulation of the nature of free will.