Superfluous Consciousness?

Do we need robots to be conscious? Ryota Kanai thinks it is largely up to us whether the machines wake up – but he is working on it. I think his analysis is pretty good, and in fact we can push it a bit further.

His opening remarks, perhaps due to over-editing, don’t clearly draw the necessary distinction between Hard and Easy problems, or between subjective p-consciousness and action-related a-consciousness (I take these to be the same distinction, though not everyone would agree). Kanai talks about the unsolved mystery of experience, which he says is not a necessary by-product of cognition, yet he also says that consciousness must be a product of evolution. Hm. It’s p-consciousness, the ineffable, phenomenal business of what experience is like, that is profoundly mysterious, not a necessary by-product of cognition, and quite possibly nonsense. That kind of consciousness cannot in any useful sense be a product of evolution, because it does not affect my behaviour, as the classic zombie twin thought experiment explicitly demonstrates. A-consciousness, on the other hand, the kind involved in reflecting and deciding, absolutely does have survival value and certainly is a product of evolution, for exactly the reasons Kanai goes on to discuss.

The survival value of A-consciousness comes from the way it allows us to step back from the immediate environment; instead of responding to stimuli that are present now, we can respond to ones that were around last week, or even ones that haven’t happened yet; our behaviour can address complex future contingencies in a way that is both remarkable and powerfully useful. We can make plans, and we can work out what to do in novel situations (not always perfectly, of course, but we can do much better than just running a sequence of instinctive behaviour).

Kanai discusses what must be just about the most minimal example of this: our ability to wait three seconds before responding to a stimulus. Whether this should properly be regarded as requiring full consciousness is debatable, but I think he is quite right to situate it within a continuum of detached behaviour which, further along, includes reactions to very complex counterfactuals.

The kind he focuses on particularly is self-consciousness or higher-order consciousness; thinking about ourselves. We have an emergent problem, he points out, with robots whose reasons are hidden; increasingly we cannot tell why a complex piece of machine learning produced the behaviour it did. Why not get the robot to tell us, he says; why not enable it to report its own inner states? And if it becomes able to consider and explain its own internal states, won’t that be a useful facility which is also like the kind of self-reflecting consciousness that some philosophers take to be the crucial feature of the human variety?

There’s an immediate objection we might raise here, and a more general one. The really troublesome thing about machine learning is not that we don’t have access to the internal workings of the robot mind; it’s that in some cases there just is no explanation of the robot’s behaviour that a human being can understand. Getting the robot to report will be no better than trying to examine the state of the robot’s mind directly; in fact it’s worse, because it introduces a new step into the process, one where additional errors can creep in. Kanai describes a community of AIs, endowed with a special language that allows them to report their internal states to each other. It sounds awfully tedious, like a room full of people who, when asked ‘How are you?’, each respond with a detailed health report. Maybe that is quite human in a way after all.
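To make that worry concrete, here is a minimal sketch of what a self-reporting agent might look like. It is my own illustration, not Kanai’s design, and the names (ReportingAgent and so on) are invented; the thing to notice is that the report is itself a further computation, a summary that can omit or distort whatever actually drove the decision.

```python
# A minimal sketch of the self-reporting idea, in plain Python.
# The agent picks an action from weighted feature scores, then emits a
# structured report of the internal quantities behind that choice.
# The report is itself another computed artefact -- an extra step where
# errors and omissions can creep in, which is the worry raised above.

from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    scores: dict          # the internal state the agent is willing to expose

class ReportingAgent:
    def __init__(self, weights):
        self.weights = weights                    # e.g. {"move": {...}, "wait": {...}}

    def act(self, observation: dict) -> Decision:
        # Score each candidate action as a weighted sum of observed features.
        scores = {
            action: sum(w.get(f, 0.0) * v for f, v in observation.items())
            for action, w in self.weights.items()
        }
        best = max(scores, key=scores.get)
        return Decision(action=best, scores=scores)

    def report(self, decision: Decision) -> str:
        # The 'inner state' report that other agents (or humans) would read.
        # It summarises, rather than reproduces, the underlying computation.
        ranked = sorted(decision.scores.items(), key=lambda kv: -kv[1])
        return f"chose {decision.action}; ranking: {ranked}"

agent = ReportingAgent({"move": {"light": 1.0}, "wait": {"light": -0.5, "dark": 1.0}})
decision = agent.act({"light": 0.2, "dark": 0.9})
print(agent.report(decision))
```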

The more general theoretical objection (also a vaguer one, to be honest) is that, in my opinion at least, Kanai and the Higher Order Theory philosophers overstate the importance of being able to think about your own mental states. It is an interesting and important variety of consciousness, but I think it comes for free with a sufficiently advanced cognitive apparatus: once we can think about anything, then we can of course think about our thoughts.

So do we need robots to be conscious? I think conscious thought does two jobs for us that need to be considered separately, although they are in fact strongly linked. My own view is that consciousness is basically recognition. When we pull off that trick of waiting for three seconds before we respond to a stimulus, it is because we recognise the wait as a thing whose beginning is present now, and which can therefore be treated as another present stimulus. This one simple trick allows us to respond to future things and plan future behaviour in a way that would otherwise seem to contradict the basic principle that causes must come before their effects.
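Here is a toy illustration, on my reading of that trick: nothing in it responds to the future as such; the agent recognises the onset of the wait, and later the elapse of the delay, as facts that hold in the present. The class and its parameters are my own invention, not anything proposed by Kanai.

```python
# A toy version of the 'wait three seconds' trick described above.
# The agent never responds to a future event directly; it recognises the
# *onset* of the wait as a present stimulus, stores it as current state,
# and later recognises 'deadline reached' as another present stimulus.

import time

class DelayedResponder:
    def __init__(self, delay_seconds=3.0):
        self.delay = delay_seconds
        self.pending_since = None     # current state: 'a wait is in progress'

    def observe(self, stimulus_present):
        now = time.monotonic()
        if stimulus_present and self.pending_since is None:
            # Recognise 'the wait has begun' -- a fact about the present.
            self.pending_since = now
        if self.pending_since is not None and now - self.pending_since >= self.delay:
            self.pending_since = None
            return "respond"          # triggered by the present fact 'delay elapsed'
        return "hold"

responder = DelayedResponder(delay_seconds=0.2)   # short delay so the demo runs quickly
print(responder.observe(stimulus_present=True))   # 'hold' -- the wait has just begun
time.sleep(0.25)
print(responder.observe(stimulus_present=False))  # 'respond' -- the delay has elapsed
```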

The first job the trick does for us is to allow the planning of effective and complex actions to achieve a given goal. We might want a robot to be able to do that so it can acquire the same kind of effectiveness in planning and dealing with new situations that we have ourselves, a facility which to date has tended to elude robots because of the Frame Problem and other issues to do with the limitations of pre-programmed routines.

The second job is more controversial. Because action motivated by future contingencies has a more complex causal back-story, it looks a bit spooky, and it is the thing that confers on us the reality (or the illusion, if you prefer) of free will and moral responsibility. Because our behaviour comes from consideration of the future, it seems to have no roots in the past, and to originate in our minds. It is what enables us to choose ‘new’ goals for ourselves that are not merely the consequence of goals we already had. Now there is an argument that we don’t want robots to have that. We’ve got enough people around to originate basic goals and take moral responsibility; they are a dreadful pain already with all the moral and legal issues they raise, so adding a whole new category of potentially immortal electronic busybodies is arguably something best avoided. That probably means we can’t get robots to do job number one for us either; but that’s not so bad, because the strategies and plans which job one yields can always be turned into procedures after the fact and fed to ‘simple’ computers to run. We can, in fact, go on doing things the way we do them now: humans work out how to deal with a task and then give the robots a set of instructions, while we retain personhood, free will, agency and moral responsibility for ourselves.

There is quite a big potential downside, though; it might be that the robots, once conscious, would be able to come up with better aims and more effective strategies than we will ever be able to devise. By not giving them consciousness we might be permanently depriving ourselves of the best possible algorithms (and possibly some superior people, but that’s a depressing thought from a human point of view). True, but then I think that’s almost what we are on the brink of doing already. Kanai mentions European initiatives which may insist that computer processes come with an explanation that humans can understand; put into practice, such a rule would collide with processes that simply aren’t capable of explanation, and the effect would be to make certain optimal but inscrutable algorithms permanently illegal.

We could have the best of both worlds if we could devise a form of consciousness that did job number one for us without doing job two as an unavoidable by-product, but since in my view both jobs rest on acts of recognition of varying degrees of complexity, I don’t see at the moment how the two can be separated.

Political consciousness

Picture: Gladraeli. As Gilbert and Sullivan had it,

…every boy and every gal
That’s born into the world alive
Is either a little Liberal
Or else a little Conservative!

Now indeed it seems that right-wing brains are measurably different from left-wing ones.

Well, alright, it’s not as simple as that, but as this review (via) of an interesting piece of research reports, in one sample of young adults, self-reported political conservatives tended to have a larger right amygdala, whereas self-reported liberals tended to have a larger anterior cingulate cortex. Strictly speaking we are probably not entitled to deduce anything from this, but if we want to jump to conclusions we can assume that a larger amygdala is associated with greater levels of distrust and hostility, while a larger anterior cingulate cortex is associated with greater tolerance of conflict and uncertainty. It’s easy to imagine that a more distrustful (perhaps we should say ‘sceptical’) personality, with a more pessimistic view of human nature, might be associated with generally more right-wing views. I’m not so sure about the interpretation of the other finding, but I suppose a greater tolerance of conflict and uncertainty might be associated with less support for established authority and traditional mores, and so with a generally more leftish slant. I suppose we would expect, in line with the often-observed tendency to drift to the right as one ages, that the amygdala would swell and the anterior cingulate cortex shrink as time went by?
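One way to see why we are not entitled to deduce much is a small simulation with invented numbers (they are not taken from the study): even when two groups genuinely differ in average region size, the individual distributions can overlap so heavily that a scan tells you very little about any one person’s politics.

```python
# Why a group-level difference licenses very little about individuals.
# The means and spreads below are invented for illustration only.

import random

random.seed(0)

# Hypothetical 'right amygdala volume' scores for two self-reported groups,
# differing slightly in mean but with plenty of individual spread.
conservatives = [random.gauss(1.05, 0.15) for _ in range(1000)]
liberals      = [random.gauss(1.00, 0.15) for _ in range(1000)]

threshold = 1.025   # midpoint rule: 'bigger amygdala, so probably conservative'
correct = sum(v > threshold for v in conservatives) + sum(v <= threshold for v in liberals)

print(f"group means: {sum(conservatives)/1000:.3f} vs {sum(liberals)/1000:.3f}")
print(f"guessing politics from the scan alone: {correct/2000:.0%} correct")
# With this much overlap the guess is only modestly better than a coin toss,
# even though the difference between the group averages is real.
```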

It’s generally fairly plausible that political views correlate to at least some degree with personality traits. It’s often suggested that introversion tends to go with a more conservative outlook, for example. It’s certainly the case that political leanings are more about direction of travel than about specific policies: pretty well everyone alive now is a raving lefty in terms of the medieval political outlook (though even then there were outbursts of radicalism, of course). Conservative opinion which once would fight to the death defending the divine right of kings finds itself, four hundred years later, defending the mercantile individualist values it once regarded as the enemy. For that matter I’ve just been reading a few of Trollope’s novels, and it seems pretty clear that the Duke of Omnium’s coalition government would find its successor in Westminster today not just unacceptably but almost incomprehensibly left-wing, even though in theory the governments have a broadly similar mixture of conservative and liberal opinion.

It’s strange that Kanai’s research identifies two different measures of political leaning. It doesn’t seem to me that trusting and liking people is exactly the same thing as tolerating conflict. The existence of two independent axes suggests that there are actually four broad positions. As well as the die-hard right and the entrenched left, we have on the one hand people who favour hard-nosed policies but attach no value to authority and convention, and on the other people who prefer generous and supportive systems but want to combine them with traditional and conforming ways of behaving. Neither seems at all hard to imagine: I know people who would fit both those descriptions tolerably well. Perhaps our existing political set-up misses out on part of this geometry: if so, that might be a little worrying, because it would imply that there are problems and solutions which are really quite important but which remain partly invisible on our one-dimensional view.

If the normal right-left spectrum is so inadequate (and I think many people would say it is, and that there are really not just two axes involved, but several), how come we’re lumbered with it? It’s not that surprising that when one group is in power many of its opponents band together and sink their differences in the joint project of turning the rascals out; and when the tables are turned it’s the new opposition that benefits from the same effect. Over time it seems plausible this would lead to the crystallisation of two broad groupings; and the theme of those who have established wealth and power, and hence tend to like theoretical arguments against change, versus those who have less and so are predisposed to like reforming and revolutionary sentiments, seems well adapted to be the thread along which the parties ultimately take shape.

Fine for politics, but I can’t see how that would explain a similar dichotomy in the brain, unless there’s something more fundamental going on. Could there be some kind of game-theoretical account? Say, big-amygdala people do well out of their hard-nosed attitudes and reproduce successfully up to the point where they become a majority of the population; at that point the level of distrust undermines social cohesion and fully offsets the benefits to new ‘selfish’ entrants, so the advantage begins to accrue to the cingulate-cortexers, who can cope with a bit of dissent and disorder; they then do better until their generosity and trust begin to be exploited by a selfish minority, and so on until some kind of balance is reached.
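For what it’s worth, that hunch can be given rough shape with a toy frequency-dependent selection model in the hawk–dove style, run here with discrete replicator dynamics; the payoff numbers are arbitrary, chosen only so that each type does worse as its own kind becomes more common.

```python
# A toy frequency-dependent selection model of the idea sketched above:
# the 'distrustful' type does well while rare, but its advantage erodes as
# it becomes common, so the population settles at a mixed equilibrium.
# Payoff numbers are arbitrary and purely illustrative.

def payoffs(p_distrustful):
    """Average payoff to each type when a fraction p of the population is distrustful."""
    # Distrustful types exploit trusting ones, but do badly against each other.
    distrustful = 2.0 * (1 - p_distrustful) + 0.5 * p_distrustful
    # Trusting types cooperate well together, but are exploited when distrust is common.
    trusting = 1.5 * (1 - p_distrustful) + 0.8 * p_distrustful
    return distrustful, trusting

p = 0.1   # start with the distrustful type rare
for generation in range(200):
    w_d, w_t = payoffs(p)
    mean_w = p * w_d + (1 - p) * w_t
    p = p * w_d / mean_w          # discrete replicator dynamics

print(f"equilibrium share of the distrustful type: {p:.2f}")
```

With these particular numbers the share settles somewhere in the middle rather than at zero or one; the point is not the figure but the fact that frequency-dependent payoffs can hold both types in the population, which is all the speculation above requires.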

Maybe it’s not genetic, though: perhaps it has to do with birth order? It is said that first-born children tend to endorse authority and take on the role of parental deputy, while for subsequent children there are more rewards in being the first non-conformist than in being a mere second deputy. Perhaps these influences create differential growth in different parts of the brain (alas, no data on birth order)?

Having jumped repeatedly from one conclusion to another like an excitable frog, I find myself in an unfamiliar part of the pond, so I shall retreat, taking with me only the wild speculation that much of the theory and rhetoric of politics might in fact resemble the bizarre confabulations produced by some patients to give a superficial appearance of conscious volition to behaviour whose actual origins, in deep mental hard-wiring or cognitive deficits, are quite unknown to them.