What, no kill-bots?

Kill-bot. You may have read a month or two ago about the rather scary robotic sentries which have been created: it seems they identify anything that moves and shoot it. Although it has a number of interesting implications, that technology does not seem an especially exciting piece of cybernetics. The BICA project (Biologically Inspired Cognitive Architectures) set up by DARPA (the people who, to all intents and purposes, brought us the Internet) is a very different kettle of fish.

The aim set out for the project was to achieve artificial cognition with a human level of flexibility and power, by bringing together the best of computational approaches and recent progress in neurobiology. Ultimately, of course, the application would be military, with robots and intelligent machines supporting or superseding human soldiers. In the first phase, a number of different sub-projects explored a range of angles. To my eyes there is a good deal of interesting stuff in the reports from this stage: if I had been sponsoring the project, I should have been afraid that each team would want to go on riding its own favourite hobby-horse to the exclusion of the project’s declared aims, but that does not seem to have happened.

In the second phase, the teams were to proceed to implementation, and the resulting machines were to compete against each other in a Cognitive Decathlon. Unfortunately, it seems there will be no second phase. No-one appears to know exactly why, but the project will go no further.

It could well be that the cancellation is the result of budget shifts within DARPA that have little or nothing to do with the project’s perceived worth. Another possibility is that the sponsors became uneasy with the basic idea of granting lethal hardware a mind of its own: the aim was to achieve the kind of cognition that allows the machine to cope with unexpected deviations from plan, and make sensible new decisions on the fly; but that necessarily involves the ability to go out of control spontaneously. It could also be that someone realised how difficult moving from design to implementation was going to be. It has always been easy to run up a good-looking high-level architecture for cognition, with the real problems having a tendency to get tidied up into a series of black boxes. This might have been one project where it was a mistake to start with the overall design. The plasticity of the human brain, and the existence of other brain layouts in creatures such as squid, suggest that the overall layout may not matter all that much, or at least that a range of different designs would all be perfectly viable if you could get the underlying mechanisms right.

There is another basic methodological issue here, though. When you start a project, you need to know what you’re trying to build and what it’s supposed to do: but no-one can really give a clear answer to those questions so far as human cognition is concerned. The BICA project was likened by some to the Apollo moon landings: but although the moon trips were hugely challenging, it was always clear what needed to be delivered, and in broad terms, how.

But what is human cognition actually for? We can say fairly clearly what some of the sub-systems do: analyse input from the eyes, for example, or ensure that the sentences we utter hang together properly. But high-level cognition itself?

From an evolutionary perspective, cognition clearly helps us survive: but that could equally be said of almost every organ and function of a human being, so it doesn’t help us define the distinctive function of thought. Following the line adopted by DARPA we could plausibly say that cognition frees us from the grasp of our instincts, helping us to deal much more effectively with novel situations, and exploit opportunities which would otherwise be neglected. But that doesn’t really pin it down, either: the fact that thoughtful behaviour is different from instinctive, pre-programmed behaviour doesn’t distinguish it from random behaviour or inertia, and pointing out that it’s often more successful behaviour just seems to beg the question.

It seems to be important that human-level cognition allows us to address situations which have not in fact occurred; we can identify the dangerous consequences of a possible course of action without trying it out, and enable ‘our hypotheses to die in our stead’. Perhaps we could describe cognition as another sense, the sense of the possible: our eyes allow us to consider what is around us, but our thoughts allow us to consider what would or might be. It’s surely more than that, though, since our imagination allows us to consider the impossible and the fantastic just as readily as the possible. As a definition, moreover, it’s still not much use to a designer, not least because the very concept of possibility is highly problematic.

Perhaps after all we were getting closer to the truth with the purely negative point that thoughtful behaviour is not instinctive. When evolution endowed us with high-level cognition, she took an unprecedented gamble: that cutting us loose to some degree from self-interested behaviour would, in the end and overall, lead to better self-interested behaviour. The gamble, so far, appears to have paid off; but just as the kill-bots could choose alternative victims, or perhaps become pacifists, human beings can (and do) kill themselves or choose not to reproduce. Perhaps the distinctive quality of cognition is its free, gratuitous character: its point is that it is pointless. That doesn’t seem to be much help to an engineer either.

Anyway, I think I can wait a bit longer for the kill-bots; but it seems a shame that the project didn’t go on a bit further, and perhaps illuminate these issues.

What is this thing called love?

Masks of Comedy and Tragedy. Edge has excerpted the first chapter of Marvin Minsky’s book on emotions, The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind. I clearly need to read the whole book, but I found the excerpt characteristically thought-provoking. As you might have expected, the general approach builds on the Society of Mind: emotions, it seems, allow us to activate the right set of resources (their nature deliberately kept vague) from the diffuse cloud available to us. So emotions really serve to enhance our performance: in young or unsophisticated organisms they may be a bit rough and ready, but in adult human beings they are under a degree of rational control and are subject to a higher level of continuity and moderation.

At first sight, explaining the emotions by identifying the useful jobs they do for us seems a promising line of investigation. Since we are the products of evolution, it seems a good hypothesis to suppose that the emotional states we have developed must have some positive survival value. The evidence, moreover, seems to support the idea to some degree: anger, for example, corresponds with physiological states which help get us ready for fighting. Love between mates presumably helps to establish a secure basis for the production and care of offspring. The survival value of fear is obvious.

However, on closer examination things are not so clear as they might be. What could the survival value of grief be? It seems to be entirely negative. Its physiological manifestations range from the damaging (loss of concentration and determination) to the surreal (excessive water flowing from the tear-ducts down the face). Darwin himself apparently found the crying of tears ‘a puzzler’ with no practical advantages either to modern humans or to any imaginable ancestor, or any intelligible relationship to any productive function – what has rinsing out your eyes got to do with the death of a mate or child? If it comes to that, even anger is not an unalloyed benefit: a man in the grip of rage is not necessarily in the best state to win an argument, and it’s surely even debatable whether he’s more likely to win a fight than someone who remains rational and judicious enough to employ sensible tactics.

One of the thoughts the extract provoked in me, albeit at a tangent to what Minsky is saying, concerned another possible problem with evolutionary arguments. The physiological story about an increased pulse rate and the rest of it is one thing, but does all that have to be accompanied by feeling angry? Can’t we perhaps imagine going through all the right physiological changes to equip us for fighting, fleeing, or whatever other activity seems salient, without having any particular feelings about it?

This sounds like qualia. If emotions are feelings which are detachable from physical events, are they, in themselves, qualia? I’m not quite sure what the orthodox view of this is: I’ve read discussions which take emotions to be qualia, or accompanied by them, but the canonical examples of qualia – seeing red and the rest of it – are purely sensory. The comparison, at any rate, is interesting. In the case of sensory qualia there are three elements involved: an object in the external physical world, the mechanical process of registration by the senses, and the ineffable experience of the thing in our minds, where the actual redness or sounding or smelliness occurs. In the case of the emotions, there isn’t really any external counterpart (although you may be angry or in love with someone, you perceive the anger or love as your own, not as one of the other person’s qualities): the only objective correlate of an emotional quale is our own physiological state.

Are emotional zombies really possible? It’s widely though not universally believed that we could behave exactly the way we do – perhaps be completely indistinguishable from our normal selves – and yet lack sensory qualia altogether. An emotional zombie, along similar lines, would have to be a person whose breathing and pulse quickened, whose face blushed and voice turned hoarse, and who was objectively aware of these physiological manifestations, but actually felt no emotion whatever. I think this is still conceivable, but it seems a little stranger and harder to accept than the sensory case. In the case of an emotional zombie I should feel inclined to hypothesise a kind of split personality, supposing that the emotions were indeed being felt somewhere or in some sense, but that there was also a kind of emotionless passenger lodged in the same brain. I don’t feel similarly tempted, with sensory zombies, to suppose that real subjective redness must be going on in a separate zone of consciousness somewhere in the mind.

The same difference in plausibility is visible from a different angle. In the case of sensory qualia, we can worry about whether our colour vision might one day be switched, so that what previously looked blue now looks yellow, and vice versa. I suppose we can entertain the idea that Smith, when he goes red in the face, shouts and bangs the table, is feeling what we would call love; but it seems more difficult to think that our own emotions could somehow be switched in a similar way. The phenomenal experience of emotions just seems to have stronger ties to the relevant behaviour than phenomenal experience of colours, say, has to the relevant sensory operations.

It might be that this has something to do with the absence of an external correlate for emotions, which leaves us feeling more certain about them. We know our senses can mislead us about the external world, so we tend to distrust them slightly: in the case of emotions, there’s nothing external to be wrong about, and we therefore don’t see how we could really be wrong about our own emotions. Perhaps, not entirely logically, this accounts for a lesser willingness to believe in emotional zombies.

Or, just possibly, emotional qualia really are tied in some deeper way to volition. This is not so much a hypothesis as a gap where a hypothesis might be, since I should need a plausible account of volition before the idea could really take shape. But one thing in favour of this line of investigation is that it holds out some hope of explaining what the good of phenomenal experience really is, something lacking from most accounts. If we could come up with a good answer to that, our evolutionary arguments might gain real traction at last.

Too thin? Too rich?

Disappearing foot. Just before you read this sentence, were you consciously aware of your left foot? Eric Schwitzgebel set out to resolve the question in the latest issue of the JCS (Journal of Consciousness Studies).

In normal circumstances, we are bombarded by impressions from all directions. Our senses are constantly telling us about the sights, sounds and smells around us, and also about where our feet are, how they feel in their shoes, how hungry we currently feel, whether our sore calf muscle is any better at the moment; and what spurious reasoning some piece of text on the internet is trying to spin for us. But most of the time, most of this information is ignored. In some sense, it’s always there, but only the small subset of things which are currently receiving our attention are, as it were, brightly lit.

There’s little doubt about this basic scenario. Notoriously, when we drive along a familiar route, the details drop into the background of our mind and we start to think about something else. When we arrive at our destination, we may not remember anything much about the journey: but clearly we could see the road and hear the engine at all relevant times, or we probably shouldn’t have been able to finish the journey. On the other hand, suppose that, as we were driving along, the sound of a baby crying had unexpectedly drifted over from the back seat: would we have failed to notice that feature of the background?

So we have two (at least two) levels of awareness going on. Schwitzgebel poses the question: which do we regard as conscious? On the “thin” view, we’re only really conscious of the things we’re explicitly thinking about. No doubt the other stuff is in some sense available to consciousness, and no doubt bits of it can pop up into consciousness when they trigger some unconscious alarm; but it’s not actually in our consciousness. How else, the thinnists might ask, are we going to make the distinction between the two different levels? The rich view is that everything should be included: I may not be thinking about my foot at all times, but to suggest that I only know where it is subconsciously, or unconsciously, seems ridiculous.

Schwitzgebel does not think either side has particularly strong arguments. Both are inclined to provide examples, or assert their case, and expect the conclusion to seem obvious. Searle has argued that we couldn’t switch attention unless we were conscious of the thing we were switching our attention to; Mack and Rock have done experiments showing that while paying close attention to one thing we may fail to notice other things: but neither of these lines of discussion really seems to provide what you would call a knock-down case.

Accordingly, with many reservations, Schwitzgebel set up an experiment of his own. The subjects wore a beeper which went off at a random moment up to an hour after being set: they then recorded what they were conscious of immediately beforehand (it’s important, of course, to keep the delay minimal, otherwise the issue gets entangled with problems of memory). The subjects were divided into groups, each asked to focus on tactile, visual, or total sensory experience respectively.
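
For anyone who likes the procedure made concrete, here is a minimal sketch (in Python) of how such an experience-sampling schedule might be simulated. The one-hour window and the three reporting groups come from the description above; every name and detail in the code is my own illustration, not Schwitzgebel’s actual apparatus.

    import random

    # A hypothetical simulation of the beeper procedure: each subject is
    # assigned one reporting condition, the beeper fires at a random moment
    # within an hour of being set, and the subject immediately records what
    # they were conscious of just before the beep.

    CONDITIONS = ["tactile", "visual", "total"]  # the three reporting groups

    def schedule_beep(max_delay_minutes=60.0):
        """Return a random delay (in minutes) up to an hour after the beeper is set."""
        return random.uniform(0.0, max_delay_minutes)

    def collect_sample(subject_id, condition):
        """Simulate one sampling episode; the 'report' is only a placeholder here."""
        delay = schedule_beep()
        report = f"subject {subject_id}: {condition} experience just before the beep"
        return {"subject": subject_id,
                "condition": condition,
                "beep_after_minutes": round(delay, 1),
                "report": report}

    if __name__ == "__main__":
        # Divide a handful of imaginary subjects across the three conditions.
        for i in range(6):
            print(collect_sample(i, CONDITIONS[i % len(CONDITIONS)]))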

The results supported neither the thin nor the rich position. Perhaps the most interesting finding is the degree of surprise evoked in the subjects. In a departure from normal experimental method, Schwitzgebel used as subjects philosophy postgrads who could reasonably be expected to have some established prejudices in the field: he also spent time explaining the experiment and talking over the issues, and recorded whether each subject was a thinnist or richist at the start. Although this involved some risk of skewing the results, it allowed the discovery that thinnists actually often found themselves having rich experience, and vice versa.

Where does that leave us? It seems almost as though the dilemma is merely reinforced: the research points towards some compromise, but it’s hard to see where we can find room for one. The results did seem to reinforce the existing general agreement that there really are two distinct levels or aspects of consciousness at work. Wouldn’t one solution, then, be to give both neutral labels (con-1 and con-2?) and leave it at that? That may be what one of Schwitzgebel’s subjects, who apparently dismissed the whole thing as ‘linguistic’, had in mind. But it’s not a very comfortable position to dismiss the concept of consciousness in favour of two hazy new ones. Schwitzgebel, rightly, I think, considers that the difference between thinnism and richism is real and significant.

My best guess for a neatish answer is that we’re simply dealing with pure first-order consciousness and the same thing combined with second-order consciousness. In other words, the dim, constant awareness of everything being reported by our senses really is conscious, but it’s a region of consciousness we’re not conscious of being conscious of. By contrast, we’re not only conscious of the things at the forefront of our minds, we’re also aware of being conscious of them. (It might well be that second-order consciousness is what animals largely or wholly lack – I wonder if thinnists also tend to be sceptics about animal consciousness?)

Alas, that’s not really a compromise: it seems to make me a kind of richist.