What, no kill-bots?

Kill-bot. You may have read a month or two ago about the rather scary robotic sentries which have been created: it seems they identify anything that moves and shoot it. Although it has a number of interesting implications, that technology does not seem an especially exciting piece of cybernetics. The BICA project (Biologically Inspired Cognitive Architectures) set up by DARPA (the people who, to all intents and purposes, brought us the Internet) is a very different kettle of fish.

The aim set out for the project was to achieve artificial cognition with a human level of flexibility and power, by bringing together the best of computational approaches and recent progress in neurobiology. Ultimately, of course, the application would be military, with robots and intelligent machines supporting or superseding human soldiers. In the first phase, a number of different sub-projects explored a range of angles. To my eyes there is a good deal of interesting stuff in the reports from this stage: if I had been sponsoring the project, I should have been afraid that each team would want to go on riding its own favourite hobby-horse to the exclusion of the project’s declared aims, but that does not seem to have happened.

In the second phase, the teams were to proceed to implementation, and the resulting machines were to compete against each other in a Cognitive Decathlon. Unfortunately, it seems there will be no second phase. No-one appears to know exactly why, but the project will go no further.

It could well be that the cancellation is the result of budget shifts within DARPA that have little or nothing to do with the project’s perceived worth. Another possibility is that the sponsors became uneasy with the basic idea of granting lethal hardware a mind of its own: the aim was to achieve the kind of cognition that allows the machine to cope with unexpected deviations from plan and make sensible new decisions on the fly; but that necessarily involves the ability to go out of control spontaneously. It could also be that someone realised how difficult moving from design to implementation was going to be. It has always been easy to run up a good-looking high-level architecture for cognition; the real problems tend to get tidied away into a series of black boxes. This might have been one project where it was a mistake to start with the overall design. The plasticity of the human brain, and the existence of other brain layouts in creatures such as squid, suggest that the overall layout may not matter all that much, or at least that a range of different designs would all be perfectly viable if you could get the underlying mechanisms right.

There is another basic methodological issue here, though. When you start a project, you need to know what you’re trying to build and what it’s supposed to do: but no-one can really give a clear answer to those questions so far as human cognition is concerned. The BICA project was likened by some to the Apollo moon landings: but although the moon trips were hugely challenging, it was always clear what needed to be delivered, and in broad terms, how.

But what is human cognition actually for? We can say fairly clearly what some of the sub-systems do: analyse input from the eyes, for example, or ensure that the sentences we utter hang together properly. But high-level cognition itself?

From an evolutionary perspective, cognition clearly helps us survive: but that could equally be said of almost every organ and function of a human being, so it doesn’t help us define the distinctive function of thought. Following the line adopted by DARPA we could plausibly say that cognition frees us from the grasp of our instincts, helping us to deal much more effectively with novel situations, and exploit opportunities which would otherwise be neglected. But that doesn’t really pin it down, either: the fact that thoughtful behaviour is different from instinctive, pre-programmed behaviour doesn’t distinguish it from random behaviour or inertia, and pointing out that it’s often more successful behaviour just seems to beg the question.

It seems to be important that human-level cognition allows us to address situations which have not in fact occurred; we can identify the dangerous consequences of a possible course of action without trying it out, and enable ‘our hypotheses to die in our stead’. Perhaps we could describe cognition as another sense, the sense of the possible: our eyes allow us to consider what is around us, but our thoughts allow us to consider what would or might be. It’s surely more than that, though, since our imagination allows us to consider the impossible and the fantastic just as readily as the possible. As a definition, moreover, it’s still not much use to a designer, not least because the very concept of possibility is highly problematic.

Perhaps after all we were getting closer to the truth with the purely negative point that thoughtful behaviour is not instinctive. When evolution endowed us with high-level cognition, she took an unprecedented gamble: that cutting us loose to some degree from self-interested behaviour would, in the end and overall, lead to better self-interested behaviour. The gamble, so far, appears to have paid off; but just as the kill-bots could choose alternative victims, or perhaps become pacifists, human beings can (and do) kill themselves or choose not to reproduce. Perhaps the distinctive quality of cognition is its free, gratuitous character: its point is that it is pointless. That doesn’t seem to be much help to an engineer either.

Anyway, I think I can wait a bit longer for the kill-bots; but it seems a shame that the project didn’t go on a bit further, and perhaps illuminate these issues.

4 thoughts on “What, no kill-bots?”

  1. For many years I worked for a U.S. military sub-contractor, and I learned that sometimes contracts are “mysteriously” cancelled because something better is actually being built behind the scenes. Case in point: the B-1 long range strategic bomber. The B-1 was the most advanced bomber in its day, and its development was a huge undertaking that spanned many years, cost billions of dollars, and employed thousands of people. Although some 240 B-1s were to be built, only 100 were actually assembled before the project was cancelled. Only later did the public learn that new stealth technology was being developed, quickly making B-1 technology obsolete. The B-2 stealth bomber—kept secret until its unveiling in 1988—had the radar signature of a hummingbird.

    Could it be that DARPA cancelled the BICA project because it has something better up its sleeve?

  2. That certainly makes sense, Norm, but my mind boggles slightly at what the ‘something better’ could be.

  3. I suspect you’re right that the difficulty of moving from design to implementation played a role. For example, here’s a paper from 2005 where they used genetic algorithms to attack the complexity of solving what would be a trivial case for a human: 3 unmanned aerial vehicles coordinating an attack on a fixed set of fewer than 10 targets.

    http://www.ece.osu.edu/~passino/PapersToPost/UAVTaskAssgnGA-COR.pdf
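
    To give a flavour of the idea, here’s a deliberately tiny sketch of my own (not the paper’s formulation; every position and parameter below is invented) of a genetic algorithm assigning a few UAVs to targets by minimising total flight distance:

        import random

        # Toy genetic algorithm for UAV-to-target assignment.
        # A genome maps each target index to a UAV index; fitness is the
        # total distance from each UAV's base to its assigned targets.
        NUM_UAVS = 3
        TARGETS = [(2, 7), (5, 1), (9, 4), (3, 3), (8, 8)]  # invented (x, y) positions
        UAV_BASES = [(0, 0), (0, 5), (0, 10)]               # invented launch points

        def dist(a, b):
            return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

        def cost(genome):
            return sum(dist(UAV_BASES[uav], TARGETS[t]) for t, uav in enumerate(genome))

        def mutate(genome, rate=0.2):
            # Randomly reassign some targets to a different UAV.
            return [random.randrange(NUM_UAVS) if random.random() < rate else g
                    for g in genome]

        def crossover(a, b):
            # Single-point crossover between two parent assignments.
            point = random.randrange(1, len(a))
            return a[:point] + b[point:]

        def evolve(pop_size=30, generations=100):
            pop = [[random.randrange(NUM_UAVS) for _ in TARGETS] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=cost)
                elite = pop[: pop_size // 2]  # keep the fitter half
                children = [mutate(crossover(*random.sample(elite, 2)))
                            for _ in range(pop_size - len(elite))]
                pop = elite + children
            return min(pop, key=cost)

        best = evolve()
        print("assignment:", best, "cost:", round(cost(best), 2))

    The real problem is far harder, of course: coordinated timing, threats and fuel constraints blow up the search space, which is exactly why heuristic methods like this get used at all.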

    While I admit to liking the technical challenges in such machine learning problems, the inevitable outcome of such research gives me pause.

    On a completely different tangent: in regard to structures of the human brain and its analogs in squid, etc., if you’ve never read von Neumann’s book “The Computer and the Brain” I highly recommend it. It’s prescient in its depth and breadth of insight and imagination, especially considering when he compiled the notes.

    Cheers,
    Joe
