
20 February 2005


Brittle robot

A recent paper (pdf format) by Michael L Anderson and Donald R Perlis offers a new solution to the problem of brittleness.  Brittleness, in a robot or program, is an inability to cope with unexpected developments, and the paper, no doubt rightly, says it is arguably the most important problem in Artificial Intelligence, perhaps in computer systems generally. Anderson and Perlis quote the example of a contestant in the DARPA Grand Challenge, in which robots had to negotiate their way through a real-world journey. The hapless specimen in question drove into a fence it could not see, and then continued attempting to move forward for the rest of the contest. Perhaps better sensors would have helped, but sullenly persisting in a course which isn't getting anywhere is poor behaviour for a supposedly intelligent machine. The proposed solution is the Metacognitive Loop; roughly speaking, a mechanism whereby the robot or program can notice whether or not it is achieving its goals. If not, it can then try something else, whether a carefully considered Plan B, or more or less random variations in behaviour, until it gets better results. 
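
To make the idea concrete, here is a very rough sketch (mine, not the authors') of what such a loop might look like in Python. The robot interface and the strategy names are invented for illustration; the point is simply the cycle of noting whether expected progress is happening, assessing when it isn't, and guiding behaviour onto another course rather than sullenly persisting.

```python
import random

# Hypothetical Plans A, B and C - placeholders, not anything from the paper.
STRATEGIES = ["drive_forward", "reverse_and_turn", "random_wander"]

def metacognitive_loop(robot, steps=1000, patience=5):
    """Keep acting, but notice when actions stop producing the expected progress."""
    strategy = STRATEGIES[0]
    stalled = 0
    for _ in range(steps):
        before = robot.distance_along_route()          # assumed robot interface
        robot.execute(strategy)                        # act on the current plan
        progress = robot.distance_along_route() - before
        stalled = stalled + 1 if progress <= 0 else 0  # note: was the expectation met?
        if stalled >= patience:                        # assess: this plan isn't working
            # Guide: try something else, whether a considered Plan B
            # or a more or less random variation.
            strategy = random.choice([s for s in STRATEGIES if s != strategy])
            stalled = 0
```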

Clearly what we are dealing with here is our old friend (or enemy) the frame problem: in fact, although they never use that phrase, Anderson and Perlis cite the paper by McCarthy and Hayes which first clearly identified the problem, and in some ways their work is in harmony with McCarthy's view that the problem would probably be resolved by new (non-monotonic) approaches to logic. 


The subject of consciousness notoriously exists at the junction of many different fields: psychology, philosophy, neurology and artificial intelligence to name just some. The frame problem, I think, is a leading example of how different specialists can have, and retain, a very different view of the same problem with surprisingly little cross-fertilisation. Since the issue first came to light, AI specialists have seen it as a persistent, annoying problem, but one capable of being dealt with tactically in the short term and by prudent measures in the longer run. Philosophers, in contrast, have tended to see it as a fundamental issue: one which requires completely new approaches or insights, or even rules out all possibility of artificial consciousness. Even Daniel Dennett, perhaps the philosopher most supportive of AI, has given a classic exposition ('Cognitive Wheels: The Frame Problem of AI') of his own version of the frame problem without, so far as I know, offering any solution.


It must be acknowledged that on the face of it the AI fraternity frequently seem to have the more reasonable view. Some philosophers present such a gloomy case that we seem almost obliged to disbelieve in human consciousness, never mind the artificial variety; and approaches like that of Anderson and Perlis seem extremely reasonable and pragmatic.

They identify three distinct problems with 'common sense' reasoning. One is 'slippage': the facts don't hold still over time, but are liable to change while you are reasoning about them, leaving you clinging to outdated conclusions. The second is the 'KR mismatch' problem, in which the representational conventions used impose unhelpful (but unavoidable) limitations on the ability to deal with circumstances which don't match them - especially those which give rise to novel expressions not catered for in the original scheme. The third issue is how to cope with contradictions. Contradictions in the system's data are inevitable, but making decisions in their presence is obviously problematic, especially for a system which operates by classical rules of inference, since in classical logic there is a valid inference from a contradiction to anything at all.
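
That last point is worth spelling out, since it drives much of what follows. In classical logic the step from a contradiction to an arbitrary conclusion Q (ex contradictione quodlibet) takes only two short moves:

```latex
\begin{align*}
  1.\;& P        && \text{premise}\\
  2.\;& \neg P   && \text{premise}\\
  3.\;& P \lor Q && \text{from 1, by disjunction introduction}\\
  4.\;& Q        && \text{from 2 and 3, by disjunctive syllogism}
\end{align*}
```

So a classical reasoner holding even one contradiction is, strictly speaking, entitled to conclude anything whatever.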


There are two steps in the proposed solution. First, the paper advocates 'active logic'. In this approach, instead of logical inferences taking place instantly (or in an eternal Platonic realm), they follow a chronological sequence. The conclusion comes a moment after the premises, not at the same moment. This has two benign consequences. The system is not obliged to consider all the implications of every piece of data - surely an infinite task in principle anyway - and it can tolerate contradictory premises because it may not happen to be thinking about the two sides of the contradiction at any given moment. Where there is a real problem, the system will eventually derive a direct contradiction, which then serves as a useful warning that something is going wrong.
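
A toy illustration may help; this is my own sketch of the flavour of the idea, not the authors' system. Facts are crude string literals, each round of inference takes one time-step, and a direct contradiction, when it eventually surfaces, is flagged as a warning rather than licensing arbitrary conclusions:

```python
def active_logic(facts, rules, steps=10):
    """facts: set of literals, e.g. {"raining", "not wet"};
    rules: (premise, conclusion) pairs applied one step at a time."""
    known = set(facts)
    for t in range(1, steps + 1):
        # Conclusions arrive one step after their premises, not instantly,
        # so the knowledge base never has to be fully worked out at once.
        new = {conc for prem, conc in rules if prem in known} - known
        if not new:
            break
        known |= new
        clashes = {p for p in known if f"not {p}" in known}
        if clashes:
            # A direct contradiction is a warning sign, not a licence to infer anything.
            print(f"step {t}: contradiction over {clashes}; these beliefs need re-examining")
    return known

# active_logic({"raining", "not wet"}, [("raining", "wet")])
# derives "wet" at step 1 and only then notices the clash with "not wet".
```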

This part of the argument seems dubious. In the first place, if contradictions are signs of a problem, allowing them to remain unresolved in the heart of the system is surely dangerous; by the time a direct, explicit contradiction is noticed, the damage may have been done. Second, is it actually safe to assume that a direct contradiction is necessarily a problem? In ordinary thought that may be so, but in formal logic contradictions are a useful if not indispensable part of the process of reasoning: without them many proofs are going to be hard to derive.
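
The classic pattern is reductio ad absurdum: deliberately entertain an assumption, derive a contradiction from it, and thereby earn the right to reject it:

```latex
\begin{align*}
  &\text{Assume } \neg P \text{ (for the sake of argument)}\\
  &\;\;\vdots\\
  &Q \land \neg Q && \text{a contradiction follows}\\
  &\therefore P   && \text{so the assumption } \neg P \text{ must be rejected}
\end{align*}
```

A system that treated every derived contradiction purely as an error signal would cut itself off from this style of argument.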


The second step is the metacognitive loop, which seems unassailably sensible. The basic architecture is composed of two modules, one training the other, with a third overseeing both. This third module is what does the monitoring of performance against expectation, and steps in with new strategies and retraining where necessary.
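
In outline (the module names here are mine, not the paper's) the arrangement is something like this:

```python
class Performer:
    """The module that actually acts, using whatever policy it currently has."""
    def act(self, observation): ...

class Trainer:
    """The module that adjusts the performer in the light of recent experience."""
    def retrain(self, performer, experience): ...

class Monitor:
    """The overseer: compares expectation with outcome and orders retraining."""
    def __init__(self, trainer, tolerance):
        self.trainer, self.tolerance = trainer, tolerance

    def review(self, expected, observed, performer, experience):
        if abs(expected - observed) > self.tolerance:    # performance has drifted
            self.trainer.retrain(performer, experience)  # step in with a new strategy
```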

The authors propose three potential projects which the metacognitive loop might enable: a robotic St Bernard, able to render basic assistance in unpredictable emergency situations; a Mars explorer, which might be able to cope well with finding things its designers had not expected; and finally a GePurS - General Purpose Scout, designed to operate in many different environments, with a large reservoir of experience, much of it redundant in any given situation, but lending a particular flexibility and generality to its cognitive abilities. Indeed, such a creature would surely be intelligent on an entirely new level, comparable with sophisticated animals or even human beings. They don't, of course, predict the imminent achievement of such high-grade performance.


So, is the road ahead clear? It is interesting to compare the example of the misguided DARPA challenge robot with the one described in Dennett's essay. Dennett's robot is set to retrieve its spare battery from a room where a bomb is due to go off, but singularly fails to take account of the known fact that the bomb is on the same trolley as the battery. The DARPA robot fails its task in a clear way - it isn't moving towards its goal - whereas Dennett's unfortunate robot actually succeeds in getting the battery out: failure arrives out of a blue sky from an unforeseen factor - the fact that the bomb has come along for the ride. It's hard not to feel that metacognitive loops might help the DARPA victim but wouldn't do much for Dennett's robot. If you have a module monitoring success, it has to be able to recognise success when it sees it, and this is not a trivial task. Even the DARPA robot may have had, for example, a speedometer which told it its wheels were revolving in fine style, oblivious of the fact that they weren't moving it anywhere. So long as your robot relies on logic, the only way for it to assess success is to perform a number of tests set up by the designer: but that means the designer has to anticipate everything which might go wrong - the very problem we started with in the first place.
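
To put the worry in concrete terms (a made-up example, not anything from the paper): a monitoring module can only apply the tests its designer thought to give it, and the choice of test is everything.

```python
def naive_success_check(telemetry):
    # The designer anticipated "wheels not turning" as the failure mode...
    return telemetry["wheel_rpm"] > 0

def better_success_check(telemetry):
    # ...but the actual failure was "wheels turning, robot going nowhere",
    # which only a different, equally designer-chosen test would catch.
    return telemetry["displacement_m"] > 0.1

stuck_at_fence = {"wheel_rpm": 250, "displacement_m": 0.0}
print(naive_success_check(stuck_at_fence))   # True:  the loop sees no problem
print(better_success_check(stuck_at_fence))  # False: but only because we foresaw this case
```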


The relative weakness of formal logic as a tool for reasoning about the world is, of course, well known - and known to Anderson and Perlis, whose proposals are designed to extend its efficacy. But the AI practitioners seem always to under-rate the severity of the problem, perhaps because, when all's said and done, formal logic is still the only real tool available. Aaron Sloman, long ago, argued very persuasively that some kind of analogical reasoning is needed to make good the deficits of old-fashioned logic. There are many problems attached to processes based on analogies, and the development of a suitable formalism is itself a daunting task, requiring a breakthrough of considerable proportions. Nevertheless, it's hard not to feel that he was, broadly at least, on to something.

