Theory theory and other theories

Picture: Theory theory vs simulation theory. Mitchell Herschbach spoke up for folk psychology in the JCS recently, suggesting that while there was truth in what its critics had said, it could not be dispensed with altogether.

What is folk psychology anyway? It is widely accepted that one of our basic mental faculties is the ability to understand other people – to attribute motives and feelings to them and change our own behaviour accordingly. One of the recognised stages of child development is the dawning recognition that other people may not know what we know, and may believe something different. There is relatively little evidence of this ability in other animals, although I believe chimps, for example, have been known to keep their discovery of food quiet if they thought other chimps couldn’t see it. Commonly this ability to understand others is attributed to the possession of a ‘theory of mind’, or to the application of ‘folk psychology’, a set of commonsensical or intuitive rules about how people think and how their thinking affects their likely behaviour.

So far as I’m aware, no-one has managed to set out exactly what ‘folk psychology’ amounts to. An interesting comparison would be the attempt some years ago to define folk physics, or ‘naive physics’ as it was called – it was thought that artificial intelligence might benefit from being taught to think the way human beings think about the real world, ie in mainly pre-Newtonian if not pre-Galilean terms. It proved easy enough to lay down a few of the laws of folk physics – bodies in motion tend to slow down and stop; heavy things fall faster than light ones – but the elaboration of the theory ran into the sand for two reasons: the folk theory couldn’t really be made to work as a deductive system, and as it developed it became increasingly complex and strange, until the claim that it resembled our intuitive beliefs became rather hard to accept. I imagine a comprehensive account of folk psychology might run into similar problems.
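Just to make the comparison concrete, here is what a couple of those folk-physics laws look like if you actually try to write them down as explicit rules – a toy sketch of my own, with invented names and numbers, not anything drawn from the real naive-physics programme:

# Two 'laws' of folk physics written as explicit rules (illustrative only).

def folk_velocity_after(v, seconds, friction=0.5):
    # Folk rule: moving things slow down and stop of their own accord.
    return max(0.0, v - friction * seconds)

def folk_which_lands_first(weight_a, weight_b):
    # Folk rule: the heavier object is expected to hit the ground first.
    if weight_a == weight_b:
        return "both together"
    return "A first" if weight_a > weight_b else "B first"

print(folk_velocity_after(3.0, 10))    # 0.0 -- the object has 'stopped'
print(folk_which_lands_first(10, 1))   # A first

Even at this level the trouble is visible: the rules are easy to state one at a time, but they don’t combine into anything like a workable deductive system.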

Of course, ‘folk psychology’ could take various forms besides a rigorously stated axiomatic deductive system. Herschbach explains that one of the main divisions in the folk psychology camp is between the champions of theory theory (ie the idea that we really do use something relatively formal resembling a scientific theory) and simulation theory (you guessed it, the idea that we use a simulation of other people’s thought processes instead). Some, of course, are attracted by the idea that mirror neurons, those interesting cells that fire both when we perform action A and when we see action A performed by someone else, might have a role in providing a faculty of empathy which underpins our understanding of others. According to Herschbach the theory theorists and the simulation theorists have tended to draw together more recently, with most people accepting that the mind makes some use of both approaches.

However, the folk folk face a formidable enemy and a more fundamental attack. The ‘phenomenological’ critics of folk psychology think the whole enterprise is misguided; in order to guess what other people will do, we don’t need to go through the rigmarole of examining their behaviour, consulting a theory and working out what their inward mental states are likely to be, then using the same theory to extrapolate what they are likely to do. We can deal with people quite competently without ever having to consider explicitly what their inner thoughts might be. Instead we can use ‘online’ intelligence, the sort of unreflecting, immediate understanding of life which governs most of our everyday behaviour.

The classical way of investigating the ‘folk psychology’ of children involves false-belief experiments. An experimenter places a toy in box A in full view of the child and an accomplice. The accomplice leaves the room and the experimenter moves the toy to box B. Then the child is asked where the accomplice will look for the toy. Younger children expect the accomplice to look for the toy where it actually is, in box B; when they get a little older they realise that the accomplice didn’t see the transfer and therefore can be expected to have the false belief that the toy is still in box A. The child has demonstrated its ability to understand that other people have different beliefs which may affect their behaviour. Variations on this kind of test have been in use since Piaget at least.
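For what it’s worth, the logical structure of the test is simple enough to sketch in a few lines of code – again a toy illustration of my own, with the accomplice’s belief tracked separately from the actual state of the world:

# The false-belief setup in miniature (purely illustrative).

world = {"toy": "box A"}
accomplice_belief = {"toy": "box A"}   # the accomplice saw the toy placed

# The accomplice leaves; the move is never registered in their belief state.
world["toy"] = "box B"

def naive_prediction(world_state):
    # A younger child predicts the search from where the toy actually is.
    return world_state["toy"]

def mentalising_prediction(belief):
    # An older child predicts the search from the accomplice's false belief.
    return belief["toy"]

print(naive_prediction(world))                    # box B -- fails the test
print(mentalising_prediction(accomplice_belief))  # box A -- passes the test

The whole point, of course, is that passing requires keeping the belief state distinct from the state of the world.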

Ha! say the critics, but what’s going on here besides the main show? In addition to watching the accomplice and the toy, the child is going through complex interactions with the experimenter – obeying instructions, replying to questions and so on. Yet it would be absurd to maintain that they are carefully deducing the experimenter’s state of mind at every stage. They just do as they’re told. But if they can do all that without a theory of mind, why do they need one for the experiment? They just realise that people tend to look for things where they saw them last, without any nonsense about mental states.

Herschbach accepts that the case for ‘online’ understanding is good, but he argues, in brief, that the opponents of folk psychology don’t give sufficient attention to the real point of false-belief experiments. It may be that we don’t have to retreat into self-conscious offline meditation to deal with false beliefs, but isn’t it the case that online thinking is itself mentalistic to a degree?

It does seem unlikely that beliefs about other people’s beliefs can be banished altogether from our account of how we deal with each other. In fact, our tendency to attribute beliefs to people by way of explanation is so strong we automatically do it even with inanimate objects which we know quite well have no beliefs at all (“MS Word has some weird ideas about grammar”).

The problem, I think, is not that Herschbach is wrong, but that we seem to have ended up with a bit of a mess. In dealing with other people we may or may not use online or offline reasoning or some mixture of the two; our thinking may or may not be mentalistic to some degree, and it may or may not rely on a theory or a simulation or both. The brain is, of course, under no obligation to provide us with a simple, single module for dealing with people, but this tangle of possibilities is so complicated that we aren’t really left with any reliable insight at the end of it. Any mental faculty or way of thinking may, in an unpredictable host of different ways, be relevant to the way we understand each other. Alas (alas for philosophy of mind, anyway), I think that’s probably the truth.

Ethical kill-bots

Picture: Bender. Robot ethics have been attracting media attention again recently. Could autonomous kill-bots be made to behave ethically – perhaps even more ethically than human beings?

Can robots be ethical agents at all? The obvious answer is no, because they aren’t really agents; they don’t make any genuine decisions, they just follow the instructions they have been given. They really are ‘only following orders’, and unlike human beings, they have no capacity to make judgements about whether to obey the rules or not, and no moral responsibility for what they do.

On the other hand, the robots in question are autonomous to a degree. The current examples, so far as I know, are relatively simple, but it’s not impossible to imagine, at least, a robot which was bound by the interactions within its silicon only in the sort of way human beings are bound by the interactions within their neurons. After all it’s still an open debate whether we ourselves, in the final analysis, make any decisions, or just act in obedience to the laws of biology and physics.

The autonomous kill-bots certainly raise qualms of a kind which seem possibly moral in nature. We may find land-mines morally repellent in some sense (and perhaps the abandonment of responsibility by the person who places them is part of that), but a robot which actually picks out its targets and aims a gun seems somehow worse (or does it?). I think part of the reason is the deeply embedded conviction that commission is worse than omission; that we are more to blame for killing someone than for failing to save their life. This feels morally right, but philosophically it’s hard to argue for a clear distinction. Doctors apparently feel that injecting a patient in a persistent vegetative state with poison would be wrong, but that it’s OK to fail to provide the nutrition which keeps the patient alive: rationally it’s hard to explain what the material difference might be.

Suppose we had a more moral kind of land mine. It only goes off if heavy pressure is applied, so that it is unlikely to blow up wandering children, only heavy military vehicles. If anything, that seems better than the ordinary kind of mine; yet an automated machine gun which seeks out military targets on its own initiative seems somehow worse than a manual one. Rules which restrain seem good, while rules which allow the robot to kill people it could not have killed otherwise seem bad; unfortunately, it may be difficult to make the distinction. A kill-bot which picks out its own targets may be the result of giving new cybernetic powers to a gun which would otherwise sit idle, or it may be the result of imposing some constraints on a bot which otherwise shoots everything that moves.

In practice the real challenge arises from the need to deal with messy reality. A kill-bot can easily be given a set of rules of engagement, required to run certain checks before firing, and made to observe appropriate restraint. It will follow the rules more rigorously than a human soldier, show no fear, and readily risk its own existence rather than breach the rules of engagement. In these respects, it may be true that the robot can exceed the human in propriety.
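To show how straightforward the rule-following part is, here is a deliberately minimal sketch of the kind of pre-fire checks such a robot might run – every rule and field name is invented for illustration, not taken from any real system:

from dataclasses import dataclass

@dataclass
class Contact:
    identified_hostile: bool      # positively identified as a combatant?
    weapon_visible: bool          # carrying a visible weapon?
    inside_engagement_zone: bool  # within the authorised area?
    civilians_nearby: int         # estimated non-combatants in the blast radius

def may_engage(c: Contact) -> bool:
    # Fire only if every restraining condition is satisfied.
    return (c.identified_hostile
            and c.weapon_visible
            and c.inside_engagement_zone
            and c.civilians_nearby == 0)

print(may_engage(Contact(True, True, True, 0)))   # True
print(may_engage(Contact(True, True, True, 2)))   # False -- restraint applies

The difficulty, needless to say, lies not in evaluating checks like these but in filling in those fields correctly in the first place – which is the subject of the next paragraph.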

But practical morality comes in two parts; working out what principle to apply, and working out what the hell is going on. In the real world, even for human beings, the latter is the real problem more often than not. I may know perfectly clearly that it is my duty to go on defending a certain outpost until there ceases to be any military utility in doing so; but has that point been reached? Are the enemy so strong it is hopeless already? Will reinforcements arrive if I just hang on a bit longer? Have I done enough that my duty to myself now supersedes my duty to the unit? More fundamentally, is that the enemy, or a confused friendly unit, or partisans, or non-combatants? Are they trying to slip past, to retreat, to surrender, to annihilate me in particular? Am I here for a good reason, is my role important, or am I wasting my time, holding up an important manoeuvre, wasting ammunition? These questions are going to be much more difficult for the kill-bots to tackle.

Three things seem inevitable. There will be a growing number of people working on the highly interesting question of which algorithms produce, for any given set of computing and sensory limitations, the optimum ratio of dead enemies to saved innocents over a range of likely sets of circumstances. They will refer to the rules which emerge from their work as ethical, whether they really are or not. Finally, those algorithms will in turn condition our view of how human beings should behave in the same circumstances, and affect our real moral perceptions. That doesn’t sound too good, but again the issue is two-sided. Perhaps on some distant day the chief kill-bot, having absorbed and exhaustively considered the human commander’s instructions for an opportunistic war, will use the famous formula:

“I’m sorry, Dave. I’m afraid I can’t do that.”

Sorry November was a bit quiet, by the way. Most of my energy was going into my Nanowrimo effort – successful, I’m glad to say. Peter