Theory theory and other theories

Mitchell Herschbach spoke up for folk psychology in the JCS recently, suggesting that while there was truth in what its critics had said, it could not be dispensed with altogether.

What is folk psychology anyway? It is widely accepted that one of our basic mental faculties is the ability to understand other people – to attribute motives and feelings to them and change our own behaviour accordingly. One of the recognised stages of child development is the dawning recognition that other people may not know what we know, and may believe something different. There is relatively little evidence of this ability in other animals, although I believe chimps, for example, have been known to keep their discovery of food quiet if they thought other chimps couldn’t see it. Commonly this ability to understand others is attributed to the possession of a ‘theory of mind’, or to the application of ‘folk psychology’, a set of commonsensical or intuitive rules about how people think and how that affects their likely behaviour.

So far as I’m aware, no-one has managed to set out exactly what ‘folk psychology’ amounts to. An interesting comparison would be the attempt some years ago to define folk physics, or ‘naive physics’ as it was called – it was thought that artificial intelligence might benefit from being taught to think the way human beings think about the real world, ie in mainly pre-Newtonian if not pre-Galilean terms. It proved easy enough to lay down a few of the laws of folk physics – bodies in motion tend to slow down and stop; heavy things fall faster than light ones – but the elaboration of the theory ran into the sand for two reasons: the folk theory couldn’t really be made to work as a deductive system, and as it developed it became increasingly complex and strange, until the claim that it resembled our intuitive beliefs became rather hard to accept. I imagine a comprehensive account of folk psychology might run into similar problems.

Of course, ‘folk psychology’ could take various forms besides a rigorously stated axiomatic deductive system. Herschbach explains that one of the main divisions in the folk psychology camp is between the champions of theory theory (ie the idea that we really do use something relatively formal resembling a scientific theory) and simulation theory (you guessed it, the idea that we use a simulation of other people’s thought processes instead). Some, of course, are attracted by the idea that mirror neurons, those interesting cells that fire both when we perform action A and when we see action A performed by someone else, might have a role in providing a faculty of empathy which underpins our understanding of others. According to Herschbach the theory theorists and the simulation theorists have tended to draw together more recently, with most people accepting that the mind makes some use of both approaches.

However, the folk folk face a formidable enemy and a more fundamental attack. The ‘phenomenological’ critics of folk psychology think the whole enterprise is misguided; in order to guess what other people will do, we don’t need to go through the rigmarole of examining their behaviour, consulting a theory and working out what their inward mental states are likely to be, then using the same theory to extrapolate what they are likely to do. We can deal with people quite competently without ever having to consider explicitly what their inner thoughts might be. Instead we can use ‘online’ intelligence, the sort of unreflecting, immediate understanding of life which governs most of our everyday behaviour.

The classical way of investigating the ‘folk psychology’ of children involves false-belief experiments. An experimenter places a toy in box A in full view of the child and an accomplice. The accomplice leaves the room and the experimenter moves the toy to box B. Then the child is asked where the accomplice will look for the toy. Younger children expect the accomplice to look for the toy where it actually is, in box B; when they get a little older they realise that the accomplice didn’t see the transfer and therefore can be expected to have the false belief that the toy is still in box A. The child has demonstrated its ability to understand that other people have different beliefs which may affect their behaviour. Variations on this kind of test have been in use since Piaget at least.
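To make the structure of the task explicit, here is a minimal sketch in Python. It is purely illustrative, and assumes a toy model in which an observer’s belief about the toy is simply the last location they actually witnessed it in:

```python
# A toy model of the false-belief task described above. Assumption for
# illustration only: an observer's belief about the toy's location is just
# the last location they witnessed it in.

def false_belief_task():
    toy_location = "box A"
    beliefs = {"child": "box A", "accomplice": "box A"}  # both saw the toy placed

    # The accomplice leaves the room; only the child sees the move.
    toy_location = "box B"
    beliefs["child"] = "box B"        # the accomplice's belief is not updated

    mentalising_answer = beliefs["accomplice"]   # older child: tracks the belief
    naive_answer = toy_location                  # younger child: reports reality
    return mentalising_answer, naive_answer

print(false_belief_task())   # ('box A', 'box B')
```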

Ha! say the critics, but what’s going on here besides the main show? In addition to watching the accomplice and the toy, the child is going through complex interactions with the experimenter – obeying instructions, replying to questions and so on. Yet it would be absurd to maintain that they are carefully deducing the experimenter’s state of mind at every stage. They just do as they’re told. But if they can do all that without a theory of mind, why do they need one for the experiment? They just realise that people tend to look for things where they saw them last, without any nonsense about mental states.

Herschbach accepts that the case for ‘online’ understanding is good, but, in brief, he thinks that the opponents of folk psychology don’t give sufficient attention to the real point of false-belief experiments. It may be that we don’t have to retreat into self-conscious offline meditation to deal with false beliefs, but isn’t it the case that online thinking is itself mentalistic to a degree?

It does seem unlikely that beliefs about other people’s beliefs can be banished altogether from our account of how we deal with each other. In fact, our tendency to attribute beliefs to people by way of explanation is so strong we automatically do it even with inanimate objects which we know quite well have no beliefs at all (“MS Word has some weird ideas about grammar”).

The problem, I think, is not that Herschbach is wrong, but that we seem to have ended up with a bit of a mess. In dealing with other people we may or may not use online or offline reasoning or some mixture of the two; our thinking may or may not be mentalistic in some degree, and it may or may not rely on a theory or a simulation or both. The brain is, of course, under no obligation to provide us with a simple, single module for dealing with people, but this tangle of possibilities is so complicated that we don’t really seem to be left with any reliable insight at the end of it. Any mental faculty or way of thinking may, in an unpredictable host of different ways, be relevant to the way we understand each other. Alas (alas for philosophy of mind, anyway), I think that’s probably the truth.

Ethical kill-bots

Robot ethics have been attracting media attention again recently. Could autonomous kill-bots be made to behave ethically – perhaps even more ethically than human beings?

Can robots be ethical agents at all? The obvious answer is no, because they aren’t really agents; they don’t make any genuine decisions, they just follow the instructions they have been given. They really are ‘only following orders’, and unlike human beings, they have no capacity to make judgements about whether to obey the rules or not, and no moral responsibility for what they do.

On the other hand, the robots in question are autonomous to a degree. The current examples, so far as I know, are relatively simple, but it’s not impossible to imagine, at least, a robot which was bound by the interactions within its silicon only in the sort of way human beings are bound by the interactions within their neurons. After all it’s still an open debate whether we ourselves, in the final analysis, make any decisions, or just act in obedience to the laws of biology and physics.

The autonomous kill-bots certainly raise qualms of a kind which seem possibly moral in nature. We may find land-mines morally repellent in some sense (and perhaps the abandonment of responsibility by the person who places them is part of that) but a robot which actually picks out its targets and aims a gun seems somehow worse (or does it?). I think part of the reason is the deeply embedded conviction that commission is worse than omission; that we are more to blame for killing someone than for failing to save their life. This feels morally right, but philosophically it’s hard to argue for a clear distinction. Doctors apparently feel that injecting a patient in a persistent vegetative state with poison would be wrong, but that it’s OK to fail to provide the nutrition which keeps the patient alive: rationally it’s hard to explain what the material difference might be.

Suppose we had a more moral kind of land mine. It only goes off if heavy pressure is applied, so that it is unlikely to blow up wandering children, only heavy military vehicles. If anything, that seems better than the ordinary kind of mine; yet an automated machine gun which seeks out military targets on its own initiative seems somehow worse than a manual one. Rules which restrain seem good, while rules which allow the robot to kill people it could not have killed otherwise seem bad; unfortunately, it may be difficult to make the distinction. A kill-bot which picks out its own targets may be the result of giving new cybernetic powers to a gun which would otherwise sit idle, or it may be the result of imposing some constraints on a bot which otherwise shoots everything that moves.

In practice the real challenge arises from the need to deal with messy reality. A kill-bot can easily be given a set of rules of engagement, required to run certain checks before firing, and made to observe appropriate restraint. It will follow the rules more rigorously than a human soldier, show no fear, and readily risk its own existence rather than breach the rules of engagement. In these respects, it may be true that the robot can exceed the human in propriety.
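To illustrate how little is involved in the easy half of this, here is a minimal sketch in which every field and check is hypothetical, invented purely for illustration. Encoding the rules of engagement as explicit conditions is trivial; all the difficulty hides in how the inputs get their values, which is the subject of the next paragraph:

```python
# A minimal sketch of rules-of-engagement checks. Every name and rule here
# is hypothetical; the point is that the checking logic is the easy part,
# while establishing whether the inputs are actually true is the hard part.

from dataclasses import dataclass

@dataclass
class Situation:
    target_is_combatant: bool      # how would the robot really know this?
    civilians_at_risk: bool
    engagement_authorised: bool
    force_proportionate: bool

def may_engage(s: Situation) -> bool:
    """Permit engagement only if every rule in the checklist is satisfied."""
    return (s.target_is_combatant
            and s.engagement_authorised
            and s.force_proportionate
            and not s.civilians_at_risk)

print(may_engage(Situation(True, False, True, True)))   # True
print(may_engage(Situation(True, True, True, True)))    # False: civilians at risk
```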

But practical morality comes in two parts; working out what principle to apply, and working out what the hell is going on. In the real world, even for human beings, the latter is the real problem more often than not. I may know perfectly clearly that it is my duty to go on defending a certain outpost until there ceases to be any military utility in doing so; but has that point been reached? Are the enemy so strong it is hopeless already? Will reinforcements arrive if I just hang on a bit longer? Have I done enough that my duty to myself now supersedes my duty to the unit? More fundamentally, is that the enemy, or a confused friendly unit, or partisans, or non-combatants? Are they trying to slip past, to retreat, to surrender, to annihilate me in particular? Am I here for a good reason, is my role important, or am I wasting my time, holding up an important manoeuvre, wasting ammunition? These questions are going to be much more difficult for the kill-bots to tackle.

Three things seem inevitable. There will be a growing number of people working on the highly interesting question of which algorithms produce, for any given set of computing and sensory limitations, the optimum ratio of dead enemies and saved innocents over a range of likely sets of circumstances. They will refer to the rules which emerge from their work as ethical, whether they really are or not. Finally, those algorithms will in turn condition our view of how human beings should behave in the same circumstances, and affect our real moral perceptions. That doesn’t sound too good, but again the issue is two-sided. Perhaps on some distant day the chief kill-bot, having absorbed and exhaustively considered the human commander’s instructions for an opportunistic war, will use the famous formula:

“I’m sorry, Dave. I’m afraid I can’t do that.”

Sorry November was a bit quiet, by the way. Most of my energy was going into my Nanowrimo effort – successful, I’m glad to say. Peter

Is intentionality non-computable?

I undertook to flesh out my intuitive feeling that intentionality might in fact be a non-computable matter. It is a feeling rather than an argument, but let me set it out as clearly (and appealingly) as I can.

First, what do I mean by non-computability? I’m talking about the standard, uncontroversial variety of non-computability exhibited by the halting problem and various forms of tiling problem. The halting problem concerns computer programs. If we think about all possible programs, some run for a while and stop, while others run on forever (if for example there’s a loop which causes the program to run through the same instructions over and over again indefinitely). The question is, is there any computational procedure which can tell which is which? Is there a program which, when given any other program, can tell whether it halts or not? The answer is no; it was famously proved by Turing that there is no such program, and that the problem is therefore undecidable or non-computable.
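For readers who haven’t met it, the shape of Turing’s argument can be sketched in a few lines of Python. Nothing here runs in earnest – halts is precisely the procedure that cannot exist – but the contradiction is the whole point:

```python
# Sketch of the halting-problem argument. Suppose, for contradiction, that a
# fully general halts(program, data) existed and always answered correctly.

def halts(program, data) -> bool:
    """Hypothetical oracle: True if program(data) eventually stops."""
    raise NotImplementedError("No such general procedure can exist.")

def contrary(program):
    # Do the opposite of whatever the oracle predicts about a program
    # run on its own source.
    if halts(program, program):
        while True:       # predicted to halt, so loop forever
            pass
    else:
        return            # predicted to loop forever, so halt immediately

# Does contrary(contrary) halt? If the oracle says yes, contrary loops;
# if it says no, contrary halts. Either way the oracle is wrong, so no
# correct, fully general halts() can be written.
```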

Some important qualifications should be mentioned. First, programs that stop can be identified computationally; you just have to run them and wait long enough. The problem arises with programs that don’t halt; there is no general procedure by which we can identify them all. However, second, it’s not the case that we can never identify a non-stopping program; some are obvious. Moreover, when we have identified a particular non-stopping program, we can write programs designed to spot that particular kind of non-stopping. I think this was the point Derek was making in his comment on the last post, when he asked for an example of a human solution that couldn’t be simulated by computer; there is indeed no example of human discovery that couldn’t be simulated – after the fact. But that’s a bit like the blind man who claims he can find his way round a strange town just as well as anyone else; he just needs to be shown round first. We can actually come up with programs that are pretty good at spotting non-stopping programs for practical purposes; but never one that spots them all.
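The practical compromise can also be sketched: a detector that runs a program with a budget, answers ‘halts’ when it stops, answers ‘loops’ when it can prove a loop (here, by seeing an exact repeat of state in a deterministic toy model), and otherwise honestly gives no verdict. Modelling programs as step functions over states is an assumption made purely for illustration:

```python
# A partial halting detector: useful in practice, necessarily incomplete.

def probably_halts(step, state, max_steps=100_000):
    """step: function from state to next state; None means 'halted'.

    Returns True if the program halts within the budget, False if an exact
    state repeats (a provable infinite loop for a deterministic program),
    and None if the budget runs out -- the honest 'no verdict' answer.
    """
    seen = set()
    for _ in range(max_steps):
        if state is None:
            return True
        if state in seen:
            return False
        seen.add(state)
        state = step(state)
    return None

# A counter that halts at 10, versus one that oscillates forever.
print(probably_halts(lambda n: None if n >= 10 else n + 1, 0))   # True
print(probably_halts(lambda n: (n + 1) % 2, 0))                  # False
```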

Tiling problems are really an alternative way of looking at the same issue. The problem here is, given a certain set of tiles, can we cover a flat plane with them indefinitely without any gaps? The original tiles used for this kind of problem were squares with coloured edges, and an additional requirement was that colours must be matched where tiles met. At first glance, it looks as though different sets of tiles would fall into two groups: those that don’t tile the plane at all, because the available combinations of colours can’t be made to match up satisfactorily without gaps; and those that tile it with a repeating pattern. But this is not the case; in fact there are sets of tiles which will tile the plane, but only in such a way that the pattern never repeats. The early sets of tiles with this property were rather complex, but later Roger Penrose devised a non-square set which consists of only two tiles.
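The edge-matching version (Wang tiles) is easy to state in code, and a brute-force search over a finite patch shows both what the problem asks and why finite checks can refute a tile set but never settle the infinite plane. The tile encoding below is just one convenient convention:

```python
# Wang-tile sketch: a tile is (top, right, bottom, left) edge colours.
# tiles_patch asks whether an n-by-n patch can be filled with matching edges.
# Failing on a patch rules a set out; succeeding never settles the full plane.

def tiles_patch(tiles, n):
    grid = [[None] * n for _ in range(n)]

    def fits(tile, r, c):
        top, right, bottom, left = tile
        if r > 0 and grid[r - 1][c][2] != top:    # bottom edge of tile above
            return False
        if c > 0 and grid[r][c - 1][1] != left:   # right edge of tile to the left
            return False
        return True

    def place(i):
        if i == n * n:
            return True
        r, c = divmod(i, n)
        for tile in tiles:
            if fits(tile, r, c):
                grid[r][c] = tile
                if place(i + 1):
                    return True
                grid[r][c] = None
        return False

    return place(0)

# One tile with identical edges trivially tiles any patch...
print(tiles_patch([("red", "red", "red", "red")], 4))                  # True
# ...while two tiles whose colours never agree fail even on a 2x2 patch.
print(tiles_patch([("a", "a", "b", "b"), ("c", "c", "d", "d")], 2))    # False
```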

The existence of such ‘non-periodic’ tilings is the fly in the ointment which essentially makes it impossible to come up with a general algorithm for deciding whether or not a given set of tiles will tile the plane. Again, we can spot some that clearly don’t, some that obviously do, and indeed some that demonstrably only do so non-periodically; but there is no general procedure that can deal with all cases.

I mentioned Roger Penrose; he, of course, has suggested that the mathematical insight or creativity which human beings use is provably a non-computable matter, and I believe it was the contemplation of how the human brain manages to spot whether a particular tricky set of tiles will tile the plane that led to this view (that’s not to say that human brains have an infallible ability to tell whether sets of tiles tile the plane, or computations halt). Penrose suggests that mathematical creativity arises in some way from quantum interactions in microtubules; others disagree with his theory entirely, arguing, for example, that the brain just has a very large set of different good algorithms which when deployed flexibly or in combination look like something non-computational.

I should like to take a slightly different tack. Let’s consider the original frame problem. This was a problem for AI dealing with dynamic environments, where the position of objects, for example, might change. The program needed to keep track of things, so it needed to note when some factor had changed. It turned out, however, that it also needed to note all the things that hadn’t changed, and the list of things to be noted at every moment could rapidly become unmanageable. Daniel Dennett, perhaps unintentionally, generalised this into a broader problem where a robot was paralysed by the combinatorial explosion of things to consider or to rule out at every step.
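The flavour of the original difficulty is easy to reproduce. In the naive formulation, every action obliges the reasoner to assert not only what changed but everything that did not – the so-called frame axioms – and the non-changes swamp the changes as the world grows. A toy version, with invented names throughout:

```python
# A toy illustration of the frame problem: one change, and a mountain of
# explicit 'nothing happened to this' assertions to go with it.

facts = {f"object_{i}_position": i for i in range(1000)}   # a small toy world

def apply_action(state, changed_key, new_value):
    """Apply one change and return the frame axioms a naive explicit
    reasoner would have to assert alongside it."""
    new_state = dict(state)
    new_state[changed_key] = new_value
    unchanged = [k for k in state if k != changed_key]
    return new_state, unchanged

state, frame_axioms = apply_action(facts, "object_42_position", 999)
print(f"changed: 1, explicitly asserted as unchanged: {len(frame_axioms)}")
# And this is before considering relations between facts, or chains of
# actions, where the bookkeeping multiplies further.
```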

Aren’t these problems in essence a matter of knowing when to stop, of being able to dismiss whole regions of possibility as irrelevant? Could we perhaps say the same of another notorious problem of cognitive science – Quine’s famous problem of the indeterminacy of radical translation? We can never be sure what the word ‘Gavagai’ means, because the list of possible interpretations goes on forever. Yes, some of the interpretations are obviously absurd – but how do we know that? Isn’t this, again, a question of somehow knowing when to stop, of being able to see that the process of considering whether ‘Gavagai’ means ‘rabbit or more than two mice’, ‘rabbit or more than three mice’ and so on isn’t suddenly going to become interesting?

Quine’s problem bears fairly directly on the problem of meaning, since the ability to see the meaning of a foreign word is not fundamentally different from the ability to see the meaning of words per se. And it seems to me a general property of intentionality, that to deal with it we have to know when to stop. When I point, the approximate line from my finger sweeps out an indefinitely large volume of space, and in principle anything in there could be what I mean; but we immediately pick out the salient object, beyond which we can tell the exploration isn’t going anywhere worth visiting.

The suggestion I wanted to clarify, then, is that the same sort of ability to see where things are going underlies both our creative capacity to spot instances of programs that don’t halt, or sets of tiles that cover the plane, and our ability to divine meanings and deal with intentionality. This would explain why computers have never been able to surmount their problems in this area and remain in essence as stolidly indifferent to real meaning as machines that never manipulated symbols.

Once again, I’m not suggesting that humans are infallible in dealing with meaning, nor that algorithms are useless. By providing scripts and sets of assumptions, we can improve the performance of AI in variable circumstances; by checking the other words in a piece of text, we can improve the ability of programs to do translation. But even if we could bring their performance up to a level where it superficially matched that of human beings, it seems there would still be radically different processes at work, processes that look non-computational.

Such is my feeling, at least; I certainly have no proof and no idea how the issue could even be formalised in a way that rendered it susceptible to proof. I suppose being difficult to formalise rather goes with the territory.

Whole Brain Emulation

Robots.net recently featured the Whole Brain Emulation Roadmap (pdf) produced by the Future of Humanity Institute at Oxford University. The Future of Humanity Institute has a transhumanist tinge which I find slightly off-putting, and it does seem to include fiction among its inspirations, but the Roadmap is a thorough and serious piece of work, setting out in summary at least the issues that would need to be addressed in building a computer simulation of an entire human brain. Curiously, it does not include any explicit consideration of the Blue Brain project, even in an appendix on past work in the area, although three papers by Markram, including one describing the project, are cited.

One interesting question considered is: how low do you go? How much detail does a simulation need to have? Is it good enough to model brain modules (whatever they might be), neuronal groups of one kind or another, neurons themselves, neurotransmitters, quantum interactions in microtubules? The roadmap introduces the useful idea of scale separation; there might be one or more levels where there is a cut-off, and a simulation in terms of higher level entities does not need to be analysed any further. Your car depends on interactions at a molecular level, but in order to understand and simulate it we don’t need to go below the level of pistons, cylinders, etc. Are there any cut-offs of this kind in the brain? The roadmap is not meant to offer answers, but I think after reading it one is inclined to think that there is probably a cut-off somewhere below neuronal level; you probably need to know about different kinds of neurotransmitters, but probably don’t need to track individual molecules. Something like this seems to have been the level the Blue Brain project settled on.
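Just to give a concrete sense of what ‘stopping at the neuron level’ might mean, here is a deliberately crude sketch: a leaky integrate-and-fire neuron whose inputs are labelled only by transmitter type, with no individual molecules anywhere in sight. All the parameter values are arbitrary, chosen purely so the example runs; nothing here is drawn from the roadmap or from Blue Brain itself:

```python
# A crude leaky integrate-and-fire neuron: a membrane potential, a leak, and
# synaptic bumps labelled by transmitter type. Everything below this level
# (receptors, molecules) is deliberately absent -- that is the 'cut-off'.

def simulate_neuron(input_spikes, steps=50, threshold=1.0, leak=0.05):
    weights = {"glutamate": 0.3, "GABA": -0.2}   # excitatory / inhibitory
    v = 0.0                                      # membrane potential (arbitrary units)
    fired_at = []
    for t in range(steps):
        v -= leak * v                            # passive decay toward rest
        for transmitter in input_spikes.get(t, []):
            v += weights[transmitter]
        if v >= threshold:
            fired_at.append(t)                   # spike...
            v = 0.0                              # ...and reset
    return fired_at

# A few excitatory inputs arriving close together are enough to make it fire.
spikes = {5: ["glutamate"], 6: ["glutamate"], 7: ["glutamate", "glutamate"]}
print(simulate_neuron(spikes))   # [7]
```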

The roadmap merely mentions some of the philosophical issues. It clearly has in mind the uploading of an individual consciousness into a computer, or the enhancement or extension of a biological brain by adding silicon chips, so an issue of some importance is whether personal identity could be preserved across this kind of change. If we made a computer copy of Stephen Hawking’s brain at the moment of his death, would that be Stephen Hawking?

The usual problem in discussions of this issue is that it is easy to imagine two parallel scenarios; one in which Hawking dies at the moment of transition (perhaps the destruction of his brain is part of the process), and one in which the exact same simulation is created while he continues his normal life. In the first case, we might be inclined to think that the simulation was a continuation, in the latter case it’s more difficult; yet the simulation in both cases is the same. My inclination is to think that the assertion of continuing identity in the first case is loose; we may choose to call it Hawking, but even if we do, we have to accept that it’s Hawking put through a radical alteration.

Of course, even if the simulation hasn’t got Hawking’s personal identity, having a simulation of his brain (or even one which was only 80% faithful) would be a fairly awesome thing.

The roadmap provides a useful list of assumptions. One of these is:

Computability: brain activity is Turing-computable, or if it is uncomputable, the uncomputable aspects have no functionally relevant effects on actual behaviour.

I’ve come to doubt that this is probable. I cannot present a rigorous case, but in sloppy impressionistic terms the problem is as follows. Non-computable problems like the halting problem or the tiling problem seem intuitively to involve processes which when tackled computationally go on forever without resolution. Human thought is able to deal with these issues by being able to ‘see where things are going’ without pursuing the process to the end.

Now it seems to me that the process of recognising meanings is very likely a matter of ‘seeing where things are going’ in much the same way. Computers don’t deal with meaning at all, although there are cunning ploys to get round this in the various areas where it arises. The problem may well be that meanings are indefinitely ambiguous; there are always some more possible readings to be eliminated, and this might be why meaning is so intractable by computation.

Of course, apart from the hand-waving vagueness of that line of thought, it leaves me with the difficulty of explaining how the problem would manifest itself in the construction of a whole brain simulation; there would presumably have to be some properties of a biological brain which could never be accurately captured by a computational simulation. There are no doubt some fine details of the brain which could never be captured with perfect accuracy, but given the concept of scale separation, it’s hard to see how that alone would be a fatal problem.

When a whole brain simulation is actually attempted, the answer will presumably emerge; alas, according to the estimates in the road map, I may not live to see it.

Consciousness: the new battleground for creationists?

This piece in the New Scientist suggests that creationists and their sympathisers are seeking to open up a new front. They think that the apparent insolubility of the problem of qualia means that materialism is on the way out; in fact, that consciousness is ‘Darwinism’s grave’. Cartesian dualism is back with a vengeance. Oh boy: if there was one thing the qualia debate didn’t need, it was a large-scale theological intervention. Dan Dennett must be feeling rather the way Guy Crouchback felt when he heard about the Nazi-Soviet pact: the forces of darkness have drawn together and the enemy stands clear at last!

The suggested linkage between qualia and evolution seems tortuous. The first step, I suppose, assumes that dualism makes the problem of qualia easier to solve; then presumably we deduce that if dualism is true, it might as well be a dualism with spirits in (there are plenty of varieties without; in fact if I were to put down a list of the dualisms which seem to me most clear and plausible, I’m not sure that the Christian spirit variety would scrape into the Top Ten); then, that if there are spirits, there could well be God, and then that if there’s God he might as well take on the job of governing speciation. At least, that’s how I assume it goes. A key point seems to be the existence of some form of spiritual causation. Experiments are adduced in which the subjects were asked to change the pattern of their thoughts, which was then shown to correspond with change in the activity of their brain; this, it is claimed, shows that mind and brain are distinct. Unfortunately it palpably doesn’t; attempting to disprove the identity of mind and brain by citing a correlation between the activity of the two is, well, pretty silly. Of course the thing that draws all this together and makes it seem to make sense in the minds of its advocates is Christianity, or at any rate an old-fashioned, literalist kind of Christianity.

Anyway, I shall leave Darwinism to look after itself, but in a helpful spirit let me offer these new qualophiles two reasons why dualism is No Good.

The first, widely recognised, is that arranging linkages between the two worlds, or two kinds of stuff required by dualism, always proves impossible. In resurrecting ‘Cartesian dualism’ I don’t suppose the new qualophiles intend to restore the pineal gland to the role Descartes gave it as the unique locus of interaction between body and soul, but they will find that coming up with anything better is surprisingly difficult. There is a philosophical reason for this. If you have causal connections between your worlds – between spirits and matter, in this case – it becomes increasingly difficult to see why the second world should be regarded as truly separate at all, and your theory turns into a monism before your eyes. But if you don’t have causal connections, your second world becomes irrelevant and unknowable. The usual Christian approach to this problem is to go for a kind of Sunday-best causal connection, one that doesn’t show up in the everyday world, but lurks in special invisible places in the human brain. This was never a very attractive line of thinking and in spite of the quixotic efforts of those two distinguished knights, John Eccles and Karl Popper, it is less plausible now than ever before, and its credibility drains further with every advance in neurology.

The second problem, worse in my view, is that dualism doesn’t really help. The new qualophile case must be, I suppose, that our ineffable subjective experiences are experiences of the spirit, and that’s what gives them their vivid character. The problem of qualia is to define what it is in the experience of seeing red which is over and above the simple physical account; bingo! It’s the spirit. To put it another way, on this view zombies don’t have souls.

But why not? How does the intervention of a soul turn the ditchwater of physics into the glowing wine of qualia? It seems to me I could quite well imagine a person who had a fully functioning soul and yet had no real phenomenal experiences: or at any rate, it’s as easy to imagine that as an unsouled zombie in the same position. I think the new qualophiles might reply that my saying that shows I just haven’t grasped what a soul is. Indeed I haven’t, and I need them to explain how it works before I can see what advantage there is in their theory. If we’re going to solve the mystery of qualia by attributing it to ‘souls’, and then we declare ‘souls’ a mystery, why are we even bothering? But here, as elsewhere with theological arguments, it seems to be assumed that if we can get the question into the spiritual realm, the enquiry politely ceases and we avert our eyes.

It is, of course, the same thing over on the other front, where creationists typically offer criticism of evolutionary theory, but offer not so much as a sniff of a Theory of Creation. Perhaps in the end the whole dispute is not so much a clash between two rival theories as a dispute over whether we should have rational theories at all.

Loebner 2008

The annual Loebner Prize has been won by Elbot. As you may know, the Loebner competition implements the Turing Test, inviting contestants to put forward a chat-bot program which can conduct online conversation indistinguishable from one conducted with a human being. We previously discussed the victory of Rollo Carpenter’s Jabberwacky, a contender again this year.

One of Elbot’s characteristics, which presumably helped tip the balance this year, is a particular assertiveness about trying to manage the conversation into ‘safe’ areas. One of the easiest ways to unmask a chat-bot is to exploit its lack of knowledge about the real world; but if the bot can keep the conversation in domains it is well-informed about, it stands a much better chance of being plausible. Otherwise the only option is often to resort to relatively weak default responses (‘I don’t know’, ‘What do you think?’, ‘Why do you mention [noun extracted from the input sentence]?’).
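That last trick is crude but easy to sketch. The stopword list and the ‘noun’ extraction below are invented and far simpler than anything a real chat-bot would use; the point is only how little machinery a plausible-sounding default response needs:

```python
# A toy fallback responder of the kind described above: no understanding,
# just canned replies and a crudely extracted topic word.

import random

STOPWORDS = {"the", "a", "an", "is", "are", "i", "you", "do", "what", "why",
             "how", "about", "of", "to", "think", "my", "your", "that"}

def fallback_reply(user_input: str) -> str:
    words = [w.strip("?.,!").lower() for w in user_input.split()]
    content_words = [w for w in words if w and w not in STOPWORDS]
    topic = content_words[-1] if content_words else None
    replies = ["I don't know.", "What do you think?"]
    if topic:
        replies.append(f"Why do you mention {topic}?")
    return random.choice(replies)

print(fallback_reply("What do you think about quantum gravity?"))
```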

But aren’t Elbot’s tactics cheating? Don’t these cheap tricks invalidate the whole thing as a serious project? Some would say so: the Loebner does not enjoy universal esteem among academics, and Marvin Minsky famously offered a cash reward to anyone who could stop the contest.

We have to remember, however, that the contestants are not seeking to reproduce the real operation of the human brain. Humans are able to conduct general conversation because they have general-purpose consciousness, but that is far too much to expect of a simple chat-bot. The Turing Test is sometimes interpreted as a test for consciousness, but that isn’t quite how Turing himself described it (he proposed it as a more practical alternative to considering the question ‘Can machines think?’).

OK so it’s not cheating, but all the same, if it’s just fakery, what’s the value of the exercise? There are several answers to this. One is the ‘plane wing’ argument: planes don’t fly the way birds do, but they’re still of value. It might well be that a program that does conversation is useful in its own right, even if it doesn’t do things the way the human brain does; perhaps for human/machine interfaces. On the other hand, as a second answer, it might turn out that discoveries we make while making chat-bots will eventually shed some light on how some parts of the brain put together well-structured and relevant sentences. Third, even if they don’t do that, they may still lead to the discovery of unexpectedly valuable techniques in programming: solving difficult but apparently pointless problems just for the hell of it does sometimes prove more fruitful than expected. A fourth point which I think perhaps deserves more attention is that even if chat-bots tell us nothing about AI, they may still tell us interesting things about human beings. The way trust and belief are evoked, for example: the implicit rules of conversation, and the pragmatics of human communication.

The clincher in my view, however, is that the Loebner is fun, and enlivens the subject in a way which must surely be helpful overall. How many serious scientists were inspired in part by a secret childhood desire to have a robot friend they could talk to?

In a way you could say Elbot is attempting to refine the terms of the test. A successful program actually needs to deploy several different kinds of ability, and one of the most challenging is bringing to bear a fund of implicit background knowledge. No existing program is remotely as good at this as the human brain, and there are some reasons to doubt whether they ever will be. In the meantime, at any rate, there may be an argument for taking this factor out of the equation: Elbot tries to do this by managing the conversation, but in some early Loebner contests the conversations were explicitly limited to a particular topic, and maybe this approach has something to be said for it. I believe Daniel Dennett, who was once a Loebner judge, suggested that the contest should develop towards testing a range of narrower abilities rather than the total conversational package. Perhaps we can imagine tests of parsing, of knowledge management, and so on.

At any rate, the Loebner seems in vigorous health, with a strong group of contenders this year: I hope it continues.

Cognition Incorporated

Bitbucket: So, another bastion of mysterianism falls. Cognition Technologies Inc has a semantic map of virtually the entire English language, which enables applications to understand English words. Yes, understand. It’s always been one of your bedrock claims that computers can’t really understand anything; but you’re going to have to give that one up.

Blandula: Oh, yes, I read about that. In fact it’s been mentioned in several places. I thought the Engadget headline got to the root of the issue pretty well – ‘semantic map opens way for robot uprising’. I suppose that’s pretty much your view. It seems to me just another case of the baseless hyperbole that afflicts the whole AI field. There are those people at Cyc and elsewhere who think cognition is just a matter of having a big enough encyclopaedia: now we’ve got this lot who think it’s just a colossal dictionary. But having a long list of synonyms and categories doesn’t constitute understanding; or my bookshelf would have achieved consciousness long ago.

Bitbucket: Let me ask you a question. Suppose you were a teacher. How would you judge whether your pupils understood a particular word? Wouldn’t you see whether they could give synonyms, either words or phrases that meant the same thing? If you yourself didn’t understand a word, what would you do? You’d go over to that spooky bookcase and look at a list of synonyms. Once you’d matched up your new word with one or two synonyms, you’d understand it, wouldn’t you?

I know you’ve got all these reservations about whether computers really do this or really do that. You don’t accept that they really learn anything, because true human learning implies understanding, and they haven’t got that. They don’t really communicate, they just transfer digital blips. According to you all these usages are just misleading metaphors. According to you they don’t even play chess, not in the sense that humans do. Now frankly, it seems to me that when the machine is sitting there and moving the pieces in a way that puts you into checkmate, any distinction between what it’s doing and playing chess is patently nugatory. You might as well say you don’t really ride a bike because a bike hasn’t got legs, and that bike-riding is just a metaphor. However, I recognise that I’m not going to drive you out of your cave on this. But you’re going to have to accept that machines which use this kind of semantic mapping can understand words at least to the same extent that computers can play chess. Concede gracefully; that’ll be enough for today.

Blandula: I’ll grant you that the mapping is an interesting piece of work. But what does it really add? These people are using the map for a search engine, but is it really any better than old-fashioned approaches? So we search for, say ‘tears’; the search engine turns up thousands of pages on weeping, when we wanted to know about tears in a garment. Now Cognition’s engine will be able to spot from the context what I’m really after. Because I’ve searched for ‘What to do when something tears your trousers’ it will notice the word trousers and give me results about rips. But so will Google! If I give Google the word trousers as well as tears, it will find relevant stuff without needing any explicit semantics at all. These people don’t understand why a ‘semantic map’ is necessary for a search engine, and they’re sceptical about Cognition’s ontology (in that irritating non-philosophical sense).

Bitbucket: Wrong! When you do a search on Google for ‘What to do when something tears your trousers’ the top result is actually about tears from your eye, believe it or not. Typically you get all sorts of irrelevant stuff along with a few good results. But to see the point, look at an example Cognition give themselves. Their legal search demo (try it for yourself), when asked for results relating to ‘fatal fumes in the workplace’, came up with a relevant result which contains neither ‘fatal’ nor ‘workplace’, only ‘died’ and ‘working’. This kind of thing is going to be what Web 3 is built around.

Blandula: If this thing is so good, why haven’t they used it for a chatbot? If this technology involves complete understanding of English, they ought to breeze through the Turing test. I’ll tell you why: because that would involve actual understanding, not just a dictionary. Their machine might be able to look up the meaning of ‘pitbull’ and find the synonym ‘dog’, but it wouldn’t have a hope with the lipstick pitbull – or are they the ones with all the issues?

Bitbucket: Nobody says the Cognition technology has achieved consciousness.

Blandula: I think you are saying that, by implication. I don’t see how there can be real understanding without consciousness.

Bitbucket: Or if there is, it won’t be real consciousness according to you – just a metaphor…
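For what it’s worth, the kind of matching Bitbucket’s ‘fatal fumes’ example relies on can be sketched very crudely: expand each query term through a synonym map before scoring documents. The tiny hand-made map below is of course nothing like Cognition’s actual semantic map, which is far larger and more structured; it just shows how ‘died’ and ‘working’ can match a query that contains neither word:

```python
# A crude query-expansion sketch: score documents against synonym sets
# rather than exact query words. The synonym map is invented for this example.

SYNONYMS = {
    "fatal": {"fatal", "deadly", "died", "death"},
    "fumes": {"fumes", "gas", "vapour", "smoke"},
    "workplace": {"workplace", "work", "working"},
}

def expand(term):
    return SYNONYMS.get(term, {term})

def score(query_terms, document):
    doc_words = set(document.lower().split())
    return sum(1 for term in query_terms if expand(term) & doc_words)

docs = [
    "employee died after inhaling gas while working",
    "tears welled up in her eyes",
]
query = ["fatal", "fumes", "workplace"]
for doc in docs:
    print(score(query, doc), "-", doc)   # 3 for the first document, 0 for the second
```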

Nanointentionality

Somewhat belatedly I came across an interesting paper by W Tecumseh Fitch the other day (actually I came across Beau Siever’s discussion of Daniel Dennett’s discussion of the paper), in which he boldly tackles the thorny subject of original intentionality, claiming it’s all based on what he calls nano-intentionality.

Fitch declares himself a defender of intrinsic intentionality. Intentionality, as you may know, is aboutness, meaningfulness. Things like books and films are said to have derived intentionality: they are about things because the people who made them and the people who read or watch them interpret them as being about something, conferring meaning on them. But some things, our own thoughts, for example, are not about things because of what anyone else thinks; they just are intrinsically about things. How they manage this has always been a mystery.

Dennett, in fact, denies that there is any such thing as intrinsic intentionality – how can anything just inherently be about something? It’s this view that Fitch wants to challenge; strangely, Dennett says it’s all a misunderstanding and he agrees with Fitch.

How can this be? Well, Dennett would be right to reject intrinsic intentionality if it meant that we just say things are magically about things and that’s the end of the story; but really when people speak of intrinsic intentionality it is usually a kind of promissory note: they mean, here in people’s minds is where meaning originates; we don’t know how yet, but meanings in minds are different to meanings in books. Fitch means to say: this is where meaning originates, and I think I can tell you how. I think Dennett is comfortable with theories of intentionality which provide a decent naturalistic interpretation – and if that’s what we’re doing, he doesn’t really care too much whether we’re calling it intrinsic, or original, or whatever.

Fitch’s view still seems at odds with Dennett’s in many ways; he rejects the idea that computers could have intrinsic intentionality, and in general his ideas would seem to fit well with those of Searle, Dennett’s arch-enemy. Searle says that consciousness arises from certain kinds of biological material as a result of some properties of that material which we don’t understand (yet – Searle is sure that further scientific research will enable us to understand them). Nanointentionality would seem to fit into that view quite well.

So what is it? Fitch says that biological organisms exhibit a responsiveness to their environment which no machine can emulate. When we’re cut, we heal up: our flesh extemporises, forming functional but ad hoc patterns of tissue that patch up the gap. Amoebas and smaller single-celled organisms respond to their surroundings, not just in a pre-organised way, but flexibly, managing to respond and adapt even to new circumstances. This kind of responsiveness to the environment, in his view, is the elementary precursor to true intentionality: the responses are not, in detail at least, written into the organism, and they are, at a basic level, goal-directed.

Having, as he believes, obtained this narrow foothold, Fitch seeks to build on it. Cells working together can build up an information processing system which inherits from them the spark of aboutness while adding to it new capacities. When they reach the level where options can be modelled and accepted or rejected, full-blown consciousness and true intrinsic intentionality dawn. There’s something a little surprising about this; Fitch is relying for most of the work on the kind of functional organisation he otherwise rejects. At least half of the powers of intentionality seem to come, not from the initial spark, but the way the neurons process information – the sort of thing you might think was perfectly amenable to computation (I see Dennett nodding happily). It prompts another thought, too: Fitch denies that mere silicon can have the kind of open-ended responsiveness of an amoeba. If we swap cells for transistors, that may be right; but what if the computer moves down a notch and simulates the parts (perhaps even the molecules) of the amoeba? Since Fitch is committed to naturalism, it seems hard to exclude computers from having the properties of living things so absolutely as he wishes.

There is a problem down there with the nano-intentionality, too. Fitch sees the responsiveness of the eukaryotic organisms (he’s prepared to exclude the prokaryotes) as having a directedness which prefigures proper intentionality. But I doubt it. This directedness is a real and interesting quality, resembling what Grice called natural meaning: those spots mean measles; that spade means a hole to be dug. This is a good place to be looking for the roots of intentionality; indeed, my own view would have them somewhere in this area. The snag is that natural meaning has a tendency to separate out into two parts: the fitness of a thing for a result, and derived intentionality. The first of these has nothing of intentionality in it, properly understood; a spade may be specially fit for digging, but a large snowbank is especially fit for avalanches; large black clouds are fit for rain; there’s no real meaning at work. The element of derived intentionality lies in the eye of the beholder; the spade is about digging because that’s the way it was designed, and that’s the way I mean to use it. This is intentionality, but resoundingly not the intrinsic kind we’re after.

So, if we look at Fitch’s amoeba, we can analyse its responsiveness. In part it’s simply a fitness to go on surviving, no different in principle from the cloud’s fitness to rain; in part it’s a purposefulness which we and Fitch can’t help reading into it. Take away these two elements, and the foothold on which Fitch stands disappears, I think.

Feelings about Jaynes

I see that the annual Julian Jaynes Conference took place last month. As you may know, Jaynes put forward a surprising theory of consciousness which suggested it had a relatively recent origin. According to Jaynes, ancient human beings, right up into early historical times, had minds that were divided into two chambers. One of these chambers was in charge of day-to-day life, operating on a simple, short-term emotional basis for the most part (though still capable of turning out some substantial pieces of art and literature, it seems). The occasional interventions of the second chamber, the part which dealt in more reflective, longer-term consideration, were not experienced as the person’s own thoughts, but rather as divine or ancestral voices restraining or instructing the hearer, which explains why interventionist gods feature so strongly in early literature. The breakdown of this bicameral arrangement and the unification of the two chambers of the mind were, according to Jaynes, what produced consciousness as we now understand it.

I find this bicameral theory impossible to believe, but it does have some undeniably engaging qualities. The way it gives a neat potential explanation for divine voices and for certain modern mental disorders gives it a superficial plausibility, especially when expounded with Jaynes’ characteristic eloquence and panache. It’s tempting to think of it as a drastically overstated version of  an essentially sound insight, but even if it’s completely wrong, thinking about its flaws is a stimulating exercise.

At this year’s conference, Stevan Harnad gave a speech in two parts, beginning with some slightly disjointed personal reminiscences of Jaynes – he mentions that he found it impossible to write an obituary for Jaynes and you get the feeling that his emotions are still making it a difficult subject for him to talk about – and then going on to a philosophical discussion of the interesting question of whether Jaynes would have kicked a dog, and why not.

Why shouldn’t he? For Jaynes, after all, consciousness was uniquely human; no other creature had gone through the breakdown of a bicameral mind. There’s nothing especially wrong with kicking unconscious objects and since dogs lack consciousness, there should be no particular reason not to kick one; but Harnad was sure Jaynes, a gentle and civilised man, would certainly not have done so. In fact, he had confirmed this in conversation with Jaynes during his lifetime. Jaynes said it was true that dogs in themselves did not deserve the same moral consideration as conscious entities like human beings; but that we should by all means refrain from kicking dogs unnecessarily because of the moral consequences for the kicker and onlookers. Kicking dogs is a bad, desensitising act, unworthy of the moral dignity of human beings even though dogs don’t fundamentally matter.

This is an interesting answer, and I think it’s intellectually tenable. We should, on the whole, refrain from slashing rose bushes to pieces, and from smashing beautiful porcelain, even though plants and pots are not conscious. But as Harnad suggests, we may doubt the sincerity of the argument – it has the air of a rationalisation run up to defend a weak spot in a wider case rather than something sincerely believed. We might think that a better line of argument was available to Jaynes if he had been willing to say that unconscious creatures can still be moral objects, which is surely true.

Harnad, ultimately, wants to say that dogs are indeed moral objects because they have feelings, or so our mirror neurons tell us; and that empathy is enough to make us hold off, even Julian Jaynes. Although I suspect this overstates the role of mirror neurons, he’s surely right to think that the possession of feelings is enough to constitute a moral object. As Jeremy Bentham put it, ‘The question is not, Can they reason? nor, Can they talk? but, Can they suffer?’

What’s particularly interesting is the discussion Harnad provides about feelings (in the loosest sense; any kind of mental intimation, including but not limited to sensory input). He begins by pointing out that Descartes’ cogito ergo sum is not a logical deduction but a claim about the infallibility of certain thoughts or feelings. To think that one exists can’t be a mistake because non-existent people don’t think at all. Harnad scrupulously points out that it’s the existence of the thought itself which is established, rather than the existence of the more problematic self. However, other feelings have a similar kind of infallibility; we can be wrong about whether we’ve got a bad tooth, but not that it feels like toothache. Harnad notes that a similar kind of infallibility attaches to what we mean or understand. We can of course use words that don’t, in the wider world, have the meaning we wanted, but we can’t be wrong about what we meant internally. Harnad describes this as the distinction between wide and narrow meaning; it largely corresponds with the more controversial distinction between intrinsic and derived intentionality (thoughts have meanings because somehow they just do; books have meanings only because they record and evoke thoughts).

It all comes down to feelings according to Harnad. “2×2=4” does not feel the same to us as “Kétszer kettő négy”, but if, like him, we spoke Hungarian, they would feel very similar, because they mean the same thing.

This is very interesting. The problem of intentionality, of meaningfulness, is one of the principal problems of consciousness, but it tends in my view to be somewhat neglected – perhaps partly because it’s so difficult to find anything worth saying about it. New ideas in this area are very welcome, and on the face of it Harnad’s suggestion is plausible (sincerity and strong feelings seem to go together at any rate). The chief problem, perhaps, is that even if it’s true, this insight doesn’t move us on as much as we should like. There’s no accompanying theory of feelings, and since Harnad has explicitly chosen the vaguest and widest interpretation of the term, we still don’t know all that much about the fundamental nature of intentionality.

My feeling, on the whole, is that in fact the true essence of meaningfulness lies elsewhere; a feeling that x is an invariable accompaniment to believing that x, but does not constitute the belief. Two cheers for Harnad, though, and a third for Jaynes, whose legacy remains so productive.

Why am I me?

There are a few philosophical problems which occur spontaneously to people who know nothing of academic philosophy but have a naturally thoughtful inclination. The problem of free will is one, I think, and probably so is qualia; many people who have never heard of David Chalmers sometimes ponder the ‘hard problem’, asking themselves how they know that the blue they see ‘in their heads’ is the same as the blue other people see. David M. Black has put his finger on another of these problems in his paper on The Ownership of Consciousness. Why am I me and not someone else? Black’s main purpose lies elsewhere – he wants to suggest that talk of spirituality can be a valuable way of discussing structures in the subjective part of the world, complementing the reductive scientific account which deals with the objective aspects. That’s an attractive project (though I think he allows himself too much too easily in assuming that subjective experience has causal effects). But it was the issue posed by the title of the paper that particularly caught my attention.

Now of course, there is a sense in which this is an absurd enquiry. Whoever I am, that person is me: I can’t not be me, by definition. Self and consciousness are intertwined, so that the ownership of my consciousness can never really be in doubt. I am my consciousness. It would make more sense to ask why I have this body than to ask why I have this consciousness. So it might seem that the mystery of why I am who I am is really about on a par with the mystery of how I was lucky enough to be born on Earth, rather than on a planet without an atmosphere; not really a mystery at all.

But suppose, we might say, we strip away the details of my body and my life and pare me down to the essential nub of experiencing entity. What makes this nub any different from other such nubs, and why is it linked with the life of this particular human organism rather than any other?

Some would say in response that there is no such nub; it’s exactly my history and my physical constitution that make my consciousness what it is; so again it’s no surprise, properly understood, that my experiences belong to me and not to anyone else. Strip away all those supposedly inessential features, and you strip me away with them. Others would accept that it’s my history and composition that define me as me, but feel, possibly on the basis of introspection, that there still is something to me over and above all that. They might find this final ingredient in an inscrutable panpsychic quality of matter itself; others have suggested a kind of universal experiential substrate or background. Instead of individual nubs, we have a kind of Universal self; a line of thought which is highly compatible with some religious and mystical views.

However, I think both sides of this argument are missing the point. To say that my individual consciousness arises from or is constituted by my physical nature and background is not actually to dispose of the essential problem at all, because my physical nature and background are also inexplicably particular. Perhaps the underlying problem is the vexed question of why anything is anything in particular: it’s just that in the case of my own experience and my own existence the question hits me with a force it lacks when I’m merely wondering about a chair. The basic problem is haecceity or thisness (the same problem which in my view lies at the root of the qualia problem).

The difficulty of this issue is that it seems no kind of explanation will do. One way to explain the arbitrary complexity of the world would be to assume that everything is really a logical necessity, so if we were clever enough we could deduce all truths from first principles, like Douglas Adams’ Deep Thought which, beginning with the cogito, had got as far as deducing the existence of rice pudding and income tax before anyone could switch it off, if I remember correctly. Even if we could pull off such staggering feats of deduction, the explanation is no good, because if everything exists only by virtue of logical necessity, everything exists in an eternal Platonic world, the opposite of the mutable diversity we set out to explain. A more scientific view would have it that the current state of the world derives from previous states in accordance with the laws of physics, so that the explanation is essentially historical. But this is no better. We now have an expanded set of laws; beside those of logic, we need those of mathematics and those of physics. But they’re still laws, so we’re still Platonic and immutable unless some arbitrary graininess somehow crept in at the beginning of it all. Nowadays we’re readier to accept that huge chaotic complexity can arise out of small beginnings; but not out of nothing. Explaining that original graininess is as difficult as explaining the haecceity of the world was to start with.

And what about those laws of physics? Do they too reduce to logical or mathematical necessity, or is there some arbitrary element involved; and if so, how do you explain that? One line we could take is to say that all the possible variations of the underlying constants are realised in some possible world. The problem of how we got these particular laws of physics then turns into the same sort of vacuous problem as the ones mentioned above: if the laws weren’t like that, you wouldn’t be here to wonder about it.

But that’s no good. Even if we could get over the problems involved in the idea of parallel worlds, which seems to involve the problematic retention of identity between non-identical entities, how can we deal with the concept of all possible sets of laws of physics? Typically in these discussions it is assumed that we’re talking about variations in the value of a few constants, but things are much worse than that. Apart from the sheer bogglement of universes where the value of gravity is determined by quadripedal blurpton interactions in the trouser-pocket of fourth-order fried bread, if all possible sets of laws are realised, that includes universes whose laws and constitution are the same as ours up until 2009, when the blurptons abruptly take over. In short, anything could happen at any time; to say that all possible laws of physics are realised in some universe is in effect to declare that there are no laws of physics and that everything happens arbitrarily.

That is another possible position, of course, though an utterly unsatisfactory one. Taking a more traditional tack, we could say that the world is the way it is because of the will of God; but for philosophers, that’s no good. We need to know whether God was working on logical principles, and if it wasn’t pure logic, where did His axioms or His quirks of personality come from?

In the last century it was finally established that there is something in maths that isn’t reducible to logic; not much, but an essential little something which can be construed in different ways. It seems to me that in the same way there’s some fundamental element in metaphysics that isn’t any kind of law; but I have no idea how to construe it at all.