Archive for October, 2008

[Picture: Darwin and Descartes]

This piece in the New Scientist suggests that creationists and their sympathisers are seeking to open up a new front. They think that the apparent insolubility of the problem of qualia means that materialism is on the way out; in fact, that consciousness is ‘Darwinism’s grave’. Cartesian dualism is back with a vengeance. Oh boy: if there was one thing the qualia debate didn’t need, it was a large-scale theological intervention. Dan Dennett must be feeling rather the way Guy Crouchback felt when he heard about the Nazi-Soviet pact: the forces of darkness have drawn together and the enemy stands clear at last!

The suggested linkage between qualia and evolution seems tortuous. The first step, I suppose, assumes that dualism makes the problem of qualia easier to solve; then presumably we deduce that if dualism is true, it might as well be a dualism with spirits in (there are plenty of varieties without; in fact if I were to put down a list of the dualisms which seem to me most clear and plausible, I’m not sure that the Christian spirit variety would scrape into the Top Ten); then, that if there are spirits, there could well be God, and then that if there’s God he might as well take on the job of governing speciation. At least, that’s how I assume it goes. A key point seems to be the existence of some form of spiritual causation. Experiments are adduced in which subjects were asked to change the pattern of their thoughts, which was then shown to correspond with changes in the activity of their brains; this, it is claimed, shows that mind and brain are distinct. Unfortunately it palpably doesn’t; attempting to disprove the identity of mind and brain by citing a correlation between the activity of the two is, well, pretty silly. Of course the thing that draws all this together and makes it seem to make sense in the minds of its advocates is Christianity, or at any rate an old-fashioned, literalist kind of Christianity.

Anyway, I shall leave Darwinism to look after itself, but in a helpful spirit let me offer these new qualophiles two reasons why dualism is No Good.

The first, widely recognised, is that arranging linkages between the two worlds, or two kinds of stuff, required by dualism always proves impossible. In resurrecting ‘Cartesian dualism’ I don’t suppose the new qualophiles intend to restore the pineal gland to the role Descartes gave it as the unique locus of interaction between body and soul, but they will find that coming up with anything better is surprisingly difficult. There is a philosophical reason for this. If you have causal connections between your worlds – between spirits and matter, in this case – it becomes increasingly difficult to see why the second world should be regarded as truly separate at all, and your theory turns into a monism before your eyes. But if you don’t have causal connections, your second world becomes irrelevant and unknowable. The usual Christian approach to this problem is to go for a kind of Sunday-best causal connection, one that doesn’t show up in the everyday world, but lurks in special invisible places in the human brain. This was never a very attractive line of thinking and in spite of the quixotic efforts of those two distinguished knights, John Eccles and Karl Popper, it is less plausible now than ever before, and its credibility drains further with every advance in neurology.

The second problem, worse in my view, is that dualism doesn’t really help. The new qualophile case must be, I suppose, that our ineffable subjective experiences are experiences of the spirit, and that’s what gives them their vivid character. The problem of qualia is to define what it is in the experience of seeing red which is over and above the simple physical account; bingo! It’s the spirit. To put it another way, on this view zombies don’t have souls.

But why not? How does the intervention of a soul turn the ditchwater of physics into the glowing wine of qualia? It seems to me I could quite well imagine a person who had a fully functioning soul and yet had no real phenomenal experiences: or at any rate, it’s as easy to imagine that as an unsouled zombie in the same position. I think the new qualophiles might reply that my saying that shows I just haven’t grasped what a soul is. Indeed I haven’t, and I need them to explain how it works before I can see what advantage there is in their theory. If we’re going to solve the mystery of qualia by attributing it to ‘souls’, and then we declare ‘souls’ a mystery, why are we even bothering? But here, as elsewhere with theological arguments, it seems to be assumed that if we can get the question into the spiritual realm, the enquiry politely ceases and we avert our eyes.

It is, of course, the same thing over on the other front, where creationists typically offer criticism of evolutionary theory, but offer not so much as a sniff of a Theory of Creation. Perhaps in the end the whole dispute is not so much a clash between two rival theories as a dispute over whether we should have rational theories at all.

[Picture: Elbot]

The annual Loebner Prize has been won by Elbot. As you may know, the Loebner competition implements the Turing Test, inviting contestants to put forward a chat-bot program which can conduct online conversation indistinguishable from one conducted with a human being. We previously discussed the victory of Rollo Carpenter’s Jabberwacky, a contender again this year.

One of Elbot’s characteristics, which presumably helped tip the balance this year, is a particular assertiveness about trying to steer the conversation into ‘safe’ areas. One of the easiest ways to unmask a chat-bot is to exploit its lack of knowledge about the real world; but if the bot can keep the conversation in domains it is well-informed about, it stands a much better chance of being plausible. Otherwise the only option is often to resort to relatively weak default responses (‘I don’t know’, ‘What do you think?’, ‘Why do you mention [noun extracted from the input sentence]?’).
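As a rough illustration of that weak-fallback strategy, here is a toy sketch in Python. Everything in it is invented for the example: the crude ‘noun extraction’ is just a length heuristic standing in for the shallow pattern-matching real chat-bots of this kind use, not anything Elbot actually does.

```python
import random
import re

def extract_topic(sentence):
    """Toy 'noun extraction': take the last longish word in the input.
    A real bot would use a part-of-speech tagger; this is only a stand-in."""
    words = re.findall(r"[A-Za-z]+", sentence)
    candidates = [w for w in words if len(w) > 4]
    return candidates[-1] if candidates else None

def default_response(sentence):
    """Pick one of the classic weak fallback responses."""
    responses = ["I don't know.", "What do you think?"]
    topic = extract_topic(sentence)
    if topic:
        responses.append(f"Why do you mention {topic.lower()}?")
    return random.choice(responses)

print(default_response("I went sailing near the harbour yesterday"))
# e.g. "Why do you mention yesterday?"
```

The weakness is obvious from the code itself: nothing here knows anything about the world, which is exactly why steering the conversation into well-covered territory, as Elbot does, is the stronger tactic.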

But aren’t Elbot’s tactics cheating? Don’t these cheap tricks invalidate the whole thing as a serious project? Some would say so: the Loebner does not enjoy universal esteem among academics, and Marvin Minsky famously offered a cash reward to anyone who could stop the contest.

We have to remember, however, that the contestants are not seeking to reproduce the real operation of the human brain. Humans are able to conduct general conversation because they have general-purpose consciousness, but that is far too much to expect of a simple chat-bot. The Turing Test is sometimes interpreted as a test for consciousness, but that isn’t quite how Turing himself described it (he proposed it as a more practical alternative to considering the question ‘Can machines think?’).

OK, so it’s not cheating; but all the same, if it’s just fakery, what’s the value of the exercise? There are several answers to this. One is the ‘plane wing’ argument: planes don’t fly the way birds do, but they’re still of value. It might well be that a program that does conversation is useful in its own right, perhaps for human/machine interfaces, even if it doesn’t do things the way the human brain does. A second answer is that discoveries we make while building chat-bots might eventually shed some light on how parts of the brain put together well-structured and relevant sentences. If they don’t do that, they may still, thirdly, lead to the discovery of unexpectedly valuable techniques in programming: solving difficult but apparently pointless problems just for the hell of it does sometimes prove more fruitful than expected. A fourth point, which I think deserves more attention, is that even if chat-bots tell us nothing about AI, they may still tell us interesting things about human beings: the way trust and belief are evoked, for example, the implicit rules of conversation, and the pragmatics of human communication.

The clincher in my view, however, is that the Loebner is fun, and enlivens the subject in a way which must surely be helpful overall. How many serious scientists were inspired in part by a secret childhood desire to have a robot friend they could talk to?

In a way you could say Elbot is attempting to refine the terms of the test. A successful program actually needs to deploy several different kinds of ability, and one of the most challenging is bringing to bear a fund of implicit background knowledge. No existing program is remotely as good at this as the human brain, and there are some reasons to doubt whether any ever will be. In the meantime, at any rate, there may be an argument for taking this factor out of the equation: Elbot tries to do this by managing the conversation, but in some early Loebner contests the conversations were explicitly limited to a particular topic, and maybe this approach has something to be said for it. I believe Daniel Dennett, who was once a Loebner judge, suggested that the contest should develop towards testing a range of narrower abilities rather than the total conversational package. Perhaps we can imagine tests of parsing, of knowledge management, and so on.

At any rate, the Loebner seems in vigorous health, with a strong group of contenders this year: I hope it continues.

[Picture: Semantic Map]

Bitbucket: So, another bastion of mysterianism falls. Cognition Technologies Inc has a semantic map of virtually the entire English language, which enables applications to understand English words. Yes, understand. It’s always been one of your bedrock claims that computers can’t really understand anything; but you’re going to have to give that one up.

Blandula: Oh, yes, I read about that. In fact it’s been mentioned in several places. I thought the Engadget headline got to the root of the issue pretty well – ‘semantic map opens way for robot uprising’. I suppose that’s pretty much your view. It seems to me just another case of the baseless hyperbole that afflicts the whole AI field. There are those people at Cyc and elsewhere who think cognition is just a matter of having a big enough encyclopaedia: now we’ve got this lot who think it’s just a colossal dictionary. But having a long list of synonyms and categories doesn’t constitute understanding; or my bookshelf would have achieved consciousness long ago.

Bitbucket: Let me ask you a question. Suppose you were a teacher: how would you judge whether your pupils understood a particular word? Wouldn’t you see whether they could give synonyms, either words or phrases that meant the same thing? If you yourself didn’t understand a word, what would you do? You’d go over to that spooky bookcase and look at a list of synonyms. Once you’d matched up your new word with one or two synonyms, you’d understand it, wouldn’t you?

I know you’ve got all these reservations about whether computers really do this or really do that. You don’t accept that they really learn anything, because true human learning implies understanding, and they haven’t got that. They don’t really communicate, they just transfer digital blips. According to you all these usages are just misleading metaphors. According to you they don’t even play chess, not in the sense that humans do. Now frankly, it seems to me that when the machine is sitting there and moving the pieces in a way that puts you into checkmate, any distinction between what it’s doing and playing chess is patently nugatory. You might as well say you don’t really ride a bike because a bike hasn’t got legs, and that bike-riding is just a metaphor. However, I recognise that I’m not going to drive you out of your cave on this. But you’re going to have to accept that machines which use this kind of semantic mapping can understand words at least to the same extent that computers can play chess. Concede gracefully; that’ll be enough for today.

Blandula: I’ll grant you that the mapping is an interesting piece of work. But what does it really add? These people are using the map for a search engine, but is it really any better than old-fashioned approaches? So we search for, say, ‘tears’; the search engine turns up thousands of pages on weeping, when we wanted to know about tears in a garment. Now Cognition’s engine will be able to spot from the context what I’m really after. Because I’ve searched for ‘What to do when something tears your trousers’ it will notice the word trousers and give me results about rips. But so will Google! If I give Google the word trousers as well as tears, it will find relevant stuff without needing any explicit semantics at all. Commenters on these stories don’t see why a ‘semantic map’ is necessary for a search engine, and they’re sceptical about Cognition’s ontology (in that irritating non-philosophical sense).

Bitbucket: Wrong! When you do a search on Google for ‘What to do when something tears your trousers’ the top result is actually about tears from your eye, believe it or not. Typically you get all sorts of irrelevant stuff along with a few good results. But to see the point, look at an example Cognition give themselves. Their legal search demo (try it for yourself), when asked for results relating to ‘fatal fumes in the workplace’ came up with a relevant result which contains neither ‘fatal’ nor ‘workplace’, only ‘died’ and ‘working’. This kind of thing is going to be what Web 3 is built around.
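The ‘fatal’/‘died’ example can be sketched in a few lines of Python. This is only a toy illustration of synonym-expansion search, the general kind of trick at issue: the synonym table here is invented for the example and has nothing to do with Cognition’s actual map.

```python
# Hypothetical synonym table -- a real semantic map would be vastly larger
# and would also encode word senses, not just flat synonym sets.
SYNONYMS = {
    "fatal": {"fatal", "died", "death", "lethal"},
    "fumes": {"fumes", "gas", "vapour"},
    "workplace": {"workplace", "working", "work", "job"},
}

def expand(term):
    """Return the term plus its synonyms (just the term itself if unknown)."""
    return SYNONYMS.get(term, {term})

def matches(query_terms, document):
    """A document matches if every query term appears directly or via a synonym."""
    doc_words = set(document.lower().split())
    return all(expand(t) & doc_words for t in query_terms)

doc = "the employee died after working near the gas leak"
print(matches(["fatal", "fumes", "workplace"], doc))  # True, via synonyms only
```

A plain keyword engine would reject this document, since none of the literal query terms appears in it; the expansion step is what finds ‘died’ for ‘fatal’ and ‘working’ for ‘workplace’.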

Blandula: If this thing is so good, why haven’t they used it for a chatbot? If this technology involves complete understanding of English, they ought to breeze through the Turing test. I’ll tell you why: because that would involve actual understanding, not just a dictionary. Their machine might be able to look up the meaning of ‘pitbull’ and find the synonym ‘dog’, but it wouldn’t have a hope with a pitbull that wears lipstick.

Bitbucket: Nobody says the Cognition technology has achieved consciousness.

Blandula: I think you are saying that, by implication. I don’t see how there can be real understanding without consciousness.

Bitbucket: Or if there is, it won’t be real consciousness according to you – just a metaphor…