The Loebner Prize, for the program best able to simulate a human being in conversation along the lines of the celebrated Turing Test, has been won for the second year running by Rollo Carpenter’s Jabberwacky, this time using ‘Joan’, a female personality. Last year’s contest was mentioned here. Jabberwacky seems to be making steady progress: some media attention has been attracted recently by the visual people-simulations which have been commissioned to accompany the verbal output. George, last year’s winning personality, is now visible here.
You can have a conversation with Jabberwacky yourself here: my impression is that unless you deliberately set out to trap the program, it delivers quite long stretches of plausible, if rather evasive, dialogue. I suppose the evasiveness is an inevitable product of the computer not really understanding what you’re talking about – it’s like someone trying to bluff their way through a conversation about a book they haven’t actually read. It might disappear if the software were dealing with a limited domain, an area it could ‘know about’ in more detail. That suggests that practical usefulness in some such restricted context (perhaps answering routine queries about a particular product, or acting as an interactive ‘tour guide’ in a museum) is no longer an unrealistic aspiration – though “really” passing the Turing Test in free conversation still is.
Should we be pleased or worried? Chatbot programs do not pretend to reproduce all the ineffable properties of real human thought and consciousness: they just aim to deliver good outputs by whatever means works. It has been suggested, however, that their resemblance to a full-fledged conscious being could have a subtly malign impact on our attitudes. The problem is that some people – many people, probably – are more than willing to attribute personhood to programs which haven’t any claim to it. Daniel Dennett has remarked, in connection with his own erstwhile involvement with the Loebner, that the attitude of the interlocutor was often a more important factor than the quality of the chatbot. Some sceptical people devise cunning questions which rely on knowledge of the world, or the context of the dialogue, in ways which throw even the most sophisticated and well-prepared program: but many others accept almost any grammatical output as a human-like response. It seems there is something seductive about having our own image reflected in a machine; a kind of Narcissus effect which makes us fascinated with a dialogue in which we are really the only players.
This willingness to be deceived was already evident with Joseph Weizenbaum’s famous Eliza, the mother of all chatbots: he found his secretary conducting a long and deeply engaged conversation with the program, much to his horror. I saw the same effect on a smaller scale myself many years ago, in the days when an 80 by 25 display of glowing green characters was still the standard PC display technology. I had a simple program which drew a face on the screen using ASCII characters: at random intervals it would raise an eyebrow, swivel its square eyes, smile, blink, and so on. Colleagues I had previously regarded as sensible took this thing to be far more sophisticated than it was: not human, obviously, but “maybe up to about the level of a tortoise” (it did look a bit chelonian).
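It is worth remembering just how little machinery is needed to produce this effect. Eliza worked by keyword matching and pronoun reflection: spot a trigger phrase, turn the user’s own words around, and slot them into a canned template. The sketch below illustrates that general technique in Python; the particular rules and fallback lines are my own invented examples, not Weizenbaum’s original script.

```python
import random
import re

# Pronoun reflections: the trick of turning the user's own words
# back on them ("my" -> "your", "am" -> "are", and so on).
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "i'm": "you're",
}

# (pattern, templates) pairs; the first matching keyword rule wins.
# These rules are illustrative only.
RULES = [
    (re.compile(r"\bi need (.+)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bi am (.+)", re.I),
     ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (re.compile(r"\bmy (.+)", re.I),
     ["Tell me more about your {0}."]),
]

# Evasive fallbacks for when no keyword matches -- the source of the
# "plausible but evasive" feel described above.
FALLBACKS = ["I see.", "Please go on.", "What does that suggest to you?"]


def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())


def respond(sentence: str, rng: random.Random = random) -> str:
    """Match a keyword rule, reflect the captured text into a canned
    template; otherwise return a non-committal fallback."""
    for pattern, templates in RULES:
        match = pattern.search(sentence)
        if match:
            return rng.choice(templates).format(reflect(match.group(1)))
    return rng.choice(FALLBACKS)
```

A handful of such rules is enough to sustain the illusion of an attentive listener: `respond("I need my old job back")` reflects the complaint straight back as a question about “your old job”, while anything unrecognised earns a polite “Please go on.” There is no model of the conversation at all, which is precisely the point.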
We might well find, then, that if we start dealing with plausible chatbots on a regular basis, we shall automatically start to think of them in much the same way as we think of human beings. But there are some obvious dangers in confusing people and simple machines. On the one hand, it might lead us to trust the advice of machines rather more than we should. We are already a bit prone to this, following the instructions of our in-car GPS navigation system even when it conflicts with common sense, and attributing a spurious authority to job evaluation or personality test programs which merely reflect back at us, in digested form, the views we fed into them in the first place.
More seriously, we might find our instincts being tutored towards treating people the way we treat machines – as tools to be manipulated and used without any ethical significance. I think you could make a case that this sort of thing has already begun to happen: attitudes to euthanasia, for example, have certainly shifted a long way in recent times (if the thing’s worn out, junk it), and utilitarian calculations about patients as generators of “quality of life” are far more overtly applied in medical contexts than would once have been the case. Moreover, it seems to be more readily accepted these days that moral responsibility is largely an illusion and that economic and social conditions determine the behaviour of criminals and heroes equally; books and other works of art merely express the writer’s place in the societal matrix rather than anything individual and ineffable. Does all this represent a welcome clarity about human nature, or a depressingly impoverished view of the world?
I understand the pessimistic view, which I think lies behind some people’s distaste for the whole idea of AI. But in the end I think it under-rates the subtlety of people’s attitudes. The fact is we are already used to a world in which all sorts of things which lack a human brain are treated in varying degrees as animate. Children do and don’t believe that their toys have live personalities; the ancient Romans and many others believed in gods that were sort of people, and sort of mere embodiments of abstract qualities. In Japan, Shinto grants the status of Kami (god, spirit, ensouled thing) to all the most salient features of nature, without people becoming confused or losing their sense of human specialness. For that matter, humanoid robots have been a part of our culture for a long time now – the word itself is over eighty years old. Perhaps Weizenbaum’s concern about his secretary was unnecessary: she may have regarded the friendly counsellor in the computer as no more real than the elusive Hearts players that help some Windows users pass the time these days. And… could it be possible that my old colleagues were to some extent just winding me up? I doubt if they would have gone back into a burning building to save tortoise-face, in any case. All in all, I think we can cope.
Of course, if we want to be really optimistic, there is another possibility – that dealing with chatbots every day will actually sharpen and improve our sense of the special qualities of real people…