Most Human

In The Most Human Human: A Defence of Humanity in the Age of the Computer, Brian Christian gives an entertaining and generally sensible account of his campaign to win the ‘most human human’ award at the Loebner prize.

As you probably know, the Loebner prize is an annual staging of the Turing test: judges conduct online conversations with real humans and with chatbots and try to determine which is which.  The main point of the contest is for the chatbot creators to deceive as many judges as they can, but in order to encourage the human subjects there’s also an award for the participant considered to be least like a computer.  Christian set himself to win this prize, and intersperses the story of his effort with reflections on the background and significance of the enterprise.

He tries to ramp up the drama a bit by pointing out that in 2008 a chatbot came close to succeeding, persuading three of the judges that it was human: a fourth would have won it the first-ever prize. This year, therefore, he suggests, might in some sense prove to be humanity’s last stand. I think it’s true that Loebner entrants have improved over the years. In the early days none of the chatbots was at all convincing and few of their efforts at conversation rose above the ludicrous – in fact, many early transcripts are worth reading for the surreal humour they inadvertently generated. Nowadays, if they’re not put under particular pressure, the leading chatbots can produce a lengthy stream of fairly reasonable responses, mixed with occasional touches of genius and – still – the inevitable periodic lapses into the inapposite or plain incoherent. But I don’t think the Turing Test is seriously about to fall. The variation in success at the Loebner prize has something to do with the quality of the bots, but more with the variability of the judges. I don’t think it’s made very clear to the judges what their task is, and they seem to divide into hawks and doves: some appear to feel they should be sporting and play along with the bots, while others approach the conversations inquisitorially and do their best to catch their ‘opponents’ out. The former approach sometimes lets the bots look good; the latter, I’m afraid, never really fails to unmask them. I suspect that in 2008 there just happened to be a lot of judges who were ready to give the bots a fighting chance.

How do you demonstrate your humanity?  In the past some human contenders have tried to signal their authenticity with displays of emotional affect, but I should have thought that approach was susceptible to fakery. However, compared to any human being, the bots have little information and no understanding. They can therefore be thrown off either by allusions to matters of fact that a human participant would certainly know (the details of the hotel breakfast; a topical story from the same day’s news) but would not be in the bots’ databases; or they can be stumped by questions that require genuine comprehension (prhps w cld sk qstns wth n vwls nd sk fr rpls n th sm frmt?). In one way or another they rely on scripts, so, as Christian deduces, it is basically a matter of breaking the pattern.
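
The devoweling trick, incidentally, is easy to mechanise. Here is a minimal Python sketch a judge could use to generate such probes (the helper name is my own invention):

    import re

    def devowel(text):
        # Strip vowels, as in the probe suggested above.
        return re.sub(r"[aeiouAEIOU]", "", text)

    print(devowel("Perhaps we could ask questions with no vowels and ask for replies in the same format?"))
    # -> Prhps w cld sk qstns wth n vwls nd sk fr rpls n th sm frmt?

A human reads the result almost without effort; a bot matching on surface strings typically has nothing to work with.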

I’m not altogether sure how it comes about that humans can break pattern so easily while remaining confident that another human will readily catch their drift. Sometimes it’s a matter of human thought operating on more than one level, so that where two topics intersect we can leap from one to the other (a feature it might be worth trying to build into bots to some extent). In the case of the Loebner, though, hawkish judges are likely to make a point of leaving no thread of relevance whatever between each input, confident that any human being will be able to pick up a new narrative instantly from a standing start. I think it has something to do with the un-named human faculty that allows us to deal with pragmatics in language, evade the frame problem, and effortlessly catch and attribute meanings (at least, I think all those things rely at least partly on a common underlying faculty, or perhaps on an unidentified common property of mental processes).

Christian quotes an example of a bot which appeared to be particularly impoverished, having only one script: if it could persuade the judge to talk about Bill Clinton it looked very good, but as soon as the subject was changed it was dead meat. The best bots, like Rollo Carpenter’s Jabberwacky, seem to have a very large repertoire of examples of real human responses to the kind of thing real humans say in a conversation with chatbots (helpfully, real humans are not generally all that original in these circumstances, so it’s almost possible to treat chatbot conversation as a large but limited domain in itself). They often seem to make sense, but still fall down on consistency, being liable to give random and conflicting answers, for example about their own supposed gender and marital status.
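
To make the repertoire idea concrete, here is a toy retrieval bot in very roughly the Jabberwacky spirit – emphatically not Carpenter’s actual algorithm, just a sketch of answering from a stock of recorded exchanges (the corpus entries are invented):

    import re

    # Stand-in corpus of (stimulus, response) pairs harvested from real chats.
    CORPUS = [
        ("hi how are you", "Fine thanks, you too?"),
        ("are you married", "No, I live alone with my cat."),
        ("do you like music", "I love jazz, especially Coltrane."),
    ]

    def tokens(text):
        return set(re.findall(r"[a-z']+", text.lower()))

    def reply(user_input):
        # Return the response whose recorded stimulus shares the most
        # words with the new input.
        stimulus, response = max(CORPUS, key=lambda p: len(tokens(user_input) & tokens(p[0])))
        return response

    print(reply("So, are you married?"))  # -> No, I live alone with my cat.

Because each reply is retrieved independently, nothing enforces consistency: ask about marital status twice in different words and the bot may be matched to two different human donors, which is exactly the failure described above.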

Reflecting on this, Christian notes that a great deal of ordinary everyday human interaction effectively follows scripts, too. In the routine of small talk, shopping, or ordering food, there tend to be ritual formulas to be followed and only a tiny exchange of real information. Where communication is poor, for example where there is no shared language, it’s still often relatively easy to get the tiny nuggets of actual information across and complete the transaction successfully (though not always: Christian tells against himself the story of a small disaster he suffered in Paris through assuming, by analogy with Spanish, that ‘station est’ must mean ‘this station’).

Doesn’t this show, Christian asks, that half the time we’re wasting time? Wouldn’t it be better if we dropped the stereotyped phatic exchanges and cut to the chase? In speed-dating it is apparently necessary to have rules forbidding participants to ask certain standard questions (Where do you live? What do you do for a living?) which eat up scarce time without people getting any real feel for each other’s personality. Wouldn’t it be more rewarding if we applied similar rules to all our conversations?

This, Christian thinks, might be the gift which artificial intelligence ultimately bestows on us. Unlike some others, he’s not worried that dialogue with computers will make us start to think of ourselves as machines – the difference, he thinks, is too obvious. On the contrary, the experience of dealing with robots will bring home to us for the first time how much of our own behaviour is needlessly stereotyped and robotic and inspire us to become more original – more human – than we ever were before.

In some ways this makes sense. On a similar point, it has sometimes occurred to me in the past to wonder whether our time is best spent by so many of us watching the same films and reading the same books. Too often I have had conversations sharing identical memories of the same television programme and quoting the same passages from reviews in the same newspapers. Mightn’t it be more productive, mightn’t we cover more ground, if we all had different experiences of different things?

Maybe, but it would be hard work. If Christian had his way, we should no longer be saying things like this.

– Hi, how are you?

– Fine, thanks, you too?

– Yeah, not so bad. I see the rain’s stopped.

– Mm, hope it stays fine for the weekend.

– Oh, yeah, the weekend.

– Well, it’s Friday tomorrow – at last!

– That’s right. One more day to go.

Instead I suppose our conversations would be earnest, informative, intense, and personal.

– Tell me something important.

– Sometimes I’m secretly glad my father died young rather than living to see his hopes crushed.

– Mithridates, he died old.

– Ugh: Housman’s is the only poetry which is necessarily improved by parody.

– I’ve never respected your taste or your intellect, but I’ve still always felt protective towards you.

– There’s a useful sociological framework theory of amorance I can summarise if it would help?

– What are you really thinking?

Perhaps the second kind of exchange is more interesting than the first, but all day every day it would be tough to sustain and wearing to endure. It seems to me there’s something peculiarly Western about the idea that even our small talk should be made to yield a profit. I believe historically most civilisations have been inclined to believe that the world was gradually deteriorating from a previous Golden Age, and that keeping things the way they had been in the past was the most anyone could generally aspire to. Since the Renaissance, perhaps, we have become more accustomed to the idea of improvement and tend to look restlessly for progress: a culture constantly gearing up and apparently preparing itself for some colossal future undertaking the nature of which remains obscure. This driven quality clearly yields its benefits in prosperity for us, but when it gets down to the personal level it has its dangers: at worst it may promote slave-like levels of work, degrade friendship into networking and reinterpret leisure as mere recuperation. I’m not sure I want to see self-help books about leveraging those moments of idle chat. (In fairness, that’s not what Christian has in mind either.)

Christian may be right, in any case, that human interaction with machines will tend to emphasise the differences more than the similarities. I won’t reveal whether he ultimately succeeded in his quest to be Most Human Human (or perhaps was pipped at the post when a rival and his judge stumbled on a common and all-too-human sporting interest?), but I can tell you that this was not on any view humanity’s last stand:  the bots were routed.

6 thoughts on “Most Human”

  1. It may be a blunder to consider the routine part of a conversation completely redundant. It exists in all cultures and has evolved for some reason. In a face-to-face conversation it helps set the stage, judge the mood of the other person, and decide whether it is a good time to discuss a particular topic. I don’t think you go through the same rituals in a telephone conversation, where you cannot see the other person, but there too you may be reading a lot in the voice and the way of speech.

    Also, when we watch the same films and read the same news, we are creating a mass memory (or social memory; I don’t know what the technical term is). When you discuss the same news over coffee, you are reinforcing each other’s memory and figuring out what is important (if five other people have noted the same thing and are talking about it, it may be of some importance).

    Social interaction is a highly complex process that has evolved over millennia. While on the surface it may seem irrational, there may be deeper advantages that can be easily missed in the process of simplification.

  2. Peter,

    I’m not altogether sure how it comes about that humans can break pattern so easily while remaining confident that another human will readily catch their drift. Sometimes it’s a matter of human thought operating on more than one level, so that where two topics intersect we can leap from one to the other (a feature it might be worth trying to build into bots to some extent). In the case of the Loebner, though, hawkish judges are likely to make a point of leaving no thread of relevance whatever between each input, confident that any human being will be able to pick up a new narrative instantly from a standing start. I think it has something to do with the un-named human faculty that allows us to deal with pragmatics in language, evade the frame problem, and effortlessly catch and attribute meanings (at least, I think all those things rely at least partly on a common underlying faculty, or perhaps on an unidentified common property of mental processes).

    It is true that we have these faculties, but I am not sure how efficiently we use them. Actually, there is a lot of misunderstanding in human communication because of this. It is said that two people (a couple?) really know each other when they can apply this faculty with a very low failure rate, and that needs a lot of training/coexistence (like bots?).

    It would be interesting to check (statistically) how much real, accurate information flows between two people talking in this “not unambiguous” human manner, and how much misinterpretation poisons the chat. Maybe that is one of the human characteristics, that misinterpretation is possible; can that happen between bots of the same kind/generation (same SW/HW)?

    Regarding the chat and leisure side, there is a Zen saying I like a lot:

    before talking, check if your words improve the silence

    I wish I had applied it (and would apply it) to myself more often.

  3. It would be interesting to check (statistically) how much real, accurate information flows between two people talking in this “not unambiguous” human manner

    It would indeed – but I think the task of quantifying the information would be challenging.
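
    Even a toy attempt shows why. A bag-of-words proxy like the sketch below (my own invention, in Python) can count shared tokens and their entropy, but it is blind to exactly the thing we care about – whether anything was actually understood:

        import math
        from collections import Counter

        def entropy(tokens):
            # Shannon entropy of a token distribution, in bits.
            counts = Counter(tokens)
            total = sum(counts.values())
            return -sum(c / total * math.log2(c / total) for c in counts.values())

        a = "hope it stays fine for the weekend".split()
        b = "oh yeah the weekend".split()
        shared = [t for t in b if t in set(a)]
        print(entropy(b), entropy(shared))  # -> 2.0 1.0 (bits)

    On that crude measure the small-talk reply above carries one bit of shared information, which is plainly not a measure of mutual understanding.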

  4. Peter,
    They can therefore be thrown off either by allusions to matters of fact that a human participant would certainly know (the details of the hotel breakfast; a topical story from the same day’s news) but would not be in the bots’ databases

    Suddenly it occurred to me: if you are a “conscious robot” (if such a thing exists), why would you want to prove that you are human? If you are a conscious man, why would you try to convince other people that you are a woman, by lying or making up stories? You are a robot and you are conscious; that in itself would be more meaningful than being able to trick a human into believing that you are one of his kind, wouldn’t it?

    Instead of lying, I think the conscious robot should just say, “Sorry sir, though I am conscious, I am not a human. While you were going through the morning routine of pushing organic materials down your digestive system, I was busy rebooting my left arm reflex circuit. Now it just feels so much better and my joint is so much more responsive. So, how was your breakfast?”

  5. Hi Kar, in a way what you are saying makes sense to me, but might you be confusing consciousness with intellectual skills? I believe most humans are conscious, yet very few would be capable of making the argument you just did. You could have a conscious bot that just follows its programming and doesn’t care about convincing humans of anything. You are presenting a bot that is both conscious and endowed with human psychological traits. Even among us, how many times do we act subconsciously?

  6. Vicente, you are probably right. The robot I portrayed was still too human.

    The more I think about the Turing test, the more I think it is quite silly. So, what is the Turing test supposed to be testing for again?
