Picture: elephant. “I’m leaving.”
“Who is she?”

This is the kind of human exchange which computers, it is said, will never be able to fathom. The computer might be able to decode the literal meaning of the sentences, but never pick up the “relationship crisis” scenario which human beings recognise more or less instantly. This is an example of the exciting subject of conversational implicature, an ugly word invented by H. P. Grice to describe the inferences we make which are vital to understanding each other. Grice proposed that in order to work out what other people are getting at, we take it for granted that when they talk to us they are normally going to follow certain rules, or maxims. There are nine altogether, grouped under four headings:


Quantity
i) Make your contribution as informative as is required for the current purposes of the exchange.
ii) Do not make your contribution more informative than is required.

Quality
i) Do not say what you believe to be false.
ii) Do not say that for which you lack evidence.

Relation
Be relevant.

Manner
i) Avoid obscurity of expression.
ii) Avoid ambiguity.
iii) Be brief.
iv) Be orderly.

These maxims help us express ourselves briefly without fear of being misunderstood. If someone tells you your horse came in either first or fourth, for example, you are entitled to assume, without his saying so, that he does not know which it was (unless other evidence suggests he is deliberately being annoying). Or consider this exchange:

A: Smith doesn’t seem to have a girlfriend these days.
B: He has been paying a lot of visits to New York recently.

If B is following the rules, we have to assume that what he says is relevant in some way, so we can legitimately deduce that he means Smith may have a girlfriend in New York. Besides straightforward use like this, the rules can be deliberately broken as a more subtle way of sending messages. If you provide a job reference which merely says that X worked for you and always turned up on time, the recipient may reasonably assume that you are deliberately being uninformative as a way of implying a negative judgement about X which you prefer not to spell out explicitly.

Grice’s work was directed towards philosophical ends, but it has proved unexpectedly fertile in other areas. In particular, it offers a promising-looking angle on the old debate about whether you can get syntax from semantics: maybe you don’t need to, but can look instead to the new field of pragmatics, in which the linguistic consequences of Grice’s original insights are explored. Particularly interesting is Dan Sperber and Deirdre Wilson’s book tackling the mystery of Relevance, a subject which (according to me, at least) is one of the three-and-a-half big problems of consciousness.

Sperber and Wilson describe two models of communication. In the code model, a thought in one brain (perhaps in mentalese, a brain’s private mental language) is encoded into language, transmitted to another brain, and decoded again. In the inferential model, by contrast, the speaker just provides the linguistic evidence from which the auditor, in a more or less Gricean way, can infer the intended message. The code model cannot provide a complete explanation of the process of communication because it can’t deal with the sort of example quoted above: on the other hand, when you examine real conversations, there does seem to be a whole lot of coding going on. Sperber and Wilson maintain that both code and inference have roles to play.

It’s rather as though human communication had started out as a game of charades (the game where one player has to mime a book, play or film title for the others to guess). At first, the players have to think of clues that straightforwardly make the audience think along the right lines – behaving like a whale for Moby Dick, say. But experienced players gradually develop a set of conventions, such as pointing to their nose (=”knows”) with one finger and at someone who has guessed correctly with another. The grammatical and lexical apparatus of ordinary language represents the code, the ultimate development of this kind of helpful convention, and in ordinary communication it now does most of the work, though successful communication still requires occasional resort to inferential methods.

Sperber and Wilson think Grice’s approach can be slimmed down, and most of the work done by the simple criterion of relevance. This means that the inferential component of communication rests very largely on the ability to work out what is and isn’t relevant in what people are saying to you. They offer an analysis of relevance in terms of contextual effect. Roughly speaking, the idea is that a new piece of information is relevant to the extent that it allows you to revise the beliefs you had already, strengthening, weakening or deleting them or allowing new deductions to be added. If it merely duplicates things you already know, or if it has no connection with any of the pieces of information you have already, it isn’t relevant at all. This, in a more formal guise, is how Sperber and Wilson propose the relevance of any given proposition could in principle be rated.
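Sperber and Wilson’s proposal is informal, but the bookkeeping it implies can be sketched in a few lines of Python. Everything here – the string propositions, the rule table, the one-point-per-effect scoring – is my own illustrative assumption, not their formalism:

```python
# A toy sketch of rating relevance by contextual effect. A proposition
# earns a point for each new deduction it licenses and for each
# existing belief it would delete by contradiction; mere duplication
# of something already believed scores zero.

def contextual_effect(new_prop, beliefs, rules):
    """Rate the contextual effect of new_prop against a set of beliefs.

    beliefs: set of propositions (plain strings) already held.
    rules:   dict mapping a frozenset of premises to a conclusion,
             standing in for the hearer's deductive machinery.
    """
    if new_prop in beliefs:
        return 0  # duplicates what we already know: no effect at all
    effects = 0
    available = beliefs | {new_prop}
    # new deductions the proposition makes possible
    for premises, conclusion in rules.items():
        if new_prop in premises and premises <= available and conclusion not in beliefs:
            effects += 1
    # beliefs the proposition would delete by contradicting them
    for belief in beliefs:
        if belief == "not " + new_prop or new_prop == "not " + belief:
            effects += 1
    return effects

# The Smith example: B's remark is relevant because it licenses a new deduction.
beliefs = {"Smith visits the town where his girlfriend lives"}
rules = {
    frozenset({"Smith visits the town where his girlfriend lives",
               "Smith has been paying a lot of visits to New York"}):
        "Smith may have a girlfriend in New York",
}
print(contextual_effect("Smith has been paying a lot of visits to New York",
                        beliefs, rules))  # 1: one new deduction, so relevant
print(contextual_effect("Smith visits the town where his girlfriend lives",
                        beliefs, rules))  # 0: mere repetition, so irrelevant
```

The point of the toy is only to show the shape of the calculation: relevance is measured by how much a remark changes the stock of beliefs, not by anything intrinsic to the remark itself.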

So, on their theory, if someone says “Coffee would keep me awake”, I am entitled to assume that the utterance has some relevance to something. As a first shot, I try out the simplest hypothesis I can think of – maybe they just think I need to know about the stimulating properties of coffee? But on that interpretation, the relevance of the remark seems negligible – we’re not in a pharmacological seminar, and at best the remark might convey something I didn’t know about coffee – it doesn’t change any of my other current beliefs or allow me to draw many new conclusions. So I move on to more complex hypotheses, and eventually (rather quickly, I hope) I reach the correct one: depending on the circumstances, coffee is either being asked for or refused.
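The hearer’s trial-and-error procedure can likewise be sketched: enumerate candidate interpretations in order of increasing complexity and accept the first one whose contextual effect clears a threshold. The candidate readings, complexity ranks and effect scores below are all invented for illustration:

```python
# A sketch of the hearer's procedure: try interpretations cheapest
# first, and accept the first one that is relevant enough.

def interpret(hypotheses, effect_of, threshold=1):
    """hypotheses: list of (interpretation, complexity) pairs.
    effect_of: function rating an interpretation's contextual effect.
    Returns the simplest sufficiently relevant interpretation, or None."""
    for interpretation, _ in sorted(hypotheses, key=lambda h: h[1]):
        if effect_of(interpretation) >= threshold:
            return interpretation
    return None  # no relevant reading found

# "Coffee would keep me awake", said late in the evening:
hypotheses = [
    ("a general fact about coffee's pharmacology", 1),  # simplest reading
    ("the speaker is refusing the offered coffee", 2),  # needs more context
]
effects = {
    "a general fact about coffee's pharmacology": 0,  # changes none of my beliefs
    "the speaker is refusing the offered coffee": 3,  # revises my plans for the tray
}
print(interpret(hypotheses, effects.get))
# → the speaker is refusing the offered coffee
```

Note that the pharmacology reading is tried first and rejected not because it is false but because it has no contextual effect – exactly the order of events described above.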

This looks like a promising avenue to explore, but there are some ifs and buts. First, the analysis is all to do with conversations, and whether it can be widened into any general analysis of relevance is unclear. Second, it’s not at all clear how this approach could be reduced to the kind of concrete algorithm you might use in programming a chat-bot or other program. In particular, there seems to be a danger of circularity. In many cases, when we come to tot up the contextual effects of a given remark, there are actually going to be an infinite number of valid new inferences we can make – nearly all of them will be trivial or uninteresting, but how do we weed them out? I have the uneasy feeling that at some stage we are going to find ourselves needing to disallow inferences which are (gulp) irrelevant…

Even if that difficulty is a real one, implicature remains an interesting idea. Sperber and Wilson are good at coming up with little snatches of dialogue which exemplify different varieties of implicature, and it is noticeable that many of them have a witty or rhetorical quality. Towards the end of the book, they discuss the way implicatures contribute to poetry, metaphor and other special forms of language. It seems to me that virtually all jokes are based on implicature, too. Consider this example:

G: Last night I shot an elephant in my pyjamas. How he got into my pyjamas I’ll never know.

Surely a fine example of the manipulation of Gricean conversational implicatures…?
