What do you mean?

Picture: pyramid of wisdom.

Robots.net reports an interesting plea (pdf download) for clarity by Emanuel Diamant at the 3rd Israeli Conference on Robotics. Robotics, he says, has been derailed for the last fifty years by the lack of a clear definition of basic concepts: there are more than 130 definitions of data, and more than 75 definitions of intelligence.

I wouldn’t have thought serious robotics had been going for much more than fifty years (though of course there are automata and other precursors which go much further back), so that amounts to saying the field has been derailed for virtually its whole history: a serious charge. But he’s clearly right that there is a bad problem, not just for robotics but for consciousness and cognitive science, and not just for data, information, knowledge, intelligence, understanding and so on, but for many other key concepts, notably including ‘consciousness’.

It could be that this has something to do with the clash of cultures in this highly interdisciplinary area.  Scientists are relatively well-disciplined about terminology, deferring to established norms, reaching consensus and even establishing taxonomical authorities. I don’t think this is because they are inherently self-effacing or obedient; I would guess instead that this culture arises from two factors: first, the presence of irrefutable empirical evidence establishes good habits of recognising unwelcome truth gracefully; second, a lot of modern scientific research tends to be a collaborative enterprise where a degree of consensus is essential to progress.

How very different things are in the lawless frontier territory of philosophy, where no conventions are universally accepted, and discrediting an opponent’s terminology is often easier, and no less prestigious, than tackling the arguments. Numerous popular tactics seem designed to throw the terminology into confusion. A philosopher may, for instance, grab some existing words – ethics/morality, consciousness/awareness, information/data, or whatever – and use them to embody a particular distinction while blithely ignoring the fact that in another part of the forest another philosopher is using the same words for a completely different distinction. When irreconcilable differences come to light, a popular move is ‘giving’ the disputed word away: “Alright then, you can have ‘free will’ and make it whatever you like: I’m going to talk about ‘x-free will’ instead in future. I’ll define ‘x-free will’ to my own satisfaction, and when I’ve expounded my theory on that basis I’ll put in a little paragraph pointing out that ‘x-free will’ is the only kind worth worrying about, or the only kind everyone in the real world is actually talking about.” These and other tactics lead to a position where in some areas it’s necessary to learn a new set of terms for every paper: to have others pick up your definitions and use them in their own papers, as happens with Ned Block’s p- and a-consciousness, is a rare and high honour.

It’s not that philosophers are quarrelsome and egotistical (though of course they are); it’s more that the subject matter rarely provides any scope for pinning down an irrefutable position, and is best tackled by single brains operating alone (the Churchlands notwithstanding).

Diamant is particularly exercised by problems over ‘data’, ‘information’, ‘knowledge’, and ‘intelligence’. Why can’t we sort these out? He correctly identifies a key problem: some of these terms properly involve semantics, and the others don’t (needless to say, it isn’t clearly agreed which words fall into which camp). What he perhaps doesn’t realise clearly enough is that the essential nature of semantics is an extremely difficult problem which has so far proved unamenable to science. We can recognise semantics quite readily, and we know well enough the sort of thing semantics does; but exactly how it does those things remains a cloudy matter, stuck in the philosophical badlands.

If my analysis is right, the only real hope of clarification would be if we could come up with some empirical research (perhaps neurological, perhaps not) which would allow us to define semantics (or x-semantics, at any rate) in concrete terms that could somehow be demonstrated in a lab. That isn’t going to happen any time soon, or possibly ever.

Diamant wants to press on, however, and by doing so in the absence of science he inevitably falls into philosophy: he implicitly offers us a theory of his own and – guess what? – another new way of using the terminology. The theory he puts forward is that semantics is a matter of convention between entities. Conventions are certainly important: the meaning of particular words or symbols is generally a matter of convention; but that doesn’t seem to capture the essence of the thing. If semantics were simply a matter of convention, then before God created Adam he could have had no semantics, and could not have gone around asking for light; on the other hand, if we wanted a robot to deal with semantics, all we’d need to do would be to agree a convention with it, or perhaps let it in on the prevailing conventions. But I don’t know how you’d do that with a robot which had no semantics to begin with, as it wouldn’t be able to understand what you were talking about.

There are, of course, many established philosophical attempts to clarify the intentional basis of semantics. In my personal view the best starting point is H.P. Grice’s theory of natural meaning (those black clouds mean rain), though I think it’s advantageous to use a slightly different terminology…