Picture: pyramid of wisdom. Robots.net reports an interesting plea (pdf download) for clarity by Emanuel Diamant at the 3rd Israeli Conference on Robotics. Robotics, he says, has been derailed for the last fifty years by the lack of a clear definition of basic concepts: there are more than 130 definitions of data, and more than 75 definitions of intelligence.

I wouldn’t have thought serious robotics had been going for much more than fifty years (though of course there are automata and other precursors which go much further back), so that sounds pretty serious: but he’s clearly right that there is a bad problem, not just for robotics but for consciousness and cognitive science, and not just for data, information, knowledge, intelligence, understanding and so on, but for many other key concepts, notably including ‘consciousness’.

It could be that this has something to do with the clash of cultures in this highly interdisciplinary area.  Scientists are relatively well-disciplined about terminology, deferring to established norms, reaching consensus and even establishing taxonomical authorities. I don’t think this is because they are inherently self-effacing or obedient; I would guess instead that this culture arises from two factors: first, the presence of irrefutable empirical evidence establishes good habits of recognising unwelcome truth gracefully; second, a lot of modern scientific research tends to be a collaborative enterprise where a degree of consensus is essential to progress.

How very different things are in the lawless frontier territory of philosophy, where no conventions are universally accepted, and discrediting an opponent’s terminology is often easier and no less prestigious than tackling the arguments. Numerous popular tactics seem designed to throw the terminology into confusion. A philosopher may often, for instance, grab some existing words – ethics/morality, consciousness/awareness, information/data, or whatever – and use them to embody a particular distinction while blithely ignoring the fact that in another part of the forest another philosopher is using the same words for a completely different distinction. When irreconcilable differences come to light, a popular move is ‘giving’ the disputed word away: “Alright, then, you can just have ‘free will’ and make it what you like: I’m going to talk about ‘x-free will’ instead in future. I’ll define ‘x-free will’ to my own satisfaction and when I’ve expounded my theory on that basis I’ll put in a little paragraph pointing out that ‘x-free will’ is the only kind worth worrying about, or the only kind everyone in the real world is actually talking about.” These and other tactics lead to a position where in some areas it’s generally necessary to learn a new set of terms for every paper: to have others picking up your definitions and using them in their papers, as happens with Ned Block’s p- and a-consciousness, for example, is a rare and high honour.

It’s not that philosophers are quarrelsome and egotistical (though of course they are);  it’s more that the subject matter rarely provides any scope for pinning down an irrefutable position, and is best tackled by single brains operating alone (Churchlands notwithstanding).

Diamant is particularly exercised by problems over ‘data’, ‘information’, ‘knowledge’, and ‘intelligence’. Why can’t we sort these out? He correctly identifies a key problem: some of these terms properly involve semantics, and the others don’t (needless to say, it isn’t clearly agreed which words fall into which camp). What he perhaps doesn’t realise clearly enough is that the essential nature of semantics is an extremely difficult problem which has so far proved unamenable to science. We can recognise semantics quite readily, and we know well enough the sort of thing semantics does; but exactly how it does those things remains a cloudy matter, stuck in the philosophical badlands.

If my analysis is right, the only real hope of clarification would be if we could come up with some empirical research (perhaps neurological, perhaps not) which would allow us to define semantics (or x-semantics at any rate), in concrete terms that could somehow be demonstrated in a lab. That isn’t going to happen any time soon, or possibly ever.

Diamant wants to press on however, and inevitably by doing so in the absence of science he falls into philosophy: he offers us implicitly a theory of his own and – guess what? Another new way of using the terminology. The theory he puts forward is that semantics is a matter of convention between entities. Conventions are certainly important: the meaning of particular words or symbols is generally a matter of convention; but that doesn’t seem to capture the essence of the thing. If semantics were simply a matter of convention, then before God created Adam he could have had no semantics, and could not have gone around asking for light; on the other hand, if we wanted a robot to deal with semantics, all we’d need to do would be to agree a convention with it or perhaps let it in on the prevailing conventions. I don’t know how you’d do that with a robot which had no semantics to begin with, as it wouldn’t be able to understand what you were talking about.

There are, of course, many established philosophical attempts to clarify the intentional basis of semantics. In my personal view the best starting point is H.P. Grice’s theory of natural meaning (those black clouds mean rain); although I think it’s advantageous to use a slightly different terminology…


  1. Kevin Kim says:

    This is somewhat tangential to the topic of your post, but have you read Nicholas Rescher’s The Strife of Systems? Rescher’s book, which I’m reading right now, deals with the problem of irreducible philosophical diversity.

  2. Peter says:

    No, haven’t seen that one, Kevin – thanks.

  3. Vicente says:

    Thanks Kevin, the quip on the first page alone makes the book worth it:

    if two people agree, one of them isn’t a philosopher 🙂

  4. Arnold Trehub says:

    The problem of meaning is highlighted when one tries to formulate an explicit system of neuronal brain mechanisms that are competent for the task of interpersonal communication. This is what I wrote in THE COGNITIVE BRAIN, pp. 300-301:

    The Pragmatics of Cognition

    The cognitive brain is a pragmatic and opportunistic organ, selectively favored in its evolutionary development to the extent that it has been able to contribute to the solution of ecologically relevant problems. Excluding the physical limitations of our sense organs, there are few a priori constraints on what will constitute the contents of our cognitive apparatus. Operating characteristics of the brain model I propose dispute the commonly held notion that we “carve the world at its joints” before objects in the visual environment are learned and stored in memory.
    Instead, we first learn whatever novel parts of the extended world happen to be captured within the visual-afferent aperture by the centroid-based parsing mechanism at a time when arousal is sufficiently high. Since arousal is typically increased by energetic needs and motives (homeostatic imbalances in the central hedonic system), which govern our actions, the parts of the world selected for learning will naturally be those that are experienced together with high motivation and goal-directed action. In short, among innumerable possible partitions, we tend to learn roughly those pieces of the visual world that have ecological utility.

    I do not imply that we do not carve the world at its joints but rather that this is a later perceptual-cognitive process. Primitive learned parsings can be imaged and projected to the retinoid system for assembly and analysis. Complex retinoid patterns can be decomposed into components that are projected back to the synaptic matrix, where they can be learned and mapped to class cell tokens. If such abstracted objects were to be imaged, we would take them to be the “real” parts of the world. But the boundaries of decomposition would be drawn, I believe, to satisfy some standard of utility. In this sense, we (the putative neuronal mechanisms of our cognitive brains) do not discover the objects of common discourse; we create them for our individual and social purposes. This conclusion is consistent with Putnam’s (1988) suggestion in support of the philosophical stance of internal realism, that “truth does not transcend use”.

    Given these biological constraints, it is clear that meaning and definition, expressed and understood through the medium of a common language within a community of individuals, can be no better than approximate and occasionally divergent. This is true because among different individuals, neuronal tokens that are linked to identical lexical items on the output side are not likely to be linked to identical object representations (images) in long-term memory or to be imbedded in semantic networks with identical associative (synaptic) structures. Furthermore, significant internal images may represent things that do not and never have existed in the real world. In such cases, referential content can be communicated only by extended description or by an analogical externalization like a diagram or some other physical artifact. If purposeful communication is a goal, there is no biological apparatus for insuring a commonality of understanding. Our plans for communicative expression are shaped by the pragmatics of social convention and our perceptions of the practical consequences of individual efforts.

  5. Charles Wolverton says:

    Arnold –

    Very interesting. A few questions:

    1. Although only recently introduced to his ideas, I think I detect shades of Gibson. Correct, or am I seeing mirages?

    2. “commonly held notion that we “carve the world at its joints” before objects in the visual environment are learned and stored in memory.”

    While I am more or less in sympathy with this general idea, it isn’t clear how one would “learn” an object “before” doing some serious carving. Isn’t it the carving that creates the object?

    I distinguish between merely detecting properties of an entity – presence, movement, light reflecting properties, et al – and identifying the entity as being a member of a certain category. Acquiring the ability to do the latter is what I infer you mean by “learning”. Despite the presence of “learn”, your next paragraph seems consistent with only detecting. Are we attaching different meanings to “learn” ala the issue addressed in Peter’s post? Or am I missing something?

    3. “carv[ing] the world at its joints … is a later perceptual-cognitive process”

    Agreed, and that’s what I call “learning”.

    4. “Complex retinoid patterns can be decomposed into components that are projected back to the synaptic matrix, where they can be learned and mapped to class cell tokens.”

    This resonates with me, at least as I interpret it. Is this discussed in detail in your book? I have started to read it, but I’m eager to “skip ahead”. So, what is the nature of the “class cell tokens”? I’m hoping the answer is that they are some sort of neural structure having to do with language.

    5. “If such abstracted objects were to be imaged, we would take them to be the “real” parts of the world.”

    By “imaging”, are you referring to visual phenomenal experience, a.k.a. visual “qualia”? And if so, do you have an opinion on why we do that? (Note that I’m asking “why”, not “how”.)

    6. “we … do not discover the objects of common discourse; we create them for our individual and social purposes.”

    Isn’t this inherent in the fact that use preceded naming?

    7. “neuronal tokens that are linked to identical lexical items on the output side are not likely to be linked to identical object representations”

    The answer to question 4 above may be embedded in this statement, but I can’t quite parse it. “Output side” of what item in what architectural model? And in that architecture, where are “neuronal tokens”, “lexical items”, and “representations”?

    Given your last paragraph, you might find my comment on this post interesting (even if I’m wrong):


  6. Arnold Trehub says:

    Charles, your questions are pertinent. If you look at the chapter headings in THE COGNITIVE BRAIN, you will get a good idea of what you should read to understand my views on the issues that interest you. The book can be read online here:


  7. Kar Lee says:

    I have been meaning to type something in for this interesting post of yours, but it was only now that I found the resource on the bottom shelf: the book “Surely You’re Joking, Mr. Feynman!” by, guess who? Mr. Feynman!

    For those who are outside of physics, Richard Feynman may not be a household name. But inside physics, he was considered one of the greatest minds of modern times. This book is one of the few that, once I put my hands on it, I could not put down. Here is what he wrote about an encounter he had with philosophers at Princeton (in the What Do You Mean domain):

    ——- quote ————
    In the Graduate College dining room at Princeton everybody used to sit with his own group. I sat with the physicists, but after a bit I thought: It would be nice to see what the rest of the world is doing, so I’ll sit for a week or two in each of the other groups.

    When I sat with the philosophers I listened to them discuss very seriously a book called PROCESS and REALITY by Whitehead. They were using words in a funny way, and I couldn’t quite understand what they were saying. Now I didn’t want to interrupt them in their own conversation and keep asking them to explain something, and on the few occasions that I did, they’d try to explain it to me, but I still didn’t get it. Finally they invited me to come to their seminar.

    They had a seminar that was like a class. It had been meeting once a week to discuss a new chapter out of Process and Reality – some guy would give a report on it and then there would be a discussion. I went to this seminar promising myself to keep my mouth shut, reminding myself that I didn’t know anything about the subject, and I was going there just to watch.

    What happened there was typical – so typical that it was unbelievable, but true. First of all, I sat there without saying anything, which is almost unbelievable, but also true. A student gave a report on the chapter to be studied that week. In it Whitehead kept using the words “essential object” in a particular technical way that presumably he had defined, but I didn’t understand.

    After some discussion as to what “essential object” meant, the professor leading the seminar said something meant to clarify things and drew something that looked like lightning bolts on the blackboard. “Mr. Feynman,” he said, “would you say an electron is an ‘essential object’?”

    Well, now I was in trouble. I admitted that I hadn’t read the book, so I had no idea of what Whitehead meant by the phrase; I had only come to watch. “But,” I said, “I’ll try to answer the professor’s question if you will first answer a question from me, so I can have a better idea of what ‘essential objects’ means. Is a brick an essential object?”

    What I had intended to do was to find out whether they thought theoretical constructs were essential objects. The electron is a theory that we use; it is so useful in understanding the way nature works that we can almost call it real. I wanted to make the idea of a theory clear by analogy. In the case of the brick, my next question was going to be, “What about the inside of the brick?” – and I would then point out that no one has ever seen the inside of a brick. Every time you break a brick, you only see the surface. That the brick has an inside is a simple theory which helps us understand things better. The theory of electrons is analogous. So I began by asking, “Is a brick an essential object?”

    Then the answers came out. One man stood up and said, “A brick as an individual, specific brick. That is what Whitehead means by an essential object.”

    Another man said, “No, it isn’t the individual brick that is an essential object; it’s the general character that all bricks have in common – their ‘brickiness’ – that is the essential object.”

    Another guy got up, and another, and I tell you I have never heard such ingenious different ways of looking at a brick before. And, just like it should in all stories about philosophers, it ended up in complete chaos.
    ——end quote———-

  8. Vicente says:

    He he, yes, great book indeed, Kar Lee. Well, R. Feynman belongs to a generation of physicists well known for despising philosophy and philosophers…

  9. Kar Lee says:

    That is not my own view about philosophers though, especially those who can talk in a normal way… 😉

  10. Vicente says:

    It doesn’t matter whether some like it or not, philosophical questions are almost impossible to avoid in science; when you think you have chased them off through the door, they get back in through the window…

    Mind you!! “normal” is just a statistical term.

  11. Peter says:

    I remember seminars like that. I remember one where we were discussing something (I don’t even remember what – call it X), and someone was brave enough to say they had no idea what X was meant to be, and that the examples of X that had been given seemed to them to have nothing in common. There was a bit of a pause, and then people just took it as being something akin to a confession of colour-blindness, rather than a blow to the concept of X.

    But I don’t think philosophers are really to blame. I think what Feynman was bidding for was to play Socrates, the person in a discussion who calls all the shots, asks all the leading questions and generally gets people to follow his chosen thread. Probably in physics, as in Plato, you can get that compliance, but in philosophy people naturally want to raise all the hundred different legitimate ways of looking at the question simultaneously, hence chaos.

  12. Kar Lee says:

    Sometimes people might want to look at a completely different question altogether. I think it is the nature of the field. The situation is definitely not unique to philosophy. In sociology, history, and other fields where concepts are sometimes hard to define, practitioners can get into chaotic discussions with very little communication getting through. But philosophy also has this element of rigor that makes it unique. Physicists are relatively lucky in this regard, as most concepts are quite well defined. One should be able to go back to where a concept was first used, and that is how it is supposed to be used. Perhaps physicists have this tendency to try to apply the same criterion in other fields.

    I remember a guy called Alan Sokal from NYU (again, a physicist!) wrote a paper titled “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity” and submitted it to the social and cultural studies journal Social Text as a hoax years ago – the famous Sokal hoax. He said he BS’d his way through the article, using profound-sounding terms to impress people, and his article was accepted (though not peer reviewed, as Social Text did not practice peer review back then). The result was a spectacular intellectual detonation as Sokal reviewed the nature of the hoax immediately in another journal, when the criticisms and praises came in. Here is an article I found written by Steven Weinberg about the incident: http://www.physics.nyu.edu/faculty/sokal/weinberg.html

    It is interesting to see that if concepts are not clearly defined, or if they are used in multiple senses and in ways different from normal usage, parties in a discussion can easily talk past each other. One can even confuse “experts” in the field into believing that what he is talking about is really SOMETHING, if he uses some profound-sounding terms of vague meaning, just like the Sokal case.

    The article by Steven Weinberg contains an interesting re-quote from a quote in Sokal’s paper. I would like to explicitly type it out here as it provides a counter-example of what I called “talking in a normal way”. Here is the quote:
    “The Einsteinian constant is not a constant, is not a center. It is the very concept of variability — it is, finally, the concept of the game. In other words, it is not the concept of something — of a center starting from which an observer could master the field — but the very concept of the game.”
    Get it? Weinberg said he did not. Neither did I. Apparently, “center” here means something very different from what center means in daily usage. This quote is purported to have been written by the deconstructionist Jacques Derrida somewhere. I may have committed an act of “quoting out of context”, but I doubt I could understand the quote any better even if I read the whole thing and put it into context.

  13. Kar Lee says:

    “as Sokal reviewed the nature of the hoax immediately in another journal”

    should read

    “as Sokal revealed the nature of the hoax immediately in another journal”

    my lack of the “l” sound in my subvocalization caused my fingers to slip… the second time already.

  14. Vicente says:

    Could this problem arise as a result of the obsolescence of human language in the present scenario?

    I have the impression that the HW and SW we are equipped with definitely need some upgrading. Maybe a language that evolved for many centuries to cope with communication problems, like warning of the presence of a predator or informing of the location of a food source, is not good any more for discussing philosophical questions.

    IMO, this is the case for the emotional machinery, which in current society is more a source of problems than anything else.

    Could it be that we have a language designed to describe objective scenarios, but one that struggles to handle abstract situations or concepts? Along this line, physics can rely on current language much better than philosophy can. Maybe this is why “meaning” becomes such a controversial concept.

    Let’s see: if we wait a few thousand years, maybe our cortex and language will adapt to our new intellectual ambitions.

  15. Kar Lee says:

    Vicente, but philosophers from the same camp seem to understand each other all right. The concepts seem to be getting across without much problem. It is just that people from different camps have difficulty communicating. Maybe the key is the difference in the “target of explanation”.

  16. Kar Lee says:

    Peter, regarding playing Socrates, I would think that is the right approach, in the sense that one is using the dialectic method. Often, helping to define the problem by asking “good” questions can already get one half way through the problem.

  17. Vicente says:

    Kar Lee, yes, could be. Notice that you said “seem to”. Your analysis defeats one of the assumptions of this discussion, i.e. that philosophers never come to true agreement… should we say philosophical camps instead of philosophers?

    Last night I watched a documentary about music and the brain. I realised that music in a way encompasses meaning… music is not symbolic, it is not representational, but it transmits “something” (not really info, not really just emotions…) to each of us, completely lacking consensus in this sense.

    People suffering from amusia are not able to get music as such; for them it is just noise, because certain brain areas, partially overlapping the auditory cortex, don’t get activated.

    Could it be that certain types of philosophical/abstract understanding require certain brain structures/networks, and that this leads to different philosophical camps?

    Well, I believe it was Socrates who said “to learn is slow and painful”. I agree, but I would say: to really learn, or to learn what really matters, is slow and painful.

  18. Kar Lee says:

    I too suspect that there are neurologically based differences among people holding different views.

  19. Rodger Cunningham says:

    “Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information?”–T. S. Eliot.
