Picture: Semantic Map

Picture: Bitbucket. So, another bastion of mysterianism falls. Cognition Technologies Inc has a semantic map of virtually the entire English language, which enables applications to understand English words. Yes, understand. It’s always been one of your bedrock claims that computers can’t really understand anything; but you’re going to have to give that one up.

Picture: Blandula. Oh, yes, I read about that. In fact it’s been mentioned in several places. I thought the Engadget headline got to the root of the issue pretty well – ‘semantic map opens way for robot uprising’. I suppose that’s pretty much your view. It seems to me just another case of the baseless hyperbole that afflicts the whole AI field. There are those people at Cyc and elsewhere who think cognition is just a matter of having a big enough encyclopaedia: now we’ve got this lot who think it’s just a colossal dictionary. But having a long list of synonyms and categories doesn’t constitute understanding; or my bookshelf would have achieved consciousness long ago.

Picture: Bitbucket. Let me ask you a question. Suppose you were a teacher. How would you judge whether your pupils understood a particular word? Wouldn’t you see whether they could give synonyms, either words or phrases that meant the same thing? If you yourself didn’t understand a word, what would you do? You’d go over to that spooky bookcase and look at a list of synonyms. Once you’d matched up your new word with one or two synonyms, you’d understand it, wouldn’t you?

I know you’ve got all these reservations about whether computers really this or really that. You don’t accept that they really learn anything, because true human learning implies understanding, and they haven’t got that. They don’t really communicate, they just transfer digital blips. According to you all these usages are just misleading metaphors. According to you they don’t even play chess, not in the sense that humans do. Now frankly, it seems to me that when the machine is sitting there and moving the pieces in a way that puts you into checkmate, any distinction between what it’s doing and playing chess is patently nugatory. You might as well say you don’t really ride a bike because a bike hasn’t got legs, and that bike-riding is just a metaphor. However, I recognise that I’m not going to drive you out of your cave on this. But you’re going to have to accept that machines which use this kind of semantic mapping can understand words at least to the same extent that computers can play chess. Concede gracefully; that’ll be enough for today.

Picture: Blandula. I’ll grant you that the mapping is an interesting piece of work. But what does it really add? These people are using the map for a search engine, but is it really any better than old-fashioned approaches? So we search for, say, ‘tears’; the search engine turns up thousands of pages on weeping, when we wanted to know about tears in a garment. Now Cognition’s engine will be able to spot from the context what I’m really after. Because I’ve searched for ‘What to do when something tears your trousers’ it will notice the word trousers and give me results about rips. But so will Google! If I give Google the word trousers as well as tears, it will find relevant stuff without needing any explicit semantics at all. I don’t see why a ‘semantic map’ is necessary for a search engine, and I’m sceptical about Cognition’s ontology (in that irritating non-philosophical sense).

Picture: Bitbucket. Wrong! When you do a search on Google for ‘What to do when something tears your trousers’ the top result is actually about tears from your eye, believe it or not. Typically you get all sorts of irrelevant stuff along with a few good results. But to see the point, look at an example Cognition give themselves. Their legal search demo (try it for yourself), when asked for results relating to ‘fatal fumes in the workplace’ came up with a relevant result which contains neither ‘fatal’ nor ‘workplace’, only ‘died’ and ‘working’. This kind of thing is going to be what Web 3 is built around.
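The kind of context-sensitive lookup Bitbucket is describing can be sketched in a few lines. To be clear, this is not Cognition’s actual technology, just a toy simplified-Lesk-style disambiguator over a hand-made synonym map; the sense names and context words below are invented for illustration.

```python
# Toy word-sense disambiguation by context overlap (simplified Lesk).
# The tiny "semantic map" below is invented for illustration; a real
# system maps hundreds of thousands of senses, not two.
SENSES = {
    "tears": {
        "weeping": {"eye", "cry", "sad", "weep", "sorrow"},
        "rip":     {"trousers", "fabric", "cloth", "hole", "garment"},
    },
}

def disambiguate(word: str, query: str) -> str:
    """Pick the sense whose associated context words overlap the query most."""
    context = set(query.lower().split())
    senses = SENSES[word]
    return max(senses, key=lambda s: len(senses[s] & context))

print(disambiguate("tears", "What to do when something tears your trousers"))
# With the toy map above this picks the "rip" sense, because the word
# "trousers" appears in the query's context.
```

The point of the sketch is only that disambiguation needs some mapping from senses to expected context, whether that mapping is a hand-built semantic map or statistics over co-occurring words, which is roughly what the Google-vs-Cognition argument turns on.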

Picture: Blandula. If this thing is so good, why haven’t they used it for a chatbot? If this technology involves complete understanding of English, they ought to breeze through the Turing test. I’ll tell you why: because that would involve actual understanding, not just a dictionary. Their machine might be able to look up the meaning of ‘pitbull’ and find the synonym ‘dog’, but it wouldn’t have a hope with the pitbull in lipstick.

Picture: Bitbucket. Nobody says the Cognition technology has achieved consciousness.

Picture: Blandula. I think you are saying that, by implication. I don’t see how there can be real understanding without consciousness.

Picture: Bitbucket. Or if there is, it won’t be real consciousness according to you – just a metaphor…

4 Comments

  1. Derek James says:

    Meh, the angel trumps the abacus. The Google argument is sound. When I Google the tears-trousers string, it does indeed produce a Yahoo Answers page about “where tears come from”, but this page is retrieved because one responder is being cute and pointing out the polysemy of the word:

    “Somtimes you can get a double whammy! You catch your trousers on barbed wire and at the same time the barbs dig in to your skin. This gives a tear in your trousers and causes a flood of tears in youe eyes!!!!!!!!!!!!!!!!!!!!!!”

    So Google is fetching a page based on the correct semantic content. It’s not “confused”.

    This doesn’t mean Google, or the Cognition Technologies semantic map, “understands” squat. Unless you want to define understanding as correlations between words, which is a very weak, restricted definition of the term. Words have referents grounded in an underlying bedrock of sensory representations based on the real-world properties of the referent. So no, I wouldn’t be satisfied that a child understands what a thing is just by naming a synonym. That would indicate something, but not a whole lot. If I wanted to make sure a child understood what a dog was, I’d ask if they knew what dogs looked like, what sound they made, what kinds of things they do, and so on. If such a semantic map were hooked up to a chat bot, it would fail a Turing Test pretty miserably, because its understanding derives from symbols whose referents are merely other symbols at the same level of abstraction, rather than representations formed from experience, gathered by the senses.

  2. Lloyd Rice says:

    In this case, I don’t believe either Angel or Abacus got it right. They are both intended as extreme views, are they not? Blandula is correct that definitions, even massive combinations of definitions, do not accomplish understanding. But Bitbucket is also correct in believing that there is no reason the computer could not do better. It is my view that to understand something, you need perceptual references. The words need to be connected to percepts, or combinations of percepts, which let the entity (living or otherwise) relate the word to something perceived. From such relations arises understanding. The glossary is not enough. We then proceed to build massive metaphorical structures upon the percepts, by way of which we “understand” the universe. But there’s no reason a computer, given the necessary perceptual mechanisms (and massive memory), could not do the same.

  3. Rodger Cunningham says:

    Tears, idle tears, I know not what they mean.

  4. Anonymous says:

    One might want to search for “What to do when something tears your trousers” again on Google. The result is this very discussion!!!!!
