Interesting stuff

A bit of a tribute to three people who have persisted in the face of scepticism.

How long has Doug Lenat been working on the CYC project? I described it unkindly as a kind of dinosaur in 2005, when it was already more than twenty years old. What is it about? AI systems often lack the complex background of understanding needed to deal with real-life situations. One strategy, back in the day, was to tackle the problem head-on and simply build a gigantic encyclopaedia of background facts. Once you had that, rules of inference would do the rest. That seems an impossibly optimistic and old-fashioned strategy now, but CYC has apparently been working away on its encyclopaedia ever since and – it’s said – is actually beginning to deliver.

Also in 2005 I described the remarkable findings of Maurits Van Den Noort apparently showing a reaction to stimuli before the stimuli actually occurred. (That post is one of the “lost” ones from when I moved over to WordPress. I’ve just brought it back, but alas the lively discussion in comments is gone.) I’ve heard no more about the research since, but Van Den Noort is still pressing the case for a new relationship between neurophysics and quantum physics.

That post mentions Huping Hu, who with his long-time colleague (and wife) Maoxin Wu, also announced some remarkable findings relating consciousness and quantum physics. He went on to found his own journal – one way to ensure your papers are properly published – which continues to publish to this day.

We may feel (I do) that these people are in differing ways on the wrong track, but their persistence surely commands respect.

Cognitive Planes

Picture: plane. I see via MLU that Robert Sloan at the University of Illinois at Chicago has been given half a million dollars for a three-year project on common sense. Alas, the press release gives few details, but Sloan describes common sense as “the Holy Grail of artificial intelligence research”.

I think he’s right. There is a fundamental problem here that shows itself in several different forms. One is understanding: computers don’t really understand anything, and since translation, for example, requires understanding, they’ve never been very good at it. They can swap a French word for an English word, but without some understanding of what the original sentence was conveying, this mechanical substitution doesn’t work very well. Another outcrop of the same issue is the frame problem: computer programs need explicit data about their surroundings, but updating this data proves to be an unmanageably large problem, because the implications of every new piece of data are potentially infinite. Every time something changes, the program has to check the implications for every other piece of data it is holding; it has to establish that the unaffected facts really are unaffected just as much as it has to update those that have changed, and the task rapidly mushrooms out of control. Somehow, humans get round this: they seem to be able to pick out the relevant items from a huge background of facts immediately, without having to run through everything.
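To see how quickly this mushrooms, here is a toy sketch in Python (my own illustration, nothing to do with Sloan’s project): a program holding ten thousand facts which dutifully revisits every one of them whenever a single thing changes.

    # Toy illustration of the frame problem's cost: after one event, the naive
    # program revisits every fact it holds, because nothing tells it in advance
    # which facts could not possibly have been affected.
    facts = {f"object_{i}_is_on_table": True for i in range(10_000)}
    facts["door_is_open"] = False

    def apply_event(facts, changed_fact, new_value):
        facts[changed_fact] = new_value
        checks = 0
        for name, value in facts.items():
            checks += 1  # a human would skip almost all of these checks
        return checks

    print(apply_event(facts, "door_is_open", True))  # about 10,001 checks for one small change

The point is not the loop itself but the scaling: the work grows with the size of the whole knowledge base, not with the size of the change.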

When he originally formulated the frame problem back in the 1960s, John McCarthy speculated that the solution might lie in non-monotonic logics; that is, systems in which new information can overturn conclusions already drawn, instead of everything being settled once and for all as simply true or false, the way old-fashioned logical calculus has it. Systems based on rigid propositional/predicate calculus needed to check everything in their database every time something changed in order to ensure there were no contradictions, since a contradiction is fatal in these formalisations. On the whole, McCarthy’s prediction has been borne out, in that research since then has tended towards Bayesian methods, which can tolerate apparent contradictions and which give propositions degrees of belief rather than simply holding them true or false. As well as providing practical solutions to frame problem issues, this seems intuitively much more like the way a human mind works.
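For what it’s worth, here is a minimal sketch of the degrees-of-belief idea (a toy example of my own, not drawn from McCarthy or from Sloan’s work): instead of storing “birds fly” as a hard fact, the program keeps a probability and revises it when awkward evidence turns up.

    # Bayes' rule: P(H | E) = P(E | H) P(H) / [ P(E | H) P(H) + P(E | not H) P(not H) ]
    def bayes_update(prior, likelihood_if_true, likelihood_if_false):
        numerator = likelihood_if_true * prior
        return numerator / (numerator + likelihood_if_false * (1 - prior))

    belief_tweety_flies = 0.95  # Tweety is a bird, so very probably flies

    # New evidence: Tweety lives in Antarctica. That observation is far likelier
    # for a flightless bird, so the belief drops smoothly instead of colliding
    # with a hard fact and producing a fatal contradiction.
    belief_tweety_flies = bayes_update(belief_tweety_flies, 0.01, 0.60)
    print(round(belief_tweety_flies, 2))  # roughly 0.24

Nothing here ever has to be declared flatly false; the system simply becomes more or less confident, which is at least a little closer to the way we seem to operate.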

Sloan, as I understand it, is very much in this tradition; his earlier published work deals with sophisticated techniques for the manipulation of Horn knowledge bases. I’m afraid I frankly have only a vague idea of what that means, but I imagine it is a pretty good clue to the direction of the new project. Interestingly, the press release suggests the team will be looking at CYC and other long-established projects. These older projects tended to focus on the accumulation of a gigantic database of background knowledge about the world, in the possibly naive belief that once you had enough background information, the thing would start to work. I suppose the combination of unbelievably large databases of common sense knowledge with sophisticated techniques for manipulating and updating knowledge might just be exciting. If you were a cyberpunk fan and unreasonably optimistic, you might think that something like the meeting of Neuromancer and Wintermute was quietly happening.
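As far as I can gather, a Horn knowledge base is essentially a collection of if-then rules, each with a single conclusion, and the classic way of using one is forward chaining. Here is a minimal toy sketch of that general flavour (my own guess at an illustration, certainly not Sloan’s actual techniques):

    # Toy Horn knowledge base: each rule is (set of conditions, single conclusion);
    # a bare fact is just a rule with no conditions. Forward chaining keeps
    # applying rules until nothing new can be derived.
    rules = [
        (set(), "has_feathers"),
        (set(), "lives_in_antarctica"),
        ({"has_feathers"}, "bird"),
        ({"bird", "lives_in_antarctica"}, "penguin"),
    ]

    def forward_chain(rules):
        known = set()
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= known and conclusion not in known:
                    known.add(conclusion)
                    changed = True
        return known

    print(forward_chain(rules))  # derives has_feathers, lives_in_antarctica, bird, penguin

Sloan’s actual machinery is no doubt far more sophisticated, but the basic shape – rules plus a growing store of derived facts – is presumably recognisable.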

Let’s not get over-excited, though, because of course the whole thing is completely wrong. We may be getting really good at manipulating knowledge bases, but that isn’t what the human brain does at all. Or does it? Well, on the one hand, manipulating knowledge bases is all we’ve got: it may not work all that well, but for the time being it’s pretty much the only game in town – and it’s getting better. On the other hand, intuitively it just doesn’t seem likely that that’s what brains do: it’s more as if they used some entirely unknown technique of inference which we just haven’t grasped yet. Horn knowledge bases may be good, but are they really any more like natural brain functions than Aristotelian syllogisms?

Maybe, maybe not: perhaps it doesn’t matter. I mentioned the comparable issue of translation. Nobody supposes we are anywhere near doing translation by computation in the way the human brain does it, yet the available programs are getting noticeably better. There will always be some level of error in computer translation, but there is no theoretical limit to how far it can be reduced, and at some point it ceases to matter: after all, even human translators get things wrong.

What if the same were true for knowledge management? We could have AI that worked to all intents and purposes as well as the human brain, yet worked in a completely different way. There has long been a school of thought that says this doesn’t matter: we never learnt to fly the way birds do, but we learnt how to fly. Maybe the only way to artificial consciousness in the end will be the cognitive equivalent of a plane. Is that so bad?

If the half-million dollars is well spent, we could be a little closer to finding out…