Push Singh

I was shocked this week to discover, browsing through my list of pages to revisit when time allows, that Push Singh died a few months ago. He was universally regarded as a young man of exceptional promise, and poignantly his page still records that he was to join the MIT Media Lab faculty in 2007.

He certainly had some original ideas. A presentation he gave on the Future of the Mind envisages, among other things, that we might all carry around with us small electronic replicas of our own minds and personalities, able to interface with other people’s replicas and with other sources of data, helping to identify and arrange things which deserve to be brought to our attention. A rather nerdy dream, this one: our cybernetic duplicate manages all the messy and tiresome details of human interaction, and we get to meet the few interesting people, see the occasional worthwhile film, and so on, without having to sacrifice valuable lab time on small talk and merely social activity. We might also, he suggests, be able to improve ourselves by consulting the duplicates of our heroes and exemplars (some care would be required, I think).

The same talk floats the curious idea that in the new world of the future, we may need new emotions. I find it difficult even to come at this idea in any effective way from a philosophical, and particularly from a phenomenal perspective: it seems to require an underlying metaphysics of the emotions which does not exist and is not easily conceived of. It’s a bit easier, and probably more appropriate here, to think of the emotions in more practical terms, as predispositions to act in certain complex ways (a valid point of view even for those who don’t think that is all emotions amount to). Perhaps the feeling you have when you find that someone has been consulting your electronic model of yourself will deserve a new name – rather like the feeling, never experienced before the late twentieth century, caused by discovering that your mother has been reading your blog. It seems unlikely to me that the basic underlying vocabulary of the emotions could ever change, but it might be that the territorial instincts that lie at the bottom of many emotions might need to undergo some change in a world of virtual property and instant global interconnection.

But Push was chiefly concerned with common sense, and the difficult task of endowing machines with it. This problem, in many different guises, may well be the central challenge of AI research and, for that matter, of consciousness itself. Computers deal well with problems which can be defined in advance, where the range of objects and contingencies which need to be considered is known and limited. In real life, problems are never like that: their solution by human beings usually depends on two special human resources: first, a vast and heterogeneous fund of background knowledge and assumptions, and second, a remarkable capacity for picking out relevant factors (or it might be more accurate to say a masterly ability to ignore millions of irrelevant ones).

Push was a disciple of Marvin Minsky (and indeed it was through his comment on a piece of mine about Minsky that he got onto my reading list), and the approach to the problem he followed is represented by the Open Mind Common Sense project. The philosophy behind this work is essentially that if common sense requires a vast set of pieces of knowledge, the sooner we start collecting them the better. There are various places from which to harvest miscellaneous information: in principle you could simply empty the contents of encyclopaedias and other books into your database, or vacuum up the contents of the internet (if we can properly see the internet as a repository of common sense). The strategy adopted here was to allow human beings to feed in facts one at a time through a specially designed web site. The project's outlook and strategy are quite similar to those of the long-running CYC project.
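The general shape of such a fact store is easy to sketch. What follows is a toy illustration of the idea, not the project's actual data model: contributed facts held as subject–relation–object triples, roughly in the spirit of the ConceptNet database that grew out of Open Mind Common Sense (the relation names here are my own examples).

```python
# A toy commonsense store: each contributed fact is a
# (subject, relation, object) triple. Illustrative only -- not the
# real Open Mind Common Sense schema.

class CommonSenseStore:
    def __init__(self):
        self.triples = set()

    def add(self, subject, relation, obj):
        """Record one contributed fact, e.g. ('knife', 'UsedFor', 'cutting')."""
        self.triples.add((subject, relation, obj))

    def query(self, subject=None, relation=None, obj=None):
        """Return all triples matching a (possibly partial) pattern."""
        return [t for t in self.triples
                if (subject is None or t[0] == subject)
                and (relation is None or t[1] == relation)
                and (obj is None or t[2] == obj)]

store = CommonSenseStore()
store.add("knife", "UsedFor", "cutting")
store.add("knife", "IsA", "tool")
store.add("rain", "CausesDesire", "staying indoors")

# Everything the store "knows" about knives:
knife_facts = store.query(subject="knife")
```

The point of the sketch is only that the approach reduces common sense to an ever-growing table of retrievable assertions, which is exactly the assumption questioned below.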

Although you can’t help admiring the spirit of these vast projects, my honest view is that they are founded on a misconception. If we represent common sense as a series of propositions, the list of those propositions is indeed a vast one. It may well be infinite. I know, for example, that I shouldn’t drive after more than two drinks. I also know I shouldn’t drive after more than three drinks, and so on for any given number of drinks. You may think that most of the members of this unending series of propositions are blindingly obvious, but it’s pretty much this kind of obviousness that we need our machines to recognise and deal with. If we do it by writing lists, the job will go on forever. The truth, I think, is that common sense does not consist of a list of propositions at all.
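The drinks example makes the difficulty concrete. Enumerated as separate propositions, the knowledge never ends; captured as a rule, one line covers the whole infinite family. A minimal sketch (the two-drink threshold is simply the figure used above, a personal rule, not a legal limit):

```python
# One rule replaces an unending series of propositions of the form
# "I shouldn't drive after more than N drinks", for every N > 2.

def may_drive(drinks: int) -> bool:
    """The personal rule from the text: no driving after more than two drinks."""
    return drinks <= 2

# The list-of-propositions approach can only ever cover finitely many cases:
# here, "don't drive after n drinks" recorded explicitly for n = 3..999.
fact_list = {n: False for n in range(3, 1000)}

assert may_drive(2) and not may_drive(3)
assert not may_drive(1_000_000)       # the rule handles cases no finite list reaches
assert 1_000_000 not in fact_list     # the enumerated list simply runs out
```

Whether human common sense consists of compact rules like this, rather than stored propositions, is of course precisely the open question; the sketch only shows why enumeration alone cannot finish the job.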

That doesn’t mean the work being done on these projects is wasted. They may never replicate human common sense, but they might well lead to interesting new artificial forms of it. It’s certainly true that the approaches taken have become more sophisticated over the years, and the understanding this work helps generate – of how to deploy a collection of diverse, specialised databases and scripts to cope with different areas of reality – might eventually become so valuable that the size of your common sense module’s database of facts seems a secondary consideration.

Certainly there are some indications in Push’s thesis of interesting departures. In his scenario of Green and Pink, two one-armed robots, Green seeks Pink’s help in constructing a table. Confused, Pink starts to pull off one of the table legs, but by attaching one himself, Green demonstrates his intention, and fruitful collaboration ensues. At first sight this looks rather like the simple block worlds which have been a staple of AI research for so long: but on closer inspection the communication going on seems to feature Gricean natural meaning, or (stretching it a bit) something like implicature. These, in human communication, are exactly the tools we use to understand messages that are not spelled out in explicit words, and to pick up implications that are not, so to speak, already coded in. If a faculty for those could be successfully incorporated into a functioning AI module, that would be a major leap forward.

Tragically, we’ll never know what Push Singh might have contributed.
