Marvin Minsky, who died on Sunday, was a legend.  Here’s a slightly edited version of my 2004 post about him.

Is it time to rehabilitate Marvin Minsky? As a matter of fact, I don’t think he was ever dishabilitated (so to speak) but it does seem to be generally felt that there are a couple of black marks against his name. The most widely-mentioned count against him and his views is a charge of flagrant over-optimism about the prospects for artificial intelligence. A story which gets quoted over and over again has it that he was so confident about duplicating human cognitive faculties, even way back in the 1970s when the available computing power was still relatively modest, that he gave the task of producing a working vision system to one of his graduate students as a project to sort out over the summer.

The story is apocryphal, but one can see why it gets repeated so much. The AI sceptics like it for obvious reasons, and the believers use it to say “I may seem like a gung-ho over-the-top enthusiast, but really my views are quite middle-of-the-road, compared to some people. Look at Marvin Minsky, for example, who once…”

Still, there is no doubt that Minsky did at one time predict much more rapid progress than has, in the event, materialized: in 1977 he declared that the problem of creating artificial intelligence would be substantially solved within a generation.

The other and perhaps more serious charge against him is that in 1969, together with Seymour Papert, he gave an unduly negative evaluation of Frank Rosenblatt’s ‘Perceptron’ (an early kind of neural network device which was able to tackle simple tasks such as shape recognition). Their condemnation, based on the single-layer version of the perceptron rather than more complex models, is considered to have led to the effective collapse of Rosenblatt’s research project and a long period of eclipse for networks before a new wave of connectionist research came along and claimed Rosenblatt as an unfairly neglected forerunner.
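The limitation Minsky and Papert identified can be seen in a few lines of code. Below is a minimal sketch of Rosenblatt's single-layer perceptron and its learning rule (the function names are my own): it readily learns a linearly separable function such as AND, but no setting of its weights can compute XOR, which is the core of their critique.

```python
# Minimal single-layer perceptron with Rosenblatt's learning rule.
# It converges on linearly separable functions like AND, but no choice
# of weights lets a single layer compute XOR -- the limitation at the
# heart of Minsky and Papert's 1969 analysis.

def train_perceptron(samples, epochs=20, lr=1.0):
    """samples: list of ((x1, x2), target) pairs with targets 0 or 1."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), t in samples:
            y = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = t - y          # 0 when correct; +-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
# After training, the perceptron reproduces AND exactly; training it on
# XOR data instead would never converge, however many epochs were allowed.
```

Adding a hidden layer removes the restriction, which is why later multi-layer networks escaped the criticism.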

There’s something in both these charges, but surely in fairness neither ought to be all that damaging? Optimism can be a virtue, without which many long and difficult enterprises could not get started, and Minsky’s was really no more starry-eyed than many others. The suggestion of AI within a generation does no more at most than echo Turing’s earlier forecast of human-style performance by the end of the century, and although it didn’t come true, you would have to be a dark pessimist to deny that there were (and perhaps still are) some encouraging signs.

It seems to be true that Minsky and Papert, by focusing on the single-layer perceptron alone, did give an unduly negative view of Rosenblatt’s ideas – but if researchers were jailed for giving dismissive accounts of their rivals’ work, there wouldn’t be many on the loose today. The question is why Minsky and Papert’s view had such a strong negative influence when a judicious audience would surely have taken a more balanced view.

I suspect that both Minsky’s optimism and his attack on the perceptron should properly be seen as crystallizing in a particularly articulate and trenchant form views which were actually widespread at the time: Minsky was not so much a lonely but influential voice as the most conspicuous and effective exponent of the emergent consensus.

What then, about his own work? I take the most complete expression of his views to be “The Society of Mind”. This is an unusual book in several ways – for one thing it is formatted like no other book I have ever seen, with each page having its own unique heading. It has an upbeat tone, compared to many books in the consciousness field, which tend to be written as much against a particular point of view as for one. It is full of thought-provoking points, and is hard to read quickly or to summarise adequately, not because it is badly or obscurely written (quite the contrary) but because it inspires interesting trains of thought which take time to mull over adequately.

The basic idea is that each simple task is controlled by an agent, a kind of sub-routine. A set of agents which happen to be active when a good result is achieved get linked together by a k-line. The multi-level hierarchical structures which gradually get built up allow complex and highly conditional forms of behaviour to be displayed. Building with blocks is used as an example, starting with simple actions such as picking up a block, and gradually ascending to the point where we debate whether to go on playing a complex block-building game or go off to have lunch instead. Ultimately all mental activity is conducted by structures of this kind.
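The agent-and-k-line scheme can be sketched in code. The following is a toy illustration under my own naming (Minsky's book gives no such implementation): each agent handles one simple action, and a k-line records which agents were active when something worked, so the whole coalition can be reactivated as a unit later.

```python
# Toy sketch of Minsky-style agents and k-lines. The class and agent
# names here are illustrative assumptions, not notation from the book.

class Agent:
    """Handles one simple task, like a small sub-routine."""
    def __init__(self, name, action):
        self.name = name
        self.action = action  # a zero-argument callable

    def run(self, log):
        log.append(self.name)
        self.action()

class KLine:
    """Remembers a coalition of agents that produced a good result,
    and can reactivate the whole set together."""
    def __init__(self, agents):
        self.agents = list(agents)

    def activate(self, log):
        for agent in self.agents:
            agent.run(log)

# Build a tiny "pick up a block" k-line from three primitive agents.
log = []
grasp = Agent("grasp", lambda: None)
balance = Agent("balance", lambda: None)
lift = Agent("lift", lambda: None)
pick_up_block = KLine([grasp, balance, lift])
pick_up_block.activate(log)
# log now records the agents in activation order: grasp, balance, lift
```

Stacking k-lines that activate other k-lines gives the multi-level hierarchy the book describes, from muscle-level actions up to deciding whether to keep playing.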

This is recognizably the way well-designed computer programs work, and it also bears a plausible resemblance to the way we do many things without thinking (when we move our arm we don’t think about muscle groups, but somehow somewhere they do get dealt with); but it isn’t a very intuitively appealing general model of human thought from an inside point of view. It naturally raises some difficulties about how to ensure that appropriate patterns of behaviour can be developed in novel circumstances. There are many problems which arise if we just leave the agents to slug it out amongst themselves – and large parts of the book are taken up with the interesting solutions Minsky has to offer. The real problem (as always) arises when we want to move out of the toy block world and deal with the howling gale of complexity presented by the real world.

Minsky’s solution is frames (often compared with Roger Schank’s similar strategy of scripts). We deal with reality through common sense, and common sense is, in essence, a series of sets of default assumptions about given events and circumstances. When we go to a birthday party, we have expectations about presents, guests, hosts, cakes and so on which give us a repertoire of appropriate responses to deploy and a context in which to respond to unfolding events. Alas, we know that common sense has so far proved harder to systematize than expected – so much so that these days the word ‘frame’ in a paper on consciousness is highly likely to be part of the dread phrase ‘frame problem’.
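In data-structure terms a frame is a set of slots with default values, where observation overrides some slots and the defaults quietly fill in the rest. A minimal sketch, with slot names of my own invention:

```python
# Hedged sketch of a Minsky-style frame: default assumptions that
# particular observations can override, leaving the rest intact.
# Slot names and values are illustrative, not drawn from the book.

BIRTHDAY_PARTY_FRAME = {
    "presents": "expected",
    "cake": "expected",
    "guests": "several",
    "host": "unknown",
}

def instantiate(frame, observations):
    """Fill a frame's slots with observed values; keep defaults otherwise."""
    instance = dict(frame)        # copy so the frame itself is unchanged
    instance.update(observations)
    return instance

party = instantiate(BIRTHDAY_PARTY_FRAME, {"host": "Alice", "cake": "chocolate"})
# Observed slots are overridden (host, cake); unobserved slots keep
# their defaults (presents, guests).
```

The hard part, of course, is not the lookup but deciding which frame applies and which defaults may safely be revised – which is where the frame problem begins.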

The idea that mental activity is constituted by a society of agents who themselves are not especially intelligent is an influential one, and Minsky’s version of it is well-developed and characteristically trenchant. He has no truck at all with the idea of a central self, which in his eyes is pretty much the same thing as an homunculus, a little man inside your head. Free will, for him, is a delusion which we are unfortunately stuck with. This sceptical attitude certainly cuts out a lot of difficulties, though the net result is perhaps that the theory deals better with unconscious processes than conscious ones. I think the path set out by Minsky stops short of a real solution to the problem of consciousness and probably cannot be extended without some unimaginable new development. That doesn’t mean it isn’t a worthwhile exercise to stroll along it, however.

2 Comments

  1. Hunt says:

    Sad loss. I had the fortune to see him give a public lecture to a packed house once.

  2. john davey says:

    “This is recognizably the way well-designed computer programs work”

    State machines, in other words. Each activity can be broken down into smaller and smaller activities, down to a fine-grained “instruction set” of activities. It’s probably the least credible suggestion in AI, because there is nothing comparable to anything like a state machine in nature. Not of course that ‘analog’ computers fare much better.

    He was a nice chap – I exchanged emails with him some years ago. On the subject of qualia we got into a bit of an exchange – he said he felt that they were ‘something to do with feedback loops’. Can’t say I was convinced. I replied to him that ‘isn’t a computer program with a feedback loop just another computer program?’ at which point he totally lost interest and got on with more important details.

    He had some kind of quote about AI robots keeping ‘humans’ as ‘household pets’ by about now. Seems to have got that wrong somewhat spectacularly. I’ve spent a life in computers and I still don’t know how “AI” computing is any different to any other kind. Subject matter ? It’s certainly identical technically.
