Marvin Minsky

Marvin Minsky, who died on Sunday, was a legend. Here’s a slightly edited version of my 2004 post about him.

Is it time to rehabilitate Marvin Minsky? As a matter of fact, I don’t think he was ever dishabilitated (so to speak), but it does seem to be generally felt that there are a couple of black marks against his name. The most widely-mentioned count against him and his views is a charge of flagrant over-optimism about the prospects for artificial intelligence. A story which gets quoted over and over again has it that he was so confident about duplicating human cognitive faculties, even way back in the 1960s when the available computing power was still relatively modest, that he gave the task of producing a working vision system to one of his graduate students as a project to sort out over the summer.

The story is apocryphal, but one can see why it gets repeated so much. The AI sceptics like it for obvious reasons, and the believers use it to say “I may seem like a gung-ho over-the-top enthusiast, but really my views are quite middle-of-the-road, compared to some people. Look at Marvin Minsky, for example, who once…”

Still, there is no doubt that Minsky did at one time predict much more rapid progress than has, in the event, materialized: in 1967 he declared that the problem of creating artificial intelligence would be substantially solved within a generation.

The other and perhaps more serious charge against him is that in 1969, together with Seymour Papert, he gave an unduly negative evaluation of Frank Rosenblatt’s ‘Perceptron’ (an early kind of neural network device which was able to tackle simple tasks such as shape recognition). Their condemnation, based on the single-layer version of the perceptron rather than more complex models, is considered to have led to the effective collapse of Rosenblatt’s research project and a long period of eclipse for networks before a new wave of connectionist research came along and claimed Rosenblatt as an unfairly neglected forerunner.

There’s something in both these charges, but surely in fairness neither ought to be all that damaging? Optimism can be a virtue, without which many long and difficult enterprises could not get started, and Minsky’s was really no more starry-eyed than that of many others. The suggestion of AI within a generation does little more than echo Turing’s earlier forecast of human-style performance by the end of the century, and although it didn’t come true, you would have to be a dark pessimist to deny that there were (and perhaps still are) some encouraging signs.

It seems to be true that Minsky and Papert, by focusing on the single-layer perceptron alone, did give an unduly negative view of Rosenblatt’s ideas – but if researchers were jailed for giving dismissive accounts of their rivals’ work, there wouldn’t be many on the loose today. The question is why Minsky and Papert’s view had such a strong negative influence when a judicious audience would surely have taken a more balanced view.

I suspect that both Minsky’s optimism and his attack on the perceptron should properly be seen as crystallizing in a particularly articulate and trenchant form views which were actually widespread at the time: Minsky was not so much a lonely but influential voice as the most conspicuous and effective exponent of the emergent consensus.

What then, about his own work? I take the most complete expression of his views to be “The Society of Mind”. This is an unusual book in several ways – for one thing it is formatted like no other book I have ever seen, with each page having its own unique heading. It has an upbeat tone, compared to many books in the consciousness field, which tend to be written as much against a particular point of view as for one. It is full of thought-provoking points, and is hard to read quickly or to summarise adequately, not because it is badly or obscurely written (quite the contrary) but because it inspires interesting trains of thought which take time to mull over adequately.

The basic idea is that each simple task is controlled by an agent, a kind of sub-routine. A set of agents which happen to be active when a good result is achieved get linked together by a K-line. The multi-level hierarchical structures which gradually get built up allow complex and highly conditional forms of behaviour to be displayed. Building with blocks is used as an example, starting with simple actions such as picking up a block, and gradually ascending to the point where we debate whether to go on playing a complex block-building game or go off to have lunch instead. Ultimately all mental activity is conducted by structures of this kind.
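
To make the architecture concrete, here is a minimal sketch in Python of how agents and K-lines might fit together – my own illustration with invented names, not anything from Minsky’s own work:

```python
# Toy sketch of Society-of-Mind agents and K-lines (illustrative only).

class Agent:
    """A simple agent: does one small job, such as 'grasp' or 'lift'."""
    def __init__(self, name, action):
        self.name = name
        self.action = action  # a zero-argument callable

    def run(self):
        return self.action()

class KLine:
    """Records the set of agents that were active together when
    something worked, so the whole configuration can be re-aroused
    later as a single higher-level unit."""
    def __init__(self, agents):
        self.agents = list(agents)

    def activate(self):
        return [agent.run() for agent in self.agents]

# Low-level agents for one blocks-world action.
grasp = Agent("grasp", lambda: "hand closes on block")
lift = Agent("lift", lambda: "arm raises")
move = Agent("move", lambda: "arm moves over target")
drop = Agent("drop", lambda: "hand opens")

# A successful combination gets tied together by a K-line, which can
# then serve as a building block in a still higher-level structure.
stack_block = KLine([grasp, lift, move, drop])
print(stack_block.activate())
```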

This is recognizably the way well-designed computer programs work, and it also bears a plausible resemblance to the way we do many things without thinking (when we move our arm we don’t think about muscle groups, but somehow somewhere they do get dealt with); but it isn’t a very intuitively appealing general model of human thought from an inside point of view. It naturally raises some difficulties about how to ensure that appropriate patterns of behaviour can be developed in novel circumstances. There are many problems which arise if we just leave the agents to slug it out amongst themselves – and large parts of the book are taken up with the interesting solutions Minsky has to offer. The real problem (as always) arises when we want to move out of the toy block world and deal with the howling gale of complexity presented by the real world.

Minsky’s solution is frames (often compared with Roger Schank’s similar strategy of scripts). We deal with reality through common sense, and common sense is, in essence, a series of sets of default assumptions about given events and circumstances. When we go to a birthday party, we have expectations about presents, guests, hosts, cakes and so on which give us a repertoire of appropriate behaviours to deploy and a context in which to respond to unfolding events. Alas, we know that common sense has so far proved harder to systematize than expected – so much so that these days the word ‘frame’ in a paper on consciousness is highly likely to be part of the dread phrase ‘frame problem’.
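
The frame idea is easy to gesture at in code: a structure of slots holding default values, which particular observations can override. The following Python sketch is again my own illustration of the general idea, not anything taken from Minsky’s papers:

```python
# Illustrative sketch of a Minsky-style frame: slots with default
# assumptions that actual observations can override.

class Frame:
    def __init__(self, name, defaults):
        self.name = name
        self.defaults = dict(defaults)  # default assumptions
        self.filled = {}                # actual observations

    def fill(self, slot, value):
        self.filled[slot] = value

    def get(self, slot):
        # Use the observed value if we have one, else the default.
        return self.filled.get(slot, self.defaults.get(slot))

birthday_party = Frame("birthday-party", {
    "presents": "expected",
    "cake": "expected, with candles",
    "guests": "children",
    "host": "the birthday child's parents",
})

# Defaults stand until events say otherwise.
birthday_party.fill("guests", "adults only")
print(birthday_party.get("cake"))    # default assumption holds
print(birthday_party.get("guests"))  # overridden by observation
```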

The idea that mental activity is constituted by a society of agents who themselves are not especially intelligent is an influential one, and Minsky’s version of it is well-developed and characteristically trenchant. He has no truck at all with the idea of a central self, which in his eyes is pretty much the same thing as an homunculus, a little man inside your head. Free will, for him, is a delusion which we are unfortunately stuck with. This sceptical attitude certainly cuts out a lot of difficulties, though the net result is perhaps that the theory deals better with unconscious processes than conscious ones. I think the path set out by Minsky stops short of a real solution to the problem of consciousness and probably cannot be extended without some unimaginable new development. That doesn’t mean it isn’t a worthwhile exercise to stroll along it, however.

AI Resurgent

Where has AI (or perhaps we should talk about AGI) got to now? h+ magazine reports remarkably buoyant optimism in the AI community about the achievement of Artificial General Intelligence (AGI) at a human level, and even beyond. A survey of opinion at a recent conference apparently showed that most believed that AGI would reach and surpass human levels during the current century, with the largest group picking out the 2020s as the most likely decade. If that doesn’t seem optimistic enough, they thought this would occur without any additional funding for the field, and some even suggested that additional money would be a negative, distracting factor.

Of course those who have an interest in AI would tend to paint a rosy picture of its future, but the survey just might be a genuine sign of resurgent enthusiasm, a second wind for the field (‘second’ is perhaps understating matters, but still).  At the end of last year, MIT announced a large-scale new project to ‘re-think AI’. This Mind Machine Project involves some eminent names, including none other than Marvin Minsky himself. Unfortunately (following the viewpoint mentioned above) it has $5 million of funding.

The Project is said to involve going back and fixing some things that got stalled during the earlier history of AI, which seems a bit of an odd way of describing it, as though research programmes that didn’t succeed had to go back and relive their earlier phases. I hope it doesn’t mean that old hobby-horses are to be brought out and dusted off for one more ride.

The actual details don’t suggest anything like that. There are really four separate projects:

  • Mind: Develop a software model capable of understanding human social contexts: the signposts that establish these contexts, and the behaviors and conventions associated with them.
    Research areas: hierarchical and reflective common sense
    Lead researchers: Marvin Minsky, Patrick Winston
  • Body: Explore candidate physical systems as substrate for embodied intelligence
    Research areas: reconfigurable asynchronous logic automata, propagators
    Lead researchers: Neil Gershenfeld, Ben Recht, Gerry Sussman
  • Memory: Further the study of data storage and knowledge representation in the brain; generalize the concept of memory for applicability outside embodied local actor context
    Research areas: common sense
    Lead researcher: Henry Lieberman
  • Brain and Intent: Study the embodiment of intent in neural systems. It incorporates wet laboratory and clinical components, as well as a mathematical modeling and representation component. Develop functional brain and neuron interfacing abilities. Use intent-based models to facilitate representation and exchange of information.
    Research areas: wet computer, brain language, brain interfaces
    Lead researchers: Newton Howard, Sebastian Seung, Ed Boyden

This all looks very interesting. The theory of reconfigurable asynchronous logic automata (RALA) represents a new approach to computation which, instead of concealing the underlying physical operations behind high-level abstraction, makes the physical causality apparent: instead of physical units being represented in computer programs only as abstract symbols, RALA is based on a lattice of cells that asynchronously pass state tokens corresponding to physical resources. I’m not sure I really understand the implications of this – I’m accustomed to thinking that computation is computation whether done by electrons or fingers; but on the face of it there’s an interesting comparison with what some have said about consciousness requiring embodiment.
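
The flavour is perhaps easier to see in a toy model. The sketch below is a loose illustration of token-passing cellular computation in the spirit of RALA, not the actual RALA cell set (the real design defines specific logic cell types): each cell holds at most one token, and a step of computation is literally a token moving to a neighbouring cell, so the physical resource is explicit rather than abstracted away.

```python
# Loose toy illustration of asynchronous token-passing on a lattice,
# in the spirit of RALA (not the real RALA cell set): state is a
# token sitting in a cell, and each update physically moves a token.

import random

def step(cells):
    """One asynchronous update: pick a random occupied cell and,
    if its right-hand neighbour is empty, pass the token along."""
    new = list(cells)
    occupied = [i for i, c in enumerate(new) if c == 1]
    if occupied:
        i = random.choice(occupied)
        if i + 1 < len(new) and new[i + 1] == 0:
            new[i], new[i + 1] = 0, 1
    return new

cells = [1, 1, 0, 0, 0, 0, 0, 0]  # two tokens at the left edge
for _ in range(10):
    cells = step(cells)
print(cells)  # the tokens have drifted rightward, one move at a time
```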

I imagine the work on Brain and Intent is to draw on earlier research into intention awareness. This seems to have been studied most extensively in a military context, but it bears on philosophical intentionality and theory of mind; in principle it seems to relate to some genuinely central and difficult issues. Reading the brief details, I get the sense of something which might be another blind alley, but is at least another alley.

Both of these projects seem rather new to me, not at all a matter of revisiting old problems from the history of AI, except in the loosest of senses.

In recent times within AI I think there has been a tendency to back off a bit from the issue of consciousness, and spend time instead on lesser but more achievable targets. Although the Mind Machine Project could be seen as superficially conforming with this trend, it seems evident to me that the researchers see their projects as heading towards full human cognition, with all that that implies (perhaps robots that run off with your wife?).

Meanwhile in another part of the forest Paul Almond is setting out a pattern-based approach to AI.  He’s only one man, compared with the might of MIT – but he does have the advantage of not having $5 million to delay his research…