Where has AI (or perhaps we should talk about AGI) got to now? h+ magazine reports remarkably buoyant optimism in the AI community about the achievement of Artificial General Intelligence (AGI) at a human level, and even beyond. A survey of opinion at a recent conference apparently showed that most believed that AGI would reach and surpass human levels during the current century, with the largest group picking out the 2020s as the most likely decade. If that doesn’t seem optimistic enough, they thought this would occur without any additional funding for the field, and some even suggested that additional money would be a negative, distracting factor.
Of course those who have an interest in AI would tend to paint a rosy picture of its future, but the survey just might be a genuine sign of resurgent enthusiasm, a second wind for the field (‘second’ is perhaps understating matters, but still). At the end of last year, MIT announced a large-scale new project to ‘re-think AI’. This Mind Machine Project involves some eminent names, including none other than Marvin Minsky himself. Unfortunately (following the viewpoint mentioned above) it has $5 million of funding.
The Project is said to involve going back and fixing some things that got stalled during the earlier history of AI, which seems a bit of an odd way of describing it, as though research programmes that didn’t succeed had to go back and relive their earlier phases. I hope it doesn’t mean that old hobby-horses are to be brought out and dusted off for one more ride.
The actual details don’t suggest anything like that. There are really four separate projects:
- Mind: Develop a software model capable of understanding human social contexts: the signposts that establish these contexts, and the behaviors and conventions associated with them.
Research areas: hierarchical and reflective common sense
Lead researchers: Marvin Minsky, Patrick Winston
- Body: Explore candidate physical systems as substrate for embodied intelligence
Research areas: reconfigurable asynchronous logic automata, propagators
Lead researchers: Neil Gershenfeld, Ben Recht, Gerry Sussman
- Memory: Further the study of data storage and knowledge representation in the brain; generalize the concept of memory for applicability outside embodied local actor context
Research areas: common sense
Lead researcher: Henry Lieberman
- Brain and Intent: Study the embodiment of intent in neural systems. It incorporates wet laboratory and clinical components, as well as a mathematical modeling and representation component. Develop functional brain and neuron interfacing abilities. Use intent-based models to facilitate representation and exchange of information.
Research areas: wet computer, brain language, brain interfaces
Lead researchers: Newton Howard, Sebastian Seung, Ed Boyden
This all looks very interesting. The theory of reconfigurable asynchronous logic automata (RALA) represents a new approach to computation which, instead of concealing the underlying physical operations behind high-level abstraction, makes the physical causality apparent: rather than physical units being represented in computer programs only as abstract symbols, RALA is based on a lattice of cells that asynchronously pass state tokens corresponding to physical resources. I’m not sure I really understand the implications of this – I’m accustomed to thinking that computation is computation whether done by electrons or fingers; but on the face of it there’s an interesting comparison with what some have said about consciousness requiring embodiment.
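To make the contrast concrete, here is a minimal toy sketch of the lattice-of-cells idea – this is my own illustration, not the actual RALA design from the MIT group. The point it tries to capture is that tokens are conserved and moved between neighbouring cells, like a physical resource, rather than copied around as abstract symbols, and that each update is local and asynchronous, so the causal structure of the computation is visible in the data movement itself.

```python
import random

class TokenLattice:
    """A toy lattice whose cells asynchronously pass conserved tokens.

    Illustrative only: a hypothetical stand-in for the RALA idea of
    computation grounded in local, physical resource flow.
    """

    def __init__(self, width, height, tokens, seed=0):
        self.width, self.height = width, height
        self.cells = {}  # (x, y) -> 1 if the cell holds a token, else 0
        self.rng = random.Random(seed)
        for pos in tokens:
            self.cells[pos] = 1

    def neighbors(self, x, y):
        # Only nearest-neighbour links: a token moves at most one cell
        # per update, so locality (physical causality) is explicit.
        return [(x + dx, y + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < self.width and 0 <= y + dy < self.height]

    def step(self):
        # Asynchronous update: visit occupied cells in random order and
        # move each token into a random empty neighbour. Tokens are
        # moved, never duplicated - they behave like a conserved resource.
        occupied = [p for p, n in self.cells.items() if n]
        self.rng.shuffle(occupied)
        for (x, y) in occupied:
            empty = [q for q in self.neighbors(x, y)
                     if not self.cells.get(q, 0)]
            if empty:
                target = self.rng.choice(empty)
                self.cells[(x, y)] = 0
                self.cells[target] = 1

    def token_count(self):
        return sum(self.cells.values())

lat = TokenLattice(8, 8, tokens=[(0, 0), (3, 3), (7, 7)])
for _ in range(100):
    lat.step()
assert lat.token_count() == 3  # however the tokens wander, none appear or vanish
```

In a conventional program, copying a value costs nothing conceptually; here, a "value" can only be somewhere by not being somewhere else, which is roughly the sense in which the physical substrate stops being hidden behind the abstraction.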
I imagine the work on Brain and Intent is to draw on earlier research into intention awareness. This seems to have been studied most extensively in a military context, but it bears on philosophical intentionality and theory of mind; in principle it seems to relate to some genuinely central and difficult issues. Reading the brief details, I get the sense of something which might be another blind alley, but is at least another alley.
Both of these projects seem rather new to me, not at all a matter of revisiting old problems from the history of AI, except in the loosest of senses.
In recent times within AI I think there has been a tendency to back off a bit from the issue of consciousness, and to spend time instead on lesser but more achievable targets. Although the Mind Machine Project could be seen as superficially conforming to this trend, it seems evident to me that the researchers see their projects as heading towards full human cognition with all that that implies (perhaps robots that run off with your wife?).
Meanwhile in another part of the forest Paul Almond is setting out a pattern-based approach to AI. He’s only one man, compared with the might of MIT – but he does have the advantage of not having $5 million to delay his research…