Third Wave AI?

DARPA is launching a significant new AI initiative; it could be a bad mistake.

DARPA (The Defense Advanced Research Projects Agency) has an awesome record of success in promoting the development of computer technology; without its interventions we probably wouldn’t be talking seriously about self-driving cars, and we might not have any internet. So any big DARPA project is going to be at least interesting and quite probably groundbreaking. This one seeks to bring in a Third Wave of AI. The first wave, on this showing, was a matter of humans knowing what needed to be done and just putting that knowledge into coded rules (this actually smooshes together a messy history of some very different approaches). The second wave involves statistical techniques and machines learning for themselves; recently we’ve seen big advances from this kind of approach. While there’s still more to be got out of these earlier waves, DARPA foresees a third one in which context-based programs are able to explain and justify their own reasoning. The overall idea is well explained by John Launchbury in this video.
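To make the contrast concrete, here is a minimal sketch under invented assumptions: the spam-filtering task, the feature names and the numbers are all my own illustration, not anything from DARPA’s programme. A first-wave system is a rule a human writes down; a second-wave system fits its own rule from labelled examples.

```python
# Illustrative contrast between the first two "waves" (toy example only).
import numpy as np
from sklearn.linear_model import LogisticRegression

# First wave: a human encodes the knowledge directly as a hand-written rule.
def first_wave_is_spam(num_links, num_exclamations):
    return num_links > 3 and num_exclamations > 5

# Second wave: the machine learns its own rule statistically from examples.
X = np.array([[0, 1], [1, 0], [5, 8], [7, 9], [2, 1], [6, 12]])  # toy features
y = np.array([0, 0, 1, 1, 0, 1])                                 # toy labels (1 = spam)
model = LogisticRegression().fit(X, y)

print(first_wave_is_spam(5, 8))   # True, and we can point to the exact rule that fired
print(model.predict([[5, 8]]))    # [1], but the "why" lives in learned weights
```

The second version works without anyone writing the rule down, which is exactly why explaining its answers is harder.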

In many ways this is timely, as one of the big fears attached to recent machine learning projects has arisen from the fact that there is often no way for human beings to understand, in any meaningful sense, how they work. If you don’t know how a ‘second wave’ system is getting its results, you cannot be sure it won’t suddenly go wrong in bizarre ways (and in fact they do). There have even been moves to make it a legal requirement that a system be explicable.

I think there are two big problems, though. The demand for an explanation implicitly requires one that human beings can understand. This might easily hobble computer systems unnecessarily, denying us immensely useful new technologies that just happen to be slightly beyond our grasp. One of the limitations of human cognition, for example, is that we can only hold so many things in mind at once. Typically we get round this by structuring and dividing problems so we can deal with simple pieces one at a time; but it’s likely there are cognitive strategies that this rules out. Already I believe there are strategies in chess, devised by computers, that clearly work but whose conditional structure is so complex no human can understand them intuitively. So it could be that the third wave actually restores some of the limitations of the first, by tying progress to things humans already get.

The second problem is that we still have no real idea how much of human cognition works. Recent advances in visual recognition have brought AI to levels that seem to match or exceed human proficiency, but the way they break down suddenly in weird cases is so unlike human thought that it shows how different the underlying mechanisms must still be. If we don’t know how humans do explainable recognition, where is our third wave going to come from?

Of course, the whole framework of the three waves is a bit of a rhetorical trick. It rewrites and recategorises the vastly complex, contentious history of AI into something much simpler; it discreetly overlooks all the dead ends and winters of disillusion that actually featured quite prominently in that story. The result makes the ‘third wave’ seem a natural inevitability, so that we ask only when and by whom, not whether and how.

Still, even projects whose success is not inevitable sometimes come through…

5 thoughts on “Third Wave AI?”

  1. That is probably the best 15-minute explanation of AI that I can imagine. Thanks for bringing it up.

    As for the third wave of AI, I think it is inevitable, and I think the problems you mention are surmountable. I think solutions to both will come from work similar to what is being done by Chris Eliasmith and what he calls Semantic Pointers. Eliasmith is working specifically on neural networks based on biological models; that is, he is creating neural networks that seem to work the way biological neural networks do.

    What Eliasmith calls a semantic pointer is essentially a set of neurons firing in a particular pattern, where that pattern constitutes a pointer (or symbolic reference) to a concept. Multiple pointers can be combined (via specific neural interactions) to create new pointers, i.e. new concepts (a rough sketch of this kind of combination follows at the end of this comment). That seems to me to be the beginning of a pathway for learning concepts like “legs”, “face”, “tail”, etc., and combining them to create “cat”. This work could address both problems you mention: developing mechanisms similar to human thought and providing explanations that humans can understand.

    You also raise the concern that requiring everything to be explainable will decrease the utility of systems. But I would suggest that just as First Wave systems are still being developed and are still useful even as Second Wave systems are ascendant (as described in the video), so Second Wave systems will remain worthwhile even as Third Wave systems arise. We just need to understand when to use which.
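    A rough, purely illustrative sketch of that kind of combination, in the style of the vector symbolic architectures Eliasmith’s semantic pointers draw on: the dimensionality, the “has” role vector and the use of circular convolution as the binding operation are my simplifications for the sake of the example, not his actual model.

    ```python
    # Toy "semantic pointer" combination: vectors stand in for firing patterns,
    # circular convolution binds them, addition superposes them (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    D = 512  # dimensionality of each pointer (an arbitrary choice for this sketch)

    def pointer():
        """A random unit vector standing in for a neural firing pattern."""
        v = rng.normal(size=D)
        return v / np.linalg.norm(v)

    def bind(a, b):
        """Circular convolution: combines two pointers into a new one."""
        return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=D)

    def similarity(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Simple concepts...
    legs, face, tail = pointer(), pointer(), pointer()
    has = pointer()  # a "role" vector marking a feature slot

    # ...superposed into a compound concept, roughly "cat".
    cat = bind(has, legs) + bind(has, face) + bind(has, tail)

    # The compound still resembles its ingredients far more than unrelated pointers,
    # which is one route towards a system that can say what went into a concept.
    print(similarity(cat, bind(has, face)))       # noticeably above zero
    print(similarity(cat, bind(has, pointer())))  # close to zero
    ```

    In Eliasmith’s actual work the pointers are compressed representations carried by spiking neurons; the vector algebra above only gives the flavour of the combination step.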


  2. I’m not sure what the results of this initiative will be, but in recent years, every time I read something like this, I think that the main problem is that, by aiming to achieve human-level cognition, we’re trying to cover too much ground in one leap.

    We should first strive to match the spatial and movement intelligence of an insect, such as an ant or a fruit fly. That’s several orders of magnitude easier than a human. Then we can gradually graduate to bees, fish, simple mammals such as mice, and eventually work our way up to primates.

    What use would a system be with the general intelligence capabilities of a bee? Imagine how much more useful a Roomba might be if it had the navigation and decision-making intelligence of a cockroach, coupled with a strong instinct to vacuum your carpet.

  3. AI cannot itself impose any limitations in its finite-to-infinite-to-finite positioning…

    That we impose dualism as Our-Position is just as limiting when looking to understand the necessity of third Force waves for our experience…

  4. Ooh, this is too tempting;
    I’ll sneak out of lurk mode for a little self-promotion, hopefully contentful nevertheless.
    First, as always, Peter excels at generating elegant summaries; this one made me particularly happy:

    One of the limitations of human cognition, for example, is that we can only hold so many things in mind at once. Typically we get round this by structuring and dividing problems so we can deal with simple pieces one at a time;

    Ladies and gentlemen, I give you the definition of the reductionist method, along with the reason why we deploy it everywhere. All in two sentences!

    Note that I’ve mentioned reductionism as a method, not as a fundamental metaphysical axiom. The temptation of sliding into the latter, given the seemingly unlimited successes of the former, is more or less ubiquitous, and wrong. IMHO, reduction is how our explicit (conscious) cognition works; it is fundamental for our own reasoning, not something the world itself needs to comply with by necessity.
    The interesting thing to note here is that whatever generates consciousness needs to be unconscious, so it’s probably a kind of cognition that doesn’t work in ways that resemble the reductionist method.
    Whoops, so where do we go from here?
    Don’t know, but I have written something that fits into the current topic, and has been informed by numerous discussions in CE.
    The point I try to make is, to my eyes, implicit in Peter’s comment above: we don’t know how our own visual recognition works, and it is NOT a coincidence that we also can’t explain how designed recognition mechanisms (AKA machine learning systems; the fact that we designed them does not imply, perhaps surprisingly, that we can “explain” them) manage it!

    I suppose my overall point is that mapping, understanding and acknowledging our own cognitive limitations would help a lot to start making sense of the problems posed by philosophy of mind… Or at least, that’s what my own limited cognition seems to suggest. 😉
