Picture: Gandalfr. Kristinn R. Thorisson wants artificial intelligence to build itself.  Thorisson was the creator of Gandalf*, the ‘communicative humanoid’ who was designed in a way that amply disproved Frank Zappa’s remark:

“The computer … can give you the exact mathematical design, but what’s missing is the eyebrows.”

Thorisson proposes that constructionism must give way to constructivism (pdf) if significant further progress towards artificial general intelligence is to be made. By constructionism, he means the traditional ‘divide and conquer’ approach in which the overall challenge is subdivided, modules for specific tasks are more or less hand-coded, and the results are then bolted together. This kind of approach, he says, typically results in software whose scope is limited, which suffers from brittleness of performance, and which integrates poorly with other modules. Yet we know that a key feature of general intelligence, and particularly of such features as global attention, is a high level of very efficient integration, with different systems sharing heterogeneous data to produce responsive and smoothly coordinated action.

Thorisson considers some attempts to achieve better real-world performance through enhanced integration, including his own, and acknowledges that a lot has been achieved. Moreover, it is possible to extend these approaches further and achieve more, but the underlying problems remain and in some cases get worse: a large amount of work goes into producing systems which may perform impressively but lack flexibility and the capacity for ‘cognitive growth’. At best, further pursuit of this line is likely to produce improvements on a linear scale and “Even if we keep at it for centuries… basic limitations are likely to asymptotically bring us to a grinding halt in the not-too-distant future.”

It follows that a new approach is needed, and he proposes that it will be based on self-generated code and self-organising architectures. Thorisson calls this ‘constructivism’, which is perhaps not an ideal choice of name, since there are already several different constructivisms in different fields. He does not provide a detailed recipe for constructivist projects, but mentions a number of features he thinks are likely to be important. The first, interestingly, is temporal grounding – he remarks that in contrast to computational systems, time appears to be integral to the operation of all examples of natural intelligence. The second is feedback loops (but aren’t they a basic feature of every AI system?); then we have Pan-Architectural Pattern Matching, Small White-Box Components (White-Box as opposed to Black-Box, i.e. simple modules whose function is not hidden), and Architecture Meta-Programming and Integration.
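Thorisson gives no implementation recipe, but the flavour of ‘self-generated code’ combined with small white-box components and a feedback loop can at least be suggested with a toy sketch. Everything below – the component registry, the `synthesize` function, the particular feedback test – is my own illustrative invention, not anything from the paper:

```python
# Toy sketch of a 'constructivist' loop: small, inspectable ('white-box')
# components, plus a meta-level that generates and installs new component
# code at runtime. All names here are illustrative, not from Thorisson.

registry = {}

def component(fn):
    """Register a small white-box component: its source is open to inspection."""
    registry[fn.__name__] = fn
    return fn

@component
def double(x):
    return 2 * x

def synthesize(name, expr):
    """Meta-programming step: generate new component source and exec it.
    In a real constructivist system, the system itself would produce this code."""
    src = f"def {name}(x):\n    return {expr}\n"
    scope = {}
    exec(src, scope)            # self-generated code enters the running system
    registry[name] = scope[name]
    return src

# Feedback loop: if the current component misses the target, grow a new one.
target = 11
if registry["double"](5) != target:
    synthesize("double_plus_one", "2 * x + 1")

print(registry["double_plus_one"](5))   # 11
```

The sketch only gestures at the idea, of course: the hard part Thorisson points at is having the system decide for itself *what* to synthesize, rather than being handed the expression as it is here.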

Whether or not he’s exactly right about the way forward, Thorisson’s criticisms of traditional approaches seem persuasive, the more so as he has been an exponent of them himself. They also raise some deeper questions which, as a practical man, he is not concerned with. One issue, indeed, is whether we’re dealing here with difficulties in practice or difficulties in principle. Is it just that building a big AGI is extremely complex, and hence in practice just beyond the scope of the resources we can reasonably expect to deploy on a traditional basis? Or is it that there is some principled problem which means that an AGI can never be built by putting together pre-designed modules?

On the face of it, it seems plausible that the problem is one of practice rather than principle, and is simply a matter of the huge complexity of the task. After all, we know that the human brain, the only example we have of successful general intelligence, is immensely complex, and that it has quirky connections between different areas. This is one occasion when Nature seems to have been indifferent to the principles of good, legible design; but perhaps ‘spaghetti code’ and a fuzzy allocation of functions is the only way this particular job can be done;  if so, it’s only to be expected that the sheer complexity of the design is going to defeat any direct attempt to build something similar.

Or we could look at it this way. Suppose constructivism succeeds, and builds a satisfactory AGI. Then we can see that in principle it was perfectly possible to build that particular AGI by hand, if only we’d been able to work out the details. Working out the details may have proved to be way beyond us, but there the thing is: there’s no magic that says it couldn’t have been put together by other methods.

Or is there? Could it be that there is something about the internal working of an AGI which requires a particular dynamic balance, or an interlocking state of several modules, that can’t be set up directly but only approached through a particular construction sequence – one that amounts to it growing itself? Is there after all a problem in principle?

I must admit I can’t see any particular reason for thinking that’s the way things are, except that if it were so, it offers an attractive naturalistic explanation of how human consciousness might be, as it were, gratuitous: not attributable to any prior design or program, and hence in one sense the furthest back we can push explanation of human thoughts and actions. If that’s true, it in turn provides a justification for our everyday assumption that we have agency and a form of free will. I can’t help finding that attractive; perhaps if the constructivist approaches Thorisson has in mind are successful this will become clearer in the next few years.

* For anyone worried about the helmet, I should explain that this Gandalf was based on a dwarf from Icelandic cosmogony, not Tolkien’s wizard of the same name.


  1. Peter says:

    Some thoughts from Paul Almond…

    Of course, I agree that we could not build an AI in a modular way. One of the reasons, I think, is that there are no well-defined modules corresponding to things that look to us like well-defined functions. For example, modeling and planning – I am one of those people who think planning is just a special case of modeling. (I think it also explains things like dreaming, as I was just writing; making sense of that in a modular system will be hard.) I just can’t imagine that someone could code all these modules and it would work.

    I find it hard to explain exactly why. There is a feeling – an idea – I have, and it just seems obvious to me. Maybe I should write about it sometime. I will call it an “ontological bottleneck”. If you have two sub-systems made by humans then they will tend to have well-specified inputs/outputs. Even if the systems have some powerful emergent process going on inside, it doesn’t help them at the point where they have to connect together. The connection between those systems will be a place where the ontology is “squeezed” through a “narrow ontological pipe” – and by this I mean that things may be dynamic, emergent and interesting in one of these sub-systems, but as soon as something produced in one of them has to go to the other system, it has to go through the human-designed input/output interface of the sub-system. A lot of the richness will be lost. You could have a lot of powerful emergence in Sub-system A and a lot of powerful emergence in Sub-system B, but it won’t be the same emergence. You have two systems effectively isolated, and as soon as you go through this ontological bottleneck, all the depth will be lost. In a real system, the connection between sub-systems needs to be a lot richer and a lot deeper, so much so that it may be questionable to think in terms of sub-systems as we understand them.

    A good example of this is what happens if we try to treat planning and modeling as separate systems. As soon as we do that, we need to specify an interface between the planning and modeling system. As soon as you do that, it does not matter what your modeling system is doing: everything it does has to be reduced to whatever the specification of its external interface allows.
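The ‘narrow ontological pipe’ described above can be made concrete with a minimal sketch. All the names below are invented for illustration; the point is the single fixed type that everything between the two sub-systems must squeeze through:

```python
# Sketch of the 'ontological bottleneck': a modelling module with rich
# internal state whose hand-designed interface forces everything through
# one narrow type. All names are illustrative inventions.

from dataclasses import dataclass

@dataclass
class StateEstimate:
    """The agreed interface type between the modules: a single best guess."""
    state: str

class ModellingSystem:
    def __init__(self):
        # Rich internal ontology: a full distribution over world states.
        self.belief = {"door_open": 0.55, "door_closed": 0.45}

    def output(self) -> StateEstimate:
        # The bottleneck: only the argmax fits through the interface.
        best = max(self.belief, key=self.belief.get)
        return StateEstimate(state=best)

class PlanningSystem:
    def plan(self, estimate: StateEstimate) -> str:
        # The planner cannot see that the model was nearly 50/50 uncertain.
        return "walk_through" if estimate.state == "door_open" else "open_door"

model = ModellingSystem()
print(PlanningSystem().plan(model.output()))   # walk_through
```

Whatever nuance the modelling side holds – here, near-total uncertainty – is discarded at the interface, which is the loss of richness the comment describes.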

  2. Favorite links – 6/5/10 | Minds and Brains says:

    […] Conscious Entities on constructing intelligent systems […]

  3. Lloyd Rice says:

    I think I disagree with Paul’s position in #1. Is it not just a matter of bandwidth and coding efficiency? What am I missing?

  4. Paul Bello says:

    eh, this is typical of the kind of stuff gotten from the AGI crowd. It’s all the standard shibboleths without the requisite substance. Self-organization, rules are brittle so we need to have giant learning architectures, pattern-matching, blah blah blah emergence blah.

    Peter, I think you’re on to something with the notion of interfaces between modules. As I see it, there are a few fundamentals that need to be covered for any attempt at human-level AI: specification of what the modules are, and a theory of integration. These are to AGI what soundness and completeness are to formal logical systems. Usually, the AGI guys have something to say about modules, but little to say about integration past some bogus story about self-organization.

    While nowhere near a complete theory, I (and others) have been working on one for 5-8 years now called “Polyscheme.” A recent paper describing some of the less theoretical work can be found here: http://www.ukurup.com/media/files/SMC09.pdf

    For the more theoretical stuff, I’d start with the somewhat dated: http://www.rpi.edu/~cassin/papers/AIMag06.pdf
    and then the more recent: http://www.rpi.edu/~cassin/papers/SimulationReasoningCogPro2009.pdf

  5. Vicente says:

    I managed to finish reading the paper, and there is something I would like to ask the experts. I believe the core issue is to develop systems that reprogram themselves on the fly to cope with new situations, and this has to be affordable in industrial terms. Basically the author is proposing a strategy to find a solution to the FRAME PROBLEM, isn’t he?

    When I hear self-generated code, I usually think of CASE tools. No problem there: the logic and the intelligence reside in the programmer, and the tool just produces code lines according to the requested functions, fine. But the author goes much further – the system he proposes plays the role of the analyst and the programmer…

    Now, it seems to me that the remedy is worse than the disease. To adapt to an evolving scenario we need to know how the scenario is actually evolving; we need a perspective. The proposed feedback loops can only tell us that beyond a certain point we cannot adapt any more, and the system fails. How is the system to analyse the global situation, decide what has to be changed or added, code it, and resume operations?

    I believe it is a philosophical problem rather than a technical one; I don’t think the system has the perspective required to do so. Or, in the best case, to develop such a “general purpose” system (if possible) will cost much, much more than to create and upgrade ad-hoc solutions.

    On p. 179 it is said that in the next decades this approach will be necessary to make progress… so is there any small prototype of any kind in which this self-generated code and these self-organised architectures have already been implemented, even in a very simple case?

  6. Kar Lee says:

    The main question I have about AGI is: what exactly is an AGI? How general is general enough? AI exists for some specific purpose. But AGI… what is it for? This seemingly innocent/naive question brings out some very interesting deeper questions. Life forms exist to pass on their genetic material. For that, they do crazy things: wasting precious energy in silly dancing to get mating partners (penguins), spawning eggs and then dying on the spot (squid), rather being eaten alive by the mate than not mating (black widow), human parents going to ridiculously great lengths to get their kids into just slightly better schools….

    But for an AGI, what is it for? As a pure software AGI sitting on a machine, to stay alive is to stay dormant. Software doesn’t change over time. It doesn’t age. What general problem is it supposed to solve? Life without purpose can be dangerous. In fact, we have an equivalent example in humans: clinically depressed people. Extremely depressed people simply want to get out of existence altogether, or go for a long, long sleep, perhaps never to wake up again to face reality. In fact, it has been said that depressed people are the ones who have a more accurate picture of the world. How should an AGI be designed so that it does not fall into depression (a very real possibility if you design something with general intelligence and leave it to its own devices)? Didn’t we hear some smart people comment that “the more we look at it, the more life seems pointless”? What is the built-in motivation for an AGI to stay active? Stay active to do what? When we design an AGI, the purpose is the first thing one should start with. But then, with a specific purpose, the AGI stops being an AGI and turns into a regular AI. Won’t it?

    Or maybe the general purpose is “to serve one’s human master”. Then we go back to the self-inconsistency of Asimov’s “Three laws of robotics”, and that will be very tricky.

  7. Lloyd Rice says:

    Kar Lee: I think you raise an interesting and important point: What am I here for? Of course, we don’t know in a rational way why we are here, but, as you say, reproduction always rules.

    But for a machine, how about some motive that might grow out of a specialized function, say, industrial robotics? A group of machines that were built by NASA to help astronauts might take as their inborn function: “conquer space”. Assuming they had mechanisms to build more of themselves, they might end up spreading across the galaxy — even after we are gone.

  8. Vicente says:

    “reproduction always rules”: is reproduction a means or a goal?

    Why would a set of molecules arranged in cells and tissues, exchanging chemical and electrical signals, question the point of life… weird, huh?

    Lloyd’s physicalist approach could say that as long as brain homeostasis is preserved, everything else is irrelevant. So the whole purpose of a brain’s existence could be to maintain its homeostasis, to get nutrients, excrete waste and avoid accidents, fine. So AGI machines could very well do the same, why not: get a power source, get repaired and avoid accidents…

    And now brains want to create other brains… and to conquer space… and they get bored… and so do AGIbots.

    You might find interesting: Chance and Necessity by Jacques Monod.

  9. Lloyd Rice says:

    Vicente: Yes, I read Chance and Necessity some time ago, probably 30 years ago.

  10. Ottmar says:

    While I feel unqualified to give very good comments on any of the fine subjects on this site, I do appreciate them and the effort at amassing such an assemblage and have duly linked to you in a part of my site. Keep up the good work.
