Self-assembly consciousness

Kristinn R. Thorisson wants artificial intelligence to build itself. Thorisson was the creator of Gandalf*, the ‘communicative humanoid’ who was designed in a way that amply disproved Frank Zappa’s remark:

“The computer … can give you the exact mathematical design, but what’s missing is the eyebrows.”

Thorisson proposes that constructionism must give way to constructivism (pdf) if significant further progress towards artificial general intelligence is to be made. By constructionism, he means a traditional ‘divide and conquer’ approach in which the overall challenge is subdivided, modules for specific tasks are more or less hand-coded, and the results are then bolted together. This kind of approach, he says, typically results in software whose scope is limited, which suffers from brittleness of performance, and which integrates poorly with other modules. Yet we know that a key feature of general intelligence, and particularly of capacities such as global attention, is a high level of very efficient integration, with different systems sharing heterogeneous data to produce responsive and smoothly coordinated action.
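To make the contrast concrete, here is a deliberately crude Python sketch of the constructionist pattern as I read it: hand-built modules with ad hoc, mutually ignorant interfaces, bolted together by glue code. The module names and data formats are invented for illustration; nothing here is drawn from Gandalf or from Thorisson’s paper.

```python
# A caricature of 'divide and conquer' constructionism: each module is
# hand-coded for one task, and the planner translates between their
# heterogeneous formats by hand.

def vision_module(scene: str) -> dict:
    # Hand-coded for one narrow task; knows nothing of the other modules.
    return {"objects": ["cup"] if "cup" in scene else []}

def speech_module(utterance: str) -> list:
    # A different author, a different ad hoc data format.
    return utterance.lower().split()

def planner(percepts: dict, words: list) -> str:
    # Glue code: every cross-module translation is anticipated by hand.
    if "cup" in percepts.get("objects", []) and "grasp" in words:
        return "reach-for-cup"
    return "idle"  # anything unanticipated falls through: brittleness

print(planner(vision_module("a cup on a table"), speech_module("Grasp the cup")))
# prints 'reach-for-cup'; any input the designers did not foresee yields 'idle'
```

The brittleness and poor integration Thorisson complains of live in that planner: every connection between modules has to be designed in advance, and the system has no way of growing new ones.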

Thorisson considers some attempts to achieve better real-world performance through enhanced integration, including his own, and acknowledges that a lot has been achieved. Moreover, it is possible to extend these approaches further and achieve more; but the underlying problems remain and in some cases get worse: a large amount of work goes into producing systems which may perform impressively but lack flexibility and the capacity for ‘cognitive growth’. At best, further pursuit of this line is likely to produce improvements on a linear scale: “Even if we keep at it for centuries… basic limitations are likely to asymptotically bring us to a grinding halt in the not-too-distant future.”

It follows that a new approach is needed, and he proposes that it will be based on self-generated code and self-organising architectures. Thorisson calls this ‘constructivism’, which is perhaps not an ideal choice of name, since there are already several different constructivisms in different fields. He does not provide a detailed recipe for constructivist projects, but mentions a number of features he thinks are likely to be important. The first, interestingly, is temporal grounding – he remarks that, in contrast to computational systems, time appears to be integral to the operation of all examples of natural intelligence. The second is feedback loops (but aren’t they a basic feature of every AI system?); then we have Pan-Architectural Pattern Matching, Small White-Box Components (white-box as opposed to black-box, i.e. simple modules whose function is not hidden), and Architecture Meta-Programming and Integration.
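For what it’s worth, here is a toy sketch of how a couple of those ingredients might look in code: small white-box components whose state is open to inspection, time-stamped operation as a nod to temporal grounding, and a meta-level feedback loop that pattern-matches over the whole architecture and reprograms it. This is my own guess at the flavour of the idea, not code from the paper; every name in it is invented.

```python
import time

class WhiteBoxComponent:
    """A small component that hides nothing: its last input, output and
    time of operation are all visible to the rest of the architecture."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
        self.last_input = self.last_output = None
        self.last_tick = None  # temporal grounding: every step is time-stamped

    def step(self, x):
        self.last_input, self.last_tick = x, time.time()
        self.last_output = self.fn(x)
        return self.last_output

class MetaLevel:
    """Architecture meta-programming: the component list itself is data
    that the system inspects and rewrites as it runs."""
    def __init__(self, components):
        self.components = components

    def tick(self, x):
        for c in self.components:  # pass activation along the chain
            x = c.step(x)
        # Feedback loop with pan-architectural pattern matching: scan the
        # exposed state of every component and prune ones that do nothing.
        self.components = [c for c in self.components
                           if c.last_output != c.last_input]
        return x

arch = MetaLevel([WhiteBoxComponent("double", lambda v: v * 2),
                  WhiteBoxComponent("noop", lambda v: v)])
print(arch.tick(3), [c.name for c in arch.components])  # 6 ['double']
```

In a real constructivist system the meta-level would presumably generate and rewire components rather than merely prune them; the point of the sketch is only that the architecture itself is something the system can inspect and reprogram, rather than a fixed arrangement of sealed boxes.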

Whether or not he’s exactly right about the way forward, Thorisson’s criticisms of traditional approaches seem persuasive, the more so as he has been an exponent of them himself. They also raise some deeper questions which, as a practical man, he is not concerned with. One issue, indeed, is whether we’re dealing here with difficulties in practice or difficulties in principle. Is it just that building a big AGI is extremely complex, and hence in practice just beyond the scope of the resources we can reasonably expect to deploy on a traditional basis? Or is it that there is some principled problem which means that an AGI can never be built by putting together pre-designed modules?

On the face of it, it seems plausible that the problem is one of practice rather than principle, and is simply a matter of the huge complexity of the task. After all, we know that the human brain, the only example we have of successful general intelligence, is immensely complex, and that it has quirky connections between different areas. This is one occasion when Nature seems to have been indifferent to the principles of good, legible design; but perhaps ‘spaghetti code’ and a fuzzy allocation of functions is the only way this particular job can be done;  if so, it’s only to be expected that the sheer complexity of the design is going to defeat any direct attempt to build something similar.

Or we could look at it this way. Suppose constructivism succeeds, and builds a satisfactory AGI. Then we can see that in principle it was perfectly possible to build that particular AGI by hand, if only we’d been able to work out the details. Working out the details may have proved to be way beyond us, but there the thing is: there’s no magic that says it couldn’t have been put together by other methods.

Or is there? Could it be that there is something about the internal working of an AGI which requires a particular dynamic balance, or an interlocking state of several modules, that can’t be set up directly but only approached through a particular construction sequence – one that amounts to it growing itself? Is there after all a problem in principle?

I must admit I can’t see any particular reason for thinking that’s the way things are, except that, if it were so, it would offer an attractive naturalistic explanation of how human consciousness might be, as it were, gratuitous: not attributable to any prior design or program, and hence in one sense the furthest back we can push the explanation of human thoughts and actions. If that’s true, it in turn provides a justification for our everyday assumption that we have agency and a form of free will. I can’t help finding that attractive; perhaps, if the constructivist approaches Thorisson has in mind are successful, this will become clearer in the next few years.

* For anyone worried about the helmet, I should explain that this Gandalf was based on a dwarf from Icelandic cosmogony, not Tolkien’s wizard of the same name.

Sceptical sceptical folk theory theory theory

Daniel Dennett not sceptical enough about qualia? It seems unlikely. Dennett’s trenchant view can be summed up in two words: ‘What qualia?’. It makes no sense, he would say, for us to talk about ineffable items of direct experience: things which, by definition, we can’t talk about. That’s not to say we can’t talk about our experience of the world: we just need to talk about it in third-person, heterophenomenological terms. Instead of claiming to discuss people’s first-person inner experience, we discuss what they report about their first-person inner experience. In fact, if we think about it carefully, we’ll realise that’s all we could ever do, all we’ve ever done: really, whatever we may have supposed, all phenomenology is actually heterophenomenology; all discussion is necessarily in third-person, objective, scrutable, effable terms.

Typically, Dennett’s is a relatively lonely voice ranged against those who would assert that qualia, direct private phenomenal experiences, are knowably, undeniably real, however hard they may be to explain and to reconcile with the objective third-person world described by science. Now Justin Sytsma (‘Dennett’s Theory of the Folk Theory’, JCS Vol 17, no 3-4) interestingly takes a different tack, suggesting that in fact Dennett has conceded too much by accepting that the folk theory, what ordinary people naively believe about their own experience, includes belief in qualia. He quotes Dennett saying:

“there seem to be qualia, because it really does seem as if science has shown us that colors can’t be out there, and hence must be in here…”

Reasonably, if somewhat unphilosophically, Sytsma treats what people actually believe as an empirical matter, something we can test; in a sense we could say that this is turning heterophenomenology on itself. It turns out, apparently, that Dennett’s assumption is false: in fact people don’t regard, say, redness as an ineffable mental quality, but as a real property of things in the world. Perhaps ordinary people are more sophisticated than Dennett has given them credit for; perhaps less: not sufficiently aware of ‘what science has shown us’ for it to have had much impact on what they believe.

What are we to make of this? Well, one issue is that there is an inbuilt tension in the very entity we’re trying to discuss: Sytsma is talking about folk theories, but folk beliefs are really what we have when we have no theories; a folk theory is a kind of contradiction in terms. Julian of Norwich, I think, said that the worst thing about heretics was that they forced honest Christians to determine the truth of theological propositions which pious folk could otherwise have ignored; in a similar way we might argue that philosophical experimenters force their subjects into addressing tricky phenomenological questions which would otherwise never have troubled them.

Sytsma, then, by asking his subjects questions, was not evoking their previously-held views on phenomenology so much as engendering these views for the first time. There is an obvious danger that the terms of the question would tend to influence the form of the views evoked; but really that doesn’t matter because whatever view Sytsma evoked, it would be different to the no-view that his subjects held initially. Is the folk view pro or anti qualia?  The most accurate answer is probably ‘no’.

However, forensically Dennett must be in the right. If we want to establish a position, we need to argue against its negation; even if the majority, or ‘the folk’, favour our view, we must still argue against an opponent; in fact, against all the most plausible and persuasive opponents we can think of. Even if Dennett was wrong about what people generally believe, tactically it was correct to assume that they disagreed with him: without being unduly elitist, what the majority, or the folk, or the man on the Clapham omnibus actually happen to think is in this case philosophically uninteresting. We need not worry about whether we should endorse a sceptical theory about Dennett’s sceptical theory about folk theories. Isn’t that a relief?

Interesting stuff – May 2010

Paul Almond’s Attempt to Generalize AI has reached Part 8: Forgetting as Part of the Exploratory Relevance Process. (pdf)

Aspro Potamus tells me I should not have missed the Online Consciousness Conference.

Jesús Olmo recommends a look at the remarkable film ‘The Sea That Thinks’, and notes that the gut might be seen as our second brain.

An interesting piece from Robert Fortner contends that speech recognition software has hit a ceiling at about 80% accuracy and that hope of further progress has been tacitly abandoned. I think you’d be rash to assume that brute-force approaches will never get any further here; but it could well be one of those areas where technology has to go backwards for a while, pursuing a theoretically different approach which in the early stages yields poorer results, in order to take a real step forward.

A second issue of the JCER is online.

Alec wrote to share an interesting idea about dreaming:

“It seems that most people consider dreaming to be some sort of unimportant side-effect of consciousness. Yes, we know it is involved in assimilation of daily experiences, etc, but it seems that it is treated as not being very significant to consciousness itself. I have a conjecture that dreaming may be significant in an unusual way – could dreaming have been the evolutionary source of consciousness?
It is clear that “lower animals” dream. Any dog owner knows that. On that basis, I would conclude that dreaming almost certainly preceded the evolution of consciousness. My conjecture is this: Could consciousness possibly have evolved from dreaming?

Is it possible that some evolutionary time back, humans developed the ability to dream at the same time as being awake, and consciousness arises from the interaction of those somewhat parallel mental states? Presumably the hypothetical fusion of the dream state and the waking state took quite a while to iron out. It still may not be complete, witness “daydreams.” We can also speculate that dreaming has some desirable properties as a precursor to consciousness, especially its abstract nature and the feedback processes it involves.”

Hmm.