Picture: Ouroboros. Knud Thomsen has put a draft paper (pdf) describing his ‘Ouroboros Model’ – an architecture for cognitive agents – online. It’s a resonant title at least – as you may know, Ouroboros is the ‘tail-eater’; the mythical or symbolic serpent which swallows its own tail, and in alchemy and elsewhere symbolises circularity and self-reference.

We should expect some deep and esoteric revelations, then; but in fact Thomsen’s model seems quite practical and unmystical. At its base are schemata: learnt patterns of neuron firing, but evidently also to be understood as embodying scripts or patterns of expectation. I take them to be somewhat similar to Roger Schank’s scripts, or Marvin Minsky’s frames. Thomsen gives the example of a lady in a fur coat; when such a person comes to our attention, the relevant schema is triggered and suggests various other details – that the lady will have shoes (for that matter, indeed, that she will have feet). Schemata are flexible and can be combined to build up more complex structures.
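To make the idea concrete, here is a toy sketch – my own illustration, not anything from Thomsen’s paper – of a schema as a bundle of expected features: a partial match triggers it, and it then suggests the features not yet observed. The class name, the feature sets, and the matching rule are all invented for the example.

```python
# Toy illustration (not Thomsen's implementation): a schema as a set of
# expected features, triggered by partial overlap with the input.

class Schema:
    def __init__(self, name, features):
        self.name = name
        self.features = set(features)

    def activation(self, observed):
        """Fraction of this schema's expected features present in the input."""
        return len(self.features & observed) / len(self.features)

    def suggested(self, observed):
        """Expected features the schema 'fills in' because they weren't observed."""
        return self.features - observed


lady_in_fur = Schema("lady in a fur coat",
                     {"fur coat", "shoes", "feet", "handbag"})
seen = {"fur coat", "handbag"}
print(lady_in_fur.activation(seen))         # 0.5
print(sorted(lady_in_fur.suggested(seen)))  # ['feet', 'shoes']
```

The point of the caricature is only that a schema both recognises (activation) and predicts (suggestion); combining schemata into larger structures would mean merging or nesting such feature sets.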

In fact, although he doesn’t put it quite like this, Thomsen’s model assumes that each mind has in effect a single grand overall schema unrolling within it. As new schemata are triggered by sensory input, they are tested for compatibility with the others in the current grand structure through a process Thomsen calls consumption analysis. Thomsen sees this as a two-stage cycle – acquisition, evaluation, acquisition, evaluation. He seems to believe in an actual chronological cycle which starts and stops, but it seems to me more plausible to see the different phases as proceeding concurrently for different schemata, in a multi-threaded kind of way.
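The acquisition/evaluation cycle described above can be caricatured as a loop that tries to merge each newly triggered schema into the grand structure, accepting it only if it conflicts with nothing already held. This is a sketch under my own assumptions – Thomsen does not specify the compatibility test, so the explicit conflict table here is a placeholder.

```python
# Hedged sketch of the acquisition/evaluation cycle. The conflict table is
# a stand-in for whatever Thomsen's 'consumption analysis' actually computes.

def consumption_analysis(grand_schema, candidate, conflicts):
    """Evaluate a candidate schema against everything in the grand structure;
    acquire it (merge it in) only if no conflict is found."""
    for held in grand_schema:
        if (held, candidate) in conflicts or (candidate, held) in conflicts:
            return False              # mismatch: candidate rejected
    grand_schema.add(candidate)       # acquisition: candidate absorbed
    return True


grand = {"indoors", "dinner party"}
conflicts = {("indoors", "rainfall")}
print(consumption_analysis(grand, "fur coat", conflicts))  # True
print(consumption_analysis(grand, "rainfall", conflicts))  # False
```

On the multi-threaded reading suggested above, several such evaluations would simply be in flight at once for different candidates, rather than the whole loop starting and stopping.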

Thomsen suggests this model can usefully account for a number of features of normal cognitive processes. Attention, he suggests, is directed to areas where there’s a mismatch between inputs and current schemata. It’s certainly true that attention can be triggered by unexpected elements in our surroundings, but this isn’t a particularly striking discovery, or one that only a cyclical model can account for – and it doesn’t explain voluntary direction of attention, nor how attention actually works. Thomsen also suggests that emotions might primarily be feedback from the consumption analysis process. The idea seems to be that when things are matching up nicely we get a positive buzz, and when there are problems negative emotions are triggered. This doesn’t seem appealing. For one thing, positive and negative reinforcement is at best only the basis for the far more complex business of emotional reactions; but more fatally, it doesn’t take much reflection to realise that surprises can be pleasant and predictability tedious, even painful.

More plausibly, Thomsen claims his structure lends itself to certain kinds of problem solving and learning, and to the explanation of certain weaknesses in human cognition such as priming and masking, where previous inputs condition our handling of new ones. He also suggests that sleep fits his model as a time of clearing out ‘leftovers’ and tidying data. The snag with all these claims is that while the Ouroboros model does seem compatible with the features described, so are many other possible models; we don’t seem to be left with any compelling case for adopting the serpent rather than some other pattern-matching theory. The claim that minds have expectations and match their inputs against these expectations is not new enough to be particularly interesting: the case that they do it through a particular kind of circular process is not really made out.

What about consciousness itself? Thomsen sees it as a higher-order process – self-awareness, or cognition about cognition. He suggests that higher order personality activation (HOPA) might occur when the cycle is running so well that there is, as it were, a surfeit of resources; equally, it might arise when a particular concentration of resources comes together to deal with a particularly bad mismatch. In between the two, when things are running well but not flawlessly, we drift on through life semi-automatically. In itself that has some appeal – I’m a regular semi-automatic drifter myself – but as before it’s hard to see why we can’t have a higher-order theory of consciousness – if we want one – without invoking Thomsen’s specific cyclical architecture.

In short, it seems to me Thomsen has given us no great reason to think his architecture is optimal or especially well-supported by the evidence; however, it sounds like at least a reasonable possibility. In fact, he tells us that certain aspects of his system have already worked well in real-life AI applications.

Unfortunately, I see a bigger problem. As I mentioned, the idea of scripts is not at all new. In earlier research they delivered very good results when confined to a limited domain – i.e. when dealing with a smallish set of objects in a context which can be exhaustively described. Where they have never really succeeded to date is in producing the kind of general common sense which is characteristic of human cognition: the ability to go on making good decisions in changed or unprecedented circumstances, or in the seething, ungraspable complexity of the real world. I see no reason to think that the schemata of Ouroboros are likely to prove any better at addressing these challenges.

Update 16 July 2008: Knud Thomsen has very kindly sent the following response.

I feel honored by the inclusion of my tiny draft into this site!

One of my points actually is that any account of mind and consciousness ought to be “quite practical and un-mystical”. The second one, of course, is that ONE single grand overall PROCESS-account can do it all (rather than three stages: acquisition, evaluation, action…). This actually is the main argument in favor of the Ouroboros Model: it has something non-trivial to say about a vast variety of topics, which commonly are each addressed in separate models that do not know anything about each other, not to mention that they together should form a coherent whole. The Ouroboros Model is meant to sketch an “all-encompassing” picture in a self-consistent, self-organizing and self-reflexive way.

Proposals for technical details of neuronal implementation certainly explode the frame of this short comment; no doubt, reentrant activity and biases in thalamo-cortical loops will play a role. Sure, emotional details and complexities are determined by the content of the situation; nevertheless, a most suitable underlying base could be formed by the feedback on how expectations come true. Previously established emotional tags would be one of the considered features and thus part of any evaluation, – “inherited”, partly already from long-ago reptile ancestors.

The Ouroboros Model offers a simple mechanism telling to what extent old scripts are applicable, what context seems the most adequate and when schemata have to be adapted flexibly and in what direction.


  1. wolf says:

    Hi there. Even though I do agree that Thomsen’s specific architecture does not seem to offer anything extraordinarily new, I don’t think the idea of schemata/scripts is that bad. Of course, in the “old” AI/cognitive literature, the script approach has disastrously failed — no script that _only_ comprises pre-compiled knowledge can cope with even minor innovations or environmental surprises. But I think schemata in Thomsen’s architecture (as well as in comparable ones) are modifiable as in Piaget’s concept of schemata: individuals start their development with some simple, pre-compiled (i.e. inherited, genetically coded in biological organisms) ways of coping with their environment but then learn to adapt those schemata/scripts whenever they encounter difficulties. A schema can be very simple (e.g. “lift your hand and touch something in front of you”) but becomes adapted (“don’t touch things that are on fire”). So a schema is just some kind of memorized code for some action, usually more elaborate than a simple memory item like “Berlin is the capital of Germany”.

  2. Peter says:

    Fair comment: the idea of schemata isn’t that bad. I can well believe that they might work well in a lot of AI contexts, and even that they might in some form have a significant role in the human mind. But there’s a flexible originality in full human cognition which I don’t think can ever be captured by scripts alone.

  3. Lloyd Rice says:

    This could work. If a “script” is really just a way of describing a collection of memory-intensive pattern recognizers (such as in Hawkins’ HTM approach), it seems to me that such a system would be able to start from simple patterns, then adjust and adapt as it learns. I’m not committed to HTMs, but I think the general approach is useful. Lakoff’s work on metaphors suggests that you need to start with some simple DNA-precoded patterns, but then these can be adapted to a variety of new inputs, including understanding math and physics.

  4. Lloyd Rice says:

    In the above comment, I wondered if a script (or schema) could be a pattern recognizer. Of course, the standard AI schema would be more like an ‘if (…) then do {…}’ statement. My thought was that the pattern recognizer could do the ‘if (…)’ part. Something else would be needed for the ‘then do {…}’ part. Years ago, I read a paper (I’ve now lost the ref) describing a type of thalamo-cortical circuit which acted like a flip-flop: a wide branching dendritic tree in the cortex turned it on, then a loop to the thalamus initiated some action. The loop stayed active until a thalamic response turned it off. Too speculative, of course, but something like that seems to make sense.

  5. Peter says:

    Yes, it does. I’m not sure that’s the way Thomsen was thinking – it’s possible.

  6. Ralph Frost says:

    If one inquires about the source of electrons flowing through synaptic… channels, one sees that our energy arises in the respiration reaction at sites within neurons. Integrated within that reaction is the routine formation of about 10^20 water molecules per second, which are roughly tetrahedral things having two plus and two minus vertices. A sequence of n molecules forming at each site can do so in 6^n patterns within an enfolding field. Allowing that surrounding replicating vibrations ought to select out repeating sequences of oriented water molecules, as we collect energy we also generate an internal structurally coded (hydrogen-bonded interactive) representation of our surroundings. This dynamic level of structural coding links through protein-folding with memory formation, motility, and expression, and could also provide for (minimum energy-based) pattern-recognizer and evaluative paths.
