Language and Consciousness

There is clearly a close relationship between consciousness and language. The ability to conduct a conversation is commonly taken as the litmus test of human-style consciousness for both computers and chimpanzees, for example. While the absence of language doesn’t prove the absence of consciousness – not all of our thoughts are in words – the lack of a linguistic capacity seems to close off certain kinds of explicit reflection which form an important part of human cognition. Someone who had no language at all might be conscious, but would they be conscious in quite the same way as a normal, word-mongering human?
It might therefore seem that when Jordan Zlatev asserts the dependence of language on consciousness, he is saying something uncontroversial. In fact, he has both broader and more specific aims: on the one hand, he wants to draw more attention to the relationship; on the other, to readjust our view of where the borders between conscious and unconscious processes lie.

It seems pretty clear that a lot of the work the brain does on language is unconscious. When I’m talking, I don’t, for example, have to think to myself about the grammar I’m using (unless perhaps I’m attempting a foreign language, or talking about some grammatical point). I don’t even know how my brain operates English grammar; it surely doesn’t use the kind of rules I was taught at school; perhaps in some way it puts together Chomskyan structures, or perhaps it has some altogether different approach which yields grammatical sentences without anything we would recognise as explicit grammatical rules. Whatever it does, the process by which sentences are formed is quite invisible to me, the core entity to whom those same sentences belong, and whose sentiments they communicate. It seems natural to suppose that the structure of our language is provided pre-consciously.

Zlatev, however, contends that the rules of language are social and normative; to apply them we have to understand a number of conventions about meanings and usage; and whatever view we may take of such conventions, their use requires a reflective knower (Zlatev picks up on a distinction set out by Honderich between affective, perceptual, and reflective consciousness; it’s the latter he is concerned with). To put it another way, operating the rules of language requires us to make judgements of a kind which only reflection can supply, and reflection of this kind deserves recognition as conscious. Zlatev is not asserting that the rules of grammar at work in our brain are consciously known, after all: he draws a distinction between accessibility and introspectability; he wants to say that the rules are known pre-theoretically, but not unconsciously.

Perhaps we could put Zlatev’s point a different way: if the rules of language were really unconscious, we should be incapable of choosing to speak ungrammatically, just as we are incapable of making our heart beat slower or our skin stop sweating by an act of will. Utterances which did not follow the rules would be incomprehensible to us. In fact, we can cheerfully utter malformed sentences, distinguish them from good ones and usually understand both. Deliberate transgressions of the rules are used for communicative or humorous effect (a rather Gricean point). While the theory may be hidden from introspection, the rules are accessible to conscious thought.

If the rules of language were unconscious, asks Zlatev, how would we account for the phenomenon of self-correction, in which we make a mistake, notice it, and produce an emended version? And how could it be that the form of our utterances is often structured to enhance the meaning and distribute the emphasis in the most helpful way? An unconscious sentence factory could never support our conscious intentions in such a nicely judged way. Zlatev also brings forward evidence from language acquisition studies to support his claim that unconscious mechanisms may support, but do not exhaust, the processes of language.

At times Zlatev seems to lean towards a kind of Brentano-ish view; language requires intentionality, and nothing but consciousness can provide it (alas, in a way which remains unexplained). Intriguingly, he says that he and others were deceived into accepting the unconsciousness of language production at an earlier stage by the allure of connectionism, whose mechanistic nature only gradually became clear. I think connectionists might feel this is a little unfair, and that Zlatev need not have given up on connectionist approaches to reflective judgement simply because they are ‘mechanistic’.

All in all, I think Zlatev offers some useful insights; his general point that a binary division between conscious and unconscious simply isn’t good enough is indeed a good one and well made. I wonder whether this is a point particular to the language faculty, however. Couldn’t I make some similar points about my tennis-playing faculty? Here too I rely on some unconscious mechanisms, and couldn’t tell you exactly which arm muscles I used in which way. Yet making my hand twist the racquet around and move it to the right place also seems to require some calculated, dare I say reflective, judgements and the way I do it is exquisitely conditioned by tactics and strategy which I devise and entertain at a fully self-conscious level.

Be that as it may, it’s bad news for the designers of translation software if Zlatev is right, since their systems will have to achieve real consciousness before they can be perfected.

Alien Consciousness


Blandula: I was reading somewhere about SETI and I was struck by the level of confidence the writer seemed to enjoy that we should be able, not only to recognise a signal from some remote aliens, but actually to interpret it. It seems to me, on the contrary, that finding the signal is the ‘easy problem’ of alien communication. We might spend much longer trying to work out what they were saying than we did finding the message in the first place. In fact, it seems likely to me that we could never interpret such a signal at all.

Bitbucket: Well, I don’t think anyone underestimates the scope of the task, but you know it can hardly be impossible. For example, we send off a series of binary numbers; binary is so fundamental, yet so different from a random signal, that they would be bound to recognise it. The natural thing for them to do is echo the series back with another term added. Once we’ve got onto exchanging numbers, we send them a pair of numbers, say 640 and 480, over and over. If they’re sophisticated enough to send radio signals, they’re going to recognise we’re trying to send them the dimensions of a 2D array. Or we could go straight to 3D, whatever. Then we can send the bits for a simple image. We might do that Mickey Mouse sort of picture of a water molecule: odds are they’re water-based too, so they are bound to recognise it. We can get quite a conversation on chemistry going, then they can send us images of themselves, we can start matching streams of bits that mean words to streams of bits that mean objects, and we’ll be able to read and understand what they’re writing. OK, it’ll take a long time, granted, because each signal is going to take years to be delivered. But it’s eminently possible.
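Just to show how little machinery Bitbucket’s scheme actually needs, here is a minimal sketch of it in Python. Everything about the framing – 16-bit dimension fields, row-major pixel order, the crude bitmap itself – is my own assumption; the dialogue specifies only ‘dimensions, then bits’.

```python
# A toy version of Bitbucket's proposal: prefix a bitmap with its dimensions,
# then send the pixels as one flat stream of bits. The framing choices
# (16-bit dimension fields, row-major order) are assumptions of mine.

def encode_message(image):
    """Encode a 2D array of 0/1 pixels as a single bit string."""
    height, width = len(image), len(image[0])
    header = format(width, "016b") + format(height, "016b")
    pixels = "".join(str(bit) for row in image for bit in row)
    return header + pixels

def decode_message(bits):
    """Recover the image, assuming the receiver guessed the same framing."""
    width = int(bits[:16], 2)
    height = int(bits[16:32], 2)
    body = bits[32:]
    return [[int(body[r * width + c]) for c in range(width)]
            for r in range(height)]

# A crude 5x5 glyph standing in for the 'Mickey Mouse' water molecule:
# two small ears (hydrogens) above one large blob (oxygen).
water = [
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
]

signal = encode_message(water)
assert decode_message(signal) == water  # round-trips at our end, at least
```

The catch, as Blandula is about to point out, is that decode_message works only because it embodies exactly the conventions – binary, fixed-width header, row-major scan – that the aliens would have to guess.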

Blandula: The thing is, you don’t realise how many assumptions you’re making. Maybe they never thought of atoms as billiard balls, and to them methane is more important than water anyway. Maybe they don’t have vision and the idea of using arrays of bits to represent images makes no sense to them. Maybe they have computers that actually run on hexadecimal, or maybe digital computers never happened for them at all, because they discovered an analogue computing machine which is far better but unknown to us, so they’ve never heard of binary. But these are all trivial points. What makes you think their consciousness is in any way compatible with ours? There must be an arbitrarily large number of different ways of mapping reality; chances are, their way is just different to ours and radically untranslatable.

Bitbucket: Every human brain is wired up uniquely, but we manage to communicate. Seriously, if there are aliens out there they are the products of biological evolution – no, they are, that’s not just an assumption, it’s a reasonable deduction – and that alone guarantees we can communicate with them, just as we can communicate with other species on Earth up to the full extent of their capability. The thing is, they may be mapping reality differently, but it’s essentially the same reality they’re mapping, and that means the two maps have to correspond. I might meet some people who use Urgh, Slarm, and Furp instead of North and South; they might use cubits for Urgh distances, rods for Slarm ones, and holy hibbles for the magic third Furpian co-ordinate, but at the end of the day I can translate their map into one of mine. Because they are biological organisms, they’re going to know about all those common fundamentals: death and birth, hunger, strife, growth and ageing, food and reproduction, kinship, travel, rest; and because we know they have technology they’re going to know about mining and making things, electricity, machines – and communication. And that guarantees they’ll be able and willing to communicate with us.
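To make the map-translation point concrete, here is a toy rendering of Bitbucket’s example. The cubit and rod values are roughly historical; the ‘holy hibble’ is obviously invented, so I assign it an arbitrary figure, and the one-to-one correspondence of axes is precisely the assumption Bitbucket is banking on.

```python
# Translating an alien map convention into ours: a change of labels and
# units, nothing more -- provided the axes really do correspond one-to-one.
# CUBIT_M and ROD_M are rough historical values; HIBBLE_M is pure invention.

CUBIT_M = 0.45    # one cubit, metres (approximate)
ROD_M = 5.03      # one rod, metres (approximate)
HIBBLE_M = 1.2    # one holy hibble, metres (value assumed for illustration)

def urgh_slarm_furp_to_metres(urgh, slarm, furp):
    """Map an (Urgh, Slarm, Furp) triple onto a familiar metric triple,
    assuming Urgh/Slarm/Furp line up with three orthogonal axes of ours."""
    return (urgh * CUBIT_M,    # Urgh distances come in cubits
            slarm * ROD_M,     # Slarm distances in rods
            furp * HIBBLE_M)   # the magic third Furpian co-ordinate

print(urgh_slarm_furp_to_metres(10, 2, 5))  # -> (4.5, 10.06, 6.0)
```

If Blandula is right, of course, the problem is not the arithmetic but whether anything like Urgh, Slarm, and Furp is there to be translated in the first place.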

Blandula: You see, even on Earth, even with our fellow humans, it doesn’t work. Look at all the ancient inscriptions we have no clue about. Ventris managed to crack Linear B only because it turned out to be a language he already knew; Linear A, forget about it. Or Meroitic. We have reams and reams of Meroitic writing, and the frustrating thing is, it uses Egyptian characters, so we actually know what it sounded like. You can stand in front of some long carved Meroitic screed and read it out, knowing it sounds more or less the way it was meant to; but we have no idea whatever what it means: and we probably never will.

Bitbucket: What you’re missing there is that the Cretans and the Meroitic people are never going to respond to our signals. The dialogue is half the point here: if we never get an answer, then obviously we’re not going to communicate.

Though I like to think that even if we picked up the TV signal of some distant race which had actually perished long before, we’d still have a chance of working it out, because there’d just be more of it, and a more varied sample than your Egyptian hieroglyphs, which, let’s face it, are probably all royal memorials or something.

Blandula: Look, you argue that consciousness is a product of evolution. But what survival advantage does phenomenal experience give us – how do qualia help us survive? They don’t, because zombies could survive every bit as well without them. So why have we got them? It seems likely to me that we somehow got endowed with a faculty we don’t even begin to understand. One by-product of this faculty was the ability to stand back and deliberate on our behaviour in a more detached way; another happened to be qualia, just an incidental free gift.

Bitbucket: So you’re saying aliens might be philosophical zombies? They might have no real inner experience?

Somehow I knew it would come back to qualia eventually.

Blandula: More than that – I’m not just saying they might be zombies, but that’s one possibility, isn’t it?

Incidentally, I wonder what moral duty we would owe to creatures that had no real feelings? Would it matter how we behaved towards them…? Curious thought.

Bitbucket: Even zombies have rights, surely? Whatever the ethical theory, we can never be sure whether they actually are zombies, so you’d have to treat them as if they had feelings, just to be on the safe side.

Anyway, to be honest, I don’t think I like where you seem to be going with this.

Blandula: No, well the point is not that they might be zombies, but that instead of our faculty of consciousness, they might have a completely different one which nevertheless served the same purpose from an evolutionary point of view and had similar survival value. We’re the first animals on Earth to evolve a consciousness: it’s as if we were primitive sea creatures which had just developed a water-squirt facility, making us able to move about in a way no other creature yet could. But these aliens might have fins. They might have legs. You sit there blandly assuming that any mobile creature will want to match squirts with us, but it ain’t necessarily so.

Bitbucket: No, your analogy is incorrectly applied. I’m not saying they’d want to match squirts; I’m saying we’d be able to follow each other, or have races, or generally share the commonalities of motion irrespective of the different kit we might be using to achieve that mobility.

Blandula: Your problem here is really that you can’t imagine how an alien could have something that delivered the cognitive benefits of consciousness without being consciousness itself. Of course you can’t; that’s just another aspect of the problem; our consciousness conditions our view of reality so strongly that we’re not capable of realising its limitations.

Bitbucket: Look, if we launch a missile at these people, they’ll send us a signal, and that signal will mean Stop. It’s those cognitive benefits you dismiss so lightly that I rest my case on. For practical purposes, about practical issues, we’ll be able to communicate; if they have extra special 3D rainbow qualia, or none, or some kind of ineffable quidlia that we can never comprehend – I don’t care. You might have alien quidlia buzzing round your head for all I know; that possibility doesn’t stop us communicating. Up to a point.

Blandula: You know that Wittgenstein said that if a lion could speak, we couldn’t understand what he was saying?

Bitbucket: Yes, I do know; just one of many occasions when he was talking balls. In fairness to old Wittless, he also said ‘If I see someone writhing in pain with evident cause I do not think: all the same, his feelings are hidden from me.’

And the same goes for writhing aliens.

The Schemata of Ouroboros

Knud Thomsen has put a draft paper (pdf) describing his ‘Ouroboros Model’ – an architecture for cognitive agents – online. It’s a resonant title at least – as you may know, Ouroboros is the ‘tail-eater’; the mythical or symbolic serpent which swallows its own tail, and in alchemy and elsewhere symbolises circularity and self-reference.

We should expect some deep and esoteric revelations, then; but in fact Thomsen’s model seems quite practical and unmystical. At its base are schemata, which are learnt patterns of neuron firing, but they are also evidently to be understood as embodying scripts or patterns of expectations. I take them to be somewhat similar to Roger Schank’s scripts, or Marvin Minsky’s frames. Thomsen gives the example of a lady in a fur coat; when such a person enters our mind, the relevant schema is triggered and suggests various other details – that the lady will have shoes (for that matter, indeed, that she will have feet). The schemata are flexible and can be combined to build up more complex structures.
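To make the frame-like reading concrete, here is a minimal sketch of a schema as a bundle of default expectations. The representation is purely my illustration: Thomsen describes schemata as learnt patterns of neuron firing, not Python dictionaries.

```python
# A schema as default expectations, in the spirit of Schank's scripts and
# Minsky's frames. This representation is illustrative only; it is not
# taken from Thomsen's paper.

lady_in_fur_coat = {
    "coat": "fur",
    "shoes": True,   # suggested by default once the schema is triggered
    "feet": True,    # for that matter, indeed
}

def fill_defaults(observed, schema):
    """Combine what was actually seen with what the schema leads us to expect."""
    completed = dict(schema)     # start from the schema's default slots
    completed.update(observed)   # actual observations override the defaults
    return completed

print(fill_defaults({"coat": "fur", "hat": True}, lady_in_fur_coat))
# -> {'coat': 'fur', 'shoes': True, 'feet': True, 'hat': True}
```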

In fact, although he doesn’t put it quite like this, Thomsen’s model assumes that each mind has in effect a single grand overall schema unrolling within it. As new schemata are triggered by sensory input, they are tested for compatibility with the others in the current grand structure through a process Thomsen calls consumption analysis. Thomsen sees this as a two-stage cycle – acquisition, then evaluation, then acquisition again, and so on. He seems to believe in an actual chronological cycle which starts and stops, but it seems to me more plausible to see the different phases as proceeding concurrently for different schemata in a multi-threaded kind of way.

Thomsen suggests this model can usefully account for a number of features of normal cognitive processes. Attention, he suggests, is directed to areas where there’s a mismatch between inputs and current schemata. It’s certainly true that attention can be triggered by unexpected elements in our surroundings, but this isn’t a particularly striking discovery, or one that only a cyclical model can account for – and it doesn’t explain voluntary direction of attention, nor how attention actually works. Thomsen also suggests that emotions might primarily be feedback from the consumption analysis process. The idea seems to be that when things are matching up nicely we get a positive buzz, and when there are problems negative emotions are triggered. This doesn’t seem appealing. For one thing, positive and negative reinforcement is at best only the basis for the far more complex business of emotional reactions; but more fatally it doesn’t take much reflection to realise that surprises can be pleasant and predictability tediously painful.
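On my reading, consumption analysis and mismatch-driven attention could be sketched along the following lines; the scoring rule is an invention of mine for illustration, not anything in Thomsen’s paper.

```python
# 'Consumption analysis' as mismatch scoring: compare the input against each
# active schema's expectations and direct attention to the worst fit. The
# scoring rule here is my own assumption, not Thomsen's formalism.

def mismatch(schema, observed):
    """Fraction of a schema's expectations the input fails to meet."""
    misses = sum(1 for feature, value in schema.items()
                 if observed.get(feature) != value)
    return misses / len(schema)

def attend(active_schemata, observed):
    """Return the active schemata ranked by mismatch, worst fit first."""
    return sorted(active_schemata.items(),
                  key=lambda item: mismatch(item[1], observed),
                  reverse=True)

active = {
    "lady_in_fur_coat": {"coat": "fur", "shoes": True, "feet": True},
    "street_scene": {"outdoors": True, "daylight": True},
}
observed = {"coat": "fur", "feet": True, "outdoors": True, "daylight": True}

for name, schema in attend(active, observed):
    print(name, round(mismatch(schema, observed), 2))
# lady_in_fur_coat scores 0.33 (no shoes in the input) and street_scene 0.0,
# so on this account attention goes to the missing shoes; on Thomsen's
# emotional reading, the clean street_scene match would feel mildly pleasant.
```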

More plausibly, Thomsen claims his structure lends itself to certain kinds of problem solving and learning, and to the explanation of certain weaknesses in human cognition such as priming and masking, where previous inputs condition our handling of new ones. He also suggests that sleep fits his model as a time of clearing out ‘leftovers’ and tidying data. The snag with all these claims is that while the Ouroboros model does seem compatible with the features described, so are many other possible models; we don’t seem to be left with any compelling case for adopting the serpent rather than some other pattern-matching theory. The claim that minds have expectations and match their inputs against these expectations is not new enough to be particularly interesting: the case that they do it through a particular kind of circular process is not really made out.

What about consciousness itself? Thomsen sees it as a higher-order process – self-awareness or cognition about cognition. He suggests that higher order personality activation (HOPA) might occur when the cycle is running so well there is, as it were, a surfeit of resources; equally it might arise when a particular concentration of resources comes together to deal with a particularly bad mismatch. In between the two, when things are running well but not flawlessly, we drift on through life semi-automatically. In itself that has some appeal – I’m a regular semi-automatic drifter myself – but as before it’s hard to see why we can’t have a higher-order theory of consciousness – if we want one – without invoking Thomsen’s specific cyclical architecture.
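Bolting HOPA onto the mismatch sketch above, the two trigger conditions might look something like this; the numerical thresholds are invented purely for illustration.

```python
# Thomsen's HOPA suggestion as I read it: higher-order processing fires at
# both extremes -- everything matching with capacity to spare, or something
# badly failing to match -- and in between we drift semi-automatically.
# The thresholds below are my own invention.

def hopa_triggered(worst_mismatch, spare_capacity):
    """Return True when higher-order personality activation would occur."""
    running_smoothly = worst_mismatch <= 0.05 and spare_capacity >= 0.5
    bad_mismatch = worst_mismatch >= 0.75
    return running_smoothly or bad_mismatch

print(hopa_triggered(0.02, 0.8))  # surfeit of resources: True
print(hopa_triggered(0.33, 0.4))  # running well but not flawlessly: False
print(hopa_triggered(0.90, 0.1))  # particularly bad mismatch: True
```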

In short it seems to me Thomsen has given us no great reason to think his architecture is optimal or especially well-supported by the evidence; however, it sounds at least a reasonable possibility. In fact, he tells us that certain aspects of his system have already worked well in real-life AI applications.

Unfortunately, I see a bigger problem. As I mentioned, the idea of scripts is not at all new. In earlier research they have delivered very good results when confined to a limited domain – ie when dealing with a smallish set of objects in a context which can be exhaustively described. Where they have never really succeeded to date is in producing the kind of general common sense which is characteristic of human cognition; the ability to go on making good decisions in changed or unprecedented circumstances, or in the seething ungraspable complexity of the real world. I see no reason to think that the schemata of Ouroboros are likely to prove any better at addressing these challenges.

Update 16 July 2008: Knud Thomsen has very kindly sent the following response.

I feel honored by the inclusion of my tiny draft into this site!

One of my points actually is that any account of mind and consciousness ought to be “quite practical and un-mystical”. The second one, of course, is that ONE single grand overall PROCESS-account can do it all (rather than three stages: acquisition, evaluation, action…). This actually is the main argument in favor of the Ouroboros Model: it has something non-trivial to say about a vast variety of topics which are commonly each addressed in separate models that do not know anything about each other, not to mention that they together should form a coherent whole. The Ouroboros Model is meant to sketch an “all-encompassing” picture in a self-consistent, self-organizing and self-reflexive way.

Proposals for technical details of neuronal implementation would certainly go beyond the scope of this short comment; no doubt, reentrant activity and biases in thalamo-cortical loops will play a role. Sure, emotional details and complexities are determined by the content of the situation; nevertheless, a most suitable underlying base could be formed by the feedback on how expectations come true. Previously established emotional tags would be one of the considered features and thus part of any evaluation – “inherited”, partly, already from long-ago reptile ancestors.

The Ouroboros Model offers a simple mechanism telling to what extent old scripts are applicable, what context seems the most adequate and when schemata have to be adapted flexibly and in what direction.