Kolmogorov meaning

Is the meaning of life non-computable? Noson S. Yanofsky has a nice rumination about Kolmogorov complexity and meaning in Nautilus. Kolmogorov complexity, he tells us, is a measure based on the length of the shortest computer program that can generate a given string of digits: the shorter the program, the lower the complexity. On this view, structure, which informally we might see as a sign of complexity, actually tends to reduce it, the most complex strings being the completely random ones.
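
To make the contrast concrete, here is a minimal sketch (my own illustration, not Yanofsky's): a highly structured string can be regenerated by a program far shorter than itself, while a random string of the same length can in general only be 'generated' by a program that quotes it in full.

```python
import random
import string

# A structured string: a million characters, regenerated by a ~21-character program.
structured = "ab" * 500_000
structured_program = 'print("ab" * 500_000)'

# A random string of the same length: barring luck, no program much shorter
# than the string itself will reproduce it, so its Kolmogorov complexity
# is roughly its own length.
random_str = "".join(random.choice(string.ascii_lowercase) for _ in range(1_000_000))
random_program = f'print("{random_str}")'  # little better than quoting it verbatim

print(len(structured), len(structured_program))  # 1000000 vs 21
print(len(random_str), len(random_program))      # 1000000 vs 1000009
```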

The Kolmogorov complexity of a string, then, is the length of the shortest program that generates it. Interestingly, though, finding that shortest program is a non-computable problem, something that can apparently be proved by reductio, though Yanofsky does not trouble his readers with the actual proof.
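
A crude way to see where the trouble lies (a sketch of the obstacle, not the reductio Yanofsky alludes to): the obvious brute-force search enumerates candidate programs from shortest to longest and returns the first one that outputs the target, but the test it relies on, 'does this program halt and print the target?', is the halting problem in disguise and cannot be implemented in general.

```python
from itertools import product

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789()*+ \"'"  # toy program alphabet

def halts_and_outputs(program: str, target: str) -> bool:
    """Would need to decide whether an arbitrary program halts and prints
    `target` -- i.e. solve the halting problem, impossible in general."""
    raise NotImplementedError("undecidable")

def kolmogorov_complexity(target: str) -> int:
    # Enumerate candidate programs in order of increasing length...
    length = 1
    while True:
        for chars in product(ALPHABET, repeat=length):
            if halts_and_outputs("".join(chars), target):  # ...but this test is uncomputable
                return length
        length += 1
```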

Yanofsky notes that our efforts to give meaning to events generally take the form of looking for patterns, something you could see as loosely analogous to identifying the minimal programs whose lengths give the Kolmogorov values. Perhaps this, too, is an intractable problem, he suggests, so that we can never be sure there isn’t a deeper meaning we have missed; our search will never be over.

This is a matter of general philosophical pondering rather than a tight logical argument, but I found it a congenial point of view that slots in easily alongside the family of non-computable issues to do with the frame problem, radical translation, and so on, which lie around the fringes of consciousness and suggest that consciousness, too, involves dealing with problems that are computationally intractable.

In fact, while Yanofsky speaks of meaning in a very general sense, I think we might apply a similar analogy to meaning in the sense of intentionality or ‘aboutness’. Let’s suppose we are dealing with strings of characters and the task is to identify what those characters mean (let’s keep it simple and suppose ‘what it means’ is just a matter of identifying which physical object is referred to by the string of characters). Now any string of characters has an indefinitely long list of possible interpretations. We could read them in English, in French, or in any arbitrary encoding we care to devise, so that, in short, we can make them be about anything. But Grice helpfully tells us that we can assume in interpreting human utterances that only the minimum of necessary and relevant information has been provided. So in a way rather similar to the search for a Kolmogorov value, we need to look for the simplest meaning that would have required this string of characters to convey it.
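
By analogy, interpretation could be pictured (this is my own toy formalisation, not anything in Yanofsky or Grice) as a minimisation problem: among the candidate readings of a string, prefer the one whose total description, the encoding plus whatever extra context it needs, is shortest. The candidate readings and the cost measure below are invented for illustration.

```python
# Toy sketch: the 'Gricean-minimal' interpretation of a string is the one
# requiring the least interpretive machinery. Readings and costs are invented.

def cost(encoding: str, extra_context: str) -> int:
    # Crude stand-in for description length: characters needed to specify
    # the encoding plus any background assumptions it requires.
    return len(encoding) + len(extra_context)

candidate_readings = [
    # (referent, encoding used, extra context the reading depends on)
    ("the dog", "plain English",                      ""),
    ("the dog", "ROT13, then translated from French", "assume the sender likes ciphers"),
    ("my car",  "arbitrary letter-to-object code",    "a bespoke codebook as long as a dictionary"),
]

best = min(candidate_readings, key=lambda r: cost(r[1], r[2]))
print(best[0])  # the simplest reading wins: "the dog"
```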

Is that, too, a non-computable problem? Intuitively, I feel it probably is, though I’m not quite sure how we could shape a proof – perhaps something about examining a tricksy self-referential string like ‘the meaning of this string is not determinable’, but I leave that as an exercise for the reader.

Curiously, this reasoning seems to imply that all strings of characters that have a meaning at all have one correct meaning. We just can’t be sure (computationally at least) what it is.

Jochen’s Intentional Automata

Jochen’s paper Von Neumann Minds: Intentional Automata has been published in Mind and Matter.

Intentionality is meaningfulness, the quality of being directed at something, aboutness. It is in my view one of the main problems of consciousness, up there with the Hard Problem but quite distinct from it; but it is often under-rated or misunderstood. I think this is largely because our mental life is so suffused with intentionality that we find it hard to see the wood for the trees; certainly I have read more than one discussion by very clever people who seemed to me to lose their way half-way through without noticing and end up talking about much simpler issues.

That is not a problem with Jochen’s paper, which is admirably clear. He focuses on the question of how to ground intentionality and in particular how to do so without falling foul of an infinite regress or the dreaded homunculus problem. There are many ways to approach intentionality and Jochen briefly mentions and rejects a few (basing it in phenomenal experience or in something like Gricean implicature, for example) before introducing his own preferred framework, which is to root meaning in action: the meaning of a symbol is, or is to be found in, the action it evokes. I think this is a good approach; it interprets intentionality as a matter of input/output relations, which is clarifying and also has the mixed blessing of exposing the problems in their worst and most intractable form. For me it recalls the approach taken by Quine to the translation problem – he of course ended up concluding that assigning certain meanings to unknown words was impossible because of radical under-determination; there are always more possible alternative meanings which cannot be eliminated by any logical procedure. Under-determination is a problem for many theories of intentionality and Jochen’s is not immune, but his aim is narrower.

The real target of the paper is the danger of infinite regress. Intentionality comes in two forms, derived on the one hand and original or intrinsic on the other. Books, words, pictures and so on have derived intentionality; they mean something because the author or the audience interprets them as having meaning. This kind of intentionality is relatively easy to deal with, but the problem is that it appears to defer the real mystery to the intrinsic intentionality in the mind of the person doing the interpreting. The clear danger is that we then go on to defer the intentionality to an homunculus, a ‘little man’ in the brain who again is the source of the intrinsic intentionality.

Jochen quotes the arguments of Searle and others who suggest that computational theories of the mind fail because the meaning, and even the existence, of a computation is a matter of interpretation; without the magic input of intrinsic intentionality from an interpreter, the whole thing collapses into radical under-determination. Jochen dramatises the point using an extension of Searle’s Chinese Room thought experiment in which it seems the man inside the room can really learn Chinese – but only because he has become, in effect, the required homunculus.

Now we come to the really clever and original part of the paper; Jochen draws an analogy with the problem of how things reproduce themselves. To do so it seems they must already have a complete model of themselves inside themselves… and so the problem of regress begins. It would be OK if the organism could scan itself, but a proof by Svozil seems to rule that out because of problems with self-reference. Jochen turns to the solution proposed by the great John Von Neumann (a man who might be regarded as the inventor of the digital computer if Turing had never lived). Von Neumann’s solution is expressed in terms of a two-dimensional cellular automaton (very simplistically, a pattern on a grid that evolves over time according to certain rules – Conway’s Game of Life surely provides the best-known examples). By separating the functions of copying and interpretation, and distinguishing active and passive states, Von Neumann managed to get round Svozil’s obstacle successfully.
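
For readers who have not met cellular automata, here is a minimal Game of Life step (Conway's rules; Von Neumann's self-reproducing automaton uses a far richer 29-state rule set, so this is only meant to give the flavour of 'a pattern on a grid evolving under fixed rules'):

```python
from itertools import product

def neighbours(cell):
    r, c = cell
    return {(r + dr, c + dc) for dr, dc in product((-1, 0, 1), repeat=2) if (dr, dc) != (0, 0)}

def step(live: set) -> set:
    """One generation of Conway's Game of Life; `live` is a set of (row, col) cells."""
    candidates = live | {n for cell in live for n in neighbours(cell)}
    return {cell for cell in candidates
            if len(neighbours(cell) & live) == 3
            or (cell in live and len(neighbours(cell) & live) == 2)}

# A 'blinker' oscillates between a horizontal and a vertical bar.
blinker = {(1, 0), (1, 1), (1, 2)}
print(step(blinker))        # {(0, 1), (1, 1), (2, 1)}
print(step(step(blinker)))  # back to the original pattern
```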

Now by importing this distinction between active and passive into the question of intentionality, Jochen suggests we can escape the regress. If symbols play either an active or a passive role (in effect, as semantics or as syntax) we can have a kind of automaton which, in a clear sense, gives its own symbols their interpretation, and so escapes the regress.

This is an ingenious move. It is not a complete solution to the problem of intentionality (I think the under-determination monster is still roaming around out there), but it is a novel and very promising solution to the regress. More than that, it offers a new perspective which may well yield further insights when fully absorbed; I certainly haven’t managed to think through what the wider implications might be, but if a process so central to meaningful thought truly works in this unexpected dual way it seems there are bound to be some. For that reason, I hope the paper gets wide attention from people whose brains are better at this sort of thing than mine…

Pointing

This is the second of four posts about key ideas from my book The Shadow of Consciousness. This one looks at how the brain points at things, and how that could provide a basis for handling intentionality, meaning and relevance.

Intentionality is the quality of being about things, possessed by our thoughts, desires, beliefs and (clue’s in the name) our intentions. In a slightly different way intentionality is also a property of books, symbols, signs and pointers. There are many theories out there about how it works; most, in my view, have some appeal, but none looks like the full story.

Several of the existing theories touch on a handy notion of natural meaning proposed by H. P. Grice. Natural meaning is essentially just the noticeable implication of things. Those spots mean measles; those massed dark clouds mean rain. If we regard this kind of ‘meaning’ as the wild, undeveloped form of intentionality, we might be able to go on to suggest how the full-blown kind might be built out of it; how we get to non-natural meaning, the kind we generally use to communicate with and the kind most important to consciousness.

My proposal is that we regard natural meaning as a kind of pointing, and that pointing, in turn, is the recognition of a higher-level entity that links the pointer with the target. Seeing dark clouds and feeling raindrops on your head are two parts of the recognisable over-arching entity of a rain-storm. Spots are just part of the larger entity of measles. So our basic ability to deal with meanings is simply a consequence of our ability to recognise things at different levels.
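
A toy way of picturing the proposal (my own sketch, with an invented part/whole table): if recognition supplies a map from parts to the larger wholes they belong to, then the natural meaning of a sign is simply the whole it belongs to, along with that whole's other recognisable parts.

```python
# Toy sketch: natural meaning as pointing via a shared higher-level entity.
PART_OF = {
    "dark clouds":       "rain-storm",
    "raindrops on head": "rain-storm",
    "spots":             "measles",
    "fever":             "measles",
}

def naturally_means(pointer: str) -> set:
    """A sign 'points to' the larger whole it is part of, and thereby
    to that whole's other recognisable parts."""
    whole = PART_OF.get(pointer)
    if whole is None:
        return set()
    siblings = {part for part, w in PART_OF.items() if w == whole and part != pointer}
    return {whole} | siblings

print(naturally_means("dark clouds"))  # {'rain-storm', 'raindrops on head'}
```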

Looking at it that way, it’s easy enough to see how we could build derived intentionality, the sort that words and symbols have; the difference is just that the higher-level entities we need to link everything up are artificial, supplied by convention or shared understanding: the words of a language, the conventions of a map. Clouds and water on my head are linked by the natural phenomenon of rain: the word ‘rain’ and water on my head are linked by the prodigious vocabulary table of the language. We can imagine how such conventions might grow up through something akin to a game of charades; I use a truncated version of a digging gesture to invite my neighbour to help with a hole: he gets it because he recognises that my hand movements could be part of the larger entity of digging. After a while the grunt I usually do at the same time becomes enough to convey the notion of digging.

External communication is useful, but this faculty of recognising wholes for parts and parts for wholes enables me to support more ambitious cognitive processes too, and make a bid for the original (aka ‘intrinsic’) intentionality that characterises my own thoughts, desires and beliefs. I start off with simple behaviour patterns in which recognising an object stimulates the appropriate behaviour; now I can put together much more complex stuff. I recognise an apple; but instead of just eating it, I recognise the higher entity of an apple tree; from there I recognise the long cycle of tree growth, then the early part in which a seed hits the ground; and from there I recognise that the apple in my hand could yield the pips required, which are recognisably part of a planting operation I could undertake myself…
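
That chain from apple to planting can be pictured as a walk through a part/whole graph, each step moving up from a part to a recognised whole or down from a whole to another of its parts. A rough sketch, with a made-up graph:

```python
# Rough sketch: chains of recognition as paths in a part/whole graph.
# Edges and node names are invented for illustration.
from collections import deque

EDGES = {
    "apple":          {"apple tree", "pips"},
    "apple tree":     {"apple", "growth cycle"},
    "growth cycle":   {"apple tree", "seed in ground"},
    "seed in ground": {"growth cycle", "planting"},
    "pips":           {"apple", "planting"},
    "planting":       {"pips", "seed in ground"},
}

def chain_of_recognitions(start: str, goal: str):
    """Breadth-first search: the route of part/whole recognitions that
    leads from an immediate stimulus to a possible plan."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in EDGES.get(path[-1], set()) - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

print(chain_of_recognitions("apple", "planting"))  # ['apple', 'pips', 'planting']
```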

So I am able to respond, not just to immediate stimuli, but to think about future apples that don’t even exist yet and shape my behaviour towards them. Plans that come out of this kind of process can properly be called intentional (I thought about what I was doing) and the fact that they seem to start with my thoughts, not simply with external stimuli, is what justifies our sense of responsibility and free will. In my example there’s still an external apple that starts the chain of thought, but I could have been ruminating for hours and the actions that result might have no simple relationship to any recent external stimulus.

We can move things up another notch if I begin, as it were, to grunt internally. From the digging grunt and similar easy starts, I can put together a reasonable kind of language which works not only on my friends, but on me if I silently recognise the digging grunt and use it to pose to myself the concept of excavation.

There’s more. In effect, when I think, I am moving through the forest of hierarchical relationships subserved by recognition. This forest has an interesting property. Although it is disorderly and extremely complex, it automatically arranges itself so that things I perceive as connected in any way are indeed linked. This means it serves me as a kind of relevance space, where the things I may need to think about are naturally grouped and linked. This helps explain how the human brain is so good at dealing with the inexhaustible: it naturally (not infallibly) tends to keep the most salient things close.

In the end then, human style thought and human style consciousness (or at any rate the Easy Problem kind) seem to be a large and remarkably effective re-purposing of our basic faculty of recognition. By moving from parts to whole to other parts and then to other wholes, I can move through a conceptual space in a uniquely detached but effective way.

That’s a very compressed version of thoughts that probably need a more gentle introduction, but I hope it makes some sense. On to haecceity!

Personhood Week

Personhood Week at National Geographic is a nice set of short pieces briefly touring the crucial but controversial question of what constitutes a person.

You won’t be too surprised to hear that in my view personhood is really all about consciousness. The core concept for me is that a person is a source of intentions – intentions in the ordinary everyday sense rather than in the fancy philosophical sense of intentionality (though that too).  A person is an actual or potential agent, an entity that seeks to bring about deliberate outcomes. There seems to be a bit of a spectrum here; at the lower level it looks as if some animals have thoughtful and intentional behaviour of the kind that would qualify them for a kind of entry-level personhood. At its most explicit, personhood implies the ability to articulate complicated contracts and undertake sophisticated responsibilities: this is near enough the legal conception. The law, of course, extends the idea of a person beyond mere human beings, allowing a form of personhood to corporate entities, which are able to make binding agreements, own property, and even suffer criminal liability. Legal persons of this kind are obviously not ‘real’ ones in some sense, and I think the distinction corresponds with the philosophical distinction between original (or intrinsic, if we’re bold) and derived intentionality. The latter distinction comes into play mainly when dealing with meaning. Books and pictures are about things, they have meanings and therefore intentionality, but their meaningfulness is derived: it comes only from the intentions of the people who interpret them, whether their creators or their ‘audience’.  My thoughts, by contrast, really just mean things, all on their own and however anyone interprets them: their intentionality is original or intrinsic.

So, at least, most people would say (though others would energetically contest that description). In a similar way my personhood is real or intrinsic: I just am a person; whereas the First Central Bank of Ruritania has legal personhood only because we have all agreed to treat it that way. Nevertheless, the personhood of the Ruritanian Bank is real (hypothetically, anyway; I know Ruritania does not exist – work with me on this), unlike that of, say, the car Basil Fawlty thrashed with a stick, which is merely imaginary and not legally enforceable.

Some, I said, would contest that picture: they might argue that ‘a source of intentions’ makes no sense because ‘people’ are not really sources of anything; that we are all part of the universal causal matrix and nothing comes of nothing. Really, they would say, our own intentions are just the same as those of Banca Prima Centrale Ruritaniae; it’s just that ours are more complex and reflexive – but the fact that we’re deeming ourselves to be people doesn’t make it any the less a matter of deeming. I don’t think that’s quite right – just because intentions don’t feature in physics doesn’t mean they aren’t rational and definable entities – but in any case it surely isn’t a hit against my definition of personhood; it just means there aren’t really any people.

Wait a minute, though. Suppose Mr X suffers a terrible brain injury which leaves him incapable of forming any intentions (whether this is actually possible is an interesting question: there are some examples of people with problems that seem like this; but let’s just help ourselves to the hypothesis for the time being). He is otherwise fine: he does what he’s told and if supervised can lead a relatively normal-seeming life. He retains all his memories, he can feel normal sensations, he can report what he’s experienced, he just never plans or wants anything. Would such a man no longer be a person?

I think we are reluctant to say so because we feel that, contrary to what I suggested above, agency isn’t really necessary, only conscious experience. We might have to say that Mr X loses his legal personhood in some senses; we might no longer hold him responsible or accept his signature as binding, rather in the way that we would do for a young child: but he would surely retain the right to be treated decently, and to kill or injure him would be the same crime as if committed against anyone else. Are we tempted to say that there are really two grades of personhood that happen to coincide in human beings, a kind of ‘Easy Problem’ agent personhood on the one hand and a ‘Hard Problem’ patient personhood on the other? I’m tempted, but the consequences look severely unattractive. Two different criteria for personhood would imply that I’m a person in two different ways simultaneously, but if personhood is anything, it ought to be single, shouldn’t it? Intuitively and introspectively it seems that way. I’d feel a lot happier if I could convince myself that the two criteria cannot be separated, that Mr X is not really possible.

What about Robot X? Robot X has no intentions of his own and he also has no feelings. He can take in data, but his sensory system is pretty simple and we can be pretty sure that we haven’t accidentally created a qualia-experiencing machine. He has no desires of his own, not even a wish to serve, or avoid harming human beings, or anything like that. Left to himself he remains stationary indefinitely, but given instructions he does what he’s told: and if spoken to, he passes the Turing Test with flying colours. In fact, if we ask him to sit down and talk to us, he is more than capable of debating his own personhood, showing intelligence, insight, and understanding at approximately human levels. Is he a person? Would we hesitate over switching him off or sending him to the junk yard?

Perhaps I’m cheating. Robot X can talk to us intelligently, which implies that he can deal with meanings. If he can deal with meanings, he must have intentionality, and if he has that perhaps he must, contrary to what I said, be able to form intentions after all – so perhaps the conditions I stipulated aren’t possible after all? And then, how does he generate intentions, as a matter of fact? I don’t know, but on one theory intentionality is rooted in desires or biological drives. The experience of hunger is just primally about food, and from that kind of primitive aboutness all the fancier kinds are built up. Notice that it’s the experience of hunger, so arguably if you had no feelings you couldn’t get started on intentionality either! If all that is right, neither Robot X nor Mr X is really as feasible as they might seem: but it still seems a bit worrying to me.

CEMI and meaning

Johnjoe McFadden has followed up the paper on his conscious electromagnetic information (CEMI) field, which we discussed recently, with another in the JCS – it’s also featured on MLU, where you can access a copy.

This time he boldly sets out to tackle the intractable enigma of meaning. Well, actually, he says his aims are more modest; he believes there is a separate binding problem which affects meaning and he wants to show how the CEMI field offers the best way of resolving it. I think the problem of meaning is one of those issues it’s difficult to sidle up to; once you’ve gone into the dragon’s lair you tend to have to fight the beast even if all you set out to do was trim its claws; and I think McFadden is perhaps drawn into offering a bit more than he promises; nothing wrong with that, of course.

Why, then, does McFadden suppose there is a binding problem for meaning? The original binding problem is to do with perception. All sorts of impulses come into our heads through different senses and get processed in different ways, in different places, and at different speeds. Yet somehow out of these chaotic inputs the mind binds together a beautifully coherent sense of what is going on, everything matching and running smoothly with no lags or failures of lip-synch. This smoothly co-ordinated experience is robust, too; it’s not easy to trip it up in the way optical illusions so readily derail our visual processes. How is this feat pulled off? There is a range of answers on offer, including global workspaces and suggestions that the whole thing is a misconceived pseudo-problem; but I’ve never previously come across the suggestion that meaning suffers a similar issue.

McFadden says he wants to talk about the phenomenology of meaning. After sitting quietly and thinking about it for some time, I’m not at all sure, on the basis of introspection, that meaning has any phenomenology of its own, though no doubt when we mean things there is usually some accompanying phenomenology going on. Is there something it is like to mean something? What these perplexing words seem to portend is that McFadden, in making his case for the binding problem of meaning, is actually going to stick quite closely with perception. There is clearly a risk that he will end up talking about perception; and perception and meaning are not at all the same. For one thing the ‘direction of fit’ is surely different; to put it crudely, perception is primarily about the world impinging on me, whereas meaning is about me pointing at the world.

McFadden gives five points about meaning. The first is unity; when we mean a chair, we mean the whole thing, not its parts. That’s true, but why is it problematic? McFadden talks about how the brain deals with impossible triangles and sees words rather than collections of letters, but that’s all about perception; I’m left not seeing the problem so far as meaning goes. The second point is context-dependence. McFadden quite rightly points out that meaning is highly context sensitive and that the same sequence of letters can mean different things on different occasions. That is indeed an interesting property of meaning; but he goes on to talk about how meanings are perceived, and how, for example, the meaning of “ball” influences the way we perceive the characters 3ALL. Again we’ve slid into talking about perception.

With the third point, I think we fare a bit better; this is compression, the way complex meanings can be grasped in a flash. If we think of a symphony, we think, in a sense, of thousands of notes that occur over a lengthy period, but it takes us no time at all. This is true, and it does point to some issue around parts and wholes, but I don’t think it quite establishes McFadden’s point. For there to be a binding problem, we’d need to be in a position where we had to start with meaning all the notes separately and then triumphantly bind them together in order to mean the symphony as a whole – or something of that kind, at any rate. It doesn’t work like that; I can easily mean Mahler’s eighth symphony (see, I just did it), of whose notes I know nothing, or his twelfth, which doesn’t even exist.

Fourth is emergence: the whole is more than the sum of its parts. The properties of a triangle are not just the properties of the lines that make it up. Again, it’s true, but the influence of perception is creeping in; when we see a triangle we know our brain identifies the lines, but we don’t know that in the case of meaning a triangle we need at any stage to mean the separate lines – and in fact that doesn’t seem highly plausible. The fifth and last point is interdependence: changing part of an object may change the percept of the whole, or I suppose we should be saying, the meaning. It’s quite true that changing a few letters in a text can drastically change its meaning, for example. But again I don’t see how that involves us in a binding problem. I think McFadden is typically thinking of a situation where we ask ourselves ‘what’s the meaning of this diagram?’ – but that kind of example invites us to think about perception more than meaning.

In short, I’m not convinced that there is a separate binding problem affecting meaning, though McFadden’s observations shed some interesting lights on the old original issue. He does go on to offer us a coherent view of meaning in general. He picks up a distinction between intrinsic and extrinsic information. Extrinsic information is encoded or symbolised according to arbitrary conventions – it sort of corresponds with derived intentionality – so a word, for example, is extrinsic information about the thing it names. Intrinsic information is the real root of the matter and it embodies some features of the thing represented. McFadden gives the following definition.

Intrinsic information exists whenever aspects of the physical relationships that exist between the parts of an object are preserved – either in the original object or its representation.

So the word “car” is extrinsic and tells you nothing unless you can read English. A model of a car, or a drawing, has intrinsic information because it reproduces some of the relations between parts that apply in the real thing, and even aliens would be able to tell something about a car from it (or so McFadden claims). It follows that for meaning to exist in the brain there must be ‘models’ of this kind somewhere. (McFadden allows a little bit of wiggle room; we can express dimensions as weights, say, so long as the relationships are preserved, but in essence the whole thing is grounded in what some others might call ‘iconic’ representation.) Where could that be? The obvious place to look is in the neurons; but although McFadden allows that firing rates in a pattern of neurons could carry the information, he doesn’t see how they can be brought together: step forward the CEMI field (though as I said previously I don’t really understand why the field doesn’t just smoosh everything together in an unhelpful way).
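
A toy way to make the definition concrete (my own example, not McFadden's): a scale model carries intrinsic information about a car if the relationships between its parts, here the ratios of pairwise distances, match those of the original, whatever the overall scale; the word 'car' preserves no such relations and so carries only extrinsic information.

```python
# Toy illustration of intrinsic vs extrinsic information: a representation is
# 'intrinsic' here if it preserves the ratios of distances between an object's parts.
from itertools import combinations
from math import dist, isclose

car   = {"front wheel": (0.0, 0.0), "rear wheel": (2.5, 0.0), "roof": (1.2, 1.4)}
model = {part: (x * 0.01, y * 0.01) for part, (x, y) in car.items()}  # 1:100 scale model
word  = "car"  # extrinsic: an arbitrary label, no part/whole structure preserved

def distance_ratios(points: dict) -> list:
    pairs = list(combinations(sorted(points), 2))
    distances = [dist(points[a], points[b]) for a, b in pairs]
    return [d / distances[0] for d in distances]

def preserves_relations(original: dict, representation: dict) -> bool:
    return all(isclose(a, b) for a, b in zip(distance_ratios(original),
                                             distance_ratios(representation)))

print(preserves_relations(car, model))  # True: the model carries intrinsic information
```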

The overall framework here is sensible and it clearly fits with the rest of the theory; but there are two fatal problems for me. The first is that, as discussed above, I don’t think McFadden succeeds in making the case for a separate binding problem of meaning, getting dragged back by the gravitational pull of perception. We have the original binding problem because we know perception starts with a jigsaw kit of different elements and produces a slick unity, whereas all the worries about parts seem unmotivated when it comes to meaning. If there’s no new binding problem of meaning, then the appeal of CEMI as a means of solving it is obviously limited.

The second problem is that his account of meaning doesn’t really cut the mustard. This is unfair, because he never said he was going to solve the whole problem of meaning, but if this part of the theory is weak it inevitably damages the rest. The problem is that representations that work because they have some of the properties of the real thing don’t really work. For one thing, a glance at the definition above shows it is inherently limited to things with parts that have a physical relationship. We can’t deal with abstractions at all. If I tell you I know why I’m writing this, and you ask me what I mean, I can’t tell you I mean my desire for understanding, because my desire for understanding does not have parts with a physical relationship, and there cannot therefore be intrinsic information about it.

But it doesn’t even work for physical objects. McFadden’s version of intrinsic information would require that when I think ‘car’ it’s represented as a specific shape and size. In discussing optical illusions he concedes at a late stage that it would be an ‘idealised’ car (that idealisation sounds problematic in itself); but I can mean ‘car’ without meaning anything ideal or particular at all. By ‘car’ I can in fact mean a flying vehicle with no wheels, made of butter, and one centimetre long (that tiny midge is going to regret settling in my butter dish as he takes his car ride into the bin of oblivion courtesy of a flick from my butter knife), something that does not in any way share parts with physical relationships which are the same as any of those applying to the big metal thing in the garage.

Attacking that flank, as I say, probably is a little unfair. I don’t think the CEMI theory is going to get new oomph from the problems of meaning, but anyone who puts forward a new line of attack on any aspect of that intractable issue deserves our gratitude.