Johnjoe McFadden has followed up the paper on his conscious electromagnetic information (CEMI) field, which we discussed recently, with another in the JCS – it’s also featured on MLU, where you can access a copy.

This time he boldly sets out to tackle the intractable enigma of meaning. Well, actually, he says his aims are more modest; he believes there is a separate binding problem which affects meaning and he wants to show how the CEMI field offers the best way of resolving it. I think the problem of meaning is one of those issues it’s difficult to sidle up to; once you’ve gone into the dragon’s lair you tend to have to fight the beast even if all you set out to do was trim its claws; and I think McFadden is perhaps drawn into offering a bit more than he promises; nothing wrong with that, of course.

Why, then, does McFadden suppose there is a binding problem for meaning? The original binding problem is to do with perception. All sorts of impulses come into our heads through different senses and get processed in different ways, in different places, and at different speeds. Yet somehow out of these chaotic inputs the mind binds together a beautifully coherent sense of what is going on, everything matching and running smoothly with no lags or failures of lip-synch. This smoothly co-ordinated experience is robust, too; it’s not easy to trip it up in the way optical illusions so readily derail our visual processes. How is this feat pulled off? There is a range of answers on offer, including global workspaces and suggestions that the whole thing is a misconceived pseudo-problem; but I’ve never previously come across the suggestion that meaning suffers a similar issue.

McFadden says he wants to talk about the phenomenology of meaning. After sitting quietly and thinking about it for some time, I’m not at all sure, on the basis of introspection, that meaning has any phenomenology of its own, though no doubt when we mean things there is usually some accompanying phenomenology going on. Is there something it is like to mean something? What these perplexing words seem to portend is that McFadden, in making his case for the binding problem of meaning, is actually going to stick quite closely with perception. There is clearly a risk that he will end up talking about perception; and perception and meaning are not at all the same. For one thing the ‘direction of fit’ is surely different; to put it crudely, perception is primarily about the world impinging on me, whereas meaning is about me pointing at the world.

McFadden gives five points about meaning. The first is unity; when we mean a chair, we mean the whole thing, not its parts. That’s true, but why is it problematic? McFadden talks about how the brain deals with impossible triangles and sees words rather than collections of letters, but that’s all about perception; I’m left not seeing the problem so far as meaning goes. The second point is context-dependence. McFadden quite rightly points out that meaning is highly context sensitive and that the same sequence of letters can mean different things on different occasions. That is indeed an interesting property of meaning; but he goes on to talk about how meanings are perceived, and how, for example, the meaning of “ball” influences the way we perceive the characters 3ALL. Again we’ve slid into talking about perception.

With the third point, I think we fare a bit better; this is compression, the way complex meanings can be grasped in a flash. If we think of a symphony, we think, in a sense, of thousands of notes that occur over a lengthy period, but it takes us no time at all. This is true, and it does point to some issue around parts and wholes, but I don’t think it quite establishes McFadden’s point. For there to be a binding problem, we’d need to be in a position where we had to start with meaning all the notes separately and then triumphantly bind them together in order to mean the symphony as a whole – or something of that kind, at any rate. It doesn’t work like that; I can easily mean Mahler’s eighth symphony (see, I just did it), of whose notes I know nothing, or his twelfth, which doesn’t even exist.

Fourth is emergence: the whole is more than the sum of its parts. The properties of a triangle are not just the properties of the lines that make it up. Again, it’s true, but the influence of perception is creeping in; when we see a triangle we know our brain identifies the lines, but we don’t know that in the case of meaning a triangle we need at any stage to mean the separate lines – and in fact that doesn’t seem highly plausible. The fifth and last point is interdependence: changing part of an object may change the percept of the whole, or I suppose we should be saying, the meaning. It’s quite true that changing a few letters in a text can drastically change its meaning, for example. But again I don’t see how that involves us in a binding problem. I think McFadden is typically thinking of a situation where we ask ourselves ‘what’s the meaning of this diagram?’ – but that kind of example invites us to think about perception more than meaning.

In short, I’m not convinced that there is a separate binding problem affecting meaning, though McFadden’s observations shed some interesting light on the original issue. He does go on to offer us a coherent view of meaning in general. He picks up a distinction between intrinsic and extrinsic information. Extrinsic information is encoded or symbolised according to arbitrary conventions – it sort of corresponds with derived intentionality – so a word, for example, is extrinsic information about the thing it names. Intrinsic information is the real root of the matter, and it embodies some features of the thing represented. McFadden gives the following definition.

Intrinsic information exists whenever aspects of the physical relationships that exist between the parts of an object are preserved – either in the original object or its representation.

So the word “car” is extrinsic and tells you nothing unless you can read English. A model of a car, or a drawing, has intrinsic information because it reproduces some of the relations between parts that apply in the real thing, and even aliens would be able to tell something about a car from it (or so McFadden claims). It follows that for meaning to exist in the brain there must be ‘models’ of this kind somewhere. (McFadden allows a little bit of wiggle room; we can express dimensions as weights, say, so long as the relationships are preserved, but in essence the whole thing is grounded in what some others might call ‘iconic’ representation.) Where could that be? The obvious place to look is in the neurons, but although McFadden allows that firing rates in a pattern of neurons could carry the information, he doesn’t see how they can be brought together: step forward the CEMI field (though as I said previously I don’t really understand why the field doesn’t just smoosh everything together in an unhelpful way).
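To make the definition concrete, here is a toy sketch of one way of reading it (my own illustration, not McFadden’s; all the names and coordinates are invented): a scale model carries intrinsic information because it preserves the ratios of the distances between an object’s labelled parts, whereas an arbitrary arrangement – like the arbitrary word “car” – preserves nothing of the kind.

```python
import itertools

def pairwise_distances(points):
    """Euclidean distances between every pair of labelled parts."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return {
        (p, q): dist(points[p], points[q])
        for p, q in itertools.combinations(sorted(points), 2)
    }

def preserves_relations(original, representation, tol=1e-6):
    """True if the representation preserves the *ratios* of part-to-part
    distances, i.e. it is a uniform rescaling of the original object."""
    d1 = pairwise_distances(original)
    d2 = pairwise_distances(representation)
    if d1.keys() != d2.keys():
        return False
    ratios = [d2[k] / d1[k] for k in d1]
    return max(ratios) - min(ratios) < tol

# A very crude "car": three labelled parts in 2-D space (invented data).
car = {"front_wheel": (0, 0), "rear_wheel": (3, 0), "roof": (1.5, 1.2)}

# A 1:10 scale model: the relations between parts survive, so on this
# reading it carries intrinsic information about the car.
model = {p: (x / 10, y / 10) for p, (x, y) in car.items()}

# A scrambled layout: same part labels, relations not preserved.
scrambled = {"front_wheel": (0, 0), "rear_wheel": (1, 5), "roof": (7, 2)}

print(preserves_relations(car, model))      # True
print(preserves_relations(car, scrambled))  # False
```

Distance ratios are only one choice of “physical relationship”, of course; McFadden’s wiggle room about expressing dimensions as weights suggests any consistent mapping that preserves the relational structure would count.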

The overall framework here is sensible and it clearly fits with the rest of the theory; but there are two fatal problems for me. The first is that, as discussed above, I don’t think McFadden succeeds in making the case for a separate binding problem of meaning, getting dragged back by the gravitational pull of perception. We have the original binding problem because we know perception starts with a jigsaw kit of different elements and produces a slick unity, whereas all the worries about parts seem unmotivated when it comes to meaning. If there’s no new binding problem of meaning, then the appeal of CEMI as a means of solving it is obviously limited.

The second problem is that his account of meaning doesn’t really cut the mustard. This is unfair, because he never said he was going to solve the whole problem of meaning, but if this part of the theory is weak it inevitably damages the rest. The problem is that representations which work because they have some of the properties of the real thing don’t really work. For one thing a glance at the definition above shows it is inherently limited to things with parts that have a physical relationship. We can’t deal with abstractions at all. If I tell you I know why I’m writing this, and you ask me what I mean, I can’t tell you I mean my desire for understanding, because my desire for understanding does not have parts with a physical relationship, and there cannot therefore be intrinsic information about it.

But it doesn’t even work for physical objects. McFadden’s version of intrinsic information would require that when I think ‘car’ it’s represented as a specific shape and size. In discussing optical illusions he concedes at a late stage that it would be an ‘idealised’ car (that idealisation sounds problematic in itself); but I can mean ‘car’ without meaning anything ideal or particular at all. By ‘car’ I can in fact mean a flying vehicle with no wheels made of butter and one centimetre long (that tiny midge is going to regret settling in my butter dish as he takes his car ride into the bin of oblivion courtesy of a flick from my butter knife), something that does not in any way share parts with physical relationships which are the same as any of those applying to the big metal thing in the garage.

Attacking that flank, as I say, probably is a little unfair. I don’t think the CEMI theory is going to get new oomph from the problems of meaning, but anyone who puts forward a new line of attack on any aspect of that intractable issue deserves our gratitude.

13 Comments

  1. scott bakker says:

    Meaning *holism* (internal relationality) is what he’s after, and I think he has a point, he’s just lacking the conceptual machinery to see it through. Regarding perception and conception, I’m not sure how to make a principled distinction anymore, so I have no problem with his argument in this regard. Otherwise, take your critique of intrinsic versus extrinsic information:

    “For one thing a glance at the definition above shows it is inherently limited to things with parts that have a physical relationship. We can’t deal with abstractions at all. If I tell you I know why I’m writing this, and you ask me what I mean, I can’t tell you I mean my desire for understanding, because my desire for understanding does not have parts with a physical relationship, and there cannot therefore be intrinsic information about it.”

    First, from a rigidly materialist standpoint, what you say here must be false. Clearly your ‘desire for understanding’ does have parts with physical relationships. My understanding of your desire in all likelihood turns on my brain’s ability to ‘mirror’ your own. And it’s important to note that ‘isomorphism’ (or ‘mirroring’) isn’t what’s crucial in this relation between mechanisms so much as *systematicity.* Strip away all the representational claptrap and we’re simply talking mechanisms tracking mechanisms, in some cases relying on (heuristic) isomorphisms, but in many more cases relying on dynamic (heuristic) systematicities.

    The question is, What ‘lights up’ all these *externally related* mechanisms such that we experience *internally related* meanings? It’s the hard problem, in effect, or very close to it. All McFadden is saying is that his and other EMF theories actually offer the beginning of a plausible solution to this problem, one that seems to be finding more and more empirical support.

    So my own position, for instance, concentrates on the information asymmetry you find between external relationality and internal relationality. It takes more information/computational power to cognize the former than the latter, so the Blind Brain Theory simply asks why we think the latter is the miraculous achievement. Of course consciousness as metacognized is ‘unified.’ The same way Aristotle, lacking astronomical information, assumed the stars were set in a single sphere, metacognition, lacking neurological information, assumes experience is singular.

    This is a parsimonious enough explanation, except that it says nothing about why we should have any experience in the first place! It explains (away) a certain aspect of the way consciousness appears to metacognition (by simply ‘following the information’) without explaining consciousness itself. Why, in other words, should the absence of information result in what might be called ‘default identity’? Why do we confuse, always and everywhere, aggregates with individuals in the absence of information otherwise? CEMI actually offers a way to begin to make sense of this, the ‘lighting up’ part, in effect. As McFadden says, fields are nonlocal. It could be the field provides the unitary canvas, which the myriad mechanical informational transactions of the brain dimple with differentiations, thus stranding us with the low-dimensional welter we confuse for the first person.

  2. Arnold Trehub says:

    Scott: “CEMI actually offers a way to begin to make sense of this, the ‘lighting up’ part, in effect. As McFadden says, fields are nonlocal. It could be the field provides the unitary canvas, which the myriad mechanical informational transactions of the brain dimple with differentiations, thus stranding us with the low-dimensional welter we confuse for the first person.”

    All active neurons generate a local electromagnetic field in the brain. But some parts of the brain are clearly not necessary for conscious experience, and do not appear to give rise to conscious experience. Also, lower organisms with EM fields, such as Aplysia, give no evidence of being conscious. So whereas an electromagnetic field in the brain might be necessary for consciousness, it does not seem to be sufficient.

    Two questions: What do you mean by “lighting up” with respect to the brain? And why should a “low-dimensional welter” give us a first-person experience?

  3. scott bakker says:

    Arnold: “What do you mean by “lighting up” with respect to the brain? And why should a “low-dimensional welter” give us a first-person experience?”

    What allows configurations of matter to generate experience.

    Because many peculiarities that make first-person experience (as metacognized) so refractory to mechanistic explanation can be parsimoniously interpreted in terms of what information we might plausibly assume metacognition to have available.

  4. Annon says:

    If Trehub doesn’t show up soon to answer this I am sending out the national guard!!

  5. Christophe Menant says:

    The enigma of meaning looks indeed intractable when we talk about human meanings, where the mystery of human consciousness sits in the background.
    But animals also manage meanings. And there things are easier to deal with, as we do not have to cope with the mystery of human consciousness (the mystery about the nature of life looks simpler…).
    The sight of a cat is a source of meaning for a mouse. The meaning “Danger” is generated because the mouse has a “stay alive” constraint to satisfy. And that meaning will initiate an action (hide or run away).
    And this brings us to a simple model for meaning generation by a system submitted to an internal constraint where:
    “a meaning is meaningful information that is created by a system submitted to a constraint when it receives external information that has a connection with the constraint.
    The meaning is formed of the connection existing between the received information and the constraint of the system.
    The function of the meaning is to participate to the determination of an action that will be implemented in order to satisfy the constraint of the system”.
    (short paper at http://crmenant.free.fr/ResUK/MGS.pdf).
    This simple model for meaning generation can be used for animals, humans and robots, assuming we correctly identify the constraints. And this is the difficult part for human meanings as we face again the unknown nature of human consciousness. But take a constraint like “look for happiness” and you can get to many human meanings….
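Read as a control loop, the model can be sketched in a few lines (a hypothetical reading of mine – the class and dictionary names below are not from Menant’s paper): the system holds an internal constraint, a received item of information generates a meaning only if it connects with that constraint, and the meaning then selects an action that serves the constraint. The mouse/cat case serves as the worked example.

```python
class MeaningGeneratorSystem:
    """Toy reading of Menant's MGS: a system submitted to an internal
    constraint generates a meaning when incoming information connects
    with that constraint, and the meaning determines an action."""

    def __init__(self, constraint, connections, actions):
        self.constraint = constraint      # e.g. "stay alive"
        self.connections = connections    # information -> meaning, relative to the constraint
        self.actions = actions            # meaning -> action satisfying the constraint

    def receive(self, information):
        # A meaning is the connection between the received information
        # and the internal constraint; unconnected information stays
        # meaningless for this system.
        meaning = self.connections.get(information)
        if meaning is None:
            return None, None
        return meaning, self.actions.get(meaning)

# The mouse: a "stay alive" constraint; the sight of a cat connects with it.
mouse = MeaningGeneratorSystem(
    constraint="stay alive",
    connections={"sight of cat": "danger"},
    actions={"danger": "run away"},
)

print(mouse.receive("sight of cat"))   # ('danger', 'run away')
print(mouse.receive("sight of leaf"))  # (None, None)
```

Note how much the sketch leaves open, which is exactly where the discussion below presses: the connection table is supplied from outside, so on Menant’s own terms this robot-like system would have only derived, not intrinsic, constraints.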

  6. Vicente says:

    Christophe,

    “The function of the meaning is to participate to the determination of an action that will be implemented in order to satisfy the constraint of the system”.

    It could just be an instinctive subconscious response, with some emotional content, e.g. intense fear. I believe some additional requirements are needed to put meaning in place.

  7. scott bakker says:

    Christophe: How does your account solve the symbol grounding problem?

    The worry I have is simply that of equivocation: unless an account actually resolves the issues that make meaning so problematic, the suspicion is that you are not so much naturalizing meaning as simply attaching the term to a bunch of naturalistic processes.

  8. Christophe Menant says:

    Vicente,
    You are right.
    Meaning generation for humans can be conscious or unconscious (or mixed most of the time).
    For us humans, most of our conscious actions depend on free will type of process. But not entirely, as conscious actions can have unconscious motivations cohabiting with self-consciousness & free will (based on Id & Superego in Freudian terms). Defining human constraints for the MGS process has not really been done so far. We can just propose “look for happiness”, “valorize ego”, “limit anxiety”, …
    For animals (not being self-conscious), meaning generation for constraint satisfaction is of an unconscious type. The constraints to be satisfied can be summarized as “stay alive” (individual & species) and “live group life”. We humans are also submitted to these animal constraints. But we can manage them partly on a self-conscious basis, with free will.
    The Meaning Generator System allows an evolutionary approach to meaning generation with what we know (and don’t know) about animals and humans. The key point is about the identification of constraints. The advantage of the MGS is in the system approach: it can be used for any system (animals, humans and robots). And it allows addressing intrinsic or derived aspect of constraints and meanings.
    If you have more time, have a look at http://crmenant.free.fr/2009BookChapter/10-Menant.pdf

  9. Christophe Menant says:

    Scott,
    The grounding made possible by the MGS is to show how meaning generation links a system to its environment. The Symbol Grounding Problem as formulated by S. Harnad is about the possibility for artificial agents to attribute intrinsic meanings to words or symbols. The MGS does not solve the SGP. But it can be used as a tool for an analysis of the intrinsic aspect of the generated meaning. The MGS defines a meaning for a system submitted to an internal constraint as being the connection existing between the constraint and the information received from the environment. The intrinsic aspect of the generated meaning results from the intrinsicness of the constraint. In order to generate an intrinsic meaning, an agent has to be submitted to an intrinsic constraint.
    Putting aside metaphysical perspectives, we can say that the performance of meaningful information generation appeared on earth with the first living entities. Life is submitted to an intrinsic and local “stay alive” constraint that is not present in the material world and exists only where life is. As today’s artificial agents are made with material elements, they cannot generate intrinsic meanings because they do not contain intrinsic constraints (they contain only derived constraints coming from the designers). So the semantic interpretation of meaningless symbols cannot be intrinsic to artificial agents. It looks like the SGP cannot have a solution in the world of today’s artificial agents….

  10. scott bakker says:

    I fear, then, that this simply muddies the waters, rather than clarifies anything. Unless you’re clear that you are simply stipulating a definition, people will assume that you are actually tackling the *problem* of meaning.

    Otherwise, the fact that brains systematically engage environments via sensorimotor loops, the parameters of which are governed (constrained) by onboard information kind of goes without saying, doesn’t it? Why invoke the term ‘meaning’ at all?

    Another thing I find puzzling: what could possibly distinguish ‘derived’ from ‘original’ meaning on your account, given that artificial and biological information processing systems just do what they do at any given moment, regardless of causal history? On your account, would the discovery that God created life mean that only God possesses ‘original meaning’? What, in naturalistic terms, would this property of ‘originality’ consist in?

  11. Christophe Menant says:

    Brains indeed systematically engage environments via sensorimotor loops. But the point has to be addressed very differently if we are talking about human or animal brains.
    The notion of meaning is most of the time addressed relative to human language – meaning for human brains (http://plato.stanford.edu/entries/meaning/#NonProThe, http://mdpi.muni.cz/entropy/papers/e5020125.pdf). Taking humans as a starting point makes the notion of meaning pretty complex because humans are self-conscious and capable of free will. Performances that we do not understand. So the choice to ease the problem by investigating the notion of meaning at a less complex level where we do not have to take into account these mysterious performances: the animal level. This leads us to a simple model for meaning generation (the MGS). The next step is then to see if and how the proposed model can be used also for humans and for machines. Hence the focus on internal constraints, which characterize meaning generation.
    And this brings us to the derived vs intrinsic constraints and meanings (I avoid using ‘original’ as it is used for intentionality. A complex subject that can be avoided here). Intrinsic constraint is to be understood as ‘constraint related to the nature of the agent and not resulting from an outside action on the agent’. This in order to differentiate living entities (animals & humans) from artificial systems. ‘Stay alive’ or ‘look for happiness’ are intrinsic to animal and human natures. If we program a robot with the constraint ‘avoid obstacles’ or ‘keep battery charged’, the constraints come from us, from outside the robot. They are derived constraints. Same for the resulting meanings.

  12. scott bakker says:

    Christophe:

    “Taking humans as a starting point makes the notion of meaning pretty complex because humans are self-conscious and capable of free will. Performances that we do not understand. So the choice to ease the problem by investigating the notion of meaning at a less complex level where we do not have to take into account these mysterious performances: the animal level.”

    The problem, of course, is that humans are the inescapable starting point, given that the enterprise of cognizing meaning is a human enterprise. You can export the term ‘meaning’ to the ‘animal level,’ the same way you can name your gerbil ‘Meaning’ if you were so inclined. I appreciate that simplification is your motive, but what I’m asking for are your criteria of identity. How do you know that you are still talking about *meaning* when you leave the human behind? You may be ‘simplifying meaning’ as you hope. But then you may be talking about something entirely different under the guise of the term ‘meaning.’ What allows you to assert the former?

    Do you see the problem? Stipulating ‘Meaning = X’ is something anybody can do. Discovering ‘Meaning = X’ is a different matter entirely. If discovering the nature of meaning is what you’re after, then you need some way of showing that you aren’t simply stipulating, and thus ultimately equivocating meaning with something that it is not.

    “Intrinsic constraint is to be understood as ‘constraint related to the nature of the agent and not resulting from an outside action on the agent’. This in order to differentiate living entities (animals & humans) from artificial systems. ‘Stay alive’ or ‘look for happiness’ are intrinsic to animal and human natures. If we program a robot with the constraint ‘avoid obstacles’ or ‘keep battery charged’, the constraints come from us, from outside the robot. They are derived constraints. Same for the resulting meanings.”

    Why would constraints arising out of evolution ontologically differ from constraints arising out of manufacture? A constraint is a constraint, isn’t it? What functional difference does the history make? And if it makes no functional difference, then what’s the point of drawing the distinction?

  13. Christophe Menant says:

    Scott,
    The background of the proposed approach is an evolutionary one. We begin with the simplest. The starting point for meaning generation is not humans but animals. So we are following an evolutionary trend of increasing complexity, starting with a simple definition and a simple application for “meaning”. Human constraints come after in time and result from an evolutionary process that includes the animal constraints.
    (I’ve always been surprised by the fact that our modern philosophies (the analytic approach & phenomenology) take humans as a starting point and neglect the tools offered by evolution.)
    The first internal constraint came up in evolution with the first living entity. As said, the “stay alive” constraint exists only where life is. It has been something really new in evolution, existing only locally and very different from the physico-chemical laws that exist everywhere. For me, the notion of identity came up with life and meaning generation as characterizing and differentiating the volume of the living entity from its environment. Only that volume was submitted to the constraint, not the surroundings. The first “identity” in evolution came up with a finite volume submitted to a local and specific internal constraint.
    We can also probably put a starting point for the notion of autonomy with the coming up in evolution of a local constraint to be satisfied. To satisfy the “stay alive” constraint, the volume had to “invent” and implement the needed actions. And these actions had to be developed by the living entity. Nothing in the environment requested them. As the “stay alive” constraint is teleological, we can probably consider that, in a potentially hostile environment, teleology (and/with meaning generation) => autonomy. But this subject may deserve more developments.
    And regarding robotic constraints, they come from outside the robot, from us humans. They are different from the “stay alive” constraint that has emerged from matter during evolution. And I would agree with you: choosing the right scale of observation, we can say that both types of constraint are the result of evolution: matter – life (stay alive constraint) – humans (robot constraints).
    But it is probably psychologically more comfortable to differentiate our free will from a trend to increasing complexity.
    (I’ll be away for a week, so sorry for late answers)
