Jochen’s paper Von Neumann Minds: Intentional Automata has been published in Mind and Matter.

Intentionality is meaningfulness, the quality of being directed at something, aboutness. It is in my view one of the main problems of consciousness, up there with the Hard Problem but quite distinct from it; but it is often under-rated or misunderstood. I think this is largely because our mental life is so suffused with intentionality that we find it hard to see the wood for the trees; certainly I have read more than one discussion by very clever people who seemed to me to lose their way half-way through without noticing and end up talking about much simpler issues.

That is not a problem with Jochen’s paper which is admirably clear.  He focuses on the question of how to ground intentionality and in particular how to do so without falling foul of an infinite regress or the dreaded homunculus problem. There are many ways to approach intentionality and Jochen briefly mentions and rejects a few (basing it in phenomenal experience or in something like Gricean implicature, for example) before introducing his own preferred framework, which is to root meaning in action: the meaning of a symbol is, or is to be found in, the action it evokes. I think this is a good approach; it interprets intentionality as a matter of input/output relations, which is clarifying and also has the mixed blessing of exposing the problems in their worst and most intractable form. For me it recalls the approach taken by Quine to the translation problem – he of course ended up concluding that assigning certain meanings to unknown words was impossible because of radical under-determination; there are always more possible alternative meanings which cannot be eliminated by any logical procedure. Under-determination is a problem for many theories of intentionality and Jochen’s is not immune, but his aim is narrower.

The real target of the paper is the danger of infinite regress. Intentionality comes in two forms, derived on the one hand and original or intrinsic on the other. Books, words, pictures and so on have derived intentionality; they mean something because the author or the audience interprets them as having meaning. This kind of intentionality is relatively easy to deal with, but the problem is that it appears to defer the real mystery to the intrinsic intentionality in the mind of the person doing the interpreting. The clear danger is that we then go on to defer the intentionality to a homunculus, a ‘little man’ in the brain who again is the source of the intrinsic intentionality.

Jochen quotes the arguments of Searle and others who suggest that computational theories of the mind fail because the meaning, and even the existence, of a computation is a matter of interpretation; without the magic input of intrinsic intentionality from an interpreter, such theories collapse into radical under-determination. Jochen dramatises the point using an extension of Searle’s Chinese Room thought experiment in which it seems the man inside the room can really learn Chinese – but only because he has become, in effect, the required homunculus.

Now we come to the really clever and original part of the paper; Jochen draws an analogy with the problem of how things reproduce themselves. To do so it seems they must already have a complete model of themselves inside themselves… and so the problem of regress begins. It would be OK if the organism could scan itself, but a proof by Svozil seems to rule that out because of problems with self-reference. Jochen turns to the solution proposed by the great John von Neumann (a man who might be regarded as the inventor of the digital computer if Turing had never lived). Von Neumann’s solution is expressed in terms of a two-dimensional cellular automaton (very simplistically, a pattern on a grid that evolves over time according to certain rules – Conway’s Game of Life surely provides the best-known examples). By separating the functions of copying and interpretation, and distinguishing active and passive states, von Neumann managed to get round Svozil’s obstacle.
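Since the mechanics of a cellular automaton do much of the work in what follows, a minimal sketch may help readers who haven’t met one. This is my own toy illustration in Python, not anything from the paper: one update step of Conway’s Game of Life, applied to a ‘blinker’, three cells in a row that oscillate with period two.

```python
# Toy cellular automaton: one step of Conway's Game of Life.
# A live cell survives with 2 or 3 live neighbours; a dead cell
# becomes live with exactly 3.

from collections import Counter

def step(live):
    """Advance a set of live (x, y) cells by one generation."""
    # Count live neighbours for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A 'blinker': a horizontal row of three cells.
blinker = {(0, 1), (1, 1), (2, 1)}
gen1 = step(blinker)   # becomes a vertical row of three
gen2 = step(gen1)      # back to the original horizontal row
```

The point is only that rich, lawful behaviour emerges from purely local rules on a grid; von Neumann’s replicators live in a (much more complicated) world of this kind.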

Now by importing this distinction between active and passive into the question of intentionality, Jochen suggests we can escape the regress. If symbols play either an active or a passive role (in effect, as semantics or as syntax) we can have a kind of automaton which, in a clear sense, gives its own symbols their interpretation, and so escapes the regress.

This is an ingenious move. It is not a complete solution to the problem of intentionality (I think the under-determination monster is still roaming around out there), but it is a novel and very promising solution to the regress. More than that, it offers a new perspective which may well yield further insights when fully absorbed; I certainly haven’t managed to think through what the wider implications might be, but if a process so central to meaningful thought truly works in this unexpected dual way it seems there are bound to be some. For that reason, I hope the paper gets wide attention from people whose brains are better at this sort of thing than mine…

75 Comments

  1. Jochen says:

    Dear Peter, thanks for your kind words. Unfortunately, I have relatively little to comment—you’ve summarized the idea well (and charitably!), and I believe you’re also right in saying that underdetermination is a problem for it (as it is, I believe, for any similar naturalization of intentionality). There are some additional problems I’m not sure how to address yet, such as that of different modalities or attitudes towards mental content—if my construction satisfactorily explains how a physical system can be about some p, it’s still a question how one goes from there to a belief that p, as opposed to a desire that p, and so on. Also, intentional content is presented to us through several different channels (senses, memory etc.), yet those channels seem to pertain to the same content—if I see and smell an apple, and furthermore remember that I put that apple into the fruit bowl, then I’m smelling, seeing and remembering the same apple via different routes.

    One possibility to attack the first of these might be to note that believing something versus wanting something to be the case leads to different sorts of action—action consistent with the belief that p, versus action aimed at bringing about p, and so on. In a similar vein, one might surmise that the same actions (say, grabbing the apple) are triggered by different pathways upon my seeing, smelling and remembering the apple, i.e. that there is a different cause in each case (in my paper, the model assumes that the CA pattern is influenced by sensory data, which then gives rise to the particular pattern coding for some action—it might then be that two different ‘mother’ patterns, one arrived at via the smell of the apple, another via its sight, then just yield the same ‘daughter’ generation). All of this, I think, is in need of some further elaboration.

    Regarding the underdetermination, my only idea so far is to perhaps modify the notion of meaning somewhat to also include causal factors (as I implicitly already did in response to the binding issue above). There are different underdetermination problems for different accounts of intentionality: causal theories, where sensory data cues up a representation, suffer from a disjunction problem, that is, a representation of an apple is perhaps not just cued up by an apple, but also by an oddly-shaped pear. That some such thing must happen is demonstrated by our occasionally being wrong about intentional content—we think there is an apple, when in fact there is a pear. However, if what a representation represents merely is what causes that representation to be called up, then in this case, the representation would just (correctly) represent “apple or oddly-shaped pear”, thus leading to underdetermination.

    The action-based account I favor leads to underdetermination in a different way: there is not one single simple fact actions pertain to, but they always appear within a network of facts, each of whose failure would cause the action to miss its intended goal. So, for instance, if you want to make applesauce, then you need for there to be an apple; but you also need, say, for your stove to work.

    Now, if one combines causal and action-based notions (as seems compatible with my proposal: a certain sensory input causes a change in the pattern, which then gives rise to one coding for appropriate action), it seems you get two sources of underdetermination, making the problem worse—but the idea here is that the set of things picked out by the causal account and the set of things picked out by the action-based account will have a small overlap—ideally, only one element. Then, an apple would be the only thing both causally sufficient for the pattern change and necessary for the intended action, making applesauce, to meet its goal. (A pear might trigger the same pattern, but one wouldn’t be able to make applesauce from it; a functioning stove is necessary to make applesauce, but wouldn’t cause the apple-pattern to form.)
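    A toy illustration of that intersection idea (my own sketch in Python; the names are hypothetical, not from the paper): the causal account over-generates in one way, the action-based account over-generates in another, and their overlap ideally pins down a single referent.

```python
# Toy model (illustrative only) of combining causal and action-based
# constraints on what a representation picks out.

# Things that would cause the 'apple' pattern to form (causal account):
causal_candidates = {"apple", "oddly_shaped_pear"}

# Things necessary for the intended action, making applesauce,
# to succeed (action-based account):
action_candidates = {"apple", "working_stove"}

# The overlap of the two sets singles out the referent.
referent = causal_candidates & action_candidates   # {'apple'}
```

The pear triggers the pattern but cannot yield applesauce; the stove is needed for applesauce but does not trigger the pattern; only the apple satisfies both constraints.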

    Overall, I think there is some further work needed to make sure this works with my account, however.

  2. Peter says:

    All very interesting. On belief vs desire the ‘direction of fit’ might be relevant. You know this is the idea that if belief and world don’t match, you try to change the belief; if desire and world don’t match, you try to change the world.

  3. Sci says:

    Congrats Jochen! I fear I’m still trying to get a handle on the paper though Peter’s summarization definitely helps so thanks to him for posting this.

  4. Jochen says:

    Peter, I’ve been thinking about how that notion might play into my considerations—it’s perhaps not quite straightforward, since my meanings are cashed out by actions in the world both for beliefs and desires. I also struggle with a bit of an asymmetry between the two directions of fit that it seems to me is often glossed over—in a sense, some belief is a precondition for a desire, namely the belief that something isn’t the case (which could, in principle, also be wrong). So the mind-to-world direction seems to depend on the world-to-mind direction, but not vice versa—we desire to change the world as we believe it to be, but what we desire has no influence on how the world is. But I need to collect my thoughts to make some more sense here.

    Sci, thanks! If you have any questions, I’d be happy to try and answer—you’d be doing me a favor, as my recent discussion with Sergio has reinforced how important it can be to try and explain your views to somebody else in helping to really get a clear picture of them. And don’t worry, I wouldn’t expect you to show the patience Sergio did in going down that particular rabbit hole with me!

  5. Peter says:

    Direction of fit could be a complete red herring, of course. It would be handy if you could reduce desires to beliefs (beliefs that x is good to have, that kind of thing), but I’m not sure it works. Or it could be that there’s something around how they relate to action – desires having a direct and particular connection, while beliefs condition the wider environment in which action is formulated.

    I’m pretty sure I’m not really helping…

  6. Hunt says:

    I can’t seem to access the paper, even though I’m willing to drop a little coin for it. Not letting that stop me, here is what I’ve gleaned from the comments and Peter’s review…
    I like the approach that ties intention with action (behavior) though for some people I suspect this will raise the specter of behaviorism and invite claims of non-explanation. Ultimately though I think intentionality will yield to some explanation combining action with discovery, or these things iterated.
    I have not been entirely able to glean the significance of the von Neumann connection, but I’m familiar with his work with automata. It seems to me that the intent here is to halt the regress by grounding meaning solidly in a self-referential absolute. I tend to prefer the idea that meaning “emerges” (an overused word, but…) from networks of reference. The self-referential idea is that meaning bootstraps from a node with a loop to itself. The network bootstrap idea is that meaning is a bit of legerdemain where each node points to another. There is no “there there”, and meaning exists in the relationships between symbols. But what is the meaning of the symbol? Yet more relationships to yet more symbols. The network is the meaning. Okay, before this gets any more Steve Jobsian, I will end it. Please ignore if I’m way off base.

  7. Jochen says:

    Peter, I’m not sure regarding the reduction—is it the same to believe that x being the case would be a good thing, and to desire x? There’s a difference in content, for one: the content of the belief is not x, but ‘x should be the case’, which has a normative element; I’m not sure that desire necessarily includes such an element. But if such an analysis works, then maybe one could use that element as a kind of ‘sabot’ encasing the content x—turn it from a belief into a desire by adding the right sort of padding, like ‘…should be the case’, and then fire it at the world in the same way as any ordinary belief. (Now I kind of want it to work just so I get to use that sabot-metaphor…)

    Hunt, if you’ve got some email address you’re comfortable sharing publicly, I’d be more than happy to send you a copy of the paper. Otherwise, you could either ask Peter for my address, or just ask him for a copy of the paper.

    Regarding the von Neumann construction, the basic idea is that action is fine to transport meanings, but in order to have meaning to an agent, something more is needed—in the paper, I construct a variant of the Chinese Room where its inhabitant, basically, is taught Chinese by means of carrying out certain actions (like drawing a circle) in response to instructions in Chinese. However, this presupposes intentionality on the part of the occupant, and hence, leads to an infinite regress; to cut that regress short, I need a kind of symbols that ‘carry their meaning on their sleeve’, so to speak.

    I argue that von Neumann replicators can be used to accomplish just this—within a cellular automaton world, with meaning dictated by action, a pattern means something to another pattern if it causes that second pattern to do something—say, construct a third one, which then is what the first pattern means to the second. However, a replicator pattern effectively carries its own code within itself, so it means something to itself—it causes itself to create a copy of itself. If, now, this is linked to, e.g., some perception, then it may create a different pattern out of itself, and thus, mean something, be about something beyond itself. Thus, used as a symbol within some agent’s mental architecture, it supplies its own meaning without any need for interpretation.

    That need for interpretation is, I’m afraid, why I am skeptical about proposals such as your network. Basically, any set of relations really only fixes the cardinality of its set of relata—this is essentially Newman’s objection to Russell’s structural realism. That is, if all you have are objects a,b,c,… and relations Rab, Rac,… and so on, then absolutely any set of things (as long as there are sufficiently many) fulfills this structure. The reason for this is that along with any collection of objects, their powerset exists, which can be viewed as the set of all relations in which these objects may stand to one another.

    So if you have your set of symbols, together with their relations, they may be taken to refer to absolutely any set of objects, as long as there’s sufficiently many of them. Meaning in such a setup would be completely underdetermined, and thus, entirely open to interpretation. For similar reasons, I believe that there is no purely computational account of meaning (basically any physical system can be considered to execute any computation; computation is a mind-dependent notion and hence, trying to ground mind in computation is circular). Thus, the recourse to actions in the world, which is not a computational notion.

  8. ihtio says:

    Jochen,

    I love the idea of founding meaning in action.

    What do you think of the following objection?
    Water flows down onto a rock (for example, a waterfall falling onto shallow ground). The rock changes as a result – it vibrates, it restructures, it may even move. Does the water have any meaning to the rock? If yes, then any cause and effect is contaminated with intentionality and/or meaning. If no, then what exactly is the difference between your view of meaning dictated by action(s) and other non-meaningful cause-and-effect chains?

  9. Hunt says:

    Jochen,
    Here’s an old address I never use:
    thehuntbox@hotmail.com

    Thanks!

  10. Peter says:

    Jochen,

    I doubt the reduction can be made to work myself; however, if someone said that:

    a) they believed x happening would be good and
    b) they did not desire that x should happen,

    wouldn’t we see a contradiction (absent all the ambiguities and complications that real-world utterances are prone to)?

  11. Jochen says:

    Ihtio, you’re right that my notion of meaning is rather broad; however, it’s not quite that broad: the idea behind meaning-as-action is that there are some conditions that need to be fulfilled in order for an action, as undertaken, to make sense. If an animal jumps from a cliff, for instance, absent confounding factors (suicidality, lemming migration etc.), we would conclude that it believes it can fly. That’s where pure success semantics originates: one can be said to believe that p if p is necessary for the actions one undertakes to succeed.

    I’m trying to be a bit more broad here, and instead speak of an action’s ‘pertaining’ to a certain state of affairs—for example, consider Pavlov’s dogs salivating upon hearing the bell: their action would be appropriate in the presence of food, but is instead invoked by something different, to which it is not directly appropriate; hence, the bell can be thought of as a ‘symbol’ for food, as it causes actions not directed at itself, but rather, at food.

    ‘Actions’ of the sort you consider really don’t have success conditions, or pertain to anything. I would call them rather reactions: because of some causal influence, the thing being influenced reacts in a certain way.

    In some sense, however, anything of that sort can be used in a symbolic fashion—like the rock erosion on Mars in some sense indicates the presence of water in ancient times. It’s in fact this notion of using something as a symbol that I think is lost in many attempts to naturalize meaning, including teleosemantics and success semantics, and it’s why I introduce the von Neumann construction. The dog doesn’t really know that the bell is about food; all it knows is that it starts to salivate. Thus, the meaning of the bell isn’t meaning to the dog, but to an external third party, like Pavlov analyzing the dog’s behavior. The von Neumann construction now makes it possible to create symbols whose meaning is in fact meaning to themselves, just as a thought about an apple has meaning to the mind that thinks it.

    Peter, I’m a bit afraid of just blundering around on this ground—I feel I haven’t surveyed it enough to really know my way around. But I’m not going to let that stop me.

    I think intuitively, I want to agree, but it seems that somebody could hold that they have no desire towards making x happen, while nevertheless thinking that x happening would be good in some sense. For instance, Buddhists are warned that desiring Nirvana bars them from achieving Nirvana; but that doesn’t mean they wouldn’t still consider achieving Nirvana a good thing, in some sense.

    There’s also a question of whether we necessarily want what is good: somebody could consider the survival of humanity good, but nevertheless try to eradicate it, because they’re evil (OK, that’s perhaps putting it a bit supervillainy). Generally, I always worry that one runs afoul of one or the other well-known problem when one equates normative concepts with descriptive ones—beliefs concern how we take the world to be, while desires concern how we think it ought to be. There’s already some non-trivial assumption smuggled into the foundations—that one should desire what one thinks would be good.

    So in everyday discourse, I would agree with you, but I’m not completely convinced that there’s not some cases where this sort of intuition invites fallacious reasoning.

  12. ihtio says:

    Jochen, if you would be so kind 🙂

    tjat@interia.pl

  13. Charles Wolverton says:

    The dog doesn’t really know that the bell is about food; all it knows is that it starts to salivate. Thus, the meaning of the bell isn’t meaning to the dog, but to an external third party, like Pavlov analyzing the dog’s behavior.

    It seems to me that the symbol in this scenario is A(x) (the dog’s neural activity consequent to aural sensory stimulation x due to the ringing bell) and the meaning of that symbol is the dog’s action of salivation. Once the classical conditioning has been completed – ie, once the association between the ringing bell and a subsequent feeding has been established – it seems possible that the food drops out of consideration so that the symbol isn’t “about” the food but is merely part of a sensorimotor structure that takes A(x) as input and in response effects salivation. Analogous to the concept that the meaning of speech is speaker-intended action by a hearer, in this scenario the meaning of the symbol A(x) would then be the Pavlov-intended action that the dog salivate. In which case I agree that any “intentionality” lurking in the scenario is derived from Pavlov, but not via a posteriori analysis of the dog’s behavior but via a priori intent.

    I see the scenario in which V(x) is visual stimulation due to light reflected from a flower as similar. As described in the paper, the replication process seems to incorporate a new description Φ(V(x)) into the composite description of the various functional components so that U can reconstruct V(x) as well in subsequent iterations. But for V(x) to have meaning to an enhanced entity that is capable of uttering “flower” in response, occurrence of V(x) must have become associated with that word. And that requires a teacher, suggesting that any intentionality is again derived.

    Separate issue:

    The paragraphs at the top of p. 10 contain the expression Φ(U+S+C+V(x)+Φ(A)+A), which suggests creation of a description of a description. The expression also seems inconsistent with the processing steps described. Perhaps inclusion of Φ(A) is a typo?

  14. Charles Wolverton says:

    Seems the comment interpreter doesn’t like the extended character set: the phi’s from Jochen’s paper came out as question marks in that comment.

  15. David Duffy says:

    “computation is a mind-dependent notion and hence, trying to ground mind in computation is circular”

    http://arxiv.org/pdf/1210.6448.pdf

  16. Jochen says:

    Charles:

    It seems to me that the symbol in this scenario is A(x) (the dog’s neural activity consequent to aural sensory stimulation x due to the ringing bell) and the meaning of that symbol is the dog’s action of salivation

    Well, there are problems with interpreting neural activity as being symbolic: who’s using it as a symbol? This immediately conjures up images of somebody (or -thing) looking down on the neural activity, using it to ‘stand for’ the dog’s salivating action; but this of course leads exactly to the problems I hope to evade. On my analysis, something means p when its actions pertain to p—for instance, a grabbing action means an apple if that action is successful only if an apple is present. A symbol must be able to stand for something else; hence, the actions it causes must pertain to that something. That’s why, in my scenario, the bell stands for food, as it causes an action that’s only sensible in regards to food, but not in regards to the bell’s ringing.

    To interpret the neural activity as a symbol directly invites the homunculus problem: to whom (or what) is it a symbol? Thus, there ought to be an entity distinct from A(x) to whom A(x) means something; but this yields a three-place representation relation, and hence, an infinite regress. That’s why my symbols have to be ‘self-interpreting’: they supply their interpretation without any need for a third party.

    But for V(x) to have meaning to an enhanced entity that is capable of uttering “flower” in response, occurance of V(x) must have become associated with that word.

    The presence of a visual stimulus causes a change in the first-generation pattern, which then goes on to construct a second-generation pattern incorporating that change in morphology. Thus, the first-generation pattern produces an action, depending on which stimulus it has received. The second-generation pattern then effectively provides a set of instructions for the organism—the actions to be undertaken whenever that particular stimulus is present (such as uttering ‘flower’). Of course, which actions are to be taken is the result of a learning process, an example of which I give in my discussion of the Chinese Room. But the intentionality is grounded in the first-generation pattern, changed through the visual stimulus, having meaning to itself, by virtue of producing an action dependent on the stimulus it received (the construction of the second-generation pattern and then the actions that pattern produces).

    Perhaps consider a pure CA universe. A pattern X might cause Z to produce Y—there, we have a three-place representation relation, as X means Y to Z. This is derived intentionality, and insufficient to ground meaning. Alternatively, consider a pattern N, which has the capacity of ‘scanning’ a pattern Y (which is not prohibited by Svozil’s theorem), changing itself into N'(Y), incorporating some description of Y, and then producing copies of Y. There, N'(Y) means Y to itself—the relation becomes two-place, and no external agent is needed to provide an interpretation. N'(Y) is about Y, i.e. it is a physical system that points beyond itself. This is what we need to ground intentionality.

    Now all that’s left is to hook this up to the real world. I do this in two ways: first, replace the pattern Y by a sensory stimulus; second, interpret the second-generation pattern as being a kind of ‘code’ for a set of actions. Here, ‘code’ merely means a set of signals addressing some output channels—such as motion, speech, etc.; it’s a set of instructions. If this works, then I can basically use the von Neumann construction to replace Searle in the Chinese Room, and thus get rid of his homuncular role.

    —————

    David, you probably meant to use this paper against my claim somehow, but I don’t see how. Could you elaborate?

  17. Jochen says:

    Sorry, I forgot to address your side remark, Charles. As far as I can see, everything seems in order: the whole assembly is U + C + S + A + F(A), with F(A), the description of A, necessary in order to be able to produce a copy of the analyzer, which can’t scan itself in order to do so (otherwise, the whole exercise would be unnecessary). In order to produce this set of patterns, U must be fed with F(U + C + S + A + F(A)) (this does include a description of a description, but this is not a problem—a description is a CA pattern like any other CA pattern, and thus can be constructed by means of providing a constructor with its description). In order to create this description, A scans all patterns except for itself, i.e. U, S, C, and F(A), producing F(U + S + C + F(A)). To this, the description of A, F(A), must then be added, such that it, too, is reproduced, which is done by the copying machine C, yielding the required F(U + S + C + A + F(A)). Or am I missing something?

  18. Charles Wolverton says:

    Jochen –

    I don’t think of conditioning in terms of symbols, meanings, etc, for the reasons you suggest. I was just trying to fit it into your schema. To me, conditioning is a (conceptually) simple, learned stimulus-response pairing (implemented via something like Edelman’s neural Darwinism). One can describe the pairing using the intentional vocabulary – the response is “about” the stimulus – but I don’t see what is added by doing so. I’ve come to suspect that we’ll always disagree on these matters because we start with incompatible premises, in particular you’re committed to the concept of intentionality, I’m not.

    As for V(x), I think I understand the replication process and the meaning-as-action idea – as you know, I subscribe to that idea in the case of speech. But reconstruction of an aural sensory input pattern from a description of that pattern isn’t the kind of action I have in mind. To toss off the learning of responsive behavioral actions that seem to me more relevant (uttering “flower”, picking a flower, etc) by “interpreting the second-generation pattern as being a kind of ‘code’ for a set of actions” seems to beg the key question: how does an ostensive learning process avoid derived intentionality via the teacher?

    Learning how to behave in some context via trial-and-error might be a better example since there’s no explicit teacher. But that scenario seems to invite Ihtio’s concern as to how far down the stimulus-response path one can go and retain “meaning as action” as a reasonable idea. Is walking around a big rock in one’s path the “meaning” of the rock?

    I understand the initial replication process that the problematic (to me, anyway) symbology is intended to represent to be:

    1. The analyzer (A) generates descriptions of the patterns (U, C, S, V(x)). For clarity, I’m going to assume that process is distributive, so it yields the 4-tuple [Φ(U),Φ(C),Φ(S),Φ(V(x))].

    2. So that A can also be replicated in the next step, Φ(A) is copied, and the copy is “appended” to the description 4-tuple to form [Φ(U),Φ(C),Φ(S),Φ(V(x)),Φ(A)].

    3. These descriptions are copied, then used by U to construct [U,C,S,V(x),A]. The tape then contains [U,C,S,V(x),A] and their descriptions.

    In this process, Φ(A) is preserved in the daughter pattern by copying, A is preserved by being constructed from Φ(A). No descriptions of descriptions are required.
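    The three steps above can be sketched in miniature (my own toy model in Python, with strings standing in for CA patterns and 'D(X)' standing in for Φ(X); an illustration only, not the paper’s formal construction):

```python
# Toy sketch of description-based replication: the analyzer produces
# descriptions 'D(X)', and the universal constructor rebuilds X from 'D(X)'.

def describe(pattern):
    """The analyzer A: produce a description of a pattern."""
    return f"D({pattern})"

def construct(description):
    """The universal constructor U: build the pattern a description names."""
    assert description.startswith("D(") and description.endswith(")")
    return description[2:-1]

patterns = ["U", "C", "S", "V(x)"]

# Step 1: A describes every pattern except itself.
descriptions = [describe(p) for p in patterns]

# Step 2: a pre-existing description of A is copied and appended,
# so that A can be rebuilt without ever scanning itself.
descriptions.append(describe("A"))

# Step 3: U constructs the daughter patterns from the descriptions
# (while the copier C would duplicate the descriptions themselves).
daughter = [construct(d) for d in descriptions]
# daughter == ['U', 'C', 'S', 'V(x)', 'A']
```

The trick of step 2 is the whole point: A never has to describe itself, so the self-scanning prohibition is never violated.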

  19. Jochen says:

    Charles:

    I’ve come to suspect that we’ll always disagree on these matters because we start with incompatible premises, in particular you’re committed to the concept of intentionality, I’m not.

    Well, I’m not simply committed to intentionality as a matter of principle; if you show me how to make do without it, then I’ll happily cross it from the list. The problem is that I don’t see how your framework accounts for the evidence: our mental states have content, to us, that is, to the mind being in that state. I don’t simply grasp for an apple upon seeing it, but rather, that apple is within my mental content. This is just data, and as such, needs to be accounted for in any suitable theory; and I have yet to see a theory that accounts for this data on a purely behaviorist conception, such as you seem to be championing. So that’s the reason I postulate the von Neumann process, simply in order to account for the evidence that to our minds, our minds have content.

    But reconstruction of an aural sensory input pattern from a description of that pattern isn’t the kind of action I have in mind.

    Well, the idea is that if somebody has a blueprint, and from that blueprint constructs some gadget, then to them, that blueprint is about (or pertains to, etc.) that gadget, no? But then, if something constructs something else from its own state, then it is to itself about that something else. This doesn’t have to be the sensory input pattern per se, but can be more generally related to the input; and indeed, my model is to have that which is constructed be a set of instructions for actions appropriately related to the input, i.e. actions whose aim is bound up with the presence of the stimulus that generated the input.

    Learning how to behave in some context via trial-and-error might be a better example since there’s no explicit teacher.

    The teacher is just part of the environment—something that responds to probe actions with some certain reaction. It’s not their intentionality that grounds the intentionality of my von Neumann minds—indeed, they might just as well be a non-intentional robot. Derived intentionality would mean that the teacher’s intentionality is necessary in order to interpret the von Neumann mind’s states, in the same way that my intentionality is necessary to interpret your words as meaning something; however, the states of the von Neumann mind interpret themselves, without any recourse to another being’s intentionality.

    For clarity, I’m going to assume that process is distributive

    What operation does it distribute over?

    In this process, Φ(A) is preserved in the daughter pattern by copying, A is preserved by being constructed from Φ(A). No descriptions of descriptions are required.

    Well, that’s a possible process, but I simply chose a different one, in order for there to be a description of the whole pattern; this is perhaps not essential, but to me, seemed cleaner.

    Apart from that, I think now that this particular example wasn’t really the best one; I should probably have stuck with the original one, where one starts out with the set of patterns plus their description, i.e. U + S + C + Φ(U + S + C). That way, any change on the tape induces a change in the daughter pattern, i.e. from a first-generation N: U + S + C + Φ(U + S + C + V(x)) we obtain a second-generation N’: U + S + C + V(x) + Φ(U + S + C + V(x)); thus the visual stimulus produces an N that is about N’. Does that work for you?
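    As a toy illustration of this generation cycle (the names U, S, C, V(x) and the Phi(…) description come from the comment; the encoding as Python lists and tagged tuples is my own, not anything from the paper):

```python
# Toy model of the generation cycle described above.

def phi(components):
    """A 'description' is a tagged, immutable copy of a component list."""
    return ('PHI', tuple(components))

def construct(pattern):
    """U builds the daughter pattern: each component named in the tape's
    description is constructed, and the description itself is copied over."""
    description = next(c for c in pattern if isinstance(c, tuple) and c[0] == 'PHI')
    return list(description[1]) + [description]

# First generation N: U + S + C + Phi(U + S + C + V(x))
N = ['U', 'S', 'C', phi(['U', 'S', 'C', 'V(x)'])]

# The daughter N' contains V(x) itself, plus the unchanged description,
# so N is 'about' N' in the sense discussed:
N_prime = construct(N)
assert N_prime == ['U', 'S', 'C', 'V(x)', phi(['U', 'S', 'C', 'V(x)'])]

# N' then reproduces itself exactly (a fixed point of the cycle):
assert construct(N_prime) == N_prime
```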

  20. Jochen says:

    …Alright, how did you do the trick with the phi? I had hoped simply copying it from your post would work, but unfortunately, no dice.

  21. David Duffy says:

    “you probably meant to use this paper against my claim somehow, but I don’t see how.”

    There are a series of recent papers on the relationship between Shannon entropy and thermodynamics, many of which are cited by or cite that paper. That particular one I think of as a refutation of the “dancing pixies” and related arguments that would have it that the chemical motor is computing in the same way as the informational motor. AIUI, Shannon entropy and old-style entropy are only identical at equilibrium – extended second laws of thermodynamics that include Shannon entropy are nice because they describe the nonequilibrium conditions essential both for life and for causal-type computing that operates under the Landauer principle, namely that reuse of memory requires work:

    “we can extract an amount of work, on average, proportional to the information acquired in the measurement. For error-free measurements, the value of M is unequivocally determined by X, and the mutual information is the Shannon entropy of the measurement, I(X(t_ms); M) = H(M)”

    The dancing pixies type computation never does any work. One example of such calculations is
    http://scitation.aip.org/content/aip/journal/jcp/139/12/10.1063/1.4818538

    “We often think of the main entropic hurdle that must be overcome
    by biological self-organization as being the cost of assembling the
    components of the living thing in the appropriate way. Here, however,
    we have evidence that this cost for aerobic bacterial respiration
    is relatively small, and is substantially outstripped by the sheer
    irreversibility of the self-replication reaction as it churns out copies
    that do not easily disintegrate into their constituent parts.

    More significantly, these calculations also establish that the E. coli
    bacterium produces an amount of heat less than six times (220 n_pep/42 n_pep)
    as large as the absolute physical lower bound dictated by its growth rate,
    internal entropy production, and durability.”

    It is in linking computation to work done to efficiently maximize a quantity, here fitness cum durability, that I am guessing something interesting will be able to be said about aboutness. It would be nice to link non-equilibrium free energy in thermodynamics with Friston’s ideas that “as the heart pumps blood, the brain minimizes [information theoretic] free energy.”

  22. Jochen says:

    David, thanks for your elaboration. I’ve yet to read the paper in detail, but from quickly looking over it, I don’t see that their construction really challenges the arguments made by Searle/Putnam et al regarding the arbitrariness of interpretation of a computation.

    The problem, as I see it in this case specifically, is that Shannon information is a syntactic quantity, which has no reference to any semantic properties, i.e. to the ‘meaning’ of certain strings of symbols. A sentence in Chinese may be a source of semantic information to you, if you speak Chinese, but it’s useless to me; nevertheless, to both of us, it will have the same Shannon entropy. (A point Peter also made in his latest entry.) This objectivity, of course, marks its usefulness in terms of analyzing communication channels etc., but it also means that the dimension which we’re interested in when considering the mind-body issue is glossed over.

    In a sense, Shannon entropy counts differences; and each difference can be used to store one bit of information. But what this bit-string then means is a different question altogether.

    It’s also the syntactic, not the semantic dimension of information that’s linked to thermodynamic quantities. You can see that already by noting that thermodynamic entropy really is just a measure of how many microstates make up a macrostate, or what the phase-space volume associated to a given macrostate is. Thus, thermodynamic entropy concerns a number of differences—between the microstates—we coarse-grain out, and which hence aren’t available anymore to store information.

    Or consider Landauer’s principle. Again, the information linked here to heat production is merely syntactic: consider a bit-string s in a memory represented by particles that may either be in one half or the other of a small box. In order to erase that memory, you have to re-set all the particles to a known state, say, being in the left half (representing the value 0). You could do this by measuring the particles, and turning around the boxes where the particle is in the right half; however, then, in your measurement records, you would still have a copy of s, and hence, wouldn’t have deleted it.

    In order to erase the information, you could instead insert a piston from the right, and compress the particle—a one-molecule gas—into the left half. By doing so, you reduce its entropy by k ln(2), which necessitates you doing work kT ln(2) (where k is Boltzmann’s constant, and T is the temperature). Thus, we can associate with a bit of (Shannon-) information a quantity of work equal to kT ln(2). By an inversion of this procedure, we can conversely use a bit of information (about where the particle is) to extract an equivalent measure of work—if the particle is in the left side, slide in a piston from the right, and let the gas expand; if it is in the right side, slide in a piston from the left. Thus, having information can be used to do work.
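    To put rough numbers on the kT ln(2) bound discussed above (the room-temperature value is chosen purely for illustration):

```python
import math

# Back-of-the-envelope numbers for the Landauer bound: erasing one bit
# costs at least k*T*ln(2) of work, and one known bit can conversely be
# converted into the same amount of work by a Szilard engine.
k = 1.380649e-23   # Boltzmann's constant in J/K (exact, 2019 SI definition)
T = 300.0          # room temperature in kelvin, chosen for illustration

landauer_cost = k * T * math.log(2)   # joules per erased bit
print(f"{landauer_cost:.3e} J per bit")   # prints "2.871e-21 J per bit"

# Extraction by isothermal expansion of the one-molecule gas yields the
# same quantity, so the measure/extract/erase cycle breaks even at best.
extractable_work = k * T * math.log(2)
assert math.isclose(landauer_cost, extractable_work)
```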

    However, the crucial point is that we never, in the above, had to consider the meaning of the string s; all we needed was the physical differences that embody it. Whether it contains the coded works of Shakespeare or a book of dirty limericks is simply immaterial to these thermodynamic considerations, and, I believe, likewise to those in the paper you cite. But it is indeed this semantic dimension that we’re after in terms of providing a grounding for the intentional content of minds—here, it is a crucial difference whether we’re thinking of Shakespeare, or chuckling at a filthy rhyme (well, not that there are no filthy rhymes in Shakespeare, but you get the picture). So I don’t see that the work a computation does (and in fact, all computations can be done reversibly, without doing any work) has any implication on the meaning of the information being shuffled around.

  23. Charles Wolverton says:

    Jochen –

    Re symbols, see http://www.htmlhelp.com/reference/html40/entities/symbols.html

  24. Jochen says:

    Thanks, Charles, but I totally misunderstood that on the first reading, what with all the talk of symbols and the like… I went to that page (hadn’t looked at the URL), expecting some form of philosophical analysis of symbols from a behavioral point of view, and at first, just thought, “Well how’s that supposed to help?” 😀

  25. Charles Wolverton says:

    Well, we do seem to have multiple communication problems!

  26. Charles Wolverton says:

    I don’t simply grasp for an apple upon seeing it …

    This seemingly simple phrase captures much of our disconnect. First of all, what do you mean by “seeing”? And more specifically, what do you mean by “seeing an apple”? These questions may seem bizarre, but the best I can do by way of explanation is to refer you to these comments on this post: 28, 38, 40, and 68.

    … rather, that apple is within my mental content.

    Again, you may find it bizarre, but I have no clear idea what “mental content” means or in what sense an apple could be part of it. If it has something to do with the phenomenal experience of an object (which I take to mean something like forming a “mental image” of it), I seriously doubt that such a “mental image” plays any role in grasping the object.

    Since I don’t understand intentionality, I clearly can’t understand “derived intentionality”. So, I probably should withdraw my comments about either.

    In your paper, you say the analyzer generates Φ(U+C+S+V(x)) and then “appends” Φ(A). Because the “+” operator isn’t explicitly defined, it wasn’t clear whether the first symbol string meant a composite description or four separate descriptions (what I meant by Φ(A) being “distributive”).

    If a composite (which I take it you meant it to be), it seemed to me that the result of literally “appending” should be expressed something like [Φ(U+C+S+V(x)) U Φ(A)] where “U” means something like “union”. If “appending” means incorporation into the composite description to create an expanded composite description, then it seems the result should be Φ(U+C+S+V(x)+A). In neither case is there a description of a description. (And I understand there could be, but there appears to be no need for one.)

    Had the symbol string been meant to indicate four separate descriptions, “appending” would mean just adding Φ(A) to the n-tuple of descriptions. But apparently it wasn’t meant to indicate that.

    I think adding the analyzer was at worst OK, probably even clarifying. So, I’d stick with it.

  27. David Duffy says:

    Dear Jochen, ISTM that the meaning of the string is precisely what useful work it can do – unfortunately the information in the Chinese string (using one measure) cannot actually cause anything in the combined system of message plus recipient’s “information reservoir” 😉 The point about the maximum efficiency of the Szilard engine is the one to one matching between the demon’s memory and her task.

  28. Hunt says:

    I have some background in biology, so I’m always considering how systems might have developed in evolutionary fashion. This paper seems to give a very plausible account of how micellular machines (micelles with sequestered chemical reactions) might have operated. I realize there’s a danger of taking “cellular automata” too literally and equating them with actual primordial cells, but it seems almost too attractive to pass up. Intentionality, the “being about” something, in the primordial soup must have been something like this. Indeed, how could it have been much of anything else, since there were no external (Z) agents? Evolution doesn’t backtrack. Basically, the thing it hits on first is what it’s stuck with, though this is an overly negative observation. While coming upon each mechanism, there are no doubt thousands, millions of contending processes, each vying for dominance. Evolution is stuck with each step-wise solution, but each step is subjected to rigorous quality assurance. So if each living creature is stuck with DNA as a mechanism of inheritance, would each living creature be stuck with the mechanism of intentional automata, as root processes? If so, how do they compose themselves into more complex representations?

  29. Jochen says:

    Charles, I’m afraid the comments you cite weren’t quite clear to me, without knowing the context of the discussion, and I don’t really have the time to go through the whole comments thread at the moment; so perhaps, if you could summarize your reservations against my vocabulary?

    To me, things seem quite clear: seeing is what occurs when light, reflected from an object, impinges on the retina, where it creates some pattern of activity, then to be translated into some neural activity. I don’t, in particular, consider any sort of phenomenal experience that may be attendant to this process.

    As for mental content, it’s basically the same as verbal content—there’s a way the world is, which is expressed in terms of matters of fact; those matters of fact can, in turn, be clad in propositions, that are true or false whenever the matters of fact they concern obtain, or fail to. The question is, now, how do propositions manage to connect to the matters of fact they concern? How are sentences about things in the world?

    Of course, there is a possibility that they may not be—that there either are no things in the world, no matters of fact, or that propositions don’t actually connect to anything, and are just so much spilling of ink and flapping of lips. This is a possibility I can’t exclude, and it’s not my aim to. All I’m concerned with is how, if some picture such as that all-too-brief sketch above obtains (which it certainly seems to do—so any adequate theory must at least address this seeming, and mine just does it by taking appearances at face value; common sense may mislead, but there is also danger in chucking it out too early), it may work in a natural world.

    So then you go about and fill in those commonsensical notions of aboutness with some theoretical underpinning—things like what I said above, that a blueprint is about some gadget to the one building that gadget from the blueprint, and that thus, if some symbol is both the blueprint and the entity using that blueprint, it supplies its own interpretation. This may work not at all, you may get tangled in contradictions; it may work partially, explaining some of the terms used in quotidian discourse, but not all of them; or it may yield a solid underpinning for all of our discourse. My hope is to achieve at least the second option—explaining some part of aboutness and intentionality by means of my von Neumann model, while others may surely have to be modified.

    However, at this point at least, I see no reason to follow option one, to already throw in the towel, and just try and get rid of the whole of intentional vocabulary—in part, because I simply don’t see how any meaningful discourse remains possible. Eventually, hyperbolic doubt becomes so strong as to self-undermine, whence everything we utter might just be utterly meaningless. Again, I recognize that this may well be the case, and that if every project such as mine fails, this may well be the conclusion one is forced to draw; but for the moment, I think there is a very real chance to account for the phenomena as they appear to us.

    In your paper, you say the analyzer generates Φ(U+C+S+V(x)) and then “appends” Φ(A).

    Not quite. I say the following:

    Then, the control unit initiates a replication cycle: the analyzer analyses the whole pattern (except for itself), generating the description Φ(U + S + C + V(x) + Φ(A)). Then, C copies the description Φ(A), appending it to the one generated by the analyzer, to yield Φ(U + S + C + V(x) + Φ(A) + A). S then activates U, which utilizes the description to produce the first-generation daughter pattern.

    (Fingers crossed for phis!)

    So the analyzer generates a description of everything within the pattern, except for itself; then, the description of the analyzer is appended to the resulting description. The result is a complete description of the full pattern; using your method, there would not be such a complete description.

    The reason I now prefer the original version is that it’s somewhat cleaner, conceptually, to have the change introduced in the first generation pattern’s tape, to then come to fruition in the second generation. In this way, it’s more obvious how the first generation pattern comes to be about something different from itself, rather than merely being changed.
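    As a purely symbolic sketch of the replication cycle quoted above, in which the analyzer describes everything except itself, the copier appends Φ(A), and the constructor builds the daughter from the completed description (the tuple encoding and helper names are my own illustrative choices, not from the paper):

```python
# Symbolic sketch of the quoted replication cycle.

def phi(*components):
    """A 'description' is a tagged tuple of the components it describes."""
    return ('PHI', components)

pattern = ('U', 'S', 'C', 'V(x)', phi('A'), 'A')

# Analyzer step: describe the whole pattern except A itself
analyzed = phi(*(c for c in pattern if c != 'A'))   # Phi(U+S+C+V(x)+Phi(A))

# Copier step: appending the copied description Phi(A) extends the
# composite so that it also describes A
full_description = ('PHI', analyzed[1] + ('A',))    # Phi(U+S+C+V(x)+Phi(A)+A)

# Constructor step: U produces the first-generation daughter pattern
daughter = full_description[1]
assert set(daughter) == set(pattern)   # a complete copy, description included
```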

    —————–

    David, I’m sorry, but I don’t really understand what you mean by meaning being the useful work a string of bits can do. How do you intend to cash out this idea? How does, for instance, the word ‘apple’ come to mean apple?

    In the setting as described above, what the string s means—if it indeed means anything—is completely irrelevant with respect to the work one can use it to perform. Picture three agents A, B, and C—in the code used by the first, s means ‘apple’, in the code used by the second, it means ‘dog’, and in the code used by the third, it means nothing—it’s not a valid codeword. Yet all three can use it to perform the same work. The reason for this, again, is that the Shannon entropy omits the semantic by definition—that’s what it was designed to do, to quantify the resources needed to transfer a message, not to quantify the message itself.

    Tying it to the specific physical system used to perform that work also won’t work. The application above could have just as well referred to capacitors in a flash-memory rather than one-molecule gases, or to any other device you mention. So the string of bits—again, just a string of differences, if one omits the semantic dimension—at most refers to any physical system capable of instantiating those differences, that is, having enough accessible microstates. Beyond that, I really don’t see what statement you could make regarding the meaning of the string.

    And again, it seems that your stance also commits you to the rather strange proposition that reversible computations—which do not perform any work—don’t mean anything, while irreversible ones do; however, every computation can equally well be performed both ways.

  30. Jochen says:

    Hunt, that’s an interesting idea. However, my background in biology is too slight to really assess it; I’ll have to read up on the micellular machinery (if you know any gentle introduction for the non-specialist, I’d much appreciate it). I’m not sure, however, to what extent one can really apply the concept of intentionality to single-celled organisms… My own, very rough, sketch of how my mechanism might be realized in an organism revolves around thalamo-cortical resonance, where sensory data is relayed to cortex via the thalamus, which involves an intriguing amount of backpropagation—the idea would be to have the thalamic nuclei essentially provide the ‘tape’ configuration, which goes on to construct the pattern in the cortex, that then backacts on the tape, and so on, until a stable pattern is reached. But this is at this point barely an idea for an idea.

    What you’re saying seems possibly related to W. Tecumseh Fitch’s ideas on ‘nano-intentionality’; it’s not altogether dissimilar to my approach in that I think of it as trying to find something like an ‘atom of intentionality’; but ultimately, I’m not sure that his concepts can do the necessary work (hence, I had to come up with my own).

    (I seem to have some trouble with posting my comment right now, perhaps the spam filter thinks I’m being too loquacious, or it has some problem with the link I tried to post; I’ll leave it out, see if this works…)

  31. Michael Baggot says:

    Jochen or Charles
    How does your scheme distinguish between the functional roles of syntax and semantics in understanding?

  32. David Duffy says:

    “Shannon entropy…quantif[ies] the resources needed to transfer a message, not…the message itself”

    The key quantity, as I understand it, is the mutual information or, I think equivalently, the relative entropy. With respect to reversible computation, when an informational system is efficiently interacting with the environment, there seem to be two strategies – either measurement is free energy-wise and memory restoration costly, or vice versa. It looks like
    http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003974
    is relevant, but I haven’t digested this in detail. A good review of all this (behind a paywall) is Parrondo et al., “Thermodynamics of information”.

    Regarding your interchangeability of substrates for computing, that’s why I think this approach is nice, the calculations are all the same. I don’t believe there are any papers ambitious enough to try linking this up to language use, but I reckon if one can model bacterial chemotaxis, then next should be one bee explaining to another “a propositional function with four variables…There is a source of food smelling of A, requiring an effort B to reach it, in direction C, of economic value D”.

  33. Charles Wolverton says:

    Jochen –

    To me, things seem quite clear: seeing is what occurs when light, reflected from an object, impinges on the retina, where it creates some pattern of activity, then to be translated into some neural activity.

    Quite so. I just wanted to be sure since that’s not everyone’s understanding of “seeing”.

    there’s a way the world is, which is expressed in terms of matters of fact [which] can be clad in propositions

    I don’t subscribe to that, but in any event I don’t see it as relevant. The neural activity is all the brain has to work with. Even if it’s consequent to visual stimulation by light reflected from a flower, any association with an action – eg, uttering “flower” or reaching for an object assumed to be nearby – is with that neural activity, not with any “way the world is” outside the brain. The stimulus that causes that neural activity may actually be light reflected from a flower, but instead could be light artificially produced by a programmable source. Or the neural activity itself could be produced internally by dreaming or hallucinating. So, I’d say that if the responsive action is “about” anything at all, it’s “about” the neural activity.

    You keep suggesting that if speech isn’t “about” the world, it’s meaningless. But then you don’t subscribe to meaning-as-action in the same sense that I do. If you speak and I react as you intend (recall, I have explained what I mean by “intend”), then I take the speech to be meaningful. For me, truth and reality aren’t relevant.

    If you’re following the current US political circus, you perhaps have noticed that certain candidates are getting the results they no doubt intend with speech that is at best uncorrelated – arguably negatively correlated – with either of those concepts. But in my sense, it is nonetheless being “understood” by the target audiences.

    Re the replication cycle, I gather that there are effectively (but not necessarily actually) two tapes involved in a replication cycle. The current content of Tape 1 is used to construct the new content of Tape 2. A process that seems to do the job is:

    Step …………….. Content ……………………………………….. Action

    1 …… Tape 1: U,C,S,V(x),A,φ(A) ………… Create composite φ(U,C,S,V(x)), copy to Tape 2

    2 …… Tape 2: φ(U,C,S,V(x)) ……………… Copy φ(A) from Tape 1 to Tape 2

    3 …… Tape 2: φ(U,C,S,V(x)),φ(A) ….. Apply U to both descriptions on Tape 2

    4 …… Tape 2: φ(U,C,S,V(x)), φ(A),U,C,S,V(x),A

    I understand that A can’t analyze itself, so that φ(A) has to be preserved in producing the content of Tape 2. But doing so by creating a composite description including φ(A) seems unnecessary, since simply copying it to Tape 2 seems to suffice.
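    A literal transcription of the four steps tabulated above into code (the tagged-tuple encoding of descriptions is an illustrative choice of mine, not anything from the paper):

```python
# Literal transcription of the two-tape process tabulated above.

def phi(components):
    """A 'description' is a tagged, immutable copy of a component list."""
    return ('phi', tuple(components))

# Step 1: Tape 1 holds the parent pattern plus phi(A);
# create the composite phi(U,C,S,V(x)) and copy it to Tape 2
tape1 = ['U', 'C', 'S', 'V(x)', 'A', phi(['A'])]
tape2 = [phi(['U', 'C', 'S', 'V(x)'])]

# Step 2: copy phi(A) from Tape 1 to Tape 2
tape2.append(phi(['A']))

# Step 3: apply U to both descriptions on Tape 2, constructing
# every component they describe
constructed = [c for d in tape2 for c in d[1]]
tape2 = tape2 + constructed

# Step 4: Tape 2 now holds both descriptions and the whole daughter pattern
assert tape2 == [phi(['U', 'C', 'S', 'V(x)']), phi(['A']),
                 'U', 'C', 'S', 'V(x)', 'A']
```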

  34. Charles Wolverton says:

    Michael – Perhaps you can glean my idea of semantics from the first few paragraphs of the previous comment.

    I don’t give much thought to syntax. Not because I dismiss it but because my position doesn’t have the depth and coherence that “scheme” suggests; it’s just a jumble of ideas about how speech might do its job in practice.

  35. Jochen says:

    Michael: in my scheme, the syntactic dimension is given by the actual pattern of cellular automaton cells (respectively, if an analogue can be made to work in the realm of neural nets or biological brains, a pattern of neuronal activation), while the semantic aspect is provided by that pattern being itself an active agent, supplying its interpretation through its actions. I think the above blueprint metaphor illustrates that well: to some agent using a blueprint to build something, that blueprint is about what is built; now, my von Neumann minds are simply both the blueprint and the agent interpreting it.

    ——————-

    David:

    The key quantity, as I understand it, is the mutual information or, I think equivalently, the relative entropy.

    Both are simply derived quantities from the Shannon entropy, and thus, just as purely syntactic. If you have two sources, X and Y, producing tokens x_i and y_i with probabilities p(x_i) and q(y_i), then the relative entropy quantifies, roughly, the difference between the two probability distributions. The mutual information quantifies the correlations between two sources, by means of the relative entropy between the joint probability distribution and the product distribution obtained by taking the marginal distribution of each source, and simply multiplying them.

    The mutual information tells you something about conditional probabilities, effectively—the probability that, given that the first source has produced a certain token, the other will produce one from its repertoire. If those probabilities differ for different tokens from the first source, the mutual information will be non-zero—this tells you that you have a better chance of predicting which token the second source will produce, if you know which token the first one produced. Again, here, it’s completely immaterial what those tokens are, and what they are about, or represent—the representational aspect simply doesn’t enter. The sources could be lights flashing either on or off, or flags being waved, or sportscars of a different color being produced. All of this will be described in the same way, but of course, it is very different to us whether we think of sportscars, lights, or flags.
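    A small numerical illustration of this point (the 2×2 joint distribution is an arbitrary example of my own): relabelling the tokens leaves the mutual information unchanged, since only the probabilities enter.

```python
import math

# Mutual information computed exactly as described above: the relative
# entropy between the joint distribution and the product of its marginals.

def mutual_information(joint):
    """joint maps (x, y) token pairs to probabilities p(x, y)."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p   # marginal distribution of X
        py[y] = py.get(y, 0.0) + p   # marginal distribution of Y
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items())

joint = {('x0', 'y0'): 0.4, ('x0', 'y1'): 0.1,
         ('x1', 'y0'): 0.1, ('x1', 'y1'): 0.4}
mi = mutual_information(joint)
print(f"I(X;Y) = {mi:.3f} bits")   # prints "I(X;Y) = 0.278 bits"

# Relabel the tokens as flags and sportscars: the probabilities, and hence
# the mutual information, are untouched; what the tokens denote never enters.
relabelled = {(('flag', x), ('car', y)): p for (x, y), p in joint.items()}
assert mutual_information(relabelled) == mi
```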

    Regarding your interchangeability of substrates for computing, that’s why I think this approach is nice, the calculations are all the same

    But this is a bug, not a feature: it tells you that these informational notions don’t suffice to nail down content, because they can be underwritten equivalently by infinitely many different physical systems.

    ————-

    Charles:

    The neural activity is all the brain has to work with.

    Formulations like that always make me nervous. Because of course, the brain does in no sense ‘work with’ neural activity; rather, neural activity is the working of the brain. Taking your sentence literally (which I’m not sure you do yourself) collapses into regress.

    If you speak and I react as you intend (recall, I have explained what I mean by “intend”), then I take the speech to be meaningful.

    The problem is that this just doesn’t account for the evidence we have—we don’t merely act, we also understand. I don’t find myself merely being buffeted around by the reactions your acoustic emanations produce in me, I also know what you’re talking about, and form an intention to act in a certain way in response—or at least, that’s how things appear to me, and this appearance is something that must find explanation in any theory of the mind; but I fail to see how your notions could yield that. Finding a grounding in action for this understanding is what my model is about.

    For me, truth and reality aren’t relevant.

    And do you believe it’s true that truth and reality aren’t relevant? Even on a pragmatic reading, I simply don’t see how you can avoid holding some things to be true—if you say that if acting as if something were the case is beneficial, this is grounds enough to hold to that something’s ‘truth’, then it seems you still must hold that it’s true that such acting is beneficial; and so on. How do you get yourself out of that muddle?

    As to the replication, yes, you could probably use a two-tape process involving only copying the description of A (although the way I’m using the word ‘tape’, it really only can include descriptions of patterns, not the patterns themselves—indeed, in as much as it makes sense to talk about ‘tapes’ in a CA environment, they’re defined by the set of descriptions); but I wanted to have my process include creating a full description of the whole pattern, which can then be used by U to create a second-generation copy, so I chose differently.

  36. ihtio says:

    Jochen,
    I’ve read the paper. It was a fascinating read. In fact, it was much, much better than I expected – and I expected something good.
    I have some points I would like to discuss, if you find some time. Most of them are questions. I wouldn’t want to heavily critique the theory while having some doubts about crucial points. Let’s begin.

    “Mental states can be, in some sense, considered to ‘point’ towards things not intrinsic to them. If I am thinking of an apple, then my mental state is in some way about the apple my thinking is directed towards.”

    – This can be rewritten as “If I am thinking of an apple, then I am thinking of an apple.” The “mental state is about an apple” part seems to be just a way of speaking that someone is thinking of an apple / about an apple.
    – The account given in the paper presupposes that for every R(x) there must exist an x. (R – representation, x – object or the represented). If I am thinking about a unicorn, then my mental state is about a unicorn, ‘points’ towards it. How would intentional automata deal with this?
    What is a symbol? A notion of an active symbol was introduced. One would expect to be provided with a clear delineation between “normal” symbols and “active” ones. My understanding is that the cellular automaton patterns established in the paper are just one of many possible implementations of the general principle.
    Is a symbol an arbitrary structure? Then everything is – or can be used as – a symbol.
    Should I place an equal sign between “symbol” and a “[CS-] pattern”?

    I fear that the notion of intentional symbols/patterns is fuzzy, in the sense that it is not entirely clear which actions can be considered proper fragments of some meaning. That is, for every cause, there is some action. But which pairs can be said to be meaningful, and which are just non-meaningful events?

    “the meaning of some symbol, or string of symbols, can be construed as that which the presence of the symbol causes an agent to do.”

    – The meaning of a symbol is thus the effect it has on an agent.

    “Φ(X) means X to U, by virtue of causing U to construct X.”

    – Wood means Smoke to Fire, by virtue of causing Fire to make Smoke.
    Indeed, we don’t know how to filter out meaningless events from meaningful ones.

    The above point continued:

    “The basic idea here is that a symbol is something whose meaning pertains to that which it symbolizes, that is, which brings about actions that are not directed towards the symbol itself, but towards its referent.”

    “Thanks to the copying process, the mental state represents this proposition to itself. The pattern embodies the relationship between the Chinese characters and the action of circle-drawing they elicit. Thus, in being reconstructed, it also means that relationship, as the meaning of a pattern is given by what it causes—the pattern itself—to do, and in this case, this is the construction of another pattern embodying this relationship. The pattern is hence about the relationship between the Chinese characters and the circle-drawing to itself. Thus, we can now validly say that to the von Neumann Mind, those Chinese characters mean ‘circle’.”

    – Thus the meaning of a symbol is a set of effects that are directed towards something other than the symbol. In other words, the meaning of a cause C of a symbol S is an action A performed on an object O, where S != O.
    – This means that for a symbol to have any meaning (in other words, for the symbol to mean something specific) an action must be performed on some object.
    – If the above quote could be interpreted more generally as stating that the meaning of S is a set of effects directed at O (where O != S), then we would circle back to stating that, for instance, a representation of an apple has meaning because it has an effect that is directed at an apple. Without the actual action performed on an apple, we are left with a structure (network, set, space, etc.) of representations that, loosely speaking, pertain to an apple (memories of its specific texture, smell, taste) – but this structure would not gain any meaning according to the account of intentionality presented in the paper.
    – No actions == no meaning.

    “[A] blueprint symbolizes the thing to be constructed by bringing about actions leading to its construction.”

    Say I am just sitting in my rocking chair thinking about a miniature Taj Mahal. It was a fleeting thought; I forgot about it in a few seconds. My thoughts went in other directions. Never again will I think of this, nor will I build or buy such a miniature, and I will not search for anything like it on the Internet. Nothing. I will not perform any action towards said miniature. I don’t even know if such a thing exists. Did my thought have a meaning? And what was its meaning?

    “[A] blueprint symbolizes the thing to be constructed by bringing about actions leading to its construction.”

    – Such a blueprint has an extremely large meaning space / it symbolizes an almost uncountable number of things / actions!

    “Thus, the intentional content of computational systems is radically underdetermined: given the right implementation relation, any computation can be considered to be about anything.
    In fact, what operation a conventional symbolic system implements is always relative to something external—ultimately, to the meaning we give to the symbols it operates on. The final meaning is only produced by a human mind acting as ‘capstone’, grounding a symbol ultimately in its understanding, such that the aboutness in such a system is always extrinsic, and the representation relation always three-place.”

    – Computation is no different from other physical, biological, meteorological or ecological processes: water evaporates from trees, ponds, rivers, seas, gets to colder mountains, condenses, and then it rains. We may call it a water cycle. Voltages are changing and switching across transistors in a CPU; current flows along various paths. We may call it computation. There aren’t many differences, really. We may, somewhat naively, interpret the water cycle as “the water wants to go up and then towards the mountains” and the computation as “a list of numbers is sorted”, but these interpretations don’t matter that much when we know the facts. I find “everything can be said to be performing any computation” to be a particularly weak line of argumentation. Basically, it reduces to “any arbitrary interpretation can be translated to any arbitrary interpretation”. The error in the argument is due to mistaking the computation – what really happens in a computer – for our descriptions, interpretations or whatever we want to call them, such as “quick sort”.

    “We can now answer the three questions posed in the introduction:
    1. What does the representing? — The flower is represented by a pattern of the CA that developed in response to a visual stimulus (by e.g. retinotopically mapping to a configuration of cells in different states).
    2. How is the representation used? — The representation is used to create a code that, if interpreted by the right machinery, produces a copy of itself, giving rise to a representation of the original flower-symbol. In order to be copied, the copying machinery must read the symbol, i.e. obtain an ‘intrinsic theory’ of it.
    3. Who is the representation’s user? — The symbol itself is part of the machinery that undergoes the copying process, which is thus its own user in this sense.”

    – Then, the symbol has meaning most likely only to itself.
    – Not sure how other patterns of the mind in general could use the symbol to do something.
    – We can’t know what a symbol points to if we don’t have a specific “analyzer”.

    “By this process, the von Neumann Mind creates, within itself, a kind of mirror image of the world in terms of patterns, that is, it mirrors within itself the external world’s structure.”

    – I don’t see how these patterns could actually mirror the external world’s structure. I mean, hypothetically they could, but the paper does not clearly point to how complex structures could be mirrored. While the paper surely presents an account of patterns “representing” things from the world, a jump to “representing” complex structures and situations is, in my view, a big one.

    “mirror-image of the world that the von Neumann Mind gives rise to a self: the self is merely the fixed point of the mapping.”

    – From my understanding I can infer that there could be many fixed points in such a system. Because why not? And if so, are they all “selves”?

    “The copying operation here acts as a kind of ‘internal perception’: just as vision originally gave rise to a pattern representing the flower, this is now done by copying. But recall that the design I proposed above is self-reading: the flower-symbol then effectively codes for itself in the next reproductive cycle.”

    – If the replication of itself is the only thing the pattern would do, then does this mean that the meaning of the symbol would be the symbol itself, not the flower?

  37. David Duffy says:

    “Both are simply derived quantities from the Shannon entropy, and thus, just as purely syntactic.”

    Hi Jochen. I don’t think one can describe the change in the non-equilibrium free energy of a complex system acquired by an act of measurement as “syntactic”. It is actually causal in the world, and occurs continually in organisms to further their purposes. I suspect, but certainly cannot prove, that these are more than simply constraints on the implementation of computation.

    “By directly relating memory and predictive power to dissipation out of equilibrium, the results we have derived here indicate that these two important paradigms are deeply connected.”
    http://arxiv.org/abs/1203.3271

    Anyway, this is all rather peripheral to your actual paper!

  38. Charles Wolverton says:

    And do you believe it’s true that truth and reality aren’t relevant?

    No. I take seriously Rorty’s quip (oft maligned, mistakenly IMO):

    Truth is what your peers let you get away with saying.

    (There’s an implicit assumption that those peers have credibility with respect to the relevant discipline).

    And you won’t let me get away with saying that! In any event, I’m not presenting an argument for a position’s being correct, just seeing how far I can get starting from those premises.

    In your second paragraph you speak of “evidence”, in particular “appearances”, the way things seem to be. But it seems quite obvious that the earth is flat, the sun revolves around the earth, we live in a world of multicolored solid objects, large heavy objects can’t rise off the ground on their own, etc. But we know all those seemings are inconsistent with scientific consensus (ie, “truth” in Rorty’s sense). Which suggests that the way things “seem” isn’t really evidence. Of course, neither is it the case that things are never the way they seem to be. So, you may be right. I’m just exploring the possibility that you aren’t.

    I see Ihtio is experiencing the same confusions I am about the functioning of your von Neumann mind, so I’ll wait for your response to him before pursuing that topic further.

  39. Jochen says:

    Ihtio:

    I’ve read the paper. It was a fascinating read. In fact, it was much, much better than I expected – and I expected something good.

    Wow, thanks! 😀

    On to your questions:

    This can be rewritten as “If I am thinking of an apple, then I am thinking of an apple.”

    Well, that’s true of any definition, right? If I say ‘A is B’, then, since A=B, I can substitute A for B wherever it occurs, yielding ‘A is A’. What a definition establishes is that there is this equivalency, i.e. that whenever I think of an apple, my mind is directed at that apple—something that may seem trivial to you, but that some people, e.g. eliminativists, presumably would deny; hence, this also serves to locate my account in the wider landscape of possibilities.

    The account given in the paper presupposes that for every R(x) there must exist an x. (R – representation, x – object or the represented).

    But every x in the paper is given by an activation pattern of the cellular automaton, which can either be a sense impression, a memory, or wholly imaginary. This will then yield actions consistent with your belief in, or thinking about, unicorns.

    One would expect to be provided with a clear delineation between “normal” symbols and “active” ones.

    Well, I do give a definition of active symbols: those which themselves can be seen as active agents, i.e. that do something. Typically, one might think a symbol to be a passive representation, however, my symbols, in order to supply their interpretation, must carry out certain actions.

    And yes, in principle, everything that stands to something else in some relationship can be used as a symbol—you can use the tracks in the snow as a symbol for ‘there was a deer in my yard last night’, for instance. My active symbols, however, are their own user, and their use is given by the von Neumann process—which I think serves perfectly well as a definition. What do you think is fuzzy, there?

    The meaning of a symbol is thus the effect it has on an agent.

    No. The meaning of a symbol is what the action an agent carries out in response pertains to—consider pure-bred success semantics: then, the meaning of a symbol is the success condition of the action that an agent carries out upon encountering the symbol. I did leave things a bit more open-ended than that, merely appealing to a notion of ‘pertaining’, with what exactly it means for an action to pertain to something being left unspecified; if that worries you, however, you may substitute ‘has as success condition’ for ‘pertains to’.

    a representation of an apple has meaning because it has an effect that is directed at an apple

    The action need not necessarily be directed at the apple, but the apple needs to be necessary for the action to fulfill its goal, or to come to fruition. Telling you ‘there’s an apple on the table’ is an action directed at you, but will only successfully communicate a true state of affairs if there is, in fact, an apple on the table.

    I will not perform any action towards said miniature.

    You have already performed many actions pertaining to that miniature; you’ve created a representation of it in your mind, decided not to take any action related to it (in the world), and so on. If there had been no activity in which that miniature figures, you simply would not have been aware of it.

    Such a blueprint has an extremely large meaning space / it symbolizes almost uncountable number of things / actions!

    But only in the case that it is used towards building what it is a blueprint of do those actions accomplish their goal. If you have a blueprint of the Taj Mahal, and you build the Cheops pyramid, then you’ve simply messed up!

    Computation is not different than other physical, biological, meteorological or ecological processes:

    It is, since there is no unique mapping picking out a computation from a given physical system’s behavior (while there is, e.g., a unique weather, given the physical circumstances). Say you have a physical system that has ‘inputs’ and ‘outputs’ (note that in labeling inputs and outputs, we already remove a lot of ambiguity; additionally, in identifying the physical system, we’re also fixing a certain scale, removing even more ambiguity). So, you do measurements on those inputs, and find that, whenever a voltage lower than 0.5 V is applied on both inputs, the output also yields a voltage lower than 0.5 V; however, when one or both inputs have a voltage larger than 0.5 V applied, then the output, too, will be larger than 0.5 V.

    Furthermore, let’s remove even more ambiguity and guarantee you that the system computes a Boolean function. So, three crass reductions of ambiguity: you know inputs and outputs, you know the relevant scale, and you know the kind of computation that’s being performed. If you were presented with a random physical system, there’d be essentially no way to fix these issues, but I’m giving you all that for free.

    Still, I claim that the question of which computation is being performed can’t be answered: you need a mapping taking voltages to logical bit values; and that mapping is arbitrary. So, you don’t know anything about what the computation does, beyond what I’ve told you; hence, everything about a computation is dependent on arbitrary, interpretational choices. Thus, unlike other physical phenomena, and, conversely, exactly like the meaning of a text, what computation is being performed is wholly dependent on the intentionality derived from the user. Without such external intentionality, there simply is no fact of the matter regarding what computation a given physical system performs, and hence, computation does not work as a grounding for mental content.
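    Jochen’s example can be made concrete with a small sketch of my own (not from the paper): encode the physical behaviour he describes as a function over voltages, then read the same input–output table under the two rival voltage-to-bit mappings.

```python
# The physical behavior Jochen describes: the output voltage is high
# (> 0.5 V) exactly when at least one input voltage is high.
def device(v1, v2):
    return 0.9 if (v1 > 0.5 or v2 > 0.5) else 0.1

# Two equally legitimate voltage-to-bit mappings.
def bit_low_is_0(v):          # low voltage -> logical 0
    return 1 if v > 0.5 else 0

def bit_low_is_1(v):          # low voltage -> logical 1
    return 0 if v > 0.5 else 1

for va in [0.1, 0.9]:
    for vb in [0.1, 0.9]:
        out = device(va, vb)
        # Under low->0 the table reads as OR; under low->1, as AND.
        assert bit_low_is_0(out) == (bit_low_is_0(va) | bit_low_is_0(vb))
        assert bit_low_is_1(out) == (bit_low_is_1(va) & bit_low_is_1(vb))
```

    The physics is identical in both passes over the table; only the interpretive mapping changes, which is exactly the arbitrariness at issue.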

    – Then, the symbol has meaning most likely only to itself.
    – Not sure how other patterns of the mind in general could use the symbol to do something.
    – We can’t know what a symbol points to if we don’t have a specific “analyzer”.

    No, the symbol supplies its own interpretation. It will be completely transparent to any third-party what the symbol means, as it unfolds its meaning via its actions. Like a text that, when read, re-enacts the events it references.

    I mean, hypothetically they could, but the paper does not clearly point to how complex structures could be mirrored.

    You’re right; hence, I don’t claim to have a complete theory of intentionality—my aim was to get rid of the homunculus, and produce a kind of ‘intentional atom’, i.e. a physical system that can be said, without any circularity, to be of something besides itself. Creating a full picture of the world from that is an open problem—I acknowledge that in the conclusions.

    From my understanding I can infer that they could be many fixed points in such a system. Because why not? And if so, then are they all “selves”?

    That’s an interesting question. I generally think of the mappings between the ‘outside world’ and the structure of the von Neumann mind as being a kind of projection—analogous to the relationship between a map and the territory. In that case, you have only (well, at most) one fixed point—the position of the map within the territory. But perhaps that could go out of whack, leading to misattributions of self-hood? I need to think about this some more.

    If the replication of itself is the only thing the pattern would do, then does this mean that the meaning of the symbol would be the symbol itself, not the flower?

    Well, the meaning—the daughter pattern—is itself a set of instructions that lead the entity to perform actions which appropriately relate to the presence of the flower; that’s how, ultimately, meanings are hooked up to the world.

  40. Jochen says:

    David:

    I don’t think one can describe the change in the non-equilibrium free energy of a complex system acquired by an act of measurement as “syntactic”. It is actually causal in the world, and occurs continually in organisms to further their purposes.

    I, on the other hand, think that’s exactly the sort of thing one could describe as syntactic. You can use the change in free energy to signify something, but what it signifies is arbitrary—subject to convention between two parties wishing to communicate, say. Think ‘one if by land, two if by sea’: without this stipulation, it means nothing how many lanterns are lighted—the number of lanterns is a syntactic property, while the message transferred is its semantics. There’s nothing that forces that semantics upon us if all we have is the physical system of one or two lanterns.

    Anyway, this is all rather peripheral to your actual paper!

    Well, it’s rather the reason I wrote it! I used to believe that a computational account of mind would be sufficient; and it’s just that sort of argument that made me believe it’s not, and come up instead with my own attempt at a non-computational, and yet nevertheless fully naturalistic account. I’d still be delighted if a computational account could be made to work, though; hence, my prodding.

    Out of curiosity, what’s your reaction to the thought experiment I posed to Ihtio above (the question of which Boolean function a given physical system implements)?

    ——————–

    Charles:

    And you won’t let me get away with saying that!

    Well, because my immediate reaction is to ask whether you believe that it’s true that ‘truth is what your peers let you get away with saying’. I mean, how could one ever justifiably arrive at such a stance? Typically, one might change one’s mind when presented with evidence, with certain facts contradicting what one has held before; but of course, that can’t lead one to a stance in which there are no facts.

    Which suggests that the way things “seem” isn’t really evidence.

    The way things seem is the only evidence we ever have. The world’s flatness, for instance, is a perfectly sensible approximation on the everyday level; it was only once we started going beyond that level that its curvature became important. And on that level, of course, the world does no longer seem flat—it seems curved! So, we revoke the conclusions we’ve come to once things seem inconsistent with them—but no sooner.

  41. ihtio says:

    Jochen,

    This can be rewritten as “If I am thinking of an apple, then I am thinking of an apple.”

    If I say ‘A is B’, then, since A=B, I can substitute A for B wherever it occurs, yielding ‘A is A’. What a definition establishes is that there is this equivalency

    Yes, I agree. What I had in mind was:
    “Thinking of an apple” is interchangeable with “a thought ‘points’ to an apple”, so instead of a definition we have a series of sentences that don’t really help us grasp the core idea that much.
    However, I wouldn’t like to fight over a definition, especially of something that is so difficult to put into words. Let’s focus on more important points.

    The account given in the paper presupposes that for every R(x) there must exist an x. (R – representation, x – object or the represented).

    But every x in the paper is given by an activation pattern of the cellular automaton, which can either be a sense impression, a memory, or wholly imaginary. This will then yield actions consistent with your belief in, or thinking about, unicorns.

    Actually, I think that we should be thinking along the lines of:
    x – unicorn, R(x) – representation of a unicorn.
    If we were to say that x is a pattern of a cellular automaton or a neuronal assembly, then we couldn’t say that any such pattern represents an apple or a unicorn, because such patterns would only represent themselves / other patterns. The ordinary sense of the word “about” couldn’t be maintained.
    If a pattern gains a meaning about an apple due to having some actions that may be performed on apples, then we would be inclined to think that a pattern about a unicorn has this meaning (= it is about a unicorn) because it has some actions pertaining to unicorns (doable with / on unicorns).

    Well, I do give a definition of active symbols: those which themselves can be seen as active agents, i.e. that do something. Typically, one might think a symbol to be a passive representation, however, my symbols, in order to supply their interpretation, must carry out certain actions.

    And yes, in principle, everything that stands to something else in some relationship can be used as a symbol—you can use the tracks in the snow as a symbol for ‘there was a deer in my yard last night’, for instance. My active symbols, however, are their own user, and their use is given by the von Neumann process—which I think serves perfectly well as a definition. What do you think is fuzzy, there?

    Everything is clear now. I guess I missed something or just got confused.

    The meaning of a symbol is thus the effect it has on an agent.

    No. The meaning of a symbol is what the action an agent carries out in response pertains to—consider pure-bred success semantics: then, the meaning of a symbol is the success condition of the action that an agent carries out upon encountering the symbol. I did leave things a bit more open-ended than that, merely appealing to a notion of ‘pertaining’, with what exactly it means for an action to pertain to something being left unspecified; if that worries you, however, you may substitute ‘has as success condition’ for ‘pertains to’.

    It worries me very much. It is a sleight of hand close to “The meaning of a symbol is a meaningful action of the symbol on the object that the symbol means (points to)”. Without a clear-cut elaboration of the “success condition” of an action of a symbol (how does someone / something notice whether the action was successful?), I don’t think I can grasp the idea.
    And unicorns once again: none of the actions any symbol performs pertains to “real” unicorns, therefore none of the unicorn patterns has the meaning “unicorn”, therefore we cannot think “about” unicorns.

    a representation of an apple has meaning because it has an effect that is directed at an apple

    The action need not necessarily be directed at the apple, but the apple needs to be necessary for the action to fulfill its goal, or to come to fruition. Telling you ‘there’s an apple on the table’ is an action directed at you, but will only successfully communicate a true state of affairs if there is, in fact, an apple on the table.

    I don’t follow. Could you present a simple example of an action fulfilling its goal and how it relates to the meaning of a symbol/pattern? I couldn’t answer this question from the last quoted sentence alone.

    I will not perform any action towards said miniature.

    You have already performed many actions pertaining to that miniature; you’ve created a representation of it in your mind, decided not to take any action related to it (in the world), and so on. If there had been no activity in which that miniature figures, you simply would not have been aware of it.

    In some way we could say that all those things mentioned (thinking about the miniature, imagining it) pertain to it (even if it doesn’t exist). However, from the perspective of an individual symbol / pattern it’s not so clear. How does the pattern know that it is about a miniature that doesn’t exist, or that wasn’t acted on in any other way?
    The thing is, the term “action” that should “pertain to” the object the symbol signifies is not that easy to understand. If the action need not have anything to do with the object external to the mind, then how do we know that the action pertains to the object in any way?

    I wouldn’t want to delve into the computation issue. I just say that if we make bread, then we also arbitrarily assign meanings to the ingredients / inputs (flour, water, salt), and we have an arbitrary output (a loaf of bread). The same goes for computation. The question “which Boolean function does a given physical system implement?” is the same as the question “which type of bread does the bakery produce?” – in both of these situations you have some physical items (voltages, flour) that are somehow processed (through logic gates, or by mixing and baking at high temperature) to produce an effect (some pattern of voltages, or a loaf of bread).
    But the matter is not important to the discussion at hand, so there is no need to continue it here.

    – Then, the symbol has meaning most likely only to itself.
    – Not sure how other patterns of the mind in general could use the symbol to do something.
    – We can’t know what a symbol points to if we don’t have a specific “analyzer”.

    No, the symbol supplies its own interpretation. It will be completely transparent to any third-party what the symbol means, as it unfolds its meaning via its actions. Like a text that, when read, re-enacts the events it references.

    Yes, I understand what you mean. My concern is with the collaboration of patterns. If we have many active symbols, then how are they able to create new meaning from themselves; how do they share meaning, if at all? In the most obvious case of language, we have words that presumably have corresponding patterns. They somehow arrange themselves into sentences and create new meanings. This is a crucial part of a theory of meaning – compositionality.
    Of course I know that this is much to ask and I know from your next paragraph in the comment that it is one of the TODO items.

    From my understanding I can infer that they could be many fixed points in such a system. Because why not? And if so, then are they all “selves”?

    That’s an interesting question. I generally think of the mappings between the ‘outside world’ and the structure of the von Neumann mind as being a kind of projection—analogous to the relationship between a map and the territory. In that case, you have only (well, at most) one fixed point—the position of the map within the territory. But perhaps that could go out of whack, leading to misattributions of self-hood? I need to think about this some more.

    OK. In the theory of dynamical systems – be they discrete or continuous – we have tools to look for and analyze fixed points. Many studied dynamical systems have more than one fixed point. It wouldn’t be strange to find many fixed points in a mind constructed from cellular automata.
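    ihtio’s point can be illustrated with a toy example of my own (not from the paper or the discussion): even a one-dimensional discrete map typically has several states x satisfying the fixed-point condition f(x) = x.

```python
# Toy discrete dynamical system: f(x) = x**3 on the reals.
# A fixed point is a state x with f(x) == x; this map has three of them.
def f(x):
    return x ** 3

candidates = [-1.0, -0.5, 0.0, 0.5, 1.0]
fixed_points = [x for x in candidates if f(x) == x]
assert fixed_points == [-1.0, 0.0, 1.0]
```

    Which, if any, of multiple such fixed points would count as the “self” is exactly the question left open in the exchange above.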

    If the replication of itself is the only thing the pattern would do, then does this mean that the meaning of the symbol would be the symbol itself, not the flower?

    Well, the meaning—the daughter pattern—is itself a set of instructions that lead the entity to perform actions which appropriately relate to the presence of the flower; that’s how, ultimately, meanings are hooked up to the world.

    Right. Now I will remember that. However, once again we have this “appropriate relation”, “success condition”, “action that pertains to the object” – notions that are hard to grasp and hard to test for.

  42. Jochen says:

    Ihtio:

    “Thinking of an apple” is interchangeable with “a thought ‘points’ to an apple”,

    Well, to you, that may be trivial, but it’s a contentious position at least in some circles—again, eliminativists would hold that it’s the kind of folk psychological assertion that is to be thrown out.

    If we were to say that x is a pattern of a cellular automaton or neuronal assembly, then we couldn’t say that any such pattern represents an apple or a unicorn, because they would only represent themselves / other patterns.

    Well, the idea is that everything that influences a first-generational pattern such that the change triggers a distinct second-generational pattern, yielding certain actions, can be represented, provided these actions are appropriate to the represented. Thus, if somebody says to you, ‘look, a unicorn!’, causing you to turn, or to say ‘there are no unicorns’, or something similar, then that would constitute a representation of the unicorn.

    It is a sleight of hand close to “The meaning of a symbol is a meaningful action of the symbol on the object that the symbol means (points to)”.

    No, there need not be any ‘meaningfulness’ to the action. Let’s think about things in the pure CA world. There, a symbol furnishes instructions for a constructor to construct a particular pattern. In the von Neumann replicator, a pattern influences a first-generational pattern so that it furnishes instructions for itself to build a second-generational pattern. I take it here it’s unambiguous what is meant by an action pertaining to something—namely, actions bringing about that something. And there is also no hidden meaning smuggled into the system. This then is my primary goal: to construct a system in which there is intrinsic aboutness without ambiguity.
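    The dual role of the blueprint in that construction—read once as instructions by the constructor, copied once uninterpreted—can be sketched abstractly. This is my own simplification, with all cellular-automaton detail abstracted away and the names purely illustrative.

```python
# A 'machine' is a pair: a constructed body plus the blueprint describing it.
def construct(blueprint):
    # Universal constructor: interprets the blueprint and builds a body.
    return {"parts": list(blueprint)}

def reproduce(machine):
    body, blueprint = machine
    child_body = construct(blueprint)    # blueprint used as instructions
    child_blueprint = list(blueprint)    # blueprint copied, uninterpreted
    return (child_body, child_blueprint)

parent = (construct(["arm", "sensor"]), ["arm", "sensor"])
child = reproduce(parent)
assert child == parent  # reproduction yields an identical machine
```

    The key design feature is that the copy step never has to interpret the blueprint; interpretation happens only in the construct step.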

    The next question then is how to hook that system up to the world, such that the objects in the world can be objects of that system’s intentionality. Hooking up the left, input side doesn’t seem too much of a problem: sensory data, memories, thoughts etc. lead to changes in a first-generation pattern, and then to the construction of a second-generation pattern differing from the first just in the effect of, e.g., the sensory influence. By the same reasoning, then, the pattern is to itself about this sensory data.

    But we don’t want the pattern’s intentionality to be directed at sensory data, but rather, at things in the world. To accomplish this, my proposal then is to interpret the second-generation pattern as instructions towards performing certain actions—thus, if the real-world object is the unique object of those actions, establishing a one-to-one relationship between the intentional object of the pattern, and the real-world object the actions are directed at.

    Now you ask for how to cash out the notion of actions being directed at something. There are several different ways this might be achieved. The ancestor of the sort of action-based semantics I am proposing is Frank Ramsey’s notion that “any set of actions for whose utility p is a necessary and sufficient condition might be called a belief that p, and so would be true if p, i.e. if they are useful”. So a chicken’s acting in such a way as if a certain caterpillar is poisonous is useful to that chicken exactly if that caterpillar is in fact poisonous; and then, one can talk about the chicken’s belief that the caterpillar is poisonous.

    I think that this notion alone does not suffice to ground intentionality, as I try to illustrate in my variant of the Chinese Room experiment, where the meaning of signs can be transferred via inducing certain actions, but only if the occupant is already intentional. It’s this homunculus that my von Neumann construction is intended to replace.

    Ultimately, I believe it’s perfectly fine to talk about an action’s directedness, however—this directedness merely recognizes that acting in a certain way only meshes with the world, so to speak, if the action is understood as aiming at something. So the chicken’s avoidance of the caterpillar is directed at its being poisonous, because, all else being equal, only if the caterpillar is poisonous does it make sense for the chicken to act that way. This is just a statement regarding what facts obtain in the world.

    To summarize, my first aim is to show, within the CA world, a system that can be validly said to be about something beyond itself; if in this setting, you grant me the sensibility of my definitions, then I consider that main aim to be fulfilled.

    The secondary aim is really more of a work in progress; you’re right to note that I am somewhat vague about the hooking-up of the CA concepts to the world, but what I really intend to do here is to plot a course for further research: if all of these notions can be cashed out the way I have sketched them, then perhaps, in due time, a full theory of intentionality can be developed along these lines—but I’m nowhere claiming to be there just yet.

    To me, the main obstacle a theory of intentionality faces is Brentano’s claim that ‘no physical phenomenon exhibits anything like’ the sort of intentional inexistence mental content possesses. So, that’s my point of entry: try and construct a system that does. If I’ve been successful at that, then I would consider my paper as having achieved its main object.

    I am just saying that if we make bread, then we also arbitrarily assign meanings to the ingredients / inputs (flour, water, salt), and we have an arbitrary output (a loaf of bread). The same goes for computation. The question “which Boolean function does a given physical system implement?” is the same as the question “which type of bread does the bakery produce?”

    No. Whenever you start with the same physical ingredients, follow the same physical procedure, then, in the case of baking bread, you will obtain the same bread (that’s, after all, why recipes work); but in the case of computation, what is being computed is left completely unspecified. You still have to make that arbitrary choice associating, say, low voltage with a logical 0, and high voltage with a logical 1, in which case, the Boolean function being computed will be the logical OR; while if you associate low voltage with 1, and high voltage with 0 (a choice standing on exactly the same footing), you get the Boolean AND. Of course, many more choices are possible; nothing compels you to make the same identification on inputs and outputs, for instance, or even on both inputs.

    Thus, by merely ‘looking at’ the system differently, you change what computation is being performed; no matter how you look at bread making, the bread that comes out will be the same. Hence, computation depends on derived intentionality, in contrast to bread making, which is why I rely on real patterns, and on actions carried out in the (CA) world, rather than on computational notions, to ground my theory.
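    For concreteness, the two-labelings point can be checked mechanically (a sketch of mine; the gate and conventions are the standard textbook ones): the same physical device, which outputs high voltage whenever at least one input is high, computes OR under one bit-assignment and AND under the opposite one.

```python
# One physical gate, two labeling conventions. Physically, the device
# outputs HIGH whenever at least one input is HIGH.
LOW, HIGH = "low", "high"

def physical_gate(a, b):
    return HIGH if HIGH in (a, b) else LOW

def read(convention, *levels):
    """Map voltage levels to bits under a chosen convention."""
    return tuple(convention[v] for v in levels)

conv1 = {LOW: 0, HIGH: 1}   # low = 0, high = 1
conv2 = {LOW: 1, HIGH: 0}   # the opposite, equally legitimate, choice

for a in (LOW, HIGH):
    for b in (LOW, HIGH):
        out = physical_gate(a, b)
        x1, y1, z1 = read(conv1, a, b, out)
        assert z1 == (x1 or y1)   # under conv1 the gate computes OR
        x2, y2, z2 = read(conv2, a, b, out)
        assert z2 == (x2 and y2)  # under conv2 the same gate computes AND
```

    Nothing in the physics privileges one convention; the Boolean function only appears once the arbitrary interpretation is fixed.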

    If we have many active symbols, then how are they able to create new meaning among themselves? How do they share meaning, if at all?

    Well, one possibility would be that patterns can arrange themselves into larger patterns, that then undergo replication as a whole. But as I said, that’s something I probably can only address adequately once the model is more developed in its details.

    It wouldn’t be strange to find many fixed points in a mind constructed from cellular automata.

    Well, it depends on what that mind does. If it just gives, in total, some image of the world with the agent in it, then there would be only that one fixed point where the agent maps onto itself, so to speak. But as I said, it’s a fascinating question—perhaps certain forms of mental illness, or out-of-body experiences, or things of that general sort could be due to that self as a fixed point being out of whack. But that’s blue-sky speculation at this point.

  43. ihtio says:

    If we were to say that x is a pattern of a cellular automaton or neuronal assembly, then we couldn’t say that any such pattern represents an apple or a unicorn, because they would only represent themselves / other patterns.

    Well, the idea is that everything that influences a first-generation pattern such that the change triggers a distinct second-generation pattern, yielding certain actions, can be represented, provided these actions are appropriate to the thing represented. Thus, if somebody says to you, ‘look, a unicorn!’, causing you to turn, or to say ‘there are no unicorns’, or something similar, then that would constitute a representation of the unicorn.

    Amongst the stimuli that lead to representations of unicorns are children’s books, tales, animations, etc. Normally, when someone has the thought “a unicorn is a magical horse with a horn”, we say that s/he is thinking about a unicorn (a non-existent animal) – not about these stories, etc. This fact is very, very hard to recover in your account of intentionality as active cellular automata patterns. On a direct reading, your paper seems to say that patterns can represent these stories, the pictures of unicorns from the books, etc., but not unicorns per se.

    It is a sleight of hand close to “The meaning of a symbol is a meaningful action of the symbol on the object that the symbol means (points to)”.

    (…)
    But we don’t want the pattern’s intentionality to be directed at sensory data, but rather, at things in the world. To accomplish this, my proposal then is to interpret the second-generation pattern as instructions towards performing certain actions—thus, if the real-world object is the unique object of those actions, establishing a one-to-one relationship between the intentional object of the pattern, and the real-world object the actions are directed at.

    Now you ask how to cash out the notion of actions being directed at something. There are several different ways this might be achieved. The ancestor of the sort of action-based semantics I am proposing is Frank Ramsey’s notion that “any set of actions for whose utility p is a necessary and sufficient condition might be called a belief that p, and so would be true if p, i.e. if they are useful”. So a chicken’s acting as if a certain caterpillar is poisonous is useful to that chicken exactly if that caterpillar is in fact poisonous; and then, one can talk about the chicken’s belief that the caterpillar is poisonous.
    (…)

    I can only understand you as claiming either that you have developed an account of intentionality that works for “real objects” (without unicorns), where actions directed at the objects ground the meaning, or that you have developed an account of intentionality that works for both “real objects” and “imaginary objects” (such as unicorns), where actions “as if directed” (as if the objects were real) ground the meaning of the symbol / pattern / representation.
    The thing is that my saying “there are no unicorns!” is not an action directed at unicorns in any way. A CA pattern (or patterns) may represent the proposition or belief “there are no unicorns” in some way, but it’s hard to see how they could represent just “a unicorn” (and not “there is a unicorn”).

  44. Jochen says:

    Ihtio:

    Amongst the stimuli that lead to representations of unicorns are children’s books, tales, animations, etc.

    Take somebody who has never seen an elephant, but has heard of it only via stories, tales, animations, and so on. These cause certain effects within the mind—in my scheme, influencing parent-generation patterns to lead to specific daughter generations. The resulting actions certainly may pertain to elephants in a sensible way, even though no direct interaction with an elephant has ever occurred.

    With regards to unicorns, we are in the same situation—the only difference being that there is not even in principle a possibility of having a direct interaction with a unicorn. But this doesn’t introduce any novel difficulties. Whether the unicorn actually exists doesn’t play into this; if there is a possibility to have intentional states directed at things without direct interaction with them, then the nonexistent doesn’t pose any more problems than the merely absent.

    And that there is such a possibility seems obvious, to me: the chicken need not have interacted with a given caterpillar to acquire a belief of its being poisonous, suitably expressed by its actions. Indeed, if the only way to learn of something’s poisonous nature were to interact directly with that thing, this would probably bode ill for the chicken!

  45. ihtio says:

    Take somebody who has never seen an elephant, but has heard of it only via stories, tales, animations, and so on. These cause certain effects within the mind—in my scheme, influencing parent-generation patterns to lead to specific daughter generations. The resulting actions certainly may pertain to elephants in a sensible way, even though no direct interaction with an elephant has ever occurred.

    How can an action pertain to an elephant in a “sensible way” with no contact with an elephant?
    And how do we know whether an action pertains to an elephant in a “sensible way”? How can we identify the “success criteria” of an action?

    the chicken need not have interacted with a given caterpillar to acquire a belief of its being poisonous, suitably expressed by its actions. Indeed, if the only way to learn of something’s poisonous nature were to interact directly with that thing, this would probably bode ill for the chicken!

    Let’s say we have two chickens: one believes that the caterpillar is poisonous and the other one believes that the caterpillar is his sister. They both decide not to eat the caterpillar. Therefore we can say that there is some “success” of an action (eating something else instead of the caterpillar). We have exactly the same action, exactly the same object (the caterpillar), but two beliefs. Do the two beliefs have the same meaning due to the same action?

  46. Jochen says:

    Ihtio:

    How can we identify the “success criteria” of an action?

    Actions can either succeed or fail. Reaching for an apple succeeds if there is an apple present; it fails if there isn’t. Thus, the presence of an apple is a necessary condition for the success of that action.

    Likewise, there are actions a person can undertake that make sense only if there are elephants in the world, like going on safari in Africa for a photo-op, or, god forbid, becoming an ivory poacher. Since it’s no use trying to poach ivory if there are no elephants, there being elephants is a necessary condition for the action to be sensible.

    For the unicorn case, now, exactly the same conditions hold. Somebody believes in unicorns if they, say, roam through forests that are rumored to contain unicorns, try to persuade a virgin girl to attract them, and so on. These actions make sense only if unicorns exist. Now, in the real world, there are no unicorns, so the actions will never in fact be successful; but that isn’t a problem. All that matters is that if there are unicorns, the actions will succeed.

    Now, you might want to object that we don’t know the goal of an action, and hence, that we don’t know what it means for it to succeed. Indeed, you might want to argue that an action having a goal itself presupposes intentionality, and hence, grounding meaning in action is circular.

    But there are objective criteria independent of the ‘intended’ goal of an action that allow us to infer what an action aims at; indeed, that’s what we’re doing whenever we conclude that, say, the child is trying to get the ball from the roof. After all, we have no access to the child’s intentional mental content; we infer their aim simply from their actions. For one, as long as an action has not reached its goal, it might be continued; the agent might try again. Conversely, once an action has succeeded, the activity ceases.

    So actually, goals of actions are publicly available, and thus, so are their success conditions, or, more broadly, what they pertain to.
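    This persistence-and-cessation criterion can be put in toy form (my own construction, with invented names): an observer who only sees the trace of attempts can read the success condition off the point where activity stops.

```python
# Toy illustration: the agent retries while the success condition fails
# and stops once it holds; the observer sees only the trace of attempts.

def observe(agent_step, world, max_tries=10):
    """Return the trace of attempts; activity ceases on success."""
    trace = []
    for _ in range(max_tries):
        success = agent_step(world)
        trace.append(success)
        if success:
            break
    return trace

# A child trying to get the ball off the roof: each attempt nudges the
# ball, and (here, deterministically) the third nudge dislodges it.
def try_dislodge(world):
    world["nudges"] += 1
    world["ball_on_roof"] = world["nudges"] < 3
    return not world["ball_on_roof"]

world = {"ball_on_roof": True, "nudges": 0}
trace = observe(try_dislodge, world)
# Repetition followed by cessation is what reveals the success condition:
assert trace == [False, False, True]
assert not world["ball_on_roof"]
```

    The observer never inspects the child’s mental content; the goal is inferred entirely from the publicly available pattern of retries and stopping.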

    Let’s say we have two chickens: one believes that the caterpillar is poisonous and the other one believes that the caterpillar is his sister. They both decide not to eat the caterpillar.

    That is getting it exactly backwards. The chickens don’t have a belief on which they then act; rather, beliefs can be ascribed to them by virtue of their actions. Thus, same actions = same beliefs. In the case you mention, different sets of actions would imply the belief that the caterpillar is poisonous versus the belief that the caterpillar is one of the chicken’s sisters.

    A poisonous caterpillar would be shunned altogether, for example, because here the goal is survival, and its success condition is not coming into contact with the poisonous caterpillar. A chicken that believes the caterpillar to be its sister, on the other hand, would behave towards it in whatever way chickens behave towards their siblings (perhaps trying to establish a pecking order, for example, which would make no sense if the belief were that the caterpillar is poisonous, but would be very sensible if it were another chicken).

    However, you’re right to note nevertheless that there is an issue of underdetermination here: there typically isn’t a single, well-defined ‘success condition’ to any action. Peter has already pointed to that as an open challenge to my views, and I agree (but see my attempt in the first reply to use the dual hooking-up of my scheme to the world to narrow down mental content further). In fact, I also point this out in the paper itself. So this, too, is on the ‘to do’-list.

  47. ihtio says:

    Actions can either succeed or fail. Reaching for an apple succeeds if there is an apple present; it fails if there isn’t. Thus, the presence of an apple is a necessary condition for the success of that action.

    Likewise, there are actions a person can undertake that make sense only if there are elephants in the world, like going on safari in Africa for a photo-op, or, god forbid, becoming an ivory poacher. Since it’s no use trying to poach ivory if there are no elephants, there being elephants is a necessary condition for the action to be sensible.

    For the unicorn case, now, exactly the same conditions hold. Somebody believes in unicorns if they, say, roam through forests that are rumored to contain unicorns, try to persuade a virgin girl to attract them, and so on. These actions make sense only if unicorns exist. Now, in the real world, there are no unicorns, so the actions will never in fact be successful; but that isn’t a problem. All that matters is that if there are unicorns, the actions will succeed.

    I cannot go along with this, for two reasons.
    First, if there are actions that can never succeed, then how exactly can they ground any meaning? The only way for the meaning of a failed action to be discovered is through an external agent – an external observer or an external active symbol. If we have an action “reach for an apple” that succeeds, then we can safely say that the action pertains to the apple when it is indeed taken. But if an arm reaches towards an empty space on a table, then no one is ever able to determine what the meaning of this action could be (and hence of any mental representation associated with it). Did the person try to reach for an imaginary apple, a phone, or what? The underdetermination is – in my opinion at least – not a problem that can be said to be solvable later, as it seems to spring from the very core of the conception of active symbols and their connection with actions.
    Second, you wrote that actions are associated with, in this example, a belief that unicorns exist. But what about thoughts that don’t include this “belief” part? For example, I am thinking about unicorns without believing or disbelieving in the existence of unicorns. I’m just entertaining a quite simple thought. What actions that could in any way pertain to unicorns can be found in this example?

    The chickens don’t have a belief, on which they then act; rather, beliefs can be ascribed by virtue of their actions.

    Would you say the same for humans? People don’t have beliefs, but beliefs can only be ascribed to them by virtue of their actions?

    Thus, same actions = same beliefs. In the case you mention, a different set of actions implies a belief that the caterpillar is poisonous, versus the caterpillar being one of the chicken’s sister.

    That is exactly why I proposed a simple example with only one action. If we have a whole complex of actions, relations between them, etc., then the matter is much simpler. However, this would imply that the meaning of a symbol is not grounded in an action but somehow in a network of who knows how many different actions that may be triggered across large spans of time in various contexts. If that were the case, then one symbol would have to be extremely large and complex, so that it could generate all those actions or connect with other symbols.
    If one action cannot determine the meaning of a representation (active symbol), then grounding meaning in such actions doesn’t make sense in the context of your previous statement that “goals of actions are publicly available, and thus, so are their success conditions, or, more broadly, what they pertain to.”

    The underdetermination problem is hardly, to my mind, a thing that could be said to be merely one of the problems of the theory to be solved in the future, because it seems to prevent the theory from working even in the simple cases.

  48. Jochen says:

    Ihtio:

    First, if there are actions that can never succeed, then how exactly can they ground any meaning?

    Because they nevertheless have success conditions, that is, necessary circumstances under which they would succeed; it’s those that ground meaning.

    But if an arm reaches towards an empty space on a table, then no one is ever able to determine what the meaning of this action could be (and hence of any mental representation associated with it).

    Nobody needs to determine it—it would be a profoundly weird theory on which you would be endowed with intentional states only upon some external agency successfully divining the meaning of your action! What’s important is that a success condition exists, whether it’s met or not, not that anybody knows that condition.

    The underdetermination is – in my opinion at least – not a problem that can be said to be solvable later, as it seems to spring from the very core of the conception of active symbols and their connection with actions.

    The underdetermination is a very generic problem that pretty much all attempts at naturalizing intentionality face—causal, teleosemantic, representational, and yes, action-based ones; and for all of them, there exist more or less convincing attempts at solving it (for one take at solving this issue in the action-based case, I’d suggest the paper by Simon Blackburn I cite). It’s simply not the problem (not the main problem, at any rate) that I set myself in the paper, which was geared at the homunculus regress, and the issue of whether the meaning bestowed by a specific mechanism could be meaning to the agent performing that action, which is something that—to the best of my knowledge—only the phenomenal intentionality camp could so far boast for themselves.

    For example I am thinking about unicorns without believing or disbelieving in the existence of unicorns. I’m just entertaining a quite simple thought. What actions that could in any way pertain to unicorns can be found in this example?

    In as much as a thought has content, it includes beliefs or desires—i.e. it has a world-to-mind or mind-to-world direction of fit, a way the world is taken to be, or a way the world ought to be. So, while you might not entertain any particular belief regarding the existence of unicorns, you will believe that you neither think that unicorns are real, nor the converse, and act appropriately—including making appropriate verbal reports. The content in this case will not simply be ‘unicorns are real’, but ‘I don’t know whether unicorns are real’, which is itself perfectly well expressible via behavior.

    People don’t have beliefs, but beliefs can only be ascribed to them by virtue of their actions?

    I’d rather say that they have beliefs by virtue of their actions, and we can ascribe the beliefs they have to them by observing their actions.

    However this would imply that a meaning of a symbol is not grounded in an action but somehow in the network of who knows how many different actions that may be triggered across a large spans of time in various contexts.

    That is, I think, a given—what Searle calls the ‘background’. No action ever occurs in isolation, but always against the background of a whole complex of behavior.

    If one action cannot determine the meaning of a representation (active symbol), then grounding meaning in such actions doesn’t make sense in the context of your previous statement that “goals of actions are publicly available, and thus, so are their success conditions, or, more broadly, what they pertain to.”

    Why not? Any set of actions can be thought of as a larger action, a behavior, etc. What level of action would you require in order for grounding intentionality to make sense? One involving no more than 36 muscles? One taking no longer than four minutes? What’s the criterion for something being ‘one action’, and why should meaning be grounded in only one action?

    The underdetermination problem is hardly, to my mind, a thing that could be said to be merely one of the problems of the theory to be solved in the future, because it seems to prevent the theory from working even in the simple cases.

    But it does work for the simple cases—recall the CA universe, in which everything is nice and orderly enough, meaning is clearly defined as that which a pattern causes another to do, and so on. If our universe works sufficiently like that CA universe—which it does if it evolves according to fixed laws from prior conditions—then I see no principled reason that such an account could not be likewise made to work.

    And besides, I have supplied an attempt at a response to the underdetermination issue in my first reply in this comment thread, which I pointed out in my last reply to you, so your arguing as if nothing had so far been said on the matter looks a little strange to me.

  49. Jochen says:

    Ihtio, I think we’ve gone a bit off-track here. Ultimately, I’m not terribly interested in giving a defense of full-fledged success semantics—other people have done so much more capably than I could, but more importantly, I don’t really need it.

    What I need, ultimately, is merely the notion that a symbol’s meaning is what it makes its user do—which, I think, is a reasonably clear-cut notion (but please, do voice any reservations you have on that front!). This is something that can be made plausible by an appeal to something like success semantics, but does not rest much on its substantive, and contentious, claims. So I think it is perhaps more fruitful to focus on this notion, and the work it does in my account.

    Ultimately, the aim is then to ground a representational scheme in the sort of self-reading symbols I propose, for which the hardest problem (as I see it) is the homunculus regress, which I attempt to tame. So my angle is somewhat different from that of pure-bred success semantics: while the latter intends to cash out the notion of belief directly in terms of action, I’m interested in grounding the notion of representation—how a symbol comes to mean something—in order then to avail myself of that to formulate an account of beliefs, desires, and so on, in an ultimately representational manner.

    So, while the straight success semanticist would hold that the content of a belief is grounded in action, to me, it’s rather that the content of a symbol is grounded in action, that is, in what it causes a system to do—whence my notion that symbols ought to be understood as instructions. It’s with this in mind that I then consider the von Neumann construction in order to evade the homuncular problems, which would arise if each symbol caused an entity different from itself to act in a certain manner (which directly leads into regress).

    These symbols then can be used in a quasi-representational manner to build up beliefs—some set of symbols represents a state of affairs, which then constitutes the beliefs the agent holds. As to how a symbol comes to represent a state of affairs, this is where the von Neumann construction comes in: when some stimulus is present, it triggers a change in a pattern, such that this pattern comes to be about that stimulus by means of the notion that a symbol is about that which it causes an agent to do; so if a symbol’s own form causes it to do something, then it is about that which it causes itself to do, to itself, without appeal to a third party.

    What it causes itself to do is to give rise to a new pattern, which codes for a set of actions in the world; those actions are then directly correlated to the stimulus. Due to this correspondence, the symbol can thus be used to represent the stimulus, and moreover, to represent it without appeal to some further, already-intentional entity. So we can furnish an account of representation that does not falter on homuncular worries, nor does it need the notion of ‘pertaining to’ a state of affairs that you find (with some justification) rather vague—all that’s really needed is the notion that a symbol is about what it causes an agent to do.
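    A toy rendering of this self-reading idea might look as follows (my own, purely hypothetical sketch, not code from the paper): a pattern reads its own description to construct its daughter, so what it causes to be done, it does to itself, with no external interpreter assigning the meaning.

```python
# Toy von Neumann-style self-reading symbol: the pattern acts on its own
# description, producing a daughter that reproduces it, plus any trace
# left by a stimulus.

def step(pattern, stimulus=None):
    """Construct the daughter pattern from the parent's own description."""
    description = dict(pattern["description"])  # the pattern reads itself
    if stimulus is not None:
        description[stimulus] = True  # the stimulus leaves its trace
    return {"description": description}

p0 = {"description": {}}
p1 = step(p0, stimulus="red-patch")  # perturbed reproduction
p2 = step(p1)                        # faithful reproduction
assert p1["description"] == {"red-patch": True}
assert p2["description"] == p1["description"]
```

    The point of the sketch is only structural: the reproduction loop never hands the description to a third party for interpretation, which is the feature meant to block the homunculus regress.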

    Does this address some of your concerns, or only raise new ones?

  50. ihtio says:

    The underdetermination problem.

    The underdetermination is – in my opinion at least – not a problem that can be said to be solvable later, as it seems to spring from the very core of the conception of active symbols and their connection with actions.

    The underdetermination is a very generic problem that pretty much all attempts at naturalizing intentionality face—causal, teleosemantic, representational, and yes, action-based ones; and for all of them, there exist more or less convincing attempts at solving it (for one take at solving this issue in the action-based case, I’d suggest the paper by Simon Blackburn I cite). It’s simply not the problem (not the main problem, at any rate) that I set myself in the paper, which was geared at the homunculus regress, and the issue of whether the meaning bestowed by a specific mechanism could be meaning to the agent performing that action, which is something that—to the best of my knowledge—only the phenomenal intentionality camp could so far boast for themselves.

    OK. Fair enough. I think I’m with you on this, after reading your reply.

    I have supplied an attempt at a response to the underdetermination issue in my first reply in this comment thread, which I pointed out in my last reply to you, so your arguing as if nothing had so far been said on the matter looks a little strange to me.

    Don’t you worry. I read everything. It’s just that it bothered me so much that it was hard for me to let it go so easily.

    Success conditions of actions.

    First, if there are actions that can never succeed, then how exactly can they ground any meaning?

    Because they nevertheless have success conditions, that is, necessary circumstances under which they would succeed; it’s those that ground meaning.

    Where are the success conditions stored? How are they represented? How does an agent interpret them or their representations?

    But if an arm reaches towards an empty space on a table, then no one is ever able to determine what the meaning of this action could be (and hence of any mental representation associated with it).

    Nobody needs to determine it—it would be a profoundly weird theory on which you would be endowed with intentional states only upon some external agency successfully divining the meaning of your action! What’s important is that a success condition exists, whether it’s met or not, not that anybody knows that condition.

    I agree that it would in fact be a weird theory if we needed some external agency to determine the meaning of an action.
    So what does it mean that “a success condition exists”?
    It seems that the meaning is constructed as follows: there is a symbol / an agent, whose meaning is due to its actions directed towards an object, and the meaning of an action is grounded in specific success criteria. In the paper itself there are only brief mentions of this topic.

    Beliefs and thoughts.

    For example I am thinking about unicorns without believing or disbelieving in the existence of unicorns. I’m just entertaining a quite simple thought. What actions that could in any way pertain to unicorns can be found in this example?

    In as much as a thought has content, it includes beliefs or desires—i.e. it has a world-to-mind or mind-to-world direction of fit, a way the world is taken to be, or a way the world ought to be. So, while you might not entertain any particular belief regarding the existence of unicorns, you will believe that you neither think that unicorns are real, nor the converse, and act appropriately—including making appropriate verbal reports. The content in this case will not simply be ‘unicorns are real’, but ‘I don’t know whether unicorns are real’, which is itself perfectly well expressible via behavior.

    You’re saying that every thought is a belief / desire? If such is the case, then there is a grave disconnect in our perspectives.

    Grounding of meaning in actions.

    However, this would imply that the meaning of a symbol is not grounded in an action but somehow in a network of who knows how many different actions that may be triggered across large spans of time in various contexts.

    That is, I think, a given—what Searle calls the ‘background’. No action ever occurs in isolation, but always against the background of a whole complex of behavior.

    That’s understandable. But how does it relate to grounding the meaning of one particular active symbol? The repertoire of actions of a symbol would have to be very large, together with a huge set of success conditions.

    If one action cannot determine the meaning of a representation (active symbol), then grounding meaning in such actions doesn’t make sense in the context of your previous statement that “goals of actions are publicly available, and thus, so are their success conditions, or, more broadly, what they pertain to.”

    Why not? Any set of actions can be thought of as a larger action, a behavior, etc. What level of action would you require in order for grounding intentionality to make sense? One involving no more than 36 muscles? One taking no longer than four minutes? What’s the criterion for something being ‘one action’, and why should meaning be grounded in only one action?

    I don’t know what level of detail / abstraction / size / approximation / coarse-graining is sufficient for your theory of intentional automata to work properly. I think you may set it to whatever works best for you.

    The reason I am so hard on you is that I like your account very much. I noticed some problems in your proposal, which I mentioned earlier (e.g. “is every effect a meaning for every cause?”), that struck a chord with me, as I’m facing very similar problems in my own journey of getting at meaningful “atoms”. So I’m pushing very hard, because I like it very much and I would like to iron out every last bit of it.

  51. ihtio says:

    Jochen,

    I hadn’t noticed your comment above. It’s most likely because I didn’t refresh the page for a while.

    I agree with everything you said. I have only some concerns about the success conditions of actions, which you can read above, and about the thing we were talking about earlier – which actions are meaningful and which are not?
    I see that a pattern which gives rise to a new pattern, which codes for a set of actions that are “correlated” with the stimulus, can be thought of as a meaningful symbol. But an action is simply an interaction, that is, a subject is in an event with an object. Every interaction has this property: there is always something acting on something else. Light warming up the water. Rain making ponds on the ground. Why exactly are such actions not meaningful (why is light not an active symbol, in other words)? Last time our discussion went in the direction of “success conditions” of actions, but I think that you want to distance yourself from that.

  52. Jochen says:

    Ihtio, I’m going to go ahead and try and address your concerns re success semantics, since I wouldn’t want to leave you hanging; but again, its failings and successes (heh) aren’t really my main concern. So, with that in mind:

    Don’t you worry. I read everything. It’s just that it bothered me so much that it was hard for me to let it go so easily.

    Fair enough. What merely seemed odd to me was that if the strategy I’ve proposed above works, your concerns seem answered; so I have to presume you don’t think that it does, but you didn’t expand on why you want to reject it.

    Another approach, somewhat similar, that’s sometimes proposed is that beliefs aren’t ‘nailed down’ by single actions, but by separate actions within a network of behavior: so if an action A1 has as its success conditions beliefs B1, B2, and B3, and action A2 has likewise B1, B2, and B4, while action A3 has B1, B4, and B6, we can use these equations to nail down B1 completely, and so on.
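    (To make that ‘nailing down’ concrete, here is a toy Python illustration—the belief labels and success-condition sets are invented for the example, not part of any worked-out semantics: the belief a whole network of behavior pins down is just the intersection of the individual actions’ success conditions.)

```python
# Toy sketch: each action is associated with the set of beliefs that
# figure in its success conditions (labels invented for illustration).
success_conditions = {
    "A1": {"B1", "B2", "B3"},
    "A2": {"B1", "B2", "B4"},
    "A3": {"B1", "B4", "B6"},
}

# The belief required by *every* action is what the network as a whole
# nails down completely:
pinned_down = set.intersection(*success_conditions.values())
print(pinned_down)  # {'B1'}
```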

    Where are the success conditions stored? How are they represented? How does an agent interpret them or their representations?

    It’s not clear to me why you believe that they must be stored, or represented, or interpreted. A success condition is a way the world must be for an action to achieve its goal, or to be of utility, or something like that; it’s a (putative) fact about the world. So if that fact obtains, the action will work out, if it doesn’t, it won’t; but that fact needn’t be ‘stored’ anywhere, it’s independent of any specific agent’s knowledge.

    So what does it mean that “a success condition exists“?

    Nothing but that if the world is a certain way, the action will succeed.

    You’re saying that every thought is a belief / desire? If such is the case, then there is a grave disconnect in our perspectives.

    Well, it’s at least a common assumption in the philosophy of intentionality—every intentional thought is either concerned with a way you take the world to be (a belief), or a way you want the world to be (a desire). I’m not completely sure I buy into this wholesale, but it’s not an extraordinary position (as a cite, a quick google search yields e.g. this paper, where it is claimed in the abstract that “[i]t is beliefs and desires which are usually considered the rock-bottom components of individual intentional states”).

    I’m not completely sure where this originates—I want to say it’s from Searle’s book, but I’m not terribly sure on this point—but it’s certainly something that permeates thought on the matter, at least in my impression. It’s not hard to see for most propositional attitudes how they can be analyzed either in terms of beliefs or desires—you hope for things that you don’t believe you have, but desire to have; you fear things you believe don’t obtain, but desire not to occur; and so on. But as I said, I’m not wedded to this account.

    But an action is simply an interaction, that is, a subject taking part in an event with an object. Every event has this property: there is always something acting on something else. Light warming up the water. Rain making ponds on the ground. Why exactly are such actions not meaningful (why is light not an active symbol, in other words)?

    I’m not sure I understand this paragraph correctly. Are you asking what constitutes an action? If so, then I think in the cases you mention it’s fairly straightforward—water warms up by being acted upon, that is, it’s wholly passive in the matter; likewise, the ground is being passively filled up by water, and so on. Actions only occur when the part that acts takes on an active role: you wouldn’t say that ‘filling up with water’ is something the depressions in the ground do, rather, it’s something that merely happens to them. Likewise, you wouldn’t say that me reaching for an apple is something that just happens to me; rather, it’s something I do.

    You’re right, though, that meanings construed in that way are still rather broad. What is an actor, what a mere recipient of action? But ultimately, the detailed answer to these questions seems to me not to matter overly much: within the CA framework, and by extension, within neural networks and brains, if the account can be made to work there, there’s a clear-cut notion of active and passive, which in the CA case is explicitly due to active and passive states of the grid. And I don’t need more: as long as it’s clear that the replication is an action, I can avail myself of the relevant notion of ‘meaning to’ a CA pattern.

    Or, in other words, there may be lots of proto-meanings in the world, but it’s only when this is appropriately processed by the von Neumann construction that these become meanings in the sense we ordinarily use the term; so as long as action is well defined in this setting, I don’t think there is much of a problem.

  53. ihtio says:

    Jochen,

    Success semantics and “what is an action anyway?”

    I will try to explain my problem with actions and success conditions that pushed me to dwell on this for so long. The thing is that in the case of cellular automata we have a very clear situation. We can clearly define, see and most likely program / evolve simple agents that will act on other agents and on themselves. This is very nice, indeed. The problem arises only when I try to extrapolate all this to other cases, to the – so called – real world. We already have examples of “things”, systems that we are unsure whether to ascribe minds to or not – for example ant nests [1], computers, bacteria colonies [2], slime mold [3], robots, societies and organizations of people, etc. Now, sometimes it is hard to define what an “agent” is, sometimes it’s very simple. However, in all these cases we have some events and processes. Some of them we could call actions, but very often these “actions” are chemical processes triggered by external stimuli, such as a neuron’s spike activity profile.
    If I try to think of a meaningful action in terms of success conditions, then it doesn’t say much to me. I’m just trying to define one hard concept with another, even more elusive one.

    As I said, in the world of cellular automata it works very well.

    Storing / interpreting of success conditions.

    Where are the success conditions stored? How are they represented? How does an agent interpret them or their representations?

    It’s not clear to me why you believe that they must be stored, or represented, or interpreted. A success condition is a way the world must be for an action to achieve its goal, or to be of utility, or something like that; it’s a (putative) fact about the world. So if that fact obtains, the action will work out, if it doesn’t, it won’t; but that fact needn’t be ‘stored’ anywhere, it’s independent of any specific agent’s knowledge.

    I guess I assumed that if we ground a symbol’s meaning in the meaning of an action, and the meaning of an action is based on a success condition, then the symbol would somehow want to “know” its success conditions.

    Thoughts and beliefs, desires.

    It’s weird to me that every thought should have an associated belief. It’s like saying that every thought has an associated emotion / feeling. I just don’t think it is that way. Our disagreement is more likely an artefact of our introspections – we each introspect something different and call it a “thought”.

    Actions vs reactions – intentionality of an action presupposes intentionality of an agent.

    within a cellular automaton world, with meaning dictated by action, a pattern means something to another pattern if it causes that second pattern to do something—say, construct a third one, which then is what the first pattern means to the second. However, a replicator pattern effectively carries its own code within itself, so it means something to itself—it causes itself to create a copy of itself. If, now, this is linked to, e.g., some perception, then it may create a different pattern out of itself, and thus, mean something, be about something beyond itself. Thus, used as a symbol within some agent’s mental architecture, it supplies its own meaning without any need for interpretation.

    ‘Actions’ of the sort you consider really don’t have success conditions, or pertain to anything. I would call them rather reactions: because of some causal influence, the thing being influenced reacts in a certain way.

    water warms up by being acted upon, that is, it’s wholly passive in the matter; likewise, the ground is being passively filled up by water, and so on. Actions only occur when the part that acts takes on an active role: you wouldn’t say that ‘filling up with water’ is something the depressions in the ground do, rather, it’s something that merely happens to them. Likewise, you wouldn’t say that me reaching for an apple is something that just happens to me; rather, it’s something I do.

    You’re right, though, that meanings construed in that way are still rather broad. What is an actor, what a mere recipient of action?

    Indeed, it is not easy to say generally what an agent is in the “real world”, but there are no such problems with cellular automata. So, yes, your example works quite well for CAs. The problems I already mentioned at the beginning of this comment arise when trying to use it outside of the CA world, to generalize it somehow.

    If an action differs from a reaction [of a “dead” object] by it being performed by an agent, then the meaning of an action hangs on the intentionality of the agent, whose meaningfulness is grounded in actions, … So we’re looping.

    1. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0141012
    2. https://en.wikipedia.org/wiki/Microbial_intelligence
    3. http://www.nature.com/news/how-brainless-slime-molds-redefine-intelligence-1.11811

  54. Jochen says:

    Ihtio, perhaps the following helps by way of an illustration:

    Consider a robot, tasked to find its way through an obstacle course—evading various obstacles, following a certain path, perhaps manipulating certain objects. The robot, if it is to complete its task, must have some sort of image of the obstacle course—its world. This image is given to it by means of a set of well-defined instructions: move ahead for 30cm, turn left 65°, move ahead 50cm, extend manipulator, perform grasping motion, and so on.

    Now, whether there’s any meaning associated with this robot, I can’t say. However, what I can say (if my ideas are right, that is) is that there is no meaningfulness the way we (think we) have it—whatever meaning may be there is not represented to the robot. Unlike our mental states, which are meaningful to us, that is, ultimately to those same mental states, there may be a meaning of the instructions to the robot, but there’s no buck-stops-here center where the regress of meaning-to bottoms out. Hence, there is no intentionality in such a system.

    But if we outfit the robot with a von Neumann mind, this changes: the instructions that, as argued above, implicitly contain a picture of the world, are just what a first-generation pattern represents to itself. Hence, here we can at least hope to duplicate the phenomenon of a mental state meaning something to itself, and moreover, that meaning is something that contains a picture of the world. So what exactly might be said to contain proto-meanings is really somewhat immaterial; what matters is that once we have the von Neumann construction, we get the sort of meaningful meanings that our own minds seem to possess.

    Now, of course, the picture of the world that is contained in a simple set of instructions that the robot needs to navigate its obstacle course is a very rudimentary one: from knowing those instructions, you could perhaps infer where certain obstacles are, or where ‘something graspable’ is, and so on, but the picture would be, at best, a dim shadow of the complexity of things that we actually believe are there. But that’s to be expected: the more complex the task becomes, the richer the set of instructions is, and the more detailed the implicit image of the world will be; so an agent with a complex set of tasks—like humans that aim to navigate a complicated world effectively—will always perceive an agent with a narrower focus as having a world-model that lacks detail.
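    (As a toy demonstration that movement instructions really do implicitly fix such facts—the instruction format is hypothetical, nothing from the paper—one can replay them by dead reckoning and recover where the path must be free and where ‘something graspable’ sits:)

```python
import math

# Hypothetical instruction tape of the kind described above: pure
# movement commands, nothing explicitly 'about' the world.
instructions = [("move", 30), ("turn", 65), ("move", 50), ("grasp", None)]

# Replaying the instructions recovers the implicit picture of the world.
x, y, heading = 0.0, 0.0, 0.0  # start at origin, facing +x; heading in degrees
waypoints = [(x, y)]           # positions the course must leave free
graspable_at = None
for op, arg in instructions:
    if op == "move":
        x += arg * math.cos(math.radians(heading))
        y += arg * math.sin(math.radians(heading))
        waypoints.append((round(x, 1), round(y, 1)))
    elif op == "turn":
        heading += arg
    elif op == "grasp":
        # something graspable must be located at the current position
        graspable_at = (round(x, 1), round(y, 1))
```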

    Can this suffice to explain the full detail of the world in our apperception? Well, I guess the most honest answer would be that I don’t know: there could be some obstacle such that the picture imbued by instructions can’t progress beyond a certain level of detail. But there certainly does not seem an obvious such obstacle, so the aim should be to further develop this idea to see how far it can go—and if one finds that it can go no further at some point, then it seems that one has learned something non-trivial about the nature of intentionality, which would itself be a favorable outcome.

    Note also that there’s no need for the picture to capture everything within the world—indeed, upon careful examination, it often turns out that we ourselves make do with a much poorer picture than we might believe; it’s just that we don’t see what we don’t see, so to speak, like with a blind spot. So some residual underdetermination might be tolerable, if it turns out that we ourselves cannot further refine our picture of the world to resolve it.

    This opens up another issue, however: it seems that there could be real behavioral zombies—that is, systems like the robot above, lacking the von Neumann process (and consequently, any real intentionality), that are nevertheless successful in the world. So, why do we apparently possess it, if we could do without it?

    I have hinted at a possible solution for this in the paper: it might be that the von Neumann process is utilized in the mind for the same reason that it is in nature (by evolving organisms)—because it provides good general-purpose adaptability. For this, one might imagine that the instruction set evolves—this is, after all, what the von Neumann process was invented to model, with some form of selection dynamics eventually converging to a good behavioral solution, given the sensory (and other, such as memory etc.) data. If this is right, then the account might also pose a way toward attacking the notorious frame problem of artificial intelligence: robots do well in carefully constrained environments, but once these constraints are lifted, they are simply overwhelmed by the options. So, by proposing such an (internal) evolutionary component, it might just be the case that an appropriate—or not overly deleterious—response can be found for basically arbitrary environments; with the added benefit that arbitrary environments may find their representation!

    OK, now on to your more detailed questions:

    I guess I assumed that if we ground symbol’s meaning in the meaning of an action and the meaning of an action is based on a success condition, then the symbol would somehow want to “know” its success conditions.

    I think I may be guilty of some muddying-of-the-waters here: the pure success semantics account is not a representational one, that is, it does not feature symbols as such; rather, it grounds the content of belief in facts of the world that need to obtain for certain behavior to be useful (say, in an evolutionary sense). Representation and symbols only come in on my account, where what a symbol means just is straightforwardly what it makes an agent (which does not mean an intentional agent) do.

    It’s weird to me that every thought should have an associated belief.

    Well, not necessarily every thought, or every mental process—some generalized anxiety, for instance, does not have an intentional object in any straightforward sense, and hence, does not correspond to either belief or desire. But in general, it seems to be at least a common presupposition that if you think about something, then you have certain beliefs about, or desires toward that something. But as I said, it’s not something I’m wedded to. Out of interest, do you have any particular case in mind that does not seem to you to fit that mold?

    If an action differs from a reaction [of a “dead” object] by it being performed by an agent, then the meaning of an action hangs on the intentionality of the agent, whose meaningfulness is grounded in actions, … So we’re looping.

    An agent does not necessarily mean an intentional agent. The (original) robot I described above is certainly an agent, but is not intentional; so I don’t really see any circularity (and in any case, for the examples that interest us, this does not seem to matter—again, within the CA world, and within brains by extension (modulo the usual caveat), roles of actor and action are clearly defined, and that’s all we need to pick out an intentional agent).

  55. ihtio says:

    Jochen,

    OK. We may imagine two robots:
    The Zombie Robot performs its actions on the world only through the usual computations, mechanisms, etc. It has no intentionality, no understanding; it doesn’t have any representation of meaning.
    The Intentional Robot performs its actions on the world by having active symbols that interpret stimuli, understand themselves (that is, they are able to interpret themselves), and perform actions. This robot is endowed with a von Neumann mind. The robot understands the world.

    You said yourself that the actions of both robots will be identical. Therefore I see that a) meaning comes very close to being attacked as merely an epiphenomenon (which I would not do), and b) if it doesn’t make a difference to the agent whether he possesses intentionality or not, then it would be hard to evolve such a creature.

    What is interesting is that we have the exact same actions, and through them we cannot know if an agent is intentional or not. We need to know the implementation of the robot’s “brain” to know that. And the only way for anything (any microagent / atom) to be meaningful is for it to look directly at itself in search of meaning, which consists of instructions for actions.

    It’s weird to me that every thought should have an associated belief.

    Well, not necessarily every thought, or every mental process—some generalized anxiety, for instance, does not have an intentional object in any straightforward sense, and hence, does not correspond to either belief or desire. But in general, it seems to be at least a common presupposition that if you think about something, then you have certain beliefs about, or desires toward that something. But as I said, it’s not something I’m wedded to. Out of interest, do you have any particular case in mind that does not seem to you to fit that mold?

    Well, much of my daily thought doesn’t carry any noticeable traces of beliefs. Yesterday I was reading a book – Metro 2033 – and I enjoyed it very much; I imagined the dark, smelly, dirty and scary tunnels of the metro. However, not once did I feel that I was believing / disbelieving anything I was reading / imagining. For me, there is a quite clear distinction between imaginings, thoughts and beliefs.

    If an action differs from a reaction [of a “dead” object] by it being performed by an agent, then the meaning of an action hangs on the intentionality of the agent, whose meaningfulness is grounded in actions, … So we’re looping.

    An agent does not necessarily mean an intentional agent. The (original) robot I described above is certainly an agent, but is not intentional; so I don’t really see any circularity (and in any case, for the examples that interest us, this does not seem to matter—again, within the CA world, and within brains by extension (modulo the usual caveat), roles of actor and action are clearly defined, and that’s all we need to pick out an intentional agent).

    Yes, if we know which agents are intentional then there is no problem. The problem arises when we try to identify minds basing our “probes” on a theory: is a water cycle a mind? Is a termite nest / colony a mind? Is a volcano a mind? A solar system?
    We know our brains are intentional organs, therefore we infer that the building blocks of our brains are akin to active symbols. But if we don’t know whether something is a mind, then we basically have no way of checking whether it is. This is a general problem: how do we check if something is / has a mind? It is probably analogous to checking whether something is conscious: you just have to assume it.

  56. Jochen says:

    Ihtio:

    Therefore I see that a) meaning comes very close to being attacked as merely an epiphenomenon (which I would not do),

    Something is only an epiphenomenon if it doesn’t have any consequences in the physical world; but the von Neumann process is instantiated in the physical world—there is a physical difference between the intentional and the non-intentional robot.

    b) if it doesn’t make a difference to the agent whether he possesses intentionality or not, then it would be hard to evolve such a creature.

    But it does make a difference (in fact, all the difference!) to the agent—the mental states of the intentional agent will be meaningful to itself, while those of the non-intentional agent carry no meaning to it.

    It’s also fallacious to expect an evolutionary reason for every feature of biological creatures—some things are simply not ‘costly’ enough to be gotten rid of, and spread through a population via genetic drift. But I’ve also sketched why I believe there is a benefit to having the von Neumann process, since it may allow greater adaptability—so while we can produce a non-intentional behavioral isomorph for every given situation, the capacity of adapting to every situation may at least be aided by the von Neumann process. So while there may be ‘local’ behavioral isomorphs in any situation, there is no ‘global’ isomorph equaling the intentional organism across all possible situations.

    Yesterday I was reading a book – Metro 2033 – and I enjoyed it very much,

    But that’s straightforwardly a belief—‘I like this book’. The same goes for your imaginings—you believe tunnels of the sort described would be dark, would smell bad, you believe you would be scared in them, and so on. That doesn’t imply you believe they exist, but you do believe that if they existed, they would be a certain way.

    The problem arises when we try to identify minds basing our “probes” on a theory: is a water cycle a mind? Is a termite nest / colony a mind? Is a volcano a mind? A solar system?

    But that’s exactly what my theory answers—only if they instantiate the von Neumann process. (Which, in the cases you mention, probably means no.) That’s an advantage this approach has with regard to computational ones: since we have no way of knowing what computation is instantiated by, say, a termite’s nest, even if there is a computation that leads to intentionality, we would have no way of knowing. Indeed, due to Rice’s theorem (which roughly states that we can’t in general decide what a computation produces, even if we know the computation), we couldn’t even tell if we knew what was being computed—whereas the question whether the von Neumann process is instantiated is straightforwardly answerable.

  57. Jochen says:

    And in a sense, of course, we already know that our behavioral performance can be equaled, and even surpassed, in certain fixed contexts, by agents that don’t possess any intentionality: take, for instance, chess—while to the human chess player, their strategies will be about achieving a certain goal, their moves will have a certain intent, and so on, the computer is unfazed by any such elements, simply calculating moves, checking with a database, etc. So that such ‘behavioral zombies’ are possible is ultimately merely data the theory has to accommodate—which it appears to do.

  58. Charles Wolverton says:

    Jochen –

    In a previous comment, you said that you now think the machine shouldn’t include an analyzer, so in what follows I assume there isn’t one, and that all the descriptions of the functional components U, C, and S are already on the tape.

    You said that the tape portion of each generation of machine hosts only descriptions. So, what is written onto the tape when the current machine receives input V(x)? It should be a description, but without an analyzer how would φ(V(x)) be generated?

    Since V(x) is a fixed pattern, can it be its own description, since unlike the functional components it doesn’t change from generation to generation and could be copied to the next-generation machine rather than reconstructed?

    Also, as I understand it, V(x) is essentially a snapshot of neural activity consequent to some stream of sensory input. In assuming that the sensory input is light reflected from a flower, you are assuming that the machine has “chosen” to take a snapshot of the content of that stream at a moment when the content happens to be consequent to the presence of a flower in the visual sensor’s FOV. Why does the machine do that? I.e., what about a particular pattern in the stream causes the machine to treat it as worthy of being “saved” (which seems to be what the replication process effectively does)?

  59. ihtio says:

    Jochen,

    b) if it doesn’t make a difference to the agent whether he possesses intentionality or not, then it would be hard to evolve such a creature.

    But it does make a difference (in fact, all the difference!) to the agent—the mental states of the intentional agent will be meaningful to itself, while those of the non-intentional agent carry no meaning to it.

    Yes, I meant it doesn’t make a difference to the agent’s behaviour, therefore such a feature could not be selected for.

    I do believe there is a benefit to having the von Neumann process, since it may allow a greater adaptability—so while we can produce a non-intentional behavioral isomorph for every given situation, the capacity of adapting to every situation may at least be aided by the von Neumann process. So while there may be ‘local’ behavioral isomorphs in any situation, there is no ‘global’ isomorph equaling the intentional organism across all possible situations.

    Yes, of course organisms are quite adaptable, through changes in gene expression or through changes in brain or hormonal activity, etc. I don’t know how an explanation of minds / brains by means of intentional automata / von Neumann minds would make for a stronger case for adaptability in the problem-solving space than what, for example, neuroscience, cognitive science, and evolutionary psychology already provide. I guess a virtual world with an implementation of von Neumann creatures could show us how it could work in practice.

    Yesterday I was reading a book – Metro 2033 – and I enjoyed it very much

    But that’s straightforwardly a belief—‘I like this book’. The same goes for your imaginings—you believe tunnels of the sort described would be dark, would smell bad, you believe you would be scared in them, and so on. That doesn’t imply you believe they exist, but you do believe that if they existed, they would be a certain way.

    And this is why the introspection movement in psychology was a failed endeavour – you think I have a belief B when I’m thinking about T, but I find that I am only thinking about T without at the same time believing B. And there is no way anyone could ever settle this difference.

    The problem arises when we try to identify minds basing our “probes” on a theory: is a water cycle a mind? Is a termite nest / colony a mind? Is a volcano a mind? A solar system?

    But that’s exactly what my theory answers—only if they instantiate the von Neumann process. (Which, in the cases you mention, probably means no.) That’s an advantage this approach has with regard to computational ones: since we have no way of knowing what computation is instantiated by, say, a termite’s nest, even if there is a computation that leads to intentionality, we would have no way of knowing. Indeed, due to Rice’s theorem (which roughly states that we can’t in general decide what a computation produces, even if we know the computation), we couldn’t even tell if we knew what was being computed—whereas the question whether the von Neumann process is instantiated is straightforwardly answerable.

    Actually, you’re right. Could you tell what things, creatures, etc. you suspect of being intentional?

  60. Jochen says:

    Charles, the new pattern would be introduced as part of the ‘tape’, such that the reproduction process would be (writing the tape part in brackets for clarity): U + C + S + [Φ(U + C + S) + V(x)] → U + C + S + U(V(x)) + [Φ(U + C + S) + V(x)], with U(V(x)) simply being what the universal constructor constructs upon being handed V(x). This would then be analogous to a genetic mutation—a part of the DNA is altered, leading to a different phenotype in the next generation.
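    (For concreteness, a minimal toy encoding of that reproduction step—my own crude stand-ins for U, C, S, Φ and V(x), not code from the paper:)

```python
# Minimal toy encoding of the reproduction step above. phi() and
# construct() are crude stand-ins for the description map and the
# universal constructor.

def phi(body):
    """Description of a body; here simply a copy of it."""
    return list(body)

def construct(description):
    """The universal constructor: builds a body from a description."""
    return list(description)

def reproduce(machine):
    body, (description, inputs) = machine
    # daughter body: what the description codes for, plus U(V(x)) for
    # every input pattern handed to the constructor -- the 'mutation'
    daughter_body = construct(description) + [f"U({v})" for v in inputs]
    daughter_tape = (list(description), list(inputs))
    return (daughter_body, daughter_tape)

# parent: U + C + S + [Φ(U + C + S) + V(x)]
parent = (["U", "C", "S"], (phi(["U", "C", "S"]), ["V(x)"]))
child = reproduce(parent)
print(child[0])  # ['U', 'C', 'S', 'U(V(x))']
```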

    As for how the machine chooses what particular stimulus to incorporate: it doesn’t; ultimately, it simply takes whatever visual (or other) input it gets. The daughter pattern then incorporates the change due to that input, and codes for actions appropriate to the situation encoded in that input. The connection between the daughter pattern and the actions is learned: agents probe their world, sticking with those actions that work, given some particular input.

    ——————-

    Ihtio:

    Yes, I meant it doesn’t make a difference to the agent’s behaviour, therefore such a feature could not be selected for.

    But in the case I sketched, there would be a behavioral difference—not within one single setting, but across all possible settings, in that the von Neumann mechanism would essentially act like a genetic algorithm, finding appropriate solutions to novel environmental situations, where a zombie robot would be overwhelmed by the issues surrounding the frame problem.

    Yes, of course organisms are quite adaptable, through changes in genes expression or through changes in brains or hormonal activity, etc.

    That’s not what I meant. In the scenario I have sketched, the sensory input introduces a kind of ‘fitness landscape’ upon which many different von Neumann processes compete; eventually, the best-adapted one is chosen in order to govern actions to be actually carried out in the world, which, if the sensory input was accurate and the optimization worked, will provide a solution to the problems that particular environment poses to the agent. The adaptation is a purely internal process, and one that wouldn’t occur with a non-intentional agent; such an agent would face the familiar problems stemming from the unpredictable complexity of an arbitrary environment, and thus, even if it equaled the performance of the intentional agent within some domain of fixed variables, within a frame, it would not equal that performance across different frames—a chess-playing program is useless for poker, but something that can adapt its strategy to arbitrary environments would eventually find one (probably not the optimal one) for either situation.

    That is, of course, if the idea of ‘evolving’ a response via the von Neumann process can be made precise, which I don’t know how to do at the moment (in particular, it’s not obvious what criterion to single out for determining the ‘fitness’ of a possible course of action within a given environment).
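    (Just to make the shape of the idea concrete, here is a deliberately crude sketch: candidate ‘action programs’ are bit-strings, and the sensory input is stood in for by a fixed target string that defines the fitness landscape—which is exactly the part I just said I don’t yet know how to single out in general. All of the selection happens internally, before anything is acted out in the world.)

```python
import random

random.seed(0)  # reproducibility of the toy run

# The 'sensory input' stand-in: a target string defining the fitness
# landscape. The fitness criterion itself is the open problem noted
# above -- this one is purely for illustration.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def fitness(program):
    # number of positions where the candidate program 'fits' the input
    return sum(p == t for p, t in zip(program, TARGET))

def mutate(program, rate=0.2):
    # flip each bit with probability `rate`
    return [b ^ (random.random() < rate) for b in program]

# internal selection dynamics: many candidate programs compete
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                       # keep the fittest
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]    # mutated offspring

best = max(population, key=fitness)  # the program chosen to govern action
```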

    And this is why introspection movement in psychology was a failed endeavour – you think I have a belief B then I’m thinking about T, but I find that I am only thinking about T without at the same time believing that B.

    The thing is that a thought ‘I like that book’ is a belief that you like that book, your thought that the described tunnels are scary is a belief that you would find such tunnels scary, and so on. ‘Belief’ does not just mean believing that something exists, but also having beliefs about how such-and-such would be if it were to exist, and so on. How do you think you separate the two?

    Could you tell what things, creatures, etc. you suspect of being intentional?

    Not without engaging in irresponsible speculation, no.
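
    The competition described above – many candidate processes evolving against a fitness landscape set up by sensory input, with the best-adapted one chosen to govern action – can be sketched as a toy genetic algorithm. Everything below is illustrative (the stand-in fitness function especially so; as noted above, what the real fitness criterion should be is an open problem):

```python
import random

random.seed(0)  # deterministic toy run

def evolve_response(fitness, genome_len=8, pop_size=30, generations=40):
    """Evolve a candidate 'action plan' against a fitness landscape.

    Stands in for the competition of von Neumann processes: keep the
    better-adapted half each generation, mutate it, repeat, and return
    the best-adapted plan to govern actual behaviour.
    """
    pop = [[random.random() for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]                      # selection
        children = [[g + random.gauss(0, 0.1) for g in p]    # mutation
                    for p in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

# Stand-in 'environment': fitness peaks when every action value is 0.5.
toy_fitness = lambda plan: -sum((g - 0.5) ** 2 for g in plan)
best = evolve_response(toy_fitness)
```

    Note that nothing here says how the landscape arises from sensory input, or how a plan maps to behaviour – those are exactly the parts left open in the discussion.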

  61. ihtio says:

    Jochen,

    As for how the machine chooses what particular stimulus to incorporate: it doesn’t; ultimately, it simply takes whatever visual (or other) input it gets. The daughter pattern then incorporates the change due to that input, and codes for actions appropriate to the situation encoded in that input. The connection between the daughter pattern and the actions is learned: agents probe their world, sticking with those actions that work, given some particular input.

    These are two very hard problems: how to find actions appropriate to the given situation, and what the learning algorithm is for creating daughter patterns with proper actions. These problems are very hard, and all AI algorithms face them. There are solutions and algorithms that work well nowadays [1], but they still require many more hours of play to infer the rules, etc.

    Yes, I meant it doesn’t make a difference to the agent’s behaviour, therefore such a feature could not be selected for.

    But in the case I sketched, there would be a behavioral difference—not within one single setting, but across all possible settings, in that the von Neumann mechanism would essentially act like a genetic algorithm, finding appropriate solutions to novel environmental situations, where a zombie robot would be overwhelmed by the issues surrounding the frame problem.

    A zombie robot could very well simulate the environment and its own actions in an evolutionary algorithm and come up with an optimal solution. Without a clear algorithm describing how a genetic-like learning mechanism would work in a von Neumann agent, this is only speculation. There are already evolutionary algorithms for cellular automata, and they have the same successes and mishaps as many other implementations – they are slow.

    You mention that it is not clear what criterion to choose for the ‘fitness’ of an action. Yes, this is one of the problems. But when you have an assembly of active symbols, and they all somehow participate in an interaction with the environment, and they succeed, then there is the problem of distributing the “points” across all these mini-agents. Some of them could have done a better job, some are fantastic – but how do we know? This is a known problem in neural networks with many layers – the error somehow needs to be backpropagated to the deeper units [2]. It seems that an assembly of active symbols would face similar issues.

    The thing is that a thought ‘I like that book’ is a belief that you like that book

    Yes.

    your thought that the described tunnels are scary is a belief that you would find such tunnels scary

    No. But of course you can define beliefs in such a way that they are associated with virtually every feeling, image, thought, etc.
    I agree that a belief is a “such and such is the case” or “such and such would be the case”, but I can hardly imagine how these two aspects would be associated with practically every mental phenomenon.

    1. http://www.wired.com/2015/02/google-ai-plays-atari-like-pros/
    2. https://en.wikipedia.org/wiki/Backpropagation
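
    The credit-assignment problem raised above – backpropagating the output error to deeper units so that each gets its share of the blame [2] – can be shown in a minimal two-layer network trained on XOR. This is an illustrative stdlib-only sketch of backpropagation itself, not of anything involving active symbols:

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

H = 3                                                   # hidden units
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]  # 2 inputs + bias
W2 = [random.uniform(-1, 1) for _ in range(H + 1)]                  # H hidden + bias
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def predict(x):
    xb = x + [1.0]                                      # input with bias
    h = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in W1]
    hb = h + [1.0]                                      # hidden with bias
    return sigmoid(sum(w * v for w, v in zip(W2, hb))), h, xb

for _ in range(20000):
    for x, t in data:
        y, h, xb = predict(x)
        hb = h + [1.0]
        d_out = (y - t) * y * (1 - y)                   # output unit's error
        # Credit assignment: a hidden unit's share of the blame is its
        # weight into the output, scaled by the output error.
        d_hid = [d_out * W2[j] * h[j] * (1 - h[j]) for j in range(H)]
        for j in range(H + 1):
            W2[j] -= 0.5 * d_out * hb[j]
        for j in range(H):
            for i in range(3):
                W1[j][i] -= 0.5 * d_hid[j] * xb[i]

final_err = sum((predict(x)[0] - t) ** 2 for x, t in data)
```

    Whether anything analogous could distribute “points” across an assembly of active symbols is, as said, an open question; the sketch only shows the mechanism in the neural-network case.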

  62. Jochen says:

    Ihtio:

    Without a clear algorithm describing how a genetic-like learning mechanism would work in a von Neumann agent, this is only speculation.

    I completely agree; all of this is highly speculative at this point. In my defense, nowhere did I claim to have solved all the problems of how to create a human-like artificial mind; indeed, I think this would be impossible at the moment. The problem I set out to attack is how to make the apparent meaningfulness of our thoughts compatible with a naturalistic account of the mind; that is, how to create a concrete system that is about something besides itself, without requiring external interpretation.

    So you should view my proposal (as I also say in the paper) as a point of entry towards giving a naturalistic account of the mind, not as such an account in itself. It’s a toy model, in the end, and will probably have to be radically altered before it can boast any sort of realistic appeal (first and foremost, one would have to find an implementation of some logical equivalent in neural networks, for instance). Many people still hold that there is no way to incorporate meaningfulness within a naturalistic worldview, either proposing that it is some additional dimension of the world, or proposing to eliminate it altogether; it’s to address those worries that I proposed my model.

    Out of interest, regarding your requirement that an organism that possesses intentionality be different in behavior from one that doesn’t: how do you think that should work? It seems to me that one can always come up with a non-intentional behavioral isomorph, simply by using a lookup-table argument. Additionally, as I pointed out, we are already equaled in behavior by non-intentional automata, such as chess computers and, in the not-too-distant future, self-driving cars. What kind of behavior do you think only an intentional agent can perform?

    Regarding beliefs and desires: well, as I said, it’s not something I cooked up myself; but I must confess that I find it hard to come up with counterexamples. You claim to think that tunnels of a certain sort would be scary, but not to believe that they would be scary; to me, that’s simply contradictory. If you think x would be the case, then that is a belief that x would be the case.

    There are arguably certain mental states with a ‘null’-direction of fit, such as something like a generalized anxiety; or perhaps, being sorry about something, or being happy about something. But in the former case, I’m not sure it’s an intentional state at all, while in the latter, there are certainly beliefs/desires associated—you believe that you’ve done something wrong, and you desire not to have done that; hence, you’re feeling sorry. Do you think there’s more to that?
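
    The lookup-table argument can be made vivid with a toy sketch (all names here are invented for illustration): over any finite input domain, tabulating an agent’s outputs yields a behaviorally identical “zombie”:

```python
# An 'agent' that computes its response from the stimulus ...
def deliberating_agent(stimulus):
    # stands in for arbitrarily sophisticated internal processing
    return "flee" if stimulus >= 5 else "approach"

# ... and its behavioral isomorph: record the agent's outputs over the
# (finite) input domain, then merely replay them.
domain = range(10)
zombie_table = {s: deliberating_agent(s) for s in domain}

def zombie_agent(stimulus):
    return zombie_table[stimulus]      # no processing, only lookup

# Behaviour alone cannot tell the two apart:
assert all(deliberating_agent(s) == zombie_agent(s) for s in domain)
```

    The point of the argument is precisely that no behavioral test distinguishes the two, so intentionality cannot be defined by a behavioral difference alone.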

  63. ihtio says:

    Jochen,

    Out of interest, regarding your requirement that an organism that possesses intentionality be different in behavior from one that doesn’t: how do you think that should work? It seems to me that one can always come up with a non-intentional behavioral isomorph, simply by using a lookup-table argument.

    Well, of course we can build such “zombie” systems. My comment was actually in the context of the natural evolution of systems: creatures that possess intentionality would have to behave differently than “zombie” creatures, for intentionality to be selected for. Of course it is possible that non-intentional creatures just couldn’t evolve at a certain point, or that intentionality is a core aspect of life (or of life of a given complexity).

    You claim to think that tunnels of a certain sort would be scary, but not to believe that they would be scary; to me, that’s simply contradictory. If you think x would be the case, then that is a belief that x would be the case.

    Yes, I do claim that dark postapocalyptic tunnels would be scary, and I claim that this is a belief. However, when I’m reading the book and imagining these things, I rarely find that many beliefs in my mind. I would simply call these imaginings “imaginings”, “visuals”, “thoughts”, “representations”, “stories”, etc. Whether I have associated beliefs regarding these tunnels or not at the moment of reading is of course unknown to me, but I recognize when a “simple” thought about the tunnels enters my mind vs. when a belief about possible tunnels happens in my mind.

    All in all, I find your proposal very refreshing. I enjoyed reading the paper and participating in the discussion. I hope you will work on other elements of the theory and improve the ideas further. I would very much like to read more about von Neumann minds.

  64. Jochen says:

    Ithio:

    My comment was actually in the context of the natural evolution of systems: creatures that possess intentionality would have to behave differently than “zombie” creatures, for intentionality to be selected for.

    But again, not every trait in an animal is there because it provides a selective advantage. Blue eyes in humans, for instance, are due to a single genetic mutation that arose between 6,000 and 10,000 years ago; all blue-eyed people are descended from the individual possessing this mutation, and its spread is solely due to genetic drift—it doesn’t have a survival advantage.

    Additionally, it may not be intentionality itself that is selected for, but something else correlated with it—say, the large amount of backpropagation between cortex and the thalamic nuclei allows a more accurate response to environmental stimuli by making it possible to error-correct information received from the environment. Then, the accurate response is being selected for, with intentionality just being a fringe benefit, if that is indeed its origin.

    I would simply call these imaginings “imaginings”, “visuals”, “thoughts”, “representations”, “stories”, etc.

    I guess I just don’t get the distinction you’re making. Whatever I imagine, or visualize, I believe to be a certain way—that’s what imagining is. To the extent that my thoughts regard the way I take the world to be, or the way the world would be in a certain case, they are beliefs; to the extent that they concern the way I would want the world to be, they are desires. If a story tells you that Ishmael signed on to a whaler, you believe that Ishmael did so, without any belief that Ishmael really exists.

    In other words, those ‘simple’ thoughts you mention, they are beliefs (resp. desires)—there’s no extra ‘believing’ you need to do. (Of course, there’s a difference between having a thought, and believing you have that thought—but that further belief is not the belief expressed in your original thought, since that belief—or desire—may have had, say, an apple as its content, while that second-stage belief’s content is the thought itself.) You somehow seem to think that there’s a further step necessary, sort of becoming aware of the thought as expressing a belief; but, as they say, it’s the thought that counts. And not even that: you probably believe that Iowa is not covered three feet in Nutella, without that belief ever being present within your mind as such.

    All in all, I find your proposal very refreshing. I enjoyed reading the paper and participating in the discussion. I hope you will work on other elements of the theory and improve the ideas further. I would very much like to read more about von Neumann minds.

    Thanks. I hope that I will eventually be able to donate some more time to the proposal, but at the moment, I have to concentrate on other things. If anything more comes out of this, I’ll be sure to make a note here somewhere.

  65. ihtio says:

    I would simply call these imaginings “imaginings”, “visuals”, “thoughts”, “representations”, “stories”, etc.

    I guess I just don’t get the distinction you’re making. Whatever I imagine, or visualize, I believe to be a certain way—that’s what imagining is. To the extent that my thoughts regard the way I take the world to be, or the way the world would be in a certain case, they are beliefs; to the extent that they concern the way I would want the world to be, they are desires.
    (…)
    In other words, those ‘simple’ thoughts you mention, they are beliefs (resp. desires)—there’s no extra ‘believing’ you need to do.

    OK. It just means that we use the words “belief” and “believe” to mean different things. I don’t know how common your or my positions are. However, I’m not particularly willing to fight over words.

    I hope that I will eventually be able to donate some more time to the proposal, but at the moment, I have to concentrate on other things. If anything more comes out of this, I’ll be sure to make a note here somewhere.

    That would be awesome. I will be informed about all the comments on this page, so if we get back to work ;), drop a note.
    It could probably be a good idea to write an implementation of a simple active symbol in the architecture of neural networks. I’m sure it would rock the boat of many a scientist and nerd to play with such an autopoietic and autointerpreting neural “thing”.

    Best of luck!

  66. Callan S. says:

    Sounds pretty good – just needs to give up symbols and simply work in terms of mechanisms that trigger other mechanisms and it’s made the final jump. I mean, pretty much already there by drawing a parallel with organisms self replicating.

  67. Jochen says:

    Ihtio:

    It just means that we use the words “belief” and “believe” to mean different things.

    Not meaning to prod, but out of interest, what is your definition of ‘belief’? To me, it’s any thought that expresses how the world is, or would be, given certain circumstances. After all, whenever I take the world to be a certain way, I have a belief about the world—that seems nearly axiomatic to me, so I find your disagreement interesting. What must be the case for you such that you would claim to have a certain belief?

    It could probably be a good idea to write an implementation of a simple active symbol in the architecture of neural networks.

    Well, I have a lot to learn about neural networks before I could attempt such a thing, but I’ll certainly look into that. It might take a while, though…

    ———————

    Callan:

    Sounds pretty good – just needs to give up symbols and simply work in terms of mechanisms that trigger other mechanisms and it’s made the final jump. I mean, pretty much already there by drawing a parallel with organisms self replicating.

    Not sure if I catch your meaning here. I mean, of course, in a sense it is nothing but mechanisms acting on mechanisms (or themselves), but in such a way, via the analogue to self-reproducing systems, as to acquire symbolic character. Getting rid of symbols would rather defeat the purpose—the intention was to show that symbols, and hence, representation, can be appealed to in explanations of intentionality without immediately inviting the homunculus fallacy.

  68. Callan S. says:

    Jochen,

    Doesn’t it seem it could just as much explain a behaviorally complex organism that doesn’t involve any symbolic character?

  69. Jochen says:

    Callan:

    Doesn’t it seem it could just as much explain a behaviorally complex organism that doesn’t involve any symbolic character?

    I think for explaining behavior, it would simply be superfluous; the aim was, after all, to explain how, in addition to exhibiting behavior, our mental content may be meaningful to us—or at least, to give a possible opening for such an explanation.

  70. Callan S. says:

    To explain how it’s meaningful to us, but without ending up at a homunculus?

    I think because of that it might have further potential in regards to naturalising meaning.

  71. Jochen says:

    Indeed, that’s the general idea (or perhaps hope).

  72. ihtio says:

    Jochen,

    I will not give you definitions, but just point at specific examples.
    – I’m reading a particular scenario from a book and I feel scared – these contents of my mind I would call emotion and imagination and thought.
    – I’m reading a passage from a book and I think to myself “Wow, this is interesting!” – I would just call it a thought, or a proposition.
    – I’m reading a passage from a book and I think to myself “If the world ends, this is what will be left of us!” – I would call it a belief.
    – I’m walking down the street to the bookstore and I think to myself “I’m gonna buy the next book by the same author” – I would call it a desire.
    – I’m walking to the bookstore and think “The book costs 20, I have 27, so I can afford it” – I would call these belief and inference.

    you probably believe that Iowa is not covered three feet in Nutella, without that belief ever being present within your mind as such

    No, I don’t think that I believed that Iowa is not covered in Nutella. I doubt there is any need to ascribe beliefs that were never present within one’s mind to someone. That would lead to the necessity of ascribing an infinite number of various beliefs to people, just because such beliefs can be formulated (as this one with Nutella).

  73. Callan S. says:

    I’m guessing ihtio’s difference between belief and believe is kind of like a noun and a verb. A story might be that the president murdered someone – you have a belief about the story president doing that action in his story world. But you don’t believe it, as in you don’t go to a real life police station and report the president as committing murder.

  74. ihtio says:

    Actually, Callan, no. I’m not confusing the noun “belief” and the verb “to believe”.

    What you call a story is a proposition, a statement, a sentence. It is content. The proposition “the president murdered someone” represents some state of affairs.
    Now, “I believe that the president murdered someone”, “Johnny fears that the president murdered someone”, “Mary doubts that the president murdered someone”, “Garry suspects that the president murdered someone” are propositional attitudes – attitudes one can take toward a proposition – called, respectively, a belief, a fear, a doubt, a suspicion.
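
    The structure just described – one proposition, several distinct attitudes taken toward it – has a direct computational rendering. This is only a sketch; all names in it are invented for illustration:

```python
from dataclasses import dataclass
from enum import Enum

class Attitude(Enum):
    BELIEF = "believes"
    FEAR = "fears"
    DOUBT = "doubts"
    SUSPICION = "suspects"

@dataclass(frozen=True)
class PropositionalAttitude:
    holder: str
    attitude: Attitude
    proposition: str                 # the content the attitude is toward

    def describe(self):
        return f"{self.holder} {self.attitude.value} that {self.proposition}"

p = "the president murdered someone"
stances = [
    PropositionalAttitude("I", Attitude.BELIEF, p),
    PropositionalAttitude("Johnny", Attitude.FEAR, p),
    PropositionalAttitude("Mary", Attitude.DOUBT, p),
    PropositionalAttitude("Garry", Attitude.SUSPICION, p),
]
# One shared proposition, four distinct attitudes toward it:
assert len({s.proposition for s in stances}) == 1
assert len({s.attitude for s in stances}) == 4
```

    The point of the pairing is exactly the one made above: the proposition carries the content, while the attitude type is what varies between believing, fearing, doubting, and suspecting.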

  75. Jochen says:

    So if anybody is still interested in this topic, I’ve had a somewhat revised version of my original model accepted into FQXi’s current essay contest on “Wandering Towards a Goal – How can mindless mathematical laws give rise to aims and intention?”. The essay is posted here, and I’d be very happy to discuss any issues regarding the model in the comments thread below. And if you think the essay is of some value (or if you think it stinks, of course), I’d be grateful for an appropriate rating!

    (Peter, I hope you don’t mind this little self-promotion. If it’s too much, simply delete the post!)

    [Not at all – please do keep us up to date! Peter]
