A digital afterlife is likely to be available one day, according to Michael Graziano, albeit not for some time; his piece re-examines the possibility of uploading consciousness, and your own personality, into a computer. I think he does a good job of briefly sketching the formidable difficulties involved in scanning your brain, and scanning so precisely that your individual selfhood could be captured. In fact, he does it so well that I don’t really understand where his ultimate optimism comes from.

To my way of thinking, ‘scan and build’ isn’t even the most promising way of duplicating your brain. One more plausible way would be some kind of future bio-engineering where your brain just grows and divides, rather in the way that single cells do. A neater way would be some sort of hyper-path through space that split you along the fourth spatial dimension and returned both slices to our normal plane. Neither of these options is exactly a feasible working project, but to me they seem closer to being practical than a total scan. Of course neither of them offers the prospect of an afterlife the way scanning does, so they’re not really relevant for Graziano here. He seems to think we don’t need to go down to an atom-by-atom scan, but I’m not sure why not. Granted, the loss of one atom in the middle of my brain would not destroy my identity, but not scanning to an atomic level generally seems a scarily approximate and slapdash approach to me, given the relevance of certain key molecules in the neural process – something Graziano fully recognises.

If we’re talking about actual personal identity I don’t think it really matters though, because the objection I consider strongest applies even to perfect copies. In thought experiments we can do anything, so let’s just specify that by pure chance there’s another brain nearby that is in every minute detail the same as mine. It still isn’t me, for the banal commonsensical reason that copies are not the original. Leibniz’s Law tells us that if B has exactly the same properties as A, then it is A: but among the properties of a brain are its physical location, so a brain over there is not the same as one in my skull (so in fact I cheated by saying the second brain was the same in every detail but nevertheless ‘nearby’).
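The point can be made concrete in code. Below is a minimal sketch – the `Brain` class and its properties are entirely hypothetical – in which two objects share every intrinsic detail yet differ in location, and remain two distinct objects either way.

```python
class Brain:
    """Hypothetical stand-in: a brain described by its intrinsic state plus a location."""

    def __init__(self, state, location):
        self.state = state        # every 'minute detail' of the brain
        self.location = location  # where it physically sits

    def same_properties(self, other):
        # Leibniz's Law: identity requires *all* properties to match,
        # and physical location is one of them.
        return self.state == other.state and self.location == other.location


mine = Brain(state={"memories": "all of them"}, location="my skull")
copy = Brain(state={"memories": "all of them"}, location="nearby")

print(mine.same_properties(copy))  # False: location differs
print(mine is copy)                # False: two objects, never one
```

Even if the `location` strings were made equal, the last comparison would still be `False` – the copy is a second object, which is exactly the banal commonsensical point.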

Now most philosophers would say that Leibniz’s Law is far too strong a criterion of identity when it comes to persons. There have been hundreds of years of discussion of personal identity, and people generally espouse much looser criteria for a person than they would for a stone – from identity of memories to various kinds of physical, functional, or psychological continuity. After all, people are constantly changing: I am not perfectly identical in physical terms to the person who was sitting here an hour ago, but I am still that person. Graziano evidently holds that personal identity must reside in functional or informational qualities of the kind that could well be transferred into a digital form, and he speaks disparagingly of ‘mystical’ theories that see problems with the transfer of consciousness. I don’t know about that; if anyone is hanging on to residual spiritual thinking, isn’t it the people who think we can be ‘taken out of’ our bodies and live forever? The least mystical stance is surely the one that says I am a physical object, and with some allowance for change and my complex properties, my identity works the same as that of any other physical object. I’m a one-off, particular thing and copies would just be copies.

What if we only want a twin, or a conscious being somewhat like me? That might still be an attractive option after all. OK, it’s not immortality, but I think that, without being rampant egotists, most of us probably feel the world could stand a few more people like ourselves, and we might like to have a twin continuing our good work once we’re gone.

That less demanding goal changes things. If that’s all we’re going for, then yes, we don’t need to reproduce a real brain with atomic fidelity. We’re talking about a digital simulation, and as we know, simulations do not reproduce all the features of the thing being simulated – only those that are relevant for the current purpose. There is obviously some problem about saying what the relevant properties are when it comes to consciousness; but if passing the Turing Test is any kind of standard then delivering good outputs for conversational inputs is a fair guide and that looks like the kind of thing where informational and functional properties are very much to the fore.

The problem, I think, is again with particularity. Conscious experience is a one-off thing while data structures are abstract and generic. If I have a particular experience of a beautiful sunset, and then (thought experiments again) I have an entirely identical one a year later, they are not the same experience, even though the content is exactly the same. Data about a sunset, on the other hand, is the same data whenever I read or display it.
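The same contrast can be sketched in code, with purely illustrative names: copies of the sunset data compare as the very same information, while each ‘experience’ of it is a distinct, numbered occurrence.

```python
import itertools
from dataclasses import dataclass, field

sunset_data = {"sky": "crimson", "sun_altitude_deg": -2.0}

# Data is generic: every read or copy of it compares as the same information.
first_read = dict(sunset_data)
second_read = dict(sunset_data)
print(first_read == second_read)  # True: indistinguishable as data

# Experiences are particular: each one is a new event, however identical its content.
_event_ids = itertools.count(1)

@dataclass
class Experience:
    content: dict
    event_id: int = field(default_factory=lambda: next(_event_ids))

e1 = Experience(dict(sunset_data))
e2 = Experience(dict(sunset_data))
print(e1.content == e2.content)  # True: exactly the same content
print(e1 == e2)                  # False: two distinct occurrences
```

The `event_id` here is just a stand-in for the particularity of an occurrence – the point being that nothing in the content itself distinguishes the two experiences.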

We said that a simulation needs to reproduce the relevant aspects of the thing simulated; but in a brain simulation the processes are only represented symbolically, while one of the crucial aspects we need for real experience is particular reality.

Maybe though, we go one level further; instead of simulating the firing of neurons and the functional operation of the brain, we actually extract the program being run by those neurons and then transfer that. Here there are new difficulties; scanning the physical structure of the brain is one thing; working out its function and content is another thing altogether; we must not confuse information about the brain with the information in the brain. Also, of course, extracting the program assumes that the brain is running a program in the first place and not doing something altogether less scrutable and explicit.

Interestingly, Graziano goes on to touch on some practical issues; in particular he wonders how the resources to maintain all the servers are going to be found when we’re all living in computers. He suspects that as always, the rich might end up privileged.

This seems a strange failure of his technical optimism. Aren’t computers going to go on getting more powerful, and cheaper? Surely the machines of the twenty-second century will laugh at this kind of challenge (perhaps literally). If there is a capacity problem, moreover, we can all be made intermittent; if I get stopped for a thousand years and then resume, I won’t even notice. Chances are that my simulation will be able to run at blistering speed, far faster than real time, so I can probably experience a thousand years of life in a few computed minutes. If we get quantum computers, all of us will be able to have indefinitely long lives with no trouble at all, even if our simulated lives include having digital children or generating millions of digital alternates of ourselves, thereby adding to the population. Graziano, optimism kicking back in, suggests that we can grow in understanding and come to see our fleshly life as a mere larval stage before we enter on our true existence. Maybe, or perhaps we’ll find that human minds, after ten billion years (maybe less), exhaust their potential and ultimately settle into a final state; in which case we can just get the computers to calculate that and then we’ll all be finalised, like solved problems. Won’t that be great?

I think that speculations of this kind eventually expose the contrast between the abstraction of data and the reality of an actual life, and dramatise the fact, perhaps regrettable, perhaps not, that you can’t translate one into the other.

 

88 Comments

  1. VicPanzica says:

    Aspects of evolution like food translate into the enjoyment of a restaurant meal with friends. A true computer informational simulation would have to simulate not just the emotional enjoyment but also the breakdown of food molecules, as well as the day-after brain-chemical hangover from too much wine.

  2. SelfAwarePatterns says:

    When we contemplate whether a particular thing is possible, it helps to consider if there is anything in nature that precludes it. For example, traveling faster than light appears to be impossible unless, maybe, we can harness cosmological amounts of energy for things like Alcubierre drives or wormholes. Nothing in nature has been observed to actually travel faster than light. Yet people often think finding a way to do so is simply a matter of can-do spirit.

    But when something does exist in nature, the possibility exists that we will eventually be able to recreate it. All evidence points to the mind being a physical system that exists in this universe and operates according to the laws of physics, and operates with very modest amounts of energy.

    I do think it’s a valid point that we may not be able to copy a mind with existing computer paradigms. The amount of processing power to emulate a brain’s neural processing in a silicon or similar substrate may never be practical.

    But that doesn’t mean we will never be able to build a substrate that can do it. Even if a physical neural substrate is required, there’s no known fundamental roadblock (at least not yet) that would permanently prevent us from creating one. This may be decades, centuries, or millennia in the future, but saying it is forever impossible strikes me as a statement that requires justification.

    Whether the copied mind is the same person or is really conscious may always be a metaphysical debate.

  3. Hunt says:

    An interesting variant on thinking about identity is to take the perspective of those who know you. It sidesteps some of the more confusing aspects of considering this first person. How satisfied would friends and family be with the reproduced you, and can we really say this is a less valid way to view it? In a way, this gives the impression of being a more “Turing” way to look at things. Would I be satisfied with the identity of a reproduced friend, if she were identical, molecule for molecule? Minus any mystical reasons, I would say yes. Would I be satisfied with a friend reproduced as an anthropomorphic robot? Perhaps, if the robot looked, behaved and responded exactly like my friend. I can imagine a future date when science and technology might render a desperately ill person back in this form, and friends and family might be relieved, and satisfied to greet her. Perhaps it would be the butt of family jokes after a while. Weird? I’m not so sure. What about if a friend were reproduced as a voice issuing from a machine? If the voice/person were happy and grateful for her existence it might be something of a relief; perhaps awaiting a new body. If it were dismayed and panicked, it could be something of a nightmare. These are all different ways to look at the problem. I think identity is really a matter of what works.

  4. Sergio Graziosi says:

    Peter, tu quoque?
    You are almost agreeing with Graziano and are not remarking that what he is dreaming about is impossible because minds include (or rest upon) non-computational abilities?
    I’m surprised, but in a good way! Overall, Graziano’s article didn’t trip my most deep nonsense-detectors, only a handful of superficial ones ;-).

    You also mention the strange assumption that our super-scanner won’t need to go down to the atomic level, and here I think I agree with you: we don’t know if that’s the case, but we do have some reasons to believe we’ll need to get very close to single-atom precision – at least for some things. Why? Because, well, DNA.
    I feel like a broken record, repeating myself over and over, but anyway, we can probably agree that our genes have something to do with our personalities, in complex ways that we can’t even begin to comprehend, but there is actually more to it.
    The difference between cells of type A and type B is, most of the time, “just” a difference in history and gene activation. “History” accounts for what machinery the cell contains; it includes:
    (A) – The shape and functional components of the cell, which have an all-important role in shaping what a neuron does, so they can’t be ignored.
    (B) – The regulatory machinery of gene expression, including modifications of DNA itself. This constrains the possible evolution of (A), so it will need to be taken into account.

    Naturally, we have decent ideas about how some of the stuff above works, but we are spectacularly far from having a complete understanding.
    Overall (broken record again), it seems to me that the amount of stuff that will need to be understood (theory building, “just” biology), measured (the scanner) and then simulated (a supercomputer) defies our ability to even imagine it. The amount is so great that we even fail to grasp how big it is, let alone understand the details. I’ve made this point more fully here: for some stuff we might be able to get by using a high level of abstraction (reducing what needs to be accounted for), but because of the cascading effects of molecular changes at the level of DNA (both sequence and transcription), I’m fairly sure that we will need to go down at least to the level of single molecules. (After all, in one cell you normally have only two copies of the same gene, with plenty of exceptions!)

    One thing that Graziano gets perfectly right is this: “The whole thing is an ethical nightmare” and he didn’t even get into the territory of what needs to happen along the way. How many scans and simulations will need to go wrong before we’ll learn how to “do it right”? Can you shut down and even destroy the broken simulations you built in the process – what if they are in (simulated) pain? Can we even hope to find sensible answers to this kind of question? I’m sort of evoking Scott here, expecting him to arrive saying “Armageddon will follow”, so that I don’t have to ;-).

    Overall, Graziano’s theoretical optimism is justified because it’s both needed and theoretically defensible. If you’re doing neuroscience, you are more or less implicitly accepting the “theoretical possibility” of a fully functional description of human brains. This doesn’t translate into practical optimism, but, as I’ve argued elsewhere (link omitted, for once!) a certain amount of optimistic delusion is required in order to keep moving on.

  5. Peter says:

    not remarking that what he is dreaming about is impossible because minds include (or rest upon) non-computational abilities?

    I thought I sort of had remarked that – perhaps my zeal is slipping in old age.

    Because, well, DNA.

    Yes, and neurotransmitters and well, possibly everything really.

  6. Jochen says:

    So I’ve aired my view on the notion of digital consciousness here often enough that, for once, I’ll spare the lot of you yet another reiteration (pause for relieved sighs), but even ignoring this issue, I think the idea of copying minds faces some sharp challenges.

    As Peter hints, the notion of identity espoused (often implicitly) by the pro-copyists is in many ways a strange one. For instance, if I have an origami bird, and give you detailed instructions towards its recreation (let’s say, to avoid fighting over the hypotheticals, atom-level detailed instructions), and you fold a replica bird from those (again, atom-level indistinguishable if that’s what it takes), then few people would say that your origami bird and mine are one and the same. Rather, one is the bird I folded, and the other the one you folded.

    I think one can usefully apply the notion of token versus type here: the two origami birds are of the same type, but different tokens. Type-identical, but token-distinct.

    Now, when it comes to persons, since there’s usually only one of each around, typically token- and type-identity coincide. This ceases to be the case when we can produce (by whatever method) copies of persons. But it seems to me that this doesn’t mean we suddenly have the same person twice—we have simply different tokens of the same person-type. So the cloned Jochen might be an instance of the same Jochen-type as myself, but nevertheless a different entity—after all, different propositions would be true of him, for instance, that he was grown in a cloning vat, or whatever method might be employed. Also, he’s over there, I’m over here, and hence, I am not the thing that is over there. (A real philosopher might talk about numerical distinctness here, I think.)

    People usually feel forced to accept the possibility of cloning by way of noting that one can change, and yet, be the same, and hence, one can’t just be identical with the set of one’s constituent atoms, as they might be switched out wholesale without leading to any difference regarding identity. But then, why couldn’t that other set of atoms, if it’s properly configured, lay the same claim to being me as I do?

    I think one can usefully import a bit of four-dimensionalism into the view I’ve been sketching: the type ‘Jochen’ need not be something that is constant across time; it might change. So, at any given time, I am an instance of the Jochen-type, which is a time-varying thing; thus, change does not pose any problem for this view. Another instance of the same Jochen-type at the same time would simply be that: another Jochen, but as distinct from me as another origami bird folded according to the same recipe is to the original.

  7. Sergio Graziosi says:

    Peter,

    I thought I sort of had remarked that – perhaps my zeal is slipping in old age.

    Or perhaps I’m mastering the art of wishful reading! 😉
    Rereading the OA I can see you did put some hints, but yes, I was expecting something like “let’s forget it’s nonsense and try running with the idea” or something of equal strength. Something explicit enough that even I couldn’t gloss over it!

    Yes, and neurotransmitters and well, possibly everything really.

    Well, you can make a case that some stuff may be aggregated, e.g. the “strength” of a given synapse: once we know what neurotransmitter is released as well as the curve response of release, re-uptake and of the post-synaptic neuron (globally: the strength of a given synapse) you could hope that you don’t need to account for all molecules and treat a synapse like a unit with a handful of variables. (We do know we can’t aggregate all connections between two neurons, though – “where synapses are” counts for a lot, as well as timing.)
    In the case of a single molecule of DNA, it seems that you really have to look at its state: sequence, methylation, what portions are packed, etc, etc. Many more such cases may loom in the vast sea of unknowns, anyway.

  8. John Davey says:

    Selfawarepatterns


    “Whether the copied mind is the same person or is really conscious may always be a metaphysical debate.”

    Why “metaphysical” ? It sounds a perfectly normal scientific question to me.

    J

  9. SelfAwarePatterns says:

    Hi John Davey,
    I think it could be a scientific question, if we can agree on a specific definition of person. But the definition is a philosophical issue. As Peter mentioned in his post, if you regard location as a property of person A, then a perfect copy of person A would not be person A. (Although if we take that position, technically when I get up from my desk, I am then a different person.)

    But if you regard the pattern that makes up Person A to be the defining characteristic, then maybe Person A-copy is the same person. The problem is that any copy will inevitably have differences, even if it only amounts to the original being made of biological stuff and the copy being made of technological stuff. Which differences cause the copy to no longer be the same person? (Again, if we insist on absolutely no differences, then the fact that my brain has changed –as it continuously does– while I was typing this means the person finishing this sentence is not the same person who started typing it.)

    Ultimately, I think the reality will be if friends and family of the original person intuitively feel that the copy is that person, society will end up considering them the same person. But there will likely be people who never accept that.

  10. john davey says:

    SelfAwarePatterns


    “I think it could be a scientific question, if we can agree on a specific definition of person.”

    Homo sapiens…? A physical duplicate of Homo sapiens is not a new idea: it’s called a ‘twin’. It’s a separate person – the same genes, twice. It doesn’t matter if they are completely identical or not: they are still separate.

    Given that, the question of whether a copied mind is ‘really conscious’ strikes me as solely a scientific one – no metaphysics necessary.


    Ultimately, I think the reality will be if friends and family of the original person intuitively feel that the copy is that person, society will end up considering them the same person.

    Are twins discriminated against ?

    Anyway don’t worry, it’ll never happen. Don’t worry about robot morals either, they haven’t any.

  11. SelfAwarePatterns says:

    John Davey,
    Biological species distinctions are themselves often subject to confounding cases. Anyway, twins are only identical genetically. They each have their own distinct experiences and memories. Comparing them to copies of a mature mind doesn’t seem particularly relevant. But maybe I’m missing something?

    Of course, once we copy a mind, over time each instance would get their own set of new experiences and memories, gradually leading to them being distinct entities. Which might eventually raise the question of who gets the checking account, who is obligated under contracts signed by the original, etc.

    “Anyway don’t worry, it’ll never happen.”
    What leads you to conclude that?

  12. Hunt says:

    Final part of the linked article gets interesting and kind of reminds me of Rudy Rucker’s early sci-fi, like the novel Software. If people are rendered into digital form and live in a virtual space, it becomes a second, dual universe. Of course, there would be nothing stopping the virtual world from having eyes and ears onto the real world, so with gradual software updates, it would seem that the two would merge eventually. So individuals would morph from larval biological stage to immortal digital stage, and then eventually…? As biotechnology progresses, perhaps back to a biological stage, or something in between.

    Of course, this barely scratches the surface of speculation if any of this is possible (a caveat I add for John Davey :)) For a virtual world could be made without care or pain, a heaven on earth, or in the computer. Minds could also be merged, and we would have to deal with privacy issues, what you would or would not share with others, much like computer file sharing these days. And of course, there would be threats to deal with, like computer viruses, or ransoming individuals, encrypting them and threatening to throw away the key… so perhaps it would never be such a heaven after all.

  13. John Davey says:

    Selfawarepatterns

    Comparing them to copies of a mature mind doesn’t seem particularly relevant. But maybe I’m missing something?

    You’re talking about twins with shared memories. They are still separate persons.

    “What leads you to conclude that?”
    It’s possible that a matter duplication machine might manage something like that – as proposed in various sci-fi schemes – but this business of downloading minds as if they were information is balderdash. It’s like saying you can upload a mountain, or download the planet Mars. Not going to happen.

    You could duplicate a mountain, or duplicate a planet. But that is material duplication, not ‘downloading’.

    J

  14. John Davey says:

    Selfawarepatterns


    Which might eventually raise the question of who gets the checking account

    Easy. The original person. The copy would have to get his own.

    J

  15. SelfAwarePatterns says:

    John Davey,
    “They are still separate persons.”
    Maybe if they’re both still around, but if the original is gone (any foreseeable scanning process would likely be destructive), is the copy a new person? Or are they effectively a continuation of the original? Before answering, consider that every atom in your body (except for portions of your skeletal structure) has been replaced in the last few years in an ongoing maintenance process. Physically, you’re not the person you were five years ago; there’s certainly nothing physically left of the child you were at age 5.

    “but this business of downloading minds as if they were information is balderdash. It’s like saying you can upload a mountain, or download the planet mars. Not going to happen.”
    So you make a comparison with mountains and planets, but have you ever copied a computer file, or a book? Were the copies the same entity? Physically they weren’t, but in every way that matters, they were merely different instances of the same entity.

    “Easy. The original person. The copy would have to get his own.”
    Glad you have that figured out. Now if, before being copied, the original committed a crime, is the copy responsible for it? If not, what prevents a criminal from simply cloning himself after the crime to evade responsibility? (Or immediately before a planned crime if the timing of the fork matters.)

  16. Jochen says:

    SelfAwarePatterns:

    Maybe if they’re both still around, but if the original is gone (any foreseeable scanning process would likely be destructive), is the copy a new person?

    Wouldn’t everything else be profoundly strange? I mean, you’re effectively proposing that whether something is an instance of the same person depends on whether there’s more than one instance around: so let’s say you have a copying process that appears to be destructive, but in fact, the original is just stored somewhere (say, on Mars)—then whether the copy is ‘the same person’ as the original depends on facts that are external (and possibly unknowable) to the copy. So nobody would know whether something is the same person unless they are perfectly knowledgeable about the external person-determining facts, i.e. whether the original is still around in some form!

    Before answering, consider that every atom in your body (except for portions of your skeletal structure) have been replaced in the last few years in an ongoing maintenance process. Physically, you’re not the person you were five years ago; there’s certainly nothing physically left of the child you were at age 5.

    I think this intuition, albeit common, is a red herring: there’s no more of a problem with a thing changing, yet retaining its identity, than there is with it changing, say, its location: you can model the way a thing changes by a trajectory through an abstract space, the configuration space; and anything whose trajectory through this space is continuous has as valid a claim to being the same thing as something that has merely been translated in (ordinary) space (and time).

    In some ways, this is analogous to four-dimensionalism, in that it attaches the label ‘Jochen’ to my whole temporal evolution, with each Jochen at a particular point in space-time, with a particular configuration, being just an instantiation of that ‘Jochen-type’, which is simply a time-varying entity, as I have called it above.

    So you make a comparison with mountains and planets, but have you ever copied a computer file, or a book? Were the copies the same entity?

    Physically, a copy of a book isn’t the same book any more than a copy of a mountain is the same mountain. If I destroy the copy, nothing happens to the original; so obviously, I haven’t destroyed that book, but merely a different instantiation of the same book-type.

    Of course, you’re really referring to the information contained in the book; but that is a quantity that is always relative to an observer: to somebody speaking the language, both original and copy contain the same information, but to somebody who doesn’t speak it, this level simply doesn’t exist. Going down the route of philosophical thought experiments, to somebody speaking a language in which every utterance of the original book’s language has a different meaning, when he reads the copy, he will receive different information.

    The marks on the book’s paper really are only necessary conditions to bring about a certain state of mind within an observer; other necessary conditions exist that are not inherent to the book, such as fluency in the language it’s written in, knowledge of the alphabet, and so on. So the information isn’t really in the book, but only arises once the book is read by a competent language-user; thus the ‘identity’ of said information is not (wholly) contained in the book as a physical object.

    Thus, conceiving of minds in an analogous way is a bad metaphor: there, we cannot rely on some external observer interpreting the mind in the right way; rather, mind has to be inherent to the object we want to copy.

  17. Sergio Graziosi says:

    Oh dear, I really am getting senile. What follows is mostly repetitions (I can’t help it, with apologies).

    First, I think Jochen is exactly right in the token- and type-identity distinction (not surprising, since I write software mostly via object-oriented languages). I also note that Graziano’s “Y” image works pretty well and resembles Jochen’s metaphor in important ways. However, this merely points to the discussion we had here a while ago. The point in that case was that this sort of sci-fi new technology will indeed break plenty of well-established moral assumptions. Replicas of actual people (merely digital or not) are just another case of something that will mess with our moral compass a great deal.

    SelfAwarePatterns: the conclusion I reach by accepting the distinction of type and token, as well as realising that the type itself constantly changes, is that questions like “Is the copy the same person or not?” are just the wrong kind of question. They simply would cease to apply if we could replicate (in whichever way). That’s to say that no definitive answer is possible, reducing the debate to a pointless fight about outdated semantics (outdated if something like that will eventually become possible). I’d suggest we move on and realise that we’ll have to re-ground and reshape our moral compasses in radical ways. (But see also below).

    John (#8):

    “Whether the copied mind is the same person or is really conscious may always be a metaphysical debate.”
    Why “metaphysical” ? It sounds a perfectly normal scientific question to me.

    In addition to what has been said already, it is a metaphysical debate because people make it so. For example, if someone embraces epiphenomenalism, they will always be able to say “yes, the copy is behaviourally indistinguishable, but it is merely a zombie”. There is exactly nothing that science can discover to rebut this conclusion (Jochen and I had a looooooooong discussion on this a while ago – discussion starts here).

    Jochen:
    [Now that we have had time to cool off, should we resume our epiphenomenal sparring? No worries if you have better ways to use your time!]

    I suppose we just can’t agree on information. A book, printed or otherwise, has information in virtue of the structural properties of the text. I regard it as “information” because of the potential that it has in causing some specific effect, namely “to bring about a certain state of mind within an observer”. The structural properties of the text carry the information, via the physical instantiation, but any instantiation that preserves such structural properties contains (also) the same information. Thus, a book has an essence (the type, in the current lingo), and copying the essence can be regarded as copying the book. [Translating the book is more interesting as it is bound to somewhat transform the essence.] This kind of reasoning allows us to stop depending on the interpreter: we objectify (and somewhat trivialise) information, nullifying the homunculus regress. I just can’t avoid constantly nagging you about this, sorry!
    Anyway, in this context, assuming we can have information without an interpreter (indulge me for a minute, if you will), the question becomes:
    Do people have an essence?
    or
    Is the nature of person-hood exclusively informational (i.e. about structural properties, not dependent on what material instantiates the structural properties)?

    Considering that the stuff that makes up our bodies does change all the time, answering yes may be possible. If you espouse the view that information requires an already conscious (qualia-generating) receiver, then the homunculus pops up and you need the kind of esoteric way out that you are exploring (HC, etc).
    However, our structural properties also change all the time, so at best we may hope that our nature is informational, but time dependent: my type changes with time, a snapshot of me at time t will be somewhat different from a snapshot taken at time t+N. This brings me back to the “Y” image proposed by Graziano, which looks rather convincing.
    Anyway, if our nature is indeed informational (in the sense of structures which make a difference, just like the text of a book), then copying people onto different media is theoretically possible.
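
    Sergio’s type/token distinction can be made concrete in a couple of lines of Python (a purely illustrative sketch, not anything from the original exchange):

```python
# Type vs token, in Python terms: two distinct objects (tokens)
# can instantiate the same content (type).
a = "".join(["t", "e", "x", "t"])  # built at runtime to avoid string interning
b = "".join(["t", "e", "x", "t"])

print(a == b)   # True: same type (identical structural properties)
print(a is b)   # False: different tokens (two physical instantiations)
```

    Copying the book, in this picture, is producing a new token of the same type.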

    Now I expect both Jochen and John to jump at me from two different directions…

  18. Jochen says:

    Sergio:

    [Now that we have had time to cool off, should we resume our epiphenomenal sparring? No worries if you have better ways to use your time!]

    Well, I’m compiling my thesis at the moment, so time is a little more scarce than it used to be… Then again, I do seem to find the spare moments to devote to this discussion, so if you think there’s some new ground to be covered, I’ll try to find the time and reply.

    I suppose we just can’t agree on information. A book, printed or otherwise, has information in virtue of the structural properties of the text. I regard it as “information” because of the potential that it has in causing some specific effect, namely “to bring about a certain state of mind within an observer”.

    I think we should distinguish here between the various ways the term ‘information’ is commonly used, i.e. syntactic, pragmatic, and semantic. On the syntactic level, I agree that the book contains information; but the syntactic level isn’t really what we’re after. In a sense, we can view the syntactic information as a potential to bear semantic information; and it’s the latter we care about here. So the Shannon entropy measures the (average) amount of semantic information that can be contained in a string of symbols, but it doesn’t tell us whether there actually is any information in there, and if so, how much.

    Using just the book itself, however, you don’t have access to the semantic level; you can measure its syntactic information content, but without an interpreter, you know exactly nothing about what all those signs and symbols actually mean. There can be a string of random letters (whose Shannon information is maximal), which simply doesn’t mean anything to anyone. Or, somebody could interpret those same letters (regardless of length) to stand for ‘dog’ (here the animal, not the word “dog”). If that string of letters is part of some universal code, then there will have to be some other strings that are quite short but have a high information content, in order for things to average out; but there’s no problem for a single string of high Shannon information to bear a low amount of semantic information.
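
    The point that Shannon’s measure tracks capacity rather than meaning is easy to demonstrate; here is a minimal sketch (the helper function is mine, purely for illustration):

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits per symbol, estimated from the string's own letter frequencies."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

meaningful = "the quick brown fox jumps over the lazy dog"
gibberish = "eht kciuq nworb xof spmuj revo eht yzal god"  # same letters, words reversed

# Both strings score exactly the same entropy: the measure is blind to
# the fact that only one of them means anything to an English reader.
print(shannon_entropy(meaningful), shannon_entropy(gibberish))
```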

    In order to transfer some semantic information between individuals, we have to look at the pragmatic level. In short, that’s what the string of symbols makes us do, or does to us—like e.g. bringing about a certain state of mind. So I write the string “dog”, which is a syntactic entity, you read it, bringing about a certain mental image, the process of which is the pragmatic dimension, and the result of which is your understanding of the semantic content of the string “dog”. By copying that string, you only directly copy the syntactic part; presenting it to an observer then ‘activates’ the semantic information via its pragmatic dimension.

    The thing is, now, what the string “dog” means is not connected to its syntactic information at all. That there are so many bits of Shannon information does not tell us anything about ‘dogs’. A different observer could react to the same string differently, creating a different mental state, and receiving a different meaning—for instance, in his language, “dog” could mean ‘god’. It’s only in the combination of the string of symbols with an observer that a semantic meaning is realized, by way of the effect the symbols have on the observer; a string in isolation literally means nothing. And this observer-dependency is what makes a mind different from information, thinking different from symbol-shuffling, and so on: a mind must be meaningful in itself in order to avert the homunculus regress.

    This is not different from any other physical system: a planet is just that planet without any interpreting entity; a mountain is just that mountain. In the same way, a mind is just that mind. But it does mean that the information processing metaphor does not cut deeply enough to be explanatory of the mental capacity (that’s not to say that minds can’t be used to process information, and in fact, we continually do so; but so can mountains, and planets).

  19. SelfAwarePatterns says:

    Jochen,
    “I mean, you’re effectively proposing..”
    Actually, I’m not proposing anything about personhood here. My point was that this is a philosophical issue, that there is no scientific fact of the matter, that society would have to decide how it would deal with these issues. (Conceivably, different societies may come up with different answers.)

    “and anything whose trajectory through this space is contiguous has as valid a claim to being the same thing as something that has merely been translated in (ordinary) space (and time).”
    Again, that’s a philosophical decision you’re making, that physical contiguity is what matters. I think a case could be made that the contiguity of the information patterns is what matters, but in the case of multiple instances, there don’t seem to be any easy answers.

    You make an excellent case for why a book is not an ultimate source of information, but rather an information nexus. But what about a mind makes it something more than an information nexus, that is, more than the sum total of its perceptual inputs and genetic predispositions? Yes, a mind has an internal state that makes its outputs hard to predict if we don’t know every input it has ever had and every genetic predisposition, but the same is true of a complex IT system if we don’t know every input it’s ever had and its base programming. What attribute does a mind have that makes it more than this?

  20. SelfAwarePatterns says:

    Sergio Graziosi,
    “That’s to say that no definitive answer is possible, reducing the debate to a pointless fight about outdated semantics”
    Oh, I agree completely. The questions I was asking were more to flesh out my original point that this is a philosophical debate, that there’s no fact of the matter answer.

  21. David Duffy says:

    “a philosophical debate…no fact of the matter answer”: once you have two instances of the type “SelfAwarePatterns”, you can ask “are they exchangeable?”. There will be a fact of the matter up to any specified tolerance.

  22. SelfAwarePatterns says:

    David,
    I think for me it would matter whether the two instances of me could share memories. If I could remember *being* the other instance, then my sense of self would likely expand to include both instances.

    Of course, it’s quite possible that the architecture of the human mind may never be conducive to that kind of sharing. (At least, not without modifications so deep that the mind is no longer human by even the most liberal interpretations.) If so, then the Y forking reality that Graziano described is more likely. In that case, I’d probably only want one instance running around at a time, with frequent backups in case the current one dies.

  23. Sergio Graziosi says:

    Jochen,
    on Information: yes, I follow you almost to the end – I keep overestimating how much we disagree! In case you missed it, Golonka and Wilson have a paper in the making, ‘Ecological Representations’, which tries to do lots of things, including bridging the intentional gap via embodiment. They are asking for feedback; naturally, I’ve offered mine (how could I resist?). I mention it because it’s very relevant to the subject at hand, and in case you haven’t read their paper, I’m guessing you’ll find it interesting.
    Re epiphenomenalism: don’t worry! After my last reply I was afraid you had finally concluded that time spent on my lucubrations was wasted (I wouldn’t blame you if you had); glad to hear that this might not be the case. Less egocentrically, I am sure your thesis is much more important, so please feel free to leave our discussion in the freezer. We might resume it one day, if we feel like it. (I’m struggling to find some spare energy myself, so not resuming suits me as well.)

    SelfAwarePatterns: yes, that’s what I thought, but wasn’t entirely sure.

    David:
    I think you’re missing the point. The debate around p-zombies is precisely that functional equivalence, even if perfect (presuming we could establish it!), may not be enough to declare that two functionally equivalent instances are indeed instances of the same type. You and I might agree that such a position is pointless, ill advised or utterly unscientific, but we would still need to acknowledge that plenty of smart and well informed people think otherwise.

  24. john davey says:

    SelfAwarePatterns


    Maybe if they’re both still around, but if the original is gone (any foreseeable scanning process would likely be destructive), is the copy a new person?

    Yes. Just as a Ford car is not the same car as the one it was built to duplicate.


    Physically, you’re not the person you were five years ago; there’s certainly nothing physically left of the child you were at age 5.

    I think that’s reductionist nonsense to be honest. Of course you’re the same person: ‘person’ is a big enough concept to cope with material recycling.


    you ever copied a computer file, or a book?

    Yes – but they are informational: an abstract product of human tools. A mind is a physical phenomenon, devoid of abstraction, and in that sense is like a mountain.


    Now if, before being copied, the original committed a crime, is the copy responsible for it?

    Legal liability questions seldom have much to do with epistemology. My guess is that most lawyers would say no, just as one twin is not responsible for the other’s acts: the duplicate can’t be blamed for the past of the original.


    If not, what prevents a criminal from simply cloning himself after the crime to evade responsibility?

    He couldn’t. He could duplicate himself and commit suicide afterwards, but he’s kind of punished himself in the process.

  25. Jochen says:

    SelfAwarePatterns:

    My point was that this is a philosophical issue, that there is no scientific fact of the matter, that society would have to decide how it would deal with these issues. (Conceivably, different societies may come up with different answers.)

    Just because it’s a ‘philosophical issue’ doesn’t mean there’s no fact of the matter. Science isn’t the sole arbiter of the factual—mathematical facts are not scientific, aren’t discovered empirically, but that doesn’t make them any less factual. Philosophical inquiry can likewise lead to discoveries of facts (if a viewpoint has been shown inconsistent, then it’s factually wrong, for instance). So claiming that there’s no scientific fact of the matter doesn’t make an issue open to relativism, as you seem to be saying.

    But what about a mind makes it something more than an information nexus, that is more than the sum total of its perceptual inputs and genetic predispositions?

    A book only contains information if interpreted in some way—the meaning is not intrinsic. A mind’s meaning is intrinsic—indeed, minds are those things that fix the meaning in books. Whether it’s ‘more than the sum total of its perceptual inputs and genetic predispositions’ is a completely separate issue.

    I’m not sure where you think something’s being ‘hard to predict’ enters.

    Sergio:

    In case you missed, Golonka and Wilson have a paper in the making ‘Ecological Representations‘, which tries to do lots of things, including bridging the intentional gap via embodiment. They are asking for feedback; naturally, I’ve offered mine (how could I resist?).

    I’ve noticed the paper (via your blog), and still haven’t given it a complete read-through; but from a first skim, I agree with what Jim Hanlyn said in the comments below the blog post: just because something can be used as a representation doesn’t make it a representation. One needs an account of who uses it, and how it’s used—and that, I think, is the difficult part (in particular since using something as a representation seems to imply that the user itself is capable of representing things, simply in virtue of ‘using something as something’, which threatens circularity). As far as I understand it, ecological information may reliably correlate with what we wish to represent, but correlation is not representation.

  26. Sergio Graziosi says:

    Jochen,
    We certainly agree on what’s the difficult part!
    A provocation: how about a system that is unable to treat its input as what it is, but always treats it as standing for something else?
    A very oblique and ineffective way to remark once more that I simply can’t see how a bottom-up picture can suffer from circularity. My own failing I guess, but I just keep reaching the same conclusion: the position you keep trying to explain to me (thanks!) seems to insert some top-down assumption about what representations are supposed to be, smuggling in the circularity threat in the process. :-/

    John (24), you’ve stumbled on an interesting anomaly, methinks:

    you ever copied a computer file, or a book?
    Yes – but they are informational: an abstract product of human tools. A mind is a physical phenomenon, devoid of abstraction, and in that sense is like a mountain.

    Hmm, I’d see nothing remarkable if you had actually written “A brain is a physical phenomenon, devoid of abstraction”. But a mind? Aren’t minds filled with abstractions? This, in a nutshell, is one way of highlighting why so many people think minds are informational constructs, I’d guess.

  27. SelfAwarePatterns says:

    Jochen,
    First, let me say that I value philosophy a great deal. (I wouldn’t be reading and commenting on a philosophical blog if I didn’t.) But the fact remains that most of the interesting problems in modern philosophy simply don’t have a fact of the matter answer. If they did, they’d be in the realms of science, mathematics, or perhaps pure logic. No matter how much we might want there to be, there’s simply no fact of the matter on whether utilitarianism is more right than deontology or virtue ethics, or whether someone dies in a Star Trek teleporter or lives on in the new location, or whether a copied mind is the same person as the original.

    “A mind’s meaning is intrinsic”
    This gets back to the interpretational issue, for which I think we know each other’s positions. The hard to predict thing was basically noting the difference between a book and a mind, but still pointing out why both are ultimately information. The point is that the mind is also a nexus of information inputs and outputs, just like a book, albeit orders of magnitude more sophisticated.

  28. Jochen says:

    Sergio:

    A provocation: how about a system that is unable to treat its input as what it is, but always treats it as standing for something else?

    Well (barring my own idiosyncratic models), I don’t know how a system treats anything as anything, be it ‘what it is’, or something else, so this is just another way to frame the same problem to me.

    A very oblique and ineffective way to remark once more that I simply can’t see how a bottom-up picture can suffer from circularity. My own failing I guess, but I just keep reaching the same conclusion: the position you keep trying to explain to me (thanks!) seems to insert some top-down assumption about what representations are supposed to be, smuggling in the circularity threat in the process. :-/

    To me, representation is just ‘taking something as something else’. So, if you start out with systems taking things as something, then that’s where the circularity enters right there.

    Let’s consider a simple model universe that contains some (right now, unanalyzed) observer, and two balls, one either red or green, the other either blue or yellow. The observer has only access to one of the balls; her goal is to use that ball to derive conclusions about the other. Plainly, this is only possible if there is some correlation between the ball colors: say, only the pairs (red, blue) and (green, yellow) are allowed to occur, thus eliminating half of the possibilities.

    If that’s the case, then, it might seem our observer can use her ball to represent the distant one: she can do things like, for example, utter the sentence ‘the second ball is red’, whenever hers is blue—her ball can act as a stand-in for the other. So now, her ball is a representation of the distant one! Right?

    And that’s the rub: in drawing this conclusion hastily, we’re leapfrogging the problem: we’re depending on the observer’s ability to take the ball in her hand as a stand-in of the distant ball in the first place; and how’s that to be accomplished? Well, how do you use things as representations? Via derived intentionality: you use the intentional nature of your mind to ‘lend’ that intentionality to, say, a symbol you use to represent something. That symbol thus has meaning only via your mind’s ability to confer, and manipulate, meanings. Thus, in the above toy-model of representation, we have implicitly relied on the observer’s capacity for intentional manipulations in order to use the ball she has access to as a representation of the distant ball.

    But if now intentionality is itself to be analyzed in terms of representation—which most people arguing for ‘mental representation’ are trying to do—then of course we can’t rely on intentionality in our analysis of representation!

    So say we now turn our analytic gaze on the observer herself. Imagine she is some sort of mechanism that has, at minimum, the capacity to react to balls of certain colors—perhaps by uttering canned appropriate sentences. Now wire everything up, such that the reactions to ‘her’ colored ball are appropriate, in fact, to the color of the distant ball we wish to represent—say, if ‘her’ ball is blue, she will utter ‘the ball is red’.

    Have we achieved any representation by this? No—we have merely created a certain system with reactive dispositions to a certain quality, i.e. the color of ‘her’ ball. The reaction to that ball being blue is the production of certain sounds, which another observer possessing full-fledged intentional capacities may interpret as an assertion of color—but there’s no representation to the original observer; it’s been supplanted by a simple mechanism throwing up certain flags in response to certain conditions.

    So either way, this analysis of representation leads nowhere—either we land in circularity, or lose the phenomenon we set out to explain (at which point it is often introduced again through the back door of claiming that the mechanically produced assertion ‘the ball is red’ means that the ball is, in fact, red; but this is again just an interpretation produced by appealing to an unstated and unanalyzed second observer lending their intentionality to the mechanism’s sound productions, thus appealing again to what we set out to explain in the first place).
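
    The ‘mechanical observer’ of this toy model can be sketched in a few lines (a hypothetical miniature, keeping the (red, blue)/(green, yellow) pairing from above), which makes vivid that it is nothing but a lookup from conditions to canned flags:

```python
# Allowed correlated pairs: (distant red, local blue), (distant green,
# local yellow). The "observer" is just a lookup from the local ball's
# colour to a canned utterance; nowhere does the mechanism take the
# local ball to *stand for* the distant one.
REACTIONS = {
    "blue": "the second ball is red",
    "yellow": "the second ball is green",
}

def mechanical_observer(local_colour: str) -> str:
    # A reactive disposition: condition in, flag (a sound string) out.
    return REACTIONS[local_colour]

# Only a further, fully intentional observer can read this output as
# being *about* the distant ball.
print(mechanical_observer("blue"))
```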

  29. Jochen says:

    SelfAwarePatterns:

    But the fact remains that most of the interesting problems in modern philosophy simply don’t have a fact of the matter answer. If they did, they’d be in the realms of science, mathematics, or perhaps pure logic.

    What’s your reason for believing that only the domains of science, mathematics, and pure logic are concerned with the factual? I mean, I suppose you don’t hold that there are no facts in philosophy, for this is surely a philosophical position, which you’ve just stated as a fact… So then, there are facts that don’t reduce to the previous categories; so who’s to say what has a factual answer, and what doesn’t? I’m certainly not capable of such a priori judgments.

    This gets back to the interpretational issue, for which I think we know each other’s positions.

    Yes, it comes back to the fact that it’s simply circular to believe that minds could both be interpretation-dependent and the entities that interpret. Say we have a mind A1, which has some mental content x: if it is in need of interpretation, then it needs a mind A2 in order to have content x; that is, A2 needs to have mental content y = ‘A1 has mental content x’. But in order for A2 to have that content, it must be interpreted in the right way, that is, there must be a mind A3… And so on.

    It doesn’t help to ‘short-circuit’ things either. For if, say, A1 interprets A2 and vice versa, then we need both A1 to have definite content—the particular interpretation of A2—before A2 can have definite content, and we need A2 to have definite content—the particular interpretation of A1—before A1 can have definite content: but this is a contradiction, as both minds must have definite content before either can have definite content. So if minds need to be interpreted in order to interpret, we either need an infinite tower of minds interpreting one another, or we run into vicious circularity.

  30. SelfAwarePatterns says:

    Jochen,
    That list wasn’t necessarily meant to be all-encompassing. And, in reality, I don’t see a sharp distinction between a “fact” and an “opinion”, but a spectrum of levels of certitude. The areas I listed have what I perceive to be the highest certitude (although admittedly, this varies tremendously between different scientific fields).

    Why do I think most interesting problems in philosophy don’t have as high a certitude, or often no definitive answer? Well, I gave examples. Of course, there are examples of philosophers engaging in fact of the matter questions, such as Kuhn’s work on the philosophy of science (almost more a sociology of science), but I perceive that this kind of work is relatively infrequent compared to pondering questions such as whether euthanasia is moral (an important question, but not one that will ever be answered as factually as whether the earth orbits the sun). All that said, I’m always prepared to change my mind if you can make a convincing argument.

    On the interpretational issue, I think it might pay to remember what “interpretation” is in this context: a system taking in inputs, building a model from those inputs, and then determining if that model is similar to the model it holds of itself, with the degree of similarity or dissimilarity determining whether it applies the label “mind” to the model of the external system. Why does the system have this model or construct models from its inputs? Because they provided evolutionary advantages. If you accept that minds evolved, growing from simple pre-mind phenomena into what we now call minds, then no infinite towers or vicious circularities arise.
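
    The deflationary notion of “interpretation” just described—build a model from inputs, compare it to the self-model, apply a label by similarity—can be sketched as follows (the feature sets, the Jaccard similarity measure and the threshold are all illustrative assumptions of mine):

```python
def interpret(observed_model: set, self_model: set, threshold: float = 0.5) -> str:
    """Label an observed system by how similar its model is to the self-model."""
    # Jaccard similarity between feature sets: an arbitrary illustrative choice.
    overlap = len(observed_model & self_model) / len(observed_model | self_model)
    return "mind" if overlap >= threshold else "not-mind"

me = {"talks", "plans", "remembers", "feels"}
print(interpret({"talks", "plans", "remembers"}, me))  # similar enough: "mind"
print(interpret({"rolls", "shines"}, me))              # no overlap: "not-mind"
```

    Whether such a labelling procedure captures what Jochen means by imbuing symbols with meaning is, of course, exactly what is at issue in the exchange.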

  31. john davey says:

    Sergio

    “But a mind? Aren’t minds filled with abstractions? ”

    What’s abstract about wanting to pee, or the pain of standing on a nail? These aren’t ideas. Minds deal with abstract concepts, but that doesn’t make them abstract themselves. Distinguish between platform and content: between the mind and a relatively small part of mental activity, reflecting on abstract objects (which seems to be the entirety of mental activity to some, probably because that’s how the Greeks thought of it).


    This in a nutshell is one way of highlighting why so many people think minds are informational constructs, I’d guess.

    Yes – it’s the colossal blunder that unites all computationalists, alas. Minds can be viewed as sources of information – anything can, I suppose, information being a non-intrinsic, observer-relative term – but they aren’t in themselves “information”, of course; “information” has no intrinsic existence, unlike minds – more specifically, consciousness.

  32. Jochen says:

    SelfAwarePatterns:

    And, in reality, I don’t see a sharp distinction between a “fact” and an “opinion”,

    There’s plenty of difference, though: facts are objective and incontrovertible, while opinions are subjective and falsifiable. Opinions can be right or wrong (depending on what the facts are), while a fact just is a fact.

    All that said, I’m always prepared to change my mind if you can make a convincing argument.

    Well, as I said, I don’t purport to be able to a priori settle whether something has a factual answer—indeed, I think that’s what much philosophical debate is about.

    I think it might pay to remember what “interpretation” is in this context: a system taking in inputs, building a model from those inputs, and then determining if that model is similar to the model it holds of itself, with the degree of similarity or dissimilarity determining whether it applies the label “mind” to the model of the external system.

    That’s not what interpretation means in this context, at least not the way we originally got at it. There, interpretation simply means imbuing a set of symbols with meaning.

    Furthermore, by appealing to the modeling capacities of systems, you’re already imbuing them with intentionality (to model something is to take something—the modeling system—as something other than itself—the modeled system, and thus, an intentional act). But that’s the very thing we’re trying to explain, so you’re using the explanandum as a primitive to arrive at the explanans, which is circular.

    If you accept that minds evolved, growing from simple pre-mind phenomena into what we now call minds, then no infinite towers or vicious circularities arise.

    The circularity arises completely independently of history, implementation, embeddedness or whatever other characteristic the bearer of the symbols might have: the problem is that a mind, computationally or informationally characterized, is simply a bunch of symbols that don’t have an inherent meaning.

    Go back to the adder example I gave: what the environment provides are voltage levels; their manipulation is all that can be selected for evolutionarily. Yet, whether the system adds, or performs a different operation, is completely unspecified, because what logical states the voltage levels map to is simply undetermined without an arbitrary choice. The same goes for a system purported to ‘compute’ a mind.
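
    The adder example can be miniaturised to a single gate to show the point: the very same physical behaviour computes AND under one voltage-to-bit mapping and OR under the opposite one (a sketch under that framing; the function names are hypothetical):

```python
# A "gate" described purely physically: output voltage is HIGH exactly
# when both input voltages are HIGH.
def gate(v1: bool, v2: bool) -> bool:  # True = high voltage level
    return v1 and v2

def run(a: int, b: int, high_means: int) -> int:
    """Run the same physical device under a chosen voltage-to-bit mapping."""
    to_volt = lambda bit: bit == high_means          # logical bit -> voltage level
    to_bit = lambda volt: high_means if volt else 1 - high_means
    return to_bit(gate(to_volt(a), to_volt(b)))

# Under high=1 the device computes AND; under high=0 the same physics
# computes OR. Which function it "really" computes is fixed only by the
# arbitrary mapping, not by anything in the device itself.
print([run(a, b, 1) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]
print([run(a, b, 0) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 1]
```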

  33. David Duffy says:

    Hi S.A.P: Re “whether the two instances of me could share memories”,

    In the past few years, the combination of transgenics, optogenetics, and other technologies has allowed neuroscientists to begin identifying memory engram cells by detecting specific populations of cells activated during specific learning epochs and by engineering them not only to evoke recall of the original memory, but also to alter the content of the memory.

    [http://www.ncbi.nlm.nih.gov/pubmed/26335640]. We already know how to create “false memories” in humans using less intrusive methods, of course. Copy 1 and Copy 2 will have to have a dialogue about whether their qualia agree, which they’ll do by constructing semantic networks anchored on phenomenology of standard objects (:)).

    As to Jochen’s comment re circularity of modelling and intentionality – I think starting right at the bottom of embodiment is one way to think about such matters. The strands of RNA replicating themselves in
    http://www.nature.com/nature/journal/v491/n7422/full/nature11549.html
    are ensuring continuation of an abstract individual information structure through time. Such a structure maintains homeostasis best by being able to respond to environmental changes – it models its environment in some way once it becomes complex enough. Any computing being done is intentional, though I don’t think calling it consciousness in the panpsychic kind of way is useful. But I also don’t think there is a difference, in terms of this definition of aboutness, between the actual RNA doing its thing and a simulation of this system.

  34. SelfAwarePatterns says:

    Jochen,
    My point about facts and opinions was epistemic, that there isn’t a clear border between what we can know is fact and what we can’t.

    “That’s not what interpretation means in this context, at least not the way we originally got at it. There, interpretation simply means imbuing a set of symbols with meaning.”
    This statement seems to me to be equivalent to what I said, simply at a different level of understanding. What would you say is the difference between applying a label to a model and imbuing a set of symbols with meaning? Isn’t a label a symbol? My overall point is that we should unpack exactly what we mean by “interpretation”. I think once we do, the apparent difficulties disappear.

    “Furthermore, by appealing to the modeling capacities of systems, you’re already imbuing them with intentionality”
    An interesting statement since computer systems can also build models based on inputs, albeit not yet with the sophistication of vertebrate brains. I’m not sure philosophers who use the word “intentionality” necessarily have that in mind when they use it, but I’ll admit my own understanding of how it is used philosophically isn’t rock solid.

    “the problem is that a mind, computationally or informationally characterized, is simply a bunch of symbols that don’t have an inherent meaning.”
    A mind is a system embedded in a body, in an input-output framework. Without that framework, who’s to say what the electrochemical firings between neurons mean? Would those neurons, if they had never been in a body and processed any perceptions, still be a mind? What makes the high order organization of that signalling that we call “mind” more objective than the software running a robot?

  35. Sergio Graziosi says:

    Jochen,
    I still don’t see it.
    If we locate minds as the result of what happens in brains, then my provocation starts making a little more sense (I did say “ineffective” ;-)). Brains receive input from sensory neurons/cells, but because of the reasons why such systems evolved, they have no reason to (and do not) treat action potentials, synaptic releases and the lot as such. They react to them as signals that stand for whatever physical stimulus was collected and transduced. Let’s forget mental life and concentrate on simple reflexes, for example: are we OK with the idea that action selection depends on this mechanism, treating signals as representations, and that there is no circularity involved?
    I can see there is circularity in the bit where I (specifically myself – a fully intentional agent!) write “treating signals as representations”: I am interpreting the signals as representations, the system just reacts to them, making my interpretation handy, but with no weight in describing the actual mechanism.
    With me?
    Now add memories, which in a very pragmatic sense are the reason why we have such huge brains. Their functional role is to allow the system to optimise its responses based on how the outside world responded to previously selected actions. We still don’t need to rely on “explicit” memories; we can think about simple accommodation mechanisms here. A sudden loud noise may trigger a defensive response, but if it is repeated many times while nothing else remarkable happens, the action selection circuitry may learn in very basic ways (fairly well understood at the molecular level) to stop selecting the defensive reaction.
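
    The habituation mechanism just described—a defensive response whose strength wanes when the repeated stimulus proves inconsequential—can be caricatured in a few lines (an illustrative toy of mine, with an arbitrary decay factor):

```python
# Minimal habituation: response strength to a repeated stimulus decays
# each time the stimulus recurs with no remarkable consequence.
class Habituator:
    def __init__(self, strength: float = 1.0, decay: float = 0.5):
        self.strength = strength  # initial response strength (arbitrary units)
        self.decay = decay        # arbitrary illustrative decay factor

    def loud_noise(self) -> float:
        response = self.strength
        self.strength *= self.decay  # the noise proved inconsequential: dampen
        return response

h = Habituator()
print([h.loud_noise() for _ in range(4)])  # [1.0, 0.5, 0.25, 0.125]
```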
    IOW, we already have a system that reacts in teleological ways. You should be raising a philosophical eyebrow right now.
    The next trick is to realise that simple learning (in mechanically similar, if quantitatively expanded ways) can also apply to meta-levels. Depending on what can be somewhat memorised (the actual detail is irrelevant) and what pattern-matching abilities a system is equipped with, such a system could start reacting to patterns which include its own actions (its own reactions to stimuli). Add enough storage and pattern-detection powers and you get a system which exhibits all that’s needed for introspection (in the past doing A in response to B worked well, unless C. I’ve now received stimuli B and C so I shall not do A and try something else instead).
    In other words, the system has learned to interpret the signals it receives as representations of what’s in the world – if it also stores a trace of the part I’ve included in parentheses above, it will now be able to react to patterns arising in the matching between stimuli and the shadows of its own decision making processes.
    Your philosophical jaw should be dropping now ;-).
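    The parenthetical rule above (“in the past doing A in response to B worked well, unless C”) can be made concrete with a toy sketch. All the names, the trace format, and the selection loop below are invented purely for illustration; the point is only that the mechanism is ordinary table lookup over stored (stimuli, action, outcome) traces:

```python
# Hypothetical sketch of the parenthetical rule ("doing A in response to
# B worked well, unless C"): names and the trace format are invented.
# Each trace pairs a stimulus set, the action taken, and how it went.
HISTORY = [
    (frozenset({"B"}), "A", "good"),       # A in response to B worked well...
    (frozenset({"B", "C"}), "A", "bad"),   # ...unless C was also present.
]

def choose(stimuli, candidate_actions):
    """Pick the first action with no 'bad' trace matching the current stimuli."""
    for action in candidate_actions:
        went_badly = any(traced <= stimuli and act == action and outcome == "bad"
                         for traced, act, outcome in HISTORY)
        if not went_badly:
            return action
    return None

print(choose({"B"}, ["A", "A2"]))       # A is fine when only B is present
print(choose({"B", "C"}, ["A", "A2"]))  # with B and C, A is avoided; A2 is tried
```

    Whether this kind of lookup already amounts to “treating signals as representations” is, of course, exactly what is under dispute in this thread.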
    The original recognisable (proto)intentionality of the signals I have described (the bit that was circular, because of my own interpretation) has become available to the system itself, because it was there from the very beginning! As per my provocation, the system inevitably treats the collected stimuli as “standing in” for whatever generated them.
    There is no circularity that I can detect now. Stimuli B and C are treated as fully intentional, they are representations of what generated them and are being interpreted by the system collecting them.
    Exercise: is there any element describing a “mental life” missing from the picture above?

    If you still see circularity, do feel free to point to it, but if possible, please do so while using my example: your own extraordinary (in a good sense) scenarios regularly end up confusing me, sorry!

    Do note that I am emphatically not proposing to “add enough complexity” so that “the magic happens”. I am pragmatically isolating the building blocks of what is, at worst, a p-zombie. Trouble is, such a p-zombie would generate thoughts such as “ah, yes, I remember that this fruit looks and smells delicious but is terribly bitter” (such a thought is the memorable shadow of the actual decision making mechanism), so hey, if it can be bothered to reflect on its own thoughts, it will start wondering about what generates the qualities of sight, smell and taste. Oh, hang on. Do we need to explain anything else? Qualia are the necessary building blocks of memories of past stimuli (aka, for a system such as the above, “past experience”).

    John (31): besides the above, I don’t think we disagree so much. The one thing we might disagree about is, in fact, the above. To me, it works, and is translatable into informational terms, at least because (many of) the underlying mechanisms are understood and can be modelled in informational terms (neurons as a particular kind of logic gate, and so forth). Also: why is it obviously wrong (to you!) to think that “brain” is the platform and “mind” is the content?

    David (33): Yes (a 1000 times)! Any system which interacts with the environment, in such a way as to preserve homoeostasis, is inevitably building what can be used as an internal representation of the (relevant aspects of the) outside world. See my discussion of Friston’s official take (e.g. “[their] internal states (and their [Markov] blanket) […] will appear to model — and act on — their world to preserve their functional and structural integrity”).
    The only mystery for me remains why I can’t seem to either grasp why people disagree or why I fail to make people agree (the relation between the two mysteries isn’t mysterious, hence my long rants in here).

    S.A.P (34): Yes to you too ^_^.

  36. 36. SelfAwarePatterns says:

    Hi David,
    I’m going to have to check out those references. Thank you!

    On memories, I think one obstacle is that, because of the order in which they’re received, the memories often won’t mean the same thing to the receiving instances as they did to the instance that actually had the experience. It seems to me that this would, subtly, lead to differences between them that, over time, would likely still have their personalities diverging. Although perhaps minds that remember together are more likely to stay together.

    On mind and homeostasis, are you familiar with Antonio Damasio’s theories? He sees consciousness and the self as an outgrowth of the need to manage homeostasis. I can’t say I think that’s the whole story, but it does seem to be a major part of it.

  37. 37. SelfAwarePatterns says:

    Thanks Sergio.

    Don’t feel bad about failing to make people agree. Over years of internet discussions, I can count on one hand the number of times anyone with a pre-existing position changed their mind within a discussion. However, I do occasionally run into people I talked with months or years earlier who have changed their minds since our discussion. And yes, sometimes it’s been me who has changed my mind.

  38. 38. Jochen says:

    SelfAwarePatterns:

    My point about facts and opinions was epistemic, that there isn’t a clear border between what we can know is fact and what we can’t.

    OK. As a point on what we can know, and what we can know we know, etc., I agree. For me, however, facts are essentially how the world is, they’re what we can have (imperfect) knowledge of, and what we can (hope to) approximate—be that by empirical investigation, mathematical deduction, or rational discussion. So when you talk about questions not having a fact of the matter answer, as opposed to there being (perhaps even insurmountable) obstacles to obtaining that answer with the above mentioned methods, I think that hints at a conflation between what we can know and what is the case.

    What would you say is the difference between applying a label to a model and imbuing a set of symbols with meaning?

    The point is that having a model implies already having imbued (at least) some symbols with meaning. If you consider an orrery as a model of the solar system, then you have engaged in an interpretive act, say, interpreting the third orb as a stand-in for the Earth, and so on. So when you stipulate that there is already model-building capacity within a system, you’ve already imbued that system with the ability of interpreting symbols and using representations; thus, the system can’t be of any help explaining how we interpret symbols and use representations.

    An interesting statement since computer systems can also build models based on inputs, albeit not yet with the sophistication of vertebrate brains.

    A computer doesn’t model anything without being interpreted in the right way. It’s not the case that there is, in any objective sense, a model of the solar system in a computer used to model the solar system; the modeling comes in only through it being used that particular way. A different user could interpret the computation in a completely different way, not pertaining to the solar system at all.

    Again, go back to the discussion of the adder. Now, substitute for the addition program (which can itself be considered a kind of model, of course, albeit of an abstract set of axioms) some program you hold to model something. Then, by simply ‘looking at’ the physical system in a different way (e.g. changing the mapping from voltage levels to logical values), a different user can, with just as much claim to be ‘right’, consider the system to model something else entirely, or maybe nothing at all.

    Thus, what the system models is a question of how it is interpreted.
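    The argument above can be illustrated with a toy sketch (the device table and the two mappings are invented for illustration): a single fixed physical input/output behaviour, read through two different voltage-to-logic mappings, comes out as two different Boolean functions:

```python
# Hypothetical toy version of the adder argument: DEVICE is a fixed
# physical input/output table (voltage levels in, voltage level out).
# Nothing about it changes between the two "users" below.
DEVICE = {
    ("hi", "hi"): "hi",
    ("hi", "lo"): "lo",
    ("lo", "hi"): "lo",
    ("lo", "lo"): "lo",
}

def interpret(mapping):
    """Read the unchanged physical table through a voltage -> bit mapping."""
    inv = {bit: volt for volt, bit in mapping.items()}  # bit -> voltage
    return {(a, b): mapping[DEVICE[(inv[a], inv[b])]]
            for a in (0, 1) for b in (0, 1)}

# User A reads high voltage as 1: the device's table is that of logical AND.
as_and = interpret({"hi": 1, "lo": 0})
# User B reads high voltage as 0: the very same device's table is that of OR.
as_or = interpret({"hi": 0, "lo": 1})
```

    The physical table is untouched in both cases; only the mapping from voltages to logical values differs between the two readings.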

    A mind is a system embedded in a body, in an input-output framework.

    Again, consider the adder: its inputs are voltage levels, its outputs likewise. Any interaction with the environment is mediated by these voltage levels. Yet, those voltage levels don’t fix the question of whether the adder adds, or what other computation it performs: that is dependent on the interpretation of the voltage levels as logical values, which the environment does nothing to fix (as, again, the different interpretations don’t change anything about the voltage levels, and thus, the adder’s reactions to environmental stimuli stay exactly the same, and yet, the computation it performs changes).

    Sergio:

    Let’s forget mental life and concentrate in simple reflexes, for example: are we ok with the idea that action selection depends on this mechanism, treating signals as representations, and that there is No circularity involved?

    While I do worry about your phrasing of certain signals as ‘standing in’ for their physical causes, essentially, this is the second option in my example with the two balls: their color merely evokes some reaction, in the same way as a stone dropping on a spring-loaded mechanism might cause it to release, flinging some load. Certainly, there is no question of intentionality here: it’s merely physical causality.

    The reason that I worry about your usage of ‘standing in’ is that it already skirts perilously close to representation/intentionality/etc. The signals are simply what triggers a certain response; identical signals coming from different physical causes would trigger the same response (see brain in a vat, etc.). So the signals don’t stand for anything: they’re just like the hand that flips a switch (or the stone impacting the spring, which could equally well have been a dead bird)—the switch doesn’t care what flips it.

    IOW, we already have a system that reacts in teleological ways. You should be raising a philosophical eyebrow right now.

    I don’t agree that there is any teleology. Memories, in the way you’re describing them right now (that is, without attendant qualia, and without any intentional content), are really just further input signals, or, if you will, differentiations between states of an FSA. An FSA might react to an input in one way in state A, and in another in state B; furthermore, a past stimulus might have put it in state B; and there we have a crude picture of memory. But there’s no teleology about this, nor even a germ of representation, etc.: analyzed through, we simply have different inputs, or input histories, causing different behaviors.
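    This FSA picture of memory can be sketched concretely, using Sergio’s own habituation example (the state names and tables below are invented for illustration): ‘memory’ here is nothing but the current state, which the input history has set.

```python
# Hypothetical finite-state sketch of habituation to a repeated loud
# noise: "memory" is just the current state, set by input history.
TRANSITIONS = {  # (state, input) -> next state
    ("naive", "noise"): "habituating",
    ("habituating", "noise"): "habituated",
    ("habituated", "noise"): "habituated",
}
RESPONSES = {  # (state, input) -> output
    ("naive", "noise"): "defend",
    ("habituating", "noise"): "defend",
    ("habituated", "noise"): "ignore",
}

def run(inputs, state="naive"):
    """Feed a stimulus sequence through the automaton, collecting outputs."""
    outputs = []
    for stim in inputs:
        outputs.append(RESPONSES[(state, stim)])
        state = TRANSITIONS[(state, stim)]
    return outputs

# The same stimulus gets different responses purely because the input
# history has left the automaton in a different state.
print(run(["noise"] * 4))  # ['defend', 'defend', 'ignore', 'ignore']
```

    Different input histories cause different behaviors, and nothing more: there is no claim here that the automaton has any concept of its past.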

    Add enough storage and pattern-detection powers and you get a system which exhibits all that’s needed for introspection (in the past doing A in response to B worked well, unless C. I’ve now received stimuli B and C so I shall not do A and try something else instead).

    I think here, or somewhere down the line leading to this point, you inadvertently substitute some high-level concepts that make you think there’s more going on than what’s actually there. For instance, you slip into intentional language—suddenly, your simple mechanism has an ‘I’, a concept of itself doing things, a concept of past, and so on.

    But actually, what you have described so far supports none of these concepts. You have a system (think of an FSA for clarity) that can react to stimuli based on its input history. It has, however, no concept of its history as such—it is simply in a specific state due to having received a specific set of inputs, and the input/output rules for that state dictate a specific reaction if it meets with a specific stimulus.

    Likewise, it has no knowledge of ‘doing things’ in the world: it simply has produced some outputs, and received certain inputs, which have triggered different outputs, state changes, and so on. There also is no ‘I’: there may be some element of its internal makeup that reliably mirrors the automaton’s actions or situatedness or what have you, but that’s again only correlation—you or I, as intentional beings, can interpret that element as a ‘self-model’, but to the automaton, it is just another modifier of its reactions to certain stimuli. The interpretation of the ‘self-model’ as a self-model is a case of derived intentionality, so if you propose that the automaton can do that, you’ve stealthily outfitted it with intentional faculties beforehand.

    In other words, the system has learned to interpret the signals it receives as representations of what’s in the world

    I see absolutely no sign of that. It reacts to stimuli in just the way it was set up to react (by either a human designer in the case of a robot, or by evolution), but it has no sense of what those stimuli are about—they’re just so many stones triggering so many springs, so many hands switching so many switches.

    There is no circularity that I can detect now.

    On that, I agree; but you also haven’t explained what you set out to explain, i.e. how stimuli come to be about something, how a system can manipulate representations, and so on. Not, that is, unless you surreptitiously (and probably unintentionally) treat your own (metaphoric) talk about the system’s ‘I’, its actions and past, its memories, and signals standing in for something more seriously than you should.

    Exercise: is there any element describing a “mental life” missing from the picture above?

    Yes: everything! As far as you have described it, there is no reason at all to believe that the system has any sort of mental content at all—it’s just a set of tightly wound springs uncoiling when impacted. It’s all just boring old physical causality (which is not to say that intentionality encompasses something ‘beyond’ physical causality, but merely that you can’t get it from the latter in the way you’re trying to, at least as far as I understand your descriptions).

    Trouble is, such a p-zombie would generate thoughts such as “ah, yes, I remember that this fruit looks and smells delicious but is terribly bitter” (such a thought is the memorable shadow of the actual decision making mechanism)

    At least the way you described it, I see no reason that a zombie would have such thoughts. It might generate appropriate utterances (sound waves), or print them out on a screen, or heck, set quill to paper and wax philosophically, but so can a tape recorder, or indeed, a series of fantastically intricate coiled springs, gears, and levers.

    But I see no reason to ascribe any intentionality to the Victorian automaton, nor do I see reason to think your zombie would have any contentful thoughts at all—not any more than I see how a stone impinging on a spring, releasing it, carries meaning to the spring.

  39. 39. Charles T Wolverton says:

    John Davey @24

    SAP: if the original is gone (any foreseeable scanning process would likely be destructive), is the copy a new person?

    JD: Yes. Same as a ford car is not the same car as one it was built to duplicate.

    In order to address SAP’s question, we need some definition of “person”. New Ford autos coming off an assembly line or a reconstructed classic incorporating new parts (Ship of Theseus) seem irrelevant to thought experiments involving instantaneous destruction and reconstitution of an organism.

    I think of “personhood” as largely determined by the nature of an organism’s behavioral dispositions with respect to its environment, in particular its behavior with respect to other similar organisms. And since such dispositions are embodied in an organism’s neural structures, I’d answer SAP’s question “no”. The new organism’s behavioral dispositions presumably would be identical to those of the original organism.

    However, the case of non-destructive reproduction presents a potential problem. At first, there would be two instantiations of the same “person” (a la my definition). But then, like identical twins at birth, each instantiation would go its own way, having new experiences, etc., developing new dispositions and thereby becoming a distinguishable person. We don’t have a problem with identical twins, so at least conceptually, why with clones?

    JD: My guess is that most lawyers would say no, twins are not responsible, as the duplicate can’t be blamed for the past of the original.

    IMO, the concept of “blame”, which suggests moral responsibility, is a problem in the legal arena. If one believes, as I do, that the legal system should only address practical accountability, then the clone should be dealt with exactly as the original, since they will have identical behavioral dispositions at the moment of cloning. The legal objective should be only to protect society from future intolerable behavior, in which case both may be potential threats requiring either therapy or removal from society. Compare quarantining a person with a communicable disease despite the person’s being “innocent”, a moral concept irrelevant to the objective of protecting the public.

    BTW, this issue is addressed in Davidson’s “Swampman” thought experiment (see wikipedia entry with that title). He claims that were he decomposed by a lightning strike but immediately reconstituted molecule-for-molecule as an identical organism, the new “swamp-Davidson” would be “psychologically different” from the original Davidson. I’ve never been able to ascertain what he means by “psychologically different”. He tries to explain that using examples, but I don’t find them convincing.

  40. 40. john davey says:

    Charles


    I think of “personhood” as largely determined by the nature of an organism’s behavioral dispositions

    Good for you ! I beg to differ, I view it solely as a biological issue.


    We don’t have a problem with identical twins, so at least conceptually, why with clones?

    I agree entirely.


    If one believes, as I do, that the legal system should only address practical accountability, then the clone should be dealt with exactly as the original since they will have identical behavioral dispositions at the moment of cloning.

    But the copy hasn’t done anything wrong. Talk about piling the sins of the father on the child ! Maybe you need to look at that definition of “personhood” again.


    The legal objective should be only to protect society from future intolerable behavior in which case both may be potential threats requiring either therapy or removal from society.

    So – the copy – who’s done nothing wrong – should go to jail on the basis that he might do something wrong. You’re in the wrong century, Charles, eugenics is so 1930s !

    J

  41. 41. SelfAwarePatterns says:

    Jochen,
    I should clarify that while I do think we have epistemic limits in all directions, I also think there are questions with no clear objective answer. Examples: What is the best color? What’s more important, security or freedom? Do strawberries or blueberries taste better? In each case, there may be an objective answer for any one person, but when asked in general, these are value questions without objective grounding. (I’d have some sympathy with the argument that the general questions are in fact so general that they’re incomplete, but that doesn’t seem to prevent people from wrestling with similarly scoped questions.)

    “you’ve already imbued that system with the ability of interpreting symbols and using representations; thus, the system can’t be of any help explaining how we interpret symbols and use representations.”
    I can’t say that I see the difficulty here. Babies do come out of the womb with instincts, emotions, and abilities (including a nascent ability to model the world) for evolutionary reasons. It doesn’t really strike me as any more mysterious than the base programming and factory defaults that come with computational devices.

    On your adder example, if it’s part of a robot that, say, uses it to determine how many carts it can stack in the space available, doesn’t that give the adder’s function an objective existence? You might argue that the robot designers were the ones doing the interpretation, which I’d agree with, but then why wouldn’t we say that the neurons making up a biological mind are effectively evolution’s interpretation of what those neurons are doing?

  42. 42. Jochen says:

    SelfAwarePatterns:

    I can’t say that I see the difficulty here.

    You’re trying to explain how symbols acquire a certain interpretation, and you’re relying on symbols having a certain interpretation to do so. I don’t know how to make this more clear.

    It doesn’t really strike me as any more mysterious than the base programming and factory defaults that come with computational devices.

    That base programming is only a certain interpretation of the physical states of the computing system; the baby’s faculties are not dependent on any interpretation.

    On your adder example, if it’s part of a robot that, say, uses it to determine how many carts it can stack in the space available, doesn’t that give the adder’s function an objective existence?

    Sure, you, as an intentional agent, can reasonably interpret things that way. But the robot only manipulates voltage inputs into voltage outputs, say, to its actuators.

  43. 43. Charles T Wolverton says:

    John:

    I view [personhood] solely as a biological issue.

    Then for you, the whole thought experiment raises no issues. Perhaps I misinterpreted your statement “‘person’ is a big enough concept to cope with material recycling.” What exactly makes it a “big enough concept to cope with” anything other than biological differences? Some philosophers argue that it’s a matter of interpersonal relationships, and I was just following their lead. But as one who has often argued in this forum for dropping the psychological vocabulary once the discussion gets down to the biological level, I’m happy to purge the whole concept of “personhood”. Agreed?

    As for the legal issues, we’re considering the possible ramifications of a thought experiment after all. Although even in real life I do think that “moral” is a word best purged from the legal vernacular, the rest is just pursuing the logical consequences of physicalism (where we’re allies, not foes). But even in real life, there are contemporary theories about the proper objective in dealing with legal offenders other than retributive justice, some taking into account the possibility of biological determinism. Perhaps you’re the one who is a little out of date on issues in constitutional law.

    If you insist on heaping ridicule, maybe you should start with those who introduce thought experiments into a discussion. I’m not a fan of them but am occasionally willing to play the game. And it is just a game – lighten up!

  44. 44. Charles T Wolverton says:

    Jochen –

    I’ve noticed the [Golonka & Wilson] paper (via your blog), and still haven’t given it a complete read-through; but … just because something can be used as a representation doesn’t make it a representation.

    You apparently missed it, but in a previous comment in another thread I recommended the “Two Psychologists” blog as one you might find interesting. As I understand it (not necessarily correctly), Gibson’s “ecological” approach (the basis of their work) isn’t representational. As best I recall, the paper’s point about “representations” was only that, if one insists, they can be “found” embedded in the ecological approach, notwithstanding that they play no role as such in it.

    My understanding is that your emphasis is on how representations (in the form of symbols, words, thoughts, etc.) can be intrinsically “about” something in the absence of an interpretation, which, being a matter of interpersonal agreement, can only produce “aboutness” that is “derivative”. But as I understand the ecological approach, it doesn’t involve being “about” something in the environment in the sense of intentionality. It argues that there is structure in the energy array impinging on a subject organism, that this structure is perturbed “lawfully” by movements in the environment (including those of the subject), and that “appropriate” reactions to such perturbations can be learned without interpersonal instruction. Although they call the variations in such time-varying structures “information” (an unfortunate term IMO), it is never interpreted as referring to an object. It is only used in determining the subject’s actions relative to the environment as a whole. Hence, there is neither intrinsic nor derived intentionality, because a resulting action is no more “about” a cognizable object in the environment than scratching a mosquito bite is “about” a mosquito. Both can be merely learned reactions to variations in sensory stimulation. That the former may result in an action directed at a particular object (say, a cup) doesn’t require that the subject “recognize” it as a “cup”, only that it experiences the environment as including a “graspable” surface (AKA, an “affordance”).

    Of course, one can stretch the concept of intentionality to include the grasping to be “about” the cup and the scratching to be “about” the mosquito or the itch, but it isn’t clear to me what would be gained in doing so.

    In any event, since I have no expertise in the field, take this only as encouragement to give them a reading because it might be very rewarding for you.

  45. 45. Jochen says:

    Charles:

    You apparently missed it, but in a previous comment in another thread I recommended the “Two Psychologists” blog as one you might find interesting.

    I hadn’t missed it (presuming you’re alluding to the discussion in the Flatlanders thread), however, my response probably got buried in an unfortunate quoting mishap (not that the response was an especially illuminating one).

    As I understand it (not necessarily correctly), Gibson’s “ecological” approach (the basis of their work) isn’t representational. As best I recall, the paper’s point about “representations” was only that if one insists, they can be “found” embedded in the ecological approach notwithstanding that they play no role as such in it.

    Well, the paper is called ‘Ecological Representations’, and the starting point is described as one of them realizing ‘Holy crap. I think ecological information is a representation.’, so I think, at least as far as that paper is concerned, it’s fair to say that it’s essentially concerned with representation (otherwise, they should really consider a new title…).

    Of course, I can’t speak to Gibson’s larger approach as such.

    As for my emphasis on representations, well, I’m not really a representationalist as such; but in the computational approach (which is what I’ve been mainly criticizing in this thread), some account of how symbols get their meaning is necessary, it seems to me.

    Both can be merely learned reactions to variations in sensory stimulation.

    I don’t really see much I find controversial in what you’ve written there, unless you want to argue that that’s all that goes on in the mind, i.e. advocating a kind of eliminativism. I’m not saying that this would be a priori false, but I’d need a convincing argument simply to account for the data—namely that we appear to possess mental content, that our thoughts appear to be about something, and so on. (Of course, I believe that this is pretty much equivalent to the problem of explaining how our thoughts can actually be about something, but eliminativists seem to think something is gained by denying this.)

    So, bottom line, I’m neither wedded to the idea of representations, nor even to the reality of intentionality; I’ve poked at the interpretation of symbols because that seems to be intrinsic to computationalist approaches, and representations seem to be the target of the Golonka/Wilson paper. But ultimately, if you can make a convincing argument to do without either of these, I’m all ears!

  46. 46. David Duffy says:

    S.A.P “mind and homeostasis”: yes, and try
    http://open-mind.net/papers/the-neural-organ-explains-the-mind
    It’s delicious…

  47. 47. David Duffy says:

    Which I should hasten to add, was discussed here last year.

  48. 48. SelfAwarePatterns says:

    Thanks David!

  49. 49. Charles T Wolverton says:

    if you can make a convincing argument to do without either of these, I’m all ears!

    Needless to say, I can’t. But as I understand the ecological approach, it does. I’m not suggesting the paper as a starting point, but rather noting that some years ago they did an online mini-course on Gibson that would be a relatively painless way to get the gist of the approach. It starts here:

    http://psychsciencenotes.blogspot.com/2010/04/gibson-1979-chapter-1.html

  50. 50. Jochen says:

    Thanks Charles, I’ll have a look!

  51. 51. Richard J.R.Miles says:

    For me, it is refreshing to see that the body is being considered here as part of the package of consciousness, and that the brain is not the be-all and end-all of consciousness, as computer-obsessed people would prefer. Life exists without brains, and brains evolved for bodies that move around for the 4 F’s. You can read more of my thoughts at http://www.perhapspeace.co.uk

  52. 52. John Davey says:

    Charles

    “Some philosophers argue that it’s a matter of interpersonal relationships”

    So a sole person living on a desert island isn’t a person ?! Don’t think that works.

    “though even in real life I do think that “moral” is a word best purged from the legal vernacular”

    Why? If the law is meant to be a purposive intervention in the flow of events, then it has to have morals. If you’re a determinist then there’s no point in even worrying about it, as there is no scope for morals, but the law will do what it does in any case by virtue of the rule-based physical development of the universe. Either way – no reason to get rid of them (a determinist can’t “get rid” of anything, in fact).

    “Perhaps you’re the one who is a little out of date on issues in constitutional law.”

    Constitutional law of where? Not so sure that has anything to do with these issues.

    “And it is just a game – lighten up!”

    I’m always light .. !

    J

  53. 53. Charles T Wolverton says:

    John:

    Re determinism, you’re essentially offering the incoherent argument that “a determinist should just sit back and wait to see what happens” – ie, choose a certain course of action (or in this case, inaction). Sorry, I can’t do that since I’m programmed to offer my opinions and hope they have an impact – just as you apparently are programmed to refute them. One of those opinions is that certain discourse would be more productive were a certain vocabulary purged from it. Nothing inconsistent in that as best I can tell.

    Re constitutional law, it’s loosely related via the SCOTUS decision in Buck v Bell (see the wikipedia entry), in which eugenics arguably played a role and Holmes made his (in)famous comment “Three generations of imbeciles are enough”. As far as I know, the decision had nothing to do with guilt, blame, etc. My point was that today, at least in academic constitutional law circles, genetics and related considerations are playing an increasing role in such matters. But that has nothing to do with eugenics and is a perfectly reputable development.

    In any event, the situation of a person being punished for having “bad” genes is irrelevant to the thought experiment. The posited situation is presumably the cloning of an individual who immediately before the cloning has been convicted of a crime. Immediately after the cloning the behavioral dispositions of both instantiations will be identical. If the objective is not retribution but rather to take actions intended to minimize the risk to society, clearly they should be treated equally.

    Again, keep in mind that it’s a currently unrealistic thought experiment, so I don’t see either morality or even practical considerations as relevant. But I think logical consistency always is.

  54. 54. john davey says:

    Charles


    ” Sorry, I can’t do that since I’m programmed to offer my opinions and hope they have an impact”

    So are you offering the opinion that your opinion is pointless? You are advocating that your advocacy is futile?

    If you’re a determinist it follows that there are no opinions, no behaviours and no choices. Have to say, I don’t believe you’re being entirely genuine when you say that you don’t have an opinion on anything. I think even the opinion that you have no opinion is an opinion.

    Determinism flows from physics: you can’t do anything other than that which you do, because your physical construction follows the laws of physics. There are no choices, no ‘behaviour’, just a flow of material.
    There is an erroneous line which I think Daniel Dennett has fallen into with his “implausible compatibilism” – which talks about “programmed” behaviour. I don’t know if you subscribe to it.

    To say that you do what you do because you are “programmed” is actually not determinist – because you get to choose when the “programmed choice” is made, giving you free will over the “start point” of your allegedly predetermined behaviour.

    A true determinist shouldn’t even recognise the very notion of choice, as that would involve being selective about a (non-existent) single causal starting point in time – something completely incoherent to a physics-based analysis of the universe.


    “If the objective is not retribution but rather to take actions intended to minimize the risk to society, clearly they should be treated equally.”

    You’re a determinist. There are no risks. Whoever gets killed will get killed regardless. Or is this just another pre-programmed opinion? This feels circular.

    J

  55. Sergio Graziosi says:

    Jochen (38)

    During my first read I was thinking “oh, here we go again, I’ll never get it”. On second and third readings I started thinking that maybe some glimpse of a way out from this debate is starting to appear. Just two glimpses, maybe mirages, so don’t hold your breath.
    I’ll mention what I’m mumbling about, but I do know it will be just a rushed, incomplete attempt. [On re-reading it, I also found it a bit harsh! More on this below.]

    I think you either disagree too much or too little: either the p-zombie can be built, and can learn not to do A when B but also C, or it can’t. If you agree that it can, and does so by storing some information about past choices (that is, past choices and their outcomes can alter the internal structure of the system itself), then you have conceded that the zombie “would have such thoughts” because well, my second example regarding fruits was bait: it is precisely an instance of “not choosing A, even if B, because C”.
    Well, some additional constraints do apply, and they are important. First of all, it’s certainly easy to design a mechanism which can learn the above, but only if at design time it is entirely clear what A, B and C are. Therefore, one relevant question is whether one can design such a thing without making many assumptions about what A, B and C will look like. I do note that our failure to produce general AI does tell us that doing so is difficult! Anyway, this generality is important, because it requires our imaginary zombie to store “general purpose” memories. I do think we can use relatively simple and well understood mechanisms (or, ahem, algorithms) to achieve this, but that’s another story and one that is the weakest point of my argument, I think.
    Nevertheless, sticking to the current script, we also need the action selection processes to leave their own trace, and the trace will need to have the potential to contain some contextual info, something like “at time 1, action A was chosen, in presence of N,M,O, expected outcome followed”, and then “at time 1+n, action A was chosen, in presence of C,N,O, expected outcome did NOT follow”, etcetera. These traces will then need to be recollected and used to influence further decisions (the structural changes which embody what I’m here calling traces/memories, need to influence subsequent action selections).
    If you let all that pass, and agree that we can at the very least design software to follow this logic, then I’m afraid you have already conceded that the zombie will produce “such thoughts”. All I’m doing is pointing out that the recollectable traces can be called “thoughts”.
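    The mechanism sketched above can be put in a few lines of code. This is purely my own illustration (the names `TraceLearner`, `record` and `choose` are invented for the example, not anyone’s actual proposal): store traces of the form “action A was chosen in the presence of these features, and the expected outcome did (or did not) follow”, then let those traces veto future choices.

```python
# Toy sketch of the trace-storing "zombie": record (action, context, outcome)
# traces and use them to learn "not A, even if B, because C".
from dataclasses import dataclass


@dataclass
class Trace:
    action: str
    context: frozenset  # features present when the action was chosen
    success: bool       # did the expected outcome follow?


class TraceLearner:
    def __init__(self):
        self.traces = []

    def record(self, action, context, success):
        # "at time t, action A was chosen, in presence of ..., outcome followed (or not)"
        self.traces.append(Trace(action, frozenset(context), success))

    def choose(self, action, context):
        # Features that appeared only when the action failed are treated as
        # the suspected "C" in "not A, even if B, because C".
        succ = [t.context for t in self.traces if t.action == action and t.success]
        fail = [t.context for t in self.traces if t.action == action and not t.success]
        blockers = set().union(*fail) - set().union(*succ)
        return not (blockers & set(context))


z = TraceLearner()
z.record("A", {"N", "M", "O"}, success=True)   # time 1: expected outcome followed
z.record("A", {"C", "N", "O"}, success=False)  # time 1+n: it did NOT follow
z.choose("A", {"B", "C"})  # False: C is present, so "not A, even if B, because C"
z.choose("A", {"B", "N"})  # True: N alone never predicted failure
```

    The point of the sketch is only that the stored traces both influence later action selection and remain recollectable, which is all that is being claimed for calling them “thoughts”.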

    That was “glimpse #1”: it seems you may be sitting halfway between two worldviews. Much of what you write “sounds like” (I know it isn’t!) uncompromising dualism (it comes across that way because you deploy the same countermoves used to push against materialism/naturalism/physicalism). But of course you also want to refute dualism. My approach is to try to refute some of the pro-dualism arguments, because I do think they are wrong. You seem to be more inclined to accept them without compromises (because you think you can work around them, is my guess)…

    Second glimpse: in #25 you say

    ecological information may correlate with what we wish to represent reliably, but correlation is not representation

    I think this points to a promising disagreement! You are right that I imbue descriptions of mechanisms with intentional language, I do so more or less deliberately, because my whole position is that only correlation can (and does) sustain intentionality. I am not “stealthily [outfitting the imaginary p-zombie] with intentional faculties beforehand” I’m trying to show why correlations between collected stimuli and what happens in the environment are:
    1. the only data available to the (imagined) p-zombie to start building any “understanding” at all.
    2. the only thing that is needed to develop what we call intentional content.
    [IOW: I am trying to show that the potential for intentionality has been there all along.]

    That’s where the G&W paper comes in: among other things, in my reading, they are trying to do the same.

    However, if you start with the assumption that mere correlation can’t possibly sustain intentionality, because our thoughts are *really* about something, and knowledge is, of course “true knowledge” (by definition!), and We, “fully intentional agents” can interpret symbols, but mechanisms are “just” mechanisms, so they can’t (smuggling the impossibility of having intentionality instantiated via mechanisms), then sure, all your rebuttals do follow.

    However, 1. is manifestly the case. If you disagree, please go see a movie. You’ll see people fluidly moving on screen, and will hear words coming out of their mouths. But of course, you also know that none of the above actually happened. In the real world, still images were projected, and sound came out of loudspeakers. Still, you experienced something different: why? Because you can only rely on correlations. Another exercise: try if you may to draw the link between the above and the problem of induction (I’m deploying this strategy because otherwise you’ll tell me that I’m trying to bite off more than I can chew 😉 ).
    Point 2 is the argument I’m trying to build (still ineffectively, manifestly so). However, I will never succeed if we understand intentionality as something that does not and cannot exist… Intentional content is only sorta about something, it is about something “reliably enough”. That’s because it’s grounded on correlations, and, incidentally, it is also the reason why

    facts are essentially how the world is, they’re what we can have (imperfect) knowledge of, and what we can (hope to) approximate

    Specifically, my point 1. above accounts for why we can only have imperfect knowledge of (material) facts and why we can only hope to approximate reality.
    This brings us back to your position being curiously ambiguous: if you espouse the weak epistemology you seem to advocate, why do you insist on setting the explanatory bar solidly out of reach? Beats me…

    Another way of looking at all this is that I’m not trying to eliminate intentionality, but I am playing the “illusion” card: I do say that intentionality (and the whole problematic bunch) are not what they seem to be. That’s what I see as weak illusionism: I’m very emphatically saying that all members of the problematic bunch (intentionality, meaning, qualia, etc) do exist, but are not what we naturally tend to think they are. Compare with the first part of this message: a reliable enough correlation is enough to consider nervous signals generated by sensory stimuli somewhat/sorta about the outside world. An ad-hoc trace of previous decisions is now classified as “a thought” – these are “descriptions” of intentionality and (some part of) mental life, but they don’t look like our naive assumptions. If you wish, see how this picture handles the problem posed by Libet. It works, but only if you give up the idea that a feeling of burning on my skin has to be about something hot burning it. It could be acid (similar damage) or the active ingredient of chilli, which activates the “burn” receptors without causing any damage at all, or it may be caused by an odd neurological disorder, an infection, and whatnot. The downstream system however can’t avoid interpreting signals as being about something (see my previous “provocation”), and rightly so, because in fact they are caused by something.

    Finally, there is Friston’s view. Consider a system which is able (within given limits) to preserve its homoeostasis by reacting to changes in the external environment. Necessarily the changes of its internal states will reflect the external changes. Thus, if you see “Information” as I do, e.g. a structural difference which makes a difference (in virtue of its structure), the system will collect and store information about(!!) the environment. This is tautological, it can only be the case. True enough, it holds if the outside changes remain within acceptable boundaries.
    You then look at naturally occurring systems of this kind, and find that they are (predominantly) systems which have survived the process of natural selection.
    Such systems therefore are certainly able to respond appropriately to naturally occurring, ecologically relevant, changes in their environment. We’re still in tautological terrain: their inner structures are able to change according to changes in their natural environment.
    Assuming you can follow, it means their structural changes strongly correlate with environment changes, and if they do, it also means that they contain/store information about the environment. In other words, what’s needed to start using these structural changes in a way which implies treating them as standing for what happened outside is already there. All you need is other mechanisms for storage and reuse. This is what I think Friston means with (emphasis added)”[their] internal states (and their [Markov] blanket) […] will appear to model — and act on — their world to preserve their functional and structural integrity”.
    Thus, if you do know the relevant characteristics of the environment where our system belongs, and want to interpret its structural changes in informational terms, you can use the environment to fix the interpretation, and say “aha, that phosphorylation (sorta) means glucose is present outside” (which, incidentally, is what molecular biologists do all the time!). In other words, the interpretation isn’t arbitrary anymore, and if it isn’t for you, a detached observer, it’s even less arbitrary for the system itself. In fact, the system does necessarily respond appropriately to most external stimuli (because those who didn’t got left behind by natural selection), so the system is already sorta interpreting the stimuli in intentional ways. To reach full/recognisable intentionality you then do need to add some sort of memory and learning. That’s the bit I’m adding, and it does require some sort of recursion, which strikes me as similar to what you propose in the M&M paper. Furthermore, some sort of recursion is also what can be considered as a reference to self, justifying my use of “I” and the equivalence I’m proposing between mechanisms influenced by stored information and “thoughts”.
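    The tautological core of this argument can be made concrete in a few lines. This is my own toy construction (not Friston’s actual formalism, and the `Cell`/`phosphorylated` names are illustrative): a system whose only job is to react homeostatically to its surroundings necessarily ends up with internal states that co-vary with, and so carry Shannon information about, the environment.

```python
# Illustrative sketch: a homeostatic reactor's internal state ends up
# perfectly co-varying with the environmental variable it reacts to.
import random

random.seed(0)


class Cell:
    def __init__(self):
        self.phosphorylated = False  # internal state

    def react(self, glucose_outside):
        # Homeostatic rule: flip the internal marker to track external glucose.
        self.phosphorylated = glucose_outside


cell = Cell()
history = []
for _ in range(100):
    glucose = random.random() < 0.5  # environment fluctuates
    cell.react(glucose)
    history.append((glucose, cell.phosphorylated))

# Perfect co-variation: knowing the environment, an observer can read the
# internal marker as (sorta) meaning "glucose is present outside".
agreement = sum(g == p for g, p in history) / len(history)
# agreement == 1.0
```

    Of course this only delivers the co-variation itself; whether that already counts as information “about” the environment is exactly the point Jochen disputes below.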

    All of this avoids vicious circularity, is grounded on necessarily true premises, fixes how to interpret the “information” stored within naturally occurring self-regulating systems, and describes how teleological behaviour naturally emerges (within ecological constraints). The picture offered by Friston thus follows and confirms precisely what I’ve been proposing (compare with the first paragraphs under Glimpse #2 and see for yourself).

    Thus, I am not ready to dismiss the idea because it’s a BFD and/or is underdeveloped. It certainly is a BFD (perhaps the biggest deal on offer in metaphysics), but no, it’s not vague or conveniently opaque. I do think it’s underdeveloped in the sense that it is not published in relevant peer-reviewed journals, and probably not ready for that – which is one reason why we’re discussing these things here… However, the core of what I’m proposing has been independently developed at least three times (Friston, G&W and, worryingly, myself) and in all (apparently very different) three cases it keeps holding water. It does however require giving up wishful ideas of what intentionality really is, and no, this vision cannot be contemplated if one refuses to do so a priori.
    So, overall, for my purpose here (understanding why I can’t get the idea recognised as “worth exploring” and concurrently verify if I can’t because I fail to understand the objections I receive) maybe we can agree on a possible conclusion. I can’t get past the first hurdle because the objections on offer imply incompatible assumptions, namely that intentionality means that something really does stand-in for something else, and not that it “sorta” stands-in (i.e. that “correlation” isn’t a strong enough constraint).
    This is where glimpse #2 leads me. If you start by assuming that “correlation is not representation” and refuse to even contemplate the hypothesis that “(reliable enough) correlation enables representation”, then we have a non-starter. We can’t even begin discussing, we can only talk past each other. The same applies if you approach the problem from the top down: if you try to understand representations by starting with idealised, fully intentional interpreters, you get nowhere, because no such things exist. The available intentional systems are heuristically intentional: they can only correctly interpret symbols within ecological constraints (formed and fixed by their evolutionary history).
    Once again, the accepted premise (full, perfectly rational, objective interpreter) refers to something which doesn’t exist, cannot exist, and is not what we wish to explain.
    It is all very similar to our discussion on epiphenomenalism: if you accept it as a premise, you get in a dead end and will never be able to explain phenomenal experience.

    Overall, the above sounds to me more like a rant than anything else. My apologies! I’ve left my frustration visible because, well, doing otherwise would be incoherent. The direction I’m exploring leads to concluding that perfect rationality, “true knowledge”, “objective observation” and third party descriptions are all idealised abstractions that cannot actually exist. Thus, trying to hide my own biases and drives would be deceitful: I would be pretending to accept the premises that I am actually challenging. That’s to say that yes, I am frustrated, but no, I am not directing my disappointment at anyone here. Peter and Jochen in particular have both been ridiculously generous with me. My frustration is internally generated; I’m allowing it to transpire here because this place allows me to, which is a good thing!

  56. Jochen says:

    Sergio:

    Necessarily the changes of its internal states will reflect the external changes. Thus, if you see “Information” as I do, e.g. a structural difference which makes a difference (in virtue of its structure), the system will collect and store information about(!!) the environment. This is tautological, it can only be the case.

    I’ll focus on this bit for now; while I have some remarks on the rest of your post, I’d like to postpone the discussion, because I think this bit here goes right to the heart of our problems.

    What you’re doing here is exactly the conflation between semantic information and syntactic information I tried to caution against earlier. True, there is a pattern of differences that reliably co-varies with the differences in the environment; but what this gets us is merely Shannon information, merely the syntactic level—it says nothing at all about what the information is about.

    To see this, consider the process of newspaper printing. In typesetting, a pattern of differences is set up, that transforms into the precise form of the newspaper—that is, the pattern of differences embodied by the newspaper reliably co-varies with the differences in the type-setting process. But that doesn’t mean that the newspaper’s information is about how to make a newspaper!

    Rather, it gives us information about, say, the state of the UK post brexit referendum, or the latest Trump idiocy, or what have you. Nothing at all to do with the process it correlates best with!

    OK, so now you might object that the pattern of differences in the type-setting process is itself correlated (at least in some indirect way) with the news the newspaper transmits. And that’s true enough. But then, where does the buck stop? The current events related in the paper are themselves consequences of, and thus, correlated with, earlier events—the post-brexit state of the UK is a direct consequence of the brexit vote outcome, which itself has again certain antecedents, and so on. Ultimately, everything is due to the boundary conditions of the universe, and (perhaps) random chance. So why isn’t every newspaper article about the big bang? How does the chain of correlations stop at some point, to pick out that concrete element of the causal chain leading to the particular pattern of letters that we associate with its meaning?

    Moreover, the correlation-as-representation view has other problems. I could use some A that’s correlated with B and C as a symbol for B, while you could use it as a symbol for C. So obviously, mere correlation cannot suffice to fix meanings. If it did, then every representation would carry its meaning right on its sleeve, and every observer would agree on this meaning; but that’s manifestly not the case.

    Indeed, a newspaper in English could mean something totally different to some observer fluent in a hypothetical language that isn’t English, but in which every valid English utterance likewise is a valid utterance. Of course, that’s a highly constructed case, but things like that do happen: I may have related this before, but when I was in Poland, and went to buy bread, the woman behind the counter asked: ‘Jeden Tag?’, which, in German, means ‘Every day?’. Taking her to ask whether I ate bread every day, I nodded my head; however, in Polish, ‘jeden’ means ‘one’, and ‘tak’ means ‘yes’—so, she had in fact asked me whether I wanted one loaf of bread, a question to which my nod was as appropriate as to the question I took her to ask.

    So there we have a situation in which all correlations are equal, and yet, there are two valid meanings that can be associated to the exchange: correlation doesn’t fix meaning.

    But then, what does? Well, you have to ask yourself how you use a certain representation to actually represent. Say, the word ‘dog’: it means “dog”, i.e. that particular four-legged barking animal you associate with the word. How does it come to mean that for you? Well, simply because the association “‘dog’ means dog” is available to you, is part of your knowledge; but to use that knowledge, it must, in fact, be intentionally available to you. This is derived intentionality, and it is a sufficient condition to use a representation to represent something.

    So, that some A merely correlates with B does not suffice for A to represent, or mean, B; it must be used as a representation of B, as well, and such use depends on intentionality. In order for A to mean B, you need to know that ‘A means B’, and such knowledge is intentional. By mere correlation, you can’t pick out whether A means B, or in fact, means C, which is a necessary and sufficient cause of B; and a different observer can use A to mean C just as well. Mere correlation also doesn’t suffice to pick out any particular element of the set of all correlata of A, up to and including the big bang.
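    Jochen’s underdetermination point can be shown in miniature (variable names here are mine, purely for illustration): when C is a necessary and sufficient cause of B, a signal A that perfectly tracks B also perfectly tracks C, so nothing in the correlational facts alone singles out a referent.

```python
# Toy version of the objection: A is perfectly correlated with both B and
# B's cause C, so correlation alone cannot fix which one A "means".
samples = []
for c in [0, 1, 0, 1, 1, 0]:
    b = c  # C is a necessary and sufficient cause of B
    a = b  # A reliably co-varies with B (and hence with C)
    samples.append((a, b, c))

corr_with_b = all(a == b for a, b, _ in samples)  # True
corr_with_c = all(a == c for a, _, c in samples)  # True

# Two observers can use the very same signal for different referents:
observer_1 = {1: "B present", 0: "B absent"}
observer_2 = {1: "C present", 0: "C absent"}
```

    Both correlations are perfect, yet the two observers’ readings differ, which is the sense in which the mapping from signal to referent has to be supplied by something beyond the correlation itself.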

    So no, internal changes reflecting external changes do not constitute information about the environment. You can use them to represent the environment; I can use them to represent something else. Somebody not in possession of the requisite connection can’t use them to represent anything at all. Hence, all of the mystery of representation is in this use of something as something else; and the only narrative we have explaining this use is one depending on intentionality, on the intentional mental content that ‘A means B’. Thus, without finding an alternative explanation of this use (which, of course, I believe to be possible, and have tried to propose), any account will be built on sand, and likely involve some hidden circularity, or an attribution of intentionality where none is warranted.

  57. Sergio Graziosi says:

    Nice one Jochen!
    Yes, I do think we’re approaching the heart of the disagreement.

    I would follow you if you could deploy counter-examples that do not depend on a fully formed intentional interpreter which is external to the entity where we are hoping to see intentionality arise. The bit I’m failing to make clear, and was intended to be implied by my ineffective provocation, is that for the system where intentionality arises, “how [it] use[s] a certain representation” is fixed by the structures that make up the system in question. You can experience this directly in a gazillion ways: despite our spectacular behavioural flexibility, we cannot help interpreting certain stimuli in certain ways; the way we use them is fixed by the mechanisms that we embody.
    I’ve made the example of movies already. If the movie theatre isn’t ill-designed, you’ll perceive the words emitted by the sound system as coming from the mouths of the actors, and there is absolutely nothing you can do about it if you remain within the parameters of intended “usage” of the theatre itself. You can break the spell, but only by approaching the loudspeakers so as to break the correlation that your brain latches onto (i.e. make the images appear on one side, and the sound come from the other).
    Other examples include:
    you see the following symbols

    ^_^

    They may mean nothing to you, but if I point out they are an emoticon representing someone satisfied and squinting, bang, the picture changes. Most people will be locked-in and won’t be able to unsee the face. In this case, the interpretation becomes fixed. Now, if I tell you that with GRDPESFDC I mean “dog”, that’s arbitrary, you may or may not be able to follow my writing if I start deploying such conventions. There is a difference here: certain stimuli, even if entirely symbolic (think of stylised animations) are inevitably interpreted by us in certain ways, others not at all, and others (the emoticon above) are somewhere in between.
    Why? Because certain stimuli are ecologically relevant and stable enough: how we interpret them is fixed because the correlation between the ecologically useful interpretation (how they are used, what they make the system do) and the shape (structural properties) of receptor activation has been reliable for enough of our evolutionary history to become hardcoded. If you see the picture of an angry face, you are looking at a representation, but you can’t interpret it as happy, you can decide to act as if it was happy, but you can’t change your perception.

    This is why, by approaching the problem from the top down (starting with a fully intentional agent, able to decide how to interpret arbitrary symbols) or, with G&W, by assuming that we perceive extensional properties (read their paper!) instead of intensional ones, you cannot solve the problem. You get stuck, either with an infinite regress of fully intentional agents or with totally arbitrary interpretations (anything may mean anything). We agree on this, I hope that much is clear.

    But still, I do not agree that I’m conflating semantic and syntactic information, I am proposing how to move from the latter to the former. Once more, if you accept Searle’s position and start with the assumption that you can’t move from syntax to meaning, then once again, we’re stuck. But just like your numerous examples, the Chinese room deploys a fully intentional agent, entirely free to “understand” the squiggles in whichever way he wants, they are arbitrary to Searle, they don’t necessarily trigger any inevitable reaction inside him. Thus, the experiment fails to address the mechanism it wants to investigate, because it starts from a point where complete arbitrariness is already possible (and assumed).

    But as I show above, even ourselves, the presumed fully-intentional, free-to-interpret-symbols-in-whichever-way-we-choose agents, are not, in fact, fully intentional in the sense of such complete freedom. The sound of spoken German sounds angry to my ears, Spanish sounds hectic and French snotty. These are again examples of the somewhere in between territory, as they are all languages I’m not able to interpret fluently. As such, the automatic interpreters rule, while for English and Italian, the learned interpreters have the upper hand (utterances may or may not sound angry, hectic or snotty, depending on much more sophisticated interpretations). I mention this because by learning a language well enough, the interpretations change, but they remain inevitably automatic. If I detect anger, I detect anger, I cannot help it. This halfway-through example is useful to point out the importance of past experience to fix certain interpretations: my understanding of English is learned, but still not arbitrary.

    Thus, the top-down approach based on an idealised interpreter fails, because it seeks to explain something which doesn’t exist: a fully intentional agent who is able to choose how to interpret incoming stimuli. There is no such thing; proper, real-world intentionality is located between the entirely fixed and the somewhat learned, moving a little bit towards the entirely arbitrary end of the spectrum, but only a tiny, almost insignificant, bit. Specifically, learned interpretations are still necessary, they are merely built by integrating collected stimuli across the time domain. But since we live in the moment, and can’t reliably relive past experiences, we tend to mistake learned interpretations for arbitrary ones: it seems that “Dog” could mean Cat, but it can’t. The meaning of “Dog” (to English speakers) was either fixed from the big bang (in a fully deterministic universe) or the result of historic accidents, but in both cases, it is necessary, the opposite of arbitrary (by the way, isn’t it interesting that the very notion of “arbitrary” seems to rely on the kind of libertarian free will which we both find incoherent?).
    On the other hand, if we build a fully arbitrary interpreter, it won’t show any teleonomic/teleological behaviours, and would lose its structural integrity very soon, unless it is actively being protected in some way. If it is protected, however, it may eventually hit (by chance) some appropriate interpretations – that is, interpretations which for one reason or another have self-sustaining qualities. Thus, if we allow it to exist for enough time (by chance or by protecting/repairing it) sooner or later, by means of natural selection alone, such a system will stabilise onto necessary interpretations and will start to use the symbols generated by its sensory system.

    When we are born, we display only teleonomic behaviours. Those organisms which are unable to learn will remain fixed into teleonomy. Other organisms, able to learn, i.e. to detect and react to patterns arising across the domain of time, will gradually move towards full teleological behaviours. However, this is important(!!), they never get there, because after all, we cannot choose what we wish for. Teleological behaviour implies mechanically reacting to patterns present over the domain of time, it is merely teleonomy exhibited on an additional dimension (one over which we can exert no control whatsoever).

    I’ll repeat myself one more time: you cannot follow me if you embrace the wrong (self aggrandising) premises. If you assume there is an insurmountable ontological barrier between (pick your favourite):
    Body and mind,
    Teleonomic and Teleological behaviours,
    Correlation and aboutness,
    Syntax and semantics,
    Mechanism and inner feeling,
    etc.
    You are assuming that it’s impossible to move from the left to the right hand side without some magic ingredient. In such cases, all arguments which don’t rely on magic will always fail to convince. Interesting how hypercomputation, which by definition we can’t understand, fulfils the role of the magic ingredient. I say so as a compliment: you’ve found a genuinely interesting way out of the self-imposed deadlock…
    Perhaps the examples I’m proposing can help us spot the fact that we can find in-between cases, which should suggest that the distinctions above aren’t binary, and that therefore there is no ontological barrier between the extremes.
    Perhaps, but for some reason, I doubt it!
    Hope you are enjoying this exchange as much as I am.

  58. Charles T Wolverton says:

    John:

    The problem for a determinist who isn’t also a neuroscientist is that (as far as I’m aware) there is no generally accepted vocabulary that lies between the psychological and the neurological with which to state his or her position. I try to express my position in terms of what I call a “context-dependent behavioral disposition” (CDBD), which I envision as a sensorimotor neural structure that takes as input excitation consequent to external and internal sensory stimulation, together with excitation that somehow captures context, and produces an action (possibly latent, ie, a modified or new CDBD). My assumption is that all of our historical “nature” and “nurture” up to a point in time is captured in our current set of CDBDs. I can’t justify this as a model, but it seems to me a reasonable way to think about the issue.

    Then my determinism is the belief that such a model captures all there is to be said about our behavior. (I don’t know how you’re interpreting “behavior” in concluding that for a determinist there should be none, but all I mean by the word is action – what we do.)

    It should be clear from that sparse definition that most of your inferences about my position are incorrect. I have “opinions” because what I mean by that is having a CDBD to express certain positions in certain contexts. What I mean by being “programmed” to take an action in a specific context is having acquired a relevant CDBD.

    Your comment about risk seems close to the familiar, but incoherent, argument that a real determinist “should” just sit by and see what will inevitably happen because he can’t affect anything. But as you surely agree, while a given determinist may or may not do that (ignoring that it isn’t even clear what it would mean in practice), it isn’t an option he can “choose”. And in any event, one’s actions obviously can affect the “flow” of events, ie, of human as well as non-human activity.

  59. Jochen says:

    Sergio:

    I would follow you if you could deploy counter-examples that do not depend on a fully formed intentional interpreter which is external to the entity where we are hoping to see intentionality arise.

    The thing is, for me, all your proposed scenarios include a huge leap somewhere, when from things like ‘correlation’ or ‘memory’ or ‘recursion’ or whatever, suddenly intentionality appears. My examples are really just attempts at completing the picture you suggest the only way I know how—which is to appeal to an intentional agent.

    Or, in other words, I think you’re seduced by your own usage of intentional language to attribute intentionality, which is however only the projection of your own intentionality onto the system you’re discussing; this I try to make explicit. So, when you claim that some A reliably correlated with B can be used as a representation of B, I agree; but if you claim that hence, this correlation is representation, then you have inadvertently smuggled in just what you’re trying to explain—or at least, left open what you want to substitute in place of an intentional agent.

    So in brief, we know that representation can be achieved with the following two components:

    (1) A reliable co-variance between two entities A and B.
    (2) A belief that ‘A means B’.

    This is how derived intentionality works; note that (2) explicitly appeals to the intentionality of whatever entity it is that uses A to represent B. Furthermore, (1) alone isn’t sufficient: otherwise, we end up with newspapers only talking about the printing process, or maybe the boundary conditions of the universe; and we fail to be able to explain how different agents can use the same symbols as representations of different entities.

    Now, to say that (2) is necessary would be to appeal to a kind of irreducible intentionality position, and I want to make clear that I don’t do that; however, we know that (2) is sufficient, and have, as of yet, no convincing narrative of what could replace (2).

    The problem is, you don’t provide such an alternative; hence, my examples all use the one narrative we know works in order to complete your mechanisms, to highlight where something’s missing. Hence, I’m not saying that there must be some intentional agent somewhere in there in order to complete the explanation, I’m merely saying that the only way I know how to complete your examples is to appeal to such an agent—or else, be utterly mystified as to why one would believe there to be any intentionality attributed in the first place.

    The bit I’m failing to make clear, and was intended to be implied by my ineffective provocation, is that for the system where intentionality arises, “how [it] use[s] a certain representation” is fixed by the structures that make up the system in question.

    And this is exactly the point where I always find myself wondering: but how? You claim that this is the case, by some means, but I haven’t yet been able to guess what exactly the means you have in mind are. So it seems to me that you want to claim that this is somehow a necessary consequence—that, for example, correlation necessarily just is representation. This, I think, is trivially false—all of our symbols are correlated with many things, without meaning them, such as the newspaper’s correlation with the printing process; moreover, it is even the case that correlation only follows interpretation, as for proper names—the association between appearances of myself and appearances of the symbol ‘Jochen’ is not prior to Jochen being my name, but a consequence thereof. My parents didn’t notice a correlation between occurrences of the sound ‘Jochen’ in my immediate vicinity and hence deduce that that must be my name; they named me that, and only then did that sound become correlated with me.

    So then, there must be some mechanism that makes a particular correlation into a representation—but the only mechanism I know (barring my own idiosyncratic thoughts, of course) is just that of having an appropriate belief regarding the connection between symbol and interpretation. Yet clearly, that can’t be what you have in mind, since it leads to unavoidable circularity (as you are then required to give an account of what it is to have that particular belief, which, if analyzed again in terms of correlation and an appropriate belief, doesn’t get us any step further).

    So all I’m asking is: give me something that substitutes for (2) above in fixing the referent of a representation relation.

    Now, I recognize you’re trying to do so in your latest post; but you’re again just kicking the can down the road. Yes, there are certain symbols that seem to carry their interpretation on their sleeves, to us; but that merely is a statement of fact, an explanandum, because the question is, again: how?

    And again, here, I find it easy to appeal to a narrative of an intentional agent: it’s simple to imagine that certain beliefs come hard-wired, perhaps due to evolutionary expediency. So, when you say we’re not free to arbitrarily choose our referents, then I can only agree; but I don’t see how this buys you any new ground, because it doesn’t touch the question of how referents are chosen at all!

    That to all the world, X means Y, doesn’t mean that there is something about X that is in some way Y-ish; it just means that to all the world, X means Y, which could be just as well explained by our belief systems having a hardwired ‘X means Y’-belief. Indeed, a mutation could arise that, against evolutionary utility, takes X to be Z; thus, the connection between X and Y does not rest with X, but with the entity making that connection.

    There’s an analogy to the naturalistic fallacy here: just because something is pleasant or desirable does not make it good—because there is nothing inherently good about something’s being pleasant. Everybody prefers pleasant states to unpleasant states, but that is just a statement of fact, not a statement of ethics.

    Likewise, if everybody interprets X as Y, this doesn’t indicate that X ‘just means’ Y; it’s just a statement of fact, and contingent at that, and thus, tells us nothing about why X means Y, or how X comes to mean Y.

    Your evolutionary explanations may hint at how such belief fixing comes about, but they tell us nothing about how beliefs themselves work.

    But still, I do not agree that I’m conflating semantic and syntactic information, I am proposing how to move from the latter to the former.

    And that’s just what I’m not seeing: you say things like (paraphrasing) ‘the structural differences of the organism, reflecting the structural differences of the environment, constitute information about the environment’. This is not a proposal of a mechanism, but rather, a statement I can either take on faith, or not! To me, there is simply no reason to believe that the structural differences of the organism constitute information about the environment—sure, you can interpret it that way, but then, you’re exactly doing what I said earlier, namely, confusing your own intentionality for that of the system. And this is the only way that I can see of substantiating your assertion. So at that point, there always seems to be a jump in your argumentation: all of a sudden, the co-variation of physical variables becomes information about something; all of a sudden, bang, intentionality. And I’m always left wondering, but how?

    After all, the system simply is in a given physical state, and reacts accordingly to the environment—there is no description in terms of ‘information about’ anything necessary, and moreover, no mechanism given by which it could emerge.

    You are assuming that it’s impossible to move from the left to the right hand side without some magic ingredient.

    I think saddling me with this assumption is a tad unfair. I don’t just start out assuming that semantics can’t come from syntax; rather, I find myself forced to draw that conclusion by argument (and example: there evidently are situations in which syntax doesn’t fix semantics, such as my Polish misadventure). After all, I used to think differently, but eventually realized that I was simply misled by an over-reliance on computational metaphors.

    In fact, ultimately, the impossibility of deriving semantics from syntax simply follows from the mathematical fact that for any collection of relations, any set of a certain minimum cardinality supports these relations—which just follows from the powerset axiom (every set yields its powerset). So a syntax is just, as you say, a pattern of differences—hence, a collection of relations. Going from syntax to semantics means identifying the elements that bear these relations. But this is impossible. (Which is, in fact, just a restatement of Newman’s Problem.)
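The cardinality point behind Newman’s Problem can be stated compactly (this is my gloss on the standard statement, not Jochen’s exact formulation): given any structure, any sufficiently large set can be equipped with relations of the same abstract pattern.

```latex
% Newman's observation: a purely structural description fixes its
% subject matter at most up to cardinality.
\[
\text{Given } \langle D, R_1,\dots,R_k\rangle \text{ and any } D'
\text{ with } |D'| \ge |D|,\ \exists\, f : D \hookrightarrow D'
\text{ and relations } R_1',\dots,R_k' \text{ on } D'
\]
\[
\text{such that}\quad
R_i(x_1,\dots,x_{n_i}) \iff R_i'\bigl(f(x_1),\dots,f(x_{n_i})\bigr).
\]
```

One simply defines each $R_i'$ as the image of $R_i$ under an arbitrary injection $f$; so the pattern of relations alone cannot single out its intended bearers.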

    But whether you agree with me on this or not, what you have to provide in order to make it plausible that semantics comes from syntax is a mechanism for it to do so; it’s not enough to have your argumentation hinge on the supposition that it just does somehow. And as far as I can tell, that’s completely missing right now.

  60. Sergio Graziosi says:

    Nice again, Jochen (thanks!).
    It seems that the whole centre of our discussion is summarised in the following quote:

    Or, in other words, I think you’re seduced by your own usage of intentional language to attribute intentionality, which is however only the projection of your own intentionality onto the system you’re discussing; this I try to make explicit.
    So, when you claim that some A reliably correlated with B can be used as a representation of B, I agree; but if you claim that hence, this correlation is representation, then you have inadvertently smuggled in just what you’re trying to explain—or at least, left open what you want to substitute in place of an intentional agent.

    Your first sentence is what I’d expect given the (I admit, a bit unfair) assumption I’m attributing to you (a wrong assumption, if I’m right). To your great credit, I can see the effort you are making to avoid the mistake I claim you are making, BTW.

    Let’s unpack the second part: if A reliably correlates with B, then A can be used as a representation. So far, we agree, yay!
    The next bit is that, on your understanding, I’m claiming that “therefore this correlation is representation”, which is not what I’m doing (well, trying to do!).
    As you do note, we don’t use all correlations as representations. Furthermore, the quote above does not provide any explanation of why the “therefore” is supposed to apply. Clearly, I’m not being clear, so I’ll try a schematic approach.

    1) if A reliably correlates with B, then A can be used as a representation of B (agreed).
    2) biological systems, because of their structural features, respond to certain stimuli in teleonomic ways.
    3) to do so, changes in their internal structures reliably correlate with specific changes in the environment (the biochemistry and biophysics disciplines have been documenting these changes for decades, this is a statement of fact).
    4) changes in their internal structure = A. Changes in the environment = B.
    C1) therefore, it is possible that some organisms do use A as a representation of B.

    C1 doesn’t mean that all organisms do; maybe some do, maybe none. However, I’ve already done some important work: I’ve observed that some organisms (even assuming they are entirely constituted of molecular mechanisms) might, just might, use their internal structural changes as representations.
    Now, in “Sergio Resurgent” (IIRC), I’ve used the example of a bacterium (B). (B) can go into a kind of hibernation, a virtually zero-metabolism state which allows it to retain its structural integrity in the absence of sources of energy. If you put glucose in its environment, the glucose will bind to a given receptor on B’s surface; this will in turn activate a cascade of structural changes (CSC) which wakes up B, allowing it to actively suck in glucose and do its “fully active” thing.

    This is a bird’s eye view, but is sufficient to support 2 and 3 – 4 rests on logic, so I’ll accept it as is.

    Enter your usual evil scientist, who finds that aspartame binds to B’s glucose receptors, activating them as if it were glucose, but with more strength. Thus, tiny amounts of aspartame will wake the bacterium, even if there aren’t enough energy sources around. This second story allows us to highlight that the structural changes triggered by the receptor activation are already being used “as if” (with scare quotes – see below) they meant “hey, glucose is present, let’s party”. But this second story also allows us to say that the specific structural changes necessarily make B react as if (without scare quotes, as I’ve used the weaker verb “react”) glucose were present.
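For concreteness, the mechanism can be put in toy code (all names and details here are invented for illustration; the real receptor biochemistry is vastly richer). The point the sketch makes is that the cascade fires identically whichever molecule binds the receptor — the mechanism never “checks” which distal cause was present:

```python
# Toy bacterium: the receptor fires on anything that binds it, so the
# downstream cascade (CSC) is the same for glucose and for aspartame.

def receptor_bound(molecule):
    # The evil scientist's discovery: aspartame binds too.
    return molecule in ("glucose", "aspartame")

def bacterium_state(environment, state="dormant"):
    for molecule in environment:
        if receptor_bound(molecule):
            state = "active"  # CSC wakes the bacterium up
    return state

print(bacterium_state(["water", "glucose"]))    # active
print(bacterium_state(["water", "aspartame"]))  # active: "as if" glucose
print(bacterium_state(["water"]))               # dormant
```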

    Thus:
    C2) organisms (all of them) necessarily react to some stimuli as if those stimuli were standing for a precise distal cause (the one which is prevalent in their ecological niche).
    Now we’re in an interesting half-way-through zone. We are still firmly on teleonomic ground: our bacterium has no “understanding” of the world, no mental life; it just mechanically reacts to stimuli as and when they occur. However, on the weakest possible interpretation, you can say that any step in CSC makes B behave as if glucose were present. In a weak, mechanical sense, B is treating the signals (structural changes) of CSC as if they meant “glucose is available”. Despite the current weakness, C2 is very important, because of the “necessarily” qualifier. B has no choice; any B’ that did otherwise would last one generation and die, or would be crowded out in a few generations.

    But hang on a second, B has no choice, it can only use the reliable correlation Glucose/CSC in one way: by using CSC as a “stand in” for glucose. The key point here is that ok, you may say that we (the external observers) are arbitrarily interpreting CSC as a cascade of signals, and that phosphorylation of protein P which is the first step in CSC “means” (with scare quotes, as we are doing the interpretation here) “glucose is present”. However, we can (and should) also observe that so far there is no arbitrary interpretation and no vicious circularity, B reacts to CSC in one way and one only. This gives it teleonomic abilities in virtue of B’s syntactic qualities (fixed rules).

    You write:

    for the system where intentionality arises, “how [it] use[s] a certain representation” is fixed by the structures that make up the system in question.

    And this is exactly the point where I always find myself wondering: but how? You claim that this is the case, by some means, but I haven’t yet been able to guess what exactly the means you have in mind are.

    Well, the above story is “How”. B has no choice. If you want more details, buy a biochemistry book, or enrol in a biology 101 course, otherwise (more wisely), take my word, as I’ve gone through the pain of doing both already ;-).
    Before finishing off this part, let’s look at it from another angle, since I’ve mentioned undergrad pains…
    Enter the evil professor, who writes the following exam question:
    We’ve kept a population of bacteria B in a controlled environment, mimicking their natural conditions. At some stage we’ve sampled the population and destroyed all the bacteria in the sample, which allowed us to run the resulting smelly goo through a series of biochemical assays. Results are below.
    [results are skipped, because I’m not that evil – and I’m certainly too lazy! this would be lots of numbers, or even pictures of the various test results]

    Exam Question: was glucose present in the culture medium when we extracted the sample?

    Now, from the results that I’ve skipped, a clever student may notice that Protein P is present in the usual quantity, but that its molecular weight is a bit higher than what its nominal structure mandates. Our clever student will do a bit of calculations, and figure out that P appears to be heavier than the expected because it’s phosphorylated (the difference in weight is just right). Thus, he’ll find the correct answer, report a decisive “Yes” and explain why.

    So what? Well, this is to show that interpreting phosphorylation of P as a signal that glucose is present, provided all the other things the student knows, is NOT arbitrary. On the exam question, one answer is correct, and its correctness was fixed by a number of facts about the world. Thus, we can conclude that phosphorylation of P is (sort-of) a signal of glucose being present. It’s only sorta-so because aspartame, and who knows how many other stimuli, may lead to phosphorylation, but in all cases, there are facts about the world that fix the interpretation. Saying that phosphorylation of P is naturally a signal that Trump said something outrageous just isn’t true.
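To make the student’s reasoning explicit, here is a toy version of the calculation. The nominal protein mass below is made up; the roughly 80 Da shift, however, is the genuine approximate mass added by a phosphoryl group (HPO3), which is exactly what makes the inference non-arbitrary:

```python
# The clever student's inference: a mass shift of one phosphoryl group
# means P was phosphorylated, which (given the known cascade) means
# glucose was present in the culture medium.

PHOSPHORYL_MASS = 80.0  # Da, approximate mass of an HPO3 group

def infer_glucose(measured_mass, nominal_mass, tolerance=1.0):
    shift = measured_mass - nominal_mass
    return abs(shift - PHOSPHORYL_MASS) < tolerance

print(infer_glucose(50080.0, 50000.0))  # True: P is phosphorylated
print(infer_glucose(50000.0, 50000.0))  # False: no shift, no glucose
```

Note that the bridging facts (“phosphorylation adds ~80 Da”, “phosphorylation of P indicates glucose”) live in the function itself, supplied by the student’s background knowledge.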

    I’ll stop here, because so far I have addressed some of your perplexities, and have done so using only established facts. I’ve been in the wet lab, and this stuff works (astonishingly!).
    The perplexities:
    – no circularity is yet to be found, but teleonomy is already possible.
    – Interpretation of the signals identified so far isn’t fixed, but is almost fixed. The correlation is stable enough for B to thrive in its natural environment. Thus, the worry that signals have arbitrary meanings doesn’t even begin to apply (but the worry that we might never fully understand the universe starts to loom!).
    – Moreover, fairly reliable information about external glucose is manifestly stored inside our bacteria, we know this for a fact, because we know the evil test does admit a correct answer, it does in virtue of the structural state of our bacteria population.

    For now, we’ve covered enough road. Bacteria use internal signals about the outside world. They just do, get over it. As per my provocation, they have no choice, but they can be “wrong” (“mistake” aspartame for glucose). The exam bit demonstrates that our interpretation of phosphorylation as a signal isn’t arbitrary, and if it isn’t, on a weak interpretation, we already have intentionality: something is about something else. If we are happy with calling the reaction of B an interpretation (I’m not, but some might be) then we have a signal which has a meaning to B.
    This is too far for me, however, as having a meaning requires much more.
    But dry intentionality? Hmm, I’m tempted! In your classification, syntactic properties (how the mechanisms of CSC fit together) allow our student to derive one semantic fact “yes, that change in molecular weight means that glucose was present”. Actually, I’m more than tempted…

    Going back to the beginning, one worry you had is that I’m claiming that “(all) correlations are representations”, which I don’t, but I do claim that “strong enough correlations can be used as representations”. Furthermore, I claim that some correlations *are* used as representations (in a weak sense) by organisms everywhere (including you and me); as Friston claims, this is “life as we know it”. I also claim that all such correlations are imperfect, which will be important later on. Finally, I claim that internal biological mechanisms which support teleonomic behaviours necessarily store information about the environment. Thus, their interpretation, insofar as it must respect their function, is not arbitrary. If life is as we know it, all living things implement a weak form of (somewhat unreliable) intentionality.

    The key so far is embodiment: our own (yes, us, already formed intentional agents) interpretations of the mechanisms within B are not arbitrary because of what B is and how it behaves. The physical state of the world fixes the agreeable interpretations, which is radically different from what you get if you start from mathematical principles and disembodied, purely abstract symbols. Crucially, I can’t see how you can conclude that phosphorylation of P does not signal glucose presence, it does, otherwise our student would have been lied to. If it does, intentionality is already, at the very least, potentially present in B, all we would need is a mechanism that uses it. But then I’d argue that “waking up B” is already an unsophisticated way of using it. If it’s not, then you are assuming that “using information” requires some magic, non-mechanical step.

    Anyway, I will stop now. The story so far is merely enunciation of facts, from here stuff becomes much trickier and speculative, so I’d rather pause and check. Are you still with me? If you are not, we should dig more, because on the above, I really don’t see how we can reasonably disagree.

    [You should know where I’m going: if information about the environment is present within the organism, then all we need to progress further is to find different mechanisms which use the same information in more sophisticated ways: instead of fixed reactions, we could have conditional reactions, allowing the teleonomy of the resulting behaviours to increase.]

  61. john davey says:

    Charles


    I try to express my position in terms of what I call a “context-dependent behavioral disposition” (CDBD), which I envision as a sensorimotor neural structure that takes as input excitation consequent to external and internal sensory stimulation, together with excitation that somehow captures context, and produces an action (possibly latent, ie, a modified or new CDBD). My assumption is that all of our historical “nature” and “nurture” up to a point in time is captured in our current set of CDBDs. I can’t justify this as a model, but it seems to me a reasonable way to think about the issue.

    You mean people act on instinct?

    Do you believe that the brain’s physical state at time T, S(T), determines its subsequent state at an incremental time later, S(T+dt), and that this is an entirely sufficient description of what the brain does?

    If you do, you’re a determinist. If you’re a determinist there is only matter in motion – no choices, no behaviour… no people even, as biological organisms are just collections of particles.
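In toy form, that determinist picture is just iterated application of a fixed function (`F` below is an arbitrary stand-in for physical law, not a model of any actual brain):

```python
# Determinism in one line: the next state is a fixed function of the
# current state, and of nothing else.

def F(state):
    # Arbitrary fixed "law" for illustration purposes only.
    return state * 2 % 97

def evolve(s0, steps):
    s = s0
    for _ in range(steps):
        s = F(s)  # S(T+dt) is determined entirely by S(T)
    return s

# Same initial state, same history, every time:
print(evolve(5, 10) == evolve(5, 10))  # True
```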

    It doesn’t sound to me like you’re a determinist – you have the same beliefs as about 90% of the population – namely that human beings can make choices – hence are NOT subject to determined behaviour, but their instincts get in the way, making some paths of behaviour more likely than others.


    It should be clear from that sparse definition that most of your inferences about my position are incorrect. I have “opinions” because what I mean by that is having a CDBD to express certain positions in certain contexts. What I mean by being “programmed” to take an action in a specific context is having acquired a relevant CDBD.

    You seem to keep insisting that your position is inscrutable. Unless you’re really bad at expressing yourself, I just don’t think that’s true. Nothing you have just said contradicts what I believed your position to be in the first place – it’s just that your lexicon is a novel one, and you seem to have developed an elaborate vocabulary for a fairly common set of beliefs.


    And in any event, one’s actions obviously can affect the “flow” of events, ie, of human as well as non-human activity.

    Of course they can’t if you’re a determinist! Your behaviours are mere window-dressing for physical forces – they have no impact, none at all.
    But as I say, you’re evidently not a determinist.

    J

  62. Jochen says:

    A good effort, Sergio; however, to me, it really highlights just how pernicious the danger of substituting your own intentionality in place of that of the system you’re analyzing is, when even a sophisticated commentator such as yourself gets taken in by it. So, where you don’t even see logical room for disagreement, I think there’s plenty, and I’ll make use of it. But one virtue of the preceding discussion is that I think it indeed allows us to pinpoint our point of departure—essentially, it’s this:

    2) biological systems, because of their structural features, respond to certain stimuli in teleonomic ways.

    Teleonomy is a matter of recognizing that a system behaves as if it were trying to achieve certain goals. This is something you ascribe to the system; I think your main confusion is then to take it for a property the system just has, absent your ascription. This isn’t the case: in order to take a system to behave teleonomically, you need to, e.g., represent the goals you consider the system to move towards; you need to represent the system as having a certain behavior; and so on.

    So, you could consider a stone rolling down a hill as behaving teleonomically: if it wanted to get to the bottom, then it ought to take just those actions you see it taking. However, absent this ascription, there is no teleonomy—the teleonomy is purely a result of your intentional apprehension of the system and its putative goals. Absent that, the system just behaves, well, nomically—it follows the laws of gravity, and the constraints of its environment.

    All mention of goals, and even of behavior, comes solely from you, the intentional observer. Subtracting the observer from the equation, all you’re left with is a stone rolling down a hill, because given the forces acting upon it, that’s the only thing it can do. The teleonomy is thus an intentional gloss brought to the system from outside; but you then proceed to take it as a property of the system, thus smuggling in your own intentionality as that of the system.

    The bacterium can be treated in exactly the same way, although its treatment is somewhat more complicated—but, like any essentially classical physical system, we can represent it by a point in configuration space, and its time-evolution by a trajectory through this space, given the constraints imposed upon it by the forces it is subject to. This will be a much more high-dimensional space than that of the boulder careening downhill, but there is no essential difference in the description save for that complexity.

    The clue is in the fact that even the brightest student can’t figure out the presence of glucose from the exam situation you describe: he needs, additionally, at minimum the beliefs that ‘P-phosphorylation changes the molecular mass of P’, and ‘P-phosphorylation is indicative of the presence of glucose’. These are just the ‘A means B’-beliefs I mentioned in my previous post, and absent these beliefs (which are certainly absent in the bacterium, or else, we’re being circular again) it is simply not the case that ‘P-phosphorylation means glucose’. In other words, you need representational capacities to unpack your putative representation of the presence of glucose.

    Thus, via accepting 2), you (unintentionally) effectively sneak in representation and belief at the ground level; and all further appearances of these elements can be traced back to this unfortunate mixup.

    Hence, the following conclusion does not follow:

    This second story allows us to highlight that the structural changes triggered by the receptor activation are already being used “as if” (with scare quotes – see below) they meant “hey, glucose is present, let’s party”. But this second story also allows us to say that the specific structural changes necessarily make B react as if (without scare quotes, as I’ve used the weaker verb “react”) glucose were present.

    The ‘as if’ is purely your addition to the system; it reacts simply nomically to the presence of certain chemicals in its environment. Subtracting the external observer, there is nothing that makes its reaction to aspartame a reaction ‘as if’ there were glucose present; it simply makes it react to aspartame in the same way as to glucose. This is just the same as I reacted to the question ‘Jeden Tag?’ in the same way as I would have reacted to the question ‘Jeden, tak?’: the syntax underdetermines the semantics.

    Or, to put it another way, reacting ‘as if’ requires a belief regarding the way the bacterium ought to react in a given situation; but, for instance, consider the case in which a bacterium is only subject to stimulation by aspartame (or, at any rate, in which you only observe the cases in which this is so, since presumably the bacterium needs glucose once in a while to, you know, not die): then, there is no way for you to decide that it reacts ‘as if’ there were glucose; this needs an additional belief on your part as to what the reaction to glucose is. The bacterium does not possess this belief; that’s what you bring to the table, via your intentionality, i.e. the knowledge of the appropriate facts, represented to you as beliefs about the bacterium’s reaction.

    Furthermore, even if I were to follow along with your story, what you’re describing here is actually a massive problem for causal theories of intentionality, i.e. the problem of disjunction (see section 2, fourth paragraph, and especially section 3): if a symbol means whatever causes it to be activated, then it’s not a mistaken reference if it is activated by something else than its intended referent; rather, it means that the symbol actually refers to the disjunction of whatever activates it. So if the symbol for “dog” gets activated by a cat every once in a while, and if a symbol means whatever activates it, it follows that it’s not actually the symbol for “dog”, but rather, for “cat (under some conditions) or dog”. Thus, you don’t get a mistaken representation, but rather, a representation that doesn’t represent what you at first thought it does; and in fact, how our perceptions can be erroneous, which error we nevertheless are able to detect and correct, is hard to explain on such a model (the problem of error). But that’s just a side remark.

    Saying that phosphorylation of P is naturally a signal that Trump said something outrageous just isn’t true.

    Right, but it also isn’t true that P-phosphorylation is a signal for glucose; rather, it is an effect of the presence of glucose. (Without observer, we’re limited to the nomic, not the teleonomic.) Certainly, such effects can be used as signals, but as shown above, the only narrative that accounts for how such use works is one in which the belief ‘glucose causes P-phosphorylation’ figures, and hence, not one of which you can avail yourself in an explanation of intentionality.

    In regards to my perplexities:

    – no circularity is yet to be found, but teleonomy is already possible.

    Teleonomy is only possible because you interpret the system as behaving as if it were behaving goal-directedly; that’s exactly the source of the circularity.

    – Interpretation of the signals identified so far isn’t fixed, but is almost fixed.

    The interpretation of the signals is solely due to you as intentional observer; there is no interpretation on the side of the bacterium—it simply reacts nomically to the environmental chemical makeup. Saying that there is some interpretation there independent of an observer is as wrong as saying that the stone rolling down a hill interprets gravity as a signal that it should move downwards.

    – Moreover, fairly reliable information about external glucose is manifestly stored inside our bacteria, we know this for a fact, because we know the evil test does admit a correct answer, it does in virtue of the structural state of our bacteria population.

    The information ‘stored’ inside the bacteria is information only if it is coupled with the belief that ‘glucose causes P-phosphorylation’; that is, if it is interpreted by an intentional agent. Without said agent, no information is present (no semantic information, that is; I agree that there is information in the syntactic Shannon sense of structured differences; but again, those are very different things).

    But dry intentionality? Hmm, I’m tempted! In your classification, syntactic properties (how the mechanisms of CSC fit together) allow our student to derive one semantic fact “yes, that change in molecular weight means that glucose was present”. Actually, I’m more than tempted…

    No, it’s not the syntactic properties that allow the student to draw that conclusion: without the propositional knowledge that ‘P-phosphorylation means the presence of glucose’, there is no way for him to answer the question. Thus, there needs to be a bit of ‘bridging intentionality’, a connection of the ‘A means B’-sort—but then, the intentionality you see in the bacterium is derived from that of this bridging belief.

    Note that ‘enlarging’ the data by, say, giving a list of how various changes in environment influence the biochemistry of the bacteria doesn’t help, either: the interpretation of that data will likewise be rooted in intentional bridging beliefs, coming down ultimately to the student’s ability to interpret the text before him in terms of concepts and ideas.

    Finally, I claim that internal biological mechanisms which support teleonomic behaviours necessarily store information about the environment.

    Information which, however, solely originates in your (or another observer’s) ascription of teleonomy. There is no information about gravity or the terrain in the stone, unless you interpret it as such; absent any interpretation, it’s simply a system evolving according to forces and constraints placed upon it. There’s no light on inside, unless you bring a candle.

    Crucially, I can’t see how you can conclude that phosphorylation of P does not signal glucose presence; it does, otherwise our student would have been lied to.

    You almost realize it here: the fact that the student could have been lied to implies that whether he interprets P-phosphorylation as signalling glucose is dependent on his conceptual knowledge of the world. Without this, there is no account I know of how the exam question could be answered.

  63. Charles T Wolverton says:

    John:

    You mean people act on instinct ? … no choices,no behaviour .. no people even … human beings can make choices

    Except for “behavior”, I used none of those words – and I explained what I mean by that word, viz, mere activity. If you think that being a determinist means arguing that organisms engage in no activity (and since I mean “activity” in its most basic sense, neither do oceans, plants, trees, etc), then I guess I’m not one. But that seems a rather bizarre posture, no?

    Maybe I should note that I’m not a college sophomore who recently stumbled on the concept of determinism and thoughtlessly adopted it because it sounds cool. I’ve been a committed strict-sense determinist in just the way you express it – we’re just matter in motion – for roughly six decades. In that time I’ve read many essays, seen/read many debates, participated in blogs, etc, on free will versus determinism. Although obviously not immune to logical lapses, I’m less likely than most to be caught in a glaring inconsistency. So, you might consider being a bit more charitable in how you interpret what I say. Eg, I have no idea how you got from that quote to “instinct” – a word I’d be hard pressed even to define, hence would never use in this context.

  64. Charles T Wolverton says:

    Jochen @62 –

    That seems to me a convincing argument for why there’s no intrinsic intentionality. In which case it would seem to follow that neither is there “derived intentionality”, at least in the sense that I understand that phrase.

    Of course, there is linguistic intentionality – talk among English speakers using “dog” can be “about” that creature over in the corner, or even a creature in another room. But that is a product of agreement among the speakers, which isn’t how I understand “derived intentionality”. There is, of course, the question of how, way back in time, some proto-speakers came to agree that grunting “dog” would mean a creature of that sort, but I don’t see any problem in imagining how that happened – Davidson’s concept of triangulation seems a good starting point.

    So, what am I missing that leads to such lengthy exchanges between you and Sergio? (BTW, as I scan some of those, I see traces of the exchange between John and me re determinism.)

  65. Jochen says:

    Charles:

    That seems to me a convincing argument for why there’s no intrinsic intentionality.

    Well, I think that’s a bit quick. Even if my arguments are successful, that just shows that Sergio’s attempted explanation, and others like it, don’t do the work they need to; to conclude from there that the phenomenon they’re trying to account for doesn’t exist requires commitment to at least the further theses that no other explanation is possible, and that we can explain everything that exists. Both go far beyond what I think we could reasonably assume.

    Indeed, the argument above depends on intentionality; so a genuinely intrinsic—non-reducible—intentionality would in fact be one possibility for things to come out the way they do. But I don’t think intentionality of that kind exists; I believe there is intentionality, but that it’s completely accountable for in terms of fundamental, ordinary, physical dynamics, and needs no additional mysterious primitive meaning added to our ontology.

    Furthermore, ultimately, I simply don’t think the thesis of eliminativism is coherent. So it seems, to me at least, that we’ll have to walk the hard road: explain a real feature that isn’t amenable to the style of explanation Sergio favors—that it takes us so long merely means the problem is hard, not that there’s nothing to explain. Indeed, I think the latter conclusion is generally drawn too rashly; we probably all inherently think that if we can’t explain it, well then, it must be inexplicable. But that’s an overestimation of our own explanatory powers.

    So to me, the situation is simply that we have data to account for—that goes beyond the linguistic intentionality you mention: symbols don’t just have meaning as some sort of agreed-upon means of synchronizing behavior, but rather, my thoughts have meaning to me, or at least, they seem to. That’s something that I think is genuinely in need of explanation, and isn’t touched by the interpersonal-agreement account.

  66. Charles T Wolverton says:

    we have data to account for … [viz, that] my thoughts have meaning to me, or at least, they seem to.

    Well, I consider it quite a leap from “it seems to me that X” to “we have data for X that needs explaining”.

    Do you have any opinion on the relationship between thought and language? Following Davidson (and to some extent based on my own attempts to stop the internal dialogue, thereby achieving what I take to be Zen’s “no mind” state), it seems to me that thought requires language. If so, the apparent intentionality of thought – which can then be viewed as a silent rehearsal of speech – could be derived from that of language.

    Of course, my “seems to” is no better justified than yours, but it leads to a much simpler explanation. Of course, it ignores the role of mental images, but not being very visual I consider their role minor, at least for people like me.

    And I’m not sure that “eliminativism” is really the right descriptor for my position. I guess I’m an “eliminativist” wrt the psychological vocabulary in certain contexts, but there’s no question about its existence. My problem with non-linguistic “intentionality” is that I’m not aware of a precise definition in an appropriate vocabulary that might warrant even an assumption that it has a referent. Eg, I don’t find defining it as “aboutness” helpful.

  67. john davey says:

    Charles

    “But that seems a rather bizarre posture, no?”

    I’m not a determinist so I agree with you. Determinism arises solely because of a clash between mental phenomena and the structure of physics. As physics cannot account for mental phenomena, physics is clearly inadequate to serve as the universal scientific solution that current academic propaganda claims it to be.
    So I’m happy to live in an undetermined world.


    “I have no idea how you got from that quote to “instinct”

    From the mixed bag of “Instinct/Disposition/Programming/HardWiring” that suggests that people have fixed responses to certain situations and to that extent are “determined” by them (although as I’ve said, this is emphatically not “determinism” in the true physical sense).

    In fact I’d probably agree that most mental activity is characterised by a good deal of automaticity, but not all of it – and that amount, however small, makes me sceptical of both “types” of determinism.

    J

  68. Jochen says:

    Charles:

    Well, I consider it quite a leap from “it seems to me that X” to “we have data for X that needs explaining”.

    I think you misunderstand: the data is that there ‘seems to be X’; how things can ‘seem’ a certain way to me is what needs explanation.

    Regarding linguistic thought, while I tend to think of myself as a linguistic rather than a visual thinker, there is much about my thought that doesn’t seem to have a linguistic component—when I write down a mathematical proof, for example, or plan a move in chess. Furthermore, I find that I can cut the mental formulation of a thought short, and nevertheless know what the thought is about. Plus, we often struggle for the right words to express our thoughts; if thoughts were themselves linguistic, that would be difficult to explain. Thus, I ultimately think that language is really more of an after-the-fact gloss on thought, rather than its foundation. (We briefly discussed this issue in the comments on this post.)

    Plus, I don’t think that ultimately, the explanation of ‘linguistic intentionality’ is likely to be much simpler: we still have to explain how symbols can come to be about something, rather than just how they can serve to synchronize behavior (including verbal behavior). If you’ve read my exchange with Sergio, you may have noticed my anecdote of buying bread in Poland: there, the behavioral facts don’t suffice to settle the question of whether the topic of conversation was the number of loaves of bread I intended to buy, or whether I eat bread daily. Nevertheless, I don’t doubt that there’s a fact of the matter regarding what the vendor meant to ask of me; but merely thinking about what she said, and what I did in response, won’t serve to settle that matter.

  69. Sergio Graziosi says:

    Jochen (#62),
    yes, we are moving forward; I can’t thank you enough.

    The situation is almost getting out of control, however, because my temptation to pull in a handful of other BFDs is high! I think I see a way to avoid doing so, but we’ll see…

    Teleonomy is recognizing a system to behave as if it were trying to achieve certain goals. This is something you ascribe to the system; I think your main confusion is then to take it for a property of the system, that it just has, absent your ascription.

    Straight to the heart, you are pointing the finger in precisely the right direction, but then you reach the wrong conclusion ;-).

    It is exactly right that I take teleonomy as a property of the system, because it is. It is there, whether I look or not, it is a fact of the world, solely fixed by syntactic (mechanistic) properties of the world.

    What follows is mainly a supporting argument for my claim above.

    What distinguishes living things is their detectable teleonomy, and the fact that their teleonomy is emergent, not designed. It is a second-order fact about the world because, yes, whether something behaves teleonomically or not is a function of what the apparent purpose is supposed to be. However, for living things which have emerged naturally from the mechanism of natural selection, the “apparent purpose” is once again fixed, and this time fixed by a tautological property of the universe: those structures that better preserve (and reproduce) their structure, better preserve (and reproduce) their structure. This is a fact that is necessarily true, and one which transfers to teleonomic behaviour: behaviours that better preserve the ability of their producing system to manifest them will happen more often than behaviours which don’t.
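    The tautology at work here can be illustrated with a toy simulation (a minimal sketch; the two replicator types and their persistence rates are made-up parameters, not biology). Two kinds of mindless replicators differ only in how reliably their behaviour preserves their own structure; with no goals and no observer anywhere in the model, the better-preserving kind inevitably comes to dominate:

```python
import random

random.seed(0)

# Two replicator types: each generation, an individual persists (and leaves
# one copy) with its type's persistence probability. No goals, no observer:
# just mindless differential persistence. Rates are illustrative assumptions.
PERSISTENCE = {"A": 0.60, "B": 0.50}

def step(population):
    """One generation of persist-and-copy."""
    next_gen = []
    for kind in population:
        if random.random() < PERSISTENCE[kind]:
            next_gen.extend([kind, kind])  # persists and reproduces
    return next_gen

population = ["A"] * 50 + ["B"] * 50
for _ in range(40):
    population = step(population)
    if len(population) > 5000:  # cap total size to keep the toy cheap
        population = random.sample(population, 5000)

share_a = population.count("A") / len(population)
print(f"share of type A after 40 generations: {share_a:.2f}")
```

    Nothing in the code mentions purpose; the dominance of type A simply falls out of “structures that better preserve their structure, better preserve their structure”, which is the sense in which the apparent purpose is fixed rather than arbitrary.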

    We can only recognise these facts while accepting that we, the intentional interpreters of reality, are indeed interpreting reality; but this interpretation doesn’t make the teleonomic behaviour happen. The behaviour is there, has been happening on planet Earth for millions of years, and will probably keep happening after the last human has amused herself to death. Teleonomy is emphatically a property of the systems we’re looking at; it can only be *recognised by intentional agents*, but that’s another tautology which in this case isn’t adding anything to the picture (unlike the previous two).
    Once again, your error is to use the abilities of (idealised) fully intentional agents to offer counter-examples. Yes, we all can arbitrarily ascribe teleonomic qualities to any behaviour, but ascribing it to life is not arbitrary, it’s exactly at the other extreme: it’s tautological, so your main objection fails.

    Furthermore, ascribing teleonomic behaviour to living things picks up an important (perhaps the most important, for our purposes) regularity of the world. [Dangerously flirting with a BFD, now] You can always describe/model a system in equivalent different ways. For a living organism, you can choose to describe it nomically (without deploying second-order considerations; if you are really, really careful, it may be possible, in theory at least ;-)), and it will also work (if done correctly), but doing so will be more expensive computationally (in producing predictions) and, unlike the case of the stone, will require more variables, more precise measurements and more calculations to achieve the same level of precision/reliability in producing predictions. Why? Because teleonomic behaviour happens whether we are observing it or not. It is a (tautological) fact about the world – a regularity which can be, should be, and is regularly exploited to make predictions.

    Thus, the picture I’m trying to piece together is one of gradual emergence (with a few singular qualitative changes, unfortunately), but for now, I can stop at noting that some spark of intentionality is present in B, whether we recognise it or not. If I’m right, then as per my preview, we need to understand how it is used in more complex organisms, with the expectation that these will include us.

    Back to your unresolved worries:

    – no circularity is yet to be found, but teleonomy is already possible.

    Teleonomy is only possible because you interpret the system as behaving as if it were behaving goal-directedly; that’s exactly the source of the circularity.

    Nononono! Teleonomic behaviour is happening whether we interpret it as such or not. Our interpretations have no direct effect on what B is going to do – and what B is going to do clearly follows a regularity: as long as B is in its normal environment, it will most likely do things which heuristically preserve its ability to survive and reproduce. Interpreting B’s behaviour in this light allows us to clarify an important feature of our world and to produce better (cheaper, more precise, more reliable) predictive models of it. You are personally injecting circularity here by not accepting that the behaviour of living things is objectively teleonomic (assuming objective descriptions are possible, that is); the fact that this observation is possible only by also observing additional facts about the world (the two tautologies which happen to be useful) has no bearing on circularity. Teleonomic behaviour is as factual as your stone’s need to drop. Re the stone, [flirting with a BFD again] is the stone falling because of the force of gravity, or is it falling because spacetime is curved? See, even for the stone, the facts of the world are interpretable in different (and somewhat equivalent or overlapping) ways, depending on what beliefs you already accept. Precisely like the case of teleonomy. The stone still falls, and B still preserves its own integrity (within limits).

    – Interpretation of the signals identified so far isn’t fixed, but is almost fixed.

    The interpretation of the signals is solely due to you as intentional observer; there is no interpretation on the side of the bacterium—it simply reacts nomically to the environmental chemical makeup. Saying that there is some interpretation there independent of an observer is as wrong as saying that the stone rolling down a hill interprets gravity as a signal that it should move downwards.

    Ok, my bad, I should have clarified what I was aiming at! My own comment was about the situation where you have some phenomenon and an external observer interprets it as some sort of (symbolic) signalling. In the synthetic (purely theoretic) case, how to interpret the signals is entirely arbitrary; on this we agree. Thus, your worry is that interpreting some of the stuff that goes on inside living things as “signals” encounters the same problem: those interpretations are all in our heads and 100% arbitrary. I was remarking that my way (well, not mine, as it’s shared across all or most biochemistry, biophysics and neuroscience practitioners) of looking at mechanisms inside living organisms does not encounter this problem: how we (the aspiring objective interpreters) interpret the signals isn’t arbitrary; it’s fixed by other facts of the world, namely those that make teleonomy necessarily emerge via natural selection. Thus, your old-time worry isn’t resolved, it is not even encountered, because it never applied. Once again, your error is in assuming that the signals themselves can mean anything whatsoever (and so carry no information at all); they can’t: their meaning is fixed by the function they fulfil (actually, I’m tempted to say that their meaning IS their function – remember Sergio’s Functionalism?).

    – Moreover, fairly reliable information about external glucose is manifestly stored inside our bacteria; we know this for a fact, because we know the evil test does admit a correct answer, and it does so in virtue of the structural state of our bacteria population.

    The information ‘stored’ inside the bacteria is just information if it is coupled with the belief that ‘glucose causes P-phosphorylation’; that is, if it is interpreted by an intentional agent. Without said agent, no information is present (no semantic information, that is; I agree that there is information in the syntactic Shannon sense of structured differences; but again, those are very different things).

    Yes, information is present, but only in the purely syntactic form. I need nothing else for now. Structured differences are there, ready to interact with additional mechanisms which will enable more complex organisms to display even more teleonomy.
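    The purely syntactic sense can be made precise in Shannon’s terms (a toy model; the probabilities are made-up illustrative numbers, not real biochemistry). If glucose makes P-phosphorylation happen with, say, 90% reliability, then the mutual information between the external state and the internal state is positive as a bare fact about the joint distribution; no interpreter appears anywhere in the calculation:

```python
from math import log2

# Toy joint distribution p(glucose, phosphorylated): glucose is present half
# the time, and phosphorylation tracks it with 90% reliability (illustrative
# numbers, an assumption for the sketch).
p = {
    (1, 1): 0.45,  # glucose present, P phosphorylated
    (1, 0): 0.05,  # glucose present, P not phosphorylated
    (0, 1): 0.05,  # no glucose, P phosphorylated anyway
    (0, 0): 0.45,  # no glucose, no phosphorylation
}

def mutual_information(joint):
    """I(X;Y) = sum over (x,y) of p(x,y) * log2(p(x,y) / (p(x)*p(y)))."""
    px = {x: sum(v for (a, _), v in joint.items() if a == x) for x in (0, 1)}
    py = {y: sum(v for (_, b), v in joint.items() if b == y) for y in (0, 1)}
    return sum(v * log2(v / (px[x] * py[y]))
               for (x, y), v in joint.items() if v > 0)

mi = mutual_information(p)
print(f"I(glucose; phosphorylation) = {mi:.3f} bits")
```

    With these numbers the correlation carries about half a bit per observation; feed the same function a factorized joint (internal state independent of glucose) and it returns 0 bits. That is all “structured differences” needs to mean here: a non-zero quantity fixed by the mechanism, whether or not anyone computes it.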

    But dry intentionality? Hmm, I’m tempted! In your classification, syntactic properties (how the mechanisms of CSC fit together) allow our student to derive one semantic fact “yes, that change in molecular weight means that glucose was present”. Actually, I’m more than tempted…

    No, it’s not the syntactic properties that allow the student to draw that conclusion: without the propositional knowledge that ‘P-phosphorylation means the presence of glucose’, there is no way for him to answer the question.

    Disagree again. The student needs propositional knowledge to answer exam questions, sure. S/he won’t be able to be a student without her intentional abilities. However, you are the one injecting this fact into my own proposition, muddying the waters. For example, you chose to write (my emphasis) “the propositional knowledge that ‘P-phosphorylation means the presence of glucose’”, but this isn’t the propositional knowledge the student needs to possess. What she needs to know is that “the presence of glucose necessarily makes P-phosphorylation happen”. That is: she needs knowledge of syntactic (mechanical) rules. To provide an answer the student needs an “intentional bridge”, agreed; but for there to be a correct answer, no such bridge is needed.

    More!

    Finally, I claim that internal biological mechanisms which support teleonomic behaviours necessarily store information about the environment.

    Information which, however, solely originates in your (or another observer’s) ascription of teleonomy. There is no information about gravity or the terrain in the stone, unless you interpret it as such; absent any interpretation, it’s simply a system evolving according to forces and constraints placed upon it. There’s no light on inside, unless you bring a candle.

    Nope. The dry, syntactic, Shannon-style structural information is there already, inside the “waking up” B, by your own admission. Full intentionality is indeed needed by us to recognise this fact, but the (dry, structural) information remains where it is regardless. Note that I’m not saying that there is any light in our bacterium. I am merely stating that, as far as our bottom-up, embodied story is concerned, one of the ingredients needed for making some light is present.
    By noting this fact, I’ve also shown that this basic ingredient is very different from what top-down stories would imply: it depends on a mere correlation, so it’s intrinsically unreliable; it is also necessary, being entirely the consequence of mindless mechanisms, so it doesn’t allow for arbitrariness, not even a trace. All that is arbitrary in our interpretation is down to what we may choose to ignore, and/or what we actually don’t know. Both these factors have nothing to do with the system, though, so we don’t need to worry about them while discussing pure theory.

    Overall, we are getting closer to identifying the first obstacle, but we’re still talking past each other, methinks.
    The points I’m making through B’s thought experiments are still solidly on the mechanistic side; I am currently making no claims about how our full intentional abilities (the ability to ascribe referents to signals) arise. That’s because I’m still far away from discussing how semantic abilities arise: ascribing referents to signals is part of the larger faculty of interpreting signals; it’s one possible step in the process of deciding what something means. It is certainly true that my final aim is to propose a speculative way to reduce semantics to mechanism, but I’m not trying to, yet. In the last two of my comments here, I’m doing something less ambitious:
    First and foremost, establish the (tautological) fact that for all self-regulating mechanisms (with Friston) “[their] internal states (and their [Markov] blanket) […] will appear to model — and act on — their world”, that is: internal state-changes will necessarily correlate to external changes. I don’t care too much about labels, but we could say that these changes are proto-intentional, as they contain (structural, purely syntactic) information about the external environment.
    Thus, I propose to try building a mechanistic picture of both (classic) intentionality and semantics by building on this basic observation, which I claim is tautological at best, hard to deny at worst. I am not saying anything about how to proceed, just identifying one “fact” which I wish to use.

    However, agreeing to explore this route already highlights a few interesting patterns. For example, we may be tempted to think that symbolic signals can be interpreted to mean anything, but the (ultra-basic) picture I’ve built already shows that signals of the purely syntactic/mechanic kind which I’ve described so far (those happening inside organisms, having the function of mediating self-regulation / homoeostasis preservation) should not be interpreted arbitrarily. You can then observe that we, fully formed humans, automatically and necessarily (mechanically?) interpret some of the signals that travel inside our bodies in limited, somewhat predetermined ways. Signals carried over C fibers are painful to us. If we could arbitrarily interpret them in whichever way we wished, no one would ever feel pain again. This consideration strengthens my argument in two ways: first, there is a feeble connection between the necessity of the mechanisms I’ve discussed so far and the necessity of feeling pain, with a candidate mechanism participating in pain (C fibres). Second, we get the suggestion that (classic) intentionality and semantics may not be arbitrary after all, and also that they may not be separable: C fiber signals are painful, and pain usually means bad news to us. Thus, I do stretch myself and propose that (classic) intentionality and semantics may not be what we think they are. I do so because, unless this possibility is fully accepted, I know for sure that my next attempts will fail to convince.
    There really isn’t much to it: all I’m saying should be obvious. Certain mechanisms in our bodies are best described as signals, most of them have a limited range of possible effects on our conscious experience. Hence, via B, if we must, we observe that it’s likely that some mechanism is responsible for those effects.
    [If that’s the case, it is reasonable to expect that somehow some mechanism interprets incoming signals and that, as/when they do reach consciousness, in at least some cases we can’t help but interpret them in certain ways (strong-enough pain is bad/undesirable, and there is no way for us to reinterpret it as irrelevant, nice or welcome). But this latter part is a speculative anticipation of what I plan to do, you are free to disagree with it and don’t even need to explain why: your disagreement is taken for granted.]

    On the other hand, Jochen, you seem to be willing to run ahead, and ascribe to my argument more ambition than it currently has. Once again, I am not commenting on how we arrive at what we consider full intentionality and semantic abilities, not beyond suggesting that we might need to redefine them a little. In the examples where I do include fully intentional agents (the student) you have consistently injected concepts like “means” and “belief”, which I was careful to avoid (not as a rhetorical trick!). For example:

    The clue is in the fact that even the brightest student can’t figure out the presence of glucose from the exam situation you describe: he needs, additionally, at minimum the beliefs that ‘P-phosphorylation changes the molecular mass of P’, and ‘P-phosphorylation is indicative of the presence of glucose’. These are just the ‘A means B’-beliefs I mentioned in my previous post, and absent these beliefs (which are certainly absent in the bacterium, or else, we’re being circular again) it is simply not the case that ‘P-phosphorylation means glucose’. In other words, you need representational capacities to unpack your putative representation of the presence of glucose.

    This is all basically correct, but completely misses my point. We can certainly say that the student “believes” this or that, instead of saying that she knows this or that. Normally, I’d be the first to do so. But doing it here obscures my point. B’s mechanisms exist (ex hypothesi, for our thought experiment) and work in certain ways. These ways produce certain behaviours, which happen to be teleonomic. This teleonomy is a property of B, independent of all observers/interpreters: under normal ecological conditions, B’s behaviours do tend to preserve its integrity and capacity to reproduce. (C3) Thus, P-phosphorylation can be used as a signal indicating that glucose is present. In fact (C4), B “uses” it in this way (with scare quotes: “use” = produce mechanical consequences, implying no inner light).

    So, hopefully for the last time, I’ll ask again: are you willing to concede these extremely weak (but very important) points? The minimum agreement is about C3 and C4, nothing more.

  70. Jochen says:

    Sergio:

    It is exactly right that I take teleonomy as a property of the system, because it is. It is there, whether I look or not, it is a fact of the world, solely fixed by syntactic (mechanistic) properties of the world.

    Well, it might not surprise you, but I disagree. Teleonomy is recognizing certain behaviors as being as if they were directed at certain goals. Such recognition is inherently intentional: without, e.g., the concept of ‘goal’, there would not be any teleonomy.

    In a non-intentional world, even though the behaviors would all be the same (well, the behavior of the bacterium, at least; let’s assume that there are no higher organisms in this world), there would be no teleonomy: it is an after-the-fact gloss of the bacterium’s behavior. Think about it: a ‘goal’ is just a physical event, unless it is recognized as a goal; without something to do the recognition, there are no goals. Likewise, apprehending behavior as behavior towards some state of affairs requires just that directedness of apprehension that comes with intentionality.

    Let’s suppose that just by pure chance, and perhaps a flash of lightning for drama, a bacterium, B’, came into being that’s the exact physical copy of your bacterium B. Since it’s not a product of natural selection, it doesn’t have any of the functions you ascribe to B. Yet, its study would lead to the same ascriptions of teleonomy: it still behaves as if it were trying to reach certain goals. Hence, the teleonomy is not a property of either B or B’; rather, it is a judgment on the part of those studying it. Behavior is not intrinsically teleonomic; its teleonomy can only be ascribed to it through studying its behavior, and recognizing that such behavior would facilitate certain goals.

    Let’s go further with that. Suppose Last Thursdayism were true: the entire universe sprang into existence via a fantastically unlikely spontaneous quantum fluctuation last Thursday. All our memories would be the same; we would be holding the same conversation; but there would be no products of natural selection around, no mechanism would have evolved to fulfill any function—and since that is what (ultimately) you propose to use to ground intentionality, there would be none of that around, either. Consequently, the question of whether there is intentionality in the scenario you describe is not fixed by the physical facts (at some given time t). And that’s just what we should expect: the intentionality you wish to derive is rooted in an ascription of a certain property—teleonomy—to physical systems; not in the physical systems themselves.

    Those physical systems, to the extent they can be described as ‘knowing’ anything, only know their behavior; the way they react to environmental stimuli. They know nothing of goals, nothing of goal-directedness, and so on. All that is brought in by the observer—which, by the way, is not my addition to the description, but is just my attempt at bringing out what you inadvertently assume, and then fail to account for.

    Moreover, different observers can rationally disagree about the goals to ascribe to a system. Does the bacterium try to preserve homeostasis? Or maybe just lower the concentration of glucose in the environment? Ensure its survival? Spread its genes? Or just increase the entropy of the universe? All of these are valid interpretations of the bacterium’s behavior; because all of them are just that—interpretations.

    Again, the fact that you can describe a stone tumbling down a hill as behaving as if it wanted to reach the bottom does not make it so; this is a property of the description, not a property of the stone. It’s the same with the bacterium—the bacterium has additional constraints, sure, but just adding constraints does not add intentionality; it’s like adding zero to zero again and again, and hoping to get some finite quantity out of it eventually.

    You wouldn’t describe a stone teleonomically, presumably. But what if the stone impacts another, and that one another, and so on, yielding a great big avalanche which gives rise to complex emergent behavior? Certainly, for all of the stones within that avalanche, the same would still hold as for the single stone. But of course, the bacterium really isn’t anything else but that avalanche, ultimately.

    So, what happens in order to give rise to a qualitative difference between the avalanche and the single stone? I submit that the only difference is that once things get complex enough, their detailed description becomes too complicated to bear in mind; and in that situation, we use heuristics, we chunk behaviors together, we identify patterns, and so on. Ultimately, that’s why we see intent in random movements, tigers in bushes, and deities on burnt pieces of toast. So in the end, I think that your seeing (some antecedent of) intentionality in complex behavior is ultimately best considered a form of pareidolia.

    While the behavior happens without observers interpreting it, without observers, it is not teleonomic. Teleonomy is a judgment. This is why the beliefs of the student are important: only by virtue of having beliefs can he ascribe goals to the bacterium; a difference in his beliefs—while the physical situation of the bacterium and its environment remains the same—yields a difference in the teleonomy he ascribes to the system.

    A non-intentional robot would not consider the system to show any sort of teleonomy. It could set a flag if the bacterium behaves one way, and another if it behaves differently; then, an intentional observer could come in and interpret the first flag as glucose-searching chemotaxis, and the other in whatever other way. But without this interpretation, we just have one stone from the bacterium-avalanche kick loose a robot-sets-flag avalanche.
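    The flag-setting robot can be made concrete with a toy sketch (my own construction, purely illustrative; every name in it is invented). The point it illustrates is the one above: the machine only maps numbers to flags, while the interpretation of those flags lives entirely in the observer's dictionary, outside the machine.

```python
# Toy sketch of the 'flag-setting robot': a purely mechanical classifier
# whose output acquires meaning only when an observer interprets it.

def robot_flag(glucose_gradient_response: float, threshold: float = 0.5) -> int:
    """Set flag 1 if the measured response exceeds a threshold, else 0.
    The robot 'knows' nothing about chemotaxis; it just maps numbers to flags."""
    return 1 if glucose_gradient_response > threshold else 0

# This mapping exists only in the observer's head, not in the robot:
observer_interpretation = {1: "glucose-seeking chemotaxis", 0: "dormant"}

flag = robot_flag(0.8)
print(observer_interpretation[flag])  # the interpretation is the observer's act
```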

  71. Jochen says:

    Or, to summarize the above: a non-intentional world is a world without goals. So, how does one define what behaviors appear to be goal-directed in such a world?

  72. Charles T Wolverton says:

    Jochen –

    there is much about my thought that doesn’t seem to have a linguistic component—when I write down a mathematical proof, for example

    When thinking seriously about anything, I typically experience an accompanying internal dialog. I see writing down a proof as merely one way of capturing such a dialog – possibly using symbols, each of which could – in principle, though awkwardly – be replaced by a phrase for which the symbol is a shorthand. And when a cross-country friend of mine tries to explain category theory to me over the phone (a task suitable for Sisyphus!), the communication obviously is entirely verbal. So, I don’t see math as being a special case.

    I can cut the mental formulation of a thought short, and nevertheless know what the thought is about

    And the hearer of only a phrase from a familiar sentence can often complete the sentence for the speaker. As can Google in response to a partial query from an anonymous person, but I don’t think of Google as having thoughts. It just takes advantage of the fact that most of our speech is not all that original.

    we often struggle for the right words to express our thoughts

    But that assumes that a thought is some entity that can precede a verbal representation of it – which is what I question. I can offer a candidate explanation of how an utterance can exist independently – the idea of a context-dependent behavioral disposition (CDBD – see comment 58), which I envision as a sensorimotor neural structure, the motor part of which may produce verbal activity, either overt or internal, in response to sensory stimuli, either internal or external. I’m not familiar with any analogous candidates for reifying the concept of a thought, but would be delighted to get pointers to any that are out there.

    we still have to explain how symbols can come to be about something

    Which is what I think Davidson’s idea of “triangulation” addresses. In brief, it goes something like this (my adaptation of his scenario, which is actually about translation between members of different existing linguistic communities). Two prelinguals are on the savanna, one experiencing visual stimulation that results in neural activity that includes a recurring pattern. He grunts (perhaps “lion”), and points in the apparent direction of the source of the stimulation. The other observes the pointing, turns in that direction, and experiences a (presumably) similar pattern. Each makes an association between pattern and grunt – a CDBD. Subsequent similar events strengthen their respective associations between pattern and grunt. Ie, the grunt first becomes “about” the pattern and later becomes “about” the source of the sensory stimulation (that’s where the “triangulation” comes in). In that sense, the grunt comes to “refer” to any source causing the pattern. Not to suggest that’s what really happens, but it seems a plausible candidate for a primitive explanation.
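    The strengthening-by-co-occurrence story could be caricatured in code. This is a toy sketch of my own, not Davidson's formalism; the class and method names are invented, and 'association strength' is just a counter:

```python
# Toy model of 'triangulation': repeated co-occurrence of a sensory pattern
# and a grunt strengthens an association (a crude stand-in for a CDBD),
# until the grunt comes to track the pattern reliably.
from collections import defaultdict

class Prelingual:
    def __init__(self):
        self.assoc = defaultdict(float)  # (pattern, grunt) -> strength

    def observe(self, pattern: str, grunt: str) -> None:
        self.assoc[(pattern, grunt)] += 1.0  # each co-occurrence strengthens the link

    def strongest_grunt_for(self, pattern: str):
        candidates = {g: s for (p, g), s in self.assoc.items() if p == pattern}
        return max(candidates, key=candidates.get) if candidates else None

a, b = Prelingual(), Prelingual()
for _ in range(5):  # repeated lion sightings, with pointing and grunting
    a.observe("lion-pattern", "grunt-lion")
    b.observe("lion-pattern", "grunt-lion")
a.observe("lion-pattern", "grunt-other")  # a stray mis-association

print(a.strongest_grunt_for("lion-pattern"))  # prints "grunt-lion"
```

    Of course, this only models the correlation-building step; whether such a mechanism grounds "aboutness" is exactly what is in dispute in this exchange.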

    Words have “meaning” (defined however one likes – my preference is in terms of responsive action) only within a linguistic community. Your anecdote seems to demonstrate only that you assumed the wrong linguistic community but produced an appropriate answer to the intended question purely by chance – the confluence of your being a German speaker, of the expressions “jeden tag?” and “jeden, tak?” being essentially indistinguishable by ear, and of your nodded “yes” being a satisfactory response to the intended question. I agree that your response says nothing about the questioner’s intent (or thought, if you will), but I don’t see how that observation relates to our exchange.

  73. Jochen says:

    Charles:

    I see writing down a proof as merely one way of capturing such a dialog – possibly using symbols, each of which could – in principle, though awkwardly – be replaced by a phrase for which the symbol is a shorthand.

    Well, maybe that’s your experience, but I don’t think it’s universal. For my part, I find that I engage in conceptual manipulations before I phrase them in terms of symbols. I vaguely remember a quote by Feynman (though I can’t seem to find it right now) where he describes his thought process when some mathematical concept is being explained to him—he starts with a blank object and, for each definition that is made, imagines it acquiring additional properties—growing hair, changing shape, or whatever.

    Now, I’m certainly no Feynman, but I think there’s something to be said for such conceptual imagination preceding linguistic formulation. After all, when we encounter complex data, we typically understand it much better in terms of a visualization rather than a description; if language were the medium of thought, then one should expect that it’s the description that is readily appreciated, while the visualization would need translating into the appropriate verbal form first.

    Also, I think it’s no accident that modern mathematics increasingly embraces graphical notation—if you know a little category theory, you’ll know about commutative diagrams, which often yield the relationship between various objects ‘at a glance’, whereas understanding the corresponding verbal description might take a much more involved process. There is also a graphical notation for tensor calculations, Penrose notation, as well as the category-theory-derived string diagram notation, and of course Feynman diagrams. All of these, I think, are to the skilled practitioner much more readily appreciated than the corresponding mathematical symbology—that’s, after all, why they’re so successful.

    It just takes advantage of the fact that most of our speech is not all that original.

    It also works only in really limited cases. But, there really isn’t any ‘completion’ done in my mind in such a case: I know what the thought was about, even if I don’t know its formulation.

    But that assumes that a thought is some entity that can precede a verbal representation of it – which is what I question.

    I don’t see how it assumes that; rather, it’s just my experience that I sometimes have to struggle to ‘find the right words’, while nevertheless knowing what I want to express (how else would I know what words to struggle for?)—so this is evidence that language comes later.

    Which is what I think Davidson’s idea of “triangulation” addresses.

    I don’t really think that this account serves to ground meaning (even Davidson himself makes reference to a shared concept of objective truth in order to facilitate communication). It seems obvious that the process you describe can’t be enough: after all, the noise my companion hunter makes is available to me only as another pattern of neural excitations; so then I have two patterns, one correlated with the lion, the other with the noise. To recognize the correlation between them, I’d presumably need a third pattern, corresponding to the realization that pattern A co-occurs with pattern B. But then, how does that third pattern come to mean that?

    Words have “meaning” (defined however one likes – my preference is in terms of responsive action) only within a linguistic community.

    But then, how do you define ‘linguistic community’? Merely by shared understanding? That’s circular. By consistency in actions and expectations? Then the shopkeeper and I were indeed part of a linguistic community. What happened was that a third observer could take the shopkeeper’s question to be in either German or Polish, on equally well-justified grounds. However, only one of these interpretations is right. Hence, the mere behavioral data doesn’t suffice to fix meaning in this exchange.

  74. Sergio Graziosi says:

    Jochen (70 & 71),
    Perhaps I shouldn’t be, but (by the FSM!), I am surprised!
    If I understand what you’re saying (I really hope I don’t), you are denying the possibility of doing science.

    Teleonomy is recognizing certain behaviors to be as if they were directed at certain goals.

    Compare with:
    Electricity is recognising certain behaviours/physical phenomena as if electrons were moving around.

    You are not saying that the latter shouldn’t be done, because it makes “Electricity” just a figment of our minds, are you?
    If you’re not, what is so special about recognising that certain behaviours promote the persistence of the mechanisms which produce them?
    In a world without the concept of electrons, your idealised scientist will “invent” the concept, notice that it allows them to produce reliable enough predictions about the behaviour of certain systems, and therefore use the concept to do just that.
    In the same way, in a world without the concept of “goal”, our idealised scientist may invent the “goal” concept and observe that it can be used to make the behaviour of living things more predictable.

    Moreover, the “as if” implicit in the concept of teleonomy is only a shorthand for a longer explanation, nothing more. I refuse to believe you’re unable to see that yourself. Thus, it seems to me you’re just clutching at straws. As far as I’m concerned (if I am understanding your argument), your last rebuttal has the same strength as a very light breeze on Cheops’ pyramid. I.e. it does have some tiny effect, but in practice the wise option would be to ignore it completely. [Of course, I’m not wise!]

    Besides: “Last Thursdayism”, really? This from the same Jochen who repeatedly exhorted me to keep it simple and avoid tackling tricky epistemological conundrums? I can follow you there, and can still argue that you’re wrong, but I can also predict what will happen if I do. You’ll tell me I’m proposing another underdeveloped BFD and that I need to go home and study more.
    No thanks, I already have the T-shirt! 😉

  75. Jochen says:

    Sergio:

    If I understand what you’re saying (I really hope I don’t), you are denying the possibility of doing science.

    Well, of course there’s no possibility of doing science in a non-intentional world! There are no beliefs, no knowledge, no models, no predictions and tests, etc., etc.

    But that’s not really the point. Of course, there are electrons in a non-intentional world. And these electrons have a certain behavior. There are no models of electron behavior, that’s true, and hence, depending on what you mean by it, no theory of electricity—but the electrons flow just the same.

    However, goals aren’t like electrons. They’re not just lying around out there in the world. In order for there to be goals, there needs to be intentionality—and hence, in a non-intentional world, there is no resource for goal-ascriptions. So, once you ascribe goals to something, you’re importing intentionality into the world, from your own.

    In a world without the concept of electrons, your idealised scientist will “invent” the concept, notice that it allows them to produce reliable enough predictions about the behaviour of certain systems, and therefore use the concept to do just that.
    In the same way, in a world without the concept of “goal”, our idealised scientist may invent the “goal” concept and observe that it can be used to make the behaviour of living things more predictable.

    The thing is, how an electron behaves does not depend on the concept of electron. The electron would just zip off its merry way, whether there is an idealized scientist studying it, or not.

    Not so with goal-directed behavior. There is only goal-directed behavior if there are goals; the existence of such behavior depends on the concept of goal. Thus, while the behavior of every physical system the idealized scientist studies would be the same whether or not he was studying it, it can only properly be called goal-directed if its putative goals are identified—which is an act depending on intentionality.

    Furthermore, goal-identification can only be heuristic at best: in principle, every causal consequence of my actions at a given time can be called their ‘goal’. If, for instance, my goal were to produce clicking noises, then my current actions—merrily hacking away at my keyboard—would be appropriate to achieve this goal. So which of all the possible consequences of my actions—basically everything within my future lightcone—is to be my ‘goal’? Again, we find that this is relative to the beliefs of whoever ascribes these goals to me.

    Deciding this always depends on the model used for the system. So, for instance, the argument is often made that the heart evolved to pump blood, not to produce thumping noises, and hence, its function is the former. But that itself depends on the recognition of the mechanism of evolution—and as I said, in a non-intentional world, there is no science, and hence, no theory of evolution. There is just behavior.

    Likewise, this argument falters in the face of the ‘swampman’-style objection: a randomly assembled heart never evolved, and thus one has no recourse to this counterargument. So, since you ignored the point in the last post: do such random copies then simply not have intentions?

    To me, this highlights that these function or goal-ascriptions are ultimately rooted in the beliefs of some external observer. If they believe that a heart, or a bacterium, evolved, then they may want to count blood-pumping or glucose-seeking among their functions; but a change in beliefs, without any change in the physical system they are about, will bring about a change in this assessment. Hence, these assessments are not fixed by the physical systems they’re supposed to apply to, but by the mind of the observer.

    Regarding Last Thursdayism: I’m of course not advancing this as a metaphysical thesis. But if you intend to defend the thesis that goals (or apparent goal-directedness) are inherent to physical systems, then this must hold in every circumstance—there must be a relation of logical entailment from the system to the apparent goals. However, exhibiting a counterexample—no matter how remote it may seem: all that’s needed is logical possibility—shows that no such relation exists.

  76. Sergio Graziosi says:

    Jochen (75),
    the first draft of this reply was harsh – luckily I had the good sense to sleep on it. I was (re)deploying sarcasm as a weapon of last resort: reading you without concentrating solely on charity makes you seem prepared to use any available trick to obscure my point. You’re burying my argument under thousands of words while refusing to actually address what I’m saying.
    I’m re-writing my reply from scratch, but be aware that I am losing interest.

    For example, I chose to ignore swampman-like arguments because they have nothing to offer me and I find it embarrassing that you thought they are appropriate.
    The world could have been created last Thursday, or even next Thursday, for all we know, with fossils, background radiation, memories of what we’ll write until then and all.
    However, studying how our universe appears to work can’t be concerned with such possibilities; we can only use the evidence available, and this applies to all scientific endeavours. Equally, they are all vulnerable to such arguments, because the problem of induction doesn’t allow for special exemptions.
    If we study bacteria (or any organism, for that matter), one of them, some, or even all, could conceivably have come into existence by accident (or deceitful design), but nevertheless, just like cosmology, we build theories that attempt to account for the structures we find, and how they come into being.
    Thus, replying to this sort of argument might have some metaphysical interest, but it is squarely outside my current remit: producing convincing answers, or failing to do so, might have some impact on the theoretical weight we attribute to science as a whole, but does not apply selectively to biology or evolutionary theory. What troubles me is that you know all this perfectly well, so I’d suggest trying to be a little more careful in the future. Please stop this scattergun approach: simply firing all guns and hoping something will hit a vulnerable target makes the job of replying tiresome, wasteful and uninformative.

    Of course, there are electrons in a non-intentional world. And these electrons have a certain behavior.

    Are we actually that sure? I mean: is this metaphysically established? I thought that the honest stance was:
    “Well, stuff really does work as if electrons really existed, but we can only have indirect access to measurements, so whether they actually exist as we imagine them remains impossible to demonstrate.”

    Noticed the “as if”?

    Moreover, for the Nth and last time, the “as if” included in the concept of teleonomy is just a shorthand, which I’ve unpacked multiple times, but I thought it would be more graceful if I avoided rubbing your metaphorical nose in it. (Sorry!)
    I’ll be a little brutal:
    Shorthand: “B behaves as if its actions were directed at preserving its own integrity and ability to reproduce.”
    Unzipped Shorthand: “B’s behaviours, when considered in its natural, ecologically relevant environment, tend to heuristically preserve B’s integrity and its ability to subdivide into copies of itself (copies which retain many of B’s structural qualities). Heuristically in this case means that, in response to environmental changes, B’s integrity may occasionally be destroyed, but nevertheless B will retain its own integrity and ability to subdivide many more times than chance alone would allow.”

    Do you want to rephrase everything I wrote so far by employing the unzipped version? Seriously! It’s a drag, but I’ve been extremely careful and I’m pretty confident it’s possible.
    Once again, the fact that B has the abilities unzipped above is a fact about the world, which can be detected. You can design a machine which will detect it (within limits, of course). There is no actual concept of goal buried in the unzipped idea of teleonomy; in the zipped one, there is a shorthand that uses the “as if” qualifier to exploit the fact that we are naturally inclined to understand the world in terms of goals. The “as if” bit apparently escapes you completely: it implies there actually is no goal (“X behaves as if it had goal Y, while in fact it doesn’t”). Moreover, the picture I’m painting starts with the apparent (illusory, as in “it doesn’t actually exist”) goal: preserve integrity and ability to reproduce. Thus, all the worries about teleonomy being dependent on what we arbitrarily or heuristically pick as the (illusory, not really there) goal also fail to apply: we are not arbitrarily picking the goals; the (apparent, not really there) goal is fixed by the two tautologies I’ve mentioned earlier (which you have chosen to ignore).
    I am using the concept of teleonomy, precisely because of this: I need to be immune from the kind of attack you’ve tried so far, and I’ve chosen my strategy precisely because it is immune. You, on the other hand, keep telling me things like “there are no goals without pre-existing intentionality” which systematically ignores the fact that I am NOT ascribing goals to the kind of systems I’m discussing. [Pointing out all this feels like insulting your intelligence, explaining my reluctance.]
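    The claim that a machine could detect the unzipped property can be sketched as a toy statistical test. This is purely illustrative: the 'survival' model and the numbers are invented, and the sketch only checks whether a system preserves itself more often than a chance baseline, without any goal concept appearing in the code.

```python
# Toy 'detector machine': flags a system as teleonomic (in the unzipped,
# goal-free sense) if it preserves its integrity and divides significantly
# more often than chance alone would allow.
import random

def survives_and_divides(response_quality: float) -> bool:
    """Toy model: one environmental perturbation; better responders survive more often."""
    return random.random() < response_quality

def detect_teleonomy(response_quality: float, baseline: float = 0.1,
                     trials: int = 10_000, margin: float = 0.05) -> bool:
    """Flag True if observed survival frequency beats the chance baseline by a margin."""
    survived = sum(survives_and_divides(response_quality) for _ in range(trials))
    return survived / trials > baseline + margin

random.seed(0)
print(detect_teleonomy(0.9))  # bacterium-like responder: survival far above chance
print(detect_teleonomy(0.1))  # stone-like non-responder: no better than chance
```

    Jochen's counter, of course, is that calling this output a 'detection' already presupposes an interpreting observer; the code itself just sets a flag.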

    Thus, there is exactly nothing in your last replies that gets near my argument. You are barking up the wrong tree, while all the time I’ve been standing beside you, my argument in hand, trying to stop you from barking and to get your attention. I am trying one more time, without punching you in the (metaphorical) face, but I do note that the temptation is strong. Can you please turn around, stop looking up, and acknowledge where I am?
    Sure, I do want to start climbing, but I haven’t started yet.

  77. Jochen says:

    Sergio:

    For example, I chose to ignore swampman-like arguments because they have nothing to offer me and I find it embarrassing that you thought they are appropriate.

    These arguments are a common objection to teleosemantic theories; sure, you can ignore them—as you can ignore any argument that doesn’t fit with your predilections—but it’ll make your position much weaker.

    However, studying how our universe appears to work can’t be concerned with such possibilities, we can only use the evidence available, and this applies to all scientific endeavours.

    Again, you mistake the purpose for which I introduced that argument: if you claim that there is a relation of logical entailment from physical systems to their apparent goal, then exhibiting a situation—regardless of its facticity—in which that’s not the case shows your claim to be wrong. As an analogy, consider Plantinga’s Free Will Defense: it is an answer to the question of theodicy that presupposes that free will is an inherent good, and that God could not have eliminated the evil in the world without sacrificing this good. Now, the key realization is that this defense works even if we have no free will: because it shows a consistent possibility in which God is omnipotent, omniscient, and omnibenevolent, and this isn’t in conflict with the existence of evil; hence, the tri-omni characteristic is not itself inconsistent with the existence of evil.

    Likewise, if Last Thursdayism were true, then it is not the case that one can define goals based on evolutionary notions of function; and hence, one can’t in general define goals based on those notions (a counterexample exists).

    Please stop this scattergun approach: simply firing all guns and hoping something will hit a vulnerable target makes the job of replying tiresome, wasteful and uninformative.

    My arguments all aim at the same target: breaking you free from your habit of seeing the world in intentional terms, and confusing the intentionality of your view on the world with the intentionality present in the world. In particular, the Last Thursdayism and swampman arguments—regardless of your disdain for them—show that it does not suffice to take the world as it physically is, in order to derive putative goals.

    Are we actually that sure? I mean: is this metaphysically established?

    ‘Electron’ is just a word for a set of effects, of interactions, if you want to take an operational stance. The point is that this set of effects and interactions is independent of any observer, while the set of (putative) goals of a system isn’t.

    Unzipped Shorthand: “B’s behaviours, when considered in its natural, ecologically relevant environment, tend to heuristically preserve B’s integrity and its ability to subdivide into copies of itself (copies which retain many of B’s structural qualities). Heuristically in this case means that, in response to environmental changes, B’s integrity may occasionally be destroyed, but nevertheless B will retain its own integrity and ability to subdivide many more times than chance alone would allow.”

    And it’s exactly this unzipped version I’ve been attacking: in order to say any of this, you need to have a model of B; you need to be able to answer counterfactual questions about it, such as ‘if B were to do x, y would happen’. There is no such model in a nonintentional world.

    Sure, as such, these counterfactuals merely assert physical truths, provided the model is a good one. But, you then go on and substitute these goal-ascriptions for actual goals the bacterium may have; and then, go on further and say things like ‘our interpretation of phosphorylation as a signal isn’t arbitrary, and if it isn’t, on a weak interpretation, we already have intentionality: something is about something else’.

    This simply mistakes the apparent goal we can associate with the bacterium, using our intentional facilities of modeling and counterfactual reasoning, with the actual goal the bacterium has—you’re saying that since we can treat the bacterium as if it had that goal, we therefore can consider it to have that goal (otherwise, the phosphorylation simply would mean nothing—or rather, would mean the same thing as the flicking of a switch does to a lamp: it simply triggers a biochemical response).

    It’s this (inadvertent) bait-and-switch I object to: you use your intentional faculties to find apparent goals for the bacterium, then claim that those in some sense are the goals of the bacterium, and then that because of these, physico-chemical signals can come to be ‘about’ something to it. But any intentionality (even ‘dry’ or ‘proto’) in this scenario is just the afterglow from the intentionality you put in at the start.

    Once again, the fact that B has the abilities unzipped above is a fact about the world, which can be detected. You can design a machine which will detect it (within limits, of course).

    You can design a machine that will, say, flash a red light if certain conditions are met (or draw a pretty graphic, or print some letters, all of which is just the same thing); but to interpret that as a detection of certain abilities of the bacterium requires intentionality—it requires connecting the data structures of the machine to a certain physical system, i.e. for those data structures to be about, or pertain to, that system.

    Seriously, read your ‘unzipped’ description again: it is laden with intentional acts. Without intentionality, there is no saying what B’s behaviors do: doing so requires a model of B. It requires, for instance, being able to hypothetically subject B to different conditions, and to derive what would happen to B in such a case.

    So whether you want to talk about goals or not, you are relying on your intentional faculties to derive behavioral consequences for B, and then use the conclusions of these intentional acts of reasoning to motivate characterizing it in a certain way, suggestive of intentionality of its own. But you only get out what you put in.

    Let me try to re-trace your reasoning. First, you note that the waking-up response of the bacterium in the presence of glucose is conducive to it preserving its structural integrity, and that preserving its structural integrity is an evolutionary response, since all the bacteria who didn’t bother with that stuff just aren’t around anymore. Hence, you conclude that the bacterium acts as if it were trying to preserve its structural integrity. This is true, of course; but it’s also true that it’s a fact you need intentionality to assert.

    Then, you go on to note that P-phosphorylation due to the presence of glucose is what triggers the waking-up response. But now comes the crucial part: from this, together with the bacterium acting as if it were trying to preserve its integrity, you conclude that P-phosphorylation carries information about the presence of glucose. And it does, but it does so to you: you are the origin of the premise that the bacterium is acting as if to preserve its integrity. This is the bridging belief that I mentioned earlier, and it’s you who has that belief, not the bacterium. The P-phosphorylation is not information about glucose to the bacterium, but to you; but you go on to reason as if it was just information about glucose simpliciter, without any qualifications.

    This is illustrated by showing that what information the phosphorylation contains is relative to your beliefs: if you didn’t know about the role of glucose, you couldn’t draw the above conclusion. Sure, in the real world, your belief happens to be correct, and believing that glucose, say, attacks and destroys cell membranes would be wrong, but this misses the point: you still need to have a belief (whether it’s justified and true or not) in order for the phosphorylation to constitute any information at all, and the bacterium has no beliefs.

    This is why I introduced the swampbacterium scenario: there, it’s not the case that the waking-up response is an evolved mechanism to preserve structural integrity, and hence, that is not its objective function; and yet, if you believe the bacterium to have evolved, due to its physical identity to other bacteria of the same kind, you would feel as justified as you do in the case of an actually evolved one, and conclude that P-phosphorylation is about glucose, even though in that case it actually isn’t.

    Again: it is your belief that makes P-phosphorylation about glucose, and the bacterium simply doesn’t have any beliefs. It doesn’t matter whether that belief is true or false; all that matters is that it’s a belief, and that without this belief, there would be nothing that P-phosphorylation is about. It is its belief-like nature from which the meaning of P-phosphorylation is derived, not its factual content; i.e. it’s not what it’s about that allows you to use it in the way you do, it’s just the fact that it’s about something.

    Without some belief to play the bridging role, what you have is basically a bunch of stones impacting one another (I do introduce these examples for a reason, you know: to try and get you to actually consider just the physical behavior of a system, rather than viewing it from within an intentionality-laden interpretation). It’s from this, the bare physical dynamics, that you have to derive intentionality, if you want to derive it at all (which is of course what I try to do with the cellular automaton dynamics in my paper). Sure, the bare physical dynamics is only available to us from within a model, and thus exists in an intentional frame; but one must try to avoid importing facts about the model into the system—and that is exactly what you do when you say that it behaves ‘as if’ it were, say, attempting to uphold its structural integrity.

  78. Sergio Graziosi says:

    Jochen,
    according to your logic, we can’t produce a hypothesis about intentionality, because the act of producing a hypothesis is intentional, and that’s not allowed, as you can play the circularity card.
    If that’s the case, why do we even bother? It would apply to any attempt, full stop.
    I point to mechanisms and patterns, point out that mechanisms can respond to patterns, then point out that one of these mechanisms closely resembles what might allow us to build a mechanistic view of intentionality, and you say, “ah, you’re pointing, that’s not allowed”. Or better, when I describe a pattern, you say “ah, you’re importing intentionality, because your description is intentional”. Please!

    This resembles the epiphenomena discussion: if MechanismX produces a strong epiphenomenon Y, there is no way to prove that it doesn’t – but there is also no evidence that Y exists, otherwise it wouldn’t be an epiphenomenon. So all you do is say “ah, but you can’t prove that Y isn’t there” and retreat into your self-constructed dead end.

    If you forbid intentional language in any discussion that tries to address intentionality, you forbid all discussions about intentionality, because language is intentional, by definition. You can selectively forget your criteria when it suits you, but that’s cheating.

  79. Jochen says:

    Sergio:

    according to your logic, we can’t produce a hypothesis about intentionality, because the act of producing a hypothesis is intentional, and that’s not allowed as you can play the circularity card.
    If that’s the case, why do we even bother? It would apply to any attempt, full stop.

    No, that’s not quite right. What I do is to try and get you to only use the physical behavior, rather than the properties of the model of that behavior. See, you have a physical system, B, and a model of that system that you use to derive conclusions about B, M(B). This is completely fine; however, what you can’t do is to use a property of M(B) as if it were a property of B. So, the fact that B behaves a certain way is (obviously) a property of B; your belief that B behaves a certain way is a property of M(B).

    Your conclusion that P-phosphorylation is about glucose is mediated by your belief about B’s behavior; thus, this conclusion is not one that holds of B, but rather, one valid within M(B). It’s not the fact of B’s behavior that allows you to draw the conclusion you wish to draw, but rather, your belief of that fact. If you carefully distinguish between these levels, everything’s fine; but to me, you blur that distinction.

    So it’s perfectly fine to use intentional language—indeed, unavoidable. But what you can’t do is to use intentional concepts—elements of M(B)—as if they were properties of B. It is not merely the fact of B’s behavior that makes P-phosphorylation pertain to glucose, it is your belief (or knowledge) of that fact; without that belief, P-phosphorylation would simply be an effect, a consequence of the presence of glucose—but effects are not about their causes in and of themselves. Looking at just the behavioral facts, without looking at beliefs about these facts, does not allow you to draw the conclusion you wish to draw.

  80. Sergio Graziosi says:

    however, what you can’t do is to use a property of M(B) as if it were a property of B

    Uh? Care to remind me why it is that we bother building models? Any prediction we make based on a model rests on our knowledge/belief that the model is modelling the relevant properties of the original system. We use properties of M(B) to hypothesise about the properties of B; that’s what models are for.
    That’s why I find your line of attack so infuriating/surprising/hard to believe. If taken seriously, it does negate the possibility of making legitimate science.

    My conclusion so far is that P-phosphorylation behaves as if it were about glucose. I’m tempted to go further, just tempted, remember?

    Pattern A: some system has some abilities of self preservation, via certain mechanisms.
    “Pattern A” is our model of “some system”.
    Property B(A): for A to be true, certain mechanisms must act “like” (as if they were) signals. This is equivalent to saying “via certain mechanisms”: if “some system” has more than two possible states, which is implied by using “mechanisms” in the plural, then intermediate states act “like” signals.
    Pattern C(A): thus, for all systems which display Pattern A “[their] internal states […] will appear to model — and act on — their world”.
    We obtain B and C by manipulating Pattern A (the model) in our heads. We think about the model because it captures what we are interested to consider, while excluding what we don’t care about. That’s what models are for.
    Forget the other conclusions so far, it is all “as ifs”. No one is saying “we’ve found Intentionality” (capital I and no scare quotes), not yet at least. Where do you start disagreeing?
    I can’t proceed if you don’t even agree with the above!

  81. Jochen says:

    Sergio:

    Uh? Care to remind me why it is that we bother building models? Any prediction we make based on a model rests on our knowledge/belief that the model is modelling the relevant properties of the original system. We use properties of M(B) to hypothesise about the properties of B; that’s what models are for.

    You misunderstand. Let me be a little more explicit. X can be used as a model of B, if X is, in some way, isomorphic to B—say, it embodies a set of structural relationships isomorphic to those of B. If it is used as a model, this means that we interpret the structural relationships of X as those of B, or as representing them. Then, X is a model M(B) of B.

    Now, we can use this isomorphism to derive conclusions about B, since whatever holds of the set of structural relationships of M(B), by virtue of this isomorphism, also holds of the relationships of B. This is the ordinary way of using models.

    A while back, I used the example of a stack of books of different thickness modeling the set of paternal ancestors of a person, where ‘thicker’ gets interpreted as ‘ancestor of’. Thus, if Moby Dick is thicker than The Tropic of Cancer, and Moby Dick is mapped to John, while The Tropic of Cancer is mapped to Jack, you know, by just comparing book thickness, that John is an ancestor of Jack; if there are three books of intermediate thickness between these two, you even know that John is Jack’s great-great-grandfather. Such deductions are completely unproblematic, and are how models are usually used.
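    [Ed.: the book-stack model can be sketched in a few lines of Python. All titles, thicknesses, and the book-to-person mapping below are invented for illustration; only the ordering isomorphism — ‘thicker than’ interpreted as ‘ancestor of’ — does any work.]

    ```python
    # Toy model: book thickness stands in for 'ancestor of'.
    # Thicknesses and the mapping to people are invented for illustration.
    books = {"Moby Dick": 720, "Middlemarch": 640, "Bleak House": 560,
             "Ulysses": 480, "The Tropic of Cancer": 400}
    mapping = {"Moby Dick": "John", "The Tropic of Cancer": "Jack"}

    def is_ancestor(book_a, book_b):
        # 'thicker than' is interpreted as 'ancestor of'
        return books[book_a] > books[book_b]

    def generations_between(book_a, book_b):
        # books of intermediate thickness read off the generational distance
        return sum(1 for t in books.values()
                   if books[book_b] < t < books[book_a])

    # John is an ancestor of Jack, with three intermediate generations,
    # i.e. John is Jack's great-great-grandfather:
    assert is_ancestor("Moby Dick", "The Tropic of Cancer")
    assert generations_between("Moby Dick", "The Tropic of Cancer") == 3
    ```

    Note that the deduction uses only the ordering relation; nothing else about the books (their opening sentences, say) transfers to the people, which is exactly the limit of the isomorphism discussed below.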

    However, X unavoidably has properties that are not those of B—otherwise, X would just be B. Now, some of these it has by virtue of being used as a model—that is, because it is a model of B, not B itself, it has properties by which it differs from B. One is that it is used, for instance—it is not the thing it is (a model of B) in and of itself, but because it is interpreted as such. Another may be that it’s in your head. Surely you agree that to conclude that, because M(B) is in your head, B is in your head too, would be a grave mistake!

    This is basically what I mean in my arguments above: you take something true of M(B)—essentially that it’s an intentional object, that it supports meanings and the like—and then conclude this to hold for B. But this exceeds the range of the structural isomorphism between the two; hence, that it’s true of the model does not entail that it’s true of the system it models.

    So no, I’m not denying ‘the possibility of doing science’, or anything like that. I’m merely urging caution.

    My conclusion so far is that P-phosphorylation behaves as if it were about glucose. I’m tempted to go further, just tempted, remember?

    Well, to be honest, I read you as being a little more than just tempted—you have at several points drawn conclusions like:

    The original recognisable (proto)intentionality of the signals I have described (the bit that was circular, because of my own interpretation), has become available to the system itself, because it was there from the very beginning!

    But that’s just an example of what I’m cautioning against: you’ve used the behavior of the system as if P-phosphorylation were about glucose, and then gone on to claim that this gets instantiated as some (proto)intentionality in the system. But in doing so, you’ve conflated model and system, because the ‘behavior as if’ is an interpretational fact about the system, one of M(B), not, however, one of B itself.

    If you now want to retract this, and merely claim that the system can be interpreted as behaving as if P-phosphorylation signals glucose, then sure, knock yourself out—but that interpretation can’t figure in the grounding of intentionality for the system (hence, I would prefer leaving it out altogether, as one gets tangled up too quickly—the intentionality inherent in the view on the world is so all-pervasive that, like air, it’s all too easy to miss; imagining a world without it is a bit like imagining nothing: you almost always end up imagining something that you call ‘nothing’).

  82. Jochen says:

    Or perhaps the following helps to make things clearer: when you say that B behaves as if P-phosphorylation means glucose to it, you’re saying that B can be modeled by a system to which P-phosphorylation actually means glucose. But to conclude some (proto)intentionality from that is as wrong as concluding that great-great-grampa John starts with the words ‘Call me Ishmael’.

  83. Sergio Graziosi says:

    Jochen,
    that’s good. More kudos for not giving up: if we get somewhere it will all be due to your astonishing patience (I’m sure you’re at least as frustrated as I am).
    I can run with models and isomorphism, but I still think you’re wrong.
    What you call an inadvertent bait and switch, I call deliberately constructing a second order model, in this case specifying a model of model-like mechanisms.
    Of course, when thinking about stuff, the distinction between adding more to a model and building a second one on top of the first is blurry and down to convention, so I’ll make up a convention that should help unravel what’s going on here. This is all really confusing because modelling models is far too abstract for my little brain, and at the same time we end up with a vocabulary clash (it’s hard to make clear whether you’re referring to the model of models, the contents of a model, or the original system which got us started). I’ll try!

    I’ll start from quoting myself:

    Pattern A: some system has some abilities of self preservation, via certain mechanisms.
    “Pattern A” is our model of “some system”.
    Property B(A): for A to be true, certain mechanisms must act “like” (as if they were) signals. This is equivalent to saying “via certain mechanisms”: if “some system” has more than two possible states, which is implied by using “mechanisms” in the plural, then intermediate states act “like” signals.
    Pattern C(A): thus, for all systems which display Pattern A “[their] internal states […] will appear to model — and act on — their world”.
    We obtain B and C by manipulating Pattern A (the model) in our heads. We think about the model because it captures what we are interested to consider, while excluding what we don’t care about. That’s what models are for.

    From the first line, “Pattern A” is our first order model. A conceptual representation of what systems which have self-preservation qualities necessarily have in common.
    When we observe B(A) and C(A), we’ve already moved onto second order reasoning. We take properties of our first order model (we already have plenty of evidence that they hold for the real-world actual systems), and make considerations about the model-like properties that we are finding therein – crucially, these depend on what we’re including in the model (the modelled mechanisms); these properties wouldn’t be there otherwise.
    Thus, we’ve changed the subject: we are still modelling self-preservation, but at a distinct (additional) level of abstraction; we are, in fact, modelling the model-like qualities of the original systems.

    You say that doing so isn’t allowed, because it injects model-like qualities derived from the pre-existing idea of models (or intentionality) that I hold.
    I say that this objection has no teeth, because isomorphism is all I need to care about. If patterns within Pattern A happen to be isomorphic with patterns found both in the actual systems and in my pre-existing idea of models, then that is precisely the reason why I’m allowed to latch onto them, notice the correspondences and see if they allow us to produce new interesting hypotheses. Call it abduction if you must – the bread and butter of multidisciplinary theoretical cross-fertilisation.

    A few words on B(A) and C(A): they really are one and the same. If something acts like an internal signal about something outside our system, then the system’s changes of internal states will follow the changes of some (limited) variables on the outside. Thus, our system A will show some of the properties we ascribe to models, allowing us to use the “like” or “as if” qualifiers.
    At this point, we are solidly on second order grounds, building a model of naturally occurring systems which happen to have some interesting model-like qualities.
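    [Ed.: a minimal sketch of the B(A)/C(A) point, using a hypothetical two-state system whose internal ‘phosphorylation’ flag simply tracks an external glucose variable. Nothing in the dynamics is intentional; the point is only that the internal state covaries with the world, which is what licenses the “as if” reading.]

    ```python
    # Hypothetical toy system: an internal state that covaries with
    # the environment. All names are invented for illustration.
    class ToyCell:
        def __init__(self):
            self.p_phosphorylated = False  # internal state

        def step(self, glucose_present):
            # bare dynamics: the state is just an effect of the input
            self.p_phosphorylated = glucose_present

    cell = ToyCell()
    history = []
    for glucose in [False, True, True, False]:
        cell.step(glucose)
        history.append(cell.p_phosphorylated)

    # The internal state mirrors the external variable...
    assert history == [False, True, True, False]
    # ...so an observer can model the cell 'as if' phosphorylation were
    # about glucose -- but the aboutness lives in the observer's model,
    # not in the bare dynamics shown here.
    ```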

    Of course, that is where I wanted to get from the start, and I couldn’t help running ahead (yes, I am more than tempted, but we still need to agree on premises, assumptions and methods), anticipating my first milestone in ways that have been manifestly unhelpful (my bad!). Nevertheless, the aim was always to model stuff that has some interesting model-like properties. Naturally, I can’t do so without deploying concepts about models, otherwise I remain without model-like properties to model, right?
    You insist that I’m not allowed, while I insist that this way of proceeding is precisely how (generating hypotheses useful for) theory building is supposed to happen. We have some useful concept, we find that the properties of this concept are isomorphic to some properties of a given system, so we try to model the given system on this basis, and see if the model picks up some more qualities of the actual system. If it does, and these are also qualities we didn’t know how to model before, then hey, we have a result.

    So, you can try to claim that modelling models is circular, and therefore not allowed (objection 1). To this I reply: well, then all attempts of understanding intentionality are doomed (intentionality is a concept that is, as far as we can tell, necessary to describe models, an unavoidable ingredient), because building models, or providing explanations, are necessarily activities which rely on intentionality. My rebuttal is weak, you may still be right, but the rebuttal succeeds because it shows that we can only try and see; perhaps we’ll fail, but if we want to try we can only start by hoping that objection 1 doesn’t apply.
    Otherwise, you can say that what I’m trying is not permissible because, by modelling anything, I’m injecting pre-existing, unrelated concepts (our ideas of what models are like and what intentionality is), and then projecting these qualities into the modelled system, while they are merely an artefact of models. This is objection 2. To this I reply: I do so because existing isomorphisms (not ones I’ve added myself) suggest it might work; maybe our model-of-models is isomorphic to some or all self-preserving mechanisms in ways we still have to identify. Call it abduction, if you must. Of all the modes of reasoning, abduction is the most likely to fail, but again, that’s no reason to avoid trying. We should try and see if it works. Sadly, I’ve been trying all along, but you’ve been insisting that my premises are flawed, so I’m still not allowed to try, apparently.

    What you can’t say is that I’m claiming that properties of the models I’m building simply are also properties of what I’m modelling: I’m not claiming it! (but yes, sometimes I do slip)

    A bit on the side: I have indeed made the mistake of running ahead, assuming that you conceded B(A) and C(A), guilty as charged. However, perhaps now you’ll see how I unpack the following:

    We obtain [B(A)] and [C(A)] by manipulating Pattern A (the [first order] model) in our heads. We think about the model because it captures what we are interested to consider [properties of models, we are now modelling models], while excluding what we don’t care about. That’s what models are for.

    If this isn’t allowed, then you are forbidding derivative or second-order models, or abduction, or speculative hypothesising. They are all part of standard scientific practice, so I am still worried that you are being a tad too strict ;-).

    Re your books and ancestor analogy (I am running ahead now, mind it!), I will need to extend it in absurd ways, in order to map what I’m doing accordingly. We have books, with property “thickness” which maps onto people’s ancestry. We have (arbitrarily modified) people, with ancestry/descendency properties, but also other properties (PropA, PropB and PropC). We can measure PropA, know very little about PropB, and close to nothing about PropC.
    We build our model with books; we have a big library, so we can find books which are of the right thickness, but which also have “starts with” values which can be algorithmically mapped onto the PropA of people. Note thus that we’re following strict additional constraints. We do all this and notice that all books also have an “ends with” property. We also notice that “ends with” maps well onto the elusive PropB of people. Ah! That’s interesting. Maybe we can use the readily available “ends with” property to make inferences about the PropB of any person (if we can find the right book, according to thickness and “starts with” constraints!) – we’ve now shifted our subject: from simply modelling ancestry we are now modelling other properties of people too – we’re extending our model, or better, building a derivative model. Moreover, if we can, we’ve stumbled on an odd regularity, which suggests there may be a relation between books and people – one which we might have suspected, but not something we could explain in full. Eventually we may find that PropC curiously correlates with the “language” property of our books!

    In my case, “ends with/PropB” are the set of puzzling qualities of intentionality (I think!). Books are still our first order model, the stuff that we can easily manipulate (in our minds). We’ve built our model following the “starts with” constraint because we did suspect that there are relations between “starts with” and “ends with” and between books and people.
    In my case, these relations are somewhat like the (help! now I’m modelling the relation between first order models and models of secondary properties built on top of them!) “as if” qualifier, while intentionality is closely related to the mysterious relation between books and people (or something like that, I’m lost!). Maybe, just maybe, we’re making some progress. If we find other relations between books and people (perhaps language/PropC?), maybe we’ll be able to derive some knowledge about people by just looking at books (subject to empirical verification, of course).

    So, in my extended version, saying that great-great-grampa John starts(/PropA) with something corresponding to “Call me Ishmael” is fine, as we built our model according to this constraint (mechanisms in the bacterium have important isomorphisms with “models”). Saying that John ends(/PropB) with something corresponding to “another orphan.” is speculative, but is the reason why we bothered to model people with books.

    Back to square zero, you say:

    you take something true of M(B)—essentially that it’s an intentional object, that it supports meanings and the like—and then conclude this to hold for B. But this exceeds the range of the structural isomorphism between the two; hence, that it’s true of the model does not entail that it’s true of the system it models.

    Well, no. I take the M(B) property of being an intentional object, and show that there are interesting isomorphisms with B. The most important one is that the structural, purely syntactic information is present in both, while in M(B) there is also the fully semantic information. Crucially, the presence of semantic information in M(B) requires the presence of its syntactic precursor. From this interesting isomorphism, and the lack of others, I plan to build more (see my unfortunate runs ahead). [Running ahead = Thus, maybe something similar to what enables semantics in M(B) (isomorphic, “like”, “as if”) is also present in B itself; if not, maybe adding an isomorphic ingredient to B++ (isomorphic, not identical, to what allows semantic information to be present in M(B)) can allow us to explain how a speculative B++ might acquire abilities which are themselves isomorphic to what we call semantic abilities…]

    Questions (please do not comment on the Running ahead sections, I still can’t help it, but at least I can label them!):
    Am I allowed to point out that B(A) and C(A) appear to hold?
    If I am, can I therefore use “Property A” as the foundation of a (speculative) model-of-models?

  84. Jochen says:

    Sergio:

    You say that doing so isn’t allowed, because it injects model-like qualities derived from the pre-existing idea of models (or intentionality) that I hold.

    No, this is not what I’m saying. It’s perfectly fine to use intentional systems to model non-intentional ones, as long as one is careful. So, for instance, I can model the stone rolling down a hill with an intentional system that wants to get to the bottom; but that doesn’t imply that the stone acquires any intentionality (proto or otherwise).

    Of course, a bacterium is different from a stone in several ways—your chief anchor seems to be the presence of evolved reactions to environmental conditions, which give us a notion of ‘function’ or even ‘goal’, at least in the sense of ‘acting as if it wants to achieve that goal’ (i.e. the intentional system we use to model the bacterium is constrained via the presence of evolved functions).

    But this can be duplicated in the stone, too. Picture a whole population of stones, of different sizes, shapes, and materials. Now let all of them tumble down the hill. Some of them—say, big, flat ones, or cube-shaped ones, and so on—will come to a halt almost immediately. Others might last longer—say, a disc shaped one, until it eventually tumbles on its flat side.

    At some point (imagine this is a very looong hill), only those stones which are ‘well-adapted’ to their putative task will still be in the running—that is, those that are best shaped to roll are those that roll the farthest (as you point out, this is nothing but a tautology).

    Now, imagine, say, the terrain changing: say, from a grassy ground, to one of sharp stones and edges. Now, stones that have survived the selection process up to this point may come to a halt here—say, a stone that isn’t very elastic might not be able to jump obstacles; a stone that’s too fragile may get shattered; and so on. But some stones, which have just the right elasticity and hardness, may persist—perhaps they might get more and more rounded by abrasion, by having small pieces chipped off, etc.

    In each stone that has made its way so far, we have effectively a trace of its ‘selection history’ up to this point, in its form—it has responded to changes in environment by adaptations. Again, this is tautologous: if it hadn’t, it wouldn’t be rolling anymore.

    But this is, it seems to me, sufficiently analogous to the case of the bacterium to equally well motivate a notion of ‘function’ to anchor our intentional model (although of course I’m not saying that it’s a complete analogy for the evolutionary algorithm)—like the stone, the bacterium has gone through changes in its evolutionary history in response to environmental pressures, reflected in its current form—i.e., its biochemical makeup. The ‘waking up’-response of the bacterium is, on this perspective, not different from the stone’s rounded shape—both are adaptations that help the systems respond to environmental ‘signals’, in order to further their apparent goals—getting to the bottom of the hill, or preserving structural integrity.
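    [Ed.: the stone analogy can be sketched as a toy selection loop — all parameters below are invented. Stones with random ‘form’ properties pass through successive terrain stages acting as filters; the survivors’ forms end up carrying a trace of the terrain history, with no intentionality anywhere in the dynamics.]

    ```python
    import random

    random.seed(0)

    # Each stone is just a bundle of physical properties (invented 0-1 scales).
    stones = [{"roundness": random.random(),
               "elasticity": random.random(),
               "hardness": random.random()} for _ in range(1000)]

    # Terrain stages act as filters: a stone 'survives' a stage only if
    # its form happens to suit it -- a bare tautology, not a goal.
    stages = [lambda s: s["roundness"] > 0.5,    # grassy slope: must roll well
              lambda s: s["elasticity"] > 0.5,   # rubble: must bounce clear
              lambda s: s["hardness"] > 0.5]     # sharp rocks: must not shatter

    survivors = stones
    for stage in stages:
        survivors = [s for s in survivors if stage(s)]

    # The survivors' forms now 'record' the selection history: every stone
    # still rolling suits every terrain it has passed through.
    assert survivors
    assert all(s["roundness"] > 0.5 and s["elasticity"] > 0.5
               and s["hardness"] > 0.5 for s in survivors)
    ```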

    But of course, there is no question of intentionality in the case of the stone. Here, it’s easy to smoothly differentiate between model (which has intentionality, i.e. wants to get to the bottom) and the original system, and there is no temptation to assign the model’s intentionality to the stone.

    If that’s the case, however, then the same should be true of the bacterium: it (or rather, its germline) is the result of a selection process analogous to that of the stone’s form; thus, anything that holds for the functions of the bacterium, including any intentionality-bestowing powers, should hold for the stone, as well. But there’s none of that in the stone.

    One can, of course, work this analogy out to more closely resemble the selection process in the stone’s case—interpret the landscape as a literal fitness landscape, have the stone subject to some random jitter to occasionally leave local minima, and so on. But this, I trust, won’t be necessary. The point is that if you’re ready to ascribe (proto)intentionality to the bacterium—by means of ascriptions of function, goals, behavior as if—, I don’t see how you could keep from doing the same to the stone. But I also don’t believe you’d be willing to do so: after all, in the case of the stone, it’s completely transparent that any intentionality just derives from that of the intentional modelling system. But then, the same is true in the case of the bacterium—albeit, it may be harder to spot there.

    Thus, to try and answer your question: yes, we can model a system such that B(A) and C(A) hold. But we must not lose sight of the fact that this is a property of how we model the system—it may also be a property of the system itself, but this is only certain if we know the system is intentional. Conversely, when we know it’s not (as in the stone undergoing the selection process), even though we can model it such that B(A) and C(A) hold, we know that this isn’t the case within the system. Hence, pointing to B(A) and C(A) doesn’t tell us anything about whether there is intentionality within the system (with the usual proto-etc qualifiers as you deem necessary).

    What you’re saying when you point to B(A) is that there is a model such that this model possesses signals; for C(A), you’re saying that there is a model that contains an internal model of the world within itself. This is unproblematic—you can model any non-intentional system using an intentional system (I can become a model of a stone rolling down a hill by trying to get to the bottom of it).

    But note that both are true of the stone, as well: say, e.g., that areas of large boulders are often preceded by smaller rubble. Thus, a stone of the right elasticity will, due to its momentum, impact the smaller rubble, bounce high up, and thus, hopefully clear the obstacle—it can be modeled by a system which interprets the smaller rubble as a sign of oncoming bigger obstacles, which prompts it to jump higher.

    Furthermore, the form of the stone (in a general sense, including qualities like weight, elasticity, and so on) is a consequence of the properties of the terrain; knowing the form of the stone, it is possible to extract some of these properties. Hence, it can be modeled ‘as if’ it contains a model of the environment—that is, there is an intentional system that we can use as a model of the stone, which has a model of the environment that contains just those properties of the terrain that can be extracted by study of the stone.

    However, concluding from there towards any intentionality of the stone is the confusion of model and system I am cautioning against. But then, such a conclusion is not in general permissible (there exists a counterexample). Hence, using these facts about the modeling of the bacterium to derive facts about any putative intentionality associated with it is not logically sound.

    Regarding your extension of my book-model example, yes, it’s indeed sometimes the case that one is fortunate enough that an intended model for some properties of a system ends up modeling additional properties. But this is not what you’re doing (well, or wanting to do—if you can keep running ahead, then so can I!). Rather, you’re making the erroneous leap that because in the model, properties are connected to each other in a certain way, they are also so connected in the modeled system. That is, the model maps behaviors to behaviors: behaviors of the physical system, e.g., a stone, to behaviors of an intentional system. In the intentional system, now, those behaviors are connected just by the intentional qualities of the system; but to conclude that, therefore, they are connected in this way also in the system being modeled, is a leap too far, and in fact logically unsound.

    Consider that, for example, in the set of books, a given start and ending are connected because that’s how the author wrote it. To now conclude that PropA and PropB are connected in the people because an author wrote them leads to meaninglessness.

    Or, consider that other favorite model of mine for the modeling relationship, the orrery. In the orrery, the little spheres that stand for planets are kept on their orbits by a sophisticated clockwork mechanism; but that is no license to conclude that, thus, the planets are likewise kept on their orbits by a clockwork mechanism. Although of course, something like that conclusion has, historically, indeed been drawn—and turned out to be wrong.

  85. Sergio Graziosi says:

    Thanks Jochen!
    I’m still a little confused, though (at least one of us is).
    In the example of the stones, the “strongest anchor” isn’t their evolutionary history, it’s whether the stones are still rolling or not. I.e. if their behaviour matches the isomorphic one in our model. And yes, there is no temptation of seeing true intentionality (the one we think we possess) into the very long hill with rolling stones, that’s a plus. [Running ahead: it’s a plus, because we don’t have the true intentionality we think we possess. With Dennett, true intentionality is like true magic, it can’t exist.]
    So, on one hand, I’m perfectly happy to count the hill of the rolling stones (must be somewhere near Glastonbury, I assume) as another (invented) system which complies with the required isomorphisms.
    Such a system has a quality, that of “false” intentionality (false with scare quotes, to help reassure us that we don’t think it’s the true one, but also to cheekily remind us that perhaps what we call true intentionality is not what we think it is). I.e. such a system has the right kind of isomorphisms to map onto a model of stones which behave as if they wanted to roll down the hill. Pretty weak isomorphisms, as they will all disappear in easy-to-predict finite time, but if we stretch the imagination and propose an endless slope with endless supplies of stones with infinite starting shapes and materials, then it’s fine.
    The original constraint, “preserves its structural qualities”, for the hill of the rolling stones becomes “keeps rolling”, much more limited in degrees of freedom, so far less interesting, but still ok. Despite how you’ve understood my argument, the evolutionary considerations allowed us to (somewhat pre-theoretically, if we’re in the business of building a theory, but post-theoretically if we consider that we are using existing theories to provide the initial ideas) identify a criterion that can be detected in the real world and one which (tautologically) guarantees the right kind of isomorphism.
    In other words, the evolutionary considerations play an ancillary role in justifying why “preserves its structural qualities” is a promising constraint, while “keeps rolling” isn’t. If we didn’t want to use evolution, we could select random criteria and test whether the isomorphism we require does apply and how well it does (or how many epicycles we need to invent in order to make it fit – as far as the rolling stones go, that’s a good number, I think we agree). Eventually, we’ll stumble on “preserves its structural qualities” and notice that as a criterion it works effortlessly (relative to the alternatives on offer).

    So, to give the deserved attention to the poor swampman, who has been swamping around unnoticed, the fact that he’s been created by a freak accident is perfectly inconsequential (which explains my rudeness in ignoring him). If he does preserve his structural qualities, then fine, we count him in as well. The same applies to anything that does: as we are solidly considering mere mechanisms, allowing similar freak limit-cases isn’t a concern, and doesn’t even register as biting the bullet.
    This does make the swampman indirectly interesting, because it allows us to note that if a system reacts to its environment in ways which tend to “preserve its structural qualities”, then it can be modelled in intentional terms – whether it’s evolved, designed or accidental.
    [Digression: Perhaps there is some scope in exploring the possibility of defining “functions” in these terms: I’m bugged by the fact that since “Sergio’s functionalism” I’ve kept using the concept of “functions” in intuitive, pre-theoretical terms.]

    Back to my confusion:

    However, concluding from there towards any intentionality of the stone is the confusion of model and system I am cautioning against.

    Well, that’s why I’ve been screaming at you. I’m positive that I’m not making this mistake. The “as ifs” grant it. I’m merely noting the existing isomorphism. They hold effortlessly for the kind of systems I’ve chosen, they require more work for rolling stones, but can still get crammed in.
    [Running ahead: I’m sure I’m holding on to the “as ifs” also because I need them further down the line; I can’t allow “true intentionality” in the rolling stones, nor in our bacteria, because I want to show that what we call “true intentionality” is at best an unreachable abstraction (i.e., it doesn’t exist in the form we attribute to it).]

    It’s all well and good, but I still haven’t cleared my starting blocks. You also say:

    The point is that if you’re ready to ascribe (proto)intentionality to the bacterium—by means of ascriptions of function, goals, behavior as if—, I don’t see how you could keep from doing the same to the stone.

    and

    Thus, to try and answer your question: yes, we can model a system such that B(A) and C(A) hold. But we must not lose sight of the fact that this is a property of how we model the system—it may be also a property of the system itself, but this is only certain if we know the system is intentional. Conversely, when we know it’s not (as in the stone undergoing the selection process), even though we can model it such that B(A) and C(A) hold, we know that this isn’t the case within the system. Hence, pointing to B(A) and C(A) doesn’t tell us anything about whether there is intentionality within the system (with the usual proto-etc qualifiers as you deem necessary).

    You then conclude with stuff which supports my impression. You seem to underestimate (or even negate! cf. “proto-etc qualifiers as you deem necessary”) the importance of the “as ifs”, and based on that, I think you’re still telling me that I can’t proceed.

    Now, this last reply of mine should clarify that the “as ifs” only point to isomorphisms: stuff in our intentional model happens to behave in ways which resemble the behaviour of the modelled system. The only ontological claim I’m making is that the system therefore possesses the quality of behaving as if it had intentionality (given it satisfies our chosen constraints). If I include the “as if” in my definition, then I can apply the label of “proto-intentionality” (or, as I do in this comment, “false intentionality”) to whatever behaves as if. It’s merely syntax, with no semantic/ontological weight. I’ve called this (proto)intentionality, or, in this comment, “false intentionality”, but I could call it any way you like.
    Given the model we’re using, certain systems map onto it very well, certain only in limited ways (like an actual hill with actual rolling stones), others not at all. The “as if” operator picks up the isomorphism, nothing else. Because our model is constrained, and built around stuff that is logically necessary, we can evaluate how well any given system maps onto it, and we can do so algorithmically, so we remain confident that we aren’t presuming that well-matching real systems contain any true intentionality. They merely match the behaviours of our model well. Nothing more.

    See one analogy:
    we can have three hypotheses (proposed models): that the Earth is flat, spherical, or an oblate spheroid. We can measure the Earth and see which model better fits the evidence (while knowing that none fits perfectly). In this analogy, I’m keeping the system fixed while changing the model. I’ll assume we agree that this approach allows us to conclude that “oblate spheroid” is the best fit. But if we can proceed in this way, then I expect you’ll have no objections if I propose to keep the model fixed and see how different systems fit (once again, expecting that no system will match perfectly, i.e. no physical system always preserves its structure). If we do change the organisms, we end up being able to order them by their “intentional-likeness”. We are producing relative measures of false intentionality, mind you, but it would still be somewhat interesting if we now have an empirical method to measure something which seems to correlate, however haphazardly, with true intentionality in some way.
    We could also go further and ask philosophers to rate different systems based on how much or how little “real intentionality” they attribute to them. We could see how well our empirical results match the survey. The results of such an exercise should be somewhat tautological, as we have shaped our model on the idea of “true intentionality” (for the record, I’d expect big deviations for many living forms, as most philosophers would be ignorant of how many living forms actually behave), but at last we would be doing empirical work – suddenly we’d have empirical hypotheses to test! All this because we are latching onto isomorphisms; it’s “as ifs” all the way down. Still convinced we can’t do anything with them?
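    Purely as an illustration of the “keep the model fixed, vary the systems” exercise (every behaviour name and system score below is invented, not drawn from any actual data), the ranking procedure could be sketched in a few lines of Python: a fixed set of as-if behaviours stands in for the intentional model, and each candidate system is scored by how many of those behaviours it exhibits.

    ```python
    # Toy sketch: one fixed "intentional model" (a set of as-if behaviours),
    # and a fit score telling us how well each system maps onto it.
    # All names and observations here are hypothetical illustrations.

    MODEL_BEHAVIOURS = {
        "senses_environment",
        "acts_to_preserve_structure",
        "corrects_after_perturbation",
        "anticipates_change",
    }

    # Which model behaviours each (invented) system appears to exhibit.
    SYSTEMS = {
        "bacterium": {"senses_environment", "acts_to_preserve_structure",
                      "corrects_after_perturbation"},
        "thermostat": {"senses_environment", "corrects_after_perturbation"},
        "rolling stones": {"senses_environment"},  # weak, short-lived isomorphism
    }

    def fit_score(observed, model=MODEL_BEHAVIOURS):
        """Fraction of the model's behaviours the system exhibits 'as if'."""
        return len(observed & model) / len(model)

    def rank_by_intentional_likeness(systems):
        """Order systems by how well they map onto the fixed model."""
        return sorted(systems, key=lambda name: fit_score(systems[name]),
                      reverse=True)

    print(rank_by_intentional_likeness(SYSTEMS))
    # → ['bacterium', 'thermostat', 'rolling stones']
    ```

    The point of the sketch is only that the ordering is a relative, empirical measure of fit to the model – it says nothing about whether any ranked system has “true” intentionality, exactly as argued above.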

    Consider where we started: true intentionality is genuinely puzzling, is it not? Symbols can refer to anything, so by definition they carry no information, and yet information processing (or transmission) requires symbols. Whaat? There is a reason why philosophers keep bickering about alternative theories of intentionality, but even mentioning them would immediately send me into BFD territory, so I won’t. Why is intentionality so hotly debated? Because we still can’t say that we understand what intentionality is. Thus, telling me that I ascribe true intentionality (the one we don’t even know how to define) to any system, including our poor Bacterium, is a bit harsh; I’m the first to admit I can’t even tell you what it is, let alone ascribe it to anything.
    [Running ahead: yes, I do send contradictory signals. That’s because what I’m here calling true intentionality looks impossible to me, while building on top of the “as if” version seems perfectly viable to my optimistic eyes.]

    Overall, in light of all these clarifications, am I allowed to play with my “as ifs” and see if they lead us in interesting places?

  86. 86. Charles T Wolverton says:

    Jochen:

    I don’t really think that this account [triangulation] serves to ground meaning (even Davidson himself makes reference to a shared concept of objective truth in order to facilitate communication). It seems obvious that the process you describe can’t be enough

    It isn’t by itself intended to ground “meaning”. It’s an attempt to explain how two precognitive creatures might come to associate experienced patterns of neural activity, consequent to sensory stimulation apparently from a point in their common environment, with shared responsive action. It’s what Davidson speculates might be the basic requirement for the development of the ability to have a “thought”. It seems to me that you should welcome such an attempt at explanation since, as I understand your disagreement with Sergio’s arguments, their common feature is that each includes an already cognitive agent with intentionality, which raises the likelihood of derived intentionality. Davidson’s scenario involves precognitive creatures, hence with no intentionality. So, if they manage to evolve a primitive language that has intentionality, it’s not derivative.

    Davidson himself makes reference to a shared concept of objective truth in order to facilitate communication

    Actually, he adds that concept as an additional step beyond mere communication on the way to “The Emergence of Thought” (his 1997 essay). He explains “objective truth” in terms of human interaction à la Wittgenstein (presumably his “beetle in a box” thought experiment in Philosophical Investigations). And I concur in principle – although I prefer Sellars’s concept of knowledge (as opposed to the questionable concept of “objective truth”) as a result of such interaction. But again, I see the relevant question as being whether those steps lead to speech that is “about” the world. If so, it seems to be a path to linguistic intentionality that isn’t derived. Which I assume – perhaps mistakenly – is the objective of your and Sergio’s exchanges.

    how do you define ‘linguistic community’?

    In its most basic sense, a shared natural language – but in specialized contexts, a shared vocabulary, possibly within some natural language (e.g., the vocabulary of the community of readers of this and related blogs). When in Poland, presumably in an establishment marked “Piekarnia” (bakery), it seems to me that the natural assumption would be that the linguistic community would be Polish speakers. For some reason, you assumed that the person uttering “Jeden, tak?” was, contrary to the evidence, a member of the community of German speakers, and was uttering the homophonic “Jeden, tag?”. An innocent mistake, but seemingly demonstrating nothing relevant to the issue at hand. Speakers of a natural language presumably already have linguistic intentionality.

  87. 87. Jochen says:

    Sorry, folks, I’d been hoping the last few days that I’d find some spare time to return to this, but I’m afraid I’m just too buried in work right now to give this the attention it needs, so I must bow out of the discussion.

    Anyway, in a sense, it’s a natural stepping-off point for me—you’re both at least inching closer to eliminativism, and I find it’s just not an option I have great hope in; furthermore, I think it’s premature—so yes, intentionality is hard to explain, but I don’t think there’s any reason to throw in the towel just yet. To me, eliminativism, to the extent it’s a coherent thesis in the first place, should at best be a last-ditch effort; and as things stand, no eliminativist thesis I’m familiar with has so far made any headway into the problem—indeed, most founder on their own premises.

    But of course, I’m not the arbiter of which approaches to the problem could bear fruit; I can merely decide what I find promising, and focus my all too limited capacities in that direction. And, well, eliminativism just isn’t for me.

  88. 88. Sergio Graziosi says:

    Jochen,
    not to worry! I hope you’ve enjoyed the ride as much as I did, but of course life off-line should take precedence, in all cases, without regrets. I keep reminding myself that too much work is a problem that’s good to have, much better than too little work! I hope that being buried in work means being productive and getting loads of gratification out of it. (If not, I’m sure they will come in due time.)

    Just for the record, if self-assessment counts for something, I regard myself as more of a Catholic priest than an eliminativist, that’s to say not at all. Sorry to hear that I failed to make it clear. I just fail to see how the idea is even coherent: explaining something by saying it doesn’t exist? Oh please – in fact, I fail to count self-professed eliminativists as such (hence my disclaimer); in my eyes, they end up being unwilling “substitutionists”…
