Sergio’s Computational Functionalism

Sergio has been ruminating since the lively discussion earlier and here, by way of a bonus post, are his considered conclusions…

Not so long ago I enthusiastically participated in the first phases of the discussion below Peter’s post on “Pointing”. The discussion rapidly descended into the fearsome depths of the significance of computation. In the process, more than one commentator directly and effectively challenged my computationalist stance. This post is my attempt to clarify my position, written with a sense of gratitude to all: thanks to all for challenging my assumptions so effectively, and to Peter for sparking the discussion and hosting my reply.

This lengthy post will proceed as follows: first, I’ll try to summarise the challenge that is being forcefully proposed. At the same time, I’ll explain why I think it has to be answered. The second stage will be my attempt to reformulate the problem, taking as a template a very practical case that might be uncontroversial. With the necessary scaffolding in place, I hope that building my answer will become almost a formality. However, the subject is hard, so wish me luck because I’ll need plenty.

The challenge: in the discussion, Jochen and Charles Wolverton showed that “computations” are arbitrary interpretations of physical phenomena. Because Turing machines are pure abstractions, it is always possible to arbitrarily define a mapping between the evolving states of any physical object and abstract computations. Therefore the question “what does this system compute?” does not admit a single answer: the answer can be anything and nothing. In terms of one of our main explananda, “how do brains generate meanings?”, the claim is that answering “by performing some computation” is therefore always going to be incomplete. The reason is that computations are abstract: physical processes acquire computational meaning only when we (intentional beings) arbitrarily interpret these processes in terms of computation. From this point of view, it becomes impossible to say that computations within the brain generate the meanings that our minds deal with, because on this view meanings are always a matter of interpretation. Once one accepts this point of view, meanings always pre-exist as an interpretation map held by an observer. Therefore “just” computations can only trade pre-existing (and externally defined!) meanings, and it would seem that generating new meanings from scratch entails an infinite regress.

To me, this is nothing but another transformation of the hard problem, the philosophical kernel that one needs to penetrate in order to understand how to interpret the mechanisms that we can study scientifically. It is also one of the most beautifully recursive problems that I can envisage: the challenge is to generate an interpretative map that can be used to show how interpretative maps can be generated from scratch, but this seems impossible, because apparently you can only generate a new map if you can ground it on a pre-existing map. Thus, the question becomes: how do you generate the first map, the first seed of meaning, a fixed reference point, which gets the recursive process started?

In the process of spelling out his criticism, Jochen gave the famous example of a stone. Because the internal state of the stone changes all the time, for any given computation we can create an ad-hoc map that specifies the correspondence between a series of computational steps and the sequence of internal states of our stone. Thus, we can show that the stone computes whatever we want, and therefore, if we had a computational reduction of a mind/brain, we could say that the same mind exists within every stone. Consequently, computationalism must either require some very odd form of panpsychism or be utterly useless: it can’t help to discriminate between what can generate a mind and what can’t. I am not going to embrace panpsychism, so I am left with the only option of biting the bullet and showing how this kind of criticism can be addressed.
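
To make this arbitrariness concrete, here is a toy sketch in Python (entirely hypothetical: the “physical states” and the target computation are invented on the spot) of how such an ad-hoc map can always be constructed after the fact:

    # Toy illustration of the stone argument: any sequence of distinct physical
    # states can be mapped, after the fact, onto any computation we like.
    physical_states = ["s0", "s1", "s2", "s3"]              # the stone's successive internal states
    desired_trace = ["LOAD 7", "ADD 5", "STORE x", "HALT"]  # steps of a computation chosen at will

    # Build the ad-hoc interpretation: i-th physical state -> i-th computational step.
    interpretation = dict(zip(physical_states, desired_trace))

    # "Reading" the stone through this map reproduces the chosen computation...
    print([interpretation[s] for s in physical_states])
    # ...while a different map would make the very same states "compute" anything else.

Nothing about the stone privileges this map over any other, and that is exactly the problem.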

Without digressing too much, I hope that the above leaves no doubt about where I stand: first, I think this critique of computational explanations of the (expected) mind/brain equivalence is serious, and it needs an answer. Furthermore, I also think that answering it convincingly would count as significant progress, even a breakthrough, if we take ‘convincingly’ to stand for ‘capable of generating consensus’. Dissolving this apparently unsolvable conundrum is equivalent to showing why a mechanism can generate a mind; I don’t know if there is a bigger prize in this game.

I’ll start from my day job: I write software for a living. What I do is write instructions that make a computer reliably execute a given sequence of computations and produce the desired results. It follows that I can, somehow, know for sure what computations are going to be performed: if I couldn’t, writing my little programs would be in vain. Thus, there must be something different between our ordinary computers and any given stone. The obvious difference is that computers are engineered; they have a very organised structure and behaviour, specifically because this makes programming them feasible. However, in theory, it would be possible to produce massively complicated input/output systems to substitute the relevant parts (CPU, RAM, long-term memory) of a computer with a stone; we don’t do this because it is practically far too complicated, not because it is theoretically impossible. Thus, the difference isn’t in the regular structure and easily predictable behaviour of the Von Neumann/Harvard and derived architectures. I think that the most notable differences are two:

  1. When we use a computer, we have already agreed upon the correct way to interpret its output. More specifically, all the programs that are written assume such a mapping and produce outputs that conform to it. If a given program is to be used by humans (this isn’t always the case!), the programmer will make sure that the results are intelligible to us. Similarly, the mapping between the computer’s states and their computational meaning is also fixed (so fixed and agreed in advance that, in practice, I don’t even need to know how it works).
  2. In turn, because the mapping isn’t arbitrary, the input/output transformations also follow predefined, discrete sets of rules. Thus, you can plug in different monitors and keyboards and expect them to work in similar ways.

Both differences come down to having a fixed map (for simplicity, we can collapse the maps from 1 & 2 into a single one). Once our map is defined and agreed upon, we can solve the stone problem, say “computer X is running software A, computer Y is running software B”, and expect everyone to agree. The arbitrariness of the map becomes irrelevant because in this case the map itself has been designed/engineered and agreed upon from the start.
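
A concrete and deliberately trivial instance of such a pre-agreed map is character encoding; the Python fragment below is only a sketch, with byte values picked for the example:

    # The bytes below mean "Hi" to every machine and every programmer, not because
    # the bits intrinsically mean anything, but because the ASCII map was fixed and
    # agreed upon long before this code was written.
    memory_cells = bytes([0b01001000, 0b01101001])   # the computer's "physical" states
    print(memory_cells.decode("ascii"))              # -> Hi, by prior agreement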

This isn’t trivial, because it becomes enlightening when we propose the hypothesis that brains can be modelled as computers. Note my wording: I am not saying “brains are computers”. I talk about “modelled” because the aim is to understand how brains work; it’s an epistemological quest. We are not asking “what brains/minds are”; in fact, I’ll do all I can to steer away from ontology altogether.

Right, if we assume that brains can be modelled as computers, it follows that it should be possible to compose a single map that would allow us to interpret brain mechanisms in terms of computations. Paired with a perfect brain scanner (a contraption that can report all of the brain states that are required to do the mapping), such a map would allow us to say without doubt “this brain is computing this and that”. As a result, with relatively little additional effort, it should become possible to read the corresponding mind. From this point of view, the fact that there is an infinite number of possible maps, but only one is “the right” one, means that the problem is not about arbitrariness (as it seemed for the stone). The problem is entirely different: it is about finding the correct map, the one that is able to reliably discern what the scanned mind is thinking about. This is why in the original discussion I said that the arbitrariness of the mapping is the best argument for a computational theory of the mind: it ensures the search space for the map is big enough to give us hope that such a map does exist. Note also that all of the above is nothing new; it just states explicitly the assumptions that underlie all of neuroscience, and if there are some exceptions, they would be considered very unorthodox.

However, this is where I think that the subject becomes interesting. All of the above has left out the hard side of the quest: I haven’t even tried to address the problem of how computations can generate a “meaningful map” on their own. To tackle this mini-hard problem, we need to go back to where we started and recollect how I’ve described the core of the anti-computationalist stance. Talking about brain mechanisms, I’ve asked: how [does the brain] generate the first map, the first seed of meaning, a fixed reference point, which gets the recursive process started? Along the way, I’ve claimed that it is reasonable to expect that a different but important map can be found, the one that describes (among many other things) how to translate brain events into mind events (thoughts, memories, desires, etc.). Therefore, one has to admit that this second map (our computational interpretation) would have to contain, at least implicitly, the answer about the fixed reference point. How is this possible? Note that I’ve strategically posed the question in my own terms, and mentioned the need for a fixed reference point. You may want to recall the “I-token” construct of Retinoid Theory, but in general, one can easily point out that the reference point is provided by the physical system itself. We have, ex hypothesi, a system that collects “measurements” from the environment (sensory stimuli), processes them, and produces output (behaviour); this output is usually appropriate to preserve the system’s integrity (and to reproduce, but that’s another story). Fine: such a system IS a fixed reference point. The integrity that justifies the whole existence of the system IS precisely what is fixed – all the stimuli it collects are relative to the system itself. As long as the system is intact enough to function, it can count as a fixed reference point; with a fixed reference, meanings become possible because reliable relations can be identified, and if they can, then they can be grouped together and produce more comprehensive “interpretative” maps. This is the main reason why I like Peter’s Haecceity: it’s the “thisness” of a particular computational system that actually seeds the answer to the hard side of the question.

Note also that all of the above captures the differences I’ve spelled out between a standard computer and a common stone. It’s the specific physicality of the computer that ultimately distinguishes it from a stone: in this case, we (humans) have defined a map (designing it from scratch with manageability in mind) and then used the map to produce a physical structure that will behave accordingly. In the case of brains/minds, we need to proceed in the opposite direction: given a structure and its dynamic properties, we want to define a map that is indeed intelligible.

Conclusions:

  • The computational metaphor should be able to capture the mechanisms of the brain and thus describe the (supposed) equivalence between brain events and mind events.
  • Such a description would count as a weak explanation, as it spells out a list of “what” but doesn’t even try to produce a conclusive “why”.
  • However, just expecting such a mapping to be possible already suggests where to find the “why” (or provides it, if you feel charitable). If such a mapping proves to be possible, it follows that to be conscious, an entity needs to be physical. Its physicality is the source of its ability to generate its own, subjective meanings.
  • This in turn reaffirms why our initial problem, posed by the unbounded arbitrariness of computational explanations, does not apply. The computational metaphor is a way to describe (catalogue) a bunch of physical processes: it spells out the “what” but is mute on the “why”. The theoretical nature of computation is the reason why it is useful, but it also points to the missing element: the physical side.
  • If such a map turns out to be impossible, the most likely explanation is that there is no equivalence between brain and mind events.

 

Finally, you may claim that all these conclusions are themselves weak. Even if the problematic step of introducing Haecceity/physicality as the requirement to bootstrap meaning is accepted, the explanation we gain is still partial. This is true, but it comes down to the mystery of reality (again, following Peter): because cognition can only generate and use interpretative maps (or translation rules), it “just” shuffles symbols around; it cannot, in any way or form, ultimately explain why the physical world exists (or what exactly the physical world is, which is why I steered away from ontology!). Because all knowledge is symbolic, some aspect of reality always has to remain unaccounted for and unexplained. Therefore, all of the above can still legitimately feel unsatisfactory: it does not explain existence. But hey, it does talk about subjectivity and meaning (and by extension, intentionality), so it does count as (hypothetical) progress to me.

Now please disagree and make me think some more!

109 thoughts on “Sergio’s Computational Functionalism”

  1. Hey Sergio, thanks for the interesting guest post!

    I don’t have time to go into any great detail in wrestling with your ideas, but I want to quickly confirm whether I got at least the gist of it right. So first, you point out that we can, having access to both the physical processes underlying cognition, and the cognitive processes they give rise to—in a sense, to both the hardware and the software, so to speak—find a map that connects both; that is, that allows us to take brain states, and decide what computation is being run, i.e. what cognitive processes occur. Is that about right?

    Then the going becomes a bit tougher for me, and I’m not quite sure if I’m interpreting you right. Basically, I read you as saying that the system itself is an object of its cognitive processes—that is, there are some brain states such that, using our mapping, we can see them as pertaining to the system as a whole, perhaps as situated in its Umwelt.

    This now ‘grounds’ the mapping, in some sense: in general, an arbitrary mapping will not yield cognitive states such that they include a representation of the system that ‘matches up’ with the situatedness of the actual physical system; thus, the map that does is singled out, and we get rid of the arbitrariness (and in particular, can stop worrying about conscious stones and the like: the maps that they anchor by means of representing their physicality accurately are simply trivial ones, which just return the inert physical stone going through its different states).

    If I read you right, then I think that might be an interesting idea. However, my worry lies with the ‘matching up’: this is, in a sense, an interpretational question—we can say that the representation of the system the map produces and the physical situatedness of the system ‘match up’ by means of our understanding of that which the map produces, and our understanding of the physical world. But the map does not produce something that needs no interpretation in order to be understood; hence, we need in fact (or that’s how it seems to me right now) another map that translates this representation into human-intelligible terms—say, something like a viewscreen, aimed at somebody’s brain, that produces a picture of the self-representation that the map leads to.

    But even something like a picture on a screen is not unambiguous; it makes sense to us due to our particular way of looking at the world, but to another creature, fundamentally different from us, it might be incomprehensible symbols, or even something imbued with a wholly different meaning. Hence, such different creatures might conceivably single out a different map as the one map that produces a representation of the system within its environment, and thus, it’s really only our particular wiring that singles one map out after all…

    But I need to think about this more.

  2. Or perhaps more succinctly: we need a map to decide whether the representation matches up with the physical system; but in deciding which map to use, we again have arbitrary freedom. Hence, different ‘matching’-maps will lead to different maps from brain states to cognitive content being singled out.

    But again, I’m not entirely sure if I’m doing your proposal justice. I’ll sleep on this…

  3. Excellent article. Sergio, if I understand your position you are saying that a computational model is useful but programs themselves can’t actually be conscious entities?

    Or am I wrong on that?

  4. I can’t really explain ‘what it feels like’ to see my words published here on CE. Mostly, it feels good, augmented with a good dose of “am I talking nonsense?” anxiety. Thanks again to Peter, for sparking the discussion and hosting me so generously.

    @Jochen 1&2
    Will wait for your afternight thoughts. For now, just a few disjointed comments:
    1. “in a sense, to both the hardware and the software, so to speak”: glad you added the diminishing qualifiers. Thinking of the brain as a combination of hardware and software is forbidden in my quarters: it seems pretty clear that such a distinction isn’t going to help us understand what neurons do.
    2. “Basically, I read you as saying that the system itself is an object of its cognitive processes—that is, there are some brain states such that, using our mapping, we can see them as pertaining to the system as a whole, perhaps as situated in its Umwelt.” I think I know what you are saying, if I’m right you’ve got the gist of my argument, but I’m not sure. The symmetry within the subject is obfuscating our communication, so I’ll propose a bit of standardised (invented here and now) terminology.
    First, we start with an organism containing a brain; the brain receives sensory signals and somehow uses them to figure out what’s out there. Thus I propose that the brain is in the business of creating an egocentric interpretative map, something that allows it to figure out what causes the incoming signals. What we want to understand is how this mapping is even possible. Let’s call the map generated by the brain (pretending for simplicity that it is a fixed thing, which it isn’t, of course) Map1 or “the explanandum” (with quotes to mark it as a special term).
    Then we have us, the neuro-consciousness-philo-science junkies, who are trying to build a different map. This map would enable us to translate from the mechanisms we can measure in a brain to the thoughts and/or experiences that correspond to them in the first-person perspective (remember dual-aspect monism?). Let’s call this Map2 or “the explanation”.
    In this way, the challenge posed by anti-computationalists is that they say “if Map2 is possible, it follows that Map1 can be generated by mere mechanisms/computations, which is impossible”, or “semantics can’t pop-out of syntax” or “computations are arbitrary abstractions”, etcetera.
    In your quick summary, I’m not always sure to which map you are referring, I can always pick one so that your rephrasing does match what I was trying to say, but you know, my picking is arbitrary, so I’d rather check with you! 😉

    @Sci #3:
    Yes, I am also saying that. And it’s not really news (unless you are Disagreeable Me, or espouse some sort of Platonism): to be conscious you need to be physically situated in the physical world, and you need ways to potentially interact with it. The computational account can help us ‘describe’ how brains work, but reproducing the same (massive) algorithm without suitable I/O channels is not going to work; these channels are the starting point of the whole architecture. It’s fortunate that Haikonen joined us, because his efforts clearly point in this same direction (to me, at least).

    @All:
    I have a very busy day of work ahead of me. Also: I’m not eager to write a lot and think relatively little, so my replies will be sparse; apologies for that. (Yes, I’ve been out-text-walled; I admit defeat and hesitate to re-enter the competition.)

  5. Hmm, I’m still not sure if I understand you quite right… I thought you meant that there is one map that translates, basically, sense data (or states of the world mediated to the brain by the senses) to brain states (mapA). Then, there’s a second map (mapB) that translates brain states to cognitive content. In principle, that map is arbitrary, and hence, we can’t ‘nail down’ which first-personal content is associated to what brain states.

    I thought now that your idea was to use the system itself as a fixed point, by admitting for mapB only that map that leads to a representation of the system itself, accurately reflecting the actual situatedness of the system, thus eliminating the arbitrariness. This would then single out a mapB—or at least, narrow it down enough so that one could admit the actual first-person viewpoint of the system, modulo certain imperceptible differences, as arising uniquely from the brain’s state.

    And the problem I saw now, if that’s an accurate summary, is that you need, in order to do the comparison, another map (mapC) that mediates between the system as represented, and the system as situated physically in its surroundings. But there is again arbitrariness in this map, in that you can view just about any representation as accurately pertaining to the physical reality if you choose your mapping right; but then, also mapB is no longer uniquely singled out by the fixed point.

    In other words, it seems to me that you need what a category theorist would call a commutative triangle of maps:

    1) mapA: world -> brain state: this is what the sense organs provide, and which we can assume to be fixed

    2) mapB: brain state -> cognitive content: the map that we can use in order to tell what thinking’s going on given the state the brain’s in

    3) mapC: world -> cognitive content: this is the map that we need in order to tell whether the cognitive representation of the system matches up to the physical situation

    Now, if two of these maps were given, we could find the third one as a simple composition; but as it is, I think only the map from sense data to brain states is fixed, and hence, we’re left with arbitrariness.
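
    In throwaway pseudo-Python, just to fix the notation (the three functions are bare stand-ins, not models of anything):

        # Stand-ins for the three maps; only the composition structure matters here.
        def mapA(world_state):         # world -> brain state (fixed by the senses)
            return ("brain-state-for", world_state)

        def mapB(brain_state):         # brain state -> cognitive content (the contested map)
            return ("thought-about", brain_state)

        def mapC(world_state):         # world -> cognitive content, i.e. the composition of mapB and mapA
            return mapB(mapA(world_state))

        print(mapC("red apple"))
        # If mapA and mapB were both fixed, mapC would follow for free; since only mapA
        # is fixed, mapB and mapC can still be varied together, hence the arbitrariness.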

    But I’m still not sure (and more doubtful after your last post) whether I got the gist of your proposal right.

  6. It’s strange that I’m arguing the particulars of a refutation of the stone, given I refute that the stone example has any merits, but…

    However, in theory, it would be possible to produce massively complicated input/output systems to substitute the relevant parts (CPU, RAM, long-term memory) of a computer with a stone

    How?

    It sounds more like in the end the rock would just be a spare wheel to the computer. That’s not a rock doing computations.

    A transistor allows a current, by its presence or absence on one wire, to either allow or disallow current to flow through a second wire.

    There’s really no parallel to that in the rock. The specific physics of how a semiconductor (or even a relay!) works just don’t exist in a rock. It’s like saying the rock can be a Morse code key, therefore pan-morsecodism (as if that’s some sort of dismissal by argumentum ad absurdum, rather than raw absurdum!)

    OR is there a way to make the rock do the same things as a semiconductor transistor WITHOUT simply building a computer (which has all the right physics for this) around a rock and insisting it’s the rock doing computations (when really the rock could be removed and the computer could keep working!)??

    The way a computer works/alters state is by physics, not by anyone insisting it does computations. When a material lacks the physics that are involved in that process, obviously it doesn’t happen. The rock lacks the physics of a computer – that is the refutation that applies. OR does a rock somehow have the same physics/materials as a computer?

    Saying the rock can do computations is like saying glass can conduct electricity as much as copper wire can, surely?

  7. @Callan: There is no computation in the regularities of the actual world, which to my understanding was the point Jochen was making. It gets strange – to me anyway – if someone is claiming the physical medium of the Turing Machine is what makes a particular program – but apparently not every program? – conscious. I don’t know if this is what you are claiming, and I don’t want to put words in your mouth, but it seems to be a common computationalist position and I’ve – perhaps incorrectly – interpreted your posts to be arguing for this position.

    Borrowing from Searle’s Is the Brain a Digital Computer + Lanier’s You Can’t Argue with a Zombie -> You can take a snowstorm, the motion at the molecular level, or meteors, and then attach computationalist interpretations to that isolated aspect of the physical world. This is ultimately arbitrary because these are all cases of derived intentionality applied to an indeterminate physical world. The only way around this I can see is if the particular placement of physical stuff intrinsically pointed to something. As Rosenberg notes, there’s no evidence this is the case for any kind of combination of matter.

    Perhaps I’m not understanding your argument properly, and missed a prior post about this, but it seems to me that adding the caveat that it’s the particular physical aspects of a particular kind of instantiation of a Turing Machine that matter is, barring a solution to the Hard Problem of Intentionality, just special pleading? Why should electricity, semiconductors, or whatever hardware fix the meaning of indeterminate physical stuff?

  8. Hi Sergio,

    Nice article. As you know I too am a computationalist so I’m quite sympathetic to it. But at the same time I take Searle’s challenges to this kind of argument quite seriously and think they can only be resolved with Platonism.

    In particular, how would you answer the following objections?

    If you are a computationalist, presumably you can seriously entertain the idea of a simulated world with simulated people, operating autonomously with no input and no output. Whatever consciousness there is in such a process, it bears no relation to the physical context of the computer — there is no correlation with the outside world and so no means of arriving at a reasonable interpretation if we don’t already have one. Now, you may say we already have one, because we built the computer with a certain interpretation in mind, but that seems to make the consciousness of a physical object contingent on how it came about (e.g. by intentional design rather than materialising from a quantum fluctuation like a Boltzmann brain). If this is indeed your view it seems bizarre to me.

    But if it is possible for consciousness with no causal relationship to the external world to supervene on a computer, then why not the same for a rock?

    There is actually an interpretation of the computation of the rock which does correspond to its physical situation. We can interpret it with a computation which corresponds to a mind which perceives its location in the world and which learns and updates its internal representations as events transpire around it. We can imbue it with opinions and beliefs and emotions. Even though it can do nothing, that doesn’t mean it isn’t conscious. It could have locked-in syndrome.

    Of course there are many such interpretations for the rock. But if this is so for the rock, might it not also be true for the brain? Particularly for the brain of a patient with locked-in syndrome, say.

  9. “Right, if we assume that brains can be modelled as computers,”

    I’ve yet to see any evidence of someone modelling a brain on a computer. Neural networks are brain-like, to some extent, but as far as I know we have not yet shown that a brain can be successfully modelled. The OpenWorm project seems to be about as far as we’ve got, and though the C. elegans nervous system only contains 302 neurons, no one has modelled it in a way that reproduces the behaviour of the worm. Even at this very basic level, where the physiology of the brain in question is known in great detail (every synapse has been mapped), we do not have the ability to model a brain.

    So let’s *not* assume this after all. On the face of it, modelling a brain is far more difficult than anyone might have imagined when neural networks became popular. The leap from our present inability to model the simplest of brains to consciousness actually being computational is too great for the idea to be interesting. It may well turn out to be true, but at present it is a stab in the dark.

    For all we know consciousness may just be a very complex Calder mobile.

  10. @All:
    Thank you! I’m getting exactly what I was hoping for: more challenges to ponder on, and plenty of indications of how my argument looks from different perspectives.

    I’ll warm up by starting with what feel like easier answers to me. I will then lay down my main (additional) supporting argument and refer to that in the following replies.
    @Jayarava #9
    “Let us not assume that brains can be modelled as computers”
    What you are doing here is challenging the whole foundation of modern neuroscience.
    For example, the European Human Brain Project allocates €555 million to pay for the salaries of 7400 person-years (http://www.humanbrainproject.eu/documents/10180/17648/TheHBPReport_LR.pdf, page 62). The main aim is to produce simulations of human brains! The US BRAIN Initiative is a little less grandiose, but certainly is comparably expensive and shares the same fundamental assumptions.

    Brains have inputs and outputs: within the brain, signals get exchanged and manipulated, and thus the inputs are transformed into outputs. Computers do exactly the same thing: they transform inputs into outputs. Thus the idea that brains compute seems very attractive and, as the sheer scale of projects based on this premise testifies, it is shared across the field.

    You wish to challenge this foundational premise, and I must say that this is a worthwhile aim: I am the first one to think that whenever a scientific orthodoxy gets established, the risk is that people will put on their blinders and stop questioning the basic assumptions they make. Give it enough time and the basic assumptions get so entrenched that they become invisible and thus unquestionable. This road leads to pseudoscience, so we should always spend some time trying to question what looks unquestionable (see: http://sergiograziosi.wordpress.com/2013/09/15/normal-science-versus-scientific-attitude-the-unscientific-side-of-main-stream-research/).
    That’s a long premise to say that your objection does indeed strike at the foundations of neuroscience, and that I really am not trying to dismiss your remarks by an appeal to authority. I am not going to say “your criticism is refuted by the experts, therefore it must be wrong” (if anything, I’m saying “your criticism is refuted by the experts, therefore I’m happy to take it seriously”), but I still think you are mistaken, on the basis of straightforward observations of how science and technology proceed.

    I think you are wrong because your main point, “nobody managed to do this, and sure as hell lots have tried”, is so weak that it effectively counts as nothing. If everyone accepted this line of reasoning, we wouldn’t have aeroplanes, wireless transmission, computers and technology in general. Evidence that many people have failed to do something certainly means that doing it is difficult (in this case, we agree that it is extremely difficult; I’ve said before that we can’t even begin to comprehend how difficult it is: http://sergiograziosi.wordpress.com/2015/02/15/complexity-is-in-the-eye-of-the-beholder-thats-why-it-matters/), but it never means that it can’t be done.
    If you wish to challenge the fundamental assumptions of neuroscience, you need to propose a positive case: you need a theoretical demonstration of why a given assumption has to be wrong, and thus invalidate all efforts based on it. This is the kind of criticism (of computational approaches) that Jochen is representing, reporting and proposing to us here at CE, and this is why I’ve tried to write a rebuttal. Please do join in: the more substantial criticism we can find, the better.

  11. @Sci #7
    Are your second and third paragraphs directed to me or Callan? If the former, I should definitely try to address them, if the latter, then I don’t understand what you are saying…

    @Callan #6

    “However, in theory, it would be possible to produce massively complicated input/output systems to substitute the relevant parts (CPU, RAM, long-term memory) of a computer with a stone”

    How?

    Well, I wouldn’t bother trying to defend this claim in practice, but I’m explicitly accepting it in theory. The theory is that given anything that changes its internal states (a stone contains molecules that vibrate, bounce, etc., and lots of them, so we have an excess of internal states to pick from), you can arbitrarily assign a computational meaning to the different states (carefully picking them so that the cause-effect relations will be equivalent to the physical transitions of a built-for-purpose computer). Having done that, you will need a sci-fi sort of device to generate your inputs (put the stone in the desired starting state) and another sci-fi contraption to read the states and produce some intelligible output. We are already in sci-fi territory (I myself used the concept of the perfect brain scanner), so arguments that rely on the implausibility/impracticality of such devices need not apply. What I’m trying to do is accept this particular assumption (you can use a stone to sustain the core functions of a computer), and its theoretical grounding (computations are abstract, so we can arbitrarily read any physical process in terms of computations), and still refute the final conclusion (that you can’t explain brains/minds in terms of computations because you leave intentionality unexplained).

    All: while I’m mentioning the gist of the assumptions and aims, I may as well clarify one point. The physicality/thisness/Haecceity component of my argument has an important consequence. I am specifically saying that computation is not the full story, but it is nevertheless our best candidate for the main character in the story. I am defending the thesis that most of the explanatory work can be done via the computational/algorithmic lens, not that it is the only explanatory concept overall. Moreover, physicality/thisness/Haecceity allows us to take intentionality as a given, but I still have to explicitly unpack this side; that’s my next topic.

  12. Sergio,

    The issue of arbitrariness or nonarbitrariness of the interpretation of a physical system as a computation brings up, to my mind, the further issue of levels of interpretation. You say that for actual computers that we program, the interpretation is fixed by convention or agreement. That’s only true at one level.

    For the purposes of connecting a computer to monitors or printers or other peripheral devices, the only aspect of the computational interpretation of the computer that is nailed down is the interpretation of electrical signals on certain ports as bits. Being able to install new programs requires a little more to be nailed down, namely the interpretation of the internal state as some kind of state machine. That state machine, together with some initial state information (in the form of certain bit-patterns loaded into particular places in memory), can be interpreted as running a higher-level language such as C. The program running in the higher-level language can in turn be interpreted as something that’s humanly meaningful, such as “displaying a video” or “calculating my taxes” or whatever.

    If someone wants to analyze a human brain from a computational point of view, I would think there would be corresponding layers of interpretation. The lowest layer of interpretation is perhaps understood already (I’m not sure, though): the interpretation of the physical system as an abstract state machine. Perhaps neural nets are the answer to this level of interpretation. Then neural nets would be the equivalent of the register machine interpretation of electronic computers. But knowing that you have an abstract state machine in terms of neural nets is still a huge distance from knowing what the brain is “thinking about”, in the same way that there is a huge distance between knowing the machine language for hardware and knowing that the current state of the machine is computing somebody’s taxes.

  13. Another comment: Given the infinite number of interpretations of a physical system as a computation, what gives a particular interpretation a special status as the “right” one?

    I believe (I think this may be due to Dennett) that ultimately what gives meanings to natural creatures is evolution and natural selection. For example, a particular physical state can be interpreted as “being in pain” or “being afraid” because creatures evolved responses to situations that threatened their survival (or that of their offspring). Human behavior is pretty far removed from the bare requirements of survival, but I think it has its roots there. To survive, we had to be able to make sense of the world, to anticipate dangers, to take steps to deal with them. So the connection between our brain states and the real-world situations that they are “about” is not arbitrary, simply because only certain connections make sense from the point of view of survival and evolution.

    So to me, that’s what makes it so difficult, or impossible, to come up with a unique interpretation of a rock’s physical states as mental states. You point out that unlike a computer, a rock is not the product of engineering, so it isn’t given a unique interpretation by its creators. But a rock also is not a product of evolution (there is no notion of it surviving or not, it has no offspring) and so its states lack that connection to the world.

  14. Time to spell out my main supporting argument.
    How does physicality/thisness/Haecceity allow us to dissipate the mystery of intentionality?

    I will try to answer this question with a very basic argument/example. I left this side out of the main essay for brevity’s sake, and I’m not sure this was the right choice (especially considering the reactions: I don’t think I’ve managed to make myself clear enough!).
    You can read what follows as an unpacked version of what I’m trying to say in the paragraph that starts with “However, this is where I think that the subject becomes interesting.” I am hoping this detour will help me answer the other comments in an intelligible way.

    We start with a fictional, over-simplified unicellular organism. This relatively simple bacterium contains all the molecular machinery necessary to survive and multiply. All it needs is a source of energy, and this is provided by glucose molecules. When glucose is present in the solution where the bacterium lives, it is transported inside and used to produce all the bacterium needs to preserve its integrity, divide, and thus “thrive”. In order to keep entropy at bay, when glucose is not present, the bacterium needs to slow down (virtually stop) its internal metabolism: failing to do so will deplete the energy stores, so that the systems that keep the internal structures within their functional boundaries will be unable to do their job and the bacterium will die. Thus, this simple organism needs “to sense” the presence of glucose, and react accordingly. We could assume it oscillates between two states: on and off. When “on”, glucose is present, and the bacterium actively sucks it in and uses it to keep all its metabolic pathways active. When “off”, all metabolism is down-regulated to the bare minimum, conserving energy for as long as possible.
    However, to make the computational metaphor marginally more meaningful, I will make this fictional organism one notch more complex, and say that it actually measures the concentration of glucose on the outside, so as to regulate its internal metabolism on a one-dimensional axis between completely On and completely Off. This seems a good strategy: if lots of energy is available, we can proceed at full speed, otherwise throttle down as resources start to diminish, and go into full “preservation” mode if no external source of energy is currently present.

    To regulate its internal machinery in such a way, the bacterium has membrane receptors (RE); these bind to glucose and change their structure when bound (RE+). On the inside, the change of structure allows some other protein to bind to the receptor. When bound, this protein can then modify a third protein (TF), a transcription factor (say, phosphorylate it, producing TF+) in a reversible way. When phosphorylated, TF+ activates a key gene which is ultimately responsible for making the bacterium move towards the “full on” state. Within the bacterium, there is another protein (PR) which removes the phosphate group from TF+, producing its inactive version TF. The activity of PR is constant, not influenced by the arousal level of the main metabolic pathways that are regulated by glucose; it constantly drives the bacterium towards its resting state.
    In this way, since the bacterium has a roughly constant number of RE proteins, and since the RE affinity for glucose is finely tuned to match the metabolic requirements, at any given time the arousal state of the bacterium will be regulated by the amount of glucose on the outside. The more glucose, the more RE+ and TF+ proteins are around, and thus the faster the bacterium does its thing. This machinery integrates the input signals (one glucose molecule bound to one RE) to produce the appropriate level of metabolic activity.

    [Note: this oversimplifies biology, but it does construct a biologically plausible fiction; in a nutshell, all metabolic activities are regulated in analogous ways. I’ve just spelled out the typical structure of what you get to study in biochemistry/cellular biology 101 courses.]
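
    For the computationally minded, here is a deliberately crude toy model of this pathway in Python (all constants are invented and nothing is calibrated against real biology); it is only meant to show that the machinery is naturally written down as a little signal-integration program whose single input is the external glucose:

        # Toy model of the fictional bacterium: glucose -> fraction of receptors in the
        # RE+ state -> phosphorylation of TF (opposed by the constant activity of PR)
        # -> a metabolic "arousal" level between fully Off (0) and fully On (1).
        def receptor_occupancy(glucose, kd=1.0):
            # Fraction of RE bound to glucose (a simple saturating binding curve).
            return glucose / (kd + glucose)

        def arousal(glucose, steps=200, dt=0.1, k_phos=1.0, k_pr=0.5):
            tf_active = 0.0                       # fraction of TF in the TF+ state
            for _ in range(steps):
                re_plus = receptor_occupancy(glucose)
                # RE+ drives phosphorylation of TF; PR removes the phosphate at a constant rate.
                tf_active += dt * (k_phos * re_plus * (1.0 - tf_active) - k_pr * tf_active)
            return tf_active                      # read this as the metabolic arousal level

        for g in [0.0, 0.2, 1.0, 5.0]:
            print(f"glucose={g}: arousal={arousal(g):.2f}")

    The numbers mean nothing; the point is that the only thing this little program can plausibly be “about” is the glucose it was built to track.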

    Questions for all of you:
    Q1: Would you find it reasonable to say that the molecular machinery I’ve described computes the correct level of metabolic activity?
    Before you answer “No”, consider what this system does: it integrates a number of binary signals (how many RE+ molecules are present?) and uses this integrated result to regulate other internal mechanisms. This is done in an inherently non-linear fashion, as each step in the chain relies on molecular interactions that do not need to follow linearly the relative concentrations of the molecules involved. Also: when modelling such pathways, molecular biologists will use formulas to describe affinities, and thus “compute” finely grained predictions of what would happen if we add or remove a little glucose. My point here is that the relevant features of this simple signal transduction pathway can be (and normally are) described in structural/computational terms. The moment you describe them in terms of mathematical formulas and numbers, you are already accepting that treating this system in computational terms does capture the interesting things that help explain what the system does (and importantly, what the system is for, with caveats that I will assume are understood).

    Q2: Would you agree that the phosphorylation of TF, to produce TF+, is best understood as being about the amount of glucose available?
    I do not know how one could reasonably maintain a negative answer, but I suppose some of you might want to give it a go. To my eyes, answering “No” to the question above is equivalent to saying: your so-called “understanding” is an illusion. The mechanism is just that, a mechanism; as such, it isn’t about anything at all. This is an ontological argument, and it is correct, but it is also a bit moot: epistemologically, it does provide a warning against teleological explanations (which has its place), but it doesn’t help us to understand a given mechanism and make it predictable. Thus, it does not help produce new knowledge; it merely reminds us that we shouldn’t be arrogant and that this “being about” is a metaphor with weak links to reality.

    Nevertheless, if we accept that this “being about” is weak, but it has some explanatory value (it is the key that makes the whole story coherent!), we have already reached my own conclusions.
    C1. We can describe (understand and predict) this mechanism in computational terms (a “Yes” to Q1).
    C2. Doing so implies that we are granting to this system a first minimal amount of intentionality (a “Yes” to Q2). We are describing this system as being about glucose, implying that the signals it integrates have intrinsic intentionality.

    Thus, we need to accept that this first seed of intentionality comes from the physicality/thisness/Haecceity component of the system we are describing. It is about glucose because glucose molecules actually do bind to RE and this binding really does put the receptor in the RE+ state. When you model this system computationally, you are making it abstract, and the aboutness that we can easily see in the physical substrate becomes arbitrary. Thus, the challenge we started with, the arbitrariness of the computational interpretation, is valid, but inconsequential: it applies to our own understanding, our own need to describe and model a given system. The moment you do this (and we have to do it, because that’s what thinking entails), the intentionality of what is now an “abstract” signal integration mechanism becomes arbitrary, but in reality out there, it isn’t: what is happening in the bacterium really is about glucose.

    Some caveats: a mischievous biologist can of course design a drug (fake-glucose, FG) that can bind to RE and put it in RE+ (or prevent it from ever transitioning to RE+) and thus “fool” the system. We may therefore ask: is the fooled system about glucose, FG, or both? We can’t really say, can we? However, assuming that our bacterium is real, it came into being with a regulatory system that really is about glucose, and not FG. If it wasn’t actually using this system to regulate metabolism as a function of the available glucose (and was instead measuring the concentration of some other, irrelevant substance), one of the bacterium’s ancestors would not have survived and multiplied. As a result, our bacterium would not exist. Thus, natural selection ultimately generates systems that have intrinsic intentionality. They have to. My own existence guarantees that all my ancestors, all of them, down to the very first structure that we can agree to consider as a (formerly) living organism, had intrinsic intentionality. If they didn’t, they would have been unable to survive and produce offspring.

    To wrap up:
    1. I’ve now made explicit my previous statements such as:

    The integrity that justifies the whole existence of the system IS precisely what is fixed – all the stimuli it collects are relative to the system itself. As long as the system is intact enough to function, it can count as a fixed reference point; with a fixed reference, meanings become possible because reliable relations can be identified, and if they can, then they can be grouped together and produce more comprehensive “interpretative” maps.

    I am hoping that you can now understand better what I was trying to say. In the example I’ve given, there is only a very dry causal relationship to track, but the moment you add another need to the bacterium (say, oxygen, to remain biologically plausible), you can see how it will need two systems to track the two needs, and how, if in the world out there the two requirements correlate or interact in some way, it would be beneficial to have an internal system that reflects the outside-world relations. Thus, the system may evolve in such a way that it will represent the external relations between otherwise independent variables. That is, it will build and maintain more comprehensive “interpretative” maps.
    2. The problem of intentionality is only a problem if you start from the end of the story and try to move backwards. Starting from full, self-conscious, spectacularly rich, human-style intentionality and trying to find a way to ground it in mere mechanisms is extremely difficult. Perhaps impossible, but we don’t know.
    3. The reverse problem: seeing how mere mechanisms can embody intentionality is a non-problem. It’s what biologists do (implicitly) without even noticing.
    4. Moving from 3. to 2. isn’t trivial, but is not theoretically impossible. Thus, the main objection I’m refuting is shown to be invalid.

    Re my point 1. above. You may find my own trivial story too simplistic. If that’s the case, you may want to bang your head against:
    Friston K. (2013). Life as we know it, Journal of The Royal Society Interface, 10 (86) 20130475-20130475. DOI: http://dx.doi.org/10.1098/rsif.2013.0475 (http://www.fil.ion.ucl.ac.uk/~karl/Life%20as%20we%20know%20it.pdf)

    I discuss this same concept in my own “The predictive brain (part two): is the idea too generic?” (http://sergiograziosi.wordpress.com/2015/01/24/the-predictive-brain-part-two-is-the-idea-too-generic/). That’s to say: I over-simplify things in an effort to make myself understood. Also: this whole line of reasoning applies in a straightforward manner to the debate between Jochen and Disagreeable Me (DM) about the intentionality of a thermostat. (See also “The predictive brain (part one)”)

    This whole comment is a long detour that I needed to write explicitly in order to have some hope that Jochen might understand where I find the necessary grounding, and that DM may see why I can/have to stay away from mathematical Platonism. I will tackle these two challenges when I recover from the present effort!
    Until then, enjoy :-).

  15. Sorry, but one more comment.

    I want to connect my last comment to your criterion of the “fixed reference point”. Clearly, for less intelligent creatures than ourselves, there is no self-awareness. Lower-level creatures may have some rudimentary mental states, such as “that looks like food” or “that looks like something scary”, but they probably don’t have anything corresponding to self-awareness, a mental concept of themselves as creatures contemplating the world. So there isn’t a fixed point for lower-level creatures. What nails down their mental states, as I suggested above, is evolution and the imperative to survive.

    Now, you might say that for creatures that are so unintelligent, it’s an exaggeration, or sentimental anthropomorphism, to describe them as “having thoughts” at all. But I really think that there is a continuum between them and us. The lower-level creatures may only categorize objects as food/possible mates/possible dangers. But the demands of evolution lead to more sophisticated categorizations, where the creature has a special category for other creatures that “think”. So I tend to believe that the full recursion that you are talking about – we interpret ourselves as creatures capable of interpreting themselves as creatures capable of interpreting… – might just be a consequence of the evolution of more and more accurate models of the world, driven by the needs of evolution and survival.

  16. @Daryl #12
    Yes. The distance between mechanism and mental/psychological states is gigantic. See my last link on #10. In practice, we will need multiple levels of nested “interpretative maps”, and this is what makes the whole effort so extraordinarily difficult.
    Moreover, this is why I’m thinking about consciousness: at the moment, we collectively can’t agree about what would count as the convincing top-level map. Without something that allows us to agree on how to consider the last step (from level X, whatever that might be, to conscious experience), science can only hope to produce bottom-up theories, starting from raw data and producing ad-hoc interpretative theories. This is a necessary approach, but given the distance you correctly highlight, it requires a prohibitive amount of trial and error. I hope that specifying a reasonable proposal of what the top-level map might look like (and getting somebody to agree!) would help us start producing reasonable top-down theories, thus covering some of the distance in the other direction.
    At least, that’s the game I’m trying to play. It’s almost a solitaire, though.

  17. @Daryl #13
    I enthusiastically agree. See #14, for an over-expanded version of what you seem to be saying.
    @Daryl #15
    (why do you apologise??) Yes, once again. You are quite precisely following the first lines of thought that eventually brought me here (I’ve tried, and so far failed, to get a paper published which starts from the concepts you describe).

  18. Sergio, something that’s interesting about the levels of interpretation comparing natural creatures with computers is the “top-down” versus “bottom-up” difference. With an engineered computer, you start by implementing a very primitive, but Turing-complete, abstract machine (register machine, or Turing machine, or whatever). Then there is an enormous amount of programming and designing and so forth to get to the top levels, where the machine can interact with the real world so as to control a vehicle or display movies or whatever.

    With naturally-evolved creatures, the starting point is the real-world connection. As you point out, even primitive creatures like bacteria do their processing in terms that have real-world meaning: glucose for metabolism, for example. In naturally-evolved creatures, there is not initially any need for any abstract state machine. There is no need for the computations to be “Turing-complete”; they only have to correctly do a few basic functions. But evolution drives the computational aspects to be more and more sophisticated until eventually a mechanism evolves that is sophisticated enough to perform arbitrary computations. So human brains are capable of doing abstract things like number theory, but that’s almost a side-effect of evolution. Evolution didn’t start with an abstract, Turing-complete state machine.

  19. Sci,

    There is no computation in the regularities of the actual world, which to my understanding was the point Jochen was making.

    But why use a rock for that? Use a semiconducting transistor for your example. I’d agree – computation is just a simplified understanding. There’s just physics going on.

    But to treat a rock as if it can do the same things as a transistor, as a disproof of something – that’s just shooting yourself in the foot! It’s like trying to say water has the same flammable properties as petrol, to try and disprove someone else’s position. Water doesn’t burn, rocks don’t do computation/transistor things. It’s a terrible case of apples and oranges treated as if it proves something?

    It gets strange – to me anyway – if someone is claiming the physical medium of the Turing Machine is what makes a particular program – but apparently not every program? – conscious.

    Not every dust devil is a hurricane, either.

  20. Sergio,

    you can arbitrarily assign a computational meaning to the different states (carefully picking them so that the cause-effect relations will be equivalent to the physical transitions of a built-for-purpose computer).

    This isn’t how a computer works – there isn’t just matter bouncing around higgildy piggildy and somehow we assign meaning to the different states. We have inputs – chain reactions that we trigger, then it reacts to the inputs. Tell me how a rock reacts to inputs?

    Comparing the internal states of a rock to something like a book, interpreting part of the rocks states to be like the paterns of ink on paper – I could understand that comparison.

    But we’re not talking about a book.

    I’m gonna have to call it – comparisons to a rock ought to be dismissed from the get-go as just a really bad understanding of how a computer works.

  21. @Callan #20
    Computers are designed to make them manageable (I’d say ‘easy to use’ but too many people might disagree!). Stones aren’t, so yes, there is a fundamental difference. Problem is, I think conceding the equivalence point is rhetorically/tactically useful: you accept the strongest case made by the opposition, and show that it doesn’t lead to the conclusions they reached.
    Since I think this can be done (laboriously), I’m trying.
    Therefore I am not going to argue with you on this: for all practical purposes you are right, but in strictly theoretical terms, I still think it’s worth trying to persuade people that the rejection of computationalism based on the arbitrariness of computation doesn’t work (although their theoretical premise is sound, if we ignore the practicalities).

    Also: since computationalism is so important to current neuroscience, it’s worth making really sure that it can withstand criticism: if it does not, it means we are wasting huge amounts of money and work on a doomed effort.

    I sympathise with your frustration, though, not everyone needs to follow my craving for self-punishment. 😉

    @Daryl #18
    Good points! We seem to share the same fundamental outlook.

  22. @Disagreeable Me #8
    Thanks, I find it very fascinating that we start from antithetical approaches and reach the same conclusions; in some weird way, this should be a good sign, I suppose.
    I will not try to hide my dislike for all forms of idealism; please do not take this as a personal attack: I start from the premise that a multitude of approaches (no matter if incompatible) is a good thing, which should be embraced. See also my comment here (http://egtheory.wordpress.com/2015/03/16/tools-and-problems/#comment-7669) for a brief hint at the conceptual problem that this overarching approach generates: we will no doubt experience the effects of this tension in our dialogue.

    Well, I’m trying to take all criticism seriously, although I think that Searle (along with almost every positive computationalist!) starts by underestimating how difficult it is to build a complete computational account of mind/brain, but that’s another topic altogether (see links in #10).

    Coming to your challenges, I can start from my comment #14. The role of physicality in the bacterium example is that it specifies non-arbitrary causal chains which are independent from the proto-intentional system I’ve described. If there is no sugar, the bacterium can’t rely on an external source of energy. Thus, the first spark of intentionality comes from the evolutionary drive towards the construction of internal systems that represent and track causal relations “out there”; it is crucial that these chains are independent from the system itself (in fact, the intentional system needs to be embedded, to be part of the causal chain).

    If we build a simulated world with simulated agents inside it, we will by definition (and necessity, see below) design the simulation in the same way (and then, as per your hypothesis, leave it alone without external interference). The rules that govern the simulated world will not change according to the actions of a simulated agent, or if they do, they will change according to some other rule, making the rules of the simulation invariant at a higher level of abstraction.
    Now we come to the important and really tricky bit: the rules will be necessarily fixed, because we will be running the simulation on some hardware. This means that we will be exploiting the external (to us, the designers) fixed rules of physics to generate and “enforce” the rules we are designing for the simulation. Whatever the hardware, we’ll have transistors and logic gates that really switch from one state to the other, and, as long as the hardware works correctly, these state transitions will occur following the rules of our own reality. However, because of the design process, within the simulation they will also reflect our abstract design principle.

    What does this mean?
    1. The simulation correlates to the external reality exactly as everything else does. You unplug the system and you sure as hell will have visible effects on the simulation (it ceases to exist – yes, I know you reject this, but please bear with me – the simulation would indeed stop in the observable world).
    2. The simulation looks uncoupled to us (mere humans) because no human being is able to encompass in one single cognitive sweep the very long and very complicated causal chain from electrons changing the state of gates in a CPU all the way to the simulated events in the virtual world.
    3. Nevertheless, the coupling is very real. We, the ones who wrote the simulation, specifically designed how it works, forgetting this small detail is inexcusable.

    Thus:
    Of course the consciousness of a physical object is somehow contingent on how it came about. In the simulation case, we had teleological design. Having understood how to recreate consciousness in silico, we did. In the case of ourselves, consciousness came about by a process of blind trial-and-error design (AKA evolution), which is a higher-order description of a fundamental property of structures: structures that make it more likely that more similar structures will be generated tend to accumulate over time. That is all there is: a conscious organism can preserve itself and produce more new conscious organisms with good efficiency, and thus they do.

    If “by chance” the same structure happens, then it will be conscious. However, I think the Boltzmann brain paradox is a bit moot. For starters, structures accumulate according to natural selection, whether you like it or not, and thus the probability distributions are anything but flat. As long as entropy isn’t saturated across the whole universe (at which point no structure would be possible), replicators will pop into existence, because they can. But I’m digressing.

    I am not saying that the same structure can be conscious or unconscious depending on how it came into being. I am saying that consciousness can be modelled computationally because it starts with the grounding of sensory information. There is a fixed causal chain that we can follow, and this applies to us in the physical world we know, but applies in exactly the same way to our hypothetical simulation. In the latter case the causal chain is longer, and, because our computers are reliable, we can take the first part (the one about the physicality of the computer) for granted; but this isn’t a license to jump to the conclusion “therefore the simulation exists independently”.
    Therefore no: “consciousness with no causal relationship to the external world” can’t “supervene on a computer”, in the same way as you can’t play Halo without switching on one gaming rig or another. Do I really need to point this out? In all of the above you would be right in detecting a visceral dislike for mathematical Platonism. I apologise for this, and I assure you that I’m doing my best to keep my inclinations under control; but it’s hard, so I might be failing here and there.

    Second objection: interpreting the inner states of a stone as a series of computations that generate consciousness should always be possible. No, it isn’t. We can do this only by neglecting the one thing that would otherwise ground our interpretation (and make it not-arbitrary): the causal relationships with the external world. You are free to toy with the idea of a stone whose internal states go through their own series of transformations completely independently from the external world; that’s a cute story, but it has no counterpart in the world we know. In the physical world, such a stone can’t exist. If necessary I can defend this claim in a number of ways, but surely we can readily agree, I do hope.
    What I am saying is that if we follow the causal chain from perception to internal state transitions, we can single out the set of non arbitrary maps of the corresponding computations (I’m admitting here that a multitude of equivalent computational accounts is in theory possible, I guess).
    If we don’t follow this causal chain we can interpret anything as we like, but so what? It would have no predictive power over what is actually going to happen out there.

    I hope I am answering your objections, but I am not entirely sure, because the Mathematical Platonism stance is so alien to me that I probably fail to understand your position in full. Therefore, please do ask for more explanations, as and if you wish to.

  24. @Jochen #5
    I’ve kept postponing my answer because I was hoping to get an “a-ha!” moment and be reasonably sure that I’ve understood your comments, but the truth is: I don’t.
    Specifically, I do not understand where your mapC is coming from. mapA and B seem to correspond to my map1 and 2, but at this point I’m not even sure.

    Anyway, since this whole effort is the result of the food for thought you’ve been providing over the past weeks (you and many others, of course), I’m naturally eager to hear if I’ve managed to make my position a little less obscure. If not, I’d be glad to try again!

  25. @Sergio: My response was technically directed to Callan, though to put it more fairly it’s what I see as the fundamental challenge to the idea that a program could be considered a conscious entity. (I don’t want to claim he has a position he doesn’t.)

    To be honest I’m not sure if the same critique applies to utilizing the idea of computation as a way to sort the data coming in from cognitive neuroscience. I actually lean toward agreeing with you but will have to chew on it.

    I think there are potential issues with trying to explain intentionality arising from evolution – I had the same feeling with Peter’s account in his book – but I need to read some new things and reread some old things before I can be sure.

    Nevertheless, even with that caveat I think we can make at least some progress with your line of thought.

  26. @Sci #25
    Sounds good to me, please chew away and do let me know, if you get a chance. I will be particularly interested in the “potential issues with trying to explain intentionality arising from evolution”, as that is the very core of what I’m trying to propose: if you have (or find) strong rebuttals of this idea I will be thrilled to learn more.

  27. Sergio,

    I can’t even say their rock position is theoretically sound, so I can’t genuinely humour the rock as computer idea at all. But I’ve had my turn at saying my piece, so I’ll just leave what I’ve said on this for general consideration.

  28. Daryl McCullough,

    Yeah, I think we’d have to consider ‘meaning’ as more like a reflex (much like the reflex we have if tapped on the knee) that forces various processes in the mind to focus on that as a problem to solve. I mean, it’s pretty clear if you take someone and inject a local anaesthetic in their arm, then work on it behind a screen while they sit on the other side of the screen (a fairly common medical practice), they don’t scream or writhe or snatch their arm back as the blades cut. Such problem-solving actions only happen when the reflex triggers them. I believe I’ve heard some medical accounts of people who are wired for pain in a weird way, who will be sitting there knitting but if asked will say ‘oh, I’m in terrible agony’ (but I don’t have a source for that, sadly, so take me with a grain of salt. Hopefully I can find a source later!). Somehow in their case agony does not force their mind to go into a problem-solving focus.

    People can say ‘But it’s pain!!’, but at the same time, if they would humour the idea that they are a problem-solving process and that this process can be lassoed into focusing on a particular problem, then obviously enough in such a case they would be focused on that pain!

  29. Hi Sergio,

    I don’t intend to debate the merits of Platonism here, but I will say that I don’t think it is a form of idealism at all and I’m no more an advocate of idealism than you are.

    I think it was a talk by Roger Penrose where he put it quite clearly. He identifies three important domains we can think of as the three M’s – Mind, Matter and Mathematics and the ways in which they may relate to each other. Physicalism or materialism has Matter (or strings or whatever) as fundamental. My view has Mathematics as fundamental. Idealism has Mind as fundamental.

    For the purposes of this conversation, you can take me simply as a defender of Searle’s argument (even though I don’t agree with his anti-functionalist conclusion).

    Firstly, let’s recap what that is, because your argument about what makes computation really computation doesn’t seem to me to distinguish the case of the rock from a computer.

    In any physical object, there will be a series of events. Electrons will get excited. Atoms will bump into other atoms. Photons will be emitted. These events will cause further events. Given an exhaustive list of such events and their causal effects on other events, it is plausible that we could define a mapping whereby each event (or a cherry-picked subset thereof) is interpreted as a computational interaction of some kind and which has the same causal effects on later events as we would find in a particular algorithm. So, ex hypothesi, given any algorithm run on any input for a finite time, there exists an interpretation of physical events so that that algorithm can be seen to be physically instantiated by real causal effects (this mapping can be as arbitrary and complex as we like and it can vary with time so as to keep a match between the algorithm and the events).
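    As a toy illustration of the kind of post-hoc mapping being appealed to here (the “rock states” and the target computation below are invented, and the mapping is built entirely by fiat):

    ```python
    def arbitrary_interpretation(physical_states, computation_trace):
        """Pair each recorded physical state with a step of the chosen computation.

        Nothing about the physics constrains this table: it is constructed after
        the fact, by fiat, which is the whole point of the rock argument.
        """
        assert len(physical_states) >= len(computation_trace)
        return dict(zip(physical_states, computation_trace))

    # Invented data: four successive micro-states of a rock, and the trace of a
    # trivial program that adds 2 and 3.
    rock_states = ["state_t0", "state_t1", "state_t2", "state_t3"]
    adder_trace = ["LOAD 2", "LOAD 3", "ADD", "HALT"]

    # Under this entirely post-hoc mapping, the rock "computed" 2 + 3.
    print(arbitrary_interpretation(rock_states, adder_trace))
    ```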

    Like you, I think it is best to concede this rather than argue the point.

    > Whatever the hardware, we’ll have transistors and logic gates that really switch from one state to the other,

    So the point is that this also happens in the rock. The rock really does have atoms that really do bump into each other and emit photons and so on, and ex hypothesi these participate in a chain of causal effects isomorphic to what happens in a computer. The difference is only that the interpretation of the computer’s state is relatively simple and consistent, while the interpretation of the rock’s state is ridiculously arbitrary and byzantine and probably changes over time, to the extent that interpreting the rock’s state is actually harder than performing the computation in the first place. This is what makes computers useful and rocks useless as calculating devices. However if the only difference is in ease of interpretation, this seems to undermine the idea that we can say computers are *really* computing and that rocks are not.

    > Nevertheless, the coupling is very real. We, the ones who wrote the simulation, specifically designed how it works, forgetting this small detail is inexcusable.

    I’m not forgetting this detail. I’m just saying that there exists a coupling for the rock too. The problem is that nobody knows what it is and it is too complicated to be of any practical use.

    > Of course the consciousness of a physical object is somehow contingent on how it came about.

    Well, you seem to disagree with this later on when you say that a Boltzmann brain would be conscious. What I mean here is that the consciousness of a particular object must have a dependence only on its current physical state and not on how it came about. I think here you are saying that its current physical state is a consequence of how it came about. I think this is a different concern.

    > There is a fixed causal chain that we can follow, and this applies to us in the physical world we know, but applies in exactly the same way to our hypothetical simulation,

    And the point is that this is also true of the rock, given a specially-crafted interpretation of physical events in the rock.

    > but this isn’t a license to jump to the conclusion “therefore the simulation exists independently”.

    Again, I’m not going to get into defending my (admittedly fringe) views right now. Let’s keep the focus on whether your argument succeeds against Searle.

    > We can do this only by neglecting the one thing that would otherwise ground our interpretation (and make it not-arbitrary): the causal relationships with the external world.

    Ex hypothesi, this is not so. The rock is getting inputs from the external world all the time. For instance, photons land on it and sound waves hit it, causing a cascade of physical events that could be interpreted however we like, including some interpretations which would be isomorphic with the information processing of a conscious entity.

    > What I am saying is that if we follow the causal chain from perception to internal state transitions, we can single out the set of non arbitrary maps of the corresponding computations

    What is arbitrary is in the mind of the beholder, I feel. Which means that what is computation is in the mind of the beholder.

    > It would have no predictive power over what is actually going to happen out there

    When we’re in the business of figuring out what consciousness is, predictive power doesn’t come into it much. Consciousness is not something we can detect, so the prediction of consciousness or non-consciousness is meaningless.

  30. Hey Sergio, with regards to what you said earlier:

    I am specifically saying that computation is Not the full story, but nevertheless it is our best candidate for what we expect to be the main character in the story.

    I think we’re not that far apart in position, ultimately. What goes on in the brain certainly permits being talked about in computational metaphors, and our understanding of it benefits from them. But like you, I also believe it’s not the full story—in my own private metaphor, the computation merely provides the ‘rails’ that guide the carts supplying the real intentionality (which I also think comes from the environment and from the organism’s embeddedness and exchange with it, though on slightly different terms than you).

    By the way, are you familiar with W. Tecumseh Fitch’s concept of ‘nano-intentionality’? IIRC, his discussion is very similar to yours, even using a similar cellular example for the grounding of intentionality.

    Also, I think I have found what may be the source of our earlier miscommunication. I thought you intended to find some way to objectively single out a mapB from all the possibilities, i.e. arguing that while any computation can possibly be attributed to the sequence of brain states that are traversed, there exists one that is objectively special in such a way as to be pronounced the actual computation that is performed, since it leads to a representation of the system in its environment that agrees with objective reality. To make such an argument, you would need a mapC that actually tells you how this matching up needs to be performed.

    But now I think you might have made more of a descriptive, FAPP kind of argument: there is a mapB that we as external onlookers, attempting to give a description of what happens ‘inside the mind’ of the system, can use as if that were the actual computation that goes on, and thus use it to study the mind/brain connection in this way. In that case, we can disregard mapC, because it is intrinsically supplied by us as external onlookers, and thus can be viewed as fixed, and hence such a mapB can be found. And I think that’s basically right—ultimately, this allows us to engage in meaningful study of the computational processes we can take to occur, while deferring the mystery of how the mapC is singled out, which (as we may suppose) is ultimately not fully accountable in computational terms.

    If that’s about right, then I really think we’re not too far from each other in our position.

    And I think maybe I’ve found the source of our miscommunication.

  31. Disagreeable Me writes: “Ex hypothesi, this is not so. The rock is getting inputs from the external world all the time. For instance, photons land on it and sound waves hit it, causing a cascade of physical events that could be interpreted however we like, including some interpretations which would be isomorphic with the information processing of a conscious entity.”

    The big difference is that the rock is not connected to the world in a way that its processing (assuming you’re right, that it is isomorphic to that of a conscious being) makes any difference.

    That’s a problem, it seems to me, for a purely “isomorphism” notion of consciousness. The rock might be doing something that is isomorphic to me thinking: “Boy, I’m hungry, I think I’ll order a pizza”. But since rocks don’t get hungry and are incapable of ordering a pizza, does it really make sense to say that that’s what the rock is thinking? To me, thoughts are only meaningful relative to an entity’s connection to the world.

    Of course, I realize that this view has its own problems. A brain in a vat would presumably continue to think about ordering pizza, even though it no longer has the ability to order pizza or to eat it.

  32. Hi Daryl,

    > Of course, I realize that this view has its own problems. A brain in a vat

    Well, yeah, to me that defeats the view entirely.

    Similarly, we can think of a locked in person. Or a rock that is just admiring the view and content to stay as it is. Or a rock that is running an internal simulation of a world.

  33. @Disagreeable Me #29
    (I’m going to write an embarrassing amount of “That’s agreed” statements, I fear, so I will get them out of the way at the beginning)
    First, yes, my lumping together idealism and Mathematical Platonism was careless and wrong. Second, I’m OK with avoiding treacherous (and OT) discussions on entire philosophical architectures; it’s far more interesting to see if we can somehow translate each other’s arguments and maybe even conclude that for all practical purposes (FAPP: Jochen introduced me to the handy acronym) they are equivalent (on the subject at hand).
    Going on with the agreements:

    What I mean here is that the consciousness of a particular object must have a dependence only on its current physical state and not on how it came about. I think here you are saying that its current physical state is a consequence of how it came about. I think this is a different concern.

    Yes, that’s what I was saying: the minds that we actually find in the world out there are likely to occur because of natural selection. Minds occurring by random assembly in a structureless soup are, on the other hand, extremely unlikely and won’t last, because in a uniform medium they won’t have any ability to self-preserve, but by occurring they make the medium not uniform any more (but I’m digressing, so let’s just agree on the main point and move on).

    Finally (on the agreements side), your description of Searle’s argument works for me, down to almost the last detail:

    There is a fixed causal chain that we can follow, and this applies to us in the physical world we know, but applies in exactly the same way to our hypothetical simulation

    And the point is that this is also true of the rock, given a specially-crafted interpretation of physical events in the rock.

    I’m with you so far, but then depart at the very last stage (with thanks to all: I would not have been able to put my finger on it without your help!).

    We can do this only by neglecting the one thing that would otherwise ground our interpretation (and make it not-arbitrary): the causal relationships with the external world.

    Ex hypothesi, this is not so. The rock is getting inputs from the external world all the time. For instance, photons land on it and sound waves hit it, causing a cascade of physical events that could be interpreted however we like, including some interpretations which would be isomorphic with the information processing of a conscious entity.

    This is where the “inexcusable forgetfulness” (I’m using this expression in jest, don’t take it too seriously!) comes in. Yes, the rock is getting inputs, but its physicality is such that it does not react to them in ways that ultimately (attempt to) preserve some of the structures that are part of the rock itself. There is a frightening depth behind the argument I’m trying to make, so please brace yourself and see if you can follow me. [It may be useful to quickly nip over to my own blog and refer to What the hell is information, anyway? to get an idea of why I keep talking about structure and how it relates to natural selection]
    In the OP I say:

    We have, ex-hypothesis, a system that collects “measurements” from the environment (sensory stimuli), processes them, and produces output (behaviour); this output is usually appropriate to preserve the system integrity (and reproduce, but that’s another story). Fine, such a system IS a fixed reference point. The integrity that justifies the whole existence of the system IS precisely what is fixed – all the stimuli it collects are relative to the system itself. As long as the system is intact enough to function, it can count as a fixed reference point

    This argument cuts in two ways: within the system, the first spark of intentionality is provided by the (implicit, in the case of the non-self-aware bacterium) relation between certain inputs and the consequences they have (or potentially have) in terms of preserving the structure that receives such stimuli. In my example, the presence of glucose signals the possibility of doing stuff and ultimately producing a copy of the original structure. Thus, the bacterium becomes active and qualifies as an intentional agent. The stone doesn’t, so what’s missing? For the bacterium, the causal relations can be modelled as:
    World > (input) > Intentional System > (output), where the output increases the probability that the Intentional System will remain intact enough to re-enact the same sequence an unbounded number of times.
    This definition applies to living organisms (and to some extent, parts thereof) but NOT to our stone. If it did apply to a given stone, that stone would have intrinsic intentionality and we could consider it alive. Thus, intentionality derives from natural selection insofar as natural selection is shorthand for how “self-preserving (and multiplying) structures” appear on the time-axis (or, if you prefer, for the mechanisms that confer the self-preserving [and multiplying] qualities on a given structure). What you (and Searle) are forgetting is the output side, and the fact that this output is non-random: it consistently has a precise effect. This precise effect (“As long as the system is intact enough to function, it can count as a fixed reference point”) allows us, as a third party, to ground our understanding of what the system does. For the system, this same effect is the one that allows a mere mechanism to be imbued with intentionality, and, much further down the line, to produce the kind of consciousness that we (humans) experience.
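    If it helps, the loop I keep describing can be caricatured in a few lines of code; everything in this sketch (the energy variable, the probabilities) is invented purely to show the shape of the loop, not to model a real bacterium:

    ```python
    import random

    class ProtoAgent:
        """Caricature of the bacterium: inputs matter only relative to its own integrity."""

        def __init__(self):
            self.energy = 5                 # the "fixed reference point": system integrity

        def sense_and_act(self, glucose_present):
            if glucose_present:
                self.energy += 1            # output that preserves the receiving structure
                return "swim towards glucose"
            self.energy -= 1                # no food: integrity slowly degrades
            return "tumble randomly"

        def intact(self):
            return self.energy > 0

    agent = ProtoAgent()
    environment = [random.random() < 0.6 for _ in range(30)]   # invented world "out there"
    for glucose in environment:
        if not agent.intact():
            break                           # the reference point is gone; the loop stops
        agent.sense_and_act(glucose)
    ```

    The stone offers nothing that could play the role of the energy variable: its states change, but no output feeds back to preserve the structure that receives the inputs.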
    I’ll recall my premises (and be grateful to my past self for being careful): what we are searching for here is the link between mere mechanisms (which can in turn be described as computations) and meaning/intentionality (assuming we can agree that once you have something that is imbued with intrinsic intentionality you can build on it and reach meaning). What we are not trying to do is go all the way to full-blown consciousness; we would be very happy to “just” find a first seed of intentionality.

    I will end this reply here because I feel the next step will be somewhat in common between you and Jochen. It’s about the epistemological role of everything I’ve written on these pages, and it is equally (if not more) important than the present comment.
    Thus, please hold your horses, bear with me a little longer and allow me to finish my case before replying.
    [In all this, I can barely keep control of my own enthusiasm: you guys are helping me to sharpen my argument in ways that I could not anticipate. I had hopes, but this is better!]

  34. Well, I’m not Sergio, so I’m not particularly defending his view. I’m just thinking out loud.

    So for a locked-in person, in what sense are they “thinking about pizza” (or whatever)? It seems obvious to me that there is nothing intrinsic in the chemical/electrical activity in that brain that makes it about pizza. It’s about pizza in the sense that the state is similar to a state which, in another person, or in the same person at an earlier time, would have been associated with the activity of ordering pizza. The concept of what the state would mean in a “normal” person gives a preferred interpretation of that state. In the case of a rock, there is no preferred interpretation.

    But a question for the computational view of mind is this: Is there an objective “correct” interpretation of physical states as mental states? Or is it just a matter of convenience? For evolved creatures, evolution gives a preferred interpretation because of the imperative to survive and procreate. For human-built machines, there is a preferred interpretation, which is derived from the purposes and goals of the designers. For some system that is subject neither to evolution, nor to “intelligent design” (by humans, or gods, or extra-terrestrial intelligent creatures, or whatever), there would be no preferred interpretation of their states as mental states. Does that mean that they just don’t have mental states?

    Imagine (I think philosopher David Chalmers would call this the Swamp Thing thought experiment, but I don’t get the connection with the comic book the Swamp Thing) that purely by coincidence (an event with probability .0000…01% with maybe a trillion zeros) a bunch of chemicals arrange themselves into an exact replica of a human being who happens to be thinking about ordering pizza. On the one hand, there is no evolution or design that would connect this thing’s thoughts to the real-world situation of ordering pizza. On the other hand, if it’s exactly like a human being, then how can we say that it’s NOT thinking about pizza, if the identical human being is?

    My way of avoiding the problem is to say that I don’t care about consciousness in the abstract; I only care about consciousness as situated in the world, able to interact so as to attempt to avoid pain, pursue goals, indulge in pleasure, etc. I consider anything that can usefully be described in such terms to be conscious. That isn’t to say that other things might not also be conscious, but I’m skeptical of our ability to make much progress on the question.

  35. @Jochen #30 and @Disagreeable Me #29 (cont)

    Overall description of what follows: I will first address some details about Jochen #30, but I will not attempt a point-to-point reply. Instead I will then continue from my last comment (#33). For readability, I will signpost the different sections.
    [Short comments]
    Yes, Jochen, I don’t think we are far apart, this is the main reason why I’ve asked Peter to host my lucubrations: I felt that there was something valuable in trying to explore the disagreements. You and DM have been circling around the same issues and it does now seem to me that we’re finding a useful way to harvest the potential energy stored in our criss-crossing disagreements ;-).
    I didn’t know about Fitch’s concept of ‘nano-intentionality’; judging by the abstract alone (I’m at home today!) it does seem to be precisely my point.
    Also: my remark about not making an ontological argument was meant to help clarify that I wasn’t even trying to “single out the one special computation that can be pronounced the actual computation that is being performed”. I don’t think this is necessary or possible, but to see why I need to unpack my epistemological stance. I do think this will count also as a direct reply to your own concerns.
    [/Short comments]

    Main argument (continues from #33):
    I’ve left one side of the argument unfinished; specifically, I’ve written “This argument cuts in two ways” but then I’ve only described the view from “within the system”. The other side is from a third-party observer, or, if I may, from as close to objectivity as it is possible to get. DM concluded his comment #29 with two important points:

    What is arbitrary is in the mind of the beholder, I feel. Which means that what is computation is in the mind of the beholder.

    And:

    When we’re in the business of figuring out what consciousness is, predictive power doesn’t come into it much. Consciousness is not something we can detect, so the prediction of consciousness or non-consciousness is meaningless.

    I agree with the first statement and will come to it later on, but I vigorously disagree with the second. It is of course (almost) true that we can’t detect consciousness (with the exception of some cutting-edge work being done on the basis of IIT; I’ve provided one reference in an older thread). However, here we are discussing intentionality, and the view I’m proposing (along with Fitch, it seems) is that we can detect intentionality almost directly: if a system reacts to external stimuli in self-preserving (and multiplying) ways, the view I espouse makes it entirely legitimate to claim that such a system has intrinsic intentionality.
    The real question is another: why should we bother? My answer is that we might want to do this because it is equivalent to recognising a structure as a legitimate agent. This in turn is useful FAPP because agents behave in very different ways when compared to “dead” matter. Sure as hell all this is circular (the conclusions are hidden in the premises), but it starts from a question that can be approached with some ambition of objectivity: “does this system react in self-preserving ways?” is an empirical question. Getting correct answers is not trivial at all, as contingency is the only rule of the game (the self-preserving “ways” of all intentional systems are exclusively the result of their evolutionary history), but the question remains 100% empirical.

    Thus, prediction enters the picture in a big way: we now have a formal way to discriminate agents from non-agents, and can learn when it makes sense to try predicting the future behaviour of a system based on its intrinsic intentionality. To remain within very practical examples, we now know why saying “the computer crashed and the hard disk died because I had almost finished writing my thesis (with no backups)” is exactly wrong. Again, following what looks like Fitch’s discourse, we have a way to agree on which systems behave in complex but mindless ways and which ones are best understood as agents instead. As I’ve said in the premises, this is a big prize. What is necessary to understand is that it is an epistemological prize. When we declare “System X is an agent”, we are pinpointing what sorts of computations we should try to identify in order to make System X predictable: we need to look for self-preserving feedback loops between System X and its natural environment (the environment that shaped its evolutionary history). This same process is what we could use to figure out if a system is best understood as an agent (and of course, we do this all the time, in practical life).
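    As a crude sketch of how that empirical question might be operationalised (the toy systems, the integrity measure and the acceptance threshold below are all invented assumptions, not a proposed protocol):

    ```python
    import random

    def looks_like_an_agent(system, perturb, integrity, trials=200, threshold=0.8):
        """Crude test: do the system's own responses tend to undo perturbations?"""
        recoveries = 0
        for _ in range(trials):
            perturb(system)                  # push the system away from its preferred state
            damaged = integrity(system)
            system.respond()                 # let it react; a rock's "response" does nothing
            if integrity(system) > damaged:
                recoveries += 1
        return recoveries / trials >= threshold

    class Thermostat:                        # toy self-preserving system
        def __init__(self):
            self.temp = 20.0
        def respond(self):
            self.temp += 0.5 * (20.0 - self.temp)   # nudge back towards the set point

    class Rock:                              # toy non-agent
        def __init__(self):
            self.temp = 20.0
        def respond(self):
            pass                             # its state changes, but nothing feeds back

    def perturb(s):
        s.temp += random.uniform(-5.0, 5.0)

    def integrity(s):
        return -abs(s.temp - 20.0)           # closer to the set point = more "intact"

    print(looks_like_an_agent(Thermostat(), perturb, integrity))   # almost certainly True
    print(looks_like_an_agent(Rock(), perturb, integrity))         # False
    ```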

    Finally, we can re-connect to the idea of computations. The account above, and the empirical question we can now legitimately try to answer, are all about building models/simulations. We are now officially in “making predictions” territory, and thus we want to model our agent. In practice, this requires describing the agent-like qualities of System X in terms of formulas, inputs and outputs: thus, we are already trying to build a computational model of System X. At this point we get to the problem that I think troubles Jochen: are we “really” reproducing the computations that happen within System X? No, we aren’t. For starters, we have no reason to expect that there is only one Correct Answer: the same input-to-output transformation can usually be reached following countless different routes. However, assuming we did pin down the self-preserving function of such transformations, one route or the other would be epistemically equivalent (and we should pick the easier one!). By doing so, we would not be reproducing the internal computations that happen in System X, because there are no such things. As DM pointed out, “computation is in the mind of the beholder”.
    If you recall that we started from the assumption that computations are 100% abstract, and thus that how we attribute computations to physical systems is 100% arbitrary, we can (as I declared from the start) turn this argument into a resource and find the computations that are 100% functionally equivalent to what System X does internally (in theory; in practice we’ll be striving to get as close as possible to full functional equivalence). However, the moment we use all this to create an actual implementation of our modelled System X (a simulation) and run it in a computer, we would need to simulate the inputs as well. But if we do this, we are re-creating something that really is equivalent to System X: we will need to also simulate the causal connections from environment to System X (at least the ones we have singled out as relevant, and at the very least some of System X’s inputs). These peripheral elements of our simulation would then be all we need to declare that the simulated System X is also an intentional system. Importantly, we would be implementing different mechanisms (now best described as fluxes of electrons in semiconductors) but reproducing the same causal relations (using different but functionally equivalent physical structures). This last remark accounts for the physicality that I’ve declared to be a requirement: we have built a new system (Simulated System X plus some environment) which mirrors our original subject of enquiry in ways that matter epistemologically, but we have still built a physical system. A simulation is not an abstraction: it can be described and understood in terms of abstractions, but it still has its own haecceity.
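    One last toy sketch, in case it helps make the point about simulated inputs concrete (all names and numbers are invented): the abstract model of System X is a pure function, but to actually run it we must wrap it in a loop that also supplies the simulated stimuli, and that running loop is itself a new physical process on the host hardware:

    ```python
    import random

    def system_x_model(stimulus, state):
        """Purely abstract description: stimulus + old state -> new state + behaviour."""
        new_state = 0.9 * state + 0.1 * stimulus
        behaviour = "act" if new_state > 0.5 else "rest"
        return new_state, behaviour

    def run_simulation(steps=50):
        """The wrapper is where the simulated environment (the inputs) has to live."""
        state, trace = 0.0, []
        for _ in range(steps):
            stimulus = random.random()       # simulated causal connection from the "environment"
            state, behaviour = system_x_model(stimulus, state)
            trace.append(behaviour)
        return trace

    print(run_simulation()[:10])
    ```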

    I think all this was intuitively grasped by whoever coined the “computational functionalism” definition. The second word is important: it’s the functions that ground the whole apparatus and drag it into the empirical domain.

    The last thing I have to say is about the “brain in a vat” counter-argument. This can be brushed aside with ease: ex hypothesi, a brain in a vat has precisely the same causal relations with its natural environment. Except the brain is now in a vat: we’ve changed the environment in such a radical way that we broke down all the meaningful causal relations. The poor brain is still potentially able to perform all the original functions, so it remains an intrinsically intentional system. It just is a system that has been artificially (and cruelly) prevented from completing the expected functions. The same applies to a locked-in patient (save for the cruelty, I guess): full consciousness can still be present; the fact that downstream there is an obstacle that interrupts the chain of causal effects on the output side has no bearing on what can happen upstream. This also completes Daryl’s (#34) Swamp/Pizza scenario. The account I’m proposing allows us to reach a strong conclusion: the poor Swamp Thing would really want to order a pizza (if only it could).

    [Final and marginal notes:
    I’ve been carried away. I wanted to write down my thoughts while I could keep them in-focus. The downside is that I have not tried hard to make them intelligible, and I have not tried to carefully make them watertight. Therefore, I do expect most of you to come back questioning a lot of major or minor details. Please do, but please keep this disclaimer in mind!]

  36. I think all this was intuitively grasped by whoever coined the “computational functionalism” definition.

    Well, I think that Hilary Putnam, whom I see as the originator of computational functionalism, would probably disagree: he abandoned the position precisely due to things like the rock argument.

    As for the rest of your posts, most of which I thought was excellent, I’ll have to think about it before I formulate a reply; I just wanted to note that, besides Fitch, I also feel very reminded of some ideas from Terence Deacon’s book Incomplete Nature. Deacon attempts to build up intentional (or ‘ententional’, a somewhat broader category for anything that bears some form of relation to or about something else) systems in a three-step process, starting with thermodynamics, where the ‘natural tendency’ is to dissipate, but where dissipative tendencies can collide to generate self-organizing or ‘morphodynamic’ processes, from which by a similar dynamics then arise ‘teleodynamic’ processes, which ultimately manage to genuinely refer to other entities by means of the constraints (absences) on their evolution. I’m not sure I’m ready to buy that wholesale, but it’s genuinely fascinating stuff, so if you haven’t read it, I’d suggest it’s well worth the (albeit substantial) investment of time.

  37. Thanks Jochen!
    As you’ve probably sensed, I’m pretty much isolated in my current effort, so reading suggestions can’t be more welcome (my reading list is frighteningly long already, so I must be very picky these days). I come from biology: parallel approaches that build on any of the other disciplines are frequently news to me (but my ignorance is boundless; it gracefully covers a good deal of biology as well!).
    As for who coined the “computational functionalism” expression, I’m happy not to speculate, Putnam would likely disagree with me, that’s a given.

    On the matter at hand, the pill might become easier to swallow once we realise that everything I’ve said is about empirically identifying a “best explanation” as defined by its predictive value (and secondarily, intelligibility). I think there is some value in trying to minimise the ontological claims: I am hoping it would make my position more palatable to a wider audience because it makes only weak statements on “what really is out there”. On the other hand, accepting this view instantly carries us a long way on the otherwise elusive explanatory bridge.

    By the way, the same stance allows us to make interesting claims about causality, but that really is another story (and probably even less easy to accept for most).

  38. Hi Sergio,

    > Yes, the rock is getting inputs, but its physicality is such that it does not react to them in ways that ultimately (attempt to) preserve some of the structures that are part of the rock itself.

    Agreed, but then neither does a standalone computer simulating a world, or a locked in person, or any number of other examples.

    > What you (and Searle) are forgetting is the output side

    Firstly, I would say there doesn’t have to be an output side for consciousness (standalone simulation). Secondly, I would say that we can interpret as output however we like whatever we like, so even a stone could be said to have output.

    > However, here we are discussing intentionality,

    Well, fair enough. You have given one way to detect intentionality. But how do we know if it’s right? Is there any test we can do to see if this really picks out the intentional systems from the non-intentional systems? The point you disagreed with vigorously was about consciousness rather than intentionality though.

    > we now have a formal way to discriminate agents from non-agents

    Which personally I disagree with. For you, it is important that agents act in self-preserving ways. For me, it is important that agents act in goal-directed ways. Self-preservation need not be that goal. So a stock-trading algorithm is an agent in my eyes, for instance.

    I don’t think your response to the brain-in-the-vat succeeds. Your suggestion is basically that the brain-in-the-vat *could* interact with the world if its inputs and outputs were properly causally connected. But the same is true of the algorithm-in-the-rock. If the ridiculously arbitrary and time-sensitive mapping we’re talking about were implemented and embedded in a robot body, then potentially the rock could walk about and experience the world too.

    Now, of course this is *entirely* infeasible practically, but as an argument in principle I think it rather undermines your argument.

    Finally, how would you position your argument in relation to Dan Dennett’s “Intentional Stance”?

  39. Great piece, Sergio, but I think you’re actually engaging the debate at the wrong juncture. Why worry about *computations* at all when difference-making differences are all that you need?

    We posit meaning for the same reason we posit monetary value: because it does a tremendous amount of work. Nothing in the book of nature possesses monetary value, but positing it provides an incredibly effective way to understand and predict local behaviours. What we do is assume what might be called ‘free-standing efficacy’ – since the vast system of pre-established harmonies that makes money efficacious is reliable enough to ignore, we ignore it, and simply treat money as if it possessed its own intrinsic ‘powers.’ Given outright ignorance of the enabling system, we simply assume that intrinsic value exists: we ontologize it.

    Humans possess what might be called ‘plug-in’ cognition, where we understand systems by plugging them into larger systems, and we also possess what might be called ‘bootstrap’ cognition, where various factors render plug-in cognition impossible. Our metacognitive blindness means we have no intuition of when we’re using one or the other.

    We found money, of course, relatively easy to see through–the harmonies are local and prone to crash. Meaning is more difficult to see through for numerous reasons, but it bears all the same hallmarks of heuristic essentialism. When you listen to your mechanic explain what’s wrong with your car, you don’t go through the trouble of plugging him or her into the personal and evolutionary history that knapped their systems such that they can effectively repair your car; you say that they ‘know.’ Cognition on the cheap.

    All the regress arguments show is that there is no such thing as ‘computation’ in any high-dimensional sense. They do show the need for vast pre-established harmonies–the need for evolution and training. What they show, in other words, is the fact that meaning is heuristic, a way to work around our own ignorance.

    All you need are systematic difference making differences in an evolutionary and training context.

    Of course, this doesn’t give you an account of what consciousness is, but it does give you an account of what consciousness is *not,* namely anything computational or anything else ‘intrinsically intentional.’ It strikes a good number of apparently impossible-to-explain phenomena from the ‘to-be-explained’ list. As soon as we have an account of the brain’s broadcast mechanisms (my personal guess is that EMFs will be involved) I think we’ll have a solid basis for an empirical, that is, consensus-commanding, account of consciousness.

  40. @Disagreeable Me #38
    I see two points of disagreement here, perhaps only one. Correct me if I’m wrong:

    Disagreement 1. You disagree with my steps towards embodiment; more precisely, you say that input and output don’t need to enter the picture, on the basis of the (somehow equivalent, or complementary, at least for the first) examples of: (e1) the rock hooked up with Sci-Fi input/output devices that make it act as an agent, even a conscious one; (e2) a simulation with no inputs/outputs into the containing world; (e3) locked-in patients.

    Disagreement 2. The possibility of singling out a correct interpretation, and even more, as in “You have given one way to detect intentionality. But how do we know if it’s right?”.

    Around this core of major disagreements, there is the minor point of my restrictive definition of agents/intentional systems. This last is the easiest to deal with: my definition of Agents. You are right, we probably do want to include in the Agents set all sorts of different agents, such as trading algorithms, self-driving cars, drones, and probably much more. However, the problem here is that we are mixing intrinsic and derived intentionality, wouldn’t you agree? So yes, there are other ways to be an Agent: having intrinsic intentionality (and means to act in the world, I would say) is one, but there are others. I do hope we can both accept the revised and inclusive definition. If we can, I would restrict the scope of my original empirical “answer” and say “we have a way to detect intrinsic intentionality”, which is still close enough to my original aim.

    On Disagreement 1: (e1) the rock hooked up with Sci-Fi input/output devices that make it act as an agent, even a conscious one. Here you are moving the goalposts, or inadvertently accepting my position; you are also simultaneously giving support to Callan’s (and to some extent Scott’s) critique of the whole approach. But I don’t think this is what you wanted to do, so I will need to explain how I read this particular objection (up to you to explain to me why I’m reading it all wrong). The crucial consideration to me is that the moment you add the unimaginably complicated contraptions that are required to turn the rock into an autonomous robot, you are teleologically (and very laboriously) adding all the sources of intentionality that I have been pinpointing. You have a system that measures some of the things that happen out there, and translates them into who knows what, so as to put the stone in the desired state. This contraption accounts for the first spark of intentionality, and, I must add, it should count as a source of derived intentionality, but I also think the distinction (intrinsic versus derived) is irrelevant in this context.
    You then add some equally complicated measuring device to track the progression of internal states, but you also need to add some means to act in the world. By doing so, you are necessarily adding a lot of algorithmic stuff, and I would have no problem agreeing that the whole you’ve created is an intentional agent (if the whole does behave in the specified ways, see below), but you’ll have to admit 99% of the work is done by what you’ve added. Importantly: according to my definitions, you would have added all the parts that provide intentionality, effectively forcing the internal mechanisms of the stone to conform to your plan. You would have done so by designing the intrinsic map on purpose (teleologically, even). Thus, for me e1 is an argument in favour of my stance. I’m sure you’ll disagree, but I do not know how or why (I look forward to your re-rebuttal!).

    On (e2) A simulation with no input/outputs into the containing world:
    I don’t think you’ve selected this example carefully enough. First, according to my materialistic stance, if you unplug the computer where the simulation runs, the simulation ends. Second, if I change the code, I am providing input to the system, and have effects on the simulation. Third, if I drown the computer in water, or hit it with a magnetic pulse strong enough, or mess with it in a gazillion other ways, once again I would disrupt or interfere with the simulation. Thus, the simulated system, in order to exist, can’t really be isolated; the “no inputs and outputs” claim is only provisionally and approximately true. Furthermore, all the disruptions I can cause would act on the physical part, specifically altering the causal chains, and thus most likely breaking the (simulated) causal chain that we have instantiated in the simulation. This is important to note, but not enough to reject your counterargument; I will return to it shortly. The next step requires looking into the simulation: such a thing will need at the very least an agent and some simulated inputs. Thus, once again, the intentional agent within the simulation does have inputs that are causally connected with its internal states, and in a way that is somehow connected to what the agent is for. If you follow the causal route, you’ll find that this includes the computer hardware. This is why altering the state of the computer has measurable (and unavoidable) effects on the simulation. Thus, the agent itself is required not to be isolated within the virtual environment, and if this wasn’t enough, it is certainly not isolated from our own physical reality; it is only apparently isolated.
    On the two ‘levels’: there are the (simulated) input/output causal routes that connect the agent to the simulated world, making it a valid simulation of the key causal elements that I’m trying to define (the fact that there is a tight causal link between stimuli and their effects on the inside, and the fact that these inputs eventually generate outputs that at the very least are not random), so we can’t say the agent is truly isolated. On the other level, because our simulation is designed, there is a one-to-one correspondence between the causal links in the simulated agent and the causal chains that regulate the state transitions in the hardware. Thus, the system is still embodied, albeit much further removed from our reality than ourselves. For all these reasons, I think this objection doesn’t apply; to make it a bit more relevant, you need to produce a simulation that only includes the core of an isolated agent, but that’s equivalent to your e3 (below).

    On (e3) locked in patients and/or a brain in a vat.
    This is more interesting and does expose the weakest side of my argument, which is also very well isolated by Disagreement 2: I will tackle both at the same time. By mixing the two arguments I can build a stronger case in favour of your doubts, and then see if I can still survive the challenge. Imagine an alien brain in a vat. Originally it evolved to track whatever alien ecology there was; it was then deprived of all input and output organs, put in a vat that keeps its functionality intact, and somehow shipped to Earth. We can now ask: will we have any hope of figuring out if it’s an intentional system (I’m trying to behave and avoid talking of full consciousness – I was the first one to conflate the two and that was another careless slip)? Would we be able to recognise that this thing has the potential for an inner life (see? I’m conflating again, I hope you get the gist of this…)? I expect you to answer “no”, and even if we could, how could we possibly agree on how to interpret the inner states in a reasonable way?
    That’s fine, I completely agree: if the alien brain in a vat is alien(/different) enough we would have no hope of reaching or recognising the correct conclusions, and we could argue forever. This is why I’ve myself written “Getting correct answers is not trivial at all, as contingency is the only rule of the game”. The crucial thing here is that the game we are playing is entirely empirical and epistemological; this is due to the premises we’ve agreed to take as valid: computations are abstract, entirely abstract. To me, starting from a physicalist stance, this makes the game far removed from ontology: I don’t end up claiming “this set of algorithms is what is happening in the alien brain”; at best, I end up claiming “this set of algorithms, when attached to this and that I/O devices, reproduces all the functions of such an embodied brain: it is functionally equivalent”. The ontological claim, if I really need to make one, is that “therefore the two objects, the original and our functional equivalent, have the same kind of inner life (if any)”, but nevertheless, because we can only study the first empirically, and then we need to instantiate the latter in physical terms (however far removed, see the simulation side), I also wrote that “In practice we’ll be striving to get as close as possible to full functional equivalence”.
    The intention there was to point out that full functional equivalence is never going to be guaranteed, and perhaps never fully achieved. This is a consequence of having reached the empirical domain: we are now doing science, and because the mystery of reality is unassailable (or, if you prefer, because models imply approximation), we can only reach provisional answers. There is always hope to find better answers, or at least to improve the precision of the current ones.
    Therefore, the fact that we can imagine concrete scenarios where the contingency of the situation doesn’t even allow us to reach any degree of certainty is neither surprising nor does it invalidate my argument. It merely highlights the expected limitations of the approach, which were known from the start.

    We can now go back to the locked-in patient. This case is illuminating because for human patients we can make very well substantiated hypotheses on how the I/O systems would normally work, and pinpoint where the expected causal chain gets interrupted. What happens, however, is that in this way we expose the very arbitrariness of assuming there is a meaningful way to separate agent from non-agent in the various domains (space, time, and consequently causal chains). Think of a quasi-infinite set of “locked-in” patients. The first has sensory organs that all function perfectly, and all input is processed normally. The whole brain also works normally, but the cholinergic neurons that send motor signals all misfire: they do fire, but the muscles fail to contract (with the exception of those muscles that keep the patient alive). The rest of the set is made up of patients who include all the other possible additional failures, moving backwards from the effector synapses, to interneurons, and all the way back to disabling the whole brain with the exception of the incoming neurons. At some stage along this chain we would stop thinking of the patient as locked-in and declare her brain-dead instead.
    You should now be screaming “exactly! don’t you see it’s still all arbitrary?” and I would softly answer: “precisely: I’ve been saying the same all along”. Because it’s an empirical question, this means that we have to start by accepting that there is no final and correct answer: in the reality out there there is no sharp and uniquely defined separation between me and not-me, dead and alive, mammal and non-mammal, the road and the pavement, the sea and the coastline. All the things I’ve mentioned are, as I hope at least Scott will agree, the result of the fact that cognition, in order to work, needs to assume there are objective and meaningful boundaries (so as to divide reality into subsets and then work symbolically on the representations of those sets); but in the reality out there, you can expect to find that most, but not all, these distinctions (albeit very meaningful, FAPP) are ultimately arbitrary. Thus, finding that the criteria I’ve been proposing here also break down if you look close enough is not a valid rebuttal: it is the unavoidable consequence of having found a way to move into the realm of empirical science.

    I realise this is very hard to swallow, probably for all but one or two of the regulars, but this is just rephrasing the official mantra that “science embraces uncertainty”, or that “all scientific theories are only provisionally accepted, until we find better ones”. The consequence is that we have to restrict ourselves to the epistemological domain, and I’m comfortable with that, while I guess you are not!
    Note that maths (and other abstract domains) does allow for certain answers; see http://sergiograziosi.wordpress.com/2013/09/06/so-after-all-i-do-know-something-just-another-story-part-3/ and http://sergiograziosi.wordpress.com/2014/03/09/sources-of-error-essentialism-fallacy/ if you want to understand my position better: once we move into the realm of knowledge about knowledge, we can find definitive answers.
    I realise all this probably starts from the exact negation of all the premises of Mathematical Platonism (although I’m no expert!), so I do expect you to run away screaming in horror! I apologise for trying to shove it down your throat.
    Does any of the above answer your objections?

    PS @Scott: I need to eat and sleep, will try to answer your comments sometime tomorrow.

  41. Very nice post, Sergio! Thanks for taking the time to clarify your ideas publicly.

    Terrence Deacon’s work “Incomplete Nature” may be of interest in this discussion. He talks about a precellular form of life, which he calls a teleogen. He argues that this organism’s ability to self-reproduce and self-repair is the most basic form of reference.

    Do you think that the abilities to self-reproduce and self-repair are evidence of referencing (physical representation)? If so, could these abilities fit the idea of a “fixed reference point”?

  42. Sergio, your remarks about what it would take to make a rock into an agent are well-taken. Dennett has talked about this issue:

    Often people think of a human’s processing of information from the world in terms of three stages: (1) Processing of incoming signals to get them into some suitable internal form. (2) Reasoning about this internal form to come to some decision about an action to take. (3) Processing of this decision to figure out specific responses of specific muscles. In other words, there is an incoming path for information, internal processing, then an outgoing path. Many people think of stages (1) and (3) as non-conscious, and that the only stage that deserves to be considered a “mind” is stage (2), the internal processing of information.

    What the rock example shows is that it is possible to put ALL of the thinking into stages (1) and (3), so that there is really nothing left in stage (2) to be considered “thinking” or “consciousness”.

    This relates to the “brain in the vat” scenario (which doesn’t seem different than the “locked-in patient” scenario–is it?). We (or some surgeon with godlike skills) can restore the brain to full consciousness by reconnecting it to sense organs and muscles in the appropriate way. But how do we know that we’re not doing the same thing as with the rock–creating intentionality through our choice of I/O devices?

  43. @Scott #39
    Thanks Scott, I was starting to worry that you got lost at Disney’s 😉

    I substantially agree with the positive examples you mention (not a surprise!), but I probably have a very different aim, which justifies the current effort.
    The meta-approach that I take requires embracing a plurality of theoretical frameworks; to me, this is a necessity once I accept that all knowledge is heuristic. If it’s heuristic, it sometimes goes wrong, and if it does, then it becomes useful to nurture a diverse bag of tools. Thus, a theory that starts from a given set of assumptions will work well in illuminating some things, and will obscure, hide or misrepresent something else. Any theory, including my own.
    As a result, trying to accept different sets of assumptions, and seeing how they relate to the different conclusions they lead to, is something that greatly interests me: it is the only way I know to explore the unavoidable blind spots that obsess me. If I’m right, each theory will have blind spots and we should therefore try to identify them (and thus learn when not to trust a given theory). Once we have a reasonable idea of where the blind spots are, we can also try to minimise their scope.

    In this meta-context, what I’m trying to do should be clearer: I’m embracing (as much as I can; I have to admit I can’t thoroughly explore Platonism, for example) a set of postulates specifically because I think that (since they are not the ones I’d pick) I can be less blind about their limits, and thus show why some of the conclusions that seem to follow do not actually apply. I am actively trying to reduce the size of the blind spots I can see. Unfortunately, I can’t see the blind spots that my own theories generate, and this is why my kind of effort needs to be dialectic: I need the feedback from people who disagree with me in order to learn about/mitigate my own shortcomings, as exposed by their different perspective. Also: for a dialectic approach to be mutually informative, I need to accept upfront that I might be wrong, otherwise I’ll be participating in a confrontation, not a dialogue.

    There is also an element of persuasion in all of this: the epistemic foundations of anyone’s intellectual stance are exactly the things that people are least likely to change. No matter how earnestly or how hard one tries in a debate, I expect that everyone will start from some fundamental beliefs that are not, in their eyes, questionable. Thus, rhetorically, I am hoping that accepting (as much as I can) their premises, and challenging their conclusions, is a more promising strategy to reach some consensus. Blunt opposition is known to be ineffective, so I might as well try what my natural inclinations suggest.

    To say it as a joke, I like putting myself in someone else’s epistemic boots, see where they strap me, and learn something by the hilarity I generate in the process.

    I had a baffling conversation on Twitter about this: my counterpart seemed to be violently against provisionally accepting someone else’s premises, and I think he concluded that I’m intellectually dishonest. I can see why, but I beg to differ.

    Coming back to your conclusions: you know we share very similar views. Starting from your own premises, I can very much conclude that
    “consciousness is *not* […] computational or […] ‘intrinsically intentional’”. I can reach this conclusion from radically skeptic premises on what cognition is. However, I can (try to) start from strong views on what computation and intentionality are, provisionally accept them, and then gradually erode their strength from within. In the process, I hope to learn about the weak points in my argument and perhaps persuade someone to look at things the way I do. Other approaches are equally valid, but this is how I get my kicks!
    Does the above explain why I don’t think I’m approaching the subject from the wrong direction?

  44. Hi Sergio,

    > However, the problem here is that we are mixing intrinsic and derived intentionality, wouldn’t you agree?

    I don’t see it that way. For me, whether the intentionality is intrinsic or derived depends on how it came about and has no bearing on what is actually happening right now, so the distinction is historical and not terribly important. For instance, self-driving cars could be designed with explicit representations of roads, other cars and so on (derived intentionality) or they could arrive at such representations themselves through the operation of a genetic algorithm or neural network (intrinsic intentionality).

    So evolution/training/feedback is one way of producing intentionality, but the goal of this development need not be survival of a physical object such as an organism.

    > This contraption accounts for the first spark of intentionality, and I must add, that it should count as a source of derived intentionality, but I also think the distinction (intrinsic versus derived) is irrelevant in this context.

    I agree that this contraption has intentionality. But I also don’t see an in-principle way to say the rock absolutely doesn’t have intentionality. The difference between the contraption and more straightforward sensory and motor apparatus seems to me to be a difference in degree. The apparatus is effectively translating the world into the rock’s language and vice versa. This is not so different from a contraption that senses the world, parses it, understands it, re-renders it as a 3D scene, encodes this rendering as a series of pulses and feeds it into my brain via my optic nerve. The fact that the apparatus has its own intentionality doesn’t mean that I don’t also have my own.

    > On (e2) A simulation with no input/outputs into the containing world:

    I’ll happily agree with you that changes to the computer (disrupting its electricity supply, reprogramming it, cutting it in half) will have implications for the world it is simulating. I don’t think this gets at the point I was trying to make though.

    An agent within the computer has no access to what is happening in the real world. It cannot even know that it is within a computer — for all it knows its simulated environment is the real world. An agent outside the computer has no access to what is happening inside the black box, except perhaps by coming up with some kind of interpretation of the physical goings on within it. And there are many such interpretations which vary only in arbitrariness and complexity.

    So, ex hypothesi there is a fundamental insurmountable barrier between an observer within and an observer without the simulation (you could reprogram the simulation to allow input and output but then you’re changing the thought experiment). This is what I am talking about when I say it is causally disconnected — I’m assuming only that the machine is allowed to continue running. This disconnection means that you can’t use correlations with the real world in order to choose a preferred interpretation. There are no correlations other than that between the computer getting electricity and the simulated world’s continued existence, but that correlation is so simple that it doesn’t tell you much.

    Given the interpretation we might be most inclined to use, disrupting the electricity will disrupt the simulation, but similar points can be made for the rock. Given a particular interpretation, then slicing the rock in half or even just tapping it absently also disrupts (in effect destroys) the simulation. Only interpretations which deliberately accommodate (indeed, require) these specific interactions in advance will survive intact.

    (As an aside: it occurs to me that this may point the way to choosing a preferred interpretation after all. This is just a sketch of an idea, but it seems to me that some interpretations are more robust to physical changes than others. An arbitrary Searlian interpretation of the physical events in a wall as computing something is likely to go haywire if you so much as breathe on the object in question without planning for this in advance, whereas the interpretations of brains and computers as computing devices are relatively undisturbed by mild physical interventions. For instance, kicking a computer that is calculating some function has no effects at all on that calculation as long as you don’t damage or destroy it. Even so, there may be a way to paint an arbitrary interpretation more robustly than I suppose, or ways to undermine the supposed robustness of our preferred interpretations.)
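
    To make that sketch slightly more tangible, here is a toy illustration (my own construction, in Python, with arbitrary dynamics and thresholds): treat an ‘interpretation’ as a map from physical states to computational states and ask whether the mapped sequence survives a small perturbation of the physical trajectory. A coarse-grained reading survives; a lookup table fitted post hoc to one exact trajectory does not.

    ```python
    # Toy illustration of "interpretation robustness" (all dynamics invented).

    def physical_trajectory(x0, steps=10):
        """A stand-in 'physical system': a damped, periodically driven variable."""
        states, x = [], x0
        for t in range(steps):
            x = 0.9 * x + (1.0 if t % 3 == 0 else -1.0)
            states.append(x)
        return states

    def coarse_interpretation(states):
        """Coarse-grained map: each physical state is read as a single bit (x > 0)."""
        return [int(x > 0) for x in states]

    def adhoc_interpretation(states, codebook):
        """Searle-style map: a lookup table tailored to one exact trajectory."""
        return [codebook.get(round(x, 6), "undefined") for x in states]

    baseline = physical_trajectory(0.5)
    codebook = {round(x, 6): step for step, x in enumerate(baseline)}  # fitted post hoc

    perturbed = physical_trajectory(0.5001)  # "breathe on" the system

    print(coarse_interpretation(baseline) == coarse_interpretation(perturbed))  # True: survives
    print(adhoc_interpretation(baseline, codebook))   # [0, 1, 2, ...]
    print(adhoc_interpretation(perturbed, codebook))  # all "undefined": the mapping fell apart
    ```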

    On the points pertaining to the causal connections between the agent within the simulation and other entities within the simulation, I’m more or less inclined to agree apart from the fact that I think we could say the same about agents and objects in any other arbitrary interpretation we could project onto the physical object.

    > On (e3) locked in patients and/or a brain in a vat.
    > Would we be able to recognise that this thing has the potential for an inner life I expect you to answer “no”

    I wouldn’t answer “no”. For me, whether something is intentional or conscious or whatever depends on the interpretation of its functional structure. We may project many interpretations onto the brain in the vat, some conscious and some not. I personally would prefer the simplest one that seems to predict how its neurons fire and so on — i.e. the same interpretation you would likely prefer. The problem is not how we recognise that it has a potential for an inner life, but how we single one such interpretation out as correct and discard the rest.

    > “this set of algorithms, when attached to this and that I/O devices, reproduces all the functions of such embodied brain: it is functionally equivalent”

    That’s not much better than what you say you are not claiming. Because once you think you can enumerate all the functions of an embodied brain, you are singling out a particular interpretation. Different interpretations will have different functions and different algorithms will be deemed functionally equivalent.

    > but in the reality out there, you can expect to find that most, but not all, these distinctions (albeit very meaningful, FAPP) are ultimately arbitrary.

    I don’t need convincing of this. You’re preaching to the choir in this whole section, Sergio!

    But what I’m saying about the locked-in patients is that it’s not easy to single out a preferred interpretation by looking at how they interact with their environment, because they don’t. I don’t think you’re really addressing this point. The issue is in choosing a preferred interpretation, not in whether a particular interpretation is conscious or brain-dead. If you can’t pick out an ironclad set of criteria for preferring one particular interpretation over others, then we can’t say that *this* experience supervenes on a physical substance any more than *that* one does. We haven’t explained what it is that makes one particular experience real and not the others.

    (Of course there is always my preference: to bite the bullet and say that they are all real but abstract, and so not really located in one particular time or place. This is not so much a problem on the MUH (Mathematical Universe Hypothesis), where they are all real anyway.)

  45. @Brian #42
    I’m the one that should be issuing “thanks”: besides the guilty pleasure of getting so much attention, the challenges and suggestions I’ve collected have been above my best expectations, and I had high hopes. Also, Jochen suggested reading Incomplete Nature, so I guess it will make it into my reading list.
    The abilities to self-reproduce and self-repair, and in general to self-preserve (there is also “avoid getting damaged”), do look to me like the abilities that point to naturally occurring (not designed) intentionality. For such systems, I’m proposing that the “fixed reference point” is indeed whatever sustains such abilities (functions). I’m now very much ready to add that other (designed) abilities can sustain other types of intentionality, as emerges from my exchange with Disagreeable Me.

    @Daryl #43
    I’ve been ignoring hints towards Dennett because I would need to refresh my memory on exactly what he says. As pointed out, I touch many Dennettian themes, and I find myself unable to pinpoint what part of my thoughts I’ve taken from him, what coincides but predates my reading of Dennett and what is actually not in common. Blush.

    The human locked-in patient is radically different from the alien-brain-in-a-vat, but is equivalent to the human-brain-in-a-vat (am I the only one who finds this scenario so horrible that it almost causes me pain just to write about it?). The alien is different because in that case we don’t have access to any information on the contingencies that shaped the brain structure; for humans, we already have plenty of the information we’d need.

    “how do we know that we’re not doing the same thing as with the rock–creating intentionality through our choice of I/O devices?”

    The consensus that is emerging (in this tiny corner of the internet) is that we can’t be absolutely sure, but we can make well-educated guesses: to successfully “restore the brain to full consciousness” we would need a plan, and that plan would be grounded on our theoretical understanding of the original “expected” causal links between world and brain. As I was hoping to show with the example of the set of locked-in patients, there is no sharp distinction between input and processing components, so whether we’ll judge that we created a new consciousness or re-enabled the existing one is a bit of a false dichotomy; there is no ultimate correct answer. But who cares? On the whole it seems that we (you and I, at least) agree that for all practical purposes the position we’ve been exploring does look promising (at least it didn’t crumble at the first challenge).

    [Irrelevant nitpicking:] I wouldn’t say that “it is possible to put ALL of the thinking into stages (1) and (3), so that there is really nothing left in stage (2) to be considered “thinking” or “consciousness””, because that’s a bit moot (unless I’m missing your point). Once our theory is detailed enough (here I am only saying that despite the objections, there is hope that such a theory is possible) we could do all the work without the stone, but we would be the ones moving the goalposts, right?

  46. Sergio: “Coming back to your conclusions: you know we share very similar views. Starting from your own premises, I can very much conclude that
    “consciousness is *not* […] computational or […] ‘intrinsically intentional’”. I can reach this conclusion from radically skeptic premises on what cognition is. However, I can (try to) start from strong views on what computation and intentionality are, provisionally accept them, and then gradually erode their strength from within. In the process, I hope to learn about the weak points in my argument and perhaps persuade someone to look at things the way I do. Other approaches are equally valid, but this is how I get my kicks!

    Does the above explain why I don’t think I’m approaching the subject from the wrong direction?”

    Well, dialectically speaking, the problem with immanent critiques is that only outsiders find them convincing. For me, the question BBT – or more generally, heuristic neglect – poses regarding intentional cognition is devastating: Given that intentional cognition is adapted to solving complex systems neglecting what is actually going on, why should we think it can tell us what’s actually going on with consciousness? In fact, we should expect it to jam the gears exactly the way it jams the gears. If enough people begin posing this question regularly enough, I think it could dramatically reorient the dialectical pitch.

    That’s the strategy I’ve adopted, anyway. The ‘are we there yet?’ version of communicative reason!

  47. @Disagreeable Me #45
    I think we’re converging, and quickly: if we aren’t careful we might bump into one another.

    First of all: I think you are allowing the discussion to become a little dispersive. By replying to verbatim quotes, and pretty much skipping the rest, you force me to do a bit of synthesis work each time round. I’m not very good at writing summaries, so I guess we should all make an effort to list the conclusions at the bottom of each intervention (or the aim at the top).
    Right now, I see one main disagreement; naturally, I trust you’ll correct me if I’m wrong.
    I’ve constructed an overall approach that allows us to identify and analyse Intentional systems (we are also calling them Agents) by, ahem, focusing on their intentions. This places algorithms as the main point of interest, because I’ve implicitly assumed that the physical properties of what we would now model as an I/O system can be productively reduced to computational functions (transformations acting on signals).

    Within this scheme, you seem to point out that there is still too much space for arbitrary interpretations (“The issue is in choosing a preferred interpretation”), that is, in choosing a way to translate from physical transformations to computations. I do accept that choosing a preferred interpretation is problematic:
    a) a priori, we already know that sometimes it will just be impossible,
    b) perhaps with a few extra-lucky exceptions, even in more approachable situations (where we have access to the contingencies that produced a given structure) it will never be perfect, as with all scientific modelling.

    So I don’t think we are divided by a substantial difference here. What may be a bigger disagreement is that I have singled out a method to work towards convincing interpretations, and you haven’t really accepted it. Perhaps not even “provisionally”, to see if it does allow you to make some progress.

    However, I do think we are converging, because the “sketch of an idea” that you had, when developed, is exactly what I’m trying to explain. For living things, however, it extends well beyond the robustness of the interpretation: the whole point of being an Agent is that it reacts to stimuli in non-random ways; the reactions of an Agent somehow conform to the Agent’s aims. For naturally occurring living things, this, by contingency (the fact that what tends to survive survives), generates a common denominator: you and I, as well as my fictional bacterium, tend to preserve the functionality of our I/O (and internal) systems. The stimuli we register are tuned to provide the kind of information that allows us to succeed in this endeavour more often than not. Thus, the structures that make up my physical being do not only measure what is happening in the world, but also provide an intrinsic reference: while they exist, the way they respond to what happens outside tends to ensure that they will keep existing. If you know enough about what influences my own homoeostasis, as an external observer, you can therefore start understanding why my internal structures work the way they do. The simple intentional system that is my fictional bacterium can be readily understood in this way; the same is not so for myself: explaining why I’m typing this comment in terms of how it comes from my self-preservation needs isn’t going to be straightforward (and almost certainly requires encompassing the structures that may potentially survive my persona: that is, DNA sequences and perhaps memes).
    Compare to the stone: if you want to explain the “mechanisms” that process “sensory” information in the stone, you’d be a bit foolish. You can explain why the stone traverses a given sequence of states in far more parsimonious ways by using standard (or quantum, depending on the detail you want to capture) physics. If you want to explain why the stone has a particular structure, you’d use the same tools (and some knowledge of the contingent causes that made the stone what it is now).

    On the other hand, if you want to describe me, you can (in theory) use the same tools of physics to describe (and predict) the sequence of states I would traverse, given my current status and a stimulus of choice. But if you want to explain “why” my structure is what it is, and in particular why it is so feature-rich, so spectacularly organised, why and how it repels entropy, and so on, then in one way or another your description will imply my intrinsic intentionality. If we can agree on this, it follows that my internal structures can be efficaciously described in functional/computational terms, once my intentional capacities have been described in enough detail. This is because we have identified a higher-level regularity (the fact that I act in a way that preserves the integrity of some of my structures), and therefore we can use this knowledge to move our description to a higher level of abstraction: from the physical components and their physical relations to the functions that these structures sustain. If we do this, the Input-Processing-Output metaphor becomes powerful and more parsimonious than fundamental physics. In other words, the regularity I am highlighting allows us to do two things:
    – At the pure philosophical level, I hope it allows me to explain how mechanisms can generate intentionality.
    – On the empirical side, the regularity acts as a constraint on the more abstract models (the computational ones), making sure they are not completely arbitrary. To make them objective, we would need to know all the contingencies that made a given intentional structure what it is, as well as all the physical parameters that fully describe the current state of the structure. Both endeavours are impossible to complete, thus some level of uncertainty will remain. Ex hypothesi, some residual arbitrariness will always characterise our models.

    Thus, you say:

    Given the interpretation we might be most inclined to use [in the case of a computer running a simulation], disrupting the electricity will disrupt the simulation, but similar points can be made for the rock. Given a particular interpretation, then slicing the rock in half or even just tapping it absently also disrupts (in effect destroys) the simulation.

    This leads you to the first seed of my own proposal: perhaps the robustness of an interpretation can be used as a guide to select better interpretations. This is true, but you can go further, as I’ve been trying to show.

    In short: Agents react in ways that can be understood in terms of aims. Naturally occurring agents (as a general rule, with exceptions that aren’t interesting for our current aim) would normally embody at least the “self-preservation” aim. They can be described in terms of structures that actively act on the environment in ways that tend to preserve their existence. These structures act as self-reference points: as long as they exist, their existence can provide the first seed to move from mere mechanism to intentional content (the phosphorylation is well understood as a signal about glucose, after all). Realising this is an intellectual (epistemological) exercise: it is useful to understand (model and predict) such structures, it allows us to move our model/understanding to a higher level of abstraction, and thus to create more parsimonious descriptions. This is done without neglecting, but rather by emphasising, what distinguishes an agent from a stone.
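
    To illustrate the self-reference point with a deliberately crude sketch (Python; every number and rule below is invented for the purpose, so treat it as a cartoon rather than a model): the internal signal earns its status as being ‘about’ glucose because, while the structure that registers it persists, acting on it is precisely what keeps that structure in existence.

    ```python
    # Cartoon of the "self-reference point": an internal signal counts as being
    # about glucose because responding to it is what preserves the structure
    # that registers it (all values are invented).

    def run(agent_responds, steps=100):
        energy = 5.0                           # the structure whose persistence is at stake
        for t in range(steps):
            glucose_outside = (t % 2 == 0)     # a toy environment: glucose comes and goes
            internal_signal = glucose_outside  # the "measurement" (cf. the phosphorylation example)
            if agent_responds and internal_signal:
                energy += 2.0                  # uptake: the response tends to preserve the structure
            energy -= 0.5                      # metabolic cost, paid regardless
            if energy <= 0:
                return f"structure gone at step {t}"
        return f"still intact, energy {energy:.1f}"

    print(run(agent_responds=True))   # persists: the signal earns its 'aboutness'
    print(run(agent_responds=False))  # winds down like a stone: no aim, nothing preserved
    ```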

  48. Hi Sergio,

    Sadly, I’m not as sure that we are converging. The basic problem I identify seems to remain.

    It seems to me that your focus is on identifying as preferred one particular interpretation of the computations being conducted by an agent on the basis of predictive power, parsimony, reasonableness and so on. Such interpretations are “convincing”, in your words. I can go along with all that. I agree that any two reasonable people should usually agree on what computation is being performed by a particular physical object (if any). That is not actually the problem, though it seems to be the problem you are addressing.

    The problem is that there are unreasonable interpretations which might be chosen by unreasonable people. These interpretations might be “foolish” as you suggest, but there don’t seem to be any absolute, objective criteria for distinguishing between the reasonable and unreasonable ones. As you said in a different context, I believe that “in the reality out there, you can expect to find that most, but not all, these distinctions (albeit very meaningful, FAPP) are ultimately arbitrary”, and that this is an example of such a case.

    But though the distinction between a reasonable interpretation and an unreasonable one is either arbitrary or a matter of degree, it is much harder to accept the same of the existence of this or that conscious mind. Either a mind exists or it doesn’t, we suppose. The problem is therefore that you seem to have an absolute objective fact depending on an arbitrary or vague or subjective one.

    It seems to me that the computationalist needs to argue that there is precisely and absolutely only one objectively correct interpretation of any physical object as a computational device (is this really what you have been doing, in your view?), or accept the absurd conclusion that the same object can support the existence of many minds in parallel (each corresponding to a different interpretation), or argue that there is some mistake in the assumption that an absolute fact cannot depend on a vague one.

    Until you do that, I don’t think you really have an account after all of why a brain has the mind and internal experience corresponding to one particular interpretation instead of another. At best you’ve recapitulated Dan Dennett’s intentional stance argument: that an entity can be said to have intentions if it is useful to regard it so.

    And perhaps you might acknowledge that. You seem to agree that this is not the full story. But that is just Searle’s (and the other anti-computationalist’s) point. Searle has no problem in viewing a brain as a computer from a functional third-person perspective. He just doesn’t think that this goes any distance towards explaining first person phenomenology. If you concede that there is more to it than your account explains, then you are basically conceding the anti-computationalist point and not really putting forth a particularly controversial view. You would in effect be computationalism-agnostic.

    By the way, I don’t think I am really “pretty much skipping the rest”. It seems to me that much of what you have written to me is either preaching to the choir or addressing a view I don’t have or otherwise not terrifically in need of a response. That’s no problem. I don’t expect you to be particularly familiar with my views. I’ve read everything you’ve sent my way and I don’t think I’ve left any important point unaddressed. Please let me know if you feel otherwise.

  49. @Disagreeable Me #49
    (there is nothing sad about disagreeing, tu quoque!?)
    Short and simple, I hope:
    Yes, you never really “skipped the rest”, I was clumsily asking you to help me focus on the core problem(s), and you did exactly this, so thanks (and apologies)!

    I do think I’m adding something to Dennett (IIRC) though, or if you prefer, I don’t think what I’m doing falls neatly in any one of the three options you give me.

    So what is it that I’m trying to say?

    First, I say that there is an empirical question that can be legitimately asked, with the possibility of exploring the answer space scientifically. The question is “is any given something an intentional system?”. The challenge I was trying to address is explained in the premises section of the main article. In short: because computations are arbitrary (granted), they can’t produce the first spark of “aboutness”, thus relying on computation to explain intentionality leads to infinite regress. I accept this and propose that the physicality of a system can be used to get started. It does seem that we agree on this.
    Then I explain why modelling intentional systems in computational terms is useful, once again for empirical purposes. We can abstract something out, describe the flow of information and avoid having to deal with the nitty-gritty physical details. This does follow Dennett, but the way I recollect his argument, it applies more to common life (understanding things on the basis of what they are for, and people on the basis of what they want) than to the computational case for intentionality (as I’ve said elsewhere, I might be very wrong in my recollection!).

    In this second step, we can use standard “scientific” heuristics to discriminate between alternative interpretations and order them in terms of efficacy, predictive power, parsimony and so forth. Thus, I don’t need to argue that “there is precisely and absolutely only one objectively correct interpretation of any physical object as a computational device”. In the first step I’ve already eliminated the stone (and all non-intentional systems), so I’m not talking about “any physical object”. In the second step I argue that there is a set of “correct” interpretations (it’s likely that the same functions can be implemented by many equivalent algorithms), and also that, since we are talking of interpretations, we can’t have “objective” ones, only “somewhat accurate” ones, and we can order (non-functionally equivalent) options by verifying their accuracy. This is a strong claim, which again you seem to accept: because cognition arbitrarily segregates the one reality, the one objective truth can’t ever be fully described. But if you accept this claim, the list of three possibilities does look incomplete: there must be another way!

    In other words, I don’t think I am arguing for your first option, but I don’t seem to fit the other two either. Perhaps I’m arguing for the last one (that there is some mistake in the assumption that an absolute fact cannot depend on a vague one), by saying that the objective matter at hand can never be fully resolved, only approached, and that we can only have access to information that is vague and under-determined… That’s not how I would choose to describe my position, but it may make it clearer to you.

    Finally: I have been trying, and failing, to restrict the discussion to the first spark of intentionality, which we all agree is necessary (but not sufficient, at least in my view) to sustain full human-like consciousness. The way consciousness may enter the picture is by some additional specific mechanism, which, if I’m right, can be theoretically modelled at any arbitrary level of precision in computational terms. I have not said a single word on what this mechanism might be (I don’t think I’ve even mentioned the necessity of some specific mechanism, but I do think this is the case: Intentionality != Consciousness).

    I meant to leave consciousness (/inner life) aside because here I would be happy to establish two smaller conclusions:
    1. Intentionality can be the result of mere physical mechanisms. I propose some criteria that can in principle help us reach a consensus for any given “object” we might consider (including simulated agents), and decide whether it makes sense to treat it as an intentional system or not.
    2. Physical mechanisms can be modelled in computational terms, and the ones that sustain intentionality are particularly prone to be modelled in such ways.

    Thus, I hope that if we are leaving full consciousness aside, we are indeed not too far from agreeing to points 1 & 2.

    If we do, at least the problem of infinite regressions is dealt with (assuming Jochen may also agree, we would have a super-local consensus). The problem of infinite interpretations is only mitigated: for example, an interpretation of the simple bacterium that leads to the prediction that the presence of glucose will make the bacterium slow down its metabolism would be exactly wrong. In other words, we can now rule out some but not all interpretations, because we (third-party observers) can use the same reference point that the system itself uses to generate the first spark of intentionality. If an ‘interpretation’ does not leave the system potentially intact, when situated within the environmental parameters that would allow the original structure to exist, we can say that it is a wrong interpretation. Also, an interpretation that fails to make verifiable predictions on how the bacterium will behave is equally wrong. Therefore, we are now doing science: slowly and laboriously building better and more accurate models, and we can legitimately check the accuracy of our predictions via empirical verification.
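
    As a toy version of this pruning step (Python; the ‘observed’ data are simulated and invented, so this only sketches the logic, not any real measurement): candidate interpretations are scored against what the organism actually does, and the “glucose slows the metabolism down” reading is discarded on empirical grounds.

    ```python
    # Scoring candidate interpretations of the bacterium against (invented) observations.

    observed = [(g, 0.2 + 0.8 * g) for g in (0.0, 0.25, 0.5, 0.75, 1.0)]  # (glucose, uptake rate)

    def interp_speeds_up(glucose):   # "glucose up-regulates uptake"
        return 0.2 + 0.8 * glucose

    def interp_slows_down(glucose):  # the interpretation ruled out in the text
        return 1.0 - 0.8 * glucose

    def prediction_error(interpretation):
        return sum((interpretation(g) - rate) ** 2 for g, rate in observed)

    for name, interp in [("speeds-up", interp_speeds_up), ("slows-down", interp_slows_down)]:
        print(name, prediction_error(interp))
    # The "slows-down" reading accumulates a large error and is discarded; no appeal to a
    # pre-existing interpretation map is needed, only the observed behaviour.
    ```
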
    To my relief, the foundations of most of neuroscience, and as a consequence the ambitions of cognitive neuroscience (remember Revonsuo a few weeks ago?), are safe, despite Searle. Also: we now have a bit more hope that it might be possible to find a computational function (or a set of functions) that is(/are) functionally equivalent to consciousness, because the first obstacle, the one about intentionality, is surpassed. That’s ambitious enough in my book, even if I haven’t solved the hard problem in 12 simple moves ;-).

    If you prefer, I’m saying that viewing a brain as a computer can cover some distance towards explaining first-person phenomenology. How? Apart from addressing intentionality (pointing out that we need to ground it on embodiment, not computation, and that embodiment enables computationalism), I didn’t particularly say. I’ve tried to refute one of the theoretical reasons to deny the possibility.

  50. My comment didn’t make it through because it had a link, I think.

    Here it is again, link expunged.

    Hi Sergio,

    OK, it’s clear that you’re trying to answer whether any given system is an intentional system rather than trying to answer whether any given intentional system is conscious. So you’re not really addressing the contentious issue, it seems to me. As long as we’re not getting into whether there is *actually* a mind there, then Searle’s criticism is not particularly relevant. We seem to be staying in the realm of whether it is useful or meaningful or reasonable to describe a system as intentional, i.e. that of Dennett’s Intentional Stance.

    I agree that you’re not falling into any of the three options I gave you. Indeed, that was the point I was trying to make. In not pursuing one of those options it seems to me that you are not really arguing for computationalism but for the usefulness of adopting the Intentional Stance for some systems and not for others. The only difference between what you are saying and what Dennett said, it appears to me, is that you are interested in characterising intentional systems as computational whereas Dennett is not. You are also a little more focused on intentions directed at keeping an organism alive, and on the origin of those intentions, whether evolved or designed. But this is only a minor shift in emphasis as far as I can see.

    Don’t get me wrong. I agree with Dennett. I agree with you. I just don’t think it is a sufficient answer to Searle’s challenge (equivalent to John Mark Bishop’s Dancing with Pixies reductio: google “A Cognitive Computation Fallacy? Cognition, Computations and Panpsychism”).

    As well as the fact that you seem to be heading in a direction that would explain the existence of consciousness (an absolute) based on a subjective, vague set of criteria, I still don’t think the Intentional Stance copes very well with scenarios such as the locked-in patient. It’s a useful heuristic, sure, but it is not adequate to defend computationalism against these criticisms.

  51. Hi Disagreeable,

    “it’s clear that you’re trying to answer whether any given system is an intentional system rather than trying to answer whether any given intentional system is conscious”.

    Yes, I have a long-term plan and I’m trying to proceed methodically. The first hurdle, which when I wrote this looked like a major obstacle to me, is how intentionality (aboutness) comes into being, and whether we can find a convincing reason to expect intentionality to be generated by mindless mechanisms (Dennett says “Yes”, but I find his explanation relies a bit too much on “then complexity fills in the blanks”).
    Assuming this can be sorted, the next step, which I am also addressing here, is to establish what tools are best suited to study what happens next. Clearly, I’m proposing “computational tools”.
    The criticism of computationalism, as far as I can see, happens at two levels. The first I am fully addressing here; I am, however, compelled to ignore part of the second, because we need to agree on the foundations before we can legitimately proceed further.
    The two levels are:
    Level 1. You can’t generate intentionality via mechanism! This sort of challenge allows for infinite variations, I have spent many words rephrasing and exploring it here, so I’ll assume we understand the issue.
    Level 2. Computations are 100% abstract, therefore any computation can be attributed to any mechanism, thus an explanation of consciousness that relies on computation will imply that everything is conscious in all possible ways.

    The route I’m trying to follow is:
    A. Show that intentionality requires certain mechanisms. (if true, Level 1 criticism is destroyed, not just addressed)
    B. Mechanisms can be simulated; for the ones that ground intentionality, simulations via computations get closer than usual to “the real thing”: the metaphor of “transforming/interpreting incoming information” is unusually pertinent to capture what counts in the phase transitions of an intentional system. (level 2)
    C. Thus, if we want to understand what a system does with the intentionality it entails, using the computational metaphor becomes very promising. (level 2)
    D. This approach allows us to produce verifiable predictions, making the absolute arbitrariness of matching algorithms with mechanisms less absolute and less arbitrary. Some descriptions/mappings/matchings will be demonstrably superior to others. (level 2)
    D.1. (Corollary): if we are to study consciousness scientifically, objecting that this approach only leads to weak, somewhat uncertain conclusions/predictions/answers, does not count as an objection – this is the only thing that science can do.
    E. We can now legitimately start looking for computational functions that would correlate well with “consciousness/inner life”. (level 2 would be fully addressed only if this last effort was successful, but I’m not even trying at this stage)

    I will now attempt to match your disagreements with my sketchy roadmap.
    Dis1. You might be trying to stop me at the very start and object to A; if so, I’ve failed to find the key argument that refutes A.
    Dis2. You think that I “explain the existence of consciousness (an absolute) based on a subjective, vague set of criteria”. Because you rely on full Consciousness, it’s not clear what stage you are addressing, but I guess the following would be a fair matching (do correct me as much as you like). Effectively, you are saying that the uncertainty I blissfully accept in D.1 leaves too much room for subjective preference (you mentioned unreasonable people producing unreasonable interpretations) (I’ll call this Dis2.1). At the same time, you say that my criteria are too vague; this may count as a challenge to A (Dis2.2).

    Both Dis2.1 and Dis2.2 can be sorted out, I believe.
    Rejecting Dis2.1:
    I only need to re-state D: there is room for subjective interpretation, just as with all other factual matters. You can ask if Julius Caesar was vegan, and there is only one correct answer. The fact remains that however much evidence we might be able to find, we could still reach the wrong conclusion, even if it seems clear from the start that the correct answer is “no”. So what? We can work towards reducing uncertainty, thus the predicament captured by the objection that “an absolute fact cannot depend on a vague one” is exactly what we always have to live with. We are accepting that we can never perfectly identify all the causal chains that sustain reality; instead we make do with what scraps of truth we can find. I really can’t produce a coherent representation of your objection here, because you also said that I’m preaching to the choir, and that you accept my counter-argument in full. So why is uncertainty a problem? If you indeed are taking this position, how are you not setting the bar at a height that you know is impossible to reach and then saying “see? you can’t jump that high, try harder!”? I might have missed something terribly obvious, but at least the above should explain why I sound frustrated here and there.

    On the other hand, I can understand Dis2.2, and I think it carries some weight, thus some additional remarks are required. If I’m interpreting your stance correctly, the problem here is that the set of criteria I’m proposing is in itself too vague and subject to arbitrary interpretations. This is almost true: I’ve allowed some vagueness, but it is a necessary slack, unavoidable while we are not discussing specific cases.
    The generic criterion is: an intentional system needs to possess a structure that measures some quality of the outside world. Such measures will need to have structural effects on the inside (otherwise nothing is actually measured). As long as this structure is intact, the intentional system can treat the collected values (the internal structural changes) as being about whatever it is that they measure. Thus, such a system can process information that really is about the measured something.
    All the ambiguity is about the something, how it is measured (the structural effects it has inside), and how it is processed (idem). This is necessary when describing the generic case: what I’ve left out is the specific hypothesis that should apply to a specific, physical structure. This is exactly the process of generating a scientific hypothesis (“Robins are sensitive to magnetic fields, and this particular mechanism is measuring magnetism in this and that way”), and therefore I can’t specify it in advance. Once the hypothesis is made, you may or may not have the means to verify it, and may be left with complete uncertainty, but surely this is not our problem, right?
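
    For what it’s worth, the generic criterion can be transcribed almost literally into code; the following bare-bones sketch (Python; the class and the example values are mine, not a proposal about real magnetoreception) is only meant to show how little the criterion itself leaves unspecified.

    ```python
    # A structure that measures an outside quality via a structural change inside;
    # its collected values count as being about that quality only while it is intact.

    class MeasuringStructure:
        def __init__(self, measured_quality):
            self.measured_quality = measured_quality  # e.g. "glucose", "magnetic field"
            self.internal_state = None                # the structural effect on the inside
            self.intact = True

        def measure(self, outside_value):
            if self.intact:
                self.internal_state = outside_value   # nothing is measured without this change

        def content(self):
            # The internal state is "about" the measured quality only for an intact structure.
            return (self.measured_quality, self.internal_state) if self.intact else None

    magnetoreceptor = MeasuringStructure("magnetic field")  # the robin hypothesis, schematically
    magnetoreceptor.measure(0.047)
    print(magnetoreceptor.content())   # ('magnetic field', 0.047)
    magnetoreceptor.intact = False
    print(magnetoreceptor.content())   # None: the aboutness goes with the structure
    ```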

    Finally, I can find another possible disagreement, Dis3: the intentional stance doesn’t cope well with locked-in patients (and similar cases); it is a useful heuristic, and as such it can’t count as a strong rebuttal of the criticism of computationalism. Thus, we go back to levels 1 and 2. Level 1 doesn’t apply because you accept the intentional stance, but you deny that it does enough work. Thus, we are left with level 2, and in this case, I suppose you would say that we can in principle attribute any computation to the brain of a locked-in patient, and that I’m offering very weak ways to prune down the possibilities. Therefore my approach does not work.
    Here we have a case where, because of practical constraints, there are strong limitations on what can be verified in practice. We have special constraints on data collection and thus can only reach weak conclusions (especially for the alien-brain-in-a-vat). Sure, but how can we then conclude that the whole approach does not work in theory? As per Dis2.2, you either embrace the uncertainty that underpins science or you don’t. You do, so why are you surprised to find cases where it becomes important? Once again, I find your challenge difficult to represent coherently, because it does seem that you want to find a set of criteria that can generate certainty, while I’m proposing a set of criteria that can be used/verified empirically, and therefore I’m ruling out ever reaching absolute confidence from the very start.

    As a result: I cannot accept the hypothesis that I’m representing your criticism fairly, because if I am, you are manifestly wrong, and you are far too smart to be so wrong. Incidentally, this is certainly coloured by my prior judgement of Searle/Putnam/Lanier; to me, they all make the same mistake I’m now attributing to you: they set the bar too high, they want something that is able to produce results with 100% confidence, but you can’t do this if your question is empirical, and “does this something have an inner life?” is an empirical question from whichever angle I approach it.
    Thus, it’s more likely that I am the thick one and that I’m failing to understand the core of your rebuttals (and/or the challenge I’ve accepted). This leaves me in an uncomfortable situation: if I’m really the one who fails to understand, asking you to explain your position once more would be exploiting your time and kindness, and I really don’t want to do so. On the other hand, if I am missing your point, your time and kindness would give me some hope to finally “get it”… Thus, it’s up to you, and I mean it.

    The tl;dr:
    “you seem to be heading in a direction that would explain the existence of consciousness (an absolute) based on a subjective, vague set of criteria”, but I don’t think I am. The criteria I propose are designed to generalise, FAPP they are neither vague, subjective nor arbitrary.
    “I still don’t think the Intentional Stance copes very well with scenarios such as the locked-in patient”. For any empirical question it is possible to imagine a scenario where contingent factors will preclude finding the correct answer at the desired level of confidence. Therefore this isn’t a valid objection; at most it says “you are stealing our subject, we want definitive answers and therefore don’t want the question to be treated empirically!”.
    “[my proposed approach to detect intentionality is] a useful heuristic, sure, but it is not adequate to defend computationalism against these criticisms”. We agree that it does not close the case (point E is still missing), but do you agree that it does move us forward? The moment you label it “a useful heuristic”, I think “therefore it *must* count as progress”.

  52. @Scott #47

    (I’ll try my luck with blockquotes…)

    Well, dialectically speaking, the problem with immanent critiques is that only outsiders find them convincing.

    Woops! I genuinely have no idea of what you mean, but sounds important: are you saying that my approach can’t convince insiders? If you are, insiders of what?

    Given that intentional cognition is adapted to solving complex systems neglecting what is actually going on, why should we think it can tell us what’s actually going on with consciousness? In fact, we should expect it to jam the gears exactly the way it jams the gears.

    We agree on this, but I do have a problem with BBT. BBT itself, but also science and philosophy in general, all rely on intentional cognition. Thus we have a methodological problem: if you are right, how could any one of them *not* throw their own spanner in the works? If you have addressed this problem somewhere, I’ve either missed it or failed to recognise your attempt. (Perhaps this is something we should discuss elsewhere? It would be manifestly OT)
    For now, this objection of mine (I do think it can be answered) has a methodological consequence: I focus on how to reduce uncertainty and try to jump the gun (manage the problem) in this way. That isn’t in any way or form intended to mean “this is why my approach is the right one”. I’m just naming the different corners that I think we are cutting, respectively.

    If enough people begin posing this question regularly enough, I think it could dramatically reorient the dialectical pitch.

    Yes, it could work (but we don’t know *yet* 😉 ). Please feel free to smack me in the head if you ever catch me suggesting otherwise (I might, because I get hasty far too often).

  53. Sergio: “BBT itself, but also science and philosophy in general, all rely on intentional cognition. Thus we have a methodological problem: if you are right, how could any one of them *not* throw their own spanner in the works? If you have addressed this problem somewhere, I’ve either missed it or failed to recognise your attempt. (Perhaps this is something we should discuss elsewhere?”

    Why wouldn’t I use intentional cognition? It’s a genuinely powerful way to leap to conclusions absent information… heuristic. The problem has never been intentional cognition, it’s always been our intentional understanding of intentional cognition. There *is* a methodological problem with the use of intentional idioms in eliminativist discourse, the same as there is with the use of ‘design’ in evolutionary discourse–but it’s the methodological problem that all discourses face: to what degree are the cognitive modes utilized adapted to the problem at hand? It is in this sense that intentionalism quite obviously fails. There’s a reason why everyone endlessly chases the same tails on this website and endless others: because intentional cognition does not lie within the ‘solution space’ of intentional cognition.

    “Woops! I genuinely have no idea of what you mean, but sounds important: are you saying that my approach can’t convince insiders?”

    I was referring to your point of ‘critiquing from within.’ Individuals committed to a position always resort to favourable interpretations to defend their view. Reductios, immanent critiques, strategies that attempt to show inconsistencies within the implicature of some position always rely on what are inevitably deemed ‘unfavourable’ interpretations, typically post hoc.

  54. @Scott #54 & 55
    no worries and apologies for the late reply: I’m slowly drifting into “ruminating” mode again. It’s just that I’m obsessed with my own blind spots, so I was projecting and asking “what is BBT blind about? what does focussing on blindness/neglect inherently hide?” It does seem that we are more or less on the same page (or parallel routes, more likely) but we won’t discuss this here, I guess.
    The “whoops” point is understood, thanks!

  55. Hi Sergio,

    You argue that my worry about the vagueness or arbitrariness of your approach is misplaced, as science and empirical undertakings generally are always in the business of managing uncertainty.

    I agree with the latter but not with the former. The problem is not that there are empirical difficulties with measuring or detecting the intentionality of an object, but that you have failed to give an adequately specific account of what it is we are looking for. Since you’re comparing it with other scientific endeavours, I would contrast your proposal with the kinetic theory of gases, which explains that temperature is just a measure of the average kinetic energy of gas particles. That is absolutely clear, no vagueness at all, even though epistemologically we are very limited in our ability to directly detect the kinetic energy of individual gas particles in a given room. If there is an objective fact of the matter regarding whether a particular object is conscious or not, there must be a set of similarly clear criteria to determine it, even if applying those criteria in practice is infeasible or difficult and error-prone, yielding uncertain results.
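
    To make the contrast concrete, here is a minimal numerical illustration (the sampled speeds are invented; the formula is the standard textbook one, mean kinetic energy per particle = (3/2) k_B T for an ideal monatomic gas): the criterion is perfectly crisp even though sampling every atom is hopeless in practice.

    ```python
    # Temperature from average kinetic energy for an ideal monatomic gas
    # (illustrative numbers only).

    k_B = 1.380649e-23          # Boltzmann constant, J/K
    m_helium = 6.6464731e-27    # mass of a helium atom, kg

    speeds = [1200.0, 1350.0, 1100.0, 1500.0, 1280.0]  # hypothetical sampled speeds, m/s

    mean_kinetic_energy = sum(0.5 * m_helium * v**2 for v in speeds) / len(speeds)
    temperature = 2.0 * mean_kinetic_energy / (3.0 * k_B)

    print(f"T = {temperature:.0f} K")  # the criterion stays exact even when the sampling cannot
    ```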

    You are focused on epistemology when Searle/Dancing with Pixies etc are focused on ontology. We can’t be sure if Julius Caesar was vegan (although it seems quite implausible to me — isn’t veganism a more modern thing?), but there are some pretty clear criteria for determining if he was (at a specific point in his life at least), i.e. whether he avoided certain foods simply because they were derived from animals. The epistemological question cannot be answered, but ontologically the matter is relatively clear.

    It’s an interesting example though, because even those criteria are not absolutely clear. If on one particular day Caesar happened not to eat animal products then does that mean he was a vegan on that day? It depends on how you deal with cases like this.

    It seems unlikely that we could ever pin down a definition of veganism that completely eliminated all such ambiguity and vagueness. This implies to me that no absolute state of affairs could hinge on whether Caesar was a vegan or not. Something like veganism cannot work as a criterion for consciousness or intentionality, and neither can it stand in as an analogy for it.

    > The criteria I propose are designed to generalise, FAPP they are neither vague, subjective nor arbitrary.

    Yes. FAPP. That FAPP is the problem. I am not conscious FAPP. I am conscious.

  56. @Disagreeable Me #57
    Glad to find that I haven’t managed to discourage you, thanks for sticking with me!
    Apologies for the delay: it’s hard to find scraps of time when I can also think clearly during the working week.

    I think we agree on where we disagree, and that’s good news, as we can focus on the tricky bits and avoid the risk of the discussion becoming scattered.

    In a nutshell: you think that the criteria I’m proposing are too vague/subjective.
    There are two ways to unpack this problem: you may claim that some vagueness is acceptable (to allow for the required flexibility), but that I am exceeding the allowable amount; I would call this the weak disagreement (WD). Alternatively, you may claim that vagueness is not allowed at all (strong disagreement – SD). I think that most of your last comment points to SD, but I may be wrong, so I’ll try to answer both possible interpretations.

    Starting with SD:

    > The criteria I propose are designed to generalise, FAPP they are neither vague, subjective nor arbitrary.

    > Yes. FAPP. That FAPP is the problem. I am not conscious FAPP. I am conscious.

    (FAPP = for all practical purposes)

    Your position looks very reasonable, but it isn’t: as I’ll try to explain, you are “conscious FAPP”, not “just conscious”. The apparent reasonableness of your objection is one of the reasons why I’ve tried to stay away from consciousness. The thing is that FAPP is (needs to be) good enough on the lower levels (please bear with me, I’ll explain why), but suddenly it does not look good enough when we reach conscious thought. Specifically, you either think that you are conscious or you don’t; there may be some borderline cases where you are not entirely sure if you are, like on the fringes of sleep, or when recovering from anesthesia, but most of the time, you yourself (and all of us) “just know”. If you are able to have a conscious thought, you would be the first one to know and would be able to say “I am conscious” without any doubt. Thus, if the task is to find a way to interpret your brain states and answer the same question (are you conscious?), without asking you, and with equal confidence, FAPP criteria just won’t do. I guess this is where you stand; if so, I agree.
    However, because you agree that “science and empirical undertakings generally are always in the business of managing uncertainty” you have to admit that a certain amount of uncertainty has to creep in, because the third party observer simply isn’t you, and the question we are asking is still empirical.

    Once more, the route I’m trying to start is:
    (1) find out what generates intentionality (in mechanistic terms)
    (2) find out if you can use more mechanisms to move from intentionality to (subjective) meaning
    (3) find out what more mechanisms (if any) you need to generate various aspects of consciousness (not limited to Phenomenal Experience)
    any one of the stages above may fail, but we are still discussing whether (1) looks possible, so jumping to (3) is premature.
    [also: the expectation is that there are systems that are intentional but don’t generate meanings (the bacterium) and so on: (1) contains (2), (2) contains (3), so (3) is the smallest set, included in (2) and (1)]
    However, there is already an interesting question here: I am effectively proposing that we can, following (1), ask “is this system intentional?” and get answers with a degree of confidence X. We then climb up (answer “yes” to (1) and (2)), reach (3), and ask “is this same system conscious?” In this case we expect to systematically find an answer with confidence Y and have Y > X, which looks absurd on the face of it. I’m building on your skepticism, assuming this captures the reasons why you insist on disagreeing.

    However, the counter-intuitive flow of certainty (starts low and increases along the way) is exactly what you should expect:
    You start by making a hypothesis at (1) (examples are below); you verify it successfully and conclude “system A seems intentional with confidence X”. Confidence is low because you had just one confirmation. Assuming you have no other way to verify the current conclusion, you can only test the next step and ask “does this system instantiate meanings?” (I’m not discussing the hypothetical criteria for this step, but let’s assume we do have them). You find a tentative Yes (confidence W) to this as well, so what happens to your confidence levels? Well, your confidence in your first yes just grew, because now you have a system that seems intentional and also seems to generate meanings – you have to update your likelihoods: since all systems of type (2) are also of type (1), evidence of (2) has to reinforce your already existing confidence about (1). Thus, once we reach full consciousness (3), the fact that our initial criteria for (1) only allowed relatively low confidence is a non-issue. Taking a shortcut, for a supposedly conscious entity, we can (assuming the criteria exist) ask “is this system conscious?” and we would automatically know that it is also intentional. That’s riddled with ifs, but it’s the best plan I can produce.
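
    To make this counter-intuitive flow of certainty concrete, here is a minimal Bayesian sketch (the priors, the “meaning test” and its likelihoods are invented for illustration; nothing here is part of the proposal itself). Because every meaning-generating system (2) is also an intentional system (1), a positive result at stage (2) mechanically raises the posterior for stage (1):

    ```python
    # Toy illustration: evidence for the nested hypothesis (2) "generates meanings"
    # also raises confidence in the wider hypothesis (1) "is intentional".
    # All numbers below are made up for the sake of the example.

    p_neither          = 0.50  # not intentional at all
    p_intentional_only = 0.30  # intentional (1) but not meaning-generating (2)
    p_meaning          = 0.20  # both (1) and (2)

    # Hypothetical likelihood of passing a "meaning" test under each case.
    like = {"neither": 0.05, "intentional_only": 0.20, "meaning": 0.90}

    evidence = (p_neither * like["neither"]
                + p_intentional_only * like["intentional_only"]
                + p_meaning * like["meaning"])

    post_intentional = (p_intentional_only * like["intentional_only"]
                        + p_meaning * like["meaning"]) / evidence

    print(f"P(intentional) before the meaning test: {p_intentional_only + p_meaning:.2f}")
    print(f"P(intentional) after passing it:        {post_intentional:.2f}")  # ~0.91
    ```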

    Now, going back to the matter at hand, we are still stuck at (1) and presumably you still think that my criteria are too vague and subjective. However, if you are still with me, you should now see that the objection about full consciousness does not apply: uncertainty is bound to decrease if, for any given entity, we are able to find and verify criteria for each level. So, in theory, we have a route to follow that should allow us to scientifically probe some entity and say “this thing is conscious with probability W”. This may be possible if I can show why my criteria for (1) are acceptable, so I still need to refute SD and WD.

    I can only brush aside SD, for one simple reason: for any given “system” that you can define, there isn’t always a right answer out there to the “is this system intentional?” question. To begin with, you can arbitrarily choose what counts as “this system”: it could be myself, or me and my PC, my PC alone, or the PC with keyboard, or the whole local area network, etcetera.
    To start your whole route, for a given system, you would want to have reasons to formulate (and verify) some preceding hypotheses: “this thing seems to retain enough integrity to consider it a discrete system” is a good one to start with. This kind of hypothesis doesn’t allow complete certainty, because reality doesn’t work like this. I’ve already said it, but I have to repeat myself: you can’t say with objective certainty where my persona ends and the outside world begins, you can’t pinpoint the one moment when I became a person, and you can’t say when exactly one passes from life to death. That’s because there is no objective answer to any of these problems; but still, we use these concepts all the time and have no trouble (virtually no uncertainty) in distinguishing what is part of our bodies and so on. Why? Intentionality: we are intentional systems; we start all our thinking by dissecting the world into a multitude of subsets (there is no aboutness without discrete objects). I can think about a tree because I’ve accepted the underlying hypothesis that the tree has boundaries; the fact that it doesn’t is irrelevant because, For (almost) All Practical Purposes, it has. At the same time, I can think about trees in general, because I’m inclined to assume that one can define what trees are. However, you can’t define what trees are, not in an objective manner at least, but again, FAPP you can neglect this annoying detail. This is where BBT comes in, but that’s another story. For our purpose it’s useful to observe that, because of neglect, when you ask a question, the minimum level of uncertainty depends on how abstract your question is. 100% abstract questions sometimes allow for zero uncertainty (2+2=4); exclusively empirical ones never do, and there is a gradient in between.

    So, back to our minimal intentional system: the signal pathway I’ve described is “about” glucose because we can agree that this description suits all our practical purposes. Importantly, this does hold some water, because for the bacterium the same limitation (the FAPP qualifier) applies: sure, other things may interfere with the pathway, making it about other things as well; in strict objective terms, there is no unique pathway and it isn’t about anything, there is just some mechanism. However, to understand why it exists, how it works and what consequences it has, conceptualising the mechanisms we see as being about glucose is perfectly justifiable: the FAPP qualifier that we attach to our definition mirrors the FAPP qualifier that made such a structure possible; the mechanism is about glucose because that’s what it reacts to in the ecological space that made it emerge.
    Thus, I can only reject SD:
    – The question “is this system intentional?” implies arbitrary distinctions, so worrying about arbitrariness is premature. The important thing is that answering the question allows us to reduce our uncertainty about the underlying premises (if we find that a system does qualify as an intentional system, we are confirming that it makes sense to consider it a system).
    – At this level, the FAPP qualifier is necessary, because it is the first stage on a route that goes from 100% empirical towards the purely abstract. At this point we can only look at things from the practical purposes stance, because intentionality itself does just that, it generates distinctions that are useful in practice. If you want, when analysing these distinctions, you uncover the reasons why they are not really arbitrary after all (they follow some statistical regularity of the world).
    Therefore: the question about intentionality can’t avoid dealing with some subjectivity (arbitrariness), because this arbitrariness pertains to the core of the question itself.
    SD, when applied to (1), is refuted and dismissed. But so is WD. The reason why WD doesn’t work either is circumstantial: I am defining a generic framework that tries to be applicable to many specific cases. I wanted something that I could use to distinguish between a stone, a machine, a simple organism or a complex one. In doing so, I have to leave the underlying hypotheses unspecified: I do not start by saying why we are analysing this or that system. Thus, if we go out in the field and find some strange goo in a pond, we could ask: is this an intentional system? Or is it made of a cluster of intentional systems that just stick together? Is it something in between? (Yes, in the real world you even get things in between!) Based on the things we know about life on earth, we can (and have to) start by selecting reasonable hypotheses on whether we are dealing with a colony of unicellular organisms, a single multicellular organism or something in between. Only then can we start making our criteria on intentionality less vague: once we have defined what our system is, we can make hypotheses on which (if any) of its structures can measure things about the world outside. Asking me to remove this kind of slack a priori, before we specify what system we are dealing with, is manifestly premature.

    There is a third possible interpretation of your disagreement, (UD) undefined disagreement: you accept that the kind of vagueness I’m allowing in refuting WD is necessary, but you also think I’m allowing for even more vagueness than that, and that it is this additional uncertainty that invalidates my attempt. Fine; if that’s the case, you need to specify why, what your requirements are and how I would know if I can meet them (I think I’ve explained all the vagueness I allow, but I might be missing something!). I don’t think you’ve even started to do so, and therefore I’ve called this disagreement “undefined”.

    An interim tl;dr:
    once we empirically consider the question “is this system intentional?”, we can start reducing the level of uncertainty, and simultaneously move towards more conceptual (and less inherently uncertain) domains (intentionality -> meaning -> consciousness). Thus: because the game is about what sort of simplifications the studied system relies on, we have to accept and embrace a priori arbitrariness, with the aim of narrowing it down progressively.
    Therefore, the FAPP qualifier is needed: it mirrors the causal links that we are trying to identify.
    Also, the FAPP qualifier can become less and less relevant if indeed the route allows to proceed from (1) intentionality to (3) consciousness.
    Thus: some slack/vagueness/arbitrariness is necessary in defining the criteria we’ll use because it’s needed to get started, and to start we need to make some educated guesses (on what to analyse and where to search, at the very least). Also, we expect that we will be able to validate our initial arbitrary hypotheses by producing verifiable predictions; this follows mainstream scientific approaches, so it should raise no eyebrows at all.

    Practical examples:
    A stone. Is it intentional? We have no reason to believe it is. There is no internal mechanism which distinctively measures what’s outside the stone and makes the behaviour of the stone change in ways that make it more likely that this mechanism will survive another day (or reliably have other effects on the environment). Thus: no, the stone doesn’t have intentionality.

    A thermostat, the one you and Jochen have been discussing for weeks. Does it have intentionality? Just a tiny bit: it does have a structure that reliably reacts to what’s outside, and it does have effects on the environment, but it doesn’t really alter the probability of its own persistence. Thus, it’s a borderline case (because we expect to find them!). It has (proto)intentionality with respect to temperature alone, nothing else. Following your discussion, if we change what it does measure, its (proto)intentionality changes in an obvious way: it is about whatever it happens to measure.
    [Side point: I go the way of (proto)intentionality because we are mostly interested in naturally occurring systems, but I won’t object if we want to broaden the scope and say the thermostat is fully intentional over one (external) variable alone. It shifts (and generalises) the focus a bit, opening up more room for criticising the amount of vagueness that I allow, so there is a cost implied, but if you want to take this route, I can easily follow.]

    We’ve seen how to apply the same approach to simple bacteria, and colonies thereof.

    A more interesting one is plants: a single house plant has mechanisms to change how it behaves depending on light, water availability, and to a lesser extent temperature. Thus it is an intentional system with respect to these three variables (and many more, to be honest). Because of what we already pre-theoretically believe, it’s also easy to predict that it doesn’t generate meanings and that it isn’t conscious.
    What is vague/arbitrary about this process? Nothing, because we can start from very well established founding hypotheses, we know how to identify a single plant and why it makes sense to do so (and yes of course, FAPP does apply).

    Trickier cases:
    A locked-in patient. Assuming we have someone who only lacks the ability to initiate voluntary action, finding that they have intentionality about things like temperature (regulating blood flow to the peripheral capillaries), punctures (blood coagulation, inflammation, tissue repair, etc.), and many other things will be straightforward. That’s because we already know a lot about how human beings work. At the same time we can, because of prior knowledge, use the kind of reactions I’ve just mentioned to evaluate how many of the expected structures (the ones that functional humans all rely on) are still working, and which ones are not. We do this all the time for medical reasons, of course.
    The same applies to a brain in a vat, assuming we started with knowledge about how it would work if it had a body.

    The case of an alien brain in a vat is extreme (we don’t even know it’s a brain in a vat!): we don’t have any knowledge of how its intentional mechanisms work, starting with ignorance of what they were expected to measure out there, so I can’t here and now provide a narrowed-down set of criteria we might use to figure out how it works (or ultimately, what it thinks). But you can imagine what would happen if we really did get our hands on one: people would start probing it and try to identify reliable cause-effect chains. Assuming whoever has access to it wants to keep it functional (does not probe in a disruptive way), we may find out basic facts: the maintenance system feeds it with chemical X and drains away chemical Y; X can become Y in known ways and release energy in the reaction, so X can be considered the probable source of energy for the mechanisms inside, which may or may not allow us to formulate more hypotheses and keep going. Does this mean we could figure out what such a brain thinks (or whether it is conscious)? I don’t think so, not if the brain is different enough. However, while this test case shows the limits of the approach, it does not even remotely prove that the approach is inherently wrong. Once again: for questions on intentionality, we have to accept FAPP because intentionality (as understood in my terms) relies on FAPP; once you take this in, you have to accept the epistemological cost. If you have to account for “practical purposes”, it goes without saying that if you know nothing about the practical purposes which shaped a given system, you will find it very difficult, and frequently impossible, to figure out how it works/what it does, absent any other clues.

    Thus: I don’t think the framework I propose breaks down for such tricky cases; it merely draws attention to what kind of information is needed to get us started. It also shows that, lacking such information, the route becomes even harder. However, we can already see that the current efforts to discriminate between locked-in and unconscious/vegetative patients are sound empirically and theoretically: we measure the internal mechanisms and draw tentative conclusions using pre-existing knowledge of how they would work normally, which is precisely what my approach would suggest. As expected, we don’t get strong answers, but varying degrees of confidence, precisely what I’m describing here. It also shows that scientists and clinicians quite rightly ignore this sort of philosophical disquisition: they just do what they can with the knowledge and tools they have. In this area, philosophy is lagging behind, and that’s an unfortunate circumstance.

    Finally, a little detour on the question “was Caesar a vegan?”. As you point out, picking a precise criterion is tricky if you look closely enough, which is one reason why I came up with this example. If you exclude subjective meaning and intentionality, you will have to figure out all the food that Caesar ingested, at least over some period of time, but you will have no option but to pick the period arbitrarily. Over certain periods, everyone is a vegan (I am right now, as I’m not eating!); over a whole life, virtually no one is, as we usually start life by consuming milk! Thus, it seems wiser to let some subjectivity slip in and use a much more loaded criterion: someone is vegan if s/he considers him/herself to be so. This allows people to be vegan even if sometimes, by accident, mischief or weak will, they do eat some animal-derived product. Thus, for Caesar, this criterion makes things simpler: as we are almost sure there was no concept of veganism back then (not in the modern form!), we can confidently conclude he wasn’t one. Little problem: doing this changes the ballpark so dramatically that we aren’t even sure it’s still the same game; before, we were looking for facts about physical reality (did he ingest this sort of food?), now we ask things about his subjective self-image, and yet, in both cases we can’t be terribly sure. Interestingly, I’d say that our confidence levels don’t change much (and I would add, that’s also because when we estimate them we can’t intuitively separate the two sides of the question). Why is this interesting/relevant? Because I hope it helps in understanding why focussing on the ontological is strangely circular. Ultimately, there just isn’t a right answer to Caesar’s veganism question (probably because the question doesn’t make sense to begin with: it was designed to uncover these problems!), and if you look closely, how you pick your criteria has to be either utterly arbitrary or justified by the reason why you are seeking an answer. Also: you can pick criteria that are strictly objective, stand firmly on ontological grounds (did he ingest animal-derived food?), and actually answer something that looks very objective; or you can remain closer to the mark, but smuggle epistemology and subjectivity in: asking “did Caesar consider himself a vegan?” implies intentionality and gives ontological weight to Caesar’s self-knowledge. It transfers the epistemological task to Caesar, and relies on that, giving the impression we are still on straightforward ontological grounds. Thus, I say: forget ontology, it just doesn’t make sense. When you are on intentional grounds and above, everything starts and ends with epistemology. In other words, DwP, the Chinese room and similar arguments seem to work because they rely on the neglect which makes knowledge possible, without even realising they do. They place the bar far too high on the basis that there seem to be objective facts to be discovered; there are, but they are epistemological facts (they relate to self-knowledge or the knowledge/beliefs that a given entity possesses). I’ve said from the beginning that odd symmetries and circularities abound.

    Thus, when you ask things like “is this system conscious?” you implicitly include FAPP in your original criteria: (if I’m right) this is because (naturally occurring) intentionality gets to exist only because making (otherwise arbitrary) distinctions is useful, for (all) practical purposes! Simple, right?

  57. Something of an aside, as I’ve missed a lot of responses here. While I still think Platonism is flawed philosophy, it’s interesting to see some articles in Aeon supporting the idea that Forms are in the world rather than either their own reality or the metaphysical ground of ours. (Aristotle’s position as I understood it).

    1) Mathematical World – James Franklin’s argument for Math Forms:

    http://aeon.co/magazine/science/what-is-left-for-mathematics-to-be-about/

    2) Andreas Wagner argues evolution needs Platonism:

    http://aeon.co/magazine/philosophy/natures-library-of-platonic-forms/

  58. Hi Sergio,

    I don’t feel you are really addressing the point that while there is always uncertainty in science (such as whether a given theory is correct, or whether our measurements of a given system are correct) the theories themselves cannot be vague if consciousness depends on them while being the kind of binary (true or false) absolute property that is objectively either there or not there. We cannot be certain that any such theory is right, and we cannot always perform the measurements needed to be absolutely certain of any prediction, but it has to be possible to define the criteria you are looking for.

    For instance, it should be possible to define an algorithm that would read in a description of my physical state or the state of any other system and print out whether your theory would predict consciousness or not, just as we could define an algorithm which would predict temperature from the individual momenta of a collection of gas particles. Or forget that and just define an algorithm which reads in the state of a physical system and identifies what computation (if any) it is performing. Good luck!
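
    To make the gas analogy concrete, here is a minimal sketch of the kind of algorithm meant for the temperature case (assuming a monatomic ideal gas; the particle mass and momenta below are made up). The criterion itself is perfectly crisp, even though measuring every particle in a real room is infeasible:

    ```python
    # Kinetic theory sketch: temperature as the average kinetic energy of the particles.
    # Assumes a monatomic ideal gas; the momenta are randomly generated placeholders.
    import random

    K_B = 1.380649e-23   # Boltzmann constant, J/K
    m = 6.6e-27          # particle mass (roughly a helium atom), kg

    # Hypothetical 3D momentum vectors (kg*m/s) for a sample of particles.
    momenta = [[random.gauss(0.0, 9e-24) for _ in range(3)] for _ in range(10000)]

    mean_ke = sum(sum(c * c for c in p) / (2 * m) for p in momenta) / len(momenta)
    temperature = (2.0 / 3.0) * mean_ke / K_B   # <KE> = (3/2) k_B T

    print(f"Estimated temperature: {temperature:.0f} K")
    ```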

    You talk about verifying the hypothesis that certain systems have intentionality. I have to admit I have no idea how one would do such a thing. As far as I can see you are only producing hypotheses and not verifying anything.

    I don’t think the problem in identifying what is “the system” as opposed to “the system plus peripherals” is a real one. One can define the system arbitrarily as far as I’m concerned. We can then ask for any such system whether there is intentionality to be found within it somewhere.

    I agree that there is no objective answer to where your persona begins or ends, or when precisely death occurs. But I see these as valid arguments against theories that take these as significant metaphysical moments. For instance, if the soul leaves the body on death, then the fact that the moment of death cannot be precisely defined counts as (pretty lightweight) evidence against the notion of a soul leaving the body on death. Again, for most of us, there is no doubting that we are each conscious. That is (commonly accepted as) an objective fact. And objective facts cannot hinge on states of affairs which are open to interpretation.

    > Ultimately, there just isn’t a right answer to Caesar’s veganism question

    Exactly right. But people are not willing to accept that there isn’t a right answer to the question of whether they are intentional or are conscious. That’s the objection Searle and DwP make, and that’s what you’re failing to answer in my view.

    I’m on board with 99% of what you’re saying. I agree that bacteria and thermostats and so on can be considered intentional or proto-intentional, but when I say this what I mean is that the abstract causal systems we most naturally interpret them to be instantiating are intentional. But I don’t think there is a fact of the matter on which abstract system is being instantiated by a particular physical system — this is a question which is open to interpretation, although some interpretations are more reasonable than others.

    Since intentionality and consciousness depend on the structure of the abstract system and not the physical substrate, there cannot be a fact of the matter about whether a physical object is conscious or intentional.

    Yet there is a fact of the matter about whether *I* am conscious. To me, this means that I am not a physical object. Instead, I identify with the abstract structure of my mind and not with the physical structure of my brain, suggesting that were this structure instantiated on another medium (e.g. in a brain uploading scenario) then that would still be me.

  59. Hi everyone.

    I only discovered this site yesterday, and I’d love to join in the discussions. Just to give you an idea of where I’m coming from, I tend to take the “BitBucket” point of view. I broadly agree with Daniel Dennett’s views on intentionality, and on some other things too (though not all). I would say I take a somewhat more Wittgensteinian view than Dennett does.

    I’d like to give my own response to Jochen’s argument about computations, an argument which seems similar to one made by John Searle, and which I reject. Here’s what Jochen wrote in the Pointing thread. (If I’ve missed a better exposition of your argument, Jochen, please feel free to draw it to my attention.)

    > …whether a system is computational—in the sense that it performs some computation—is not a property of the system itself, but of the way it is interpreted. A system performs a computation if there is a mapping between the states it traverses in its evolution and the logical states of some computation; and such a mapping (at least for systems with a given minimum complexity) always exists.

    Jochen, you’ve only mentioned states, and not behaviours. I say we can’t make any sense of the concept of computation without considering the behaviour of the system. If we think of a computation in the abstract, as defined by the rules of a formal system, it’s the transformational rules which specify the behaviour. If we’re going to say that a physical system instantiates that formal system, it must instantiate those transformation rules, so we have to be able to identify some behavioural pattern in the system which corresponds to those rules. In a real computer, it’s the behaviour of the processor. We interpret a particular machine instruction as Add because of the particular change of state that occurs when the processor executes that instruction. We cannot sensibly interpret it as a Multiply instruction. The behaviour of the processor constrains how we can sensibly interpret the instructions. The processor and program together constrain how we can sensibly interpret the entire computation. They determine that (for example) a computer is performing a chess-playing computation and not a checkers-playing computation. That’s why it makes no sense to call a chess program a “checkers program”. The interpretation that it’s a chess program is not arbitrary, or at the whim of the user or the programmer. It’s required by the behaviour of the system. Behaviour is relevant at each level of description, whether we’re describing individual machine instructions, the system as a whole, or possibly some other level of description.
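
    To illustrate the point with a toy sketch (the observed transitions and the candidate operations below are invented), it is the behaviour of the processor that rules interpretations in or out:

    ```python
    # Observed behaviour of one mystery instruction: (input registers) -> result.
    observed = [((2, 3), 5), ((0, 7), 7), ((4, 4), 8), ((1, 9), 10)]

    candidates = {
        "ADD": lambda a, b: a + b,
        "MUL": lambda a, b: a * b,
        "AND": lambda a, b: a & b,
    }

    for name, op in candidates.items():
        ok = all(op(a, b) == out for (a, b), out in observed)
        print(f"{name}: {'consistent with' if ok else 'ruled out by'} the observed behaviour")
    ```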

    It seems very odd to say that a stone can be interpreted as performing any computation, including presumably the same computation performed by a chess-playing program. If that were so, it should be possible in principle to set up some kind of appropriate interface between the stone and a chessboard, and play chess against the stone! That seems absurd, which casts doubt on the claim. Sergio mentions that the stone has changing internal states. I don’t know what states he has in mind, perhaps vibrations of the atoms, or the movements of subatomic particles. In any case, I don’t believe that any appropriate mapping is possible. First it would be necessary to establish an appropriate mapping between those behaviours and the machine instruction set of the computer. That would in turn determine which states can be interpreted as which instructions. Then it would be necessary to find a sequence of such states corresponding to the whole chess-playing program, such that the sequence of instructions could be executed. I consider it utterly implausible that such a mapping exists, and that’s why no possible interface could allow us to play chess against a stone.

  60. Oh dear, I got the nesting of tags wrong. Is there any way to edit or preview a comment here?

  61. Richard Wein, I’m a bit reluctant to hijack yet another discussion into the direction of what could be considered computation and what couldn’t; I think I’ve really said everything I know to say in the Pointing-thread. Anyway, in regards to this:

    > Then it would be necessary to find a sequence of such states corresponding to the whole chess-playing program, such that the sequence of instructions could be executed. I consider it utterly implausible that such a mapping exists, and that’s why no possible interface could allow us to play chess against a stone.

    There’s a nice paper by John Mark Bishop, where he describes in detail how such a map between the states of an arbitrary physical system and those of an arbitrary computation can always be constructed; so, no matter how implausible you may find it, I’m afraid that it is indeed the case that you can always construct such a mapping (and interface).

  62. Thanks, for the reply, Jochen. I appreciate that you’ve discussed this a lot in the past, and don’t want to start again. So I won’t press you.

    Perhaps my comment will be more appreciated by those who are skeptical of the claim, but having difficulty putting their finger on just what’s wrong with the argument.

  63. @Richard Wein
    Thanks for joining in!
    Don’t worry about the quote nesting, we’ve all made the same mistake one time or the other, normally it’s easy to understand what’s going on and nobody will complain about unfortunate formatting.
    Your view more or less starts from the same intuition that inspired mine, I think. The compact version would be: a real system will include a cascade of mechanisms between input and output; we can model those in terms of (abstract) computations, but we can do so in intelligible ways only by grounding our abstractions on the actual (physical) input and output mechanisms. This would be a good summary of what I’m trying to say. If you are with me, then there is no reason to deny the admittedly extremely difficult-to-accept case of a chess-playing stone.

    Furthermore, the case being made is solid. As pure abstractions, computations can explain nothing in the non-abstract domain: we need to explain how to bridge the physical to pure abstractions.

    Jochen: Bishop’s paper is nice! Disagreeable Me pointed me to it earlier on, by the way. However, I have a question: can you pinpoint the reason why he writes that computations are “neither necessary nor sufficient for mind”? I can understand the “not sufficient” part, but the “not necessary” part eludes me: to me he just states it as a matter of fact, without explaining why; but it’s a long paper, so I might be missing something! I can stretch myself and accept the “not necessary” part, but only if we agree that computations are likely to be very handy in epistemological terms.

    All: Bishop’s FT is here: http://www.doc.gold.ac.uk/~mas02mb/Selected%20Papers/2009%20Cognitive%20Computing.pdf

  64. Nice to see you joining in here, Richard. I remember having some very enjoyable exchanges with you on my blog in the past.

  65. Hi Sergio,

    Thanks for the link to the PDF. I reject its central claim that there is an “observer-relative” mapping of physical states to computational states. So I’ll jump straight to the section where the author responds to this objection: “Computational States Are not Observer-Relative but Are Intrinsic Properties of any Genuine Computational System”.

    I don’t like the expression “intrinsic properties”. I would say, rather, that the behaviour of the system constrains how the physical states of the system can be interpreted as computational states.

    The author responds to this objection by giving an example of a logic gate, which he claims can be interpreted as either an AND or an OR. But this is far too trivial an example to make the point. Yes, in the absence of any context it can be interpreted either way. But it can’t be interpreted as an XOR gate. Remember, the claim I’m rejecting is not just that there may be more than one possible interpretation, but that any system can be interpreted as performing any computation. More importantly, in the context of a significant computational system, the context would force the conventional interpretation of the voltage levels and of the logic gates (the one the designer intended). As I said earlier, when we consider a chess-playing system, we cannot sensibly interpret it as a checkers-playing program, a program for predicting the positions of planets, or most other computations. It just doesn’t behave the right way to perform those functions. Having accepted that it’s a chess-playing system, reverse-engineering the system would force an interpretation on individual gates, resolving the ambiguity between AND and OR. Or imagine replacing all the AND gates in the computer with OR gates, and vice versa. It wouldn’t work any more, because the rest of the system is designed on the basis that the gates have the function the designer assigned to them. The interpretation of a gate as AND or OR is not arbitrary. It’s forced by the design of the system. (But, at the same time, it’s not the designer’s intention per se that matters. The effect of the designer’s choices is now embedded in the system, and it’s the consequent behaviour of the system that constrains the interpretation.)
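
    A toy sketch of the gate point (the gate’s voltage behaviour below is hypothetical): relabelling which voltage level counts as logical 1 turns the very same physical behaviour into AND or into OR, but no relabelling turns it into XOR:

    ```python
    from itertools import product

    HI, LO = "HI", "LO"

    def physical_gate(a, b):
        # Hypothetical observed behaviour: output voltage is HI only when both inputs are HI.
        return HI if (a == HI and b == HI) else LO

    named = {
        "AND": lambda x, y: x & y,
        "OR":  lambda x, y: x | y,
        "XOR": lambda x, y: x ^ y,
    }

    for hi_means in (1, 0):                      # the two possible readings of the voltage levels
        decode = lambda v: hi_means if v == HI else 1 - hi_means
        encode = lambda bit: HI if bit == hi_means else LO
        table = {(x, y): decode(physical_gate(encode(x), encode(y)))
                 for x, y in product((0, 1), repeat=2)}
        matches = [n for n, f in named.items()
                   if all(table[(x, y)] == f(x, y) for (x, y) in table)]
        print(f"HI read as {hi_means}: gate implements {', '.join(matches) or 'none of AND/OR/XOR'}")
    ```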

    The author proceeds with an example of a program which, he says, could be interpreted either as a chess-playing program or as an interactive art exhibit, depending on the form of the display. Since he describes the exhibit as “interactive” I assume he has in mind that a user could input moves. So really this is still a chess-playing system; it’s just that the output has been made more difficult for the user to interpret. Even if there was no output or input, and the program played both black and white, the system would still be playing chess internally. The display is irrelevant.

    I’ve only said that the behaviour of the system constrains how the physical states of the system can be interpreted as computational states. I haven’t said that there’s no room for any variety of interpretation at all. There’s some fuzziness. But not enough to allow a stone to be interpreted as playing chess, or a simple program to be interpreted as a conscious AI.

  66. Hi DM,

    Nice to see you too. I still drop in on your blog occasionally, but unfortunately there haven’t been any updates for a while.

    > Remember, the claim I’m rejecting is not just that there may be more than one possible interpretation, but that any system can be interpreted as performing any computation.

    I think even this poses huge problems for computationalism: If you accept that there is some ambiguity in the interpretation, it seems that the computational facts don’t fix the phenomenal facts, and that thus, computation alone doesn’t account for the phenomenology. For instance, there’s a simple mapping between my phenomenology and that of a colour-invert; thus, if I were a computer program, it seems equally plausible to interpret that program as generating my phenomenology, and that of an invert. But I am experiencing this particular phenomenology, not that of the invert; and hence, my phenomenology is underdetermined by the computation, and we need some further facts to fix it.

    > As I said earlier, when we consider a chess-playing system, we cannot sensibly interpret it as a checkers-playing program, a program for predicting the positions of planets, or most other computations.

    Then I would like to ask you what you mean by a ‘sensible interpretation’. Mathematically, it’s trivial that you can interpret the chess-playing program either way: the states of the physical system implementing the program bear a mapping to those of a chess-playing system, but, as long as there are sufficiently many states, they can equally well be mapped to checkers-playing or planetary evolution computing. All that would be needed is a change in peripherals; for instance, one could imagine adding an auxiliary screen and an auxiliary keyboard, such that the keyboard implements a mapping from checkers-inputs to chess-inputs, and the screen implements a mapping from chess-outputs to checkers-outputs. That these mappings exist is in either case merely a question of the cardinality of the state space of the computer.

    That is, from some given input, the computer is put in, say, state S1; the algorithm then executes some finite sequence of steps, evolving through some sequence of states S2, S3, …, Sn, then produces an output. If now there is a checkers algorithm that, given some input, executes at most as many steps as the chess algorithm, then the mapping is simply as follows: the user provides input that would put a checkers-algorithm-executing machine in state C1; thus, C1 is mapped to S1. The checkers algorithm then executes steps C2, C3, …, Cm, with m less than or equal to n. Then, the required interpretational mapping is simply a one-to-one or many-to-one (in the case n > m) mapping of S2, S3, etc., to C2, C3, and so on. At the end, the chess algorithm will produce a move, corresponding to it being in state Sn, which is mapped to Cm, producing the move that the checkers algorithm would have come up with. If we can assume that the computer is a physical system which effectively traverses infinitely many (or arbitrarily many) states in some time interval, depending on the level of resolution at which we view it, then such a mapping is always possible, period.
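
    As a toy rendering of the construction just described (the traces below are invented placeholders, not real execution traces), one such many-to-one map can be written down mechanically:

    ```python
    # Hypothetical execution traces: n = 6 chess-machine states, m = 4 checkers states.
    chess_trace    = ["S1", "S2", "S3", "S4", "S5", "S6"]
    checkers_trace = ["C1", "C2", "C3", "C4"]

    def build_mapping(source, target):
        """Map each source state to a target state in order, reusing the last
        target state once the shorter trace runs out (many-to-one when n > m)."""
        return {s: target[min(i, len(target) - 1)] for i, s in enumerate(source)}

    for s_state, c_state in build_mapping(chess_trace, checkers_trace).items():
        print(f"{s_state} -> {c_state}")   # S1 -> C1, ..., S6 -> C4
    ```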

    Of course, this is just a simple-minded illustration, and more complex interpretational mappings are possible (in general, in order to reduce one problem to another in the computational complexity-theoretic sense, there is not necessarily a restriction on the number of computational states that are traversed). But I think it illustrates the problem well enough: which computation a physical system implements is never determined by the physical system alone, but always only fixed by the user.

    I mean, it really is the same thing as interpreting an arbitrary string of bits—‘001011101001’ can mean absolutely anything, depending on which code you use. But the set of states a computation traverses—its execution trace—can be just as well written as such a bit string; and once again, there is no unique favoured interpretation, no way to give an answer to the question ‘what does this system compute?’ that isn’t ineluctably subjective. But if the origin of subjectivity is what we are trying to explain, then this appeal to subjectivity makes the whole deal circular.
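
    For instance (an illustrative decoding; the codes are arbitrary):

    ```python
    bits = "001011101001"

    as_integer = int(bits, 2)                                             # read as one binary number
    as_base4   = [int(bits[i:i + 2], 2) for i in range(0, len(bits), 2)]  # read two bits at a time
    as_letters = "".join("ACGT"[d] for d in as_base4)                     # read under a DNA-style code

    print(as_integer)   # 745
    print(as_base4)     # [0, 2, 3, 2, 2, 1]
    print(as_letters)   # AGTGGC
    ```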

  68. @Disagreeable Me #60

    I think we are getting to the point where we can’t ignore our underlying differences…

    > > Ultimately, there just isn’t a right answer to Caesar’s veganism question

    > Exactly right. But people are not willing to accept that there isn’t a right answer to the question of whether they are intentional or are conscious. That’s the objection Searle and DwP make, and that’s what you’re failing to answer in my view.
    > […]
    > Since intentionality and consciousness depend on the structure of the abstract system and not the physical substrate, there cannot be a fact of the matter about whether a physical object is conscious or intentional.

    > Yet there is a fact of the matter about whether *I* am conscious. To me, this means that I am not a physical object. Instead, I identify with the abstract structure of my mind and not with the physical structure of my brain, suggesting that were this structure instantiated on another medium (e.g. in a brain uploading scenario) then that would still be me.

    Here is the thing: for me “the structure of the abstract system” is nothing but our best way to understand what’s really going on. Because what really is going on is forever somewhat out of reach, and because we can only understand the world in terms of arbitrary (however sensible) segmentations and relations between segments, we produce abstract descriptions of the structures we infer exist in the world.
    Among the distinctions we invent to make sense of the world, there are intentionality, consciousness, life, death, the idea of species and so on. Thus, the facts of the matter, if we really want to hold on to the idea of “facts”, are all confined within our own understanding; they are not out there.

    You say “people are not willing to accept that there isn’t a right answer to the question of whether they are intentional or are conscious”. I’m ok with not accepting it, as long as we realise that intentional and conscious are concepts that apply to our understanding of the world, not directly to physical reality.
    If there is no discrete physical distinction between alive and dead, there isn’t one between conscious and unconscious, or intentional and non-intentional; but if all these distinctions can’t be reduced to binary distinctions, then “I’m conscious” is a matter of opinion. What we are trying to explain, therefore, is what generates these opinions, and on this already abstract level there are facts to be found: your cognition, unlike the world, does make sharp distinctions; in fact, you have a mind because it all starts from making these distinctions.

    Thus, if we want to maintain that there are facts about consciousness/intentionality we have to call them epistemological facts.

    The reason why computational functionalism is rejected by people like Searle is that they mix the domains: on one side, consciousness is supposed to be a fact that directly applies to the reality out there (it isn’t; the relationship is very indirect), and thus not explainable via abstract concepts like computations. On the other, computations are abstract, thus arbitrary, and can’t be uniquely matched to reality. This isn’t an argument, it’s the three-card trick! They merely change the rules of the game to make sure your goal is always on the other pitch: their criteria are like asking someone to slam dunk on a football (soccer) pitch, or to deliver checkmate while playing tennis. You may do something that metaphorically resembles what you’re being asked to achieve, but you can never fulfil the original request in full. I’m sure most aren’t doing this on purpose, but the issue remains.

    However, we do need a bridge, so I try to show how mechanisms generate distinctions (our bacterium ‘reacts’ to glucose; the internal structural changes can be understood as being about glucose): from that point on, many other things may happen, and because sharp distinctions are being approximated by the structures themselves, when we try to understand what is happening, abstracting away from the physical structures becomes legitimate. The conclusion is your own: if we can manage to replicate the correct set of abstract features of a given system (instantiating them on a different physical mechanism), and the original system “considered itself to be conscious”, the result will do the same. We still have no idea of what the correct set is, though. We are arguing over the possibility that a correct set may exist.

    To me, abstractions are “useful approximations”; to you, they are “what really counts”. Your approach admits the existence of hard facts, and thus grants some sense of relief. Mine does not: the very idea of a “fact” is just another abstraction, a useful approximation. My vision rescues a shadow of positivism in that second-order distinctions (distinctions about distinctions) can be made. Thus, there can be meaningful correspondences between “distinction-generating mechanisms” (a metabolic pathway that is about a given quality of the environment, but not most others), the mechanisms that rely on the initial distinction, and abstract concepts that symbolically describe the interactions.

    But let’s go back to the empirical.

    > You talk about verifying the hypothesis that certain systems have intentionality. I have to admit I have no idea how one would do such a thing. As far as I can see you are only producing hypotheses and not verifying anything.

    In this discussion I am not verifying anything because this is a blog, not a laboratory…
    Hypothesis: this and that protein react to glucose. This third protein gets modified as a result and promotes transcription of genes X, Y and Z.
    Verification: a lot of lab work! You are not asking me to specify how this is done in the real world, right? (And no, there is no single catch-all algorithm: how to do it depends on whether you are studying a bacterium or a whale; the methods change quite a bit and are not algorithms, they require interacting with the physical world.)
    Result: if your hypothesis can be verified, the structure/system you are dealing with is able to generate signals about glucose.
    Or better: if your hypothesis can be verified, then it becomes reasonable to treat/analyse the structure/system you are dealing with as something that is able to generate and process signals about glucose. You assume intentionality because it is epistemologically fruitful.

    There is nothing mysterious, vague or arbitrary about this.
    Same with reverse-engineering man-made artefacts. You can hypothesise: this mechanism responds to changes in temperature and sends a signal to this other mechanism when a threshold is reached, this other mechanism then opens up a switch, etcetera.
    You can do your verifications with your engineering tools; if successful, you’ll conclude “it’s a thermostat”, which is equivalent to saying “this thing shows some intentionality with respect to temperature”. Again: you can’t seriously expect me to spell out a single algorithm that explains how to reverse engineer all man-made mechanisms, let alone one that can also be applied to all of biology…

    All I’m saying is that intentionality isn’t mysterious: it is a very practical thing. Once it’s there, it makes sense to understand the structures that rely on the intentional qualities of our first “receiver” in terms of signal transmissions and transformation – in practical terms it’s the best way we currently have to describe what’s going on. At the end, you’ll have some output, and to understand that you need to go back to the physical and see what effects the output normally generates on the world. Chinese rooms, shadows, pixies, facts and grumpy philosophers are all irrelevant.

    If you accept the above, then the rest should be trivial:

    > For instance, it should be possible to define an algorithm that would read in a description of my physical state or the state of any other system and print out whether your theory would predict consciousness or not, just as we could define an algorithm which would predict temperature from the individual momenta of a collection of gas particles.

    Yes! But we are not there yet. Once again, all I’m doing is responding to a priori criticism that says “this approach will never work” or, in your own terms, “it will never be possible to define an algorithm that would read in a description of the physical state of a known system and print out if it is conscious”. I am not saying I know this algorithm; I’m saying “this algorithm is likely to exist”. But still: we first need to understand the system(s) we are dealing with. Currently, we don’t: we are many miles from understanding flatworms in computational terms, light-years from understanding human brains… So no, such an algorithm is unknown for now. And also: unless you want a gargantuan one, you would expect to use different algorithms to answer the same question for a human, a cat, a robot and a simulated consciousness. It all depends on the different structures, remember?

  69. Hi Jochen,

    > If you accept that there is some ambiguity in the interpretation, it seems that the computational facts don’t fix the phenomenal facts, and that thus, computation alone doesn’t account for the phenomenology.

    The ambiguity and fuzziness I’ve been talking about is a matter of words. When we describe such-and-such a computation or state, should we call it an A or a B? And I’m saying sometimes you can call it either. I don’t think our inability to label things in a perfectly determinate way is a problem. It’s just the nature of language that it’s rather fuzzy.

    I’m rather skeptical about the coherency of concepts like “phenomenal facts”, qualia, colour inversion, etc. But I feel much less confident talking about those subjects and the “hard problem” than about intentionality, meaning and language, so I probably won’t have much to say about them. I prefer to concentrate on the less-hard problems.

    > …the user provides input that would put a checkers-algorithm-executing machine in state C1; thus, C1 is mapped to S1.

    I don’t understand what kind of mapping you mean. You seem to be giving a counterfactual: it would produce state C1 (but apparently it doesn’t). How is this relevant? Sorry, I just can’t understand this argument at all, so please excuse me if I jump ahead to the final paragraph.

    > I mean, it really is the same thing as interpreting an arbitrary string of bits—‘001011101001’ can mean absolutely anything, depending on which code you use. But the set of states a computation traverses—its execution trace—can be just as well written as such a bit string; and once again, there is no unique favoured interpretation, no way to give an answer to the question ‘what does this system compute?’

    It’s not just the code and data that matter, but also the behaviour of the processor. We’re talking about a particular processor executing a particular chess-playing program. The trace presumably doesn’t specify the processor (the instruction set). But, if the trace includes the program, you might be able to reverse-engineer the instruction set from the trace, build an emulator and run the program. From running the program you could see that it plays chess, or possibly that it predicts planetary positions. But why mention the trace? It’s the program and processor that determine what the system does.

    I’ve agreed that, in principle, if you could identify a processor at the subatomic level and a checkers-playing program which that processor executes, then you could say the system also plays checkers. But I don’t believe that such a processor and program exist, and I still haven’t seen anything that begins to persuade me. I don’t accept that such a system exists just because there are a vast number of microstates at the subatomic level. The number of states is not infinite, and I’d like to see a probability calculation to show that we can expect such a system to exist purely by chance. It seems to me incredibly improbable. It’s not enough for particles to be in a set of states that could be interpreted as a trace. In fact it makes no sense to interpret anything as a trace until you have a program and a processor (a machine) that executes it.

  70. @Jochen #69
    Perhaps now it’s clearer (see #70). The whole objection is a category error: computations are explanations; they are not mechanisms.

    1. The relationship between consciousness and a physical state is far, far removed. It’s the result of a very long list of structures that “approximate distinctions”, and almost certainly do so in very organised ways.
    2. We want to understand these “organised ways”, and the computational metaphor is the best one we have to do so.
    3. We can proceed in a systematic and meaningful way by observing that the incoming signals are about physical things from the very beginning. At the same time, output signals will have specific effects on the world (they build on the original aboutness, after all). Actual physical systems react to some things but not others; their interpretation is defined by the rules of physics. When we want to reverse engineer mechanisms in terms of computations, we fix the interpretation by mirroring the grounding that makes the system itself tick.
    4. Computations, intentionality, meaning, consciousness are abstract distinctions. They are real only as a result of their own series of many “approximate distinctions”. Nothing computes, but mechanisms do interact. Causality itself is just an approximate distinction. Our quest deals with epistemology.

    In your checkers/chess distinction: an instantiated program that actually does run in a real computer is about chess when I can use it to play chess because it shows chess pieces on the screen (since its intentionality is derived, you can also say that it’s about chess because the programmer wrote a chess program). It’s about nothing if it isn’t running at all, if it doesn’t have any structured input/output and so on. Yes, you can interpret the discrete steps it traverses (when it’s running! it doesn’t traverse any state when it’s on-paper or on-disk) in different ways, but the real-world interpretation is fixed by the I/O implementation. For the real world, the “meaning” of instantiated mechanisms (what, if anything, they are “about”) is fixed well before you can ask what algorithm best describes the mechanism.

    Of course “the computational facts don’t fix the phenomenal facts”, there are no computational facts to start with: you need to know how the computational abstractions are instantiated in the physical world before you can even start.
    Or: a sensible interpretation is one that fits the actual I/O structures, includes the causal links with the outside world and ignores all considerations about what is trivial mathematically 😉 .

  71. Hi again. In my earlier comment about Bishop’s paper, I only addressed his response to the rejection of observer-dependency. I didn’t address his main discussion of Discrete State Machines, on which he bases his claim that there is an “observer-dependent” mapping. I think it’s worth addressing that discussion, and showing more specifically where he goes wrong.

    Bishop’s case for an “observer-dependent” mapping is based on a distinction between two separate sets of states. In his initial (input-less) wheel example, these states are referred to as W1-W3 (“physical”) and Q1-Q3 (“computational”). But in this trivial example the distinction is meaningless, because every W state corresponds to a Q state, and vice versa. We could simply use the symbols W1-W3 or Q1-Q3 throughout the discussion. We don’t need both. It’s irrelevant whether we think of them as “physical states” or “computational states”. It’s just two different ways of describing the same states. If we focus on the machine we might be more inclined to think of them as “physical” states. If we focus on the corresponding transition table we might be more inclined to think of them as “computational” states.

    Let’s say we have a wheel machine which can be in three states, which we’ll call W1-W3, and there’s a light that comes on when the machine is in state W3. If we then produce a corresponding table of state transitions, it makes sense to use the same symbols (W1-W3) in the table. If we’re determined to introduce redundant terminology, we could create a table using the symbols Q1-Q3 instead, and arbitrarily pick Q2 as the symbol representing the state with the light on. But then we’re not mapping state W3 to state Q2 in any meaningful sense. We are simply choosing the symbol Q2 to represent the state also represented by the symbol W3, the state in which the light is on. Whether we’re designing a machine based on a table, or producing a table to describe a machine, it makes more sense to use the same symbols in both cases. If we choose to have two separate sets of symbols (i.e. names), that’s just introducing extra terminology, with no substantive significance. This would all be clearer if we used some more meaningful (mnemonic) terms in the first place. We could replace the symbol “W3” with the symbol “W-LIGHT-ON”, and, if we were determined to introduce a second set of symbols, we could make one of them “Q-LIGHT-ON”. Then it would be quite obvious that W-LIGHT-ON corresponds to Q-LIGHT-ON.

    Once we turn to the less trivial case, with inputs, it becomes useful to have two sets of symbols. Since the inputs are specified a priori, we can think of the input list (I = BRAKE-ON1, BRAKE-ON2, BRAKE-OFF3, BRAKE-OFF4, BRAKE-OFF5, BRAKE-OFF6) as a program. And the wheel machine plays the role of the CPU executing that program in accordance with the transition table (Table 2). But in describing this arrangement, Bishop replaces the original 3-state wheel with a 6-state wheel. Why? I think the system must have 6 different states, because it needs to know which brake instruction to read next. In effect, the wheel acts as a program counter, pointing to the next instruction in the program. Bishop then gives a mapping from the wheel states W1-W6 to the “computational states” Q1-Q3. But note that Bishop’s mapping is not a matter of choice. It’s forced by the transition table and the input list. Now again, we could give the items in the transition table different labels (symbols). We could switch round the symbols Q1 and Q2. But we would have to make the corresponding switch in the mapping too. No real change would have been made; just a different choice of symbols (terminology). Q1 would have the meaning that Q2 previously had, and vice versa.

    So, in neither case do we have any genuine choice of mapping between states. At most we have a choice of symbols, i.e. a choice of terminology.

  72. I’m rather skeptical about the coherency of concepts like “phenomenal facts”, qualia, colour inversion, etc.

    But such skepticism does not absolve you from having to account for, at the very least, their appearance. Because while I may not have any genuine subjective experience, I certainly perceive myself as doing so; and this must be part of what a successful theory of consciousness explains. Hence, if there is some ambiguity in what phenomenology a computation leads to—and I can’t see how there couldn’t be—computation can’t be used to provide such a theory.

    I don’t understand what kind of mapping you mean. You seem to be giving a counterfactual: it would produce state C1 (but apparently it doesn’t). How is this relevant? Sorry, I just can’t understand this argument at all, so please excuse me if I jump ahead to the final paragraph.

    I think this is just the thing that’s well explained in Bishop’s paper I linked (and DM linked) above. Basically, you have access to an input device. You use it to input your checkers move. The input device changes the state of your computing unit (to C1). Based on that, the computing unit traverses a series of states. It eventually reaches some output state (Cm). Your output device interprets that as the computer’s checkers move, and provides you with some description of the move intelligible to you.

    But the mapping used by input- and output-devices is arbitrary. There’s nothing about the computing unit that dictates you have to use this mapping, rather than that one. So, you can set up a case such as the following: your input unit is wired to the input unit of a chess computer. After you make your input, it translates it to an input for the chess computer. The chess computer’s input unit then puts the chess computer into state S1. The chess computer then traverses its sequence of states, and arrives at state Sn. It produces an output, which is then transduced by your output device to the output that the checkers-computer would give if it were in state Cm. Thus, you have used the chess computer to compute the next move in a game of checkers. This is possible whenever the set of states the chess computer traverses is larger than (or as large as) the set of states the checkers computer traverses (again, this restriction only holds in this simple-minded implementation).

    For instance, let’s suppose the chess computer, being given input S1, traverses the sequence S1 -> S2 -> S3 -> S4. The checkers computer, given input C1, traverses C1 -> C2 -> C3. Then, we need the following mapping:
    S1 –> C1
    S2 –> C2
    S3 –> C2
    S4 –> C3
    An output device designed to implement this mapping (in the same sense that a conventional screen is designed to implement a particular mapping from voltage patterns to pixels lighting up) will then output the checkers move, using the chess computer as a computing engine.
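    A minimal C sketch of such an output device (purely illustrative, using only the hypothetical states S1-S4 and C1-C3 from the example above): a lookup table reinterprets the chess machine’s execution trace as a checkers trace.

    #include <stdio.h>

    /* Hypothetical lookup table for the "output device": chess states S1..S4
       (indices 1..4) are reinterpreted as checkers states C1..C3. */
    static const int chess_to_checkers[5] = { 0, 1, 2, 2, 3 };

    int main(void)
    {
        /* The sequence of states the chess computer traverses: S1 -> S2 -> S3 -> S4. */
        int chess_trace[4] = { 1, 2, 3, 4 };

        for (int i = 0; i < 4; i++)
            printf("S%d -> C%d\n", chess_trace[i], chess_to_checkers[chess_trace[i]]);

        return 0;
    }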

    This mapping now simply maps logical states of the chess computation to logical states of the checkers computation. A more general mapping simply observes the physical states of the system and uses those to implement the computation. And while you’re right that even that will leave us with a finite number of states, the number is still sufficient to implement all computations we’ve ever implemented in a relatively short amount of time. Thus, as Putnam puts it, every open physical system implements every finite state automaton; and every computation conventional computers can perform can be considered to be such an FSA.

  73. The whole objection is a category error: computations are explanations, they are not mechanisms.

    That’s your stance, but it’s not the stance of your average computationalist, who would indeed claim that the computation is what gives rise to conscious experience. This is the sort of stance that things like the rock argument falsify, hence Putnam’s falling away from the computationalism he founded. Viewing computation as a metaphor which one can profitably use to describe what’s going on is in my view a quite different thing.

  74. Hi Jochen,

    But the mapping used by input- and output-devices is arbitrary.

    This claim seems to be central to your argument, and to our disagreement. I’ll assume the mapping you’re referring to here is equivalent to the mapping Bishop refers to, between “physical states” and “computational states”. I’ve written two comments now showing that Bishop fails to make any case for this claim. He mistakes a meaningless choice of labels for a significant choice of mapping. In fact Bishop’s second case (the wheel with brake) supports my position. Understood correctly, we can see that Bishop’s mapping is forced by the behaviour of the system, and is not arbitrary.

  75. Hi Jochen,

    I’ve realised that I didn’t adequately address your comment #74. You seemed to be making the same argument as Bishop, so I took the shortcut of pointing out that I’d already responded to his argument. On further consideration, your argument seems significantly different from Bishop’s, so I’d like to respond to your argument too.

    But first let me clarify my position on Bishop’s argument. In his section headed “Discrete State Machines”, Bishop argues for an arbitrary or “observer-relative” mapping from physical states to computational states. Then in the next section, headed “Open Physical Systems”, he uses that conclusion to argue that the physical states of any open system can be mapped onto any set of computational states. I reject the first conclusion, on which the second argument depends, and I think I’ve given pretty good reasons for rejecting it. For that reason I haven’t addressed the second argument specifically.

    You hypothesize an “input device” that maps checkers moves to chess moves, so it can take my checkers move and input a corresponding chess move to the chess program. The problem with this is that there can be no effective mapping, since there is no genuine correspondence between chess moves and checkers moves. An arbitrary mapping means that a checkers move will get translated into a chess move that is independent of the current state of the game, and will often not even be legal (e.g. tries to move a piece that doesn’t exist). What would happen when my legal checkers move gets converted into an illegal chess move? Presumably the input device would get a message back from the main computer saying that the move was illegal. To continue the game it would have to ask me to enter a different move. So I would sometimes find myself prevented from playing legal checkers moves. When I do manage to play a move, the output device will produce a response that again has no relation to the state of the game, and is often not even legal. There would be no real checkers game being played, just a sequence of pseudo-random (and often illegal) moves.

    (I’ve assumed that the moves are input and output textually, e.g. on a teletype, with no display of the current state of the game. How could we have such a display? If we had a device that mapped the chess game state to a checkers game state (on a screen), this would be another arbitrary mapping, and the game state would vary pseudo-randomly from turn to turn.)

    You were better off when you were only claiming a mapping between computational states and unspecified physical micro-states. Then you could imagine that there’s a chess-playing program being executed inside a stone, and claim that, if only we could find the right interface between human-readable checkers moves and the relevant micro-states of the stone, then we could play checkers against the stone. But now you’re replacing the stone with a chess-playing program, and replacing unspecified physical micro-states with well-specified chess inputs (moves). This means you’re claiming a mapping between two well-specified sets (checkers moves and chess moves), and it’s easier to see that no possible mapping will allow me to play checkers in this situation.

    I also wonder whether you’re conflating two types of mapping. On the basis of the original “arbitrary mapping” claim, you claim that the micro-states of a stone can be mapped onto any computation we like, including playing checkers. If we then wanted to play checkers against the stone, we would need some interface that maps between the relevant micro-states of the stone and human-readable states. But that second mapping couldn’t be arbitrary. It would have to be a mapping that produces the right sort of output. Similarly, your mapping between checkers moves and chess moves can’t be arbitrary.

    When you wrote, “But the mapping used by input- and output-devices is arbitrary”, I took this as an assertion of Bishop’s arbitrary-mapping claim. Now I’ve realised that this is a new arbitrary-mapping claim, very different from Bishop’s. This comment has been a response to the new claim.

  76. P.S. One more thought, Jochen. When you wrote, “this is just the thing that’s well explained in Bishop’s paper”, I thought you were referring to his main argument. But I’ve just realised that your argument is more similar to one of his subsidiary arguments, the one about using a chess-playing computer as part of an “interactive art” exhibit. I addressed that argument earlier. But let me add something further.

    Of course it’s possible to use devices for unintended purposes. But there are limits, and they’re usually in the direction of doing something less functionally specific. You can use a lamp as a doorstop, but you can’t use a typical doorstop as a lamp. Similarly, you can use a chess-playing computer for driving an art exhibit, which is functionally rather non-specific: almost anything can be called “art”. But you can’t use it for playing checkers. Playing checkers is extremely functionally specific, and a chess program can’t do it (unless of course it’s been programmed to play checkers as well as chess).

  77. OK, so it seems the thread is going to go over that ground again; well, anyway. Let’s try to go step by step, in order to see where we lose each other.

    So in this post, I’m merely going to try and establish the statement: ‘the mapping between computational states and physical states is not uniquely determined by the physical system’. I don’t think this is a particularly controversial claim, and in fact, I’d guess it’s even accepted by a significant fraction of researchers in information science (you definitely find sentences like ‘the user determines the computation’ and the like scattered throughout various textbooks).

    But anyway, let’s get going. We’ll start with a thought experiment that’s as concrete as possible. Imagine you have before you a black box with five diodes arranged in a row. On the push of a button, the system traverses a series of states, indicated by the diodes lighting up in various configurations. In fact, let’s suppose that the series is as follows, where o is an unlit diode, x a lit diode:

    oooxo
    xoxoo
    ooxxo
    ooxxo
    xxxxo
    xxxox
    xxxxo
    oxoox
    ooxxo
    ooxoo

    Now, this is a computation, or rather, it can be interpreted as one. It’s the execution trace of an inputless finite state automaton; it can be shown that all computations that can be carried out on finite machines can be carried out by such an FSA. Choosing one particular automaton of this kind does not entail a loss of generality, as we are just using it as an example to demonstrate the existence of a certain property (the mapping arbitrariness).

    What, however, does this FSA compute? Well, that’s where it gets interesting: there’s no unique answer to the question. How you choose to look at this series of blips matters; only after making such a choice can we meaningfully talk about the computation that is being performed.

    Let’s make a choice. Let’s interpret the blips as binary numbers, a choice that is as good as any other (although, it should be noted, an arbitrary one). Then, the interpretation—the mapping—reads:

    ooooo –> 0
    oooox –> 1
    oooxo –> 2
    oooxx –> 3

    Using this interpretation, we can ‘decode’ the sequence of flashes of our FSA. We get the sequence 2, 20, 6, 6, 30, 29, 30, 9, 6, 4.

    Now, this is of course a perfectly legitimate computation. But how can we be confident that it’s in fact the ‘right’ one? And the answer is, we can’t. For let’s suppose we perform a trivial change: rotate the box by 180°. This obviously does not change anything about the physical system’s evolution; it merely changes the way you look at it. The new sequence reads (for convenience, I’m indicating the numbers alongside):

    oxooo ~ 8
    ooxox ~ 5
    oxxoo ~ 12
    oxxoo ~ 12
    oxxxx ~ 15
    xoxxx ~ 23
    oxxxx ~ 15
    xooxo ~ 18
    oxxoo ~ 12
    ooxoo ~ 4

    Now, this is just as valid a computation, and just as good a mapping, but a different result: the string of numbers is different from the one we got before, but nothing about the physical system has changed, merely the way we view it. Hence, there is no unique way to associate a computation with a physical system; there is always a choice on the part of the user involved.
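    To make the two readings concrete, here is a minimal C sketch (an illustration only, nothing from the original example beyond the diode patterns themselves): it decodes the trace above as binary numbers in both orientations, and the same physical pattern comes out as 2, 20, 6, … one way and 8, 5, 12, … the other.

    #include <stdio.h>

    /* Read a 5-character diode pattern ('x' = lit, 'o' = unlit) as a binary
       number, leftmost diode first. */
    static int decode(const char *pattern)
    {
        int value = 0;
        for (int i = 0; i < 5; i++)
            value = value * 2 + (pattern[i] == 'x');
        return value;
    }

    int main(void)
    {
        const char *trace[10] = {
            "oooxo", "xoxoo", "ooxxo", "ooxxo", "xxxxo",
            "xxxox", "xxxxo", "oxoox", "ooxxo", "ooxoo"
        };

        for (int i = 0; i < 10; i++) {
            char rotated[6];
            /* Rotating the box by 180 degrees reverses the order of the diodes. */
            for (int j = 0; j < 5; j++)
                rotated[j] = trace[i][4 - j];
            rotated[5] = '\0';
            printf("%s = %2d    rotated: %s = %2d\n",
                   trace[i], decode(trace[i]), rotated, decode(rotated));
        }
        return 0;
    }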

    But let’s continue the game. Suppose someone hands you a card, on which it says:

    1 –> A
    2 –> B
    3 –> C
    4 –> D

    26 –> Z
    27 –> Ä
    28 –> Ö
    29 –> Ü
    30 –> ß

    So, this gives us a way to further decode the message. Looking at the first string, we get… BTFFßÜßIFD. Well, that doesn’t seem to make any obvious sense. So let’s try it on the second sequence. That yields… HELLOWORLD. Ah! This makes sense. So have we now arrived at the ‘right’ decoding? Again, the answer is ‘no’. The fact that this makes sense is contingent on us as the users: we speak English; to a non-English speaker, this string of symbols would be just as much gibberish as the first one. In fact, one might imagine a language in which the first string is meaningful!

    Or, of course, neither of them might be right—in fact, the interpretational mapping might just as well be different. So let’s imagine a mapping that’s the same as above, except for the following:

    1 –> D
    4 –> A
    13 –> W
    23 –> M

    Using this mapping, the message works out to be—HELLOMORLA, a greeting to the ancient turtle from The Neverending Story. So again, which one is right?
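    Again purely as an illustration, the composition of mappings can be spelled out in a few lines of C: the same number sequence from the rotated box, pushed through the plain alphabet map and through the variant with 1→D, 4→A, 13→W, 23→M (the umlaut entries are omitted here, since this particular sequence never uses them).

    #include <stdio.h>

    int main(void)
    {
        /* The number sequence obtained from the rotated box. */
        int seq[10] = { 8, 5, 12, 12, 15, 23, 15, 18, 12, 4 };

        /* Two candidate number-to-letter maps: the plain alphabet, and the
           variant with 1->D, 4->A, 13->W, 23->M swapped in. */
        char plain[27], variant[27];
        for (int n = 1; n <= 26; n++)
            plain[n] = variant[n] = 'A' + n - 1;
        variant[1] = 'D'; variant[4] = 'A'; variant[13] = 'W'; variant[23] = 'M';

        for (int i = 0; i < 10; i++) putchar(plain[seq[i]]);   /* prints HELLOWORLD */
        putchar('\n');
        for (int i = 0; i < 10; i++) putchar(variant[seq[i]]); /* prints HELLOMORLA */
        putchar('\n');
        return 0;
    }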

    I hope that this example suffices to make clear that there is indeed no fact of the matter regarding what computation a given physical system implements; there is always at least some arbitrariness in the interpretation. More formally, given a system that traverses a set of states S = {s1, s2, s3, …, sn} of cardinality n, a mapping can be found such that every computation traversing a set of states C = {c1, c2, c3, …, cm} can be implemented—that is, the physical system’s evolution can be interpreted as the execution trace of the computation—provided m is less than or equal to n; in the case of m < n, one can just coarse-grain, that is, lump certain physical states together as referring to the same computational states.

    Are we so far on the same page? If not, where exactly do you disagree?

    Now, as we've seen, interpretational mappings can be composed: if I have one mapping from physical states to computational states, and another mapping from computational states to different computational states, I can compose those mappings and get a new mapping to a different computation. This is what we did when we interpreted the sequence of diode blinkings as writing out 'HELLOWORLD': we mapped the diode state first to numbers, and then, the numbers to letters. (Note that I'm not here distinguishing between computational states and outputs they yield; since each state gives an output, we can use these outputs to label the computational states.)

    But this now poses a problem for computationalism: suppose we have a physical system that implements a computation giving rise to a mind with some phenomenology. This means that there is some mapping between the physical states and the computation, and hence, the phenomenology. But as we've just seen, in general, there is some arbitrariness in this mapping; in particular, I can compose it with some mapping that takes computational states to different computational states. In this case, let's take a mapping that takes states to their colour-inverted versions. But then, this means that we can interpret the computation equally well as giving rise to a different phenomenology; and hence, the computational facts underdetermine the phenomenal ones, and computation is insufficient for consciousness.

  78. But in this trivial example the distinction is meaningless, because every W state corresponds to a Q state, and vice versa. We could simply use the symbols W1-W3 or Q1-Q3 throughout the discussion. We don’t need both. It’s irrelevant whether we think of them as “physical states” or “computational states”.

    No, it’s an important distinction to draw, even in the case of a one-to-one mapping: we could imagine a different physical system with three available states, such as a box with three diodes, one of which may be on at a given time. Let’s call these states V1-V3. One could associate computational states R1-R3 with these physical states. If now we implement some transition table by means of the box’s physical evolution, and that transition table is equivalent to that of the wheel, then it makes sense to identify the computational states R1-R3 and Q1-Q3—there’s no meaningful distinction. However, obviously the physical states of both systems are quite distinct. Hence, the computational states are the same, but the physical states aren’t—but then, the computational states can’t be the physical states.

    Thus, changing the mapping from physical to computational states changes the FSA implemented (by implementing different transition rules), and hence, the computation being performed.

    Bishop then gives a mapping from the wheel states W1-W6 to the “computational states” Q1-Q3. But note that Bishop’s mapping is not a matter of choice. It’s forced by the transition table and the input list.

    And that’s the point: Bishop is here simply concerned with showing that an inputless FSA can implement any execution trace of an FSA with input, and that hence, pointing to the fact that brains have inputs does not point to a meaningful difference. So, for an arbitrarily chosen FSA with input, he gives a procedure for implementing it on an inputless system. Hence, conclusions reached by talking about inputless FSAs are generalizable to FSAs with input.

  79. Jochen: Bishop’s paper is nice! Disagreeable Me pointed me to it earlier on, by the way. However, I have a question: can you pinpoint the reason why he writes that computations are “neither necessary nor sufficient for mind”? I can understand the “not sufficient” but the “necessary” part eludes me: to me he just states it as a matter of fact, without explaining why, but it’s a long paper, I might be missing something!

    Sorry Sergio, I missed this bit earlier. I think the ‘nor necessary’ part comes about as follows: we have some bit of matter, say a brain, that instantiates some phenomenology P. Likewise, we can view it as implementing a computation C. However, the latter part is arbitrary: we can choose to view it as instantiating any given computation whatsoever. But to the owner of the brain, that doesn’t make any difference re their phenomenology—whatever C you choose, their experience will still be P. Hence, the phenomenology is fixed independently of what computation is being performed; but then, we could just regard it as not performing any computation at all. But then, the fact that there is some computation going on is wholly incidental to, and hence, not necessary for, the phenomenology.

  80. Hi Sergio,

    Firstly, I think I want to highlight that you appear to be conflating what I’m going to call ambiguity with what I’m going to call vagueness (I’ve probably done similar). Whether a dying person is alive or dead is vague, because there is no precise point where one crosses the threshold. Even after what we would normally call death most of the cells are still living and metabolising, for instance.

    But whether a computer is performing a certain computation is instead ambiguous. It’s not that there is a continuum with no precise distinction between states of affairs, it is rather that more than one interpretation of the given state of affairs is possible.

    Similarly, people are comfortable with the idea of consciousness being vague, but not ambiguous. It is vague because we can drift continuously in and out of consciousness, and we can imagine that we are more conscious in some sense than monkeys, they than dogs, they than fish, they than insects and so on. But it is not ambiguous in that people are not, by and large, willing to accept that the consciousness of an alert and awake adult human being is open to interpretation.

    If your argument depends on that being the case, that “the only facts about consciousness or intentionality are epistemological facts”, then rightly or wrongly that is where Searle and Bishop and Putnam and Chalmers and so on would say your position falls apart, and nearly everyone would agree with them. If you’re looking for feedback on where your argument fails to persuade, I’d say this is it.

    So, being an epistemological one, your argument really boils down to Dennett’s Intentional Stance, and I can’t see that it adds anything. But you’re in pretty good company with Dennett, I would say! He’s one of my favourite philosophers and I agree with most of what he has to say (and admire a great deal how he says it!). So that you have come to the same conclusions as Dennett is no bad thing in my book.

    But it doesn’t answer the criticisms of Searle and Bishop or so on because they are assuming there *is* a fact of the matter regarding consciousness and intentionality. That is a powerful intuition, and if you want to challenge it you need to bring further arguments to bear.

    As for myself, I’m actually reasonably happy to go along with you on that. As Jochen has criticised me for on another thread, I am all too eager to dismiss questions as having no fact of the matter, and this isn’t really any exception.

    But if there is no fact of the matter on whether I am conscious, then there isn’t really a fact of the matter on whether I as a conscious person actually exist. The materials of which my body is formed continue to exist after I die, after all, so I am not simply an aggregation of molecules. There is more to me than material, even if this something more is no more mystical than arrangement in a specific kind of pattern.

    For this and other reasons, I choose to identify with a particular pattern of information processing, i.e. my mind as opposed to my brain or my body. So, following your line of argument, there is no fact of the matter on whether this mind exists. If there is no fact of the matter on whether I exist, then I don’t see why there should be a fact of the matter on whether anything at all exists, since I only know of anything via my own experience.

    This appears to me to make a nonsense of the whole concept of existence, which is OK, but it suggests that perhaps a more useful definition of existence could be arrived at, and that, at a minimum, this definition should include my own existence. I think this is a sufficient justification for Platonism, since I am a mind, my mind is a pattern and a pattern is a mathematical object.

    It is not that mathematical objects definitely exist in some sort of absolute framework-neutral way. Rather, existence as a concept is meaningless except with reference to some particular framework, and the above considerations lead me to regard Platonism as a useful framework.

  81. “If a given program will be used by humans (this isn’t always the case!) ”

    This IS always the case. A program that interprets computer output always traces, at some point, to a human function or consumer. Two programs that interact amount to a larger program, or state machine.

    “Thus, there must be something different between our ordinary computers and any given stone. The obvious difference is that computers are engineered, they have a very organised structure and behaviour, specifically because this makes programming them feasible”

    If this is meant to be the main argument, it seems to me it falls over at the very first hurdle. This is a cultural distinction and not one remotely likely to lead to scientific conclusions. Whether a computer is engineered, or produced at random by chemistry, will have no impact – at all – upon the causal capability of a computer, which will be determined by its physical construction.

    “the programmer will make sure that the results will be intelligible to us.”

    I think this is the point – the programmer determines the meaning. The problem with the “computers can generate semantics” argument is that it fails to see what the information flow in a typical human-computer scenario is. Information flows from computer programmer to computer user via the value-added telephony of a computer. The computer is always the intermediary, in exactly the same way that gas molecules mediate speech between humans. No sensible person would attribute intelligence or consciousness to the air molecules in the space between two speakers. But with computers everybody seems to think it’s fine!

    The input is mapped (computationally and arbitrarily) by the programmer to be consumed according to rules that are ‘agreed’ – as you say – but ‘agreed’ culturally, not computationally. In terms of implementing consumption – displays, keyboards etc. – the computational mapping is totally arbitrary and not ‘fixed’ as you claim. You seem to be mixing cultural standards – languages – with computing ones. There is nothing fixed about character standards – ASCII, Unicode etc.

    In fact let’s say I had a couple of programs (you’re a programmer, so I’ll use a ‘C’ syntax) :-

    #include <stdio.h>

    /* Prints "I am happy" to standard output. */
    static void i_am_happy(void)
    {
        printf("I am happy\n");
        return;
    }

    /* Prints "I am sad" to standard output. */
    static void i_am_sad(void)
    {
        printf("I am sad\n");
        return;
    }

    The compiled versions of these programs would differ only in the (typically) ASCII sequences linked to the printf statements stored somewhere in the function stack. But they could be compiled to be encoded arbitrarily of course – in fact the encoding of one could be made to look exactly like the encoding of the other. The “sad” program could be made to look exactly like the “happy” program. The only difference would be the configuration of the peripheral that displayed it.
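    A toy C sketch of that last point (hypothetical byte values and decode tables, not real compiler output): the stored byte sequence is identical in both cases, and only the table wired into the “peripheral” decides whether it reads as happy or sad.

    #include <stdio.h>

    /* One and the same stored byte sequence... */
    static const unsigned char stored[3] = { 0, 1, 2 };

    /* ...rendered by two differently configured "peripherals" (decode tables). */
    static const char *table_a[3] = { "I am", " ", "happy" };
    static const char *table_b[3] = { "I am", " ", "sad" };

    int main(void)
    {
        for (int i = 0; i < 3; i++) printf("%s", table_a[stored[i]]);   /* I am happy */
        printf("\n");
        for (int i = 0; i < 3; i++) printf("%s", table_b[stored[i]]);   /* I am sad */
        printf("\n");
        return 0;
    }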

    And that’s the problem. There is no meaning to numbers other than numbers. I don’t see what the issue is with admitting this. It really is so obvious. Numbers are human inventions. They have a specific function, and that is to isolate that component of semantics linked to quantity. They don’t do anything else by definition. What numbers do is exactly what it says on the tin, and that doesn’t include conveying the meanings of things that aren’t numbers, such as emotions. I would never think that the word “colour” includes a few numerical ideas contained within as a sort of byproduct. But with “numbers” common sense goes out of the window.

    The main ‘trouble’ is that computers can’t be held to represent anything. And that ‘trouble’ is precisely the reason they were invented – numbers (usually 1s and 0s) are unlimited in the scope of things that they can represent (with one huge limitation – as long as the item is capable of being represented adequately as a quantity).

    It follows that the brain cannot be a computer, as the main features of consciousness cannot be represented as quantities.

    I’m sure the brain could be “modelled” by a computer but we need to be absolutely clear about two things :- firstly, the brain is 99% unknown. In terms of overall function, it’s to all intents and purposes 100% unknown. So what theory would be used to model it? Secondly, that, like a computer model of the weather, simulation does not amount to reproduction. That seems to me to be the claim of some sillier branches of computationalism but this, by virtue of computing arithmetic’s inherent (and desired) lack of meaning, is a hopeless dead end not worth the ergs wasted on it.

  82. Hi Jochen,

    This comment took me a good few hours to write, and I think it’s the best I’m going to be able to do to explain my position, given the amount of time I’m willing to spend. Please read it very carefully, as it’s going to be my last attempt. Of course, if this explanation breaks the impasse, I may follow up with some further discussion.

    Let’s try to go step by step, in order to see where we lose each other.

    Good idea.

    What, however, does this FSA compute? Well, that’s where it gets interesting: there’s no unique answer to the question. How you choose to look at this series of blips matters; only after making such a choice can we meaningfully talk about the computation that is being performed.

    This is where we part company. When you say “the computation that is being performed”, I assume you mean the computation that is being performed inside the box. That’s certainly the computation that I’m interested in. But then you proceed to make judgements about the computation based purely on the interpretational choices (or potential choices) of observers. You don’t take any account at all of what’s happening in the box.

    It’s important to distinguish between what’s going on in a system and how we might interpret that system. If I take a chess-playing computer (let’s say one continually playing against itself) and use it as a doorstop or an art exhibit, then I can interpret it as a doorstop or an art exhibit, but that has no bearing on what’s going on in the computer. The computer is still playing chess against itself. Similarly, an observer might choose to interpret the diode patterns on your box as numbers or words, choosing from a variety of different encoding systems, as you’ve described. But I’m not interested in those observer choices, because they don’t help me understand what’s happening in the box.

    That said, whenever we describe the world we make choices about which abstractions we’re going to use. We pick the abstractions that are most useful given our interests. I don’t say that I’m sitting on atoms; I say that I’m sitting on a chair. So in what follows you’ll see me making some choices, but they’re ones which help me clarify what’s happening in the box, and I’ll try to explain them.

    In making inferences about a system’s mechanism, I can have two sorts of evidence. (1) Observations of the mechanism itself. (2) Observations of its external behaviour. In your black box scenario I have only the latter sort of evidence, and it’s too little evidence for any useful inference about the mechanism. So I would make no computational interpretations until I’ve looked inside the box. Until then, there’s nothing more to be said. (In other cases, we can make useful inferences based on output alone, as I’ll discuss later.)

    Suppose that when I look in the box I discover a simple mechanical device, like the mechanism of a music box: a rotating wheel with prongs pressing on buttons that activate the diodes. In that case there’s really not much more to be said than to describe the physical mechanism. We don’t need to give a computational interpretation, or give the states meanings. (The designer’s intentions could be taken as conferring a meaning on them. But that’s only of interest to us if we’re interested in the designer’s intentions. If the designer intended them to be interpreted as numbers in accordance with your first scheme, then we could say they represent 2, 20, 6, etc. But that doesn’t tell us anything about what’s going on in the box.)

    Perhaps there’s a far more complex mechanism in the box. Perhaps it’s a Windows PC running an interpreted BASIC program which lights up the diodes in a pattern stipulated by a look-up table. Perhaps the PC is also simultaneously running a program playing chess against itself. In that case, the box is, among other things, performing a chess-playing computation. Do I care about this? Perhaps I only wanted to know the specific algorithm that generates the lights, and knowing that it’s based on a look-up table is sufficient. Maybe I don’t care whether the program is implemented in machine code or interpreted BASIC, and I almost certainly don’t want to know the entire algorithm of the BASIC interpreter. Maybe I don’t care that there’s an operating system and a chess-playing computation running as well. Or maybe I do. How I choose to describe the scenario depends on what I and my listeners are interested in.

    Suppose it turned out that, when interpreted as numbers (by some coding scheme), the patterns of lights could be mapped onto some interesting mathematical sequence. Then someone might use the box as a mnemonic device for representing those numbers. To him, they would represent those numbers. But again that doesn’t change what’s going on in the box. Now further suppose that the box was actually designed for this purpose, and that the patterns are not stored in a table, but that the numbers and patterns are calculated each time they need to be displayed, by some complex mathematical algorithm. If it’s the immediate algorithm that I was interested in, I’d say it’s not using a look-up table after all, it’s using such-and-such a mathematical algorithm. If someone asks what computation is being performed, then in the absence of any other knowledge of their interests, that might be the most appropriate answer to give.

    Let’s consider another scenario, where the outputs tell us enough to make a useful inference, without looking inside the box. I borrow someone’s tablet, find a chess program on it, and play chess against it. I can be pretty damn confident that there’s a chess-playing computation going on, i.e. that there’s a chess-playing mechanism. (It’s logically possible that the outputs I’ve seen were generated at random, but it’s so vastly improbable that I can infer that’s not the case. More plausible is that the screen is just mirroring another computer, to which my inputs are being relayed, and where the actual chess-playing computation is going on. But I’d still be right in thinking that there’s a chess-playing computation going on somewhere.)

    Can I equally well infer that this is checkers-playing computation? No. Not only does it not look like a checkers game, but if I try to move the pieces as if they were checkers pieces, in most cases the moves won’t be accepted. (And no additional interface program could map between chess moves and checkers moves, as I explained at #77.) The mechanism is a chess-playing mechanism, not a checkers-playing mechanism. (Of course there could be a checkers-playing program on the tablet too.)

    Let’s try a scenario where an incorrect interpretation can plausibly be maintained for a while. Let’s say the tablet is programmed to play “suicide chess” (where the aim is to get your own pieces captured, and the rules require you to capture whenever you can). Unaware of this, I try to interpret the program as one that plays regular chess. Everything looks OK to start with. I have the right pieces for my interpretation, and they move as they should. But I soon start to notice that the computer is making very bad moves, letting its pieces be captured for no good reason. Let’s say on one occasion I decide not to capture the piece on offer, and to make a non-capture move instead. To my surprise, I find that the computer won’t let me make that move. The only move it will let me make is a capture (because that’s required by the rules of “suicide chess”). Eventually I realise that this is a suicide chess program. In this case, I managed to maintain my incorrect interpretation for a while. But it was incorrect, as I eventually discovered, because the mechanism was a suicide-chess-playing mechanism, and not a regular-chess-playing mechanism.

    To sum up, my main point has been that we have to ask ourselves what our interpretations are for. And what they’re for here, as far as I’m concerned, is describing what’s going on in the box. I’m only interested in interpretations that help with that.

  83. P.S. I wrote:

    And what they’re for here, as far as I’m concerned, is describing what’s going on in the box.

    Instead of “describing”, I should have written “clarifying”, “explaining”, or “usefully describing”. My point is that we shouldn’t be looking for all the different ways we could possibly describe some situation. We should be looking for descriptions that do useful work in the given context.

  84. You don’t take any account at all of what’s happening in the box.

    The box is just like Turing’s wheel—merely a physical system going through different states. If you like, you can underpin it by the wheel (as you did), or by an electronic circuit such that if some memory bit is set, a diode lights up, or whatever—it is, indeed, completely arbitrary what occurs inside, just as what’s inside your computer is arbitrary if you run a virtual machine on it.

    But I’m not interested in those observer choices, because they don’t help me understand what’s happening in the box.

    But they are what determines which computation is performed by the box (again, the box merely as a physical system traversing certain states). Consider your computer: it’s a physical system traversing certain states—say, configurations of memory. Each such configuration is a state. If you wanted to, you could indicate this configuration by a pattern of LEDs, each either lit or unlit, depending on whether a certain bit is set or not. The computation that the computer performs is an interpretive gloss on these states, facilitated by output peripherals (i.e. the monitor ‘looks’ at the current state, and produces a pattern of lit pixels in response). Since we tend to interpret these outputs immediately (that’s how they were designed, after all), we tend to miss this interpretive mapping, the way fish may miss the presence of water. But it’s there, and in fact, without it, there simply is no fact of the matter regarding which computation is being performed by a given physical system—you need some way to take the physical states, and abstract computational states from them, and that way is not fully determined by the physical system itself.
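    For instance, a minimal C sketch of such a raw state display (just an illustration, with an arbitrarily chosen byte value): one byte of “memory” rendered as a row of lit/unlit diodes.

    #include <stdio.h>

    /* Render one byte of "memory" as a row of diodes: 'x' for a set bit,
       'o' for a cleared one -- the kind of raw state display described above. */
    static void show_byte(unsigned char b)
    {
        for (int bit = 7; bit >= 0; bit--)
            putchar((b >> bit) & 1 ? 'x' : 'o');
        putchar('\n');
    }

    int main(void)
    {
        show_byte(0x4D);   /* prints oxooxxox */
        return 0;
    }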

    In that case there’s really not much more to be said than to describe the physical mechanism. We don’t need to give a computational interpretation, or give the states meanings.

    Exactly right: we never need to do this. We could look at the state of your computer as merely being voltage-patterns. But in certain cases, we do want to interpret those physical states as computational states, and those instances always involve a degree of arbitrariness. I could, for instance, have given you the box with the wheel inside in order to send you a message—say, we have agreed on some code beforehand. Then, to you, the states indicated by the diodes would be meaningful—but only because you possess the code. That’s the essential observer relativity of the computational states associated to a physical system in a nutshell.

  85. Hi Jochen. Thanks for your reply, but I’m afraid there’s still little meeting of minds between us, so I’ll call it a day.

  86. Hi Jochen, Richard,

    I think I’m somewhere between the two of you so perhaps I can bridge the gap a little.

    I can see valid points on both sides.

    From Richard’s point of view, I think he is right that Jochen is focusing too much on computation as a series of discrete steps. I don’t think that is what computation really is, although it can be viewed as such for some purposes. Rather, computation is a process which produces a series of discrete steps according to some pattern of rules (an algorithm) instantiated as some sort of physical causal network in a brain or a computer. The issue is not how a series of states can be interpreted but how we can interpret a system as implementing an algorithm. As such, Richard is right that the interpretation of the LEDs is beside the point; what is relevant instead is the mechanism in the box and how the sequence of outputs is generated. There has to be some sort of causal network in place that leads to the production of the sequence, and in a physical system implementing a computation the important physical dynamics can be distilled and simplified down to an algorithm, and this algorithm is what the physical system is computing.

    However I’m sticking to my guns that this is not a sufficient answer to the Dancing with Pixies argument. Richard is right that only certain interpretations of the causal structure of the system will be useful in predicting its behaviour, but usefulness is somewhat subjective and so insufficient as a ground for the (supposedly) objective facts about consciousness and intentionality. Any computer system can be interpreted to be implementing any algorithm, but some of these interpretations are less useful and more ad hoc than others. It would be possible for instance to interpret any given run of a chess computer as playing checkers, but to do so you would need to lay out the entire list of moves and computations of a checkers-playing algorithm first and then arbitrarily interpret various physical events in the chess computer as corresponding to the steps of the checkers algorithm. As such in practice it can only be done retrospectively after the chess computer has already finished running. It seems unlikely for this reason that one could build a meaningful translation from chess moves to checkers moves in advance. However the fact that there are epistemic limitations which prohibit this does not seem to me to be a sufficient answer to Searle or DwP. These arguments demand some sort of ontological response.

  87. @ John Davey #83
    Thanks for joining in! You may be surprised to learn that we are not that far apart, really, although the one disagreement we do have isn’t small or inconsequential…

    […]The obvious difference is that computers are engineered, they have a very organised structure and behaviour, specifically because this makes programming them feasible.

    If this is meant to be the main argument, it seems to me it falls over at the very first hurdle. This is a cultural distinction and not one remotely likely to lead to scientific conclusions. Whether a computer is engineered, or produced at random by chemistry, will have no impact – at all – upon the causal capability of a computer, which will be determined by its physical construction.

    It isn’t the main argument, but you are aiming at a detail, instead of the actual point I was trying to make. Sure, how a structure came into existence is not relevant; what is relevant is that computers have a very organised structure and behaviour, while stones do not. This allows me to figure out what version of a given program is running on my own computer: in the world I live in, I can answer this question, so I thought it worth using this example to tackle the issue at hand. As you say, “There is no meaning to numbers other than numbers”, and I concede this point, but I still think we don’t need to throw away the idea that brains compute…

    Thus, I don’t have major issues with the rest of your comment, until the very end; things start to look questionable to me here:

    I’m sure the brain could be “modelled” by a computer but we need to be absolutely clear about two things :- firstly, the brain is 99% unknown. In terms of overall function, it’s to all intents and purposes 100% unknown. So what theory would be used to model it?

    I would pick marginally less pessimistic percentages, but it’s your question that makes me think you may be missing my aim completely. We don’t have a theory to model brains, that’s another of the accepted facts: to build a theory we need some attacking points, some strategies to cobble it together (today we have lots of data, plenty of local/narrow theories and almost nothing to try to bring all theories and data into a single coherent non-local theory).
    One of the reasons why I wrote this post is that I’ve suddenly felt that the challenges to computational functionalism should be taken seriously (previously, I was unable to understand why they were worth even mentioning), and that in doing so I might learn something useful (or at the very least, shine a different light on some known facts). The impression I had was that it is worth putting some effort into this because it draws together computationalism and embodiment, two (neuro-)strands that look suspiciously antagonistic from my outsider perspective.
    Never mind: the declared aim here (not considering the aforementioned side-effects) is to see if the objections raised are enough to give up on computationalism altogether, or if it is still possible that someday, somehow we will have a computational theory to build a model…

    Which brings us to the real disagreement:

    Secondly, that, like a computer model of the weather, simulation does not amount to reproduction. That seems to me to be the claim of some sillier branches of computationalism but this, by virtue of computing arithmetic’s inherent (and desired) lack of meaning, is a hopeless dead end not worth the ergs wasted on it.

    Now, if I’m right, the prerequisite of meaning, ‘intentionality’, is already present in the signals that count as the brain’s inputs; thus we would be dealing with structures and mechanisms that already have the needed bootstrapping ingredients. In the original post I say:

    Once one accepts this point of view, meanings always pre-exist as an interpretation map held by an observer. Therefore, “just” computations, can only trade pre-existing (and externally defined!) meanings and it would seem that generating new meanings from scratch entails an infinite regression.
    […]
    Consequently, computationalism can either require some very odd form of panpsychism or be utterly useless: it can’t help to discriminate between what can generate a mind and what can’t.
    […]
    Dissolving this apparently unsolvable conundrum is equivalent to showing why a mechanism can generate a mind, I don’t know if there is a bigger prize in this game.

    That’s to say: I hear you. The bulk of your remarks is accepted, but I am trying to specifically suggest that the obvious conclusion (yours, as well as many others’) does not necessarily follow. Why? Because if intentionality is present in some mechanisms from the start, then maybe, just maybe, more mechanisms can build on it and get to generate meanings. Also: if intentionality is present in some mechanisms which can be convincingly described as transmitting and processing “intentional signals”, then we have a very intuitive (not just wishful) reason to believe that re-creating equivalent signal transformations might in fact amount to reproduction. The aim here is to keep the possibility alive, not to say “it’s certainly the case”, but to refute the “it’s certainly not the case” position that you seem to espouse.
    As a note: an action potential travelling along an axon, as well as synaptic transmission, looks very convincingly like a mechanism that transmits signals. If you stomp on my toe, the consequent signals travelling towards my brain would quite intuitively consist of information about the damage suffered. In the same way, the manner in which post-synaptic potentials sum and subtract, so as to eventually produce (or inhibit, modulate, etc.) the next action potential, also looks very well positioned to be understood in terms of signal integration/transformation. This is why so much of neuroscience takes the computational stance for granted, and I still don’t see a single reason to think that these assumptions are certainly wrong.
    So no, I’m not prepared to accept that computationalism is a dead end not worth the ergs wasted in it: there is certainly a lot of erg-wasting, but we can’t be sure it’s a dead end. Least of all an obvious dead-end.

    Having said all this: I know you’ll disagree, but please hold on. I’m slowly cooking a longer and more general response. This post was meant to ignite discussions, hoping they will be useful, and it’s time for me to try and see if it worked.

  88. There has to be some sort of causal network in place that leads to the production of the sequence, and in a physical system implementing a computation the important physical dynamics can be distilled and simplified down to an algorithm, and this algorithm is what the physical system is computing.

    To me, the traditional view would be that the causal dynamics merely implement the transition table, i.e. they ensure that some state S transitions to some state S’ if the transition table says that some functional state F transitions to F’. Then, the succession of functional states is the algorithm implemented by the automaton; choosing it to start in some given functional state Fs effectively fixes the ‘input’, and then, the causal dynamics, implementing the state transition table, execute the algorithm by means of traversing a set of functional states. In each case, the algorithm gives rise to an execution trace (if no other input occurs, then, for a deterministic algorithm, this execution trace depends on nothing but the initial state), which is a set of functional states, which is all you need to know in order to obtain the algorithm’s output—i.e. if you wanted to implement some function f(x), then you code x in the initial state, switch on the automaton, and observe its execution trace, which will then give you f(x). So any value of a computable function can be computed in the way that I understand computation.
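    A minimal C sketch of that picture (the particular transition table, s → s−3, is just an invented example, not anything from the discussion): the input x is coded as the initial state, the table drives the transitions, and reading off the execution trace gives the function’s value, here x mod 3.

    #include <stdio.h>

    #define NSTATES 10

    int main(void)
    {
        /* Hypothetical transition table: each state s >= 3 transitions to s-3,
           states 0..2 are fixed points. Starting the automaton in state x and
           letting the table drive it, the halting state is x mod 3 -- the
           execution trace alone tells you the value of the function. */
        int next[NSTATES];
        for (int s = 0; s < NSTATES; s++)
            next[s] = (s >= 3) ? s - 3 : s;

        int x = 8;                      /* input, coded as the initial state */
        printf("trace: %d", x);
        while (next[x] != x) {          /* run until a fixed point is reached */
            x = next[x];
            printf(" -> %d", x);
        }
        printf("   (so f(8) = 8 mod 3 = %d)\n", x);
        return 0;
    }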

    Claiming that the causal dynamics that realize the transition between physical states are important in some way is then to claim that while you can in all cases obtain the value of f(x), only some of those cases count as computations, while others produce that value some other way. To me, this simply means that you’ve chosen too narrow a notion of computation, since I’d want to call everything that tells me the value of a computable function a computation; but you might have other requirements, in which case, I’m open to discussing them.

  89. I’ve thought of a clearer way of explaining where Bishop, Searle and others are going wrong.

    Bishop’s argument is based on interpreting real-world states in a way that is independent of the mechanism which produces those states. His interpretations are at the whim of an observer who is ignoring the mechanism. That means that his interpretations tell us nothing about the mechanism. But it’s the mechanism that we’re interested in. Is this a chess-playing mechanism? Is this a conscious mechanism? You can’t answer those questions by giving interpretations that ignore the mechanism.

    When someone tells me his computer can play chess, he’s telling me something about its mechanism. He’s not just telling me that it’s played chess in the past and probably will in the future. He’s telling me that it has what it takes to play chess. It has a chess-playing mechanism. To use Searle’s expression, it has the “causal powers” needed for playing chess, and it has those causal powers by virtue of having an appropriate physical form. In the case of a computer, that means having an appropriate processor and physical memory configuration. We must consider the processor and memory configuration together: the wrong processor for the given memory configuration will be useless. (Install a program onto a computer with a very different instruction set, and it won’t work.)

    So a chess-playing computer is one with the causal powers to play chess, and what gives it the right causal powers is having the right match of processor and physical memory configuration, so that the system behaves appropriately when the processor executes the memory. That’s a physical matter, determined by physical states and the laws of physics. However, we can characterise that physical match at a computational or “software” level of abstraction. We can specify a processor in the abstract by an instruction set, and specify a chess program in the abstract as a sequence of instructions from that instruction set. And then we can say that any physical system (processor and memory) that instantiates that instruction set and program will constitute a chess-playing computer. Of course, there are many different pairs of instruction set and program that will do the job. I can say that a chess-playing computer is any one which instantiates one of those pairs. Not all pairs of instruction set and instruction sequence characterise a chess-playing system. If we think of all the possible sequences of instructions (of a given machine instruction set) only a vanishingly small proportion constitute chess-playing programs. Try generating a long random sequence of instructions, loading it into memory, and executing it. You can be pretty damn sure it won’t play chess. And it won’t start playing chess just by virtue of an observer deciding to interpret it as a chess-playing program!

    A computationalist like me says, basically, that what I’ve said above for chess-playing is also true for being conscious. A conscious system is one that has the right causal powers for being conscious. In the case of an instruction-based computer, it will have the right causal powers if and only if it instantiates an appropriate pair of instruction set and program. (For simplicity here I’m just talking about instruction-based computers, and ignoring other sorts of systems, such as physical neural networks.) Searle talks as if computationalists are ignoring causal powers. And perhaps that’s why he and Bishop feel they can ignore causal powers and mechanism when critiquing the computationalist position. In that case they’re misinterpreting computationalism. But I think their mistake goes beyond misinterpreting computationalism, because they’re also saying very strange things about uncontroversial computations, such as chess-playing.

  90. Hi DM. I wrote my #92 before seeing your #89. Thanks for stepping in. Since our views are relatively close, I think a discussion between us may be productive.

    > However I’m sticking to my guns that this is not a sufficient answer to the Dancing with Pixies argument. Richard is right that only certain interpretations of the causal structure of the system will be useful in predicting its behaviour, but usefulness is somewhat subjective and so insufficient as a ground for the (supposedly) objective facts about consciousness and intentionality.

    You use the word “interpretation”. We could also use words like “description”, “abstraction”, “model”, etc. Everything we say about the world involves interpretational choices. I can interpret a certain patch of reality as a chair, as a few pieces of wood, as a collection of atoms, etc. I can interpret a chess-playing computer as a doorstop or as a piece of found art. Many of our distinctions are fuzzy, such as the distinction between a hill and a mountain, and sometimes there’s no fact of the matter as to which word I should use. A certain amount of subjectivity in our descriptions isn’t a problem.

    Perhaps you will say this sort of subjectivity is OK in our descriptions, but it’s more of a problem when it comes to our brains making interpretations on which conscious experiences depend. I think that worry is based on a misunderstanding of consciousness. But that’s the hard problem, and I suggest sorting out the less hard problems first. It seems to me the DwP argument is a different one, and I can refute the DwP argument without addressing the hard problem.

    We also need to be careful to distinguish between epistemic uncertainty (due to lack of knowledge) and indeterminacy that remains when we have all relevant knowledge. To make sure the former doesn’t come into play, let’s assume we know everything there is to know about the mechanism.

    > It would be possible for instance to interpret any given run of a chess computer as playing checkers, but to do so you would need to lay out the entire list of moves and computations of a checker playing algorithm first and then arbitrarily interpret various physical events in the chess computer as corresponding to the steps of the checkers algorithm.

    Isn’t this just the kind of irrelevant arbitrary interpretation that you rejected earlier in your comment, with regard to Jochen’s LED box? If we know everything about the mechanism, we know all the instructions in the program and we can (in principle) work out that the program is not capable of playing checkers. In practice, the mere fact that the program is labelled “chess” and comes up with a chess board when we run it is strong evidence of a chess-playing mechanism. The evidence gets even stronger when we’ve been playing with it for a while. This evidence doesn’t give us absolute proof, but we never have absolute proof about anything in the world. However, it gives us sufficient reason to say that it’s a chess mechanism and not a checkers mechanism.

    > It seems unlikely for this reason that one could build a meaningful translation from chess moves to checkers moves in advance.

    I would say it’s impossible. As I argued (or maybe just asserted) at #77, there is no genuine translation to be made, and any scheme would really just be a pseudo-random generator of checkers moves. It might get lucky for a time, and produce a handful of legal and credible moves, but the probability of playing an entire game of checkers this way would be tantamount to zero.

    There’s a probabilistic element here. Knowing the mechanism, we know that the probability of it playing a decent chess game is high, while the probability of it playing a decent checkers game is vanishingly small. That’s not surprising, given that it was designed to play chess, not checkers.

  91. Hi again, DM. Something I omitted to address in your comment:

    > Any computer system can be interpreted to be implementing any algorithm, but some of these interpretations are less useful and more ad hoc than others.

    At the machine code level there is no room for interpretation. The algorithm is fixed by the physical state of the processor and memory, which determine the instruction set and program. There is, of course, a choice of terminology. A Jump instruction could be called Leap instead. But all complete descriptions at this level are algorithmically equivalent.

    Questions of interpretation only arise when we choose to describe the system at a different level of abstraction. We may want to describe it using abstractions that are more useful to human ears, using fuzzy ordinary language terms, like “chess-playing”. Even using terms like these, our interpretations are quite constrained, as I’ve been arguing. Fuzziness of this sort is similar to the fuzziness you mentioned earlier, regarding an organism being living or dead. Sometimes we can say either, but usually one of the two would be misleading.

    As far as I can see, the computationalist position does not entail that the demarcation between conscious and not-conscious is a fuzzy one. Personally, I suspect it is, and that systems can sometimes be described as conscious-ish, like organisms can sometimes be described as dead-ish.

  92. Hi Richard,

    > but it’s more of a problem when it comes to our brains making interpretations on which conscious experiences depend.

    I don’t think that’s quite what I’m saying, unless I misunderstand you. It’s not brains making interpretations that’s the issue, it’s brains being interpreted. So when you describe a brain as computing some algorithm, that’s you making an interpretation of what that brain is doing. Since there is no fact of the matter on whether it is actually computing this algorithm, we can’t really say it is conscious in virtue of the fact that it is computing this algorithm.

    > But that’s the hard problem

    Quite! Because that’s just what Searle is addressing. He has no issue with the idea that one could make a computer implement the functions of a human brain, i.e. solving the easy problems. His criticism is aimed squarely at the hard problem.

    > To make sure the former doesn’t come into play, let’s assume we know everything there is to know about the mechanism.

    I agree.

    > If we know everything about the mechanism, we know all the instructions in the program

    Not really. The mechanism is just a collection of atoms in a certain state. There is no program there without interpreting some of that state as corresponding to a program. And there are many ways of doing that. Some of them will be relatively simple interpretations and correspond to the natural tendency to view the system as implementing a chess algorithm or the null algorithm of simply being a lump of matter. But you could also have an insanely complex ad hoc interpretation to map what happens in the system to the instructions of a checkers program. To actually come up with such a mapping is infeasible in practice — you would need superhuman intelligence and the ability to predict in advance all the physical events that are going to take place in the computer, but unlike you I think it should be possible in principle.

    > In practice, the mere fact that the program is labelled “chess” and comes up with a chess board when we run it is strong evidence of a chess-playing mechanism

    Those are, as Jochen puts it, merely interpretational glosses. What the computer is actually doing is shuffling electrons around and allowing atoms to vibrate and so on. We can’t even say it’s sending outputs in binary code without engaging in an act of interpretation. Using a label written on the computer or the natural human interpretation of the output of a monitor attached to the computer is cheating, I would say. If you did manage to realise the insanely complicated infeasible checkers interpretation and built a device for reading the checkers interpretation state then you could present a checkers board on a monitor too.

    > and any scheme would really just be a pseudo-random generator of checkers moves.

    That’s not so. Your model of what it is to make a checkers interpretation is too straightforward. We don’t need to take chess states and map them to checkers moves, so that the system appears to make arbitrary dumb or illegal checkers moves. The point is that we can interpret it to be implementing any algorithm at all. We don’t even need to use the CPU or the RAM if we don’t want to. We can simply use the vibrations of the atoms in the computer’s metal case. If you know how the atoms will vibrate in advance (impossible in practice), you can produce a mapping of different types of vibrations to different types of computational operations. You can probably even do it in such a way that causation is preserved, so that the dependency of this computational operation on that computational operation is mirrored in the fact that this atomic wibble caused that atomic wobble.

    So as long as you don’t mind your computational interpretations being arcane and baroque and completely infeasible and useless in practice, then you can interpret any mechanism to be computing any algorithm.
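
    To make the kind of interpretation I have in mind concrete, here is a minimal sketch (everything in it is invented for illustration: the “vibration states” are just labels, nothing is measured): record the physical trace, pick any target computation you like, and pair them up after the fact.

    ```python
    def target_computation(n_steps):
        """Any algorithm you like; here, the successive steps of computing a factorial."""
        acc, steps = 1, []
        for k in range(1, n_steps + 1):
            acc *= k
            steps.append(("multiply_by", k, "acc_now", acc))
        return steps

    def record_physical_trace(n_steps):
        """Stand-in for observed physical states (e.g. case vibrations); any
        sequence of distinct states will do for a post-hoc mapping."""
        return [f"vibration_state_{i}" for i in range(n_steps)]

    def build_post_hoc_map(physical_trace, computation_steps):
        """Pair the i-th observed physical state with the i-th computational step.
        Trivially possible once the whole trace is known in advance."""
        return dict(zip(physical_trace, computation_steps))

    if __name__ == "__main__":
        trace = record_physical_trace(5)
        mapping = build_post_hoc_map(trace, target_computation(5))
        for state, step in mapping.items():
            print(state, "->", step)
    ```

    The map exists only because the whole trace is assumed to be known in advance, which is exactly what makes it useless in practice; the claim is just that it exists, not that it is a good or natural interpretation.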

    > At the machine code level there is no room for interpretation.

    Once you have machine code you’ve already made the interpretation.

    > Questions of interpretation only arise when we choose to describe the system at a different level of abstraction.

    Agreed. Machine code is already a different level of abstraction than the physical.

    > As far as I can see, the computationalist position does not entail that the demarcation between conscious and not-conscious is a fuzzy one.

    I agree. I tried to explain this upthread in my distinction between vagueness and ambiguity. I’m perfectly fine with vague and fuzzy distinctions being drawn along a consciousness continuum. But that’s not the same as the ambiguity in the interpretation of what algorithm a system is computing. Whether I am conscious cannot depend on how I am interpreted by a third party to be processing information.

    So, in summary, your argument seems to be attempting to show that there is very little room for interpreting what an algorithm is doing. I agree. But that’s not Searle’s point. Searle’s point is that there is unlimited scope for interpreting what algorithm a physical system is computing.

    > I don’t think that’s quite what I’m saying, unless I misunderstand you. It’s not brains making interpretations that’s the issue, it’s brains being interpreted.

    OK. Then we can drop the matter of brains making interpretations. It was helpful for me to be sure that’s not your concern.

    > At the machine code level there is no room for interpretation.

    > Once you have machine code you’ve already made the interpretation.

    To be quite clear what we mean by “machine code”, let’s specify that we’re talking about a listing of opcodes, e.g.

    (1) Move, Add, Move, Jump, …

    (I’ve left out the operands of the instructions for simplicity.) Then I’m saying there’s only one such interpretation possible (for a given combination of processor and program), apart from irrelevant choices of notation.

    Let’s say we take a particular program, look at the first 4 bytes of the executable code (assuming one byte per instruction), and they are:

    (2) 28, 6A, 28, F4 (hexadecimal).

    We look up those bytes in the standard opcode table for that processor, and we get (1).

    My point all along has been that the behaviour of the processor forces us to interpret F4 as Jump. When the processor comes across F4 while executing the program, it jumps to another memory location and continues executing from that point.

    Suppose you want to say that “F4” is only an interpretation. OK, we look directly at the memory location in RAM, and we see a set of 8 flip-flops in the arrangement high-high-high-high-low-high-low-low, where high and low are my best attempt to describe the two possible physical states of the flip-flop. I can’t be more specific because I don’t know the relevant electronics. The specific details don’t really matter. In fact “F4” is an unnecessary intermediary. Forget about “F4”. The point is that there’s a certain physical state of a memory location that causes the processor to jump to another location (it causes the processor to load a new address into its program counter). Whenever the processor encounters a memory location in that state (high-high-high-high-low-high-low-low), it behaves that way.

    You can’t say, “You’re just interpreting that register as a ‘program counter’”. That’s the register that controls which location the processor will fetch the next instruction from. Other registers don’t have that effect. And there’s a fact of the matter as to which instruction the processor is executing at each step. I assume you don’t want to say that that’s open to interpretation.

    Of course, not every memory location with the arrangement high-high-high-high-low-high-low-low is physically identical. But they’re near enough the same that they all cause the processor to jump. That’s why a computer is a discrete state machine. Minor physical differences between one high flip-flop and another have no effect. That’s why we can simply describe the states of the flip-flop as high and low. We don’t worry that they’re only high-ish or low-ish. High-ish states will cause exactly the same behaviour as the precisely specified canonical high state. Minor variations are ironed out by the mechanism.
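
    A toy sketch of what I mean, using the made-up opcode assignments from above (28 = Move, 6A = Add, F4 = Jump; an imaginary instruction set, not any real processor, and Move is treated as a no-op just to keep the sketch short): whatever we call it, the byte F4 is the one that causes a new address to be loaded into the program counter.

    ```python
    # A toy processor with an imaginary instruction set (not a real one):
    #   0x28 -> MOVE (a no-op in this sketch), 0x6A -> ADD <n>, 0xF4 -> JUMP <addr>
    # The point: encountering the byte 0xF4 *causes* the program counter to be
    # reloaded, and no re-description changes that behaviour.

    def run(memory, steps=10):
        pc, acc = 0, 0
        for _ in range(steps):
            opcode = memory[pc]
            if opcode == 0x28:      # MOVE (no-op here)
                pc += 1
            elif opcode == 0x6A:    # ADD: add the next byte to the accumulator
                acc += memory[pc + 1]
                pc += 2
            elif opcode == 0xF4:    # JUMP: load a new address into the program counter
                pc = memory[pc + 1]
            else:
                break               # halt on anything unrecognised
        return acc

    if __name__ == "__main__":
        # address 0: MOVE, 1: ADD 5, 3: JUMP back to address 1 (an add-forever loop)
        program = [0x28, 0x6A, 0x05, 0xF4, 0x01]
        print(run(program, steps=9))
    ```

    Call F4 “Leap” instead if you like; the behaviour it causes (reloading the program counter) is fixed by the mechanism.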

    Do you accept that (in this physical system) a memory state of high-high-high-high-low-high-low-low cannot be interpreted as any other instruction than a jump? If not, please explain why.

  94. Hi again, DM.

    > That’s not so. Your model of what it is to make a checkers interpretation is too straightforward. We don’t need to take chess states and map them to checkers moves, so that the system appears to make arbitrary dumb or illegal checkers moves. The point is that we can interpret it to be implementing any algorithm at all. We don’t even need to use the CPU or the RAM if we don’t want to. We can simply use the vibrations of the atoms in the computer’s metal case. If you know how the atoms will vibrate in advance (impossible in practice), you can produce a mapping of different types of vibrations to different types of computational operations. You can probably even do it in such a way that causation is preserved, so that the dependency of this computational operation on that computational operation is mirrored in the fact that this atomic wibble caused that atomic wobble.

    At that point I was responding specifically to this remark by you, which talked about mapping from chess moves (as Jochen did earlier), not from vibrations of atoms.

    > It seems unlikely for this reason that one could build a meaningful translation from chess moves to checkers moves in advance.

    What makes computers interesting (and unlike stones) is that they are discrete state machines capable of executing algorithms of vast complexity. You may claim that anything that vibrates (perhaps including stones) has that capability too, but I don’t accept it, and you’re going to have to argue for it. Therefore it’s vital to me to keep separate the special behaviour of computers from broader phenomena like vibrations. If you’re making claims about vibrations (or anything that’s not specific to computers) then we should discuss stones, not computers, so there’s no danger of conflating different issues.

    Obviously my last comment (#96) was assuming that we were talking about the specific capabilities of a computer. But now I suspect you may respond with a point about vibrating atoms, which will make my comment irrelevant.

    I could take the remainder of your remark above out of the context of a computer, and just address the point about vibrating atoms (as if they were the atoms of a stone). I’m afraid I don’t have time for that just now, but I hope to return to it later.

    In the meantime, could you please clarify something. Do you believe that stones are constantly playing chess against themselves (as well as executing every other possible algorithm)?

  95. Hi DM,

    Just to clarify. I think we’ve been talking at cross-purposes. I’ve been talking about the algorithm executed by the computer at the level we normally consider, the level that determines what we see on the screen, and in other behaviours that we can observe, the level that the designers have in mind when they design a computer. I’ll call this the “ordinary” level of the computer. You seem to be talking about the alleged algorithms being executed in other levels and places, such as in the vibrations of the computer’s case.

    In responding to my comment #96, please bear that in mind. Do you agree with that comment as far as the ordinary level of the computer is concerned?

  96. Hi Richard,

    I agree that if you’re already committed to interpreting the memory cells of RAM as discrete units and you’re already committed to paying attention to charge as opposed to spin or temperature or excitation level or any number of attributes as being interpretable as a 1 or a 0, and you are committed to a similar suite of assumptions regarding the CPU and the connections between them and so on, then perhaps there is no ambiguity.

    But it’s precisely those commitments that Searle is criticising, as far as I can see. They’re useful. Reasonable. They allow for predictable behaviour. But it’s hard to see how you could justify the assertion that the computer as a physical object is to be interpreted as implementing algorithm A on this view when it could be interpreted as implementing algorithm B on a view which considered its organisation on completely different terms, e.g. in terms of vibrations or whatever.

    For what it’s worth I think you are perhaps right that there’s no mapping from states of a chess computer qua instantiation of a chess algorithm to states of a checkers algorithm. But this is not particularly obvious to me. It may be that one could somehow implement a Turing machine supervening on a chess game and the particular rules an algorithm would use to play it. I very much doubt it, but it at least illustrates the kind of clever indirect trickery that can sometimes lead to surprising results. There’s probably a better way around it. Or perhaps you’re right that the idea is simply not workable.

    > Do you believe that stones are constantly playing chess against themselves (as well as executing every other possible algorithm)?

    I don’t think the question is meaningful. I don’t think there is an objective fact of the matter about what algorithms any physical object is computing. I’m a Platonist. I think all these algorithms exist, and I’m perfectly happy to accept that any object can be interpreted to be computing any algorithm, but I don’t think that algorithm has any deep connection with the object.

    But I am a computationalist. It’s just that I attribute consciousness to an algorithmic process directly instead of to a physical object. The connection between the algorithm that is me and the physical object that is my brain is just that this algorithm is the most useful or natural interpretation of what the brain is doing.

  97. Hi DM,

    I’m afraid we’re speaking such different languages that we’re probably not going to make much progress. So I’m going to stop here. I’ll just sum up my view briefly for what it’s worth.

    Algorithms are executed when there exists a mechanism with the causal powers to instantiate that algorithm. A chess-playing algorithm is such a complex and specific one that an appropriate mechanism is vastly unlikely to occur by chance. The only known mechanisms with the necessary causal powers are human brains (those which have learnt to play chess) and computers (those which have been programmed to play chess). The vibrating atoms in a computer’s case do not constitute the right sort of mechanism for playing chess, even when combined in large numbers. Atoms have to be arranged in a much more specific sort of way to constitute the right sort of mechanism.

    The trouble with Bishop’s argument (and yours as far as I can make out) is that it doesn’t pay any attention to mechanism.

  98. “‘Intentionality’ is already present in the signals that count as the brain inputs,”

    To my mind computationalists start all discussions with a statement like “assuming that brains are computers I’ll show that brains are computers”.
    This is one of them! We have to assume that intentionality “exists” (whatever that means) in “signals” (whatever that means) that count as “brain inputs” (whatever that means).

    I have no reason to assume any of that to be true – there is just no evidence for it. However…

    “I would pick marginally less pessimistic percentages”

    Upon what scientific basis?

    “Because if intentionality is present in some mechanisms from the start” ..

    This is a good one. What do you mean by “start”? It seems to me you’ve made the “start” the interesting bit and neatly avoided it altogether. I have a theory about what you mean when you say “start” – “start” is the point at which it becomes easier to apply computational theories. This hardly gives computationalism any credibility and just changes the question. The question is no longer “does the brain work like a computer”, as you appear to have rejected that – the question becomes “how does intentionality arise in the brain prior to the point that we can start applying computational theories?” In which case the role of functionalism becomes so frivolous as to hardly bother discussing it.

    “As a note: an action potential travelling along an axon, as well as synaptic transmission, look very convincingly as mechanisms that transmit signals.”

    That is a physical mechanism, not a computational algorithm. If you are on about duplicating the “signal” through duplication of a stream of electrons – that is not computational simulation. That is physical reproduction. Physical reproduction of a brain amounts to a recreation of the natural physical causal powers of a brain. That is likely to cause mental effects in my opinion, as the necessary causal preconditions are there. But computational simulation of an action potential, inside a computer, based upon an arbitrary model of an action potential, has as much causal power as a crossword puzzle. It will do nothing.

    I’ve always felt that “artificial” consciousness is not far away – but it will never be in a computer (any more than a computer simulation of the weather gets wet); it WILL be in a biology lab, with experiments on tissue and the actual stuff that causes minds in the first place.

    “Least of all an obvious dead-end.”

    It’s the most obvious dead end in the whole of the research – but a lot of people with political clout (Dennett et al) as well as the AI people in defence research have an investment in it.

  99. @John Davey #101.
    Thanks John, you make a good point here, in particular the fact that I am shifting the question:

    > the question becomes “how does intentionality arise in the brain prior to the point that we can start applying computational theories?” In which case the role of functionalism becomes so frivolous as to hardly bother discussing it.

    I am not really saying that “intentionality arises in the brain” though: I’m saying that intentionality is there because of the tight (but not 100% reliable) causal relation between the signals the brain receives and whatever causes them in the first place. So yes, the role of functionalism does become (somewhat) trivial if you accept this view. Why would this be a bad thing?

    However, the rest of your rebuttal leaves me unsatisfied: I don’t see much to work with.
    The three “whatever that means”:
    Intentionality: the quality of being about something. Where does it come from? I think I’ve answered this one many times over. I also think it is a weak point in my argument, but I don’t think you are giving me reasons to refute the idea that we can understand action potentials that travel towards the brain as being signals about something. To me, they are (see below), and this does simplify the philosophical conundrums.
    Signals: you then mention an action potential and say that it is not a computational algorithm. Indeed: algorithms work by producing, combining and manipulating the signals. All I’m saying, and I thought this much was clear, is that things like synaptic transmission and action potentials can be understood (and modelled) as signals. If not, I’d like someone to tell me why not…
    Brain inputs: I’ve said it many times that I do see a problem in defining what can be considered as “brain input” and what should be seen as already happening in the brain. I’m asking for a little charity here: for the sake of simplicity, let’s accept that we can make a meaningful distinction. We know this distinction is going to be blurry, but I fail to see how this blurriness creates a theoretical problem. As always: if you do see why it is so (or where problems arise in other ways), please do let me know. Unfortunately, blunt dismissals do not help me pinpoint the origin of the disagreement.

    Thus, you say there is no evidence for my foundational claims, but once again, I have to ask. If you step on my toe, action potentials will travel through my leg and spine as a consequence. I claim it is legitimate to interpret them as “signals about the damage made”; you claim they are not. On my side, there is causality: such action potentials (to a first approximation) happen if and only if some damage occurred. On your side, what are the reasons to claim “the action potentials can’t be understood as signals about the damage made”? I don’t have any reason to accept this criticism, so please forgive me if I don’t.
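
    Here is a toy version of what I mean (the firing probabilities and the “nociceptor” are invented purely for illustration; no claim about real physiology is intended): a unit that fires almost always when damage occurs and almost never otherwise is what I am calling a signal about the damage, because its states covary causally with their cause.

    ```python
    import random

    def afferent_signal(damage, noise=0.02):
        """Toy 'nociceptor': fires (1) almost always when damage occurs,
        and almost never otherwise -- a signal that covaries with its cause."""
        p_fire = 1.0 - noise if damage else noise
        return 1 if random.random() < p_fire else 0

    if __name__ == "__main__":
        random.seed(1)
        trials = 10_000
        hits = sum(afferent_signal(True) for _ in range(trials))           # damage present
        false_alarms = sum(afferent_signal(False) for _ in range(trials))  # no damage
        print(f"fired on {hits}/{trials} damage trials, "
              f"{false_alarms}/{trials} no-damage trials")
    ```

    The small noise term is the “not 100% reliable” part of the causal relation; the aboutness claim amounts to nothing more than this tight covariation.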

    Of course an arbitrary representation of an action potential would do nothing: I am not talking about this. I’m talking about simulations that represent how (already intentional) signals get manipulated, integrated and so forth. Note that I have a much longer “overall” reply on this whole discussion, to be published soon, please hold on: it may clarify a bit.

    Finally:

    > [the brain is 99% unknown. In terms of overall function, it’s to all intents and purposes 100% unknown.]

    > I would pick marginally less pessimistic percentages

    > Upon what scientific basis?

    Oh, where shall I start? We know about many functional associations with particular brain regions. Many of them. In some cases, we are even getting down to single neurons (in fact, we started doing so back in the sixties, I believe).
    For example, we know that the hippocampus has a lot to do with memory in general and spatial memory in particular, but the exact role of the famous “place cells” is still being investigated:
    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4161971/

    Similarly, we know plenty of things about the inner workings of the early visual processing:
    http://www.scholarpedia.org/article/Models_of_visual_cortex
    So maybe we can hope that we know just a little more than 1%.

    Also: we do know very little in terms of overall function, but we can nevertheless use techniques such as deep brain stimulation to alleviate the symptoms of Parkinson’s (http://www.parkinsons.org.uk/content/deep-brain-stimulation), and of course we all use drugs that alter how our brains work in reliable ways (and in most cases, we do have decent ideas of why), so I wouldn’t go as far as saying that in functional terms we know exactly nothing. We know very little, that’s agreed, but we do know something.

  100. Sergio

    ” If you step on my toe, action potentials will travel through my leg and spine as a consequence. I claim it is legitimate to interpret them as “signals about the damage made”, you claim they are not. ”

    Actually I don’t see how you could infer this from what I said… However, to say that the signals (or electric currents, whatever you want to call them) are “about” anything is clearly wrong. The best bet is to say they are part of the causal chain that eventually brings about “intentionality” at the higher levels of brain function.

    It might just be language on your part, so it may be that you would agree with me on this.

    “All I’m saying, and I thought this much was clear, is that things like synaptic transmission and action potentials can be understood (and modelled) as signals.”

    Well, it wasn’t clear… but what of it? Where do you go from this? To reiterate the point that was in discussion, you were claiming that computational simulation of a brain could be viewed as replication. I had previously said this was not true. Your counter was that reproducing a “signal” could reproduce (I suppose) mental effects. I pointed out that reproducing an actual signal – a flow of electrons – does not amount to computational simulation but amounts to physical replication. So if your proposal is to replicate “brain signals” as actual physical electron flows, this has nothing to do with computing.

    I don’t see how pointing out that action potentials can be ‘modelled’ is a reply to my previous point. Anybody who has studied maths knows that anything can be modelled, from electorates to rotting bins to weather and the price of bananas.

    The question is: what view do you take of the model? Do you think that a computer simulation of a ‘modelled’ action potential is the same thing as an actual action potential, and do you think it has the same natural causal power?

    Like I say to most computationalists – do you think a duck is the same thing as a painting of a duck?

    Depends upon your view of the complexity of the brain, of course, but I’d say the last three paras of the reply prove my point…

  101. @John #103
    Sorry John, I shouldn’t assume you had the time and will to read all of the discussion: some of our misunderstandings stem from this (wrong) assumption, I think. I have a practical problem here: it’s really hard to remember in what context I’ve addressed what, so I may be making the same mistake again (assuming this or that is understood).

    As explained in the new post, the best challenge I can think of is precisely what you are proposing:

    > to say that the signals (or electric currents, whatever you want to call them) are “about” anything is clearly wrong. The best bet is to say they are part of the causal chain that eventually brings about “intentionality” at the higher levels of brain function.

    It would be fantastic if you could do the following:
    Re-post your question on the latest article (Resurgent), and, importantly, add a reason why “to say that the signals are ‘about’ anything is clearly wrong”. That’s to say: I have some idea of why you think the word “clearly” is justified, because you hint at it. Yes, it might be just a matter of how we use the words (“intentionality” or “about”), but my point here is that a tight causal chain that is sustained by internal (sensory) structures is what ignites the possibility of fully formed intentionality (trying to decide at what level we can start talking of intentionality and where we should add “proto-” qualifiers looks futile to me). So yes, I can agree with your point, but this doesn’t mean you agree with mine.

    I’ll indulge on this thread just to answer your (non-rhetorical) questions; after this I’ll ask you to move to the new thread, otherwise it will become too difficult for me to keep track of the ongoing discussions:

    > The question is what view do you take of the model?

    I don’t know for sure, but I do suspect that the “right” model might count as a partial reproduction, so I guess this is the conclusion you really want to refute.

    > Do you think that a computer simulation of a ‘modelled’ action potential is the same thing as an actual action potential[?]

    (I guess this was a rhetorical question, but:) Not at all, I’m still with you on this one.

    > do you think it has the same natural causal power?

    A simulated spike does not have that causal power in itself, that’s for sure. The question, however, is: can we identify “the same” causal powers and “reproduce” them in a separate model, in a functionally equivalent way?

    I’ve discussed this with Peter a while back: his take is that maybe we can, but we would probably need to simulate all the molecules involved (or more), so when trying to reproduce a whole brain, this would be at the very least uninteresting and uninformative – I don’t know if he’ll be happy to say that it “would count as reproduction”, though (I wouldn’t be surprised if not). My hope is that, by analysing causal relations in terms of information processing (computations, whatever, I do assume you understand the gist), we might be capturing most of the relevant ones (and therefore be able to leave out other “known to be irrelevant” details).
    If we could, we would then be able to reproduce the causal relations in a model. Once we get convinced that we did get the relevant details right enough, “did we at least partially reproduce a mind?” would be a question that needs to be answered (assuming the answer would be “no” looks very wrong to me).

    There is no guarantee this will work. To some extent I’m the first one to expect that “relevant causal details” will be coming from a ridiculously high number of directions (see also http://sergiograziosi.wordpress.com/2015/02/15/complexity-is-in-the-eye-of-the-beholder-thats-why-it-matters/ for a direct answer to your question about complexity), but your position seems to be “you can’t replicate the causal details”, which to me seems, ahem, clearly (!!) a premature conclusion.

    Anyway, I appreciate you taking the time to engage: your frustration with computationalists is very visible, so I appreciate your efforts even more!

