Sergio’s Computational Functionalism

Sergio has been ruminating since the lively discussion earlier, and here, by way of a bonus post, are his considered conclusions…

Not so long ago I enthusiastically participated in the first phases of the discussion below Peter’s post on “Pointing”. The discussion rapidly descended into the fearsome depths of the significance of computation. In the process, more than one commentator directly and effectively challenged my computationalist stance. This post is my attempt to clarify my position, written with a sense of gratitude: thanks to everyone for challenging my assumptions so effectively, and to Peter for sparking the discussion and hosting my reply.

This lengthy post will proceed as follows: first, I’ll try to summarise the challenge that has been so forcefully proposed, and explain why I think it has to be answered. The second stage will be my attempt to reformulate the problem, taking as a template a very practical case that might be uncontroversial. With the necessary scaffolding in place, I hope that building my answer will become almost a formality. However, the subject is hard, so wish me luck, because I’ll need plenty.

The challenge: in the discussion, Jochen and Charles Wolverton showed that “computations” are arbitrary interpretations of physical phenomena. Because Turing machines are pure abstractions, it is always possible to define an arbitrary mapping between the evolving states of any physical object and abstract computations. Therefore the question “what does this system compute?” does not admit a single answer: the answer can be anything and nothing. In terms of one of our main explananda, “how do brains generate meanings?”, the claim is that answering “by performing some computation” is therefore always going to be an incomplete answer. The reason is that computations are abstract: physical processes acquire computational meaning only when we (intentional beings) interpret these processes in terms of computation. From this point of view, it becomes impossible to say that computations within the brain generate the meanings that our minds deal with, because on this view meanings are always a matter of interpretation. Once one accepts this point of view, meanings always pre-exist as an interpretation map held by an observer. Therefore “just” computations can only trade pre-existing (and externally defined!) meanings, and it would seem that generating new meanings from scratch entails an infinite regress.

To me, this is nothing but another transformation of the hard problem, the philosophical kernel that one needs to penetrate in order to understand how to interpret the mechanisms that we can study scientifically. It is also one of the most beautifully recursive problems that I can envisage: the challenge is to generate an interpretative map that can be used to show how interpretative maps can be generated from scratch, but this seems impossible, because apparently you can only generate a new map by grounding it on a pre-existing one. Thus, the question becomes: how do you generate the first map, the first seed of meaning, a fixed reference point, which gets the recursive process started?

In the process of spelling out his criticism, Jochen gave the famous example of a stone. Because the internal state of the stone changes all the time, for any given computation we can create an ad-hoc map that specifies a correspondence between a series of computational steps and the sequence of internal states of our stone. Thus, we can show that the stone computes whatever we want, and therefore, if we had a computational reduction of a mind/brain, we could say that the same mind exists within every stone. Consequently, computationalism either requires some very odd form of panpsychism or is utterly useless: it can’t help us discriminate between what can generate a mind and what can’t. I am not going to embrace panpsychism, so I am left with the only option of biting the bullet and showing how this kind of criticism can be addressed.
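To see just how cheap such an ad-hoc map is, here is a minimal sketch (entirely my own illustration; the state names and the toy “computation” are invented for the purpose): given any sequence of distinct physical states, a post-hoc dictionary suffices to make the stone “implement” any computation of the same length.

```python
# A toy version of the stone argument: given ANY sequence of distinct
# physical states, we can always build a post-hoc map that "interprets"
# that sequence as an arbitrary computation of our choosing.

def trace_of_addition(a, b):
    """The computation we want the 'stone' to implement: counting up from a."""
    state, steps = a, []
    for _ in range(b):
        state += 1
        steps.append(state)
    return steps  # e.g. [3, 4] for 2 + 2

# Pretend these are three successive internal states of a stone
# (thermal fluctuations, lattice vibrations, whatever we can observe).
stone_states = ["s0", "s1", "s2"]

# The ad-hoc interpretation map: i-th stone state <-> i-th computational step.
computation = [2] + trace_of_addition(2, 2)       # [2, 3, 4]
interpretation = dict(zip(stone_states, computation))

# Under this map, the stone "computes" 2 + 2:
print([interpretation[s] for s in stone_states])  # [2, 3, 4]
```

The crucial point is that the map carries all the semantic weight: swap `computation` for any other three-step trace and the very same stone “implements” that one instead.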

Without digressing too much, I hope that the above leaves no doubt about where I stand: first, I think this critique of computational explanations of the (expected) mind/brain equivalence is serious, and it needs an answer. Furthermore, I also think that answering it convincingly would count as significant progress, even a breakthrough, if we take ‘convincingly’ to stand for ‘capable of generating consensus’. Dissolving this apparently unsolvable conundrum would be equivalent to showing why a mechanism can generate a mind; I don’t know if there is a bigger prize in this game.

I’ll start from my day job: I write software for a living. What I do is write instructions that make a computer reliably execute a given sequence of computations and produce the desired results. It follows that I can, somehow, know for sure what computations are going to be performed: if I couldn’t, writing my little programs would be in vain. Thus, there must be something that distinguishes our ordinary computers from any given stone. The obvious difference is that computers are engineered: they have a very organised structure and behaviour, specifically because this makes programming them feasible. However, in theory it would be possible to produce massively complicated input/output systems that substitute a stone for the relevant parts (CPU, RAM, long-term memory) of a computer; we don’t do this because it is practically far too complicated, not because it is theoretically impossible. Thus, the difference isn’t in the regular structure and easily predictable behaviour of the Von Neumann/Harvard and derived architectures. I think the most notable differences are two:

  1. When we use a computer, we have already agreed upon the correct way to interpret its output. More specifically, all programs are written assuming such a mapping, and produce outputs that conform to it. If a given program will be used by humans (this isn’t always the case!), the programmer will make sure that the results are intelligible to us. Similarly, the mapping between the computer’s states and their computational meaning is also fixed (so fixed and agreed in advance that, in practice, I don’t even need to know how it works).
  2. In turn, because the mapping isn’t arbitrary, the input/output transformations also follow predefined, discrete sets of rules. Thus, you can plug in different monitors and keyboards and expect them to work in similar ways.

Both differences come down to having a fixed map (for simplicity, we can collapse the maps from 1 & 2 into a single one). Once our map is defined and agreed upon, we can solve the stone problem and say “computer X is running software A, computer Y is running software B”, and expect everyone to agree. The arbitrariness of the map becomes irrelevant, because in this case the map itself has been designed/engineered and agreed upon from the start.
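A minimal sketch of the same point (again my own toy example; the state names and encoding are invented): once the interpretation map is fixed in advance and shared, “what does this system compute?” has exactly one answer, because every observer decodes the same states in the same way.

```python
# Illustrative only: an interpretation map agreed upon BEFORE the system runs,
# rather than constructed after the fact to fit a desired answer.
AGREED_MAP = {"low": 0, "high": 1}    # e.g. voltage levels -> bits

def decode(physical_states):
    """Every observer uses the same fixed map, so every observer
    reads the same computational content out of the same states."""
    bits = [AGREED_MAP[s] for s in physical_states]
    return int("".join(map(str, bits)), 2)

register = ["high", "low", "high"]    # one concrete physical history
print(decode(register))               # 5, for any observer whatsoever
```

Contrast this with the stone sketch above: there the map was built after the fact to fit a chosen answer; here the map comes first, and the physical system is engineered to respect it.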

This isn’t a trivial observation: it becomes enlightening when we propose the hypothesis that brains can be modelled as computers. Note my wording: I am not saying “brains are computers”. I talk about “modelled” because the aim is to understand how brains work; it’s an epistemological quest. We are not asking what brains/minds are; in fact, I’ll do all I can to steer away from ontology altogether.

Right: if we assume that brains can be modelled as computers, it follows that it should be possible to compose a single map that allows us to interpret brain mechanisms in terms of computations. Paired with a perfect brain scanner (a contraption that can report all of the brain states required to do the mapping), such a map would allow us to say without doubt “this brain is computing this and that”. As a result, with relatively little additional effort, it should become possible to read the corresponding mind. From this point of view, the fact that there is an infinite number of possible maps, but only one is “the right” one, means that the problem is not about arbitrariness (as it seemed for the stone). The problem is entirely different: it is about finding the correct map, the one that can reliably discern what the scanned mind is thinking about. This is why, in the original discussion, I said that the arbitrariness of the mapping is the best argument for a computational theory of the mind: it ensures the search space for the map is big enough to give us hope that such a map does exist. Note also that all of the above is nothing new; it just states explicitly the assumptions that underlie all of neuroscience, and if there are exceptions, they would be considered very unorthodox.

However, this is where I think the subject becomes interesting. All of the above has left out the hard side of the quest: I haven’t even tried to address the problem of how computations can generate a “meaningful map” on their own. To tackle this mini-hard problem, we need to go back to where we started and recall how I described the core of the anti-computationalist stance. Talking about brain mechanisms, I asked: how [does the brain] generate the first map, the first seed of meaning, a fixed reference point, which gets the recursive process started? Along the way, I claimed that it is reasonable to expect that a different but important map can be found, the one that describes (among many other things) how to translate brain events into mind events (thoughts, memories, desires, etc.). Therefore, one has to admit that this second map (our computational interpretation) would have to contain, at least implicitly, the answer about the fixed reference point. How is this possible? Note that I have strategically posed the question in my own terms, and mentioned the need for a fixed reference point. You may want to recall the “I-token” construct of Retinoid Theory, but in general one can easily point out that the reference point is provided by the physical system itself. We have, ex hypothesi, a system that collects “measurements” from the environment (sensory stimuli), processes them, and produces output (behaviour); this output is usually appropriate for preserving the system’s integrity (and for reproducing, but that’s another story). Fine: such a system IS a fixed reference point. The integrity that justifies the whole existence of the system IS precisely what is fixed; all the stimuli it collects are relative to the system itself. As long as the system is intact enough to function, it can count as a fixed reference point; with a fixed reference, meanings become possible, because reliable relations can be identified, and if they can, then they can be grouped together to produce more comprehensive “interpretative” maps. This is the main reason why I like Peter’s Haecceity: it’s the “thisness” of a particular computational system that actually seeds the answer to the hard side of the question.
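To make the “fixed reference point” idea concrete, here is a minimal sketch (entirely my own illustration; `MinimalAgent` and its numbers are invented): a system whose classifications are anchored to nothing but its own integrity. The stimuli carry no externally supplied meaning; they acquire one relative to the system’s persistence.

```python
# A toy system that grounds "meaning" in its own integrity, not in an
# observer's interpretation map. Illustrative assumptions throughout.

class MinimalAgent:
    def __init__(self):
        self.integrity = 10.0        # the fixed reference point: staying intact

    def interpret(self, stimulus):
        """A stimulus means 'good' or 'bad' only relative to THIS system:
        good if it increases integrity, bad if it erodes it."""
        return "good" if stimulus > 0 else "bad"

    def step(self, stimulus):
        self.integrity += stimulus   # the world acts on the system...
        # ...and behaviour is selected relative to the system's own state:
        return "approach" if self.interpret(stimulus) == "good" else "avoid"

agent = MinimalAgent()
print(agent.step(+2.0))  # approach: meaningful only relative to the agent
print(agent.step(-3.0))  # avoid
```

Nothing here requires an external observer to fix the map: as long as the agent remains intact enough to function, its own persistence anchors the good/bad distinction, and richer maps can then be built on top of whatever relations prove reliable relative to that anchor.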

Note also that all of the above captures the differences I’ve spelled out between a standard computer and a common stone. It’s the specific physicality of the computer that ultimately distinguishes it from a stone: in this case, we (humans) have defined a map (designing it from scratch with manageability in mind) and then used the map to produce a physical structure that will behave accordingly. In the case of brains/minds, we need to proceed in the opposite direction: given a structure and its dynamic properties, we want to define a map that is indeed intelligible.

Conclusions:

  • The computational metaphor should be able to capture the mechanisms of the brain and thus describe the (supposed) equivalence between brain events and mind events.
  • Such a description would count as a weak explanation, as it spells out a list of “whats” but doesn’t even try to produce a conclusive “why”.
  • However, merely expecting such a mapping to be possible already suggests where to find the “why” (or provides it, if you feel charitable). If such a mapping proves to be possible, it follows that to be conscious, an entity needs to be physical. Its physicality is the source of its ability to generate its own, subjective meanings.
  • This in turn reaffirms why our initial problem, posed by the unbounded arbitrariness of computational explanations, does not apply. The computational metaphor is a way to describe (catalogue) a bunch of physical processes: it spells out the “what” but is mute on the “why”. The theoretical nature of computation is the reason why it is useful, but it also points to the missing element: the physical side.
  • If such a map turns out to be impossible, the most likely explanation is that there is no equivalence between brain events and mind events.

 

Finally, you may claim that all these conclusions are themselves weak. Even if the problematic step of introducing Haecceity/physicality as the requirement to bootstrap meaning is accepted, the explanation we gain is still partial. This is true, but it entails the mystery of reality (again, following Peter): because cognition can only generate and use interpretative maps (or translation rules), it “just” shuffles symbols around; it cannot, in any way or form, ultimately explain why the physical world exists (or what exactly the physical world is; this is why I steered away from ontology!). Because all knowledge is symbolic, some aspect of reality always has to remain unaccounted for and unexplained. Therefore, all of the above can still legitimately feel unsatisfactory: it does not explain existence. But hey, it does talk about subjectivity and meaning (and, by extension, intentionality), so it counts as (hypothetical) progress to me.

Now please disagree and make me think some more!