Against Non-Computationalism

Marcin Milkowski has produced a short survey of arguments against computationalism; his aim is in fact to show that they all fail and that computationalism is likely to be both true and non-trivial. The treatment of each argument is very brief; the paper could easily be expanded into a substantial book. But it’s handy to have a comprehensive list, and he does seem to have done a decent job of covering the ground.

There are a couple of weaknesses in his strategy. One is just the point that defeating arguments against your position does not in itself establish that your position is correct. But in addition he may need to do more than he thinks. He says computationalism is the belief ‘that the brain is a kind of information-processing mechanism, and that information-processing is necessary for cognition’. But I think some would accept that as true in at least some senses while denying that information-processing is sufficient for consciousness, or characteristic of consciousness. There’s really no denying that the brain does computation, at least for some readings of ‘computation’ or some of the time. Indeed, computation is an invention of the human mind and arguably does not exist without it. But that does not make it essential. We can register numbers with our fingers, and while that does mean a hand is, in a sense, a digital calculator, digital calculation isn’t really what hands do; if we’re looking for the secret of manipulation we need to look elsewhere.

Be that as it may, a review of the arguments is well worthwhile. The first objection is that computationalism is essentially a metaphor; Milkowski rules this out by definition, specifying that his claim is about literal computation. The second objection is that nothing in the brain seems to match the computational distinction between software and hardware. Rather than look for equivalents, Milkowski takes this one on the nose, arguing that we can have computation without distinguishing software and hardware. To my mind that concedes quite a large gulf separating brain activity from what we normally think of as computation.

No concessions are needed to dismiss the idea that computers merely crunch numbers; on any reasonable interpretation they do other things too by means of numbers, so this doesn’t rule out their being able to do cognition. More sophisticated is the argument that computers are strictly speaking abstract entities. I suppose we could put the case by saying that real computers have computerhood in the light of their resemblance to Turing machines, but Turing machines can only be approximated in reality because they have infinite tape and move between strictly discrete states, etc. Milkowski is impatient with this objection – real brains could be like real computers, which obviously exist – but reserves the question of whether computer symbols mean anything. Swatting aside the objection that computers are not biological, it’s this interesting point about meaning that Milkowski tackles next.

He approaches the issue via the Chinese Room thought experiment and the ‘symbol grounding problem’. Symbols mean things because we interpret them that way, but computers only deal with formal, syntactic properties of data; how do we bridge the gap? Milkowski does not abandon hope that someone will naturalise meaning effectively, and mentions the theories of Millikan and Dretske. But in  the meantime, he seems to feel we can accommodate some extra function to deal with meaning without having to give up the idea that cognition is essentially computational. That seems too big a concession to me, but if Milkowski set out his thinking in more depth it might perhaps be more appealing than it seems on brief acquaintance. Milkowski dismisses as a red herring Robert Epstein’s argument from the inability of the human mind to remember what’s on a dollar bill accurately (the way a computational mind would).

The next objection, derived from Gibson and Chemero, apparently says that people do not process information, they merely pick it up. This is not an argument I’m familiar with, so I might be doing it an injustice, but Milkowski’s rejection seems sensible; only on some special reading of ‘processing’ would it seem likely that people don’t process information.

Now we come to the argument that consciousness is not computational; that computation is just the wrong sort of process to produce consciousness. Milkowski traces it back to Leibniz’s famous mill argument; the moving parts of a machine can never produce anything like experience. Perhaps we could put in the same camp Brentano’s incredulity and modern mysterianism; Milkowski mentions Searle’s assertion that consciousness can only arise from biological properties, not yet understood. Milkowski complains that if accepted, this sort of objection seems to bar the way to any reductive explanation (some of his opponents would readily bite that bullet).

Next up is an objection that computer models ignore time; this seems again to refer to the discrete states of Turing machines, and Milkowski dismisses it similarly. Next comes the objection that brains are not digital. There is in fact quite a lot that could be said on either side here, but Milkowski merely argues that a computer need not be digital. This is true, but it’s another concession; his vision of brain computationalism now seems to be of analogue computers with no software; I don’t think that’s how most people read the claim that ‘the brain is a computer’. I think Milkowski is more in tune with most computationalists in his attitude to arguments of the form ‘computers will never be able to x’ where x has been things like playing chess. Historically these arguments have not fared well.

Can only people see the truth? This is how Milkowski describes the formal argument of Roger Penrose that only human beings can transcend the limitations which every formal system must have, seeing the truth of propositions that cannot be proved within the system. Milkowski invokes arguments about whether this transcendent understanding can be non-contradictory and certain, but at this level of brevity the arguments can really only be gestured at.

The objection Milkowski gives most credit to is the claimed impossibility of formalising common sense. It is at least very difficult, he concedes, but we seem to be getting somewhere. The objection from common sense is a particular case of a more general one which I think is strong; formal processes like computation are not able to deal with the indefinite realms that reality presents. It isn’t just the Frame Problem; computation also fails with the indefinite ambiguity of meaning (the same problem identified for translation by Quine); whereas human communication actually exploits the polyvalence of meaning through the pragmatics of normal discourse, rich in Gricean implicatures.

Finally Milkowski deals with two radical arguments. The first says that everything is a computer; that would make computationalism true, but trivial. Well, says Milkowski, there’s computation and computation; the radical claim would make even normal computers trivial, which we surely don’t want. The other radical case is that nothing is really a computer, or rather that whether anything is a computer is simply a matter of interpretation. Again this seems too destructive and too sweeping. If it’s all a matter of interpretation, says Milkowski, why update your computer? Just interpret your trusty Windows Vista machine as actually running Windows 10 – or macOS, why not?

Sergio Resurgent

Sergio’s thoughts on computationalism evoked a big response. Here is his considered reaction, complete with bibliography…

+++++++++++++++++++

A few weeks ago, Peter was kind enough to publish my personal reflections on computational functionalism. I made it quite clear that the aim was to receive feedback and food for thought, particularly in the form of disagreements and challenges. I got plenty, more than I deserve, so a big "thank you" is in order, to all the contributors and to Peter for making it all possible. The discussions that happen here give the lie to the bad rep that the bottom half of the internet gets. Over the last week or so, I’ve been busy trying to summarise what I’ve learned, which challenges I find unanswerable and which answers fail to convince. This is hard to do: even though we have all tried hard to remain on topic, our subject is slippery and comes with a lot of baggage. So apologies if I fail to address this or that stream; do correct me whenever you feel the need.

This post is going to be long, so I’ll provide a little guidance here. First, I will summarise where I am now; there may be nuanced differences from the substance of the original post, but if there are, I can’t spot them properly (a well-known cognitive limitation: after changing your mind, it’s very difficult to reconstruct your previous beliefs faithfully). There certainly is a shift of focus, which I hope will help clarify the overall conceptual architecture that I have in mind. After the summary, I will try to discuss the challenges that have been raised, the ones that I would expect to receive, and where disagreements remain unresolved. Finally, I’ll write some concluding reflections.

Computational Functionalism requires Embodiment.

The whole debate, for me, revolves around the question: can computations generate intentionality? Critics of computational functionalism say "No" and take this as a final argument to conclude that computations can’t explain minds. I also answer "No" but reach a weaker conclusion: computations can never be the whole story; we need something else to get the story started (to provide a source of intentionality), but computations are the relevant part of how the story unfolds.
The missing ingredient comes from what I call "structures": some "structures" in particular (both naturally occurring and artificial) exist because they effectively measure some feature of the environment and translate it into a signal; more structures then use, manipulate and re-transmit this signal in organised ways, so that the overall system ends up showing behaviours that are causally connected to what was measured. Thus, I say that such systems (the overall structure) are to be understood as manipulating signals about certain features of the world.
This last sentence pretty much summarises all I have to say: it implies computations (collecting, transmitting, manipulating signals and then generating output). It also implies intentionality: the signals are about something. Finally, it shows that what counts are physical structures that produce, transmit and manipulate signals: without embodiment, nothing can happen at all. I conclude that such structures can use their intrinsic intentionality (as long as their input, transmission & integration structures work, they produce and manipulate signals about something), and build on it, so that some structures can eventually generate meanings and become conscious.

Once intentionality is available (and it is from the start), you get a fundamental ingredient that does something over and above what can be achieved by computations alone.

Say that you find an electric cable, with a variable current/voltage flowing in it. You suspect it carries a signal, but as Jochen would point out, without some pre-existing knowledge about the cable, you have no way of figuring out what is being signalled.
But what precedes brains already embodies the guiding knowledge: the action potentials that travel along axons from the sensory periphery to the central brain are already about something (as are the intracellular mechanisms that generate them at the receptor level). Touch, temperature, smells, colours, whatever. Their mapping isn’t arbitrary or optional; their intentionality is a quality of the structures that generate them.

Systems that are able to generate meanings and become conscious are, as far as we know (and/or for the time being), biological, so I will now move to animals and human animals in particular. Specifically, each one of us produces internal sensory signals that already are about something, and some of them (proprioception, pain, pleasure, hunger, and many more) are about ourselves.
Critics would say that, for any given functioning body, you can in theory interpret these signals as encoding the list of first-division players of whichever sport you’d like; all you need is to design a post-hoc encoding that specifies the mapping from signal to signified player. This is equivalent to our electrical cable above: without additional knowledge, there is no way to correctly interpret the signals from a third-party perspective. However, within the structure that contains these signals, they are not about sports; they really are about temperature, smell, vision and so forth: they reliably respond to certain features of the world, in a highly (if imperfectly) selective way.

Thus, you (all of us) can find meaningful correlations between signals coming from the world, between the ones that are about yourself, and all combinations thereof. This provides the first spark of meaning (this smells good, I like this panorama, etc). From there, going to consciousness isn’t really that hard, but it all revolves around two premises, which are what I’m trying to discuss here.

  • Premise one: intentionality comes with the sensory structure. I hope that much is clear. You change the structure, you change what the signal is about.
  • Premise two: once you have a signal about something, the interpretation isn’t arbitrary any more. If you are a third party, it may be difficult or impossible to establish exactly what a signal is signalling, but in theory, ways of getting empirically closer to the correct answer can exist. However, these ways are entirely contingent: you need to account for the naturally occurring environment where the signal-bearing structure emerged.

If you accept these premises, it follows that:

a) Considering brains: they are in the business of integrating sensory signals to produce behavioural outputs. They can do so because the system they belong to includes the mapping: signals about touch are just that. Thus, from within, the mapping comes from hard constraints; in fact, you would expect these constraints to be enough to bootstrap cognition.
b) To do so, highly developed and plastic brains use sensory input to generate models of the world (and more). They compute in the sense that they manipulate symbols, shuffle them around, and generate new ones.
c) Because the mapping is intrinsic, the system can generate knowledge about itself and the world around it. "I am hungry", "I like pizza", "I’ll have a pizza" become possible, and are subject-relative.
d) Thus, when I say "The only facts are epistemological" I actually mean two things (plus one addendum below): (1) they relate to self-knowledge, and (2) they are facts just the same (actually, I’d say they are the only genuine facts that you can find, because of (1)).

Thus, given a system, from a third-party perspective, in theory, we can:

i. Use the premises and conclusion a) to make sensible hypotheses on what the intrinsic mapping/intentionality/aboutness might be (if any).
ii. Use a mix of theories, including information (transmission) theory (Shannon’s) and the concept of abstract computations to describe how the signals are processed: we would be tapping the same source of disambiguation – the structure that produces signals about this but not that.
iii. This will need to closely mirror what the structures in the brain actually do: at each level of interpretational abstraction, we need to keep verifying that our interpretation keeps reflecting how the mechanisms behave (e.g. our abstractions need to have verifiable predictive powers). This can be done (and is normally done) via the standard tools of empirical neuroscience.
iv. If and only if we are able to build solid interpretations (i.e. ones that make reliable predictions) and to cover all the distance between original signals, consciousness and then behaviour, we will have a second map: from the mechanisms as described in third-person terms, to "computations" (i.e. abstractions that describe the mechanisms in parsimonious ways).

Buried within the last passage, there is some hope that, at the same time, we will learn something about the mental, because we have been trying to match, and remain in contact with, the initial aboutness of the input. We have hope that computations can describe what counts about the mechanism we are studying because the mechanisms themselves rely on (exist because of) their intrinsic intentionality. This provides a third meaning to "the only facts are epistemological" (3): we would have a way to learn about otherwise subjective mental states. However, in this "third-party" epistemological route, we are using empirical tools, so, unlike the first-party view, where some knowledge is unquestionable (I think therefore I am, remember?), the results we would get will always be somewhat uncertain.

Thus: yes, there is a fact to be known about whether you are conscious (because it is an epistemological fact; otherwise it would not qualify as a fact), but the route I’m proposing for finding what this fact is is an empirical one, and can therefore only approximate the absolute truth. This requires grasping the idea that absolute truths depend on subjective ones. If I feel pain, I’m in pain; this is why a third party can claim there is an objective matter on the subject of my feeling pain. This me is possible because I am made of real physical stuff, which, among other things, collects signals about its own structure and the world outside.

At this point it’s important to note that this whole "interpretation" relies on the fact that the initial "aboutness" (generated by sensory structures) is possible because the sensory structures generate a distinction. I see it as a first and usually quite reliable approximation of some property of the external environment: in the example of the simple bacterium, a signal about glucose is generated, and it becomes possible to consider it to be about glucose because, on average, and in the normal conditions where the bacterium is to be found, it will almost always react to glucose alone. Thus, intentionality is founded on a pretence, a useful heuristic, a reliable approximation. This generates the possibility of making further distinctions, and distinctions are a pre-requisite for cognition.
This is also why intentionality can sustain the existence of facts: once the first approximations are done, and note that in this context I could just as well call them “the first abstractions”, conceptual and absolute truths start to appear. 2+2 = 4 becomes possible, as well as “cogito ergo sum” (again).

This more or less concludes my positive case. The rest is about checking if it has any chance of being accepted by more than one person, which leads us to the challenges.

Against Computational Functionalism: the challenges.

To me, the weakest point in all this is premise one. However, to my surprise, my original post got comments like “I don’t know if I’m ready to buy it” and similar, but no one went as far as saying “No, the signal in your fictional bacterium is not about glucose, and in fact it’s not even a signal“. If you think you can construct such a case, please do, because I think it’s the one argument that could convince me that I’m wrong.

Moving on to the challenges I have received, Disagreeable Me, in comment #82, remarks:

If your argument depends on […] the only facts about consciousness or intentionality are epistemological facts, then rightly or wrongly that is where Searle and Bishop and Putnam and Chalmers and so on would say your position falls apart, and nearly everyone would agree with them. If you’re looking for feedback on where your argument fails to persuade, I’d say this is it.

I think he is right in identifying the key disagreement: it’s the reason I’ve re-stated my main argument above, unpacking what I mean by "epistemological fact" and why I use the term.
In short: I think the criticism is a category error. Yes, there are facts, but they subsist only because they are epistemological. If people search for ontological facts, they can’t find them, because there aren’t any: knowledge requires arbitrary distinctions and, at the first level, only allows for useful heuristics. However, once these arbitrary distinctions are made and taken for granted, you can find facts about knowledge. Thus: there are facts about consciousness, because consciousness requires making the initial arbitrary distinctions. Answering "but in your argument, somewhere, you are assuming some arbitrary distinctions" doesn’t count as criticism: it goes without saying.
This is a problem in practice, however: for people to accept my stance, they need to turn their personal epistemology on its head. So, once more, saying "this is your position, but it won’t convince Searle [Putnam, Chalmers, etc…]" is not a criticism of my position. You need to attack my argument for that; otherwise you are implying "Searle is never going to see why your position makes sense", i.e. you are criticising his position, not mine.

Similarly, the criticism that stems from Dancing with Pixies (DwP: the universal arbitrariness of computations) doesn’t really apply. This is what I think Richard Wein has been trying to demonstrate. If you take computations to be something so abstract that you can "interpret any mechanism to perform any computation", you are saying "this idea of computation is meaningless: it applies to everything and nothing, it does not make any distinction". Thus, I’ve got to ask: how can we use this definition of computation to make distinctions? In other words, I can’t see how the onus of refuting this argument falls on the computationalist camp (I’ve gone back to my contra-scepticism and once more can’t understand how/why some people take DwP seriously). To me, there is something amiss in the starting definition of computation, as the way it is formulated (forgetting about cardinality for simplicity’s sake) allows no conclusions to be drawn about anything at all.
If any mechanism can be interpreted as implementing any computation, you have to explain to me why I spend my time writing software/algorithms. Clearly, I could not be using your idea of computations, because I wouldn’t be able to discriminate between different programs (they are all equivalent). But I have a job, and I see the results of my work, so, in the DwP view, something else, not computations, explains what I do for a living. Fine: whatever that something is, it is what I call computations/algorithms. Very few people enjoy spending time on purely semantic arguments, so I’ll leave it to you to invent a way to describe what programming is all about while accepting the DwP argument. If you are able to produce a coherent description, we can use that in lieu of what I normally refer to as computation. The problem should be solved and everyone may save his/her own worldview. It’s also worth noting how all of this echoes the arguments that Chalmers himself uses to respond to the DwP challenge: if we start with perfectly abstract computations, we are shifting all the work onto the implementation side.

If you prefer a blunt rebuttal, this should suffice: in my day job I am not paid to design and write functionless abstractions (computations that can be seen everywhere and nowhere). I am OK with my very local description, where computations are what algorithms do: they transform inputs into outputs in precise and replicable ways. A such-and-such signal goes into this particular system and comes out transformed in this or that way. Any systems that show the same behaviour are computationally equivalent. Nothing more to be said; please move on.
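As a small illustration of that local notion (my own sketch, not anything from the original discussion), two quite different mechanisms count as computationally equivalent here simply because they transform the same inputs into the same outputs:

```python
# A minimal sketch of "computational equivalence as same input/output behaviour".
# The two functions below are implemented quite differently, but over any input
# they produce identical results, so by the local definition above they count as
# the same computation.

def sum_loop(xs):
    """Iterative mechanism."""
    total = 0
    for x in xs:
        total += x
    return total

def sum_recursive(xs):
    """Recursive mechanism."""
    return 0 if not xs else xs[0] + sum_recursive(xs[1:])

test_inputs = [[], [1, 2, 3], [5, -5, 10]]
assert all(sum_loop(xs) == sum_recursive(xs) for xs in test_inputs)
```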

Furthermore, what Richard Wein has been trying to show is indeed very relevant: if we accept an idea of computations as arbitrary interpretations of mechanisms, we are saying "computations are exclusively epistemological tools"; they are, after all, interpretations. Thus, interpretations are explicitly something above and beyond their original subject. It follows that they are abstract and you can’t implement them. Therefore, a computer can’t implement computations: whatever it is that a computer does can be interpreted as computations, but that’s purely a mental exercise; it has no effect on what the computer does. I’m merely re-stating my argument here, but I think it’s worth repeating. We end up with no way to describe what our computers do. Richard is trying to say, "hang on, this state of affairs in a CPU reliably produces that state", and a preferential way to describe this sort of transition in computational terms does exist: you will start describing "AND", "OR", "XOR" operators and so on. But doing so is not arbitrary.
If I wanted to play devil’s advocate, I would say: OK, doing so is not arbitrary only because we have already arbitrarily assigned some meaning to the input. Voltage arriving through this input is to be interpreted as a 1, no voltage as a 0, and the same for this other input. On the output side we find:
1,1 -> 1, while 0 is returned for all other cases. Thus this little piece of the CPU computes an AND operation. Oh, but now what happens if we invert the map on both the inputs and assume that "no voltage equals 1"?
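To make the devil’s-advocate question concrete, here is a small sketch of my own (not part of the original exchange): the very same device, described only by what it does with voltages, reads as AND, NOR or OR depending on which voltage-to-bit map we adopt.

```python
# Illustrative only: the device is described purely in terms of voltages, and
# the three interpretations below are alternative, equally workable mappings.

HIGH, LOW = "voltage", "no voltage"

def device(a, b):
    """Physical behaviour only: output carries voltage just when both inputs do."""
    return HIGH if (a == HIGH and b == HIGH) else LOW

def truth_table(encode_in, decode_out):
    """Logical behaviour induced by a chosen interpretation of the voltages."""
    return {(x, y): decode_out(device(encode_in(x), encode_in(y)))
            for x in (0, 1) for y in (0, 1)}

std = truth_table(lambda b: HIGH if b else LOW, lambda v: 1 if v == HIGH else 0)
# {(0,0): 0, (0,1): 0, (1,0): 0, (1,1): 1}  -> reads as AND

inputs_inverted = truth_table(lambda b: LOW if b else HIGH, lambda v: 1 if v == HIGH else 0)
# {(0,0): 1, (0,1): 0, (1,0): 0, (1,1): 0}  -> reads as NOR

fully_inverted = truth_table(lambda b: LOW if b else HIGH, lambda v: 1 if v == LOW else 0)
# {(0,0): 0, (0,1): 1, (1,0): 1, (1,1): 1}  -> reads as OR
```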
This is where I find some interest: computers are useful, as John Davey remarked, because we can change the meaning of what they do; that’s why they are versatile.

The main trouble is that computers can’t be held to represent anything. And that trouble is precisely the reason they were invented – numbers (usually 1s and 0s) are unlimited in the scope of things that they can represent […].

This is true, and important to accept, but does not threaten my position: if I’m right, intentional systems process signals that have fixed interpretations.

Our skin has certain types of receptors that, when activated, send a signal which is interpreted as "danger! too hot!" (this happens when something starts breaking up the skin cells). If you hold dry ice in your hand, after one or two seconds you will receive that signal (because ice crystals form in your skin and start breaking cells: dry ice is Very Cold!) and it will seem to you that the ice you’re holding has abruptly become very hot (while you still receive the "too cold" signal as well). It’s an odd experience (and a dangerous one – be very careful if you wish to try it), which comes in handy here: my brain receives the signal and interprets it in the usual way; the signal is about (supposedly) too-hot conditions, and although it can misfire, it still conveys the "too hot" message. This is what I was trying to say in the original post: within the system, certain interpretations are fixed; we can’t change them at will, they are not arbitrary. We do the same with computers, and find that we can work with them, write one program to play chess, another one for checkers. It’s the mapping on the I/O side that does the trick…

Moving on, Sci pointed to a delightful article from Fodor, which challenges Evolutionary Psychology directly and marginally disputes the idea that Natural Selection can select for intentionality. I’m afraid that Fodor is fundamentally right in everything he says, but he suffers from the same kind of inversion that generates the DwP and the other kinds of criticism. I’ll explain the details (but please do read the article: it’s a pleasure you don’t want to deny yourself): the central argument (for us here) is about intentionality and the possibility that it may be "selected for" via natural selection. Fodor says that natural selection selects, but it does not select "for" any specific trait. What traits emerge from selection depends entirely on contingent factors.
On this, I think he is almost entirely right.

However, at a higher level, an important pattern does reliably emerge from blind selection: because of contingency, and thus the unpredictability of what will be selected (still without the "for"), what ultimately tends to accumulate is adaptability per se. Thus, you can say that natural selection weakly selects for adaptability. Biologically, this is defensible: no organism relies on a fixed series of well-ordered and tightly constrained events happening at precise moments in order to survive and reproduce. All living things can survive a range of different conditions and still reproduce. The way they do this is by – surprise! – sampling the world and reacting to it. Therefore: natural selection selects for the seeds of intentionality, because intentionality is required to react to changing conditions.
Now, on the subject of intentionality, and to show that natural selection can’t select for intentions, Fodor uses the following:

Jack and Jill
Went up the hill
To fetch a pail of water
Jack fell down
And broke his crown
And thus decreased his fitness.

(I told you it’s delightful!)
His point is that selection can’t act on Jack’s intention of fetching water, but only on the contingent fact that Jack broke his crown. He is right, but miles from our mark. What is selected for is Jack’s ability to be thirsty: he was born with internal sensors that detect lack of water; without them he would have been dead long before reaching the hill. Mechanisms to maintain homeostasis in an ever-changing world (within limits) are not optional; they exist because of contingency: their existence is necessary because the world out there changes all the time. Thus: natural selection very reliably selects for one trait: intentionality. What the intentionality is about, and how it is instantiated in particular creatures, is certainly determined by contingent factors, but it remains the case that intentionality about something is a general requirement for living things (unless they live in a 100% stable environment, which is made impossible by their very presence).

However, Fodor’s argument works a treat when it’s used to reject some typical Evolutionary Psychology claims such as "Evolution selected for the raping-instinct in human males". Such claims might be pointing at something which isn’t entirely wrong, but they are nevertheless indefensible, because the things evolution directly selects on are far removed from complex behaviours. Once intentionality of some sort is there, natural selection keeps selecting, and Fodor is right in explaining why there are no general rules on what it selects for (at that level): when we are considering the details, contingency gets the upper hand.

What Fodor somehow manages to ignore is the big distance between raw (philosophical) intentionality (the kind I’m discussing here – AKA aboutness) and fully formed intentions (such as desires and the like). We all know the two are connected, but they are not the same: it’s very telling that Fodor’s central argument revolves around the second (Jack’s plan to go and fetch some water), but only mentions the first in very abstract terms. Once again: selection does select for the ability to detect the need for a given resource (when this need isn’t constant) and for the ability to detect the presence/availability of needed resources (again, if their levels aren’t constant). This kind of "selection for" is (unsurprisingly) very abstract, but it does pinpoint a fundamental quality of selection, which is what explains the existence of sensory structures, and thus of intrinsic intentionality. What Fodor says hits the mark on more detailed accounts, but doesn’t even come close to the kind of intentionality I’ve been trying to pin down.

The challenges that I did not receive.

In all of the above I think one serious criticism is missing: we can accept that a given system collects intentional signals, but how does the system "know" what these signals are about? So far, I’ve just assumed that some systems do. If we go back to our bacterium, we can safely assume that such a system knows exactly nothing: it just reacts to the presence of glucose in a very organised way. It follows that some systems use their intrinsic intentionality in different ways: I can modulate my reactions to most stimuli, while the bacterium does not. What’s different? Well, to get a glimpse, we can step up to a more complex organism, and pick one with a proper nervous system, but still simple enough. Aplysia: we know a lot about these slugs, and we know they can be conditioned. They can learn associations between neutral and noxious stimuli, so that after proper training they will react protectively to the neutral stimulus alone.

Computationally there is nothing mysterious (although biologically we still don’t really understand the relevant details): input channel A gets activated and carries inconsequential information; after this, channel B starts signalling something bad and an avoidance output is generated. Given enough repetitions the association is learned, and input from channel A short-cuts to produce the output associated with B. You can easily build a simulation that works on the hypothesis "certain inputs correlate" and reproduces the same functionality (a minimal sketch follows below). Right: but does our little slug (and our stylised simulation) know anything? In my view, yes and no: trained individuals learn a correlation, so they do know something, but I wouldn’t count it as fully formed knowledge, because it is still too tightly bound; it still boils down to automatic and immediate reactions. However, this example already shows how you can move from bare intentionality to learning something that almost counts as knowledge. It would be interesting to keep climbing the complexity scale and see if we can learn how proper knowledge emerges, but I’ll stop here, because the final criticism that I haven’t so far addressed can now be tackled: it’s Searle’s Chinese room.
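For what it’s worth, here is the kind of toy simulation I had in mind – purely my own illustrative sketch, with an arbitrary pairing threshold, not a model of real Aplysia neurobiology:

```python
# A stylised "slug" that learns the A-B association by counting co-occurrences.
# The threshold and the channel names are illustrative assumptions.

class StylisedSlug:
    def __init__(self, threshold=3):
        self.pairings = 0           # how many times A and B have arrived together
        self.threshold = threshold  # pairings needed before A alone triggers avoidance

    def step(self, a_active, b_active):
        if a_active and b_active:
            self.pairings += 1
        if b_active:
            return "avoid"          # innate, hard-wired response to the noxious channel
        if a_active and self.pairings >= self.threshold:
            return "avoid"          # learned, conditioned response to the neutral channel
        return "ignore"

slug = StylisedSlug()
print(slug.step(True, False))       # 'ignore' -- channel A alone means nothing yet
for _ in range(3):
    slug.step(True, True)           # repeated A+B pairings
print(slug.step(True, False))       # 'avoid'  -- A now short-cuts to B's output
```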

To me, the picture I’m trying to build says something about what’s going on with Searle in the room, and I find this something marginally more convincing than all the rebuttals of the Chinese room argument that I know of. The original thought experiment relies on one premise: that it is possible to describe how to process the Chinese input in algorithmic terms, so that a complete set of instructions can be provided. Fine: if this is so, we can build a glorified bacterium, or a computer (a series of mechanisms), to do Searle’s job. The point I can add is that even the abilities of Aplysiae exceed the requirements: Searle in the room doesn’t even need to learn simple associations. Thus, all the Chinese room shows us is that you don’t need a mind to follow a set of static rules. Not news, is it? Our little slugs can do more: they use rules to learn new stuff, associations that are not implicit in the algorithm that drives their behaviour; what is implicit is that correlations exist, and thus that learning them can provide benefits.
Note that we can easily design an algorithm that does this, and note that what it does can count as a form of abstraction: instead of a rigid rule, we have a meta-rule. As a result, we have constructed a picture that shows how sensory structures plus computations can account for both intentionality and basic forms of learning; in this picture, the Chinese room task is already surpassed: it’s true that neither Searle nor the whole room truly knows Chinese, because the task doesn’t require knowing it (see below). What is missing is feedback and memories of the feedback: Searle chucks out answers, which count as output/behaviour, but they don’t produce consequences, and the rules of the game don’t require keeping track of either the consequences or the series of questions.

Follow me if you will: what happens if we add a different rule, and say that the person who feeds in the questions is allowed to feed questions that build on what was answered before? To fulfil this task Searle would need an additional set of rules: he would need instructions to keep a log of what was asked and answered. He would need to recall associations, just like the slugs. Once again, he would be following an algorithm, but with an added layer of abstraction. Would the room "know Chinese" then? No, not yet: feedback is still missing. Imagine that the person feeding the questions is there with the aim of conducting a literary examination (the questions are about a particular story) and that whenever the answers don’t conform to a particular critical framework, Searle gets a punishment; when the answers are good, he gets a reward. Now: can Searle use the set of rules from the original scenario and learn how to "pass the exam"? Perhaps, but can he learn how to avoid the punishments without starting to understand the meanings of the scribbles he produces as output? (You guessed right: I’d answer "No" to the second question.)

The thing to note is that the new extended scenario is starting to resemble real life a little more: when human babies are born, we can say that they will need to learn the rules to function in the world, but also that such rules are not fixed/known a priori. Remember what I was saying about Fodor and the fact that adaptability is selected for? To get really good at the extended Chinese room game you need intentionality, meaning, memory and (self-)consciousness: so the question becomes, not knowing anything about literary theory, would it be possible to provide Searle with an algorithm that will allow him to learn how to avoid punishments? We don’t know, so it’s difficult to say whether, after understanding how such an algorithm might work, we would find it more or less intuitive that the whole room (or Searle) will learn Chinese in the process. I guess Peter would maintain that such an algorithm is impossible; in his view, it has to be somewhat anomic.
My take is that to write such an algorithm we would "just" need to continue along the path we have seen so far: we need to move up one more level of abstraction and include rules that imply (assume) the existence of a discrete system (self), defined by the boundaries between input and output. We also need to include the log of the previous question-answer-feedback loop, plus, of course, the concepts of desirable and undesirable feedback (is anybody thinking "Metzinger" right now?); a minimal sketch of this kind of loop follows below. With all this in place (assuming it’s possible), I personally would find it much easier to accept that, once it got good at avoiding punishments, the room started understanding Chinese (and some literary theory).
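Here is a deliberately crude sketch of that extended scenario, entirely my own and built on illustrative assumptions (a static rule book, a log of question-answer-feedback triples, and a single meta-rule that prefers previously rewarded answers):

```python
# The "room" follows the original static rule book, but the added meta-rule
# consults a log of past feedback and re-uses answers that earned a reward.
# Rule book contents, feedback labels and the choice of meta-rule are all
# illustrative assumptions, not part of Searle's original scenario.

import random

class ExtendedRoom:
    def __init__(self, rulebook):
        self.rulebook = rulebook   # question -> list of candidate answers (the static rules)
        self.log = []              # (question, answer, feedback) history

    def answer(self, question):
        rewarded = [a for (q, a, f) in self.log if q == question and f == "reward"]
        if rewarded:
            return rewarded[-1]    # meta-rule: repeat what worked before
        return random.choice(self.rulebook.get(question, ["<blank scribble>"]))

    def feedback(self, question, answer, outcome):
        self.log.append((question, answer, outcome))

room = ExtendedRoom({"什么是主题?": ["回答甲", "回答乙"]})
print(room.answer("什么是主题?"))              # before feedback: either candidate, per the static rules
room.feedback("什么是主题?", "回答甲", "punish")
room.feedback("什么是主题?", "回答乙", "reward")
print(room.answer("什么是主题?"))              # '回答乙' -- the meta-rule repeats what was rewarded
```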

I am happy to admit that this whole answer is complicated and weak, but it does shed some light even if you are not prepared to follow me the whole way: at the start I argue that the original task is not equivalent to “understanding Chinese”, and I hope that what follows clarifies that understanding Chinese requires something more. This is why the intuition the original Chinese room argument produces is so compelling and misleading. Once you imagine something more life-like, the picture starts blurring in interesting ways.

That’s it. I don’t have more to say at this moment.

Short list of the points I’ve made:

  • Computations can be considered 100% abstract, and thus as applying to everything and nothing. As a result, they can’t explain anything. True, but this means that our hypothesis (computations are 100% abstract) needs revising.
  • What we can say is that real mechanisms are needed, because they ground intentionality.
  • Thus, once we have the key to guide our interpretation, describing mechanisms as computations can help to figure out what generates a mind.
  • To do so, we can start building a path that algorithmically/mechanistically produces more and more flexibility by relying on increasingly abstract assumptions (the possibility to discriminate first, the possibility to learn from correlations next, and then up to learning even more based on the discriminations between self/not-self and desirable/not desirable).
  • This helps address the Chinese room argument, as it shows why the original scenario isn’t depicting what it claims to depict (understanding Chinese). At the same time, this route allows us to propose some extensions that start making the idea of conscious mechanisms a bit less counter-intuitive.
  • In the process, we are also starting to figure out what knowledge is, which is always nice.

I hope you’ve enjoyed the journey as much as I did! Please do feel free to rattle my cage even more; I will think some more and try to answer as soon as I can.

Bibliography

Bishop, J. M. (2009). A cognitive computation fallacy? Cognition, computations and panpsychism. Cognitive Computation, 1(3), 221-233.

Chalmers, D. J. (1996). Does a rock implement every finite-state automaton? Synthese, 108(3), 309-333.

Chen, S., Cai, D., Pearce, K., Sun, P. Y., Roberts, A. C., & Glanzman, D. L. (2014). Reinstatement of long-term memory following erasure of its behavioral and synaptic expression in Aplysia. eLife, 3, e03896.

Fodor, J. (2008). Against Darwinism. Mind & Language, 23(1), 1-24.

Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417-424.

Sergio’s Computational Functionalism

Sergio has been ruminating since the lively discussion earlier and here, by way of a bonus post, are his considered conclusions…

Not so long ago I enthusiastically participated in the first phases of the discussion below Peter’s post on "Pointing". The discussion rapidly descended into the fearsome depths of the significance of computation. In the process, more than one commentator directly and effectively challenged my computationalist stance. This post is my attempt to clarify my position, written with a sense of gratitude: thanks to all for challenging my assumptions so effectively, and to Peter for sparking the discussion and hosting my reply.

This lengthy post will proceed as follows: first, I’ll try to summarise the challenge that is being forcefully proposed. At the same time, I’ll explain why I think it has to be answered. The second stage will be my attempt to reformulate the problem, taking as a template a very practical case that might be uncontroversial. With the necessary scaffolding in place, I hope that building my answer will become almost a formality. However, the subject is hard, so wish me luck because I’ll need plenty.

The challenge: in the discussion, Jochen and Charles Wolverton showed that "computations" are arbitrary interpretations of physical phenomena. Because Turing machines are pure abstractions, it is always possible to arbitrarily define a mapping between the evolving states of any physical object and abstract computations. Therefore asking "what does this system compute?" does not admit a single answer: the answer can be anything and nothing. In terms of one of our main explananda, "how do brains generate meanings?", the claim is that answering "by performing some computation" is therefore always going to be an incomplete answer. The reason is that computations are abstract: physical processes acquire computational meaning only when we (intentional beings) arbitrarily interpret these processes in terms of computation. From this point of view, it becomes impossible to say that computations within the brain generate the meanings that our minds deal with, because this view requires meanings to be a matter of interpretation. Once one accepts this point of view, meanings always pre-exist as an interpretation map held by an observer. Therefore, "just" computations can only trade pre-existing (and externally defined!) meanings, and it would seem that generating new meanings from scratch entails an infinite regress.

To me, this is nothing but another transformation of the hard problem, the philosophical kernel that one needs to penetrate in order to understand how to interpret the mechanisms that we can study scientifically. It is also one of the most beautifully recursive problems that I can envisage: the challenge is to generate an interpretative map that can be used to show how interpretative maps can be generated from scratch, but this seems impossible, because apparently you can only generate a new map if you can ground it on a pre-existing map. Thus, the question becomes: how do you generate the first map, the first seed of meaning, a fixed reference point, which gets the recursive process started?

In the process of spelling out his criticism, Jochen gave the famous example of a stone. Because the internal state of the stone changes all the time, for any given computation we can create an ad-hoc map that specifies the correspondence between a series of computational steps and the sequence of internal states of our stone. Thus, we can show that the stone computes whatever we want, and therefore, if we had a computational reduction of a mind/brain, we could say that the same mind exists within every stone. Consequently, computationalism can either require some very odd form of panpsychism or be utterly useless: it can’t help to discriminate between what can generate a mind and what can’t. I am not going to embrace panpsychism, so I am left with the only option of biting the bullet and showing how this kind of criticism can be addressed.
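To see how cheap the move is, here is a small sketch of my own (not Jochen’s construction, just an illustration of its shape): given any run of stone states and any run of computational states, a post-hoc dictionary pairs them off, and under that interpretation the stone "implements" the computation.

```python
# Illustrative only: the stone states and the computation steps are made-up
# placeholders; the point is that the mapping costs nothing to construct.

stone_states = ["s_t0", "s_t1", "s_t2", "s_t3"]       # whatever the stone happens to do over time
computation  = ["init", "read", "add", "halt"]         # the steps of some chosen program

interpretation = dict(zip(stone_states, computation))  # the arbitrary, ad-hoc map

# Under this map, the stone's trajectory "is" the chosen computation:
print([interpretation[s] for s in stone_states])       # ['init', 'read', 'add', 'halt']
```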

Without digressing too much, I hope that the above leaves no doubt about where I stand: first, I think this critique of computational explanations of the (expected) mind/brain equivalence is serious; it needs an answer. Furthermore, I also think that answering it convincingly would count as significant progress, even a breakthrough, if we take ‘convincingly’ to stand for ‘capable of generating consensus’. Dissolving this apparently unsolvable conundrum is equivalent to showing why a mechanism can generate a mind; I don’t know if there is a bigger prize in this game.

I’ll start from my day job: I write software for a living. What I do is write instructions that will make a computer reliably execute a given sequence of computations and produce the desired results. It follows that I can, somehow, know for sure what computations are going to be performed: if I couldn’t, writing my little programs would be in vain. Thus, there must be something different between our ordinary computers and any given stone. The obvious difference is that computers are engineered; they have a very organised structure and behaviour, specifically because this makes programming them feasible. However, in theory, it would be possible to produce massively complicated input/output systems to substitute the relevant parts of a computer (CPU, RAM, long-term memory) with a stone; we don’t do this because it is practically far too complicated, not because it is theoretically impossible. Thus, the difference isn’t in the regular structure and easily predictable behaviour of the Von Neumann/Harvard and derived architectures. I think the most notable differences are two:

  1. When we use a computer, we have already agreed upon the correct way to interpret its output. More specifically, all the programs that are written assume such a mapping, and produce outputs that conform to it. If a given program is to be used by humans (this isn’t always the case!) the programmer will make sure that the results are intelligible to us. Similarly, the mapping between the computer states and their computational meaning is also fixed (so fixed and agreed in advance that, in practice, I don’t even need to know how it works).
  2. In turn, because the mapping isn’t arbitrary, the input/output transformations also follow predefined and discrete sets of rules. Thus, you can plug in different monitors and keyboards, and expect them to work in similar ways.

For both differences, it’s a matter of having a fixed map (we can for simplicity collapse the maps from 1 & 2 into a single one). Once our map is defined and agreed upon, we can solve the stone problem and say “computer X is running software A, computer Y is running software B” and expect everyone to agree. The arbitrariness of the map becomes irrelevant because in this case the map itself has been designed/engineered and agreed from the start.

This isn’t trivial, because it becomes enlightening when we propose the hypothesis that brains can be modelled as computers. Note my wording: I am not saying "brains are computers". I talk about "modelled" because the aim is to understand how brains work; it’s an epistemological quest. We are not asking "what brains/minds are"; in fact, I’ll do all I can to steer away from ontology altogether.

Right: if we assume that brains can be modelled as computers, it follows that it should be possible to compose a single map that would allow us to interpret brain mechanisms in terms of computations. Paired with a perfect brain scanner (a contraption that can report all of the brain states required to do the mapping), such a map would allow us to say without doubt "this brain is computing this and that". As a result, with relatively little additional effort, it should become possible to read the corresponding mind. From this point of view, the fact that there is an infinite number of possible maps, but only one is "the right" one, means that the problem is not about arbitrariness (as it seemed for the stone). The problem is entirely different: it is about finding the correct map, the one that is able to reliably discern what the scanned mind is thinking about. This is why in the original discussion I said the arbitrariness of the mapping is the best argument for a computational theory of the mind: it ensures the search space for the map is big enough to give us hope that such a map does exist. Note also that all of the above is nothing new; it just states explicitly the assumptions that underlie all of neuroscience. If there are some exceptions, they would be considered very unorthodox.

However, this is where I think the subject becomes interesting. All of the above has left out the hard side of the quest: I haven’t even tried to address the problem of how computations can generate a "meaningful map" on their own. To tackle this mini-hard problem, we need to go back to where we started and recollect how I’ve described the core of the "anti-computationalism" stance. Talking about brains/mechanisms, I’ve asked: how [does the brain] generate the first map, the first seed of meaning, a fixed reference point, which gets the recursive process started? Along the way, I’ve claimed that it is reasonable to expect that a different but important map can be found, the one that describes (among many other things) how to translate brain events into mind events (thoughts, memories, desires, etc.). Therefore, one has to admit that this second map (our computational interpretation) would have to contain, at least implicitly, the answer about the fixed reference point. How is this possible? Note that I’ve strategically posed the question in my own terms, and mentioned the need for a fixed reference point. You may want to recall the "I-token" construct of Retinoid Theory, but in general, one can easily point out that the reference point is provided by the physical system itself. We have, ex hypothesi, a system that collects "measurements" from the environment (sensory stimuli), processes them, and produces output (behaviour); this output is usually appropriate to preserve the system’s integrity (and to reproduce, but that’s another story). Fine: such a system IS a fixed reference point. The integrity that justifies the whole existence of the system IS precisely what is fixed – all the stimuli it collects are relative to the system itself. As long as the system is intact enough to function, it can count as a fixed reference point; with a fixed reference, meanings become possible, because reliable relations can be identified, and if they can, then they can be grouped together to produce more comprehensive "interpretative" maps. This is the main reason why I like Peter’s Haecceity: it’s the "thisness" of a particular computational system that actually seeds the answer to the hard side of the question.

Note also that all of the above captures the differences I’ve spelled out between a standard computer and a common stone. It’s the specific physicality of the computer that ultimately distinguishes it from a stone: in this case, we (humans) have defined a map (designing it from scratch with manageability in mind) and then used the map to produce a physical structure that will behave accordingly. In the case of brains/minds, we need to proceed in the opposite direction: given a structure and its dynamic properties, we want to define a map that is indeed intelligible.

Conclusions:

  • The computational metaphor should be able to capture the mechanisms of the brain and thus describe the (supposed) equivalence between brain events and mind events.
  • Such a description would count as a weak explanation, as it spells out a list of "what" but doesn’t even try to produce a conclusive "why".
  • However, just expecting such a mapping to be possible already suggests where to find the "why" (or provides it, if you feel charitable). If such a mapping proves to be possible, it follows that to be conscious, an entity needs to be physical. Its physicality is the source of its ability to generate its own, subjective meanings.
  • This in turn reaffirms why our initial problem, posed by the unbounded arbitrariness of computational explanations, does not apply. The computational metaphor is a way to describe (catalogue) a bunch of physical processes; it spells out the "what" but is mute on the "why". The theoretical nature of computation is the reason why it is useful, but it also points to the missing element: the physical side.
  • If such a map turns out to be impossible, the most likely explanation is that there is no equivalence between brain and mind events.

 

Finally, you may claim that all these conclusions are themselves weak. Even if the problematic step of introducing Haecceity/physicality as the requirement to bootstrap meaning is accepted, the explanation we gain is still partial. This is true, but it comes down to the mystery of reality (again, following Peter): because cognition can only generate and use interpretative maps (or translation rules), it "just" shuffles symbols around; it cannot, in any way or form, ultimately explain why the physical world exists (or what exactly the physical world is – this is why I steered away from ontology!). Because all knowledge is symbolic, some aspect of reality always has to remain unaccounted for and unexplained. Therefore, all of the above can still legitimately feel unsatisfactory: it does not explain existence. But hey, it does talk about subjectivity and meaning (and by extension, intentionality), so it does count as (hypothetical) progress to me.

Now please disagree and make me think some more!

The Intuitional Problem

Mark O’Brien gives a good statement of the computationalist case here; clear, brief, and commendably mild and dispassionate. My impression is that enthusiasm for computationalism – approximately, the belief that human thought is essentially computational in nature – is not what it was. It’s not that computationalists lost the argument, it’s more that the robots failed to come through. What AI research delivered has so far been, in this respect, much less than the optimists had hoped.

Anyway O’Brien’s case rests on two assumptions:

  • Naturalism is true.
  • The laws of physics are computable, that is, any physical process can be simulated to any desired degree of precision by a computer.

It’s immediately clear where he’s going. To represent it crudely, the intuition here is that naturalism means the world ultimately consists of physical processes, any physical process can run on a computer, ergo anything in the world can run on a computer, ergo it must be possible to run consciousness on a computer.

There’s an awful lot packed into those two assumptions. O’Brien tackles one issue with the idea of simulation: namely that simulating something isn’t doing it for real. A simulated rainstorm doesn’t make us wet. His answer is that simulation doesn’t produce physical realities, but it does seem to work for abstract things. I think this is basically right. If we simulate a flight to Paris, we don’t end up there; but the route calculated by the program is the actual route; it makes no sense to say it’s only a simulated route, because it’s actually identical with the one we should use if we really went to Paris. So the power of simulation is greater for informational entities than for physical ones, and it’s not unreasonable to suggest that consciousness seems more like a matter of information than of material stuff.

There’s a deeper point, though. To simulate is not to reproduce: a simulation reproduces only the relevant aspects of the thing simulated. The implication is that some features of the thing simulated are left out, the ones that don’t matter. That’s why we get different results for our Parisian exercise: the simulation necessarily leaves our actual physical locations untouched; those are irrelevant when it comes to describing the route, but essential when it comes to actually visiting Paris.

The problem is we don’t know which properties are relevant to consciousness, and to assume they are the kind of thing handled by computation simply begs the question. It can’t be assumed without argument that physical properties are irrelevant here: John Searle and Roger Penrose in different ways both assert that they are of the essence. Even if consciousness doesn’t rely quite so brutally as that on the physical nature of the brain, we need to start with some knowledge of how consciousness works. Otherwise, we can’t tell whether we’ve got the right properties in our simulation – even if they are in principle computational.

I don’t myself think Searle or Penrose are right: but I think it’s quite likely that the causal relationships in cognitive processes are the kind of essential thing a simulation would have to incorporate. This is a serious problem because there are reasons to think computer simulations never reproduce the causal relationships intact. In my brain event A causes event B and that’s all there is to it: in a computer, there’s always a script involved. At its worst what we get is a program that holds up flag A to represent event A and then flag B to represent event B: but the causality is mediated through the program. It seems to me this might well be a real issue.

O’Brien tackles another of Searle’s arguments: that you can’t get semantics from syntax, i.e. you can’t deal with meanings just by manipulating digits. O’Brien’s strategy here is to assume a robot that behaves pretty much the way I do: does it have beliefs? It says it does, and it behaves as if it did. Perhaps we’re not willing to concede that those are real beliefs: OK, let’s call them beliefs*. On examination it turns out that the differences between beliefs and beliefs* are nugatory: so on grounds of parsimony, if nothing else, we should assume they are the same.

The snag here is that there are no robots that behave the way I do.  We’ve had sixty years of failure since Turing: you can’t just have it as an assumption that our robot pals are self-evidently achievable (alas).  We know that human beings, when they do translation for example, extract meanings and then put the meanings into other words, whereas the most successful translation programs avoid meanings altogether and simply swap text strings for text strings according to a kind of mighty look-up table.
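
Purely as a toy illustration of that point (and not a sketch of any real translation system), this is roughly what the string-swapping strategy amounts to: the ‘translation’ never involves anything you could call a meaning, just a table pairing strings with strings.

```python
# Toy sketch of the look-up-table idea, purely illustrative: strings are
# swapped for strings, and 'meaning' never enters into the process at all.
phrase_table = {
    "good morning": "bonjour",
    "thank you very much": "merci beaucoup",
    "where is the station?": "où est la gare ?",
}

def translate(text: str) -> str:
    # Look the whole string up; anything not in the table simply fails.
    return phrase_table.get(text.strip().lower(), "<no entry in table>")

print(translate("Good morning"))           # bonjour
print(translate("Good morning, officer"))  # <no entry in table>
```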

That kind of strategy won’t work when dealing with the formless complexity of the real world: you run into analogues of the Frame Problem, or you just don’t really get started. It doesn’t even work that well for language: we know now that human understanding of language relies on pragmatic Gricean implicatures, and no-one can formalise those.

Finally O’Brien turns to qualia, and here I agree with him on the broad picture. He describes some of the severe difficulties around qualia and says, rightly I think, that in the end it comes down to competing intuitions.  All the arguments for qualia are essentially thought experiments: if we want, we can just say ‘no’ to all of them (as Dennett and the Churchlands, for example, do). O’Brien makes a kind of zombie argument: my zombie twin, who lacks qualia but resembles me in all other respects, would claim to have qualia and would talk about them just the way we do.  So the explanation for talk about qualia is not qualia themselves: given that, there’s no reason to think we ourselves have them.

Up to a point: but we get the conclusion that my zombie twin talks about qualia purely ex hypothesi: it’s just stipulated. It’s not an explanation, and an explanation is what we really need if we are to be in a position to dismiss the strong introspective sense most people have that qualia exist. If we could actually explain what makes the Twin talk about qualia, we’d be in a much better position.

So I mostly disagree, but I salute O’Brien’s exposition, which is really helpful.

Dancing Pixies

I see that the recently-launched journal Cognitive Computation has sportingly included, among its first papers, one arguing that we shouldn’t be seeing cognition as computational at all. The paper, by John Mark Bishop of Goldsmiths, reviews some of the anti-computationalist arguments and suggests we should think of cognitive processes in terms of communication and interaction instead.

The first two objections to computation are in essence those of Penrose and Searle, and both have been pretty thoroughly chewed over in previous discussions in many places; the first suggests that human cognition does not suffer the Gödelian limitations under which formal systems must labour, and so the brain cannot be operating under a formal system like a computer program; the second is the famous Chinese Room thought experiment. Neither objection is universally accepted, to put it mildly,  and I’m not sure that Bishop is saying he accepts them unreservedly himself – he seems to feel that having these popular counter-arguments in play is enough of a hit against computationalism in itself to make us want to look elsewhere.

The third case against computationalism is the pixies: I believe this is an argument of Bishop’s own, dating back a few years, though he scrupulously credits some of the essential ideas to Putnam and others. A remarkable feature of the argument is that it uses panpsychism in a reductio ad absurdum (a reductio ad absurdum is where you assume the truth of the thing you’re arguing against, and then show that it leads to an absurd, preferably self-contradictory, conclusion).

Very briefly, it goes something like this: if computationalism is true, then anything with the right computational properties has true consciousness (Bishop specifies Ned Block’s p-consciousness, phenomenal consciousness, real something-that-it-is-like experience). But a computation is just a given series of states, and those states can be picked out any way we choose. It follows that, on some interpretation, series of states of the required kind are all over the place all the time. If that were true, consciousness would be ubiquitous, and panpsychism would be true (a state of affairs which Bishop represents as being akin to a world full of pixies dancing everywhere). But since, says Bishop, we know that panpsychism is just ridiculous, that must be wrong, and it follows that our initial assumption was incorrect; computationalism is false after all.
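
To make the technical heart of that move concrete, here is a toy sketch of my own (not anything from Bishop’s paper) of the Putnam-style manoeuvre: take an arbitrary physical process, here nothing more than a counter ticking over, and simply choose an interpretation under which its successive states ‘are’ the state sequence of whatever computation you like.

```python
# Toy sketch (my own illustration, not Bishop's): any humdrum physical process
# can be read as 'implementing' a given run of a computation, simply by
# choosing the mapping from physical states to computational states.

# The run of abstract states we care about (stand-ins for a 'conscious' computation):
target_run = ["S_start", "S_perceive", "S_decide", "S_act"]

# The 'physical system': nothing but successive ticks of a counter.
physical_states = [0, 1, 2, 3]

# The interpretation is an arbitrary pairing of physical and abstract states.
interpretation = dict(zip(physical_states, target_run))

# Under that interpretation, the counter 'realises' the computation.
realised_run = [interpretation[tick] for tick in physical_states]
assert realised_run == target_run
```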

There are of course plenty of people who would not accept this at all, and would instead see the whole argument as just another reason to think that panpsychism might be true after all. Bishop does not spend much time on explaining why he thinks panpsychism is unacceptable, beyond suggesting that it is incompatible with the ‘widespread’ desire to explain everything in physical terms, but he does take on some other objections more explicitly.  These mostly express different kinds of uneasiness about the idea that an arbitrary selection of things could properly constitute a computation with the right properties to generate consciousness.

One of the more difficult is an objection from Hofstadter that the required sequences of states can only be established after the fact: perhaps we could copy down the states of a conscious experience and then reproduce them, but not determine them in advance. Bishop responds with an argument based on running the same consciousness program on a robot twice; the first time we didn’t know how it would turn out; the second time we did (because it’s an identical robot and identical program), but it’s absurd to think that one run could be conscious and the other not.

Perhaps the trickiest objection mentioned is from Chalmers; it points out that cognitive processes are not pre-ordained linear sequences of states, but at every stage have the possibility of branching off and developing differently. We could, of course, remove every conditional switch in a given sequence of conscious cognition and replace it by a non-conditional one leading on to the state which was in fact the next one chosen. For that given sequence, the outputs are the same – but we’re not entitled to presume that conscious experience would arise in the same way, because the functional organisation is clearly different, and that is the thing, on computationalist reasoning, which needs to be the same. Bishop therefore imagines a more refined version: two robots run similar programs; one program has been put through a code optimiser which keeps all the conditional branches but removes bits of code which follow, as it were, the unused branches of the conditionals. Now surely everything relevant is the same: are we going to say that consciousness arises in one robot by virtue of there being bits of extra code which lie idle? That seems odd.
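
A minimal sketch of the kind of pruning Bishop seems to have in mind (my own illustration, not code from the paper): the conditional test is kept, but the code on the branch that will not be taken on this particular run has been stripped out, so the two programs behave identically on that run.

```python
# Toy illustration (mine, not Bishop's): two robot control functions that are
# indistinguishable on the run actually taken; the second has had the code on
# the unused branch removed, though the conditional test itself remains.

def robot_full(signal: int) -> str:
    if signal > 0:
        return "approach"   # the branch actually taken on this run
    else:
        return "retreat"    # present, but never executed when signal > 0

def robot_pruned(signal: int) -> str:
    if signal > 0:          # the conditional is still evaluated...
        return "approach"
    # ...but the code for the unused 'retreat' branch has been stripped out

assert robot_full(5) == robot_pruned(5) == "approach"
```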

That argument might work, but we must remember that Bishop’s reductio requires the basics of consciousness to be lying around all over the place, instantiated by chance in all sorts of things. While we were dealing with mere sequences of states, that might look plausible, but if we have to have conditional branches connecting the states (even ones whose unused ends have been pruned) it no longer seems plausible to me.  So in patching up his case to respond to the objection, Bishop seems to me to have pulled out some of the foundations it was originally standing on. In fact, I think that consciousness requires the right kind of causal relations between mental states, so that arbitrary sets or lists of states won’t do.

The next part of the discussion is interesting. In many ways computationalism looks like a productive strategy, concedes Bishop – but there are reasons to think it has its limitations. One of the arguments he quotes here is the Searlian point that there is a difference between a simulation and reality. If we simulate a rainstorm on a computer, no-one expects to get wet; so if we simulate the brain, why should we expect consciousness? Now the distinction between a simulation and the real thing is a relevant and useful one, but the comparison of rain and consciousness begs the question too much to serve as an argument. By choosing rain as the item to be simulated, we pick something whose physical composition is (in some sense) essential; if it isn’t made of water it isn’t rain. To assume that the physical substrate is equally essential for consciousness is just to assume what computationalism explicitly denies. Take a different example: a letter. When I write a letter on my PC, I don’t regard it as a mere simulation, even though no paper need be involved until it’s printed; in fact, I have more than once written letters which were eventually sent as email attachments and never achieved physical form. This is surely because with a letter, the information is more essential than the physical instantiation. Doesn’t it seem highly plausible that the same might be true, to an even greater extent, of consciousness? If it is true, then the distinction between simulation and reality ceases to apply.

To make sceptical simulation arguments work, we need a separate reason to think that some computation was more like a simulation than the reality – and the strange thing is, I think that’s more or less what the objections from Hofstadter and Chalmers were giving us; they both sort of draw on the intuition that a sequence of states could only simulate consciousness  in the sort of way a series of film frames simulates motion.

The ultimate point, for Bishop, is to suggest we should move on from the ‘metaphor’ of computation to another based on communication. It’s true that the idea of computation as the basis of consciousness has run into problems over recent years, and the optimism of its adherents has been qualified by experience; on the other hand it still has some remarkable strengths. For one thing, we understand computation pretty clearly and thoroughly;  if we could reduce consciousness to computation, the job really would be done; whereas if we reduce consciousness to some notion of communication which still (as Bishop says) requires development and clarification, we may still have most of the explanatory job to do.

The other thing is that computation of some kind, if not the only game in town, is still far closer to offering a complete answer than any other hypothesis. I suspect many people who started out in opposing camps on this issue would agree now that the story of consciousness is more likely to be ‘computation plus plus’ (whatever that implies) than something quite unrelated.