The Intuitional Problem

Mark O’Brien gives a good statement of the computationalist case here: clear, brief, and commendably mild and dispassionate. My impression is that enthusiasm for computationalism – approximately, the belief that human thought is essentially computational in nature – is not what it was. It’s not that computationalists lost the argument; it’s more that the robots failed to come through. What AI research has delivered has so far been, in this respect, much less than the optimists had hoped.

Anyway O’Brien’s case rests on two assumptions:

  • Naturalism is true.
  • The laws of physics are computable, that is, any physical process can be simulated to any desired degree of precision by a computer.

It’s immediately clear where he’s going. To represent it crudely, the intuition here is that naturalism means the world ultimately consists of physical processes; any physical process can run on a computer; ergo anything in the world can run on a computer; ergo it must be possible to run consciousness on a computer.

There’s an awful lot packed into those two assumptions. O’Brien tackles one issue with the idea of simulation: namely that simulating something isn’t doing it for real. A simulated rainstorm doesn’t make us wet. His answer is that simulation doesn’t produce physical realities, but it does seem to work for abstract things. I think this is basically right. If we simulate a flight to Paris, we don’t end up there; but the route calculated by the program is the actual route; it makes no sense to say it’s only a simulated route, because it’s actually identical with the one we should use if we really went to Paris. So the power of simulation is greater for informational entities than for physical ones, and it’s not unreasonable to suggest that consciousness seems more like a matter of information than of material stuff.

There’s a deeper point, though. To simulate is not to reproduce: a simulation is the reproduction of the relevant aspects of the thing simulated. It’s implied that some features of the thing simulated are left out, ones that don’t matter. That’s why we get the different results for our Parisian exercise: the simulation necessarily leaves our actual physical locations untouched; those are irrelevant when it comes to describing the route, but essential when it comes to actually visiting Paris.

The problem is we don’t know which properties are relevant to consciousness, and to assume they are the kind of thing handled by computation simply begs the question. It can’t be assumed without an argument that physical properties are irrelevant here: John Searle and Roger Penrose in different ways both assert that they are of the essence. Even if consciousness doesn’t rely quite so brutally as that on the physical nature of the brain, we need to start with a knowledge of how consciousness works. Otherwise, we can’t tell whether we’ve got the right properties in our simulation –  even if they are in principle computational.

I don’t myself think Searle or Penrose are right: but I think it’s quite likely that the causal relationships in cognitive processes are the kind of essential thing a simulation would have to incorporate. This is a serious problem because there are reasons to think computer simulations never reproduce the causal relationships intact. In my brain event A causes event B and that’s all there is to it: in a computer, there’s always a script involved. At its worst what we get is a program that holds up flag A to represent event A and then flag B to represent event B: but the causality is mediated through the program. It seems to me this might well be a real issue.

O’Brien tackles another of Searle’s arguments: that you can’t get semantics from syntax: i.e., you can’t deal with meanings just by manipulating digits. O’Brien’s strategy here is to assume a robot that behaves pretty much the way I do: does it have beliefs? It says it does, and it behaves as if it did. Perhaps we’re not willing to concede that those are real beliefs: OK, let’s call them beliefs*. On examination it turns out that the differences between beliefs and beliefs* are nugatory: so on grounds of parsimony if nothing else we should assume they are the same.

The snag here is that there are no robots that behave the way I do.  We’ve had sixty years of failure since Turing: you can’t just have it as an assumption that our robot pals are self-evidently achievable (alas).  We know that human beings, when they do translation for example, extract meanings and then put the meanings into other words, whereas the most successful translation programs avoid meanings altogether and simply swap text strings for text strings according to a kind of mighty look-up table.
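
To make the contrast concrete, here is a toy sketch of that string-swapping approach (the phrase table and the sentence are invented purely for illustration): nothing in it represents what any phrase means; it only maps text fragments to text fragments.

```python
# A toy illustration of translation by string substitution: a "mighty
# look-up table" maps source phrases to target phrases, longest match first.
# Nothing in the program represents what any phrase means.

phrase_table = {          # invented entries, purely for illustration
    "the cat": "le chat",
    "sits on": "est assis sur",
    "the mat": "le tapis",
}

def translate(sentence: str) -> str:
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        # Try the longest phrase starting at position i, then shorter ones.
        for length in range(len(words) - i, 0, -1):
            chunk = " ".join(words[i:i + length])
            if chunk in phrase_table:
                out.append(phrase_table[chunk])
                i += length
                break
        else:
            out.append(words[i])  # pass unknown words through unchanged
            i += 1
    return " ".join(out)

print(translate("The cat sits on the mat"))  # -> "le chat est assis sur le tapis"
```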

That kind of strategy won’t work when dealing with the formless complexity of the real world: you run into the analogues of the Frame Problem, or you just don’t really get started. It doesn’t even work that well for language: we know now that human understanding of language relies on pragmatic Gricean implicatures, and no-one can formalise those.

Finally O’Brien turns to qualia, and here I agree with him on the broad picture. He describes some of the severe difficulties around qualia and says, rightly I think, that in the end it comes down to competing intuitions.  All the arguments for qualia are essentially thought experiments: if we want, we can just say ‘no’ to all of them (as Dennett and the Churchlands, for example, do). O’Brien makes a kind of zombie argument: my zombie twin, who lacks qualia but resembles me in all other respects, would claim to have qualia and would talk about them just the way we do.  So the explanation for talk about qualia is not qualia themselves: given that, there’s no reason to think we ourselves have them.

Up to a point: but we get the conclusion that my zombie twin talks about qualia purely ex hypothesi: it’s just specified. It’s not an explanation, and I think that’s what we really need to be in a position to dismiss the strong introspective sense most people have that qualia exist. If we could actually explain what makes the Twin talk about qualia, we’d be in a much better position.

So I mostly disagree, but I salute O’Brien’s exposition, which is really helpful.

52 thoughts on “The Intuitional Problem”

  1. Peter, nice one !!

    If we could actually explain what makes the Twin talk about qualia, we’d be in a much better position

    You mean to explain why the Twin can’t talk about qualia, and it’s just an absurd academic game altogether, don’t you? A little dose of positivism won’t harm us.

  2. Hi Peter,

    Thanks for the kind words. I just thought I might clarify a few things.

    Firstly, the discussion about the simulation of abstract properties isn’t really begging the question, in my view. It doesn’t assume that consciousness is independent of physicality, it just concludes that it might be. The idea here is to anticipate rejections of the simulation argument which are of the form “a simulated waterfall is not wet”. It’s actually the simulation argument that is intended to explain why I think computers could be conscious.

    I would also like to clarify that in the simulation argument, no features are completely omitted, in that all are at least represented. The simulation makes no simplifying assumptions and should therefore behave exactly as the real world. The difference, of course, is that it is not physical. Of course I could be wrong that this makes no difference, but I try at least to show why I think it is more parsimonious to conclude that it probably doesn’t.

    I don’t think the argument that we have no such robots really works (actually, in the article it’s simulated people in a virtual environment). If you accept the two premises, then it must be possible in principle to build such a simulation. I’m sure that it will never be possible in practice, but ruling it out completely would seem to require either supernaturalism or something uncomputable about physics. Given that it should be possible in principle, the rest of the argument follows. Recall that I’m not trying to argue for the feasibility of computational consciousness, only that it’s not obviously absurd or incoherent as many seem to think.

    Similarly I’m not really trying to explain qualia. Rather I’m trying to refute the notion that AIs could never experience qualia in the same way we do.

    I disagree with your worry that causal relationships cannot be reproduced in simulations. For example, the physical representation of the raising of Flag A forms part of the physical chain of cause and effect that causes the physical representation of Flag B to switch to the raised state. The physical and the logical are simply two different levels of description of the same events. In the same way, I wrote the article both because I wanted to and because a certain bunch of atoms interacted with another bunch of atoms so as to cause a chain of physical events including fingers hitting a keyboard in a certain pattern.

    Thanks again for the thoughtful response and indeed for mentioning my article at all!

  3. Thanks, Mark. I’ll let others have a go rather than coming back on all those points. I did think it was a good piece – hope you’ll write more.

  4. Thanks Peter and Mark. It would seem that to rule out computationalism re qualia there would have to be an accepted theory of consciousness showing that there’s a physical requirement for having qualia, the simulation of which wouldn’t duplicate the role of that requirement in entailing the existence of qualia. I don’t think we have such a theory, so computationalism stands as a possibility.

    I’m sympathetic to representationalism as an approach to explaining consciousness, and computationalism is essentially a representational thesis: simulations represent functional features of that which they simulate. So if we simulate a patently representational system like the brain in enough detail, there will be enough fidelity in representational content to entail the existence of qualia for the system doing the simulation, *if* some sort of representational theory of consciousness is true.

    Of course if the system doing the simulation is just sitting there like a laptop, not interacting with the world, this violates our intuition that consciousness is deeply connected with behavior. But if we keep in mind that sleeping brains and bodies – completely immobilized – can support the full blown experience of having vivid dreams, the idea that a stationary system running a simulation can be conscious won’t seem so weird.

  5. Mark and Tom,

    Irrespective of the elaboration of any representational theory of consciousness, which OUTPUT VARIABLES of any simulation are going to tell us anything about consciousness beyond what we can learn from actual neurological correlates of conscious states?

    Could you enumerate at least the main output variables (no limits on detail grain size) in any feasible simulation that will account for conscious features and qualia?

    In Peter’s example the output variables are: (longitude, latitude, height, time) in degrees, feet and seconds. In yours?

    The first thing when designing a simulation is to decide the output.

  6. Hi Vicente,

    Personally, I don’t see a problem with consciousness existing in a closed system with no output variables at all — that’s what you get if you put a person in a box, after all. In the article, I write about the simulation of a whole room. In that thought experiment, a virtual person has a virtual body and a virtual environment. As far as I’m concerned, the virtual body and environment give it all the embodiment it would need for consciousness, whether or not there is any output other than internal representations of state. The minimal output variables would therefore be, well, nothing, in which case you effectively have a person in a box.

    In reality, if you want to be able to confidently judge that the system is conscious, you would need quite a rich representation of its behaviour.

    In the thought experiment, I imagine the output variables would be some sort of 3D rendering of the room, perhaps including audio. We could watch the virtual person go about her business and perhaps even see FMRI-style scans of her brain processes. It presumably wouldn’t be difficult to allow us to communicate with her either by some means.

    In the real world, I think the Turing Test is pretty good. I think passing the Turing Test with experienced judges for a sustained amount of time, demonstrating wit, insight and an ability to follow the thread of a conversation is likely only possible for a conscious system. Indeed, there are plenty of actual people who may fail such a test, so I conceive of it as a test where only passing it is significant and failure wouldn’t mean much.

    I can’t prove that passing such a test requires consciousness, but in particular if the way in which the system processes information is somewhat analogous to that of a human brain, e.g. being some form of neural network rather than an elaboration of Eliza-like tricks, then it is my strong suspicion that it would be conscious.

  7. in a computer, there’s always a script involved.

    That comes with the assumption that ‘my own brain doesn’t have scripts, because I’d be aware of them if it did’.

    But your brain doesn’t have information on DNA, for example. Or information on how your liver works. Or information on how to digest an apple.

    Just as much, your own brain could come with scripts but not have any information about that. How many visual illusions are there, like the old line-length one? Why does one line seem longer? A simple explanation would be that you have scripts doing preprocessing of the image, and the illusion is a side effect of that preprocessing. Why does a triangle seem to be there? Again, scripts would be a simple answer.

    You don’t have any information they are there, you don’t have any sense they are there – only when the truth of the illusion is discussed can you form an evaluation of something that is distorting incoming evaluation.

    Maybe you quarantine the scripts, say that they are just involved in visual processing and not part of the true consciousness. Even then, at least it’s worth considering the presence of unknown scripts.

  8. Hi Mark,

    if the way in which the system processes information is somewhat analogous to that of a human brain, e.g. being some form of neural network

    What do you mean by analogous?

    If by analogous you mean producing the same OUTPUTS, then I come back to my question.

    If by analogous you mean following similar mechanisms then simulations utterly defeat the purpose.

    And most importantly, I believe that the models we have to describe and explain how the brain processes information are… “immature”. To have such a model is the first thing we need, if we are to know whether it is possible to replicate “analogous” mechanisms on another “substrate”.

    You are referring to creating a virtual environment, which is not necessarily a simulation, although virtual environments can include simulations to support some of their elements.

    To me, without understanding how your virtual guy’s brain is going to work (at least conceptually) in order to be simulated (or emulated), it is difficult to say anything…

  9. Hi Vicente,

    > What do you mean by analogous?

    I’m assuming that when information is processed, as I believe is happening in human brains, we can to some extent abstract that out from the physical goings-on so as to produce a logical rather than a physical description of the chain of cause and effect. This essentially constitutes an algorithm. Information processing systems work analogously if, at some level of description, they can be described as implementing similar or identical algorithms.

    For example, this video demonstrates a domino computer:
    https://www.youtube.com/watch?v=OpLU__bhu2w

    This functions analogously to an electronic computer even though the physical events happening are very different because it is possible to offer identical logical descriptions of cause and effect for both. At one level, the final domino falls over because domino 1 hit domino 2, 2 hit 3 and so on, but at another level we can say it outputs the number 4 because it was computing 1+3. At an intermediate level, we can describe each logical AND or OR operation that went into that calculation, and these might be the same for any number of different physical realisations of a four-bit binary adder.
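
    Just to make that intermediate level concrete, here is a toy sketch of a four-bit ripple-carry adder described purely in terms of AND, OR and XOR; the description never mentions a physical medium, which is why dominoes, relays or transistors can all implement the same algorithm.

```python
# A four-bit ripple-carry adder described purely at the logical level.
# Any physical system realising these AND/OR/XOR relations (dominoes,
# relays, transistors) implements the same algorithm.

def full_adder(a, b, carry_in):
    s = a ^ b ^ carry_in                          # sum bit (XOR)
    carry_out = (a & b) | (carry_in & (a ^ b))    # carry bit (AND/OR)
    return s, carry_out

def add4(x, y):
    """Add two numbers 0-15 given as 4-bit tuples (least significant bit first)."""
    carry, result = 0, []
    for a, b in zip(x, y):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return tuple(result), carry

# 1 + 3 = 4: (1,0,0,0) is 1, (1,1,0,0) is 3, (0,0,1,0) is 4 (LSB first).
print(add4((1, 0, 0, 0), (1, 1, 0, 0)))  # -> ((0, 0, 1, 0), 0)
```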

    So if we produce a system which behaves like a human and works analogously to a human, I am confident it is conscious.

    >If by analogous you mean following similar mechanisms then simulations utterly defeat the purpose.

    In my view, all simulations are analogies of a kind. In the thought experiment of the article, there is an analogy between physical electrons and virtual electrons. In fact, I think it unlikely that we would need to go to that extreme low level. It may be possible to produce human intelligence by simulating neurons and ambient hormone and neurotransmitter levels directly, in which case there is still an analogy but drawn at a higher level.

    I don’t actually think we need to understand brains to reproduce human intelligence, although it would certainly help. I can think of two ways we might do it without understanding. Firstly, we might be able to simply scan the brain and reproduce what it is doing in a simulation of physical processes. The simulation should at least behave intelligently even though we have no understanding of how it does it. Secondly, we might be able to harness the power of evolution and self-adapting systems like neural networks that iteratively learn and develop intelligence autonomously, while leaving us completely in the dark as to how they work when they achieve it. This is essentially the situation we are in today with artificial neural networks, which can often do fantastic things but are too complex and chaotic for us to understand.
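
    As a toy illustration of that second route (a sketch only, nothing like the scale of a real system), here is a tiny network whose weights are found by blind variation and selection; it usually ends up computing XOR, but inspecting the weights it arrives at tells us nothing about how it does so.

```python
# A minimal sketch: a tiny neural network that learns XOR by blind variation
# and selection (random hill-climbing). Nobody specifies how the task is to
# be solved; the weights that emerge are opaque to us.
import random

DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # two ReLU hidden units feeding one linear output unit
    h = [max(0.0, w[3 * i] * x[0] + w[3 * i + 1] * x[1] + w[3 * i + 2]) for i in range(2)]
    return w[6] * h[0] + w[7] * h[1] + w[8]

def error(w):
    return sum((forward(w, x) - y) ** 2 for x, y in DATA)

def evolve(steps=5000):
    w = [random.uniform(-1, 1) for _ in range(9)]
    for _ in range(steps):
        candidate = [wi + random.gauss(0, 0.1) for wi in w]
        if error(candidate) <= error(w):   # keep mutations that don't hurt
            w = candidate
    return w

best = min((evolve() for _ in range(5)), key=error)   # a few restarts
print(round(error(best), 3))            # usually close to zero: it has "learned" XOR
print([round(wi, 2) for wi in best])    # but the weights don't explain how
```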

  10. Hi Mark,

    I believe there is a significant flaw in your approach: you often speak of intelligence and consciousness in the same terms, which leads to unclear expectations.

    A machine can be said to be intelligent with respect to its capacity to solve problems (stated by humans, btw), within the frame of a specific kind of problems and solutions.

    Maybe, machines could implement algorithms used by humans to solve problems, if any. Machines could even use simulations of neurological processes (let’s call them algorithms) to solve certain problems, why not.

    Consciousness is a completely different issue.

  11. Hi Vicente,

    I agree that consciousness is a different problem than intelligence, however my view is that any machine which is generally intelligent — and in particular which solves problems in a way analogous to how humans do it — is also conscious. My reasoning for this view is in my original article.

  12. Similarly I’m not really trying to explain qualia. Rather I’m trying to refute the notion that AIs could never experience qualia in the same way we do.

    I think it should be noted that whatever qualia we experience are likely not the full spectrum of qualia. Distinct architectural differences could mean that, just as we don’t detect infrared, an AI might experience qualia from an entirely alien range.

  13. Hi Callan,

    My own view is that architectural differences at a logical or software level imply different qualia. I don’t think the substrate matters. Also, I don’t think qualia have content beyond their functional roles in identifying different kinds of sensory input or affective states.

  14. Mark,

    I got your point… let’s forget about brain processing analogies for a moment. You are saying that if you run the same algorithm on a digital processor or on a quantum processor (when available; I cannot think of any other kind of processor different from ordinary digital processors) you will get the same qualia, irrespective of the “circuitry”.

    The point is that the algorithm only exists in the programmer’s mind. The processor is just a physical system following a certain behaviour; there is no such thing as a physical algorithm. There is no physical “logical” information. If you say that a bit or qubit takes value 1, it means that there is a current flowing through a certain inverter transistor, or that an atom is in a certain state. So the substrate will have to determine qualia as much as the algorithm, at any rate.

    For example, think of color blindness: the logical level of vision is probably the same as in healthy vision. Visual information is formally processed with the same workflow, but a change in the substrate makes a whole world of difference, because color mechanisms have nothing to do with logic or with algorithms (I’m just referring to neural correlates). This lack of a logical layer in the architecture is responsible for all kinds of optical illusions…

    Your approach is platonic, with algorithms having some real existence independent of the “thinker”.

  15. Vicente: I think ‘platonic’ is a little strong, but the fact is computationalism simply punts on the ontological problems posed by phenomenality by embracing the ontological problems posed by abstraction and intentionality. Punting this way seems to provide abductive warrant in the form of parsimony, but since phenomenality is in no way explained, it really possesses no abductive warrant at all. For a phenomenal realist such as yourself, he has to be missing the whole point. For an eliminativist such as myself, he’s simply using one hard problem to duck another. What sounds like a ‘best of both worlds’ solution is in fact quite the opposite: an account that relies on the inexplicability of intentionality to avoid explaining phenomenality.

  16. Vicente:

    I disagree that the algorithm only exists in the programmer’s mind. As it happens I am actually a Platonist, so your accusation is appropriate. However, not all computationalists are, so I will try to defend against your point without insisting that algorithms are independent of the physical world.

    Without appealing to Platonism, I might argue that algorithms are not in the mind of the beholder but actually physically instantiated whenever there is a consistent way of interpreting a series of physical events so as to correspond to that algorithm, in particular if the correspondence takes advantage of an analogy between the chain of physical causation and the logic of the algorithm (rather than depending on arbitrary coincidence after the fact).

    Of course this means that there may be more than one way to interpret a particular physical interaction as an algorithm. For example, if in a digital computer we invert the representation of one and zero (true and false), then AND gates become OR gates and vice versa, so any system that computes T AND T = T can be interpreted as computing F OR F = F. Where this kind of equivalence holds, I consider both interpretations to be essentially the same computation but rendered using different symbols. The two algorithms are completely isomorphic in the mathematical sense and so are in fact the same mathematical object. Of course no interpretation we put on the physical algorithm could have any implications for qualia or consciousness, and mathematical isomorphism explains why.
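
    To make the relabelling point concrete, here is a toy sketch: the same “physical” operation, read with true and false swapped, computes the dual operation. This is just De Morgan duality.

```python
# The same physical operation read under two labelling conventions.
# Under the usual convention the gate computes AND; if we swap which
# state counts as True and which as False, the very same gate reads as an
# OR gate (De Morgan duality). Nothing physical changes, only the reading.

def physical_gate(a: bool, b: bool) -> bool:
    return a and b          # stands in for whatever the hardware does

def reread(a: bool, b: bool) -> bool:
    # Interpret inputs and output with True and False swapped.
    return not physical_gate(not a, not b)

for a in (False, True):
    for b in (False, True):
        print(a, b, physical_gate(a, b), reread(a, b))
# Under the inverted reading, the gate's truth table is exactly that of OR:
# reread(a, b) == (a or b) for all inputs.
```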

    Scott:

    I am both a Platonist and something of an eliminativist with regard to qualia. I prefer not to say that qualia don’t exist, because I do think that they exist as identifiers for different sensory inputs or affective states. But I also think they are without content beyond these functional roles.

    I don’t think intentionality is a particularly hard problem. The only difficulty lies in convincing those with incompatible intuitions that a functionalist account is not missing something. The functionalist account of intentionality would be to explain reference in terms of (a) similarity of logical structure (i.e. my mental model of a mug bears some similarity to the attributes of a physical cup) and (b) causal association. Seeing a mug causes my mental model of a mug to be activated. Picking up a cup involves activating my mental model of a mug.

    These attributes (a) and (b) are how intentionality can be (and has been) achieved in robots, and that’s really all I think there is to it, conflicting intuitions notwithstanding.

    Of course that’s just a quick sketch. Obviously there’s a little more detail needed to deal with the intentionality of abstract objects and so on but that’s a different topic really. The point is, if we can attribute functional intentions to a robot, as I think we can, then the possibility is open that this is all intentions are — that is until you can clearly articulate what this picture is missing.

  17. Mark:

    What about processing speed? If your algorithm is in control of a system that interacts with other systems, then processing speed and results delivery timing will have a big impact on the “experience” that your algorithm is going through. Will you admit that processing speed depends on the substrate (processor)? Does a different experience entail different qualia?

  18. I don’t know about qualia, but they will certainly have a different experience of the other systems. A slow information processor perceives its environment to be evolving more rapidly than a fast information processor.

    But in a closed system, such as a simulation of a virtual environment with virtual people, the virtual people would perceive no difference.

    Mark, I believe your last comment might not be fully coherent with comment #14. A theory of consciousness should be general, not just valid for a particular simulated scenario.

  20. Mark: “I don’t think intentionality is a particularly hard problem. The only difficulty lies in convincing those with incompatible intuitions that a functionalist account is not missing something. The functionalist account of intentionality would be to explain reference in terms of (a) similarity of logical structure (i.e. my mental model of a mug bears some similarity to the attributes of a physical cup) and (b) causal association. Seeing a mug causes my mental model of a mug to be activated. Picking up a cup involves activating my mental model of a mug.”

    And the causal explanation of misrepresentation is? The answer to the symbol-grounding problem is? Normative causation works how? I could go on and on. No one has anything remotely approaching a consensus-commanding explanation for how the (apparent) properties of intentional phenomena naturally come about. Calling them irreducible or transcendental is just a face-saving way of saying you’re tired of being stumped, as far as I can tell. Personally, I think it’s a defining feature of ‘hard problems’ that many entertain incompatible reasons for thinking them not hard at all!

    Platonism asks that we believe in immaterial difference engines lying outside space-time. For me, this, no matter how you ontologically deflate it, is simply too extravagant to be warranted by something so obviously treacherous as metacognitive intuition. Look at the philosophy of mathematics: it seems pretty clear mathematicians don’t so much know what they are doing as they know what to do next. And since we actually have real honest to goodness difference engines here in the high-dimensional jungle of nature, I’m prone to think low-dimensional supernatural engines so many seem to so clearly intuit are kinds of metacognitive mirages, the kinds of confusions you might expect a brain to make struggling to get a grip on its own activities, especially those, like mathematical cogitation, lacking any evolutionary history.

    Think about all the work evolution had to put into reliably tracking mundane environments. It just seems implausible that it happened to develop, in the absence of reproductive filtration, the ability to cognize some transcendental (aka., supernatural) realm *for what it is.* It just sounds like magic to me anymore. What data informs Platonism? Why should anyone trust that data, let alone our ability to interpret it? And if the data informing Platonism is such that all theories relying on it will remain perpetually underdetermined, then what use is it? To wit, how could any theory of consciousness that depends on it be said to explain anything, as opposed to simply shift the bubble under the wallpaper?

  21. Hi Scott,

    I’m going to leave your objections to Platonism largely unanswered, if that’s all right. It’s too tangential.

    >And the causal explanation of misrepresentation is? The answer to the symbol-grounding problem is? Normative causation works how?

    I fail to see these problems, I’m afraid. It might help if you illustrated the problem with an example, but I’ll try to proceed without one.

    I could interpret a misrepresentation to be a symbol which ought to refer to A but actually refers to B. On my view, this issue doesn’t arise, because reference entirely decomposes into similarity and causal association. If this is so, it is not possible for a symbol to actually refer to B if it is causally associated with A and models A better than it models B. Yet both these conditions are somewhat vague. A symbol might be somewhat equally associated with A and B. In such cases, for instance where there are identical twins we have each encountered on separate occasions without realising they are not the same person, there is no fact of the matter regarding which twin is referred to by the mental model of the person we perceive as unique. As such the idea that there is a definite link between a symbol and its referent is mistaken, something I will explore a little more in a moment.

    Rather than representations taken in isolation, like lone algebraic symbols taken out of context, what we are actually dealing with are dynamic and predictive mental models which help an agent to navigate an environment. There is no normative mapping for any symbol to any referent. An agent navigates its environment successfully if it has a useful mental model. If its mental model is inaccurate, it has false beliefs and may fail to navigate its environment successfully. For instance, it may think there is a rabbit in a bush when it is in fact a tiger — a misrepresentation of its environment which could prove fatal.

    I think there is an illusion at play here. You think you perceive a tree, and there must therefore be some symbol in your brain which really truly refers to that tree by some mysterious and ineffable process of semantic linkage. I think that is a false impression. I don’t think you ever really perceive objects in the world. I think objects in the world cause mental concepts to be activated, and what you are really perceiving is the interplay of those mental concepts. Our access to the world is therefore not nearly as immediate as we think it is. We cannot really think about an object in the world, ultimately we are always thinking about our models of those objects. Of course this is an old philosophical chestnut, but I think it actually holds the key to deflating the syntax and semantics problem.

  22. Mark: Much of what you say is what I hold as well!

    But the reason I can is that I don’t believe representation, content, and reference exist as traditionally – intentionally – conceived. I’m a thoroughgoing eliminativist. Brains rely on covariational intersystematicities between themselves and their environments to leverage gene-conserving behaviours. ‘Modelling’ can consist of isomorphisms, but it is far more prone to metabolically cheaper, non-isomorphic interactive interrelationships. The biggest question, on an account such as my own, has to do with intentionality. What is it? What does it consist in? I see myself on the hook to answer all these. You don’t so much answer these questions (links would be appreciated!) as allude to the move to the ‘mental’ you think allows you to answer or evade them.

    The question is how. For my part, I find it difficult to understand how the move does anything more than resituate all the dilemmas within another even more hoary dilemma. I also agree that perception is supervisory as opposed to constitutive: I’m on board with the Bayesian case in this respect. But I just can’t make sense out of claims like ‘what you are really perceiving is the interplay of those mental concepts.’ Real perceiving? What could ‘perception’ mean here? And if I’m not ‘perceiving’ this ‘interplay of mental concepts,’ then what am I doing? Are my beliefs true of my mental models only, not the world?

    On your account, what would a science of truth look like?

  23. Hi Scott,

    I agree with most of that. In particular, I agree that the traditional conception of intentionality is a little misguided. I don’t think that means we have to throw the concept out altogether — it is at least quite possible to recover a kind of approximate intentionality from the kind of model of cognition I favour.

    In particular, I want to be clear that where I talk of isomorphisms, I am talking only of approximate isomorphisms with regard to certain features of the environment. The mental model is quite good but far from perfect.

    I am still unclear about where my model breaks down in your view. Intentionality, by which I think you mean reference, is no more and no less than a correspondence of form and/or a causal association between a mental model and an object in the world. What do you think this account omits?

    >What could ‘perception’ mean here?

    The information that is available to your mental processes. Information about the world is not directly and immediately available to you. Automatically filtered and translated information in the form of a mental model is available to you, much as would happen in the “mind” of a robot which represents its environment by multiple levels of processing of sensory stimulus.

    >Are my beliefs true of my mental models only, not the world?

    Your beliefs *are* your mental model. They are (usually) approximately true of the world at some level of description, by which I mean the mental model composed of your beliefs resembles the world in key respects. Well enough for you to navigate the world successfully at any rate.

    For example, you may believe the sun is a ball of hot gas. This is approximately true, but only approximately. If one were pedantic, one could pick many ways to point out subtle inaccuracies in this belief. A ball is a spherical toy. The sun is not a toy, and is only approximately spherical. Furthermore it is not made of gas but of plasma. “Hot” is a relative term and so meaningless unless used in comparison with something else.

    Even these pedantic nitpicks could be torn apart, I’m sure. Still, there’s a kernel of truth there. The picture of the world implied by the sentence “The sun is a ball of hot gas” is approximately right. In my view, the only statements that are ever absolutely true are those that can be expressed as logical or mathematical statements. All other statements are too vague or ambiguous, but can be held to be true enough for practical purposes.

    >On your account, what would a science of truth look like?

    I don’t think it is the job of science to study truth. That’s a philosophical question, and I’m broadly on board with some version of the correspondence theory of truth. I don’t think it’s accurate to say a statement is intrinsically true or false. It must first be interpreted and converted to a mental model. A statement is approximately true if this mental model corresponds pretty well to the real world and false otherwise. Since different people may interpret the same statement differently, it is really those interpretations that are true or false. As a shorthand we can say a statement is true or false if its interpretations are almost always consistently true or false, respectively.

  24. Mark: “I don’t think it is the job of science to study truth.”

    So truth can’t be naturalized but reference can? The evaluative component of cognition can only be half understood? Is that what you’re saying?

    “I am still unclear about where my model breaks down in your view. Intentionality, by which I think you mean reference, is no more and no less than a correspondence of form and/or a causal association between a mental model and an object in the world. What do you think this account omits?”

    I mean intentionality in the biggest, fattest sense, the one including reference, normativity, purposiveness, and so on. Any account of consciousness, if it’s to have any hope of making headway, has to not only naturalistically explain phenomenality and intentionality, it must also tell us why we find them so devilishly difficult to explain in the first place. Your position punts on phenomenality, and you acknowledge you need some way to explain away the realist’s intuitions. But your position makes no sense of intentionality at all, as far as I can tell. Why is it, if aboutness consists in causal association and nothing more, that causal reasoning makes the properties of aboutness (intentionality narrowly construed) so difficult to understand? What, for instance, is transparency? How can one ‘mental model’ be ‘about’ another? Everything is ‘causally associated’ with everything else in some way, so citing this association is uninformative. What is it about the ‘aboutness causal association’ that gives rise to the possibility of evaluation, correctness, truth? That causes so many incompatibilities with causal reasoning to arise?

    Are you beginning to see the hardness of the problem, Mark?

  25. Mark: Fascinating discussion. I weighed in on the original discussion on Massimo’s Scientia blog with the point that what computationalism shares with biological brains is time: computers artificially inject time into the system, while brains extract time from the environment. The computationalist intuition is valid because of this shared time, but the non-computationalist intuition is valid because time is naturally extracted from within by biology.

    Scott: As you say, brains are blind, but the metaphor doesn’t end there, because the neocortex shares so many properties with the retina itself. Namely, the neocortex overlays the more basic brain functions and “sees” those functions. If the Phenomenal Self Model is the correct view, then the waking self is nothing more than the biggest piece of phenomenality we all walk around with. Doesn’t all of the above conversation point to a brain that splits from within into an outer environment and a core self? Isn’t the intentionality phenomenon still the stimulus-response relation between any organism and the environment, except on a more complex scale?

  26. Hi Scott,

    >So truth can’t be naturalized but reference can?

    It’s not the job of science to naturalize truth or reference. I’m not doing science right now, I’m doing philosophy. The job of science is to figure out how the brain works from a functional perspective. Phenomenal consciousness and intentionality are philosophical problems as far as I’m concerned.

    >I mean intentionality in the biggest, fattest sense, the one including reference, normativity, purposiveness, and so on.

    Again, it might help if you could illustrate with examples where my account runs into problems. If by normativity you mean some sort of objective moral realism, then I don’t buy into that. But it’s certainly not hard to understand how an evolved social being can believe certain things to be good or evil, where good and evil can be cashed out in terms of desirability and toleration, approval and disapproval, guilt and pride.

    Purposiveness is a case of taking action to achieve some goal, informed by predictions derived from mental models. I drink water because I want to quench my thirst. My mental model of water predicts that when I drink it my thirst will vanish, and I have evolved as a biological system that regards thirst as an undesirable state. This is no different in my view from a chess computer moving a pawn with the purpose of capturing a knight or an automatic pilot adjusting the angle of an aileron to maintain level flight. I simply don’t see the problem.

    >Why is it, if aboutness consists in causal association and nothing more, that causal reasoning makes the properties of aboutness (intentionality narrowly construed) so difficult to understand?

    For the kinds of reasons highlighted by Frank Jackson’s “Mary’s Room” and Nagel’s “What is it like to be a bat?”. Having all the factual knowledge about how brains work doesn’t give you the ability to imagine what it is like to be that brain. To fully imagine what it is like to be a bat, you would need the ability to reconfigure your brain to be the brain of a bat. To imagine the colour red without having ever seen it, you would need to activate brain circuits that have never been activated, and you have no way to do that by an act of will, any more than you can become an expert at juggling by reading a book on the physics of juggling.

    So there is a disconnect between our understanding of functionalist accounts and our experience of actually being a conscious entity, and this creates a powerful illusion that conscious experience cannot be explained by functionalism, an illusion I think is incorrect.

    >What, for instance, is transparency?

    Context? I assume you’re not asking me what it means for light to pass through a substance?

    >How can one ‘mental model’ be ‘about’ another?

    We can have models of models. Models can be structurally similar and they can be causally associated.

    >What is it about the ‘aboutness causal association’ that gives rise to the possibility of evaluation, correctness, truth?

    As I said before, truth and correctness are only approximate. Reference is not absolute, as illustrated with the twins example. If a model is very close to some real world system but is crucially inaccurate in some regard, then it is wrong. To the extent that a mental model resembles a real world system, and to the extent that the two are very closely interlinked causally, so that perceiving the real world system causes the mental model to be activated, or artificially stimulating the neurons of the model causes the subject to report thoughts of the real world system, we are more likely to say the one references the other.

  27. Mark: “Phenomenal consciousness and intentionality are philosophical problems as far as I’m concerned.”

    They can’t be solved, in other words, save to the parochial satisfaction of this or that theorist. No consensus commanding account can be found. The most obvious reason for this is that the problems they pose are simply too *hard,* but biting the ignorance bullet is difficult, so many insist that they are *intrinsically insoluble.*

    Either way, the degree to which your account turns on them is the degree to which your account fails to solve the problems it claims to solve (unless perpetual philosophical confusion and controversy counts as ‘solution’!). You do see this?

    The causal linkage of brain systems and environments is a given, as are the ways those systems can fall in and out of synch. The mystery is – has always been – how do intentional, phenomenal beings such as US fit in this picture?

    To say that’s an intrinsically philosophical question is either to have implausible faith in the theoretical problem solving power of philosophy (think of the track record!) or to bite the mysterian bullet.

  28. Hi Scott,

    I think there’s a difference between mysterianism and regarding a problem as philosophical. The former is perhaps the position that nobody will ever find an answer, while the latter is the view that the answer may be available to us via argument and analysis.

    I do think the answer is available to us and I do think we already have the answer (functionalism). I agree that there will likely never be a consensus, but this is not mysterianism, any more than I am a mysterian about moral realism or the nature of truth or whether God exists.

    It is fine to hope for consensus or for some piece of empirical evidence that would forever settle the question, but such hopes should not cause you to believe that a definitive solution can ever be found if there are good reasons to suppose that such a resolution to the debate is impossible.

    However, I suspect that there is one way in which the debate may be settled in my favour, and this is not by evidence or even by argument, but by a gradual shift in norms and expectations. If we ever manage to build intelligent machines which become integrated into our lives (as in the excellent movie, “Her”), it will, I believe, become increasingly difficult and indeed politically incorrect to maintain skepticism regarding their consciousness. I would argue that racism and sexism were (largely) defeated not by philosophical argument or empirical evidence but by a social movement that changed how the privileged perceived the oppressed. These positions are now completely taboo (as they should be) and the same may one day be the case for skepticism of the consciousness of “electronic people”. Of course, this doesn’t prove my case — my case is proven by argument — but it does allow for the possibility that one day these arguments might be broadly accepted as correct.

  29. Mark: I meant mysterious FAPP. Argument can be used to rationalize any case – which should come as no surprise, given what we’re learning about it. The inability to command anything remotely resembling scientific consensus, when combined with what we now know regarding human theoretical incompetence, shouts ignorance as loudly as anything can. Believers will persist, of course, because that’s what believers do. But certainly the more comprehensive view takes this all into account, as opposed to hoping against hope that one has won the Magical Belief Lottery, only for real, which seems to be what you’re advocating. Argument is important, but information, data, is required to successfully arbitrate between arguments.

    Her is a fantastic movie, but it does intentional realist, functionalist accounts no favours whatsoever. What Spike Jonze does is demonstrate the painfully *parochial* nature of human social cognition by initially introducing an OS that can only be understood via causal/mechanical cognition, then giving us Samantha, who initially presents an information structure that we (along with Theodore) cannot but interpret via our socio-cognitive systems, then progressively shows the viewers how tragically misaligned those socio-cognitive systems are with the actual problem-ecology presented by Samantha – until she once again becomes something that only causal/mechanical cognition can solve. Jonze, in other words, demonstrates how the apparent intentional functions discharged (for those inclined to interpret them as such) are actually *an illusion of ignorance.* To think Samantha is one of us is simply to be ignorant as to what Samantha in fact is.

  30. I have no major issue with your position, which seems to be agnosticism and optimism that it will some day be resolved by science. It is not my position. I am very confident that functionalism is correct, but I don’t see how science could ever confirm that. My main beef is with those who claim that functionalism is nonsense.

  31. Agnostic? No. My position is actually quite a bit more extreme than yours, so I really have no claim to doxastic moderation or theoretical sobriety. My beef is with defenses a la Pigliucci, say, where bona fide, tremendous difficulties of intentionality are given a rhetorical gloss, if considered at all. No one has a workable naturalistic account of intentionality, *period.* (If you’re interested you can check out, http://rsbakker.wordpress.com/2014/04/21/whos-afraid-of-reduction-massimo-pigliucci-and-the-rhetoric-of-redemption/). ‘Irreducibility’ is simply a way to tie a pretty bow around this fact, an attempt to transform it into a virtue. The happy coincidence, after all, is that philosophers have a prescientific domain all their own. Given the way argumentation so regularly collapses into rationalization, it simply serves too many institutional interests to be taken as much more than an institutional artifact – I think.

    Mundane functionalism is no more mysterious than the differences between grains of description. Intentional functionalism, which poses functions (mental or pragmatic), like reference, evaluation, and so on, that out-and-out contradict causal cognition, is mysterious indeed. It does not have a naturalistic account of intentionality. To the extent it relies on intentional functions to explain or explain away phenomenality, it is simply playing a shell game with our very real institutional inability to explain consciousness, not to mention the alchemical disarray of cognitive science.

    These problems continue to defeat theoretical cognition. Declaring them ‘intrinsically philosophical’ merely admits this fact in a way that makes certain philosophers happy.

    This is really the sum of my criticism. As I alluded, there’s much in your paper that parallels my own argumentative strategies, otherwise. And you’re a helluva writer to boot!

  32. Hi Mark,

    If you don’t mind, let’s go through a couple of thought experiments using your theory, to try to understand the relations between functions, algorithms and processing substrates:

    1. Imagine an extremely talented individual, with outstanding aptitudes and skills for mental calculation. I’ll call it individual “A”. Now, let’s assume we have developed a simulation programme for human brains (sort of your virtual world), and we have parameterised a specific instance for the brain of a second individual “B”.
    Now we ask A (already conscious) to run this simulation in his mind. According to your position, will A become B? Will A be self-conscious, but have B’s mind in his own conscious space?
    In general terms, what happens when a conscious entity runs the script of another conscious entity?

    If processor A simulates processor B running a simulation of processor C, does A become C? Or is A conscious of the overall situation?

    2. You consider that the computing architecture is transparent as far as consciousness is concerned. Let’s consider two real computing architectures: serial and parallel. In fact, there is a serious problem in your example: in the real world all events are concurrent (let’s forget about relativistic simultaneity problems), so in your simulation you will need a computing thread for each simulated element (and I mean each quark in the brains of your virtual people), and then you have to synchronise all the parallel threads. I discard a serial simulation because it simply won’t suit the virtualisation specs to the level you require.

    So here you have an example where the computing architecture matters a lot. Serial processing is not allowed. Still, from your perspective, serial processing will run the whole algorithm, and should serve the purpose.

    Now, if you run N threads concurrently to simulate a brain, will you get N concurrent conscious streams? How will they be unified into one single conscious experience?

    This problem of mind unification, the binding problem, is central for consciousness understanding.

    Mark, I agree that, for the moment, consciousness is beyond the scope of physics. But also, for the moment, brains are the only systems in the Universe that we know to give rise to consciousness. Don’t you think it is reasonable to wait and see what we can “positively” say about brains, and how they work, before entering into the preorgasmic feedback-loop bacchanal of helluva writers that you and Scott have enjoyed? Intuitions are strong, I know from my own experience, and HER is a very good film, but I’m just recalling Pulp Fiction’s (Quentin Tarantino) quote: “let’s not start s***** each other’s d***** just yet”.

    But, please, let’s come back to the gedankenexperiments if you don’t mind.

  33. Hi Mark,

    I think there’s a difference between mysterianism and regarding a problem as philosophical. The former is perhaps the position that nobody will ever find an answer, while the latter is the view that the answer may be available to us via argument and analysis.

    Isn’t science itself argument and analysis? Does that lend to truth being examined by science (scientific methods)?

    Caveat: It might sound silly but I’d leave room for consent here – rather than reasoning that it HAS to be open to scientific investigation, I’d give room for individuals having to give their consent for it to be, in regards to themselves.

  34. Scott:

    Thanks for the kind words!

    Vicente:

    When one conscious entity simulates another, there are two minds coexisting on the same substrate. A is not B nor C. What A knows is not what B knows, and vice versa. I visualise it as being similar to the idea of a virtual machine in computer science, where a physical computer realises a virtual computer, which can in turn realise a further virtual computer, and so on. Each step of virtualisation typically has a drastic effect on efficiency, so there are practical limits to what can be achieved.

    All parallel algorithms can be implemented by serial algorithms. Before the era of multi-core processors, all threads would simply take turns executing on the same processor. I don’t think the problem of unifying conscious experience is a serious one because I think it is partly an illusion. I think there are parts of your brain that are working independently on different problems at the same time, so that there are parts of your brain that know something before other parts of your brain. The idea that there is a single stream of consciousness is, in my view, false, but it is perhaps an artifact of a process whereby the brain builds an autobiographical narrative for storage in working memory (and later, selectively, for long-term memory). If what you remember about yourself is a linear narrative, that seems to explain the unity of consciousness to me.
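
    To make the serial point concrete, here is a toy sketch using generators as cooperative “threads”: a single serial loop interleaves them round-robin, just as a single-core processor time-slices threads.

```python
# A minimal sketch: "parallel" threads run one at a time by a serial scheduler.
# Each generator yields after every step; the single loop below interleaves
# them round-robin, exactly as a single-core machine time-slices threads.
from collections import deque

def worker(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                      # hand control back to the scheduler

def run_serially(threads):
    queue = deque(threads)
    while queue:
        thread = queue.popleft()
        try:
            next(thread)           # advance this thread by one step
            queue.append(thread)   # then put it at the back of the queue
        except StopIteration:
            pass                   # this thread has finished

run_serially([worker("A", 3), worker("B", 2), worker("C", 3)])
```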

    Finally, you seem to think all this talk is pointless intellectual masturbation. Perhaps, but if that is your view it is perhaps best not to engage in it. I think there are very good reasons to suspect there will never be any definitive evidence one way or the other, and if this view is right then there’s no point waiting for more empirical data to come in. This is a question which is profoundly interesting to me and to others and it is a discussion which I enjoy.

    Callan S.:

    I think science is in the business of building models of the world so as to facilitate the prediction of observations. I don’t think it is in the business of clarifying abstract concepts.

  35. Mark: “I think there are very good reasons to suspect there will never be any definitive evidence one way or the other, and if this view is right then there’s no point waiting for more empirical data to come in. This is a question which is profoundly interesting to me and to others and it is a discussion which I enjoy.”

    The problem isn’t that it’s a parlour game, the problem is the failure to flag it as such. Why not begin with, “I think the problem of consciousness is insoluble, but here’s why I think my brand of computationalism gives us a potentially useful metaphor/scheme to use in lieu of genuine scientific understanding.”

    “I don’t think the problem of unifying conscious experience is a serious one because I think it is partly an illusion. I think there are parts of your brain that are working independently on different problems at the same time, so that there are parts of your brain that know something before other parts of your brain. The idea that there is a single stream of consciousness is, in my view, false, but it is perhaps an artifact of a process whereby the brain builds an autobiographical narrative for storage in working memory (and later, selectively, for long-term memory).”

    My own approach generalizes this, uses *neglect* rather than representation as a phenomenological/biomechanical bridging concept, thus providing a naturalistic way to explain away the myriad peculiarities bedevilling phenomenality and intentionality. The ‘big failure’ of consciousness research and cognitive science more generally, as I see it, is the failure to systematically investigate the metacognitive resources underwriting the swamp of intuitions informing the explananda of consciousness and cognition. The big take-away lesson to draw from the ‘neglect of distinctions’ explanation of conscious unity is the way the neglect of information actually produces a *positive phenomenal intuition.*

    And I think a good number of the intuitions plaguing our attempts to understand phenomenality and intentionality, the intuitions underwriting your own theoretical pessimism in fact, can be understood as artifacts of metacognitive neglect. Here’s an example: http://rsbakker.wordpress.com/2013/05/27/the-something-about-mary/

    I’m not sure I’ve convinced any one around here, tho! You think philosophers have a credibility crisis – try being a science fiction author!

  36. Scott, The notion of neglect was just as ever-present centuries ago in explaining why liquid water freezes or vaporizes. The explanations of water’s property states were simply not there because nobody was looking DEEPER than the observed functions. When the bonding properties of water molecules were better understood, the intuitive explanation was obvious. No physical properties were violated in the behavior of water molecules. The same applies to qualia, phenomenality, consciousness, etc.

    Mark, Every functional explanation except that of consciousness has an accompanying intuition, because functionality entails causal properties and time, characteristics of consciousness itself. As Scott’s BBT explains, those are the very properties of the brain which PRESENTLY escape an intuitive understanding by the brain itself. For a better understanding see my comment above.

  37. VicP: Indeed it was! This is why I find it amazing that agnotology is as obscure as it is, something that only social psychologists really pay attention to. But then, look at how long it took mathematicians to wrap their head around zero. As Kahneman likes to say, humans have a hard time with nonentities…

  38. Mark,
    I think science is in the business of building models of the world so as to facilitate the prediction of observations. I don’t think it is in the business of clarifying abstract concepts.

    Depends on whether the former begins to overlap with the latter (if only in an ‘explain the illusion’ way). I guess what it might take is some sort of metric by which one might agree the abstract concept is being explained by something. Certainly optical illusions are an example of something that might seem abstract, but which has a metric by which it is explained.

  39. Perhaps I’ve missed the point but on beliefs vs beliefs*, it seems questionable to me that this thought experiment gets around Searle’s upgraded argument presented in Is The Brain a Digital Computer?:

    ‘It follows that you could not discover that the brain or anything else was intrinsically a digital computer, although you could assign a computational interpretation to it as you could to anything else. The point is not that the claim “The brain is a digital computer” is false. Rather it does not get up to the level of falsehood. It does not have a clear sense. You will have misunderstood my account if you think that I am arguing that it is simply false that the brain is a digital computer. The question “Is the brain a digital computer?” is as ill defined as the questions “Is it an abacus?”, “Is it a book?”, or “Is it a set of symbols?”, “Is it a set of mathematical formulae?”‘

    I think the first issue, relating to the Chinese Room, is that the simulation’s beliefs* would ultimately be represented as machine code. But if we wished, couldn’t we take this code and translate it in such a way that the simulation believed itself to have a different set of beliefs*? So simulations would seem to lack intrinsic intentionality. The same goes for qualia*.

    The other problem that Searle gets into in Is the Brain a Digital Computer is that, if we’re willing to accept that a collection of 1s & 0s represents beliefs and qualia, why doesn’t this extend to any area of the universe where we can assign a set of states to 1 and another set of states to 0?

    Again, maybe I’m missing the boat on this? Searle’s paper can be found here:

    https://philosophy.as.uky.edu/sites/default/files/Is%20the%20Brain%20a%20Digital%20Computer%20-%20John%20R.%20Searle.pdf

    Lanier makes a similar – though I’d say more succinct and clearer – argument in You Can’t Argue with a Zombie:

    http://www.davidchess.com/words/poc/lanier_zombie.html

    ” In other words: When a natural phenomenon, like a meteor shower, is measured, it turns into a string of numbers. The program that runs a computer (the object code[7]) is also a string of numbers, so we have two similar items. The string of numbers that runs a particular computer has to perfectly follow the rules of that computer or the computer will crash. But if you can find the matching computer, any particular string of numbers can run as a program[8]. In fact, for any string of numbers, you can in theory find or construct many computers, each of which will run the same string of numbers as a different program. So one computer might read the meteor shower and end up doing your taxes as a result, while another might calculate racetrack odds from exactly the same “object code”. If your brain is functionally equivalent to a computer program, there is no reason a meteor shower can’t be that program, if you take the trouble to find the right computer to run it.

    Does even the possibility[9] of this computer give the meteor shower consciousness, if only for a moment[10]?”
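
    To make Lanier’s point concrete, here is a minimal sketch in Python of the same string of numbers being handed to two different ‘machines’ and running as two different programs. Both machines are invented for illustration only; nothing here is Lanier’s own construction:

        data = [3, 1, 4, 1, 5]   # pretend these numbers came from measuring a meteor shower

        def machine_alpha(code):
            """Reads each number as an amount to add to a running total."""
            total = 0
            for n in code:
                total += n
            return "alpha computed a total of " + str(total)

        def machine_beta(code):
            """Reads the very same numbers as indices into a word list."""
            words = ["zero", "one", "two", "three", "four", "five"]
            return "beta says: " + " ".join(words[n] for n in code)

        print(machine_alpha(data))   # alpha computed a total of 14
        print(machine_beta(data))    # beta says: three one four one five
        # Same numbers, two machines, two behaviours: "finding the right
        # computer" is doing all the interpretive work.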

  41. Hello Sci,

    The other problem that Searle gets into in Is the Brain a Digital Computer is that, if we’re willing to accept that a collection of 1s & 0s represents beliefs and qualia, why doesn’t this extend to any area of the universe where we can assign a set of states to 1 and another set of states to 0?

    I’m not sure what you mean by this – why doesn’t that extend to any area of the universe where we can assign a set of states to 1 and another set of states to 0?

    What do you mean by extend? In regard to what you seem to be saying, that state setting can’t be extended?

    The question “Is the brain a digital computer?” is as ill defined as the questions “Is it an abacus?”, “Is it a book?”, or “Is it a set of symbols?”, “Is it a set of mathematical formulae?”‘

    I think that’s a little uncharitable as a reading – can books do if/then statements? Can symbols? Can they do AND or OR gates? No. So the dismissal simply involves ignoring pivotal components and then, with those gone, treating everything as much of a muchness. “Is it an orange?” – well of course it could be asked that way, if we ignore the logic gates a computer has. Then we can ask about anything that has no logic gates.

    Does even the possibility[9] of this computer give the meteor shower consciousness, if only for a moment[10]?”

    The question implies consciousness has to be taken as some sort of physical thing (like it’s on a periodic table or something). It’s an attempt at a reductio ad absurdum which, unless you insist on treating consciousness as a thing, boomerangs back and shows the relative absurdity of the idea of consciousness.

    Instead of treating it as ‘how could consciousness be a processed meteor shower? It’s empty!’, just try imagining it the other way: with it being the case that the emptiness and the physical interactions of the meteors are indeed all there is.

    The only way it seems to prove consciousness can’t involve such things is the clear absence of consciousness in the meteor shower. But this can be read the other way if you let it – that, absolutely, there is a clear absence of some sort of special extra presence. There is just the physical interactions of the meteor shower. If you stop rejecting it as ‘that’s not me’ for a moment, then that’s the other perspective involved.

  42. @Callan: “What do you mean by extend? In regard to what you seem to be saying, that state setting can’t be extended?”

    The problem with the attribution of 1s to a set of states, and 0s to the complementary set, is that there’s nothing intrinsically computational about those sets. IIRC, Searle’s saying there’s no intrinsic intentionality to anything in the physical world, and computationalists make the mistake of assuming they’ve found an exception.

    This relates to the issue with books and abacuses. They, like computers, only have the meaning we choose to give their operations and contents. I’m not sure why having logic gates would invalidate Searle’s position – isn’t the output of the physical process indeterminate without someone bringing in their intended meaning?

    Correct me if I’m wrong but seems to me you’re suggesting a computer can run Doom 3, then run a Consciousness Program ™, then go back to being a non-conscious entity when running Word Perfect?

    As for the meteor shower showing the emptiness of consciousness – I think with subjective experience the seeming is being, taking us back to the “Who is being fooled?” question in this site’s tagline. I don’t think this is an argument for souls, free will, or any other cherished notion – it’s just suggestive of computational accounts being inadequate to the task of explaining what’s going on with consciousness and intentionality.

    I’m fairly certain arrangements of the physical brain will, in fact, be enough to produce consciousness. Maybe other patterns of materials in the environment – like a meteor shower – are having conscious experiences as well. But as Nagel notes in Mind & Cosmos, the mystery of *why* any arrangement of matter works to produce a “what is it like?” feeling for that subsystem remains, similar to how learning how to start a fire doesn’t reveal the details of combustion to the fire-starter.

  43. I think you’re misreading the intent of computationalists as trying to show something there – some kind of computery thing, not just matter.

    See Scott’s note above about how long it took to invent the idea of ‘zero’. Maybe you’re assuming the computationalists are trying to prove a presence in what they say, because we don’t automatically think in terms of absences or proving absences.

    Even a computer isn’t intrinsically computational, by the same token.

    So what words do you have for when we look at…well, the thing you’re looking at now (well, at least its screen)?

    Are you going to attribute what you’re looking at as a ‘computer’ or ‘a monitor’? Does that mean you’re saying there’s something extra there, some computeryness or monitoryness?

    If not, why would the computationalists be saying there’s something else there beyond the physical? It’s really a misreading to assume they are saying that. Maybe it’s a bit of an unusual or even bad use of English to use ‘computer’ to refer to an absence, but if we’re lacking words for absence, like we lacked a word for zero, that’s an understandable speed bump. But it does not mean someone is referring to a presence beyond the physical when they use the word computation.

    Correct me if I’m wrong but seems to me you’re suggesting a computer can run Doom 3, then run a Consciousness Program ™, then go back to being a non-conscious entity when running Word Perfect?

    As much as I could claim, uncontroversially, that someone can be born and eventually die.

    I think with subjective experience the seeming is being

    That would mean I could fly in RL when sleeping and having a dream where I can fly – i.e., where it seems I can fly.

  44. @Callan: Sorry, none of that made sense to me as a good defense of computationalism. The argument against computationalism doesn’t presume computationalists are trying to argue for a soul; rather their particular attempt at reducing mind to symbols ignores the fact that intrinsic intentionality lies with the user not with the program. Computational states are attributions onto the environment.

    Your last line about flying confuses the contents of subjectivity with the actual fact that something is being experienced subjectively. (One might also make an argument regarding rationality and the indeterminate nature of any physical process, but it seems this is less worrisome for naturalism than consciousness and intentionality.)

    Here’s neuroscientist Koch on the failure of computationalism – note how the physical object, rather than the computer program, is what he thinks can have consciousness:

    http://www.technologyreview.com/news/531146/what-it-will-take-for-computers-to-be-conscious/

    “In 100 years, you might be able to simulate consciousness on a computer. But it won’t experience anything. Nada. It will be black inside. It will have no experience whatsoever, even though it may have our intelligence and our ability to speak.

    I am not saying consciousness is a magic soul. It is something physical. Consciousness is always supervening onto the physical. But it takes a particular type of hardware to instantiate it. A computer made up of transistors, moving charge on and off a gate, with each gate being connected to a small number of other gates, is just a very different cause-and-effect structure than what we have in the brain, where you have one neuron connected to 10,000 input neurons and projecting to 10,000 other neurons. But if you were to build the computer in the appropriate way, like a neuromorphic computer, it could be conscious.”

  45. Sci,

    rather their particular attempt at reducing mind to symbols ignores the fact that intrinsic intentionality lies with the user not with the program.

    Yes, I know the argument, it goes something like: well, trying to reduce it to symbols is just reducing it to something we, the human meaning makers, make!

    I tried to explain that ‘reducing the mind to symbols’ is not at all the argument – they aren’t trying to reduce it to symbols, they are trying to reduce it to an absence of symbols. But because we use English, and using English involves pumping out a bunch of symbols, it might seem they are just trying to reduce it to symbols. You can’t say it’s not a good defence of computationalism while also saying ‘they attempt to reduce mind to symbols’ without having shown you didn’t understand the information I attempted to convey. I can’t confirm you got what I meant.

    I think Koch engages in little skepticism about why the world looks rosy, not (to use an analogy) considering that perhaps rose-coloured glasses are hardwired to his vision – it’s not a quality of the world, it’s a quality of the modifiers in his own skull. So too with his assertion ‘The fact that you have conscious experience is the one undeniable certainty you have.’ He doesn’t engage in skepticism about the term ‘conscious’ and how, like the rose-coloured glasses, perhaps such a term’s rosiness arises from a series of structural constraints.

    If you were forced to wear rose-coloured glasses all your life without realising it, you might find, if the glasses were suddenly torn off, that the actual pallid white of your hands (or whatever your skin colour) was shocking and never how you’d have described yourself before. As shocking as the inner black he mentions.

  46. @Callan: That computationalists can’t explain themselves without recourse to intentionality via the use of language only exacerbates my doubts. It’s one thing to claim eliminativism and accuse others of wishful thinking, another to prove a materialist account. Recall that solving how the brain generates consciousness is not an explanation of why that combination of matter suffices. Not to mention the issues with intentionality and rationality would also have to be explained.

    And even then I see no reason to believe materialism must entail computationalism. Massimo, Searle, Tononi have – AFAIK – rejected computationalism without abandoning materialism.

    (Not that I’m convinced materialism deserves a privileged place in our thoughts. Seems like that’s more a cultural shift due to shaming tactics and ingroup selection, though I also don’t think immaterialism guarantees souls, God, or other religious beliefs.)

    Recall that Integrated Information Theory does not entail Koch’s panpsychism, and that if IIT is correct, computationalism would arguably be sunk:

    http://www.informationweek.com/mobile/mobile-applications/no-god-in-the-machine/d/d-id/1251115

    Finally, if computationalists are thinking to upload their brains into simulated worlds and thus live forever, I can’t help but smirk at the accusation anyone else is looking at the Mind problem with rose colored glasses.

  47. That computationalists can’t explain themselves without recourse to intentionality via the use of language only exacerbates my doubts.

    How do you expect them to speak to you? What if what they have to say does have merit, but you will not see the merit in it because they use language to say it?

    Finally, if computationalists are thinking to upload their brains into simulated worlds and thus live forever,

    I don’t know, maybe I have the wrong handle on the group called ‘computationalists’? But as to the word ‘computer’ and its applicability, that’s what I’ve looked at. More so the absence of traditional concepts we associate with ourselves, and the different model of thinking about ourselves that can be engaged if you consider the absences rather than use the language to reinforce all the old concepts.

  49. The point about simulation and information is well made. The ONLY systems that can be simulated computationally are those systems that are already abstract computations. (In a sense, this is a corollary of Church-Turing.)

    For example, we can simulate a calculator with no loss because a calculator is already a simulation, or computation, of mathematics. If the Paris trip were part of some computer game and already virtual, its simulation would lose nothing.

    Essentially, if the mind can be computed, then it necessarily already must be a computation. But this doesn’t seem to be the case, which to me rules out computationalism.
