Conversation with a Zombie

Tom has written a nice dialogue on the subject of qualia: it’s here.

Could we in fact learn useful lessons from talking to a robot which lacked qualia?

Perhaps not; one view would be that since the robot’s mind presumably works in the same way as ours, it would have similar qualia: or would think it did. We know that David Chalmers’ zombie twin talked and philosophised about its qualia in exactly the same way as the original.

It depends on what you mean by qualia, of course. Some people conceive of qualia as psychological items that add extra significance or force to experience; or as flags that draw attention to something of potential interest. Those play a distinct role in decision making and have an influence on behaviour. If robots were really to behave like us, they would have to have some functional analogue of that kind of qualia, and so we might indeed find that talking to them on the subject was really no better or worse than talking to our fellow human beings.

But those are not real qualia, because they are fully naturalised and effable things, measurable parts of the physical world. Whether you are experiencing the same blue quale as me would, if these flags or intensifiers were qualia, be an entirely measurable and objective question, capable of a clear answer. Real, philosophically interesting qualia are far more slippery than that.

So we might expect that a robot would reproduce the functional, a-consciousness parts of our mind, and leave the phenomenal, p-consciousness ones out. Like Tom’s robot, they would presumably be puzzled by references to subjective experience. Perhaps, then, there might be no point in talking to them about it, because they would be constitutionally incapable of shedding any light on it. They could tell us what the zombie life is like, but don’t we sort of know that already? They could play the kind of part in a dialogue that Socrates’ easily-bamboozled interlocutors always seemed to do, but that’s about it, presumably?

Or perhaps they would be able to show us, by providing a contrasting example, how and why it is that we come to have these qualia? There’s something distinctly odd about the way qualia are apparently untethered from physical cause and effect, yet only appear in human beings with their complex brains.  Or could it be that they’re everywhere and it’s not that only we have them, it’s more that we’re the only entities that talk about them (or about anything)?

Perhaps talking to a robot would convince us in the end that in fact, we don’t have qualia either: that they are just a confused delusion. One scarier possibility though, is that robots would understand them all too well.

“Oh,” they might say, “Yes, of course we have those. But scanning through the literature it seems to us you humans only have a very limited appreciation of the qualic field. You experience simple local point qualia, but you have no perception of higher-order qualia; the qualia of the surface or the solid, or the complex manifold that seems so evident to us. Gosh, it must be awful…”

17 thoughts on “Conversation with a Zombie”

  1. So, what are our obligations towards zombie robots? If a (self-avowed) zombie robot, a (self-avowed) conscious robot, and a (uh, self-avowed?) normal human being are each tied to a rail track, with the usual out-of-control trolley careening towards them, and you have the power to choose the route the trolley goes down, which one would you choose? (Let’s say if you do nothing, a kindergarten gets blown up by the League of Anti-Moral Philosophers, LAMP for short.)

    I’m kidding, of course, but I think the problem posed at the end of the dialogue is interesting: while Hal claims not to have conscious experience, and hence, presumably, no capacity to suffer, I’m nevertheless inclined to believe that inflicting gratuitous harm upon him would be morally wrong. But I’m not entirely sure why: if what Hal says is true, then I shouldn’t hesitate any more than I would in mistreating a rock (not that I’m a habitual mistreater of rocks, mind). So in some sense, it seems to be his preference not to be harmed that counts for me—which is not something I’d otherwise have pointed to as grounds for some entity being subject to moral consideration, if prompted.

    This opens up a bit of a can of worms: what sort of entity can be said to have preferences, and in what way is it permissible to frustrate them? Arguably, animals have preferences, but if we attribute preferences to animals, can we consistently refuse to do so for plants? What are everybody’s intuitions here?

    But it seems that the premise is a bit out of whack. First of all, Hal’s not really a zombie, is he? I mean, he’s not functionally identical to a conscious being—quite the opposite. Otherwise, he’d presumably be talking about the richness of his subjective experience of a field of roses, say. But if Hal is right, and he has no subjective experience, and he talks the way he talks because he lacks that subjective experience, then it would seem that qualia do have some causal force after all—at least enabling us to talk differently from Hal.

    Of course, Hal might, perhaps, be wrong—he really has subjective experience, but doesn’t know it (although that seems to me a distinctly strange proposition). Or, of course, we could be wrong, if the most radical of eliminativists are right, and we’re just deceived about our phenomenal experience.

    In all of these cases, however, the question remains: what is it that makes Hal talk as if he has no phenomenal experience, while we talk as if we did? Presumably, there must be some causally efficacious factor at work here.

    Anyway, this suggests an interesting reverse-zombie scenario: could a phenomenally conscious being be the functional isomorph of an unconscious one? By that, I don’t merely mean claiming to not experience any subjective phenomenology, despite knowing full well that they do—i.e. lying. I suppose, to some degree, that would be possible. No, I mean having actual, full-fledged phenomenology, but, for some reason, being entirely unaware of it—truthfully claiming that ‘as far as I can tell, there is nothing it is like for me to experience pain’, and so on.

    It might just be a testament to my limited powers of imagination, but I find that I simply can’t imagine that. It seems like trying to deny what even Descartes couldn’t doubt.

    But if that’s right, then doesn’t there have to be something that makes it impossible for me to entertain this thought? And is there another good candidate for that something than my actual phenomenal experience? Is my mere belief in having phenomenal experience enough? (In particular, shouldn’t I be able to doubt that belief—as people like the Churchlands presumably do?)

  2. I find the pain discussion to be fairly…dimensionless. As one might expect as the default from a perspective stuck very much in the first person.

    To add dimension, clearly we can defy pain. You can hold your hand over a lit candle for some time into the pain of it.

    To add more dimension, there are times in a more natural environment where ignoring pain is profitable. Pain generally means losing some of the body’s resources – but what if the gain is far more than the resources lost? What if the gain is more than the pain?

    An organism that can defy its pain and, for example, take the pain of some bee stings to obtain the rich resource of honey would do far better than an organism that retreats at any personal resource loss.

    But when is it better to soak resource loss?

    Well, maybe if you had a complicated processor, you could have it so that resource loss has a weight to it, but predicted resource gain also has a weight to it. And one weight might fight it out with the other.

    The machine in the example doesn’t sound like a ‘run at any resource loss’ model. It sounds more sophisticated than that. But the example lacks depth where it does not talk about weighing off loss against gain. It might say ‘no, the weight/cost of that is too much’…at which point it’s almost pain talk, isn’t it? Except the machine is talking in informed terms – it talks of the cost and the profit margin, instead of merely crying out at something being a cost with no compensatory gain. A truly naive machine might just cry out that it’s in pain (with the word ‘pain’ being a quorum word: a random sound output at some point made when some machine lost slow-to-replace/repair resources, a sound which other machines began to associate with that resource loss).
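    To make that weighing concrete, here is a minimal sketch in Python (purely illustrative; the weight, threshold and numbers are invented, not a model of any real organism or of the machine in the dialogue):

        # Illustrative only: crude cost/benefit arbitration standing in for 'pain'.
        def should_persist(predicted_gain: float, resource_loss: float,
                           loss_weight: float = 1.5) -> bool:
            """Persist through the 'pain' only when weighted gain beats weighted loss.

            resource_loss stands in for the damage signal (the bee stings);
            predicted_gain for the expected payoff (the honey).
            """
            return predicted_gain > loss_weight * resource_loss

        # The naive machine is the degenerate case: retreat at any loss at all.
        def naive_retreat(resource_loss: float) -> bool:
            return resource_loss > 0

        print(should_persist(predicted_gain=10.0, resource_loss=2.0))  # True: take the stings
        print(should_persist(predicted_gain=1.0, resource_loss=2.0))   # False: retreat

    All the interesting behaviour lives in the relative weights; the machine that says ‘the cost of that is too much’ is just reading out this comparison.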

    So as a random internet poster, I charge the example with shallow characterisation!

  3. Curious layperson question -> Has there ever been a philosophical question relating to the nature of reality that was actually settled by thought experiments?

  4. Attempted layperson answer: depends on what you would count as ‘thought experiments’. Would for instance Descartes’ cogito—his realization that in the imaginary setting of being fed sensory data by an evil demon, he still could not be made to doubt his own existence—apply? What about the homunculus regress, which demonstrates that explaining perception by invoking a special ‘perception module’ yields inconsistencies?

    Other than that, thought experiments—like all arguments—are of course only as good as their premises, even if the reasoning is perfect. And there are precious few premises that can’t be (or indeed, haven’t been) questioned one way or another. So it seems to me that what you could reasonably expect from thought experiments is simply that given some set of premises, some conclusion follows; and there, I think, there are indeed some examples. Mostly, these are demonstrations that certain sets of premises are mutually inconsistent, i.e. that one can’t simultaneously hold some set of beliefs about the world.

    Owing to my own education (or lack thereof), off-hand I can think of two examples. Einstein demonstrated, by thought experiment, that the following three premises:
    1) the speed of light is constant in all frames of reference,
    2) all inertial frames of reference are equivalent,
    3) there exists a unique notion of present moment across all frames of reference,
    are inconsistent, that is, you can’t hold all three of these beliefs about the nature of reality simultaneously.
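    (A standard textbook step, added here for concreteness rather than being part of the original comment: premises 1) and 2) force the Lorentz transformation between inertial frames,

        t' = \gamma\,(t - vx/c^2), \qquad \gamma = 1/\sqrt{1 - v^2/c^2},

    so two events with the same t but different x, i.e. simultaneous in one frame, get different times t' in a relatively moving frame, contradicting premise 3).)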

    Similarly, Bell demonstrated that
    1) quantum mechanics is an empirically adequate theory,
    2) influences can propagate no faster than the speed of light, and
    3) all properties of an object are definite at all times,
    are mutually inconsistent. At least one of these three beliefs likewise must be discarded. All proposed accounts of ‘reality’ affirming all three are simply false.
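    (Again for concreteness, in the standard CHSH form, added for illustration: premises 2) and 3) imply that the correlations between measurements with settings a, a' and b, b' on two distant systems satisfy

        S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2,

    whereas quantum mechanics, premise 1), predicts values of |S| up to 2\sqrt{2} for suitable settings, so at least one of the three has to go.)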

    Both of these are, of course, generally considered to be physics, not philosophy. As I said, that’s partly just my own bias. Another part of this is the unfortunate tendency to start calling philosophy ‘science’, whenever it is sufficiently well developed to yield precise statements about a given part of the world. (However, Bell subtitled the book in which he collected his findings ‘collected papers on quantum philosophy’.)

    Does any of this meet your criteria?

  5. After all the head pounding over qualia, I think we’re going to settle on some combination of eliminativism with a helping of behavioral explanation. A robot will explain what red is like by a readout of RGB (red-green-blue) values. A robot is capable of explaining the ineffable (to us). It effs the ineffable. I think the confusion here is that we assume robots (zombies) lack something, while the reverse may be true. To turn a robot into a phenomenal being we erase (yes, make blind, as in BBT) their ability to eff. If they can still eff, we continue the process until they can no longer eff. Yet their perception of red still has a behavioral sensory connection. They can still report that they see red; they just can’t eff it anymore due to their systematic blinding. If they were first constructed unable to eff (as we were), we wouldn’t even need to do that.
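    A toy sketch of that kind of readout in Python (the function and values are invented, just to fix the idea):

        # Illustrative: 'red' reported as a measurable channel readout, not a quale.
        def describe_color(rgb):
            r, g, b = rgb
            return f"What you call 'red' is, for me, the readout R={r}, G={g}, B={b}."

        print(describe_color((220, 30, 25)))

    The report bottoms out in public, measurable channel values; on this view there is nothing further left over to eff.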

    Some people (indeed I have, to myself) object that such a simple view can’t explain intense phenomenal experience like pain, but how true is this? It’s true that the behavioral connection between sense and response is so intense that we convince ourselves that something else must be there, but we also have instances where Buddhist monks can set themselves on fire without apparent wincing.

  6. @Jochen: Wouldn’t eliminativism be a challenge to Cogito Ergo Sum? And so that isn’t exactly settled?

    The thought experiments of physicists seem like possible examples, though they do seem different from the arguments made by philosophers? That said, I’d need a moment to explain why I see these as fundamentally different.

  7. Well, as I said, there’s always assumptions you can doubt. For instance, in the case of Bell’s theorem, you could doubt that probabilities are always between 0 and 1; negative-valued probabilities, or complex ones, prohibit the derivation (as do certain extensions of ‘probability theory’ to nonmeasurable sets). Of course, then you’re saddled with having to explain what exactly those ‘probabilities’ are supposed to mean. But such things have been investigated.

    As for the cogito, I think another way of viewing this would be that it poses a challenge to eliminativism: Descartes’ inference that he in fact exists, we may hold, was correct. But if we’re eliminativists about meaning or content, then it seems difficult to justify the correctness of this reasoning. So in some ways, then, eliminativism would have to account for how one could use reasoning based on the fact that there is thought about some subject in order to reach valid conclusions, when in fact there is, in some sense, no such thought.

  8. @Jochen: Thanks for the reply. Yeah, I’d agree eliminativism just seems hard to swallow. Cogito Ergo Sum does *seem* to put a radical eliminativist like Rosenberg in a tough position, though at the least it seems the matter of whether we exist because we think isn’t settled among philosophers?

    Beyond the question of intentionality’s true nature I find myself of two minds on the balance between metaphysics and scientific discovery. On the one hand I do see how metaphysics extends beyond current science and possibly what science can ultimately tell us, and as such there are underlying metaphysical assumptions in the thought experiments of physicists (the existence of definitive laws might be one such assumption?).

    On the other hand it seems odd to me for people to insist a particular ontology like panpsychism, materialism/naturalism, etc. should be seen as the correct metaphysics when in fact we simply don’t have solutions to certain problems like the Hard Problem. Naturalism, at least as described in the tenets of the linked site, seems to go beyond what is known scientifically even about matter – something Bitbol notes in his examination of the Empirical Stance of a guy named Bas van Fraassen (Google tells me he’s a philosopher at San Fran State):

    http://www.academia.edu/15598837/A_MORE_RADICAL_CRITIQUE_OF_MATERIALISM._A_dialogue_with_Bas_Van_Fraassen_about_matter_empiricism_and_transcendentalism

    The second part of the paper goes further into QM than I could keep up with, but you’ll likely find it an easier go. The basic critique, that we don’t know enough about matter to characterize it properly, I got, since Chomsky said the same thing a while back. Bitbol notes spatial extension and the existence of particles aren’t even definite at the QM level, and as far as this layperson knows he’s correct about that?

  9. Having ‘a thought’ simply raises the question of how that was sensed to begin with?

    When people have sensory crosswiring, where they can ‘taste’ certain colours, we don’t start attributing taste to colours. Perhaps if everyone did it we would, but as you can see, that just drifts into argumentum ad populum.

    So what is the sense of a thought? What if it’s merely crosswired to report a ‘thing’, when there’s no such thing or something very different from what it reports as sensed?

  10. Hi Peter,

    First, many thanks for posting on the zombie conversation. I didn’t know at the outset where talking with Hal would lead, so it was fun finding out. But I did want to make the following points about conscious experience, all of which come up in the dialog:

    It’s unequivocally real; it’s categorically private; it doesn’t and can’t figure in 3rd person explanations of behavior; and it likely has to do with specific functions or functional architectures related to behavior-guiding representation. This last point suggests that human behaviorally-equivalent, but not functionally or physically identical zombies might be possible, hence the possibility of Hal. But please note: I’m currently agnostic on whether a human behaviorally equivalent system without experience, e.g., Hal, could actually exist. It all depends on what the correct explanation of consciousness turns out to be.

    Speaking of which: the conversation also suggests, following Metzinger, that any representational system (RS) with our capacities will have basic, non-decomposable elements used in representing/describing the world, which as a result are not describable and thus ineffable (a basic characteristic of qualities). But if something like IIT is right (a very open question at this stage, of course), then the existence of qualia for the RS depends on its having a certain sort of functional architecture. In any case, given the evidence-based hypotheses thus far, I don’t think that “qualia are apparently untethered from physical cause and effect” since after all any RS will be physically instantiated.

    You say: “Some people conceive of qualia as psychological items that add extra significance or force to experience.” I’d say that qualia, in various combinations and usually bound into an overarching gestalt, are *all there is* to experience. What some people suppose, I think, is that experience adds motivational force to what would otherwise be merely physical goings-on. As Anthony Cashmore puts it, “I suggest that … consciousness heightens our desire to listen to music, for example, or to watch or participate in sporting activities.” (“The Lucretian swerve: The biological basis of human behavior and the criminal justice system.” Proceedings of the National Academy of Sciences, 2010. www.pnas.org/cgi/doi/10.1073/pnas.0915161107) But of course this raises the insoluble problem of dualist interactionism.

    You say “Perhaps talking to a robot would convince us in the end that in fact, we don’t have qualia either: that they are just a confused delusion.” I don’t see this as a live possibility since experiences like red, pain, etc. are our primary, untranscendable reality as subjects, a representational reality in terms of which the represented world is presented to us.

    Your last contemplated possibility, that other sorts of systems might be super-qualia-fied, so to speak, when compared to us, rings true to me. The range of possible qualia is only limited by the range of what the system is capable of encoding in its multi-dimensional state spaces. As rich as our conscious lives are, they probably aren’t the last word in experience. Speaking of which, I think everyone should try a hallucinogen at least once before they die, just to get a glimpse of other experiential possibilities. That and a few lucid dreams.

    Thanks again!

  11. Tom: “Speaking of which, I think everyone should try a hallucinogen at least once before they die, just to get a glimpse of other experiential possibilities. That and a few lucid dreams.”

    I also recommend people try something like flunitrazepam or some other amnesia inducing drugs to get a good phenomenological handle on the profound degree to which memory conditions experience. (Trying these drugs in extra-legal contexts, however, is very morally fraught since they are generally sold as date rape drugs on the black market).

    “You say “Perhaps talking to a robot would convince us in the end that in fact, we don’t have qualia either: that they are just a confused delusion.” I don’t see this as a live possibility since experiences like red, pain, etc. are our primary, untranscendable reality as subjects, a representational reality in terms of which the represented world is presented to us.”

    Do you have any account of metacognition, Tom? This is a very strident, categorical claim you’re making here, and I find myself wondering what kind of metacognitive capacity would be required to warrant it. I have many experiences all the time, but no experience of ‘representational realities’ whatsoever. To infer my experience possesses such a nature with such conviction, it seems to me, requires both information access and cognitive capacity. The problem, here, is that it seems wildly implausible to suppose we would evolve anything but opportunistic access and problem-specific capacities–nowhere near what would be required to make categorical statements regarding the nature of experience.

  12. Hi Jochen,

    Thanks for your very interesting and on point comments. You say: “But if Hal is right, and he has no subjective experience, and he talks the way he talks because he lacks that subjective experience, then it would seem that qualia do have some causal force after all—at least enabling us to talk differently from Hal.”

    The difficulty here of course is that of interactionist dualism: how would qualia, if different from their neural correlates, exert causal power in producing the report “I feel pain”? There’s no good account of how this would work that I’m aware of. On the other hand, if qualia are the same thing as their correlates, then there’s no problem assigning causal power to them: they have the exact same causal powers as their correlates. But if that’s the case, then the experience of pain per se doesn’t play a causal role in the report of pain above and beyond what neurons are doing. So, on the identity thesis, experience doesn’t add to the causal powers of conscious neural processes, as for instance compared to unconscious neural processes. And indeed, experience is never cited in 3rd person causal accounts, e.g., neuroscientific accounts, of behavior.

    The way I see it, certain representational processes non-causally entail (necessitate) the existence of experiences for the system, such that there are two non-interacting explanatory spaces, one subjective and phenomenal, the other objective and quantitative (“Respecting privacy” at Naturalism.org). For an experience to be reported, there needn’t be (and isn’t) a causal “push” from the experience to the reporting mechanism. Rather, there are two trains of events, one phenomenal, one neural, which match up pretty well most of the time (phenomenal-physical parallelism). If I truthfully report pain there’s usually some sort of physically detectable bodily event to which both my report and my experience correspond. But there is no causal interaction between pain and its correlates or the reporting mechanism, which handily solves the problem of mental causation (there is none), but without making experience epiphenomenal (something can only be epiphenomenal if it’s in the same explanatory space as what it’s epiphenomenal with respect to).

    As to your question whether a creature with robust conscious experience could truthfully report not having it, that seems unlikely if there’s a consistent phenomenal-physical parallelism per my suggested story above.

    Re your reluctance to harm Hal: I think we’re hard wired to take behavior, in particular that expressive of what we normally assume are conscious states (e.g. of pain), as sufficient evidence for those states. So even if Hal claims he doesn’t experience pain, his behavior, in particular his expressed strong intention and propensity to avoid damage (including thrashing and yelling) is enough to make us think twice about inflicting harm on him. Why? Because we can’t help but suppose he does feel pain. But of course this doesn’t prove he feels it.

    “Hal’s not really a zombie, is he? I mean, he’s not functionally identical to a conscious being—quite the opposite.” Right, he’s not your physically identical (hence functionally identical) zombie of Chalmers’ zombie argument against physicalism. But he is (he claims) not a conscious subject, so is a human behaviorally-equivalent zombie. Whether Hal could actually exist is I think an open question.

  13. Hi Tom, thanks for your response. I actually agree with a lot of it, but that won’t be obvious since it’s no fun talking about things one agrees on, hence, I’m just stating this up front.

    I’ll start with one point of agreement anyway.

    The difficulty here of course is that of interactionist dualism: how would qualia, if different from their neural correlates, exert causal power in producing the report “I feel pain”? There’s no good account of how this would work that I’m aware of.

    Right, and that’s basically the objection that, to me, makes interactionist dualism a non-starter. In fact, I think a stronger statement can be made: in a sense, no causal interaction can be possible, because the realm of the physical just is co-extensive with that with which we can causally interact—after all, ultimately, things in the world are only known to us by the way they react to causal probes. Say you had the ghost of a billiard cue, capable of striking balls; then by virtue of being so capable, it has some physical aspect, detectable via its causal interactions with billiard balls.

    On the other hand, if qualia are the same thing as their correlates, then there’s no problem assigning causal power to them: they have the exact same causal powers as their correlates. But if that’s the case, then the experience of pain per se doesn’t play a causal role in the report of pain above and beyond what neurons are doing.

    Hm, I’m not sure I understand you correctly here. If qualia are the same thing as their correlates, either type- or token-identical to them, then the experience of pain does have a causal role—the experience of pain is then just the same as, say, C-fibers firing. So ‘I cried out because I felt pain’ just says the same thing as ‘I cried out because of C-fibers firing’.

    The next part, too, seems to me to suggest that you’re not really advocating an identity theory—you speak of different ‘trains’ of physical events and mental experience, and seem to imply that they can fail to match up. But if that’s the case, then there is an issue regarding mental causation—because then, you don’t say ‘I see a beautiful red rose’ because you have that very experience, but rather, because of the neural correlates to that experience.

    However, I’m also not quite clear on your notion of ‘explanatory space’. I would interpret it as follows, but I’m not confident it’s what you intended: basically, there’s one single set of events that unfold, but (at least) two ways of accounting for it—a third-personal and a first-personal one. On the third-personal account, we have a simple chain of causality, say involving a shin hitting the edge of the table, c-fibers firing, and the production of air pressure variations encoding various expletives any children in the vicinity will immediately pick up on and start chanting.

    On the first-personal account, you feel pain, and hence, you curse (and then feel embarrassed thanks to the dirty looks by the children’s mother)—itself also a chain of causality, but in some sense incommensurable with that of the third-personal account. If that comes somewhat close to what you have in mind, then I can relate to it, to a degree. It reminds me of the way in which, in physics, events may have complementary descriptions—with the most popular being the complementarity between the wave and particle pictures. Depending on the context (usually taken to mean a set of measurements), the same physical events may have different (even incompatible) descriptions, which nevertheless are jointly necessary to fully account for the phenomena. (Of course, I’m speaking metaphorically here, and not implying some sort of quantum mechanical description of the mind/brain relationship.)

    Generally, this sort of thing would probably be considered to be some form of dual-aspect monism, but to me, this characterization doesn’t adequately account for the exclusivity of each aspect—looking at the world in terms of a strict physical context, you get things like Leibniz’ mill, where there is quite literally no room for the mental; while looking at it in mental terms, the physical is just as hard to come by: even examining your own brain, you really only have access to its phenomenal aspect. So from one explanatory frame, the other is inaccessible.

    As for me not wanting to harm Hal, I’m not sure this simply reduces to some ingrained assumption of a capacity for suffering. You can argue that what really counts morally is a capacity for having goals, and it’s not obvious that that requires consciousness. In fact, eliminativists probably have to argue something like this eventually. If we were ever to meet an alien race, and we’re sure that they don’t have any conscious experience (say we’ve figured out what’s physically necessary for it, and they don’t have it), do we really have no moral obligations towards them whatever? I don’t believe so, but then, there must be some other criterion for moral value than a capacity for suffering.

    But he is (he claims) not a conscious subject, so is a human behaviorally-equivalent zombie.

    Hm, not sure about this: surely, it’s part of human behavior to claim to be conscious? So it seems there’s a difference, behaviorally, between us and Hal.

    Whether Hal could actually exist is I think an open question.

    Well, as long as we’re merely talking possibility, not plausibility, Hal could always just generate its utterances randomly, coming up with your dialogue purely by chance. I think it’s very difficult to argue for any sort of phenomenal consciousness in that case.

  14. Thanks Jochen. A few responses/clarifications, although overall I think we’re on basically the same page:

    Re explanatory spaces, you say that “there’s one single set of events that unfold, but (at least) two ways of accounting for it—a third-personal and a first-personal one.” The way I’d put it is that there’s a physical, objective set of causally related events that we appeal to in 3rd person, e.g., neuroscientific, explanations of behavior, and then there’s a closely correlated stream of experience that we appeal to in everyday folk explanations. So there are two parallel causal explanatory stories about the same behavior. As you put it, my crying out (the behavior in question) gets explained in two ways, one involving c-fibers and other physical mechanisms, the other involving my experience of pain. My suggestion is that we shouldn’t try to mix these explanations, and that any attempt to do so will raise the insoluble problem of phenomenal-physical causation. For more on the idea of explanatory spaces see “Respecting privacy,” http://www.naturalism.org/philosophy/consciousness/respecting-privacy

    “Generally, this sort of thing would probably be considered to be some form of dual-aspect monism, but to me, this characterization doesn’t adequately account for the exclusivity of each aspect—looking at the world in terms of a strict physical context, you get things like Leibniz’ mill, where there is quite literally no room for the mental; while looking at it in mental terms, the physical is just as hard to come by: even examining your own brain, you really only have access to its phenomenal aspect. So from one explanatory frame, the other is inaccessible.”

    Experience presents the world in qualitative terms to each of us as subjects, whereas qualities don’t figure in 3rd person, objective descriptions. But, as you might be suggesting (not sure), this doesn’t mean that the world itself is a single substance that has two aspects (I don’t think the idea of substance or stuff really applies). Rather, the world appears to us as knowers in both qualitative (subjective, experiential) and quantitative/conceptual (intersubjective, physical) terms. We don’t find qualities anywhere in the world as described physically, but of course they are omnipresent as the world is given to us in experience. Why qualities come to exist for certain sorts of representational systems like us, as described in non-qualitative, objective terms (e.g., physically instantiated functions), is the problem of consciousness. But we’re never going to find qualia themselves in the world as thus described, hence the exclusivity you advert to.

    This also means, as you correctly surmise, that I don’t think an identity theory is in the offing. Were pains identical to physical goings-on, then they would play causal roles in 3rd person explanations. But for a pain just to *be* c-fibers, etc., it would have to exist intersubjectively like c-fibers, etc. exist, that is, be a possible object of observation and objective description. But qualities like pain exist only for the system (hence the problem of other minds). So the way I see it, their causal role is restricted to 1st person subjective explanatory space.

    “So it seems there’s a difference, behaviorally, between us and Hal.” Yes, but I was aiming for human behavioral equivalence in terms of cognition and behavioral complexity and flexibility, not the behavioral indistinguishability of the classic philosophical zombie. The latter, of course, would claim to have experience, whereas Hal (an example of the former) doesn’t.

    “You can argue that what really counts morally is a capacity for having goals… there must be some other criterion for moral value than a capacity for suffering.” Interesting, and you may be onto something here. At the moment, I’m not sure that a system’s having goals qualifies it as a moral subject, something for which we should feel concern. Without sentience, after all, a system doesn’t really care about achieving anything, so why should we care about the system? It isn’t as if we’re causing it frustration, annoyance, pain, or depression, should we prevent it from achieving its goals. But still, it does seem a bit brutal to thwart a human-level zombie (if such can exist) intent on its mission in life – a seeming that I think comes from projecting our experienced motivations onto it.

  15. Hi Scott:

    “I also recommend people try something like flunitrazepam or some other amnesia inducing drugs to get a good phenomenological handle on the profound degree to which memory conditions experience.”

    Indeed. After an endoscopy for which I was sedated using such a drug, my wife drove me home. She says I was completely conscious all the way back (awake, eyes open, listening, responding coherently), but I have no memory of the drive. Quite the revelation!

    “I have many experiences all the time, but no experience of ‘representational realities’ whatsoever.”

    Right, since we experience the world, not our representations of it. But qualia themselves are a representational reality in the sense that we can’t transcend them. We’re stuck with the various qualitative particulars in terms of which objects in the world appear to us and in which we conduct our conceptual transactions (the auditory and visual experience of using language, as in reading this sentence).

    “…it seems wildly implausible to suppose we would evolve anything but opportunistic access and problem-specific capacities–nowhere near what would be required to make categorical statements regarding the nature of experience.”

    This claim itself is pretty categorical and involves a good deal of metacognition (knowledge about our cognitive capacities as gleaned from science). So I’m not sure why the claim that we can’t transcend our experience is off limits. Plus, when thinking about our situation as conscious creatures, it seems dead obvious.

  16. Tom: “This claim itself is pretty categorical and involves a good deal of metacognition (knowledge about our cognitive capacities as gleaned from science). So I’m not sure why the claim that we can’t transcend our experience is off limits. Plus, when thinking about our situation as conscious creatures, it seems dead obvious.”

    There’s a growing mountain of empirical support for the notion that cognition is heuristic, and a growing body of work that metacognition is heuristic as well. The problem isn’t that claims that ‘experiences are representational realities’ are categorical, but that it’s unclear what justifies the ‘dead obviousness.’ If you accept that metacognition consists of ‘fast and frugal heuristics,’ then you accept that it solves by strategically neglecting elements of what’s actually going on. Intuitive ‘dead obviousness,’ on this way of looking at things, is far more likely a symptom of neglect than ‘evidence.’

    Either way, you’re still on the hook for adducing the information evidencing ‘experience = representational realities’ (and how could you not be?).

    So what is that information?

  17. Scott: “you’re still on the hook for adducing the information evidencing ‘experience = representational realities’ (and how could you not be?).”

    As I suggested above, the evidence consists in the fact that we can’t transcend qualitative experience in how the world appears to us, even when we traffic in concepts, as for instance when doing science. You think we *can* transcend experience, but I’m not sure on what basis. On the view I currently hold (following Metzinger for the most part), it’s the systematic limitations of recursive representation in a system (a type of necessary blindness to its own representational goings-on) which might account for phenomenal experience in the first place. Whereas on your view, the metacognitive blindness and neglect you cite prevents us from forming any plausible hypotheses about experience at all, and extends to skepticism about its very existence. But why aren’t you skeptical about *that* claim as well, given the fact that cognition is merely heuristic?
