HOT death…

You may have seen the very interesting review that Micha kindly mentioned recently. I plan to discuss that next week, but before talking about Higher Order Theories (HOTs) it seemed best to set the context by talking about Ned Block’s paper which seeks to demolish them.

Higher Order theories come in many flavours, but the basic proposition is that a mental event, a thought or a feeling, is conscious if there is another thought about it, the original mental event. The basic intuition is that you can be aware of something unconsciously, but when you’re aware of your awareness it’s conscious.

Not everyone likes this perspective (Roger Penrose caustically pointed out that pointing a video camera at itself doesn’t make it conscious) and some would say either that it only explains certain varieties of consciousness or that it only explains certain aspects of consciousness. Nevertheless, HOTs have had a long run as respectable contenders, representing one of the major areas where we might choose to look for The Answer.

Block’s paper last year was unusual in seeking to offer something like a knock-down destruction of the case, or perhaps it would be more accurate to say he set out to pursue the HOTists until they had been flushed out of their last hiding place. His targets were not the modest theorists who claim that HOTs may explain some varieties or aspects of consciousness, but the ones he characterised as ‘ambitious’: in particular those who claim HOTs could explain the ‘what-it-is-likeness’ (let’s call it WIIL) of conscious experience.

The starting point is the uncomfortable fact that we can have thoughts about being in conscious states we’re not in fact in. We can think we’re seeing red when in fact we’re seeing green, or not seeing anything at all. You might well feel that this in itself is something of a blow for HOTs, and your first reaction might be that they should give up any claim of a conscious state where there is no state to work with. That sounds sensible but the retreat is not so easy as it seems if we want to retain WIIL for dreams and illusions, as we surely do.  In any case, Block’s targets take the opposite path: sure there can be WIIL in those cases, they reply.

Now Block springs the trap. So you’re saying, he observes, that an episode is conscious if it is the object of a simultaneous higher-order thought? That thought, then, is a sufficient condition for a conscious episode. But it’s also a necessary condition of a conscious episode that it be the object of a higher-order thought. In this case we have only the one, higher-order thought to work with (and we can assume it ain’t self-referential), so there is no conscious episode. We’ve got necessary and sufficient conditions which are not compatible – what madness is this?

There is a way out which those unused to philosophical discussion may find a little odd: this consists of saying that in these cases, where we have a second-order thought about an experience we’re not actually having, there is an object of the second order thought after all: it’s simply one that doesn’t exist, that’s all.  We must remember here that the objects of thought are slippery customers; we often think about things that don’t exist (Pokemon, the sixth wife of Henry VII, the house I would have built if I had won the lottery, square circles).

But, says Block, if you take that route, where’s your WIIL (or maybe in this case it should be WIIAL –What It Isn’t Actually Like)? It’s now fake WIIL – and you can’t rest content with that.

I’ve omitted some important technicalities and details in the foregoing, but I hope the gist comes through: where does it leave us? It seems an effective argument to me, and I would add that for those who are seeking the essence of WIIL, it seems intuitively unlikely to me that it could ever have resided in thoughts about thoughts: that brings it all back into the head, whereas what it is like ought to be out there with ‘it’. Those who never believed in WIIL will not, of course, be troubled by any of this.

So following Block’s demolition, the advocates of HOT admitted their error, thanked him for clarifying, and issued a full retraction. No, of course they didn’t, as we shall see…

36 thoughts on “HOT death…”

  1. The URL to Block’s paper is not working because the CE address is unnecessarily listed before the Block address.

  2. Peter,

    Well, I wrote a response about this, so I felt I ought to speak up. (Here’s a link to the full paper: http://www.class.uh.edu/faculty/jweisberg/__docs/Abusing%20WIL-%20J%20Weisberg.pdf )

    The gist is this. Some mental states are conscious; some are not. What’s the difference? HOT theory says we’re aware of ourselves as being in the conscious states. It then posits a mechanism to explain this inner awareness: a kind of metarepresentation–a scanning of your mental states by a higher-order system. To be conscious, a state must be properly represented by this system. Ok so far?

    Now it turns out the scanning mechanism can deliver a false positive (in principle, at least). What happens then? Answer: subjects can’t tell the difference between that and an accurate case. Is this already impossible or incoherent? I don’t see it. What I think is that Block already thinks that a scanner/HO representation can’t do the “real what it’s like” job. Why? Because he’s an avowed qualia-phile–he already knows there are qualia. HOT theorists reject this from the get-go. So it’s not a surprise that this “real” thing can’t be accounted for in HOT theory.

    So, the question for the HOT theorists seems to be: what state is conscious in the false positive case? It can’t be the state represented–that doesn’t exist. And it can’t be the HOT–we’re not aware of that state, ex hypothesi. My answer (and Rosenthal’s) is that it’s the state we’re aware of being in, just as we said in the beginning. That’s how things SEEM to the subject; that’s what matters to her. Yes, that state doesn’t exist, but the subject is aware of herself as being in it. And, according to HOT theory, that’s all that matters for consciousness.
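
    If it helps, here is a deliberately crude sketch of that structure (a Python toy I made up just for this comment; the names FirstOrderState, HigherOrderThought and what_is_it_like are invented, and nothing here is anyone’s actual model of the mind):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class FirstOrderState:
        content: str                        # e.g. "seeing red"

    @dataclass
    class HigherOrderThought:
        ascribed_content: str               # what the subject is represented as experiencing
        target: Optional[FirstOrderState]   # None in the false-positive case

    def what_is_it_like(hot):
        # On the 'ambitious' reading, how things seem to the subject is fixed by
        # the higher-order representation alone, whether or not a matching
        # first-order state actually exists.
        return hot.ascribed_content if hot else None

    accurate = HigherOrderThought("seeing red", FirstOrderState("seeing red"))
    misfire = HigherOrderThought("seeing red", None)   # the scanner misfires

    # From the inside, the subject cannot tell these apart:
    assert what_is_it_like(accurate) == what_is_it_like(misfire) == "seeing red"

    The point of the toy is just that the higher-order representation does all the work of fixing how things seem; the accurate case and the misfire are indistinguishable for the subject.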

    By the way, Block’s “real WILs” can occur even though the subject is totally unaware of them. So, for Block, a real what it’s like may not be AT ALL present to the subject, it may make no impact at all on what she’s aware of, what she reports, what consciously influences her behavior, etc. That, to me, doesn’t sound like any “what it’s like” I’ve experienced. Sounds, well, fake, to be honest.

    Anyway, HOTs are sufficient for consciousness. Yes, that’s the view! I am aware that this strikes most people as impossible or incoherent, but I diagnose that as a symptom of latent (or active!) qualo-philia–a very unpleasant condition. The necessary condition makes no mention of real episodes or whatever–it just says: for a state to be conscious, we must be properly aware of it. It is neutral (we claim) about whether what we’re aware of must exist or if it can be like Macbeth’s dagger. Now, why can’t it be like Macbeth’s dagger? We think that theoretical reasons (and commonsense ones as well) suggest the Macbeth view. And, in the end, if that goes against some commonsense (or philosophical) intuition about the nature of consciousness, so be it. Sometimes good theory trumps intuition.

    Sorry for the long-winded post, but since I’m on record saying Block is wrong, I thought I ought to speak up and explain why we haven’t yet got around to the full retraction.

    (PS I’ve enjoyed lurking around your blog for a while now! Cheers, Josh)

  3. Josh: “The gist is this. Some mental states are conscious; some are not. What’s the difference? HOT theory says we’re aware of ourselves as being in the conscious states.”

    I’m puzzled. If I understand, what you are saying is that we must first be in a conscious state in order to be aware of ourselves as being in a conscious state. So why does a conscious state require the higher order reflection of one “being in a conscious state” before it is a conscious state? This seems incoherent.

  4. Arnold–

    The sentence you quote is confusing–my apologies. It does make it sound like we’re aware of already conscious states. That’s not the idea. Rather, mental states are conscious when we are aware of ourselves as being in them. Otherwise, they are not conscious.

    On Armstrong’s presentation of the theory, he suggests that consciousness may be a late evolutionary development, emerging only when internal scanning and integration processes develop, presumably with bigger-brained critters. (See also Dennett on this idea.) This helps bring out the contrast with the qualia-based view: qualia are generally seen as having an earlier evolution and a much more universal presence–perhaps co-existing with all physical matter in some way. The HOT view, by contrast, embraces “Armstrong’s iceberg”–the idea that most mental processes are nonconscious, while consciousness is merely the “tip” of the mind.

    PS I find that as people get clear on the commitments of the HOT theory, they often say, “oh, sure, it’s not incoherent. It’s just false.” And this is because it is seen as much too “thin” to explain the richness of conscious sensory experience. My own view is that this alleged richness is an illusion. Once this is cleared up, the HOT theory (and its empirical competitors–functionalist, representationalist, reductionist theories) makes more sense.

  5. Josh,

    I agree that the greater part of mind/brain events are non-conscious. But as a neuroscientist, I find myself searching for possible neurobiological referents for the terms “lower-order state” and “higher-order state”. In your paper “Abusing the notion of what it’s likeness: A response to Block”, you wrote “… the HO state is the state *responsible* for there being something it is like for the subject.” Here, it seems to me that you identify the HO state as that brain state that manifests *subjectivity*. Would you agree?

    Now if we want to understand phenomenal consciousness, we have to formulate a theoretical model of brain mechanisms having the structure and dynamics that can reasonably be said to constitute subjectivity. I’ve claimed that the minimal condition for subjectivity is a brain representation of *something somewhere* in a perspectival relation to a fixed locus of spatiotemporal origin in the brain — what I designate as the core self (I!). And I also claim that the brain mechanism that realizes subjectivity must be innate because we have no sensory apparatus for detecting the world space we live in. I call this brain mechanism the *retinoid system*. In this view, active neuronal patterns in the brain’s sensory modalities remain preconscious mental events until they are projected into egocentrically organized retinoid space, at which point they become part of our phenomenal experience — what it’s like for us. Would you call the pattern of activity in retinoid space a higher-order mental state? How does this fit with HOT?

  6. Hi Arnold.

    There’s an ambiguity, I think, in the idea of a “state that manifests subjectivity.” It could mean the state responsible for there being something it’s like for the subject–the mechanism or process producing the “subjective feel.” Or it could mean the state the subject is aware of being in–how things phenomenally seem to the subject. So, it may be that a HO state (a state tracking activity in another brain region) is responsible for there being something it’s like: if that state does not occur, there’s nothing it’s like for the subject. But it may seem to the subject herself when this occurs that she is in pain, or seeing red, or what have you–that is what the HO state “tells” her: you are in a pain state, or a red visual state, etc. So, her state of pain “manifests subjectivity” in this sense because she’s aware of it in the right way (and so it “lights up” for her).

    It sounds to me (and no doubt, there’s a great deal more detail to your model) that it may fit with the HOT theory. Projecting nonconscious sensory mental events into an egocentric space sounds similar to the idea of representing oneself as being in this or that sensory state. It would be intriguing to compare the views at a finer level of detail. How does your view relate to that of Damasio? His view has affinities to the HO approach, as he himself notes.

  7. Josh,

    Before discussing my model of subjectivity in comparison with Damasio’s model, I want to clear up the ambiguity about the “state that manifests subjectivity”. The retinoid model of consciousness claims that activation of the brain’s retinoid-space mechanism is responsible for there being something it’s like for the subject — this IS subjectivity/phenomenal consciousness/the “subjective feel”. It is the conscious primitive that can be described as simply being at the *origin of a volumetric surround* –the feel of being *here* within a surrounding space. The retinoid model of consciousness further claims that excitatory patterns in all of our other sensory/cognitive mechanisms remain pre-conscious brain events until they are projected into our retinoid space in proper spatio-temporal register where they become the enriching content of our phenomenal world — the space around us, manifest for us, the feel of what it’s like for us. Does this help clear up the ambiguity?

  8. What if, instead of talking about the requirements for a “conscious state” to be such, we talk about the requirements for a “conscious being” to be such?

    I think that what is true is that, in order to be a conscious being, a “conscious entity”, you must be aware of your own conscious nature.

    Now, we cannot scale this fact to each conscious state, we cannot get below the global frame.

    Rather than “I think therefore I am”, I would say, I know that I think therefore I am.

    Eventually there are always some infinite recursive components in all these kinds of reasoning. Whenever something perceives something, that’s it, unless we could define the intrinsic observer, referred to in the post as the “self-referential HO thought”.

    To me, this text points out, once more, that consciousness and sheer existence are so intimately linked that they mean the same thing.

  9. Using ‘aboutness’ to explain ‘consciousness’ is like using wind to explain air. This is the thing I’ve never been able to wrap my head around with HOT theories, the predilection to use obscurities to explain obscurities. Am I alone in thinking it all just sounds like rational psychology?

    The ‘of’ in ‘conscious of’ is the million dollar question, isn’t it? It seems to me that the explanation of intentionality is the very thing we’ll need before we can hope to explain consciousness.

  10. Scott: “The ‘of’ in ‘conscious of’ is the million dollar question, isn’t it? It seems to me that the explanation of intentionality is the very thing we’ll need before we can hope to explain consciousness.”

    Your confusion is understandable because most talk of intentionality is not based on the details of brain mechanisms that might give us intentionality. In my theoretical model of the cognitive brain, the relationship between a conscious experience and the thing it is *about* is explicated by the operating characteristics of retinoid space (consciousness) in recurrent excitation with pre-conscious synaptic matrices which represent objects and events, like this:

    [retinoid] <--> [synaptic matrices]

    It is only after events in the synaptic matrices are projected into our egocentric retinoid space that they can become phenomenal objects “out there” in our phenomenal world, which we can subject to further pre-conscious cognitive processing — and then back to retinoid space for our updated phenomenal experience.
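
    To make the projection idea a bit more concrete, here is a purely schematic toy (in Python; the names retinoid_space, ORIGIN and project are invented for the illustration, and this is not the retinoid model itself): a represented feature is placed at a location defined relative to a fixed egocentric origin.

    import numpy as np

    RETINOID_SHAPE = (9, 9, 9)   # a small volumetric "space"
    ORIGIN = (4, 4, 4)           # fixed locus of origin: the core self (I!)

    retinoid_space = np.zeros(RETINOID_SHAPE)

    def project(feature_strength, offset):
        # Place a pre-conscious feature into egocentric coordinates, i.e. at a
        # position defined relative to the fixed origin.
        x, y, z = (o + d for o, d in zip(ORIGIN, offset))
        retinoid_space[x, y, z] = feature_strength

    # A 'synaptic matrix' event, projected two units in front of the origin:
    project(1.0, (0, 0, 2))

    # On this toy picture, whatever is registered in retinoid_space is "out
    # there" relative to the origin; everything else remains pre-conscious.
    print(np.argwhere(retinoid_space > 0), "relative to origin", ORIGIN)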

  11. Perhaps I have never understood, but it seems to me that the notions behind HOT have been around longer than the trendy name. This is at least true in AI. In the 70s it was widely accepted that complex operating systems were on the brink of consciousness. In fact, a computer operating system is a complex, high-order system built from the lowest level digital states and then layer upon layer of abstraction. An operating system will observe its overall state (how it is doing, what programs are running, how hot it is, what is its current health, …). It even brings all of this information into a common 2-dimensional manifold – the video monitor. Nonetheless, we don’t believe that a modern computer is conscious, and if it is (by fluke) then no one can explain why. For starters, no one can make the anthropomorphic analogy between a process and a thought precise. Such a definition implies a mechanistic description of intentionality. Note, this has nothing to do with complexity; we could make an OS arbitrarily complex and the mechanics would be the same.
    If you endeavor to program a conscious machine you will find that the syntax of any conceivable programming language is inadequate. As so many have pointed out, a computer has no mechanism for interpreting syntax with ‘intention.’ Within the computer there isn’t an operating system, just a complex set of instructions for turning on and off hundreds of millions of logic gates every second. The meaning of that mechanistic cacophony is ascribed by us, as we watch the pixels flicker. As Steve Jobs said ‘…a computer is a bicycle for the mind.’
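
    In case it helps, here is a trivial sketch (Python; the self_report function is just something invented for the illustration) of a program ‘observing’ and reporting its own state. The monitoring is itself just more mechanism, and whatever meaning the report has is ascribed by the humans reading it.

    import os
    import sys
    import time

    def self_report():
        # Gather a few facts about this process's own state.
        return {
            "pid": os.getpid(),
            "cpu_seconds": time.process_time(),
            "loaded_modules": len(sys.modules),
        }

    if __name__ == "__main__":
        for key, value in self_report().items():
            print(f"{key}: {value}")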

  12. Jay:
    As so many have pointed out, a computer has no mechanism for interpreting syntax with ‘intention.’ Within the computer there isn’t an operating system, just a complex set of instructions for turning on and off hundreds of millions of logic gates every second. The meaning of that mechanistic cacophony is ascribed by us, as we watch the pixels flicker. As Steve Jobs said ‘…a computer is a bicycle for the mind.’

    Yes, but many will claim that the brain is just a complex set of instructions for operating trillions of synapses…. a computer is a bicycle for a motorbike?

    From a pure physical substrate point of view, what is there in the brain matter that is not in the computer? and, if the secret were in the architecture, what could it be?

    We have to go to ontology. The point is that the computer does not exist without you, and neither do the mountains or any other element in the Universe… but you do exist without them, and you need no other conscious beings in order to exist.

    The problem with HOT, as with most rational logical approaches to the problem of consciousness, is its logical nature itself: it will not work, for the same reason that ultimately physics will not explain the very nature of the fundamental components of the Universe. It will just provide laws to describe their behaviour and interactions (which is pretty good).

    What is the requirement for a piece of matter to be matter? and for energy to be energy? and for a thought to be a thought?

    We are stuck, cognitively closed. Probably we will just have to make ourselves happy with an extensive description of the NCCs, or a deep knowledge of neurophysiology; the rest will have to wait.

  13. Vicente: “From a pure physical substrate point of view, what is there in the brain matter that is not in the computer? and, if the secret were in the architecture, what could it be?”

    Along the lies of Vicente’s question, does anyone reading this know of any existing artifact that contains a part that has a volumetric analog representation of the space in which it exists?

  14. Arnold, lies = lines? or is it that critics are getting tough?

    “volumetric analog representation”… when you first asked this question, some time ago, I thought that you should specify the hierarchy level of representation. I mean, in your logic-gate layouts and representations of the retinoid space there seems to be this space mapping, but in terms of the physical nervous tissue and synaptic arrangements, is there really such an analog representation? or is it just in terms of the spatial distribution of neural activity? Analog but discrete? at the end of the day you are moving from a continuous real space to a discrete and discontinuous synaptic space, and yet phenomenal representations (mental images) seem to be continuous, which is weird.

    Why don’t you build such a device? and if you did, how would you check that it is actually conscious? I don’t know if finding an existing device equipped with that feature would be useful.

  15. Arnold: On your account, why is it we seem to have so much difficulty naturalizing intentionality? Why, just for instance, is intuition so drastically wrong about the ‘will,’ say?

    One of the reasons I don’t find approaches like yours very convincing is their penchant to simply posit new mechanisms to ‘explain’ this or that feature (or apparent feature) of consciousness. “What’s intentionality? The operating characteristics of this device over here.”

    My own hunch is that things are wonkier than an ‘accomplishment perspective’ can explain, that there’s a profound reason why we find consciousness so strange. What would impress me would be if your account could provide a single, parsimonious explanation for why intentional concepts/phenomena possess the general incompatibility with naturalistic thinking they do.

  16. Vicente: “… but in terms of the physical nervous tissues, and synaptic arrangements, is there really such an analog representation? or is it just in terms of neural activity space distribution.”

    Retinoid space really is a spatiotopic analog representation of the physical space around the core self (I!). This is why the putative brain mechanisms of the retinoid system lead to correct predictions in the case of the SMTT experiments. It is also why it provides the best solution for the Moon Illusion and for many other puzzling perceptual phenomena.

    Vicente: “Why don’t you build such a device? if so, how would you check that it is actually conscious?”

    I don’t know how to build such a device using current physical components. I doubt such a device, even if it were possible to build one, would be conscious. But evolution has apparently gone beyond our best technology and has given us a system of brain mechanisms having the functional property of such a hypothetical device.

  17. Scott: “What would impress me would be if your account could provide a single, parsimonious explanation for why intentional concepts/phenomena possess the general incompatibility with naturalistic thinking they do.”

    My account explains many of the empirically revealed phenomena of consciousness. The question of why “… intentional concepts/phenomena possess the general incompatibility with naturalistic thinking they do” is a separate question. In a nutshell, the reason is that conscious phenomena are in the first-person domain of description, whereas naturalistic thinking (science — consensual observations) is in the third-person domain of description. That is why I have proposed the following bridging principle for the scientific investigation of consciousness:

    *For any instance of conscious content there is a corresponding analog in the biophysical state of the brain*

    This is consistent with the metaphysical stance of dual-aspect monism.

  18. Vicente – I would agree with part of what you said, though the latter part sounds a little bit like philosophical idealism.

    As to your questions: Yes the brain is ‘a complex set of instructions for operating trillions of synapses,’ which cannot invoke ‘intentionality’ or subjective experience. To me that is reason enough to suspect the mind is something different. However, in a world where people can deny qualia, I understand that attacking the self-consistency of mechanistic descriptions of the mind can be more generically persuasive. It seems clear to me that all future mechanistic/functional theories will face the same fate. I suspect there is a deeper argument to be made, whereby all mechanistic theories will fail to be consistent. In any event, it just seemed odd to me that HOTs were pretty obvious ideas that had been explored and even exploited in other, very common domains without anyone claiming they created a ‘conscious entity.’

  19. Arnold,

    Would replicating the retinoid space neural architecture and the corresponding visual input, with ordinary electronics and software, work? if you manage to have the same (equivalent) electrical activity (firing and propagation), would it work? or what is the system element, or behaviour, or component, that really needs to be copied (emulated) in your artifact? architecture? currents and voltages? chemistry? all of them? in order to produce the conscious experience.

    Even if you do it, what kind of conscious experience would that be, lacking all the other ingredients that support the self? This is the point where HOT could play a role. The conscious experience is not just qualia, the cognitive frame necessary to interpret the situation is as important.

    Then:
    “My account explains many of the empirically revealed phenomena of consciousness”

    I know you will appeal again to the SMTT experiment and others, but I believe that regarding the phenomenal side it is not an empirical revelation… rather just a revelation.

    Jay:

    I quite share your view.

    What do you understand by “idealism” in this context?

  20. Vicente: “what is the system element, or behaviour, or component, that really needs to be copied (emulated) in your artifact? architecture? currents and voltages? chemistry? all of them? in order to produce the conscious experience.”

    Since I’m not omniscient, it would be safest to say “all of them” because the only convincing evidence for conscious experience is found in creatures with neuronal mechanisms having architecture, ionic currents and voltages, chemistry, etc. My retinoid theory of consciousness claims that activation of the neuronal architecture of the retinoid system is necessary and sufficient to produce conscious experience.

    Vicente: “The conscious experience is not just qualia, the cognitive frame necessary to interpret the situation is as important.”

    Yes, but our cognitive processes depend on the prior activation of the retinoid system to be effective in interpreting our phenomenal world. See Fig.8 here:

    http://people.umass.edu/trehub/YCCOG828%20copy.pdf

  21. To me idealism means the world consists of one knowable ‘substance’, and that ‘substance’ is mind. In a particular sense, the opposite of the materialist stance of many of our friends on this board.

  22. Arnold,

    OK, so then your artifact will need some more functions in addition to the spatiotopic representation module. The retinoid system is a necessary precondition, but not sufficient for consciousness, is it? Eventually you’ll end up designing a whole AI-complete brain simulation…

    Could you give me a reference in any anatomy/histology textbook to the brain’s retinoid structure?

    Jay: What does “knowable” mean? that it can know? that it can be known? a panpsychist approach? if the substance is one, how can there then be structures, and a variety of objects?

    Why is it that brains are required for consciousness? why don’t we see conscious phenomena all around, supported by other systems? a complex-system emergent property? do self-organising “knowable substance” complex structures give rise to consciousness?

  23. Vicente: “The retinoid system is a necessary precondition, but not sufficient for consciousness, is it?”

    Theoretically, activation of the retinoid system alone is necessary and *sufficient* for consciousness. But in this case it would be the most primitive level of consciousness — only a feeling of being at the origin of a surround. Also, it would be a completely locked-in kind of consciousness, without any way of expressing it overtly. Our normal rich conscious experience does require other pre-conscious cognitive mechanisms to be synaptically attached in recurrent excitation loops to our retinoid system as shown in Fig. 1.16 in *The Cognitive Brain* (MIT Press 1991) here:

    http://people.umass.edu/trehub/thecognitivebrain/chapter16.pdf

    Vicente: “Could you give me a reference in any anatomy/histology textbook to the brain’s retinoid structure?”

    As far as I know, the structure and dynamics of the brain’s putative retinoid system have not been referenced in any anatomy/histology textbook. It is a theoretical construct that is supported by a wide variety of empirical findings. I expect that with advances in the tools of neuroscience the structure of the retinoid system will be directly observed.

  24. What a wonderful discussion! It certainly serves to remind me just how whacked my own views are. I have difficulty with approaches like Arnold’s simply because I just don’t see how consciousness is explained so much as broken down into components and mapped across various candidate NCCs. I have difficulty with HOT theories (though I am a fan of Armstrong’s iceberg) because they beg the question of intentionality: saying we become conscious of brain activity when it’s taken up by some metarepresentational system requires some naturalized understanding of ‘metarepresentational.’ I’m much more sympathetic to Tononi’s notions of dynamic information integration: the further we get from ‘higher order’ and ‘thought’ the better!
    But this just illustrates the ‘Explananda Problem,’ doesn’t it? When we’re not even clear on what it is we’re trying to explain, how can any kind of explanation command any meaningful consensus? As a meaning skeptic, I think that intentionality either needs to be explained or explained away for any theory of consciousness to avoid begging the question.
    So what Jay says regarding what Harnad calls the symbol grounding problem: ‘If you endeavor to program a conscious machine you will find that the syntax of any conceivable programming language is inadequate. As so many have pointed out, a computer has no mechanism for interpreting syntax with ‘intention.’ Within the computer there isn’t an operating system, just a complex set of instructions for turning on and off hundreds of millions of logic gates every second. The meaning of that mechanistic cacophony is ascribed by us, as we watch the pixels flicker.’
    I actually think this is what the Retinoid and HOT theories are doing: anthropomorphizing neuromachinery, then crying, Eureka! What has me stumped is the question of what could convince me that I’m wrong, given my existing commitments.

  25. Scott: “I have difficulty with approaches like Arnold’s simply because I just don’t see how consciousness is explained so much as broken down into components and mapped across various candidate NCCs.”

    I have explicitly rejected the notion that consciousness (C) can be explained on the basis of sheer neural correlates (NCC). My view is that for an explanation of C we are obliged to formulate credible neuronal brain mechanisms that can be demonstrated to generate neuronal patterns that are analogs (in the similarity sense) of the conscious phenomena we seek to explain. Perhaps you will get a better idea of my approach if you read these two papers:

    http://evans-experientialism.freewebspace.com/trehub01.htm

    http://theassc.org/documents/where_am_i_redux

    Scott: “I’m much more sympathetic to Tononi’s notions of dynamic information integration:”

    I don’t see how information integration can explain conscious experience. It seems to me that if you accept Tononi’s explanation then you would also be obliged to accept a Google server center as a conscious entity. I don’t!

    Scott: “What has me stumped is the question of what could convince me that I’m wrong, given my existing commitments.”

    Read the papers that I linked and then we can discuss the matter in more depth.

  26. A lot of what Scott is saying seems to make more logical sense to me. I also agree with Scott especially when he argues that we can’t even agree on what we are trying to explain.

  27. Scott: ” I actually think this is what the Retinoid and HOT theories are doing: anthropomorphizing neuromachinery, then crying, Eureka!”

    There is a big difference between Retinoid theory and HOT theories. HOT theories are simply functional claims — essentially labeled black boxes and directional arrows without specifying the necessary neuro-machinery. Retinoid theory, in contrast, details the minimal neuronal mechanisms with the structure and dynamics competent to generate proper analogs of salient conscious experiences. We can actually perform empirical tests of the logical/phenomenal implications of the brain’s putative retinoid mechanisms. This is not “anthropomorphizing”.

  28. But you do see the problem, don’t you Arnold? Mapping various (interpretations of) phenomenal features of consciousness onto various hypothesized neural mechanisms is important work – there’s no doubt of that. And if the research bears those hypotheses out, so much the better: you have discovered WHERE this or that feature originates. But not HOW – at least not in any way most would find convincing.

    This is what makes the hard problem so damned hard. You need to find some way of spinning the intentional out of the causal, an explanation that somehow clarifies or dissolves the perplexities plaguing the former. Short of this, assertions that this or that intentional phenomenon arises in this or that way will strike many as a kind of ‘anthropomorphizing.’ (My own view is VERY extreme: I think all representational accounts are guilty of anthropomorphizing).

    Why, for instance, is consciousness normative as opposed to dispositional on the retinoid account?

    My own fear is that consciousness AS IT APPEARS will need to be explained away before it can be explained at all, that you could be right in all the particulars, but everyone will think you are somehow missing the point, simply because the consciousness they think they have doesn’t actually exist.

  29. Scott, I surely do see the problem. I have a forthcoming article that supports the view that thoughts about one’s own conscious experience can be expected to seem to be completely different from the brain activities that actually constitute these same thoughts.

    Would you elaborate on the distinction between normative consciousness and dispositional consciousness?

  30. Peter, yesterday I submitted a comment in response to Scott. It is still awaiting moderation. Is there a problem with this comment?

  31. I would love to take a looksee at that article, Arnold!

    Certainly consciousness is normative as we experience it: things are true or false, right or wrong, good or bad. This is one of the cornerstones of ‘subjectivity.’ The trick is one of explaining this in mechanistic or dispositional terms. Machines just do things, and complicated machines do complicated things. Whether they fail or succeed is a judgement that we make from the outside. We distinguish between competence and performance in a way that has spilled a tremendous amount of philosophical ink, and yet remains wholly mysterious.

  32. Sorry Arnold: sometimes the anti-spam software takes against a comment which has links in. I’m afraid I’m not always very quick about picking these up.

    PS – I’ve also finally fixed the link to Block’s paper.
