So now let’s look at the review by Richard Brown of Rocco J. Gennaro’s new book, which sets out his HOT.  I should disclose that I have not read Gennaro’s book, so I’m merely reviewing a review of a theory. You could call it a higher order review.

Gennaro’s own concerns feature concepts strongly: his theory includes conceptualism, the belief that concepts provide key structure for our conscious experience.  This causes him some difficulty over animals and infants, as he needs them to have sufficiently advanced concepts to form the required higher order awareness. He needs at least some account of how the required concepts are acquired (or how they came to be innate), and he needs to avoid getting into the bootstrap bind where you need to have sophisticated conceptual structures in order to pick up the concepts you need in order to have sophisticated conceptual structures.

My personal bias is that conceptualism tends to bring unnecessary complications, and in this particular case I’m not sure why we need to make the weather so heavy (though no doubt Gennaro has his reasons). All we need is the awareness of an awareness: the higher order awareness does not need to look through to the objects in the external world. The conceptual apparatus required surely ought therefore to be modest, and I wouldn’t fight very hard against the idea that it could be built in or be the result of a very early internal rewiring exercise in the infant mammalian brain.

The main point of interest for us is the difference which Brown sets out between his own view and Gennaro’s. This hinges on a fundamental point: are we talking about higher order awareness of a mental state (Gennaro), or of our being in that mental state (Brown)? At this point I’d say I find Brown’s view, which he characterises as the non-relational view, more appealing. It seems important to specify that the awareness involved must be ours, after all.

However, both sides claim that their view is best able to deal with the kind of objections raised by Block and discussed last time, arising from cases where we have the higher-order awareness but, through some error, the lower-order awareness it targets is not actually there. Gennaro, it seems, wants to disqualify these states altogether: where the first-order state is not there, there’s no consciousness. If you’re not seeing something red, you can’t have the subjective consciousness of redness. This is neat in its way, but seems arbitrary (how does subjective experience, of all things, come to have a special kind of immunity from error?), and surely we want to retain the possibility of a subjective experience not based in reality? I can see that some might argue that dreams lack real subjective experience, but are we prepared to say the same of illusions and mirages? That seems a high price to pay.

Brown’s escape route is quite different: by adopting the non-relational view he can cut free from the first-order state altogether. Who cares whether it’s there or not? It’s just about the right kind of second-order state, that’s all. This may seem a little weird, but after all the orthodox view of qualia, our subjective experiences, is that they are largely decoupled from ordinary causal reality. On Brown’s view, if we see green, we can have the experience of red without invalidating the experience. We’ll still behave as though we were having the experience of green.  Block might denounce our qualia as fake, but meh, if that’s what you mean by fake, all qualia were always fake, so who cares?

The logic of that seems faultless, but I couldn’t help feeling that my sympathies were swinging back to Gennaro somewhat. Brown mentions a second problem, the problem of the rock: why can’t we make a rock conscious by having the right kind of second-order mental state about it? For Brown’s decoupled  view, there’s no problem because we’re not even talking about the first order awareness – rock, schmock, it’s simply irrelevant. Gennaro does have to face the issue and it seems he seeks to do it simply by disqualifying rocks with an additional rule or specification. Brown considers this another unattractively arbitrary lash-up, and perhaps it is, but in another light it seems far closer to common sense (though of course the realm of common sense is some way off by now in any event).

For myself the net effect of the discussion is to make me feel more strongly than before that if we are to have a HOT (and I’m not absolutely wedded to that in itself), we’d do much better to stick with what Block calls the unambitious variety, the kind that doesn’t seek to explain subjective experience or exorcise those deadly sirens we call qualia.

149 Comments

  1. Arnold Trehub says:

    Peter and all,

    Here is an interesting study reported in *Consciousness and Cognition* that touches on several aspects of phenomenal consciousness:

    Synaesthetic perception of colour and visual space in a blind subject: An fMRI case study

    Valentina Niccolai, Tessa M. van Leeuwen, Colin Blakemore, Petra Stoerig

    Abstract

    “In spatial sequence synaesthesia (SSS) ordinal stimuli are perceived as arranged in peripersonal space. Using fMRI, we examined the neural bases of SSS and colour synaesthesia for spoken words in a late-blind synaesthete, JF. He reported days of the week and months of the year as both coloured and spatially ordered in peripersonal space; parts of the days and festivities of the year were spatially ordered but uncoloured. Words that denote time-units and triggered no concurrents were used in a control condition. Both conditions inducing SSS activated the occipito-parietal, infero-frontal and insular cortex. The colour area hOC4v was engaged when the synaesthetic experience included colour. These results confirm the continued recruitment of visual colour cortex in this late-blind synaesthete. Synaesthesia also involved activation in inferior frontal cortex, which may be related to spatial memory and detection, and in the insula, which might contribute to audiovisual integration related to the processing of inducers and concurrents.”

    The perception by a blind synaesthete of ordinal stimuli that are phenomenally ordered by color in peripersonal space is particularly interesting and suggests that what is a higher-order state and what is a lower-order state is undecidable without a model of the cognitive brain *mechanisms* that are the putative referents of such states. I wonder if you would agree that the peripersonal space in which the stimuli are ordered by the blind subject must exist as a mental state prior to the perception of the ordered stimuli. What do others here think about this finding?

  2. Richard Brown says:

    Hi, thanks for this great post! What pushes me towards the ambitious HOT theory are the empirical results (see the two papers I cite in the review, one by Lau and myself, and the other by Lau and Rosenthal)

  3. Peter says:

    Many thanks Richard – perhaps we’ll give these issues a further look at some stage.

  4. Arnold Trehub says:

    Hi Richard,

    I’ll ask again. Do you agree that the phenomenal peripersonal space in which the stimuli are ordered by the blind subject must exist as a mental state (1st-order state?) prior to the perception of the ordered stimuli? If not, why not?

  5. Vicente says:

    if we see green, we can have the experience of red without invalidating the experience

    This is why the whole thing is wrong.

    We never saw green in the first place, you could say: if the retina was stimulated with the frequency (the wavelength conventionally called green) and we had the experience “usually” related to the wavelength called red, then you just saw red in the first place. Just as colour-blind people have a different kind of experience.

    Then, we could have stimulated the visual cortex, or nerve, directly, with similar results.

    The point is that the brain generates those experiences. How? And what are those experiences? What are the greenness and the redness? Those are the questions, and all these considerations do nothing to answer them.

    Tell me what and how, and all the other questions will be solved up to any order.

  6. scott bakker says:

    I’m with you, Vicente: I don’t see how any of this does anything more than reorganize the mysteries. Just to add to your point, qualia also underscore the problem in ways aside from brute inexplicability. We always want to smuggle in the all important distinction between the subject of the experience and the experience itself (as HOT theories do with their ‘transitivity principle’), but we need to remember that this very distinction ALSO belongs to our explananda. We want to think consciousness in terms belonging to consciousness, which is to say, intentionally, so we posit dualities when we are actually talking about a singular system. The ‘Cartesian Theatre’ is not a global problem, it’s one that can be chopped up in innumerable little ways. Representationalism, or any account that assigns ‘content’ to meat, is simply setting up miniature theatres – or so it seems to me. We turn to it, because it is so amenable to the way we think that we find it difficult thinking about the problem otherwise. Even after centuries of wandering the same maze.

    Like I say, give me Tononi or Seth any day. Once we can actually quantify consciousness, then we can go about hunting down its actual correlates, then we can definitively decide whether animals and infants actually suffer conscious experiences, then we can start figuring out what the hell is going on.

    My own guess is that pretty much every staple of conscious experience is going to suffer a dismal fate similar to what the ‘feeling of willing’ seems to be suffering at the hands of science today. The single thing I find most amazing about neuroscience is the fact that, even though I AM the domain it studies, I am utterly ignorant of the things it keeps discovering about me. I see no reason why this trend should not continue. So my question is, What if the very structure of consciousness is simply an expression of this ignorance, the artifact of a profound – and quite natural – form of anosognosia? Consciousness as camera obscura.

  7. Arnold Trehub says:

    Scott and Vicente,

    We’ve been over this territory before. First, in order to understand phenomenal consciousness, we must accept our epistemological limitations. Just as we cannot fully understand how quantum processes produce all of the objects and events within our experience, so must we acknowledge that we will be unable to fully understand how brain processes produce our undeniable conscious experience. Second, in contrast to sheer correlation, all explanation is theory bound, so we cannot expect to understand consciousness without an explanatory theory. Third, overwhelming evidence indicates that consciousness is produced by brain processes, so the challenge is to formulate a theory of brain processes that can be demonstrated to explain/predict relevant conscious experiences.

    As for dynamic integrated information (Tononi), it misses the mark by a wide margin. If you accept the quantity of integrated information to represent the amount of consciousness in the system, it seems to me that you are also obliged to attribute consciousness to a Google server center.

    Scott, have you read my reply #28 in the previous thread? I think we could have a more fruitful conversation if you address the points made in the papers that I linked.

  8. scott bakker says:

    I’m not sure I understand, Arnold. At least we can generally agree on what it is the Standard Model – as it stands – cannot explain. Not the case here – ‘undeniable conscious experience’ indeed! Otherwise, I’m not quite sure how the following two points are relevant to my argument above.

    I’m also not sure how your analogy with the Google server counts against something like IITC. It remains to be seen how ubiquitous ‘conscious experience’ is.

    Everyone in the consciousness explanation biz, myself included, has to be wary of the ‘man with a hammer’ syndrome. When dealing with fuzzily defined explananda, it becomes easy to turn your explanation INTO your definition.

    It’s certainly something I fear I’m guilty of on occasion.

  9. Vicente says:

    Arnold, you are right, we’ve been here a few times already. Each time I revisit known land, I do it with the hope of finding some detail, some nuance, that could help my despair… or it is like walking around Kew Gardens: each time you find some plant you never saw before. Always the same, always different.

    I don’t deny that the SMTT provides insights into the problem… but just at a functional level.

    In physics, regardless of the very basic questions of cosmogony, maybe one day we could answer the “how”; as for the “what”, some issues will always remain under an epistemic limiting veil, for us. And even the “how” only by accepting the mathematical “platonic” scenario in front of you. I mean, a constant light speed, or entangled spooky interactions, just to mention a few, are beyond your (evolutive?) brain capabilities. So, I agree that it is not just consciousness that poses an epistemological problem.

    As the old Roman philosophers liked to say:

    Ignoramus et Ignorabimus

    Consciousness biz is even worse, we don’t know what we are talking about, we haven’t got the slightest sound idea to approach qualia.

    Scott points out that consciousness is (partially?) a result of our anosognosia (I would say human conscious experience rather than consciousness itself), and claims that along with qualia we have to solve the “subject/object” problem to see the whole picture. Fair enough. Or, recalling Peter’s motto, if the self is an illusion, who is it that’s being fooled?

    I would dare to say that this applies to our current level of consciousness… hopefully there could be higher modes…

    Regarding the free-will parallelism mentioned by Scott, it is very interesting. It is clear that (in practical terms) the lack of free will is very much related to “conditioning” and neural programming… the opposite of free thinking and high states of consciousness. The New Scientist site (dynamic blogroll) references a study showing that experienced Zen meditators are much more robust against subliminal brainwashing… so that their conscious field is expanded to include in the conscious space the messages (stimuli) that for ordinary people remain subconscious (subliminal).

    It is clear to me that, at least, the “Free Won’t” approach could make sense, and that it is directly related to brain rules breaking and clarity of mind.

    I say this, Arnold, to sort of justify that NCCs might not be all. Being inclined (just that) to some sort of dualism that could contribute to a final answer is not just a whimsical reaction; it also arises from observation (including introspection) and reasoning.

  10. Arnold Trehub says:

    Scott: “The single thing I find most amazing about neuroscience is the fact that, even though I AM the domain it studies, I am utterly ignorant of the things it keeps discovering about me. I see no reason why this trend should not continue. So my question is, What if the very structure of consciousness is simply an expression of this ignorance, the artifact of a profound – and quite natural – form of anosognosia? ”

    It seems to me that you frame the problem incorrectly. If you are familiar with the cognitive neuroscience literature you are NOT utterly ignorant of the things it keeps discovering about you. What you mistakenly call your ignorance/anosognosia is your firm BELIEF that the content of your subjective experience cannot possibly be anything like something happening in what you dismissively call the “meat” of your brain. This is a natural belief.

    Think about it. Why are you willing to believe that the earth is spheroid when your immediate experience is that it is a bumpy flat surface extending all around you? When you were a young child you probably did think the earth was flat. As you matured, education and awareness of relevant empirical evidence was sufficient for you to change your belief and accept the earth as a spheroid body rotating around our sun. Why are you unable to accept any account of consciousness that asserts that the content of your subjective experience is the activity of a particular kind of neuronal brain mechanism within your own head?

    If you read the draft of my forthcoming article “Where am I? Redux”, you will see that my working definition of consciousness is this:

    *Consciousness is a transparent brain representation of the world from a privileged egocentric perspective*

    What are your principled arguments against this definition?

  11. scott bakker says:

    Vicente: Don’t you worry that all intentional phenomena will suffer the fate of volition? It could be that they all stand or fall together.

    Arnold: I actually do think that subjective experience is the result of brain processes. I just don’t think that we have a clue as to what ‘subjective experience’ is – and for reasons not so different than the ones belonging to your Copernican analogy. We lack the perspective.

    But again, in the Copernican case, unlike ours, we possessed enough perspective to agree what it was we were trying to explain – the earth – in the present case we lack even that. Thus the ‘man with a hammer’ problem. When the object of explanation is as ductile as consciousness, we can literally game our interpretations of the explananda to meet our explanations part way. As cognitive psychology makes abundantly clear, this is what we do all the time anyway, and with matter far less fraught with mystery and ambiguity than consciousness.

    I haven’t had a chance to look at your paper yet, but I can tell you that from my perspective, unless you have naturalized accounts of these intentional concepts – ‘transparent,’ ‘representation,’ ‘privileged,’ ‘egocentric,’ and ‘perspective’ – I’m not going to be remotely convinced.

    Like I say, I have my hammers as well! And this is just my second-order point: I’m not telling you your account is wrong, only that short of naturalizing intentionality, you’re just not going to convince people like me. The same way that, short of decisively explaining qualia, you’re not going to convince Vicente.

    Why? Because we can’t even agree on what we’re trying to explain!

    You do acknowledge at least this much?

  12. Arnold Trehub says:

    Scott: “I’m not telling you your account is wrong, only that short of naturalizing intentionality, you’re just not going to convince people like me.”

    Perhaps we can agree after all, because I believe the retinoid model does naturalize intentionality. And I can point to decisive experimental results to support the claim.

  13. Vicente says:

    Scott, I am sorry I don’t understand your question. If you could clarify a bit.

  14. scott bakker says:

    Vicente: If you look at Wegner’s research, the ‘feeling of willing’ looks like a kind of post hoc inference that we confuse for something efficacious. For me, the interesting thing is the way we are prone to get it wrong: Why should we confuse something post hoc for something efficacious? The answer, at least in brute terms, is fairly obvious: because consciousness lacks the information required to make this determination. It has no access whatsoever to where the ‘feeling of willing’ lies on the neurocausal stream – quite a bit downriver, it turns out.

    When you consider that the overriding difficulty posed by intentional phenomena is the difficulty of making neurocausal sense of them, then we need to consider the possibility that the ‘brain blindness’ (or informatic parochialism) of conscious experience behind our mistaken ‘feeling of willing’ could be what’s plaguing all of them.

    Arnold: Point away!

  15. Arnold Trehub says:

    Scott, here is a summary of a series of experiments I conducted a long time ago.
    ============================================

    Complementary Neuronal and Phenomenal Properties

    In the development of the physical theory of light, the double-slit experiment was critical in demonstrating that light can be properly understood as both particle and wave. Similarly, I believe that a particular experiment – a variation of the seeing-more-than-is-there (SMTT) paradigm – is a critical experiment in demonstrating that consciousness can be properly understood as a complementary relationship between the activity of a specialized neuronal brain mechanism, having the neuronal structure and dynamics of the retinoid system, and our concurrent phenomenal experience.

    Seeing-More-Than-is-There (SMTT) If a narrow vertically oriented aperture in an otherwise occluding screen is fixated while a visual pattern is moved back and forth behind it, the entire pattern may be seen even though at any instant only a small fragment of the pattern is exposed within the aperture. This phenomenon of anorthoscopic perception was reported as long ago as 1862 by Zöllner. More recently, Parks (1965), McCloskey and Watkins (1978), and Shimojo and Richards (1986) have published work on this striking visual effect. McCloskey and Watkins introduced the term seeing-more-than-is-there to describe the phenomenon and I have adopted it in abbreviated form as SMTT. The following experiment was based on the SMTT paradigm (Trehub 1991).

    Procedure:

    1. Subjects sit in front of an opaque screen having a long vertical slit with a very narrow width, as an aperture in the middle of the screen. Directly behind the slit is a computer screen, on which any kind of figure can be displayed and set in motion. A triangle-shaped contour figure, with a width much greater than its height, is displayed on the computer. Subjects fixate the center of the aperture and report that they see two tiny line segments, one above the other on the vertical meridian. This perception corresponds to the actual stimulus falling on the retinas (the veridical optical projection of the state of the world as it appears to the observer).

    2. The subject is given a control device which can set the triangle on the computer screen behind the aperture in horizontal reciprocating motion (horizontal oscillation) so that the triangle passes beyond the slit in a sequence of alternating directions. A clockwise turn of the controller increases the frequency of the horizontal oscillation. A counter-clockwise turn of the controller decreases the frequency of the oscillation. The subject starts the hidden triangle in motion and gradually increases its frequency of horizontal oscillation.

    Results:

    As soon as the figure is in motion, subjects report that they see, near the bottom of the slit, a tiny line segment which remains stable, and another line segment in vertical oscillation above it. As subjects continue to increase the frequency of horizontal oscillation of the almost completely occluded figure there is a profound change in their experience of the visual stimulus.

    At an oscillation of ~ 2 cycles/sec (~ 250 ms/sweep), subjects report that they suddenly see a complete triangle moving horizontally back and forth instead of the vertically oscillating line segment they had previously seen. This perception of a complete triangle in horizontal motion is strikingly different from the tiny line segment oscillating up and down above a fixed line segment which is the real visual stimulus on the retinas.

    As subjects increase the frequency of oscillation of the hidden figure, they observe that the length of the base of the perceived triangle decreases while its height remains constant. Using the rate controller, the subject reports that he can enlarge or reduce the base of the triangle he sees, by turning the knob counterclockwise (slower) or clockwise (faster).

    3. The experimenter asks the subject to adjust the base of the perceived triangle so that the length of its base appears equal to its height.

    Results:

    As the experimenter varies the actual height of the hidden triangle, subjects successfully vary its oscillation rate to maintain approximate base-height equality, i.e. lowering its rate as its height increases, and increasing its rate as its height decreases.

    This experiment demonstrates that the human brain has internal mechanisms that can construct accurate analog representations of the external world. Notice that when the hidden figure oscillated at less than 2 cycles/sec, the observer experienced an event (the vertically oscillating line segment) that corresponded to the visible event on the plane of the opaque screen. But when the hidden figure oscillated at a rate greater than 2 cycles/sec., the observer experienced an internally constructed event (the horizontally oscillating triangle) that corresponded to the almost totally occluded event behind the screen. The experiment also demonstrates that the human brain has internal mechanisms that can accurately track relational properties of the external world in an analog fashion. Notice that the observer was able to maintain an approximately fixed one-to-one ratio of height to width of the perceived triangle as the height of the hidden triangle was independently varied by the experimenter.
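    The base-height matching result above can be made concrete with a purely illustrative toy model (my own sketch, not part of Trehub's study): if the perceived base width of the illusory triangle scales inversely with the oscillation frequency of the hidden figure, then keeping the perceived base equal to the height amounts to selecting a frequency proportional to 1/height. The scale constant `k` and the exact inverse law are assumptions for illustration only.

```python
# Toy model of the SMTT base/height matching task.
# Illustrative only: the "k / frequency" relationship and the constant k
# are assumptions for this sketch, not measurements from Trehub (1991).

def perceived_base(k: float, freq_hz: float) -> float:
    """Perceived base width of the illusory triangle; it shrinks as the
    hidden figure's oscillation frequency rises."""
    return k / freq_hz

def matching_frequency(k: float, height: float) -> float:
    """Frequency a subject would settle on so that perceived base == height."""
    return k / height

k = 12.0  # arbitrary scale factor (hypothetical)
for height in (2.0, 3.0, 6.0):
    f = matching_frequency(k, height)
    # taller hidden triangle -> lower matching frequency, as subjects report
    assert abs(perceived_base(k, f) - height) < 1e-9
```

Under these assumptions the model reproduces the reported behaviour: as the experimenter raises the hidden triangle's height, the matching frequency falls, and vice versa.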

    These and other empirical findings obtained by this experimental paradigm were predicted by the neuronal structure and dynamics of a putative brain system (the retinoid system) that was originally proposed to explain our basic phenomenal experience and adaptive behavior in 3D egocentric space (Trehub, 1991). It seems to me that these experimental findings provide conclusive evidence that the human brain does indeed construct phenomenal representations of the external world and that the detailed neuronal properties of the retinoid system can account for our conscious content.
    ==============================================

    Scott: “I can tell you that from my perspective, unless you have naturalized accounts of these intentional concepts – ‘transparent,’ ‘representation,’ ‘privileged,’ ‘egocentric,’ and ‘perspective’ – I’m not going to be remotely convinced.”

    In the SMTT experiments, the subject has no visual-sensory input of a triangle oscillating left and right on the screen in front of him, yet he has a vivid conscious experience of a triangle oscillating in this way in the space in front of him. This astonishing result was successfully predicted by the neuronal structure and dynamics of the retinoid model. Notice that the subject has no awareness of his own brain’s retinoid *representation* of a triangle moving back and forth in his brain. Instead the subject *sees* the triangle as “the real thing” oscillating left and right *out there* in the space in front of him. So his conscious experience is *about* the “unmistakable” triangle out there in his phenomenal world; i.e., intentionality is explained by the neuronal properties of the retinoid system. Moreover, the *egocentricity* of this conscious experience is determined by the fact that what is left and right and in front of the subject is fixed by spatio-topic locations with respect to the locus of perspectival origin in retinoid space — what I define as the core self (I!). And this egocentric perspective is *privileged* because it cannot be occupied by any other entity.

    What are your thoughts about all this?

  16. Vicente says:

    Scott, if you are asking me if I believe that consciousness is just an epiphenomenal by-product of neural activity (not yet understood), I would say that I don’t know, though probably yes for most of it.

    Of course, if there is no conscious agency, free will is an illusion.

    Careful with Wegner’s conclusions, one thing is the illusion of controlling a process, or an optical illusion, based on external factors, and a very different thing is the free-will illusion, which would be an intrinsic property of the system. One thing is to buy a lie, and another is to be a lie.

    Still I believe there is a slight hope for Free Won’t, truth will set you free.

    Still, phenomenology (consciousness) cannot fall at all; it is all there is for you. What the fully materialistic approach will guarantee is that everything will eventually fall into oblivion. Consciousness is based on a prior biological requirement. Think that once free energy (Gibbs) begins to be scarce (entropy reigns), life, which relies on biophysical processes very far from equilibrium, will vanish from the Universe at the first stages of the Universe’s thermodynamic lingering, and consciousness with it.

    So, your only hope is that consciousness could be decoupled from brains to some extent, with or without will.

  17. scott bakker says:

    Arnold: Fascinating stuff!

    Ignorance revealing question: Just generally, how is this not a version of the aperture problem? The kind of SMTT mechanisms postulated would be required of the visual system as a whole, wouldn’t they, given that every retinal cell in the receptive field is in effect a kind of ‘informatic slit’?

    “This astonishing result was successfully predicted by the neuronal structure and dynamics of the retinoid model. Notice that the subject has no awareness of his own brain’s retinoid *representation* of a triangle moving back and forth in his brain. Instead the subject *sees* the triangle as “the real thing” oscillating left and right *out there* in the space in front of him. So his conscious experience is *about* the “unmistakable” triangle out there in his phenomenal world; i.e., intentionality is explained by the neuronal properties of the retinoid system. Moreover, the *egocentricity* of this conscious experience is determined by the fact that what is left and right and in front of the subject is fixed by spatio-topic locations with respect to the locus of perspectival origin in retinoid space — what I define as the core self (I!). And this egocentric perspective is *privileged* because it cannot be occupied by any other entity.”

    I can see why this experiment carries a lot of water for you. The thing about your model I’ve always found the most impressive is the way it economizes ‘information capture.’ Nevertheless, I’m afraid that like Vicente I don’t see how this even scratches the Hard Problem – the question of why the retinoid system generates consciousness at all. But I also don’t see how this even begins to explain intentionality, as opposed to simply assuming it. A little replica, as Wittgenstein would say, does not a rule make. And that’s the problem with representations: they require normativity to make sense.

    You have a conscious perspective, one which we are inclined to understand in visual terms, so the notion of a neural system that is physically isomorphic to your visual perspective, a replica, is bound to intuitively appeal to you. It even seems to provide a replica of the ‘self’ – or at the very least a placeholder for it. But none of this explains why this particular set of replicas should be ‘true or false’ of the ‘original,’ when the natural world is filled with replications possessing no truth content one way or another (like DNA).

    On my own account, intentionality (aboutness, and so on) is best thought of as a family of ‘compression heuristics,’ a set of gerrymandered kluges allowing consciousness to assert systematic relationships in the absence of any real access to the actual causal histories behind their formation. Evopragmatic fictions. We have experience ‘of’ rather than ‘from’ because of the information horizon of consciousness.

  18. scott bakker says:

    Vicente: BEING the lie is indeed the crux of the problem. How do you make sense of the lie from within the lie, using only lies to find your way? This is the cornerstone of my own amateur attack on the Hard Problem. The first task is one of extracting the consciousness we do have from the one we don’t.

    Epiphenomenalism makes no sense to me for a number of reasons. For Wegner, the ‘feeling of willing’ isn’t epiphenomenal, it just isn’t causal in an intuition-friendly way (it’s a mechanism for assuming/assigning social ownership of actions). If all intentional phenomena are vulnerable to this kind of orthogonal neurocausal revision, then we are looking at a dismal picture indeed. (I’m afraid ‘free won’t’ makes as little sense to me as ‘free will,’ so my religious scruples remain wedded to the latter).

    The consciousness that you want decoupled might be a far, far cry from the one you want.

  19. Arnold Trehub says:

    Scott: “You have a conscious perspective, one which we are inclined to understand in visual terms, so the notion of a neural system that is physically isomorphic to your visual perspective, a replica, is bound to intuitively appeal to you. It even seems to provide a replica of the ‘self’ – or at the very least a placeholder for it. But none of this explains why this particular set of replicas should be ‘true or false’ of the ‘original,’ when the natural world is filled with replications possessing no truth content one way or another (like DNA).”

    A blind person has an egocentric perspective on the space around him/her just as a sighted person has. This subjective brain perspective IS consciousness — one’s phenomenal world. The Hard Problem will never be solved according to you or Vicente as long as you “rig” the game by demanding the ultimate TRUTH of the proposed explanation of consciousness — something that is not demanded of any other scientific explanation. TRUTH can only be determined within the structure of a *formal* system by faithfully following the pre-established rules of the system. The naturalization of consciousness is not like that! It is accomplished when consciousness is explained in biophysical terms, and the explanation is successfully tested within the norms of science. Science is a pragmatic enterprise; it is not omniscient; its claims are always open to revision and can never be taken as TRUTHS. Perhaps philosophy understands “explanation” in different terms.

  20. scott bakker says:

    “A blind person has an egocentric perspective on the space around him/her just as a sighted person has.”

    Or a camera.

    “A blind person has an egocentric perspective on the space around him/her just as a sighted person has. This subjective brain perspective IS consciousness — one’s phenomenal world. The Hard Problem will never be solved according to you or Vicente as long as you “rig” the game by demanding the ultimate TRUTH of the proposed explanation of consciousness — something that is not demanded of any other scientific explanation.”

    I find this very confusing. I’m not demanding the ‘ultimate truth’ of anything: just an empirical theory that plausibly explains the generation of consciousness, which is quite different from an empirical theory that plausibly explains how the brain maps environmental information. Is a CCTV camera conscious? Why not?

    Consciousness is an extraordinarily peculiar natural event in so very many ways. I’m sorry, Arnold, but I don’t see how your theory accounts for any of those peculiarities. A good empirical theory, I’m sure you’ll agree, demystifies, not remystifies. Your approach leaves me with all the same questions – or I should say replicas of them!

    Which brings me back to my original point: if you think your theory empirically explains consciousness, then you are not explaining what I call consciousness. I’m sure you’ve discovered how damnably difficult it is to convince anybody of ANYTHING in this field. IMHO, this is a good part of the reason why: not because we have different ideas about ‘explanation,’ but because we have different ideas about what needs to be explained.

    Have you had a chance to read Schwitzgebel’s Perplexities of Consciousness? I’m curious as to what Peter thinks of it…

    [I liked it. – Peter]

  21. Arnold Trehub says:

    Arnold: “A blind person has an egocentric perspective on the space around him/her just as a sighted person has.”

    Scott: “Or a camera.”

    Scott: “Is a CCTV camera conscious? Why not?”

    You seem to be missing the point. According to my definition of consciousness no camera is conscious, not even a CCTV camera. Why? Because no known camera (or any other known artifact) contains a perspectival representation of the volumetric space in which it exists (the world). So it can’t have a representation of *something somewhere* in perspectival relation to itself. In other words it lacks subjectivity, the essence of consciousness.

    Peter: “Have you had a chance to read Schwitzgebel’s Perplexities of Consciousness?”

    Yes. I’ve had many discussions with him. In fact. I’m one of the people he acknowledges in his introduction to the book.

  22. Arnold Trehub says:

    A correction to my post 21 —

    Scott: “Have you had a chance to read Schwitzgebel’s Perplexities of Consciousness?”
    Yes. I’ve had many discussions with him. In fact, I’m one of the people he acknowledges in his introduction to the book.

    Follow up —

    Scott: “Which brings me back to my original point: if you think your theory empirically explains consciousness, then you are not explaining what I call consciousness.”

    Then it would help if you told us clearly what you call consciousness.

  23. Vicente says:

    Arnold,

    Arnold: “perspectival relation to itself”

    For the case of consciousness the geometrical or spatiotopic perspective, provided by the I! locus in the retinoid space, is just one more element, and probably not the most important. What we really need is to understand how the psychological self perspective is generated.

    The I! locus, in combination with the visual cortex and the motor output, could be very important for a chameleon aiming and shooting its tongue at a target, but it is quite irrelevant to explaining why we are discussing this.

    The only way to really tackle this issue would be to show you that there can be conscious states in which the subject is not aware of being in any space: spaceless conscious states (SlCS). It would probably be easier for you to think of timeless conscious states, which are equivalent. It is the only way to prove that a retinoid structure is not a basic precondition for consciousness.

    Then, how to show it? You see the difficulty: what kind of experiment could you envisage so that a subject having an SlCS can convey the experience to you, except by providing testimony of it? How can dual-aspect monism cope with this? This is the real subjectivity that has to be explained, not the geometrical one referring to: I am here, you are there.

    Strictly speaking, the other way would be shutting down an individual’s putative retinoid system (PRS) without affecting other systems, and seeing what happens. For that, the PRS would first have to be anatomically and physiologically identified and described, and then a means to turn it off would have to be found. Feasible??

  24. Arnold Trehub says:

    Vicente: “What kind of experiment could you envisage so that a subject having an SlCS can convey the experience to you, except by providing testimony of it… … Strictly speaking, the other way would be shutting down an individual’s putative retinoid system (PRS) without affecting other systems, and seeing what happens. For that, the PRS would first have to be anatomically and physiologically identified and described, and then a means to turn it off would have to be found. Feasible??”

    This is an interesting comment. Laboratory investigations and clinical findings suggest that neuronal mechanisms in the parietal region of the brain are an essential part of the brain’s representation of egocentric/retinoid space. So we can assume that damage to this part of the brain should have a measurable effect on one’s conscious experience. What would happen if a part of retinoid space is shut down while another part works OK? This is exactly what happens in severe cases of hemi-spatial neglect due to brain lesions involving the right parietal-temporal-occipital junction. In such cases, patients are simply not conscious of the left part of the world that is represented in the right parietal region of their brain, while they are perfectly well aware of the world to their egocentric right. For one account of this see the second paragraph on p. 321 here:

    http://people.umass.edu/trehub/YCCOG828%20copy.pdf

  25. scott bakker says:

    “You seem to be missing the point. According to my definition of consciousness no camera is conscious, not even a CCTV camera. Why? Because no known camera (or any other known artifact) contains a perspectival representation of the volumetric space in which it exists (the world). So it can’t have a representation of *something somewhere* in perspectival relation to itself. In other words it lacks subjectivity, the essence of consciousness.”

    I was being imprecise (and glib, for which I apologize). The point is that visual representations in the brain are simply not enough: What has to be added to a camera to make it conscious? Your answer: another representation, this one of the ‘volumetric space in which it exists,’ or the ‘world.’ Once again, even if I were a representationalist, I’m not sure how this generates any experience, let alone the intentional cloud that is the experience of subjectivity. Where does agency (as opposed to behaviour) come from? Where does normativity (as opposed to causal regularity) come from? Where does calculation (as opposed to neural computation) come from? Where does quality (as opposed to the processing of sensory information) come from? Where does aboutness (as opposed to distal causal mechanisms) come from? Where does conscious unity (as opposed to staggering neural complexity) come from? Where does the Now come from?

    And so on. ‘Consciousness,’ for me, is a bundle of perplexities consisting of these and other phenomena. Theorizing devices/modules/systems for each may be well and fine for correlating consciousness with the brain (I say ‘may’ because I personally think that a good number of these phenomena will actually be explained away), but how it is that the piling on of devices, adding a ‘world mechanism’ to a camera for instance, should result in something as remarkable as THIS, I have no clue. Would a camera linked to a spy satellite suffice? What kind of complexity is required? Any complexity? Are there any time constraints on the processes involved? Any material constraints? If we could arrange the population of China in such a way as to structurally mimic the retinoid system, would China become conscious? How is it that this particular structural complex A generates such a peculiar emergent effect, while that structural complex B does not?

    And so on. But for me, as I said earlier, representationalism assumes the very thing that needs to be explained for a thoroughgoing naturalized account of consciousness.

  26. Arnold Trehub says:

    Scott: “Where does agency (as opposed to behaviour) come from? Where does normativity (as opposed to causal regularity) come from? Where does calculation (as opposed to neural computation) come from? Where does quality (as opposed to the processing of sensory information) come from? Where does aboutness (as opposed to distal causal mechanisms) come from? Where does conscious unity (as opposed to staggering neural complexity) come from? Where does the Now come from?”

    “Agency”, “normativity”, “calculation”, “quality”, “aboutness”, “conscious unity”, “now”: All are verbal tags for particular states of the brain. So the question is not “Where do they come from?”, but rather “What brain states might be indicated by these words?” It seems to me that no reasonable answer/explanation can be forthcoming by wandering in a closed hall of verbal mirrors. Moreover, none of the above descriptors can be applied without the basic precondition of the brain having a transparent representation of the world (a volumetric surround) from a privileged locus of perspectival origin (I!) — subjectivity. My claim is that the particular neuronal structure and dynamics of the retinoid model is the minimal biophysical mechanism that can create this necessary precondition for *conscious unity*, *quality*, *the extended present (now)*, etc. Why should you believe this? The only acceptable grounds for such a belief within scientific norms is that the theoretical model is able to predict/postdict (explain) relevant conscious content within the limits of empirical measurement.

  27. Arnold Trehub says:

    Scott and all,

    Here is a brief talk by Thomas Metzinger that might help you understand what the retinoid model accomplishes:

    http://www.philosophie.uni-mainz.de/metzinger/

    Incidentally, Metzinger also claims this:

    “The functional basis for instantiating the phenomenal first-person perspective can be seen as a specific cognitive achievement: the ability to use a centered representational space (Trehub 1991, 2007, 2009).”

    My publications cited by Metzinger focus on the neuronal properties and the cognitive implications of the retinoid model.

  28. Arnold Trehub says:

    Correction. Here is the direct link to the Metzinger talk:

  29. Vicente says:

    Arnold,

    “Retinoid mechanisms have the structural and dynamic properties well suited to perform such tasks, and it appears from this and the previously cited studies that the retinoid system might be distributed over several cortical regions including the temporal and parietal areas and the superior frontal sulcus. A particularly striking finding of hemispatial neglect in patients with brain lesions involving the right temporal–parietal–occipital junction (Bisiach & Luzzatti, 1978) also lends support to the retinoid model of spatial representation. In this study, patients were asked to imagine that they were standing in the main square in Milan, which was a very familiar setting for them. They were first instructed to imagine themselves facing the cathedral and to describe what they could see in their ‘mind’s eye.’ They reported a greater number of details to the right than to the left of their imaginary line of sight, often neglecting prominent features on the left side.”

    Interesting. Any other findings along these lines since 1978?

    The problem seems to be related to locating objects in space, rather than with creating the feeling of space itself.

    Besides, the observations are related to retrieving memories: I would like to see the effects on real-time navigation.

    Even if you lose half of your surrounding-space feeling, in terms of being a conscious being nothing has changed. I admit that it is not the same as losing a sense, but it doesn’t impair consciousness.

    Consider the following experiment: you progressively destroy sectors of the retinoid space so that the subject gradually loses the feeling of space, angular sector by angular sector. What would happen when only a tiny fraction of an angle is left? Does that make sense?

  30. Arnold Trehub says:

    Vicente: “The problem seems to be related to locating objects in space, rather than with creating the feeling of space itself.”

    As I recall, experiments have been done in which patients with left hemi-spatial neglect were asked to point to where their visual field seemed to be divided into two equal halves (bisecting their egocentric space). Instead of pointing straight ahead, these subjects consistently pointed some degrees to the *right* of their normal foveal axis. This suggests that, with this kind of brain damage, the volume of phenomenal space was reduced in the egocentric left with respect to the egocentric right. This is consistent with the retinoid theory of consciousness.

    Vicente: “Besides, the observations are related to retrieving memories: I would like to see the effects on real-time navigation.”

    I recently visited someone who had suffered a stroke leaving him with left hemi-spatial neglect. He claimed to be unaware of the space that was to the far left of him compared to what was on his far right. He often tripped over objects to his left, and ignored utensils and navigational exits on his left side. He seemed to try to compensate by frequently inclining his head to the left so that his visual field included more of the space to his left than to his right (consistent with the spatial bisection evidence above).

    Vicente: “Consider the following experiment: you progressively destroy sectors of the retinoid space so that the subject gradually loses the feeling of space, angular sector by angular sector. What would happen when only a tiny fraction of an angle is left? Does that make sense?”

    Theoretically, yes. But who would volunteer for such a heroic experiment?!!
    Even if there were a volunteer, it would be grossly unethical and illegal to do it!

  31. scott bakker says:

    Arnold: Cool little talk. Thomas was kind enough to blurb my infothriller, Neuropath, when it came out a few years back. I’ve been following his work and corresponding with him for years, and to my knowledge he doesn’t think the hard problem is a nonproblem, nor does he think anyone has solved it. He also readily admits the problems posed by representationalism.

    ““Agency”, “normativity”, “calculation”, “quality”, “aboutness”, “conscious unity”, “now”: All are verbal tags for particular states of the brain.”

    I actually think this will be shown to be false, as well as part of the reason we are having such a devil of a time getting clear about what we’re trying to explain. Is “free will” a ‘verbal tag’ for a particular ‘free-will device’ in the brain? Or is it a good old fashioned forced false belief, what happens perhaps when cognitive systems originally evolved to navigate external environments find themselves confronted with sketchy intraneural information regarding their own behaviour?

    “So the question is not “Where do they come from?”, but rather “What brain states might be indicated by these words?” It seems to me that no reasonable answer/explanation can be forthcoming by wandering in a closed hall of verbal mirrors. Moreover, none of the above descriptors can be applied without the basic precondition of the brain having a transparent representation of the world (a volumetric surround) from a privileged locus of perspectival origin (I!) — subjectivity.”

    Like it or not, you’re trapped in the hall of mirrors with the rest of us, Arnold. It is entirely possible that you’re operating on the basis of false assumptions and faulty concepts. You can’t simply remove yourself from the predicament by declarative fiat – not if you’re going to convince anyone using different assumptions and concepts!

    To give you an example of just how those basic commitments impact interpretation, I actually take Metzinger’s short talk as an example of just how problematic the representational approach quickly becomes, and why it is that the neurostructural isomorphisms you postulate fall short of genuinely explaining consciousness, let alone its intentional components. I understand why the vehicle/content distinction is so attractive: once you find neural structures that somehow replicate environmental information, you can shout ‘Eureka! There’s our content right there. These devices!’ Since it belongs to the nature of devices to do things, it becomes easy to think that these devices actually explain the experiences involved.

    But then jerks like me come around and start asking for natural explanations of vehicle/content distinctions, which require natural explanations of normativity and intentionality more generally. What is a ‘content generating device’? How does a neural recapitulation of environmental information become something that can be right or wrong, true or false? Once again, DNA recapitulates enormous amounts of information, but it seems a clear cut category error to talk of ‘true DNA’ and ‘false DNA.’ So what is it about these recapitulations that makes them so special? What is the natural explanation for normativity anyway? How can VALUE be explained on the basis of putative neural facts?

    My question still stands: what makes a neural replica true or false of an environment (a representation, rather than simply an efficient part of a greater homeostatic device)? Saying that it ‘arises’ as a consequence of the operation of the retinoid system is not an explanation. Explanations generally resolve perplexities, not ignore them.

    I’m no dualist. I think the brain is all there is, same as you. This is the ironic thing: what I’m arguing isn’t that your account is ‘wrong,’ only that most will never find it convincing because of the conceptual/phenomenal abyss between the causal system you are postulating and the consciousness that is supposed to be its effect. If the details of those incompatibilities do not fall out of your account – if the retinoid theory does not provide some natural understanding of normativity/quality/intentionality and their peculiarities – then it will seem to miss the mark. The successful theory of consciousness, you would think, will also explain why consciousness has such a difficult time explaining itself.

    This is what I try to do with my own approach, which I think is entirely compatible with the retinoid theory, minus the question-begging representationalism.

  32. scott bakker says:

    Vicente: Your question is quite ingenious! Along similar lines, I wonder what something like Anton-Babinski syndrome would mean for the retinoid theory. According to Prigatano, empirical evidence is lacking. Is retinoid space ‘filled’ with systematic delusions? Or does some other system simply fail to acknowledge it has been cut off from information from the retinoid system? Is it quasi-perceptual or quasi-cognitive? The latter would be an instance of what you are suggesting, would it not? Consciousness minus the retinoid system…

    The whole topic of neglect and anosognosia is near and dear to my heart, because I think it must be an incontrovertible empirical fact that consciousness suffers multiple forms of ‘natural anosognosia.’ Once you consider the crazy disproportion between the mind-boggling amount of information processed by our brains and the (comparatively) minuscule fraction that actually makes it to consciousness, then it seems plausible to assume that consciousness is globally afflicted by what might be called Anton-Babinski ‘effects,’ and this could very well be the reason why we find it so devilishly difficult to explain.

    As well as why we find HOT theories and Cartesian Theatres so attractive…

  33. Arnold Trehub says:

    The Anton-Babinski syndrome: My bet is consciousness without the ability to cognitively interpret or properly utilize the visual content of retinoid space for the purpose of navigating in the world.

  34. Vicente says:

    Arnold,

    Just a clarification: I never proposed actually carrying out such an experiment; it was a sort of “thought experiment”.

    So I don’t care about the result; what I wanted to point out is that:

    – unless you implement the experiment on yourself, you’ll only have a very small part of the available information.

    – Consciousness itself seems not to be impaired by the reduction of the sense of space; the contents of consciousness, of course, are.

    In that sense, consider a much simpler case: you want to study the effects of high doses of alcohol on the CNS. Unless you down a few shots yourself, you’ll miss most of the information. Of course you can observe the immediate changes in the subject’s behaviour, perception, balance, etc., or the destruction of different tissues in the long term. But only through introspection can you achieve a full overview of what’s going on, and then only for your particular case.

    In this case, it is easy to understand the biochemical explanation; still, Scott’s abyss between the neurotransmitters and the subjective experience of being drunk is there. The same goes for ordinary conscious states.

    This is why consciousness, or at least a broad part of it, is outside strict scientific consideration. And this contributes to the cognitive closure that Scott refers to, as part of our anosognosia.

    I have to tell you that I admire your endurance; you could have made a career as a boxing sparring partner (a real compliment, no ulterior meaning).

    To finish, and to say something related to the current post topic, HOTs: I believe that in most conscious states, and particularly in some of them, like having drunk a bit too much or taken a sedative, there is some kind of perception that enables you to observe the state from outside, as if you really dwelt in, or moved to, a higher perspective from which you could observe… there seems to be an I! locus, not just in terms of space…

    But you see, I cannot really inform you, objectively, of these observations….

  35. Arnold Trehub says:

    Vicente: “Just a clarification: I never proposed actually carrying out such an experiment; it was a sort of ‘thought experiment’.”

    I didn’t think you proposed that it actually be carried out. But there are some cultures (sadly, even today) in which doing such an experiment might actually be contemplated.

    Vicente: “Consciousness itself seems not to be impaired by the reduction of the sense of space; the contents of consciousness, of course, are.”

    Are you claiming that consciousness still remains when all of the contents of consciousness (egocentric space/retinoid space) are eliminated? If so, I strongly disagree. How can we be said to be conscious when there is nothing it is like to be conscious?

    Vicente: “This is why consciousness, or at least a broad part of it, is outside strict scientific consideration. And this contributes to the cognitive closure that Scott refers to, as part of our anosognosia.”

    As I understand it, Scott’s anosognosia re consciousness is our ignorance of our own brain as the source of consciousness. He seems to say that consciousness IS this particular kind of ignorance. But my view is that consciousness cannot be this kind of ignorance because consciousness serves the extremely important function of providing us with a useful representation of the world that we live in — the opposite of ignorance.

    Vicente: “I have to tell you that I admire your endurance; you could have made a career as a boxing sparring partner (a real compliment, no ulterior meaning).”

    Thanks. Boxing may not be a bad metaphor for the persistent kind of interactive intellectual engagement we need to arrive at a standard model of consciousness within scientific norms. It is a truly tough problem.

  36. Arnold Trehub says:

    Scott: ” I’ve been following his [Metzinger] work and corresponding with him for years, and to my knowledge he doesn’t think the hard problem is a nonproblem, nor does he think anyone has solved it. He also readily admits the problems posed by representationalism.”

    I don’t think the “hard problem” is a non-problem either. I just think that the hard problem, in terms of an explanatory gap, is a problem shared by theoretical physics as well as any candidate theory of consciousness. Joseph Levine is the originator of the “explanatory gap” as the hard problem of consciousness. He is on the philosophy faculty of my university, and a few years ago I had an extended email discussion with him about the explanatory gap. With his permission, here is an excerpt from our discussion:

    **************************************************************
    LEVINE and TREHUB on CONSCIOUSNESS and the BRAIN

    Emails: 2007

    Levine:

    The question was this. Is the spatial organization of the neural system – the real spatial features it possesses – what constitutes the phenomenal space, or is it that they serve to represent the space that is phenomenal space? If the latter, then the specific spatial features of the neural systems themselves turn out to be irrelevant, since the requisite representations could have been implemented by any properly organized system of features.

    Trehub:

    My claim is that the spatiotopic organization of the brain’s 3D retinoid system constitutes our phenomenal space.

    Levine:

    If the former, however, the literal space of the neural system is supposed to constitute phenomenal space, then I think there is a real problem about how these spatial features – which aren’t represented in the mind, they just happen to be features of what realizes the mind – can possibly be known or experienced (except in a third-person way through scientific investigation, which wouldn’t make them good candidates for constituting our phenomenal space). After all, neurons possess bunches of features that don’t make it into our experience. How is it we experience their spatial configuration?

    Trehub:

    Of course, the “features of what realizes the mind” are just features of a particular system of biological mechanisms in the brain. As you say, these can only be known through scientific investigation. But when you ask what causes us to experience their spatial configuration, you are really asking what brings consciousness per se into existence. The answer is that we don’t know — just as we don’t know what brings space-time into existence.

    Levine:

    My argument is that phenomenal experience cannot, however, be reduced to the pick up, preservation, and transformation of information.

    Trehub:

    I agree that phenomenal experience per se (consciousness) cannot be explained/reduced in this way, but I have shown empirically that the salient content of phenomenal experience can be explained by the structural and dynamic properties of particular kinds of neuronal mechanisms in the human brain. A question: can there be consciousness without phenomenal content?

    Levine:

    I agree that there are primitives in nature that cannot be explained, precisely because they are primitive, or basic. But I wouldn’t have thought one wanted to attribute that kind of primitive or basic relation to the relation between the brain and experience. That seems to be giving up the physicalist project with respect to the mind.

    Trehub:

    My argument is that the physicalist project with respect to mind will continue to spin in the wind until we recognize that the currently achievable goal is a biophysical explanation of the content of consciousness, not an explanation of the primitive existence of consciousness. This is no more a matter of giving up the physicalist project on mind than physics gives up its project on the physical universe by positing the primitive of space-time.

    *************************************************************

    Arnold: ““Agency”, “normativity”, “calculation”, “quality”, “aboutness”, “conscious unity”, “now”: All are verbal tags for particular states of the brain.”

    Scott: “I actually think this will be shown to be false, as well as part of the reason we are having such a devil of a time getting clear about what we’re trying to explain.”

    Can you give a principled reason for believing that my claim above about verbal tags is false?

  37. scott bakker says:

    Very interesting. We’re closer than I originally thought in terms of the rough outline of the problem. Intentionality remains the sticking point, however. I still think that the (apparent) incompatibility of intentional and mechanistic concepts is what prevents your case from being compelling.

    I don’t want to pretend that my outlook has any legitimacy in the field because it doesn’t, though I have received many kind words and lots of encouragement – especially from Thomas. The fact is I write novels for a living, and this is a hobby of mine, trying to find genuinely CREATIVE ways (because I’m told imagination is among my few strengths!) of seeing past all the perplexities pertaining to consciousness.

    Once you appreciate the possibility of ‘natural anosognosia’ (NA), the obvious question, I think, is one of how consciousness might be afflicted by it. Metzinger’s description of transparency in the Youtube link you provided is a great example of NA: our inability to access any information regarding the neural processing of environmental information seems to strand us with naive realism, the illusion that we are in direct unmediated contact with our environment. This example of NA actually underwrites a profound structural feature of experience.

    The curious thing to note, however, is the WAY that it explains this feature. Transparency is not the result of any specialized neural device – it’s not as if we’ll find any ‘transparency of consciousness’ NCCs. It simply follows from the information rendered available versus the information that remains cloistered (to borrow a phrase from Schwitzgebel) in the ‘gut brain.’ ‘Transparency,’ in other words, IS NOT A VERBAL TAG FOR PARTICULAR STATES IN THE BRAIN. It’s an illusion generated by our brain’s parochial neuroinformatic ‘perspective’ on itself.

    The question becomes one of how NA might also structurally underwrite experience. This is what I call the Blind Brain Theory of the Appearance of Consciousness. I think (actually, ‘fear’ is the better word) that all intentionality will suffer this fate, that the conceptual peculiarities that make it so damnably difficult to naturalize arise from the parochial nature of the conscious brain’s access to its greater, gut brain self. BBT, I think, strips consciousness (as it appears in attentional awareness) down to something that your retinoid theory can much more plausibly explain. It basically states that when we reflect on consciousness we are literally looking through a peephole on a peephole, absent any way of seeing our peephole as a peephole – which is to say, convinced that we have access to all the information we need.

    So for me, HOT theory is literally more a theory of how we get consciousness so wildly wrong than otherwise!

    I have a draft treatment at: http://rsbakker.wordpress.com/essay-archive/the-last-magic-show-a-blind-brain-theory-of-the-appearance-of-consciousness/

    But otherwise, whatever you think of BBT as a theory, I have no doubt that the problem of informatic parochialism will eventually find its way into mainstream consciousness research. Why? Simply because it is a fact that consciousness accesses nowhere near the 38 000 trillion operations per second performed by our brain, and this asymmetry has to be expressed in experience somehow. Once you appreciate this, then you have a way to explain a great many puzzles, including the now and the unity of consciousness.

  38. Arnold Trehub says:

    Scott: ” ‘Transparency,’ in other words, IS NOT A VERBAL TAG FOR PARTICULAR STATES IN THE BRAIN. It’s an illusion generated by our brain’s parochial neuroinformatic ‘perspective’ on itself.”

    I do think that we are probably closer in agreement than might appear from the back and forth of our discussion. But, in terms of the retinoid theory of consciousness, “transparency” is a word that actually has as its referent a particular aspect of the neuronal structure and dynamics of the cognitive brain — namely, that while all the products of our interoceptive and exteroceptive sensors are properly located spatiotopically in the egocentric volume of retinoid space, there are NO sensors to monitor the physiological activity of the brain cells that fix our epistemic states. Therefore we can have no phenomenal representation of what our own brain is doing. This accounts for what you call the Blind Brain Theory, or what Metzinger and I would call Transparency — a natural product of how nature has “designed” the brain. This is not an illusion, but rather a natural characteristic of the way the brain is built.

  39. scott bakker says:

    The illusion is transparency, the sense of being in direct, unmediated contact with the world. Unless you want to claim that we are in direct contact with the external world, then…

    Note that transparency, which is a rather profound structural characteristic of conscious experience, is not an accomplishment of any particular system, but rather the information horizon of that system. It is the product of constraints placed on information access. Metzinger refers to this as ‘auto-epistemic closure,’ which is to say, intentionally.

    My thesis is that most, if not all, of what is baffling about consciousness and intentional phenomena can be ‘explained away’ in these terms… and that consciousness is not at all what the informatic peephole of attentional awareness seems to reveal.

    The simple question that needs to be asked is: IF something as fundamental as transparency can be the result of information horizons, then what else?

    I’ve come to realize that this is a difficult way of thinking through the puzzles of consciousness for most philosophers and researchers, for much the same reason the conceptual figure-field switch that relativity forced on physicists was so difficult (thinking of gravity as a structural expression of space-time instead of a discrete force). Traditional approaches all buy into what I call the ‘accomplishment fallacy,’ the assumption that every apparently positive feature of experience must be the ‘accomplishment’ of some device. The Blind Brain Theory literally interprets them as perspectival illusions, suggesting that consciousness, as it is presently understood, is a kind of Ptolemaic consciousness.

    But once you step into its Gestalt, its parsimony and explanatory scope become quite troubling.

  40. Arnold Trehub says:

    Scott: “The illusion is transparency, the sense of being in direct, unmediated contact with the world. Unless you want to claim that we are in direct contact with the external world, then…”

    I certainly do not believe in naive realism — the claim that we are in direct contact with the external world. I understand what you are saying, but I think you confound two different notions. The sense that we are in direct contact with the external world IS the illusion! Transparency — the absence of physiological sensors for the neuronal brain events that represent our external world — *explains* the illusion of being in direct unmediated contact with the external world, rather than being in contact with an internal representation of a world. Don’t you see the difference between these two concepts? The “information horizon” is a property of this particular kind of brain mechanism. As far as creature adaptation is concerned, it is not what you might call an *accomplishment* of the system, but rather a very *useful property* of the system.

  41. Vicente says:

    Arnold:

    Are you claiming that consciousness still remains when all of the contents of consciousness (egocentric space/retinoid space) are eliminated?

    I don’t know… the blank canvas… a mirror in the dark… is there a subject without an object? not the other way round for sure.

    This is close to HOTs, if there’s a conscious state whose content is just itself…

    I suppose that if I could answer that question, I would have answered the question.

    For you the question could be: is the random noise in the putative retinoid system consciousness of sheer empty space?

  42. Arnold Trehub says:

    Vicente: “For you the question could be: is the random noise in the putative retinoid system, consciousness of sheer empty space?”

    If there is no sensory content in activated retinoid space there is still a representation of being within a surrounding empty space. This is the primitive stage of consciousness — stage 1. This might correspond to the mental state you experience at the moment you awake from a deep dreamless sleep.

  43. scott bakker says:

    Arnold: You’re simply recapitulating my point as far as I can see, except that I would amend ‘useful property’ into ‘structural side-effect.’ The real difference between us is simply one of proprietary emphasis: you want transparency to be something special to the retinoid system, the centrepiece of your theory. I’m saying it’s simply one example of something pertaining to all the systems involved in consciousness. Consciousness, as it appears to attentional awareness, is structured by a myriad of information horizons, running the gamut from the asymptotic boundary of the visual field, to the paradoxical peculiarities of the now, to intentionality itself.

    So to return, at long last, to HOT theory, the problem is quite clear from the standpoint of BBT. If intentionality is a structural consequence of some natural anosognosia pertaining to the information constraints placed on consciousness, then trying to explain consciousness via some ‘higher order’ intentional grasping of some kind of content becomes circular in the extreme. It amounts to an attempt to see through the illusion in terms belonging to the illusion.

    Just think about how little information makes it to attentional awareness – and thus how procrustean what we call ‘reflection’ (be it naive or explicit) HAS to be. We want to think reflection, despite its incontrovertible paucity, grasps the important nub, the information that counts – but why? Not only is ACH thinking a cultural development, something that we only inadvertently evolved to do, its track record is nothing short of miserable. Only collectively, over ages and a multitude of various brains, has it provided a relative handful of cognitive goodies (like logic), things we don’t really have any consensus-commanding reflective understanding of in turn!

    By BBT’s lights, HOT theory tries to make a virtue out of what is our greatest liability when attempting to explicate consciousness. But you don’t need to buy into BBT to make this critique: all you need are information horizons.

  44. Arnold Trehub says:

    Scott, I think we can all agree that there are all kinds of information horizons.

  45. scott bakker says:

    “Scott, I think we can all agree that there are all kinds of information horizons.”

    So how are they expressed in conscious experience?

    I actually think the unity of consciousness is a prime example.

  46. Arnold Trehub says:

    Scott: “So how are they [information horizons] expressed in conscious experience? I actually think the unity of consciousness is a prime example.”

    Can you explain how information horizons account for the unity of consciousness?

  47. scott bakker says:

    This is going to sound like a strange answer, Arnold, but it’s simply meant to get you thinking about the problem in an entirely different way:

    Why is it that flickering lights (or sounds) FUSE beyond a certain threshold?

    Why is it that the brain looks like one thing from a distance, and like a complex of billions of things under a microscope?

  48. Arnold Trehub says:

    Scott, I’ve thought about the problem in many different ways for at least 50 years. I have concluded that most of the ways are dead ends and that one way in particular seems to be productive. But I’ll bite.

    Scott: “Why is it that flickering lights (or sounds) FUSE beyond a certain threshold?”

    Because our perceptual mechanisms for discriminating distinct pulses of sensory energy (light or sound) have a limited temporal resolution due to their physiological properties. So high rates of energy fluctuation as stimulus input are smeared as a steady stream in output.
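The smearing Arnold describes can be made concrete with a toy simulation. A first-order leaky integrator stands in for a perceptual mechanism with limited temporal resolution (the time constant, sampling rate, and flicker rates below are arbitrary illustrative choices, not a model of any actual receptor): as the flicker rate rises past the integrator's resolution, the residual modulation collapses and the output reads as a steady stream.

```python
import numpy as np

def perceived_ripple(flicker_hz, tau=0.05, duration=1.0, fs=10_000):
    """Residual peak-to-peak modulation after a square-wave flicker is
    passed through a first-order leaky integrator with time constant tau
    (an illustrative stand-in for limited temporal resolution)."""
    t = np.arange(0.0, duration, 1.0 / fs)
    stimulus = (np.sin(2 * np.pi * flicker_hz * t) > 0).astype(float)
    alpha = (1.0 / fs) / tau  # per-sample integrator gain
    out = np.zeros_like(stimulus)
    for i in range(1, len(t)):
        out[i] = out[i - 1] + alpha * (stimulus[i] - out[i - 1])
    steady = out[len(t) // 2:]  # discard the initial transient
    return float(steady.max() - steady.min())

slow = perceived_ripple(2)   # slow flicker: modulation survives
fast = perceived_ripple(60)  # fast flicker: modulation largely fuses away
print(round(slow, 3), round(fast, 3))
```

With these (assumed) parameters the slow flicker passes through nearly intact while the fast flicker is smeared toward a constant level, which is the "fusion" under discussion.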

    Scott: “Why is it that the brain looks like one thing from a distance, and like a complex of billions of things under a microscope?”

    Because the unaided retinal image of an exposed brain cannot resolve the minute patterns of the things that compose it. The microscope exposes our retina to a tiny area of the brain that is enlarged by the microscope’s optics to reveal a huge complex of the tiny things that are in the brain.

    But I still wonder how information horizons account for the unity of consciousness.

  49. scott bakker says:

    Both cases are examples of what might be called ‘Default Identity.’ In the absence of discriminations, you have ‘smearing’ as you call it, or ‘fusion,’ the collapse of difference into identity, complexes into simples. It’s one of these ‘forest for the trees’ phenomena, so basic, so ubiquitous as to be almost invisible.

    But the rule seems to be absolutely iron-clad: in the absence of information, identity is the experiential default. Two lights become one light. Ninety billion neurons become one brain.

    Now, given that the ancient cognitive systems we evolved to manage environmental information are the ones adapted to manage our newly evolved capacity to incorporate intraneural information – the brain’s inclusion of itself in its environmental schemes – the question becomes, why should the rule of default identity not apply? The unity of consciousness, on this account, becomes the expression of what might be called an ‘interoceptive resolution horizon,’ an analogue to ‘flicker fusion’ only between communicating systems WITHIN the brain, rather than between environmental systems and brain. Consciousness appears to be a singular bolus of internally related phenomena for the same reason our ancestors thought the stars were all set in a singular sphere: we lack the information required to resolve anything more than the SMEAR of our neural activity.

    Here you can see the pernicious nature of the Accomplishment Fallacy rear its head: most seem to think the unity of consciousness has to be an achievement of some kind. But if you think about it in informatic terms, the question should be the reverse: differentiation, externally related complexity, is the achievement. Simplicity is the default.

    In a sense, when I talk about consciousness being a cartoon I mean it quite literally.

    Of course this simply pushes the bubble under the wallpaper to a new, equally enigmatic position: What lies behind Default Identity? But as I’ve been arguing for some time now, a good number of the perplexities of consciousness can be reduced to variants of this one question.

  50. Vicente says:

    Scott:

    Why is it that flickering lights (or sounds) FUSE beyond a certain threshold?

    This is no mystery (on the brain side!!): all physical systems saturate at increasing frequencies. The limiting factors in the brain are probably the membrane polarisation mechanisms, i.e. ion-channel saturation and nerve transmission speed. The simplest examples you can find to understand it are a harmonic force applied to a mass connected to a spring (mechanical saturation), or the charge curve for a harmonic voltage applied to a capacitor (electrical saturation), considering increasing frequencies for the input. In the brain you have a combination of both, through membrane capacitance, charge-carrier mass, and the effective mobility of the electrolyte medium.
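Vicente's capacitor example can be sketched numerically. For a first-order RC low-pass filter driven by a sinusoid, the amplitude gain is |H(f)| = 1/sqrt(1 + (2*pi*f*R*C)^2), so the response rolls off as the drive frequency climbs past the corner frequency. The component values are illustrative only, not a claim about actual membrane parameters:

```python
import math

def rc_gain(freq_hz, r_ohm=1e4, c_farad=1e-6):
    """Amplitude gain of a first-order RC low-pass filter driven by a
    sinusoid: |H(f)| = 1 / sqrt(1 + (2*pi*f*R*C)^2)."""
    omega_rc = 2 * math.pi * freq_hz * r_ohm * c_farad
    return 1.0 / math.sqrt(1.0 + omega_rc ** 2)

# Corner frequency 1/(2*pi*R*C): the gain is already down to ~0.707 here,
# and keeps falling as the drive frequency increases (the 'saturation').
corner = 1.0 / (2 * math.pi * 1e4 * 1e-6)
for f in (1, 10, 100, 1000):
    print(f, round(rc_gain(f), 3))
```

The monotone roll-off is the electrical analogue of the flicker-fusion threshold: past the system's temporal resolution, input fluctuations no longer register in the output.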

    What is also very interesting in the same line is: why below a certain exposure time does visual information become subliminal? i.e. it doesn’t reach the conscious sphere, but it’s still processed and used by the brain.

    What is it that constitutes the borderline between the conscious and the subconscious, and how to heal blindsight?

    Why is your default identity based on a subset of the whole brain? Why is simplicity the default? Good question. And of course, why are we watching the cartoons? And who is watching the cartoons?

    No offence, Scott, but might you be a dualist in disguise? No shame, I have one foot there, the other here.

  51. Arnold Trehub says:

    Scott: “Here you can see the pernicious nature of the Accomplishment Fallacy rear its head: most seem to think the unity of consciousness has to be an achievement of some kind. But if you think about it in informatic terms, the question should be the reverse: differentiation, externally related complexity, is the achievement. Simplicity is the default.”

    Simplicity is the default: This reminds me of the *Principle of Least Effort* proposed by George Kingsley Zipf back in 1949.

    But the problem is that the unity of consciousness is not just one bolus of smear. It is a single phenomenal field in the brain (our occurrent phenomenal world) with a complex internal egocentric spatio-temporal structure that we can, in fact, decompose in serial fashion by attentional/perceptual processes into objects and events which are subject to cognitive analysis.

    Arnold: “It seems to me that no reasonable answer/explanation can be forthcoming by wandering in a closed hall of verbal mirrors.”

    Scott: “Like it or not, you’re trapped in the hall of mirrors with the rest of us, Arnold.”

    I don’t think this is necessarily so unless you assume that all thought consists in sentential propositions (linguistic constructions). The root meaning of our words is given by the images and sensory events to which they refer. This has to be the adaptive foundation of our semantics. So, in order to escape the hall of verbal mirrors, science deals in publicly displayed events (observations and experiments) and artifacts (theoretical models) and logical inference to try to arrive at provisional consensus on the best explanation of the phenomena in question. Verbal arguments are not enough.

  52. scott bakker says:

    Vicente: “No ofence Scott, but might you be a dualist in disguise? no shame, I have one foot there, the other here.”

    Not that I know of! The thing I always try to keep in mind approaching this problem is something I call ‘encapsulation’ (a notion only tangentially related to Pylyshyn’s notorious use of the term in the modularity debate). Kahneman often draws a distinction between inside and outside facts, and the way the former are prone to be distorted because our ignorance of the latter is absolute (the way they are, in Don Rumsfeld’s famous phrase, ‘unknown unknowns’). This is as good an explanation of the cognitive bind that consciousness puts us into as any: we only have recourse to ‘inside facts’ in the first instance. When I say ‘cartoon’ I don’t mean it in any representational sense (because my suspicion is that this belongs to the cartoon). It’s a cartoon FROM, not a cartoon OF, a blinkered, biologically idiosyncratic ‘informatic angle’ that our brains ‘have on themselves.’

    For me, one of the most fascinating things about intentional concepts is the way they spin dualisms wherever they go. The fact that the philosophy of computer science seems to recapitulate all the dualisms you find in philosophy of mind (not simply substance dualisms, but conceptual and formal as well) is something I think will prove enormously significant. I think I have an interesting explanation why they do this, but it throws out so many babies changing the bathwater that I’m certain no one will take them seriously for a long time.

    “What is also very interesting in the same line is: why below a certain exposure time visual information becomes subliminal? i.e. it doesn’t reach the conscious sphere, but it’s still processed and used by the brain.”

    Information horizons (and encapsulation) simply follow from the informatic asymmetry between the brain and consciousness. Information horizons are real: you need only attend to the margins of your visual field to glimpse the curious way they structure experience. Blindsight, priming, a whole host of phenomena attest to their existence. The question is really one of the ROLE they play in consciousness as it appears to attentional awareness. How are they expressed? I think this is a horribly important question, one that, when thought through, will reveal what I’ve been calling the Accomplishment Fallacy among many other things.

    My approach analyzes consciousness in privative terms, as something that needs to be understood in terms of the information it LACKS (and that we know for a fact that it lacks). Consciousness, as we intuitively conceive it, is Ptolemaic: it is the product of various perspectival illusions, which, so long as we think we need to explain (as opposed to explain away) will likely prevent us from recognizing the actual explanation of consciousness when it comes.

  53. scott bakker says:

    “But the problem is that the unity of consciousness is not just one bolus of smear. It is a single phenomenal field in the brain (our occurrent phenomenal world) with a complex internal egocentric spatio-temporal structure that we can, in fact, decompose in serial fashion by attentional/perceptual processes into objects and events which are subject to cognitive analysis.”

    Yeah, this strikes me as quite optimistic as well as overly visuocentric. The visual predominates probably because of its bandwidth, and it does make sense that evolution might take it as an informatic ‘shell’ for the integration of other kinds of neural information, but the fact is, outside of our fovea, most of our visual field is a construct, and the host of all the other phenomena all just… seem… to ‘belong’ somehow. Schwitzgebel’s book does a pretty good job demolishing the kind of phenomenological rigour you speak of. I have no doubt it seems as clear as you say to you, Arnold, but given the wild divergence in phenomenologies, I’m inclined to think your clarity is largely the artifact of a certain interpretative attitude, a taking as clear. Having bought into several different (and incompatible) interpretative phenomenological clarities, I’m a smear-bolus guy through and through. The outside world appears clear enough, but everything else is mush and fabrication. I take the sheer volume of competing accounts as my knockdown argument in this respect.

    “I don’t think this is necessarily so unless you assume that all thought consists in sentential propositions (linguistic constructions). The root meaning of our words is given by the images and sensory events to which they refer. This has to be the adaptive foundation of our semantics. So, in order to escape the hall of verbal mirrors, science deals in publicly displayed events (observations and experiments) and artifacts (theoretical models) and logical inference to try to arrive at provisional consensus on the best explanation of the phenomena in question. Verbal arguments are not enough.”

    I used to buy into this argument wholesale, the notion that when it comes to formal semantics, at the very least, evolution assures that consciousness is strategically placed – but no more.

    Again, it comes down to information horizons: consciousness accesses only a minuscule fraction of the information processed by the gut brain. Now I agree (supposing that consciousness isn’t a spandrel) that evolution assures that this information is causally efficacious, but the problem is that evolution also gives us good reason to assume that consciousness will be pretty much, if not entirely, blind to its neurofunctional role. As a (product of a) neural subsystem, we have no reason to assume that it will have any access to what Craver would call its ‘contextual functions’ within the greater brain – and how could it, when it knows nothing of the brain to begin with?

    In other words, we literally have no idea what our brains are doing when we engage in so-called ‘propositional thought.’ Now the degree to which, say, first-order logic is efficacious, in NO WAY speaks to the efficacy of the minuscule informatic slice that makes it to conscious awareness (because its contextual functions do not exist for us), only to the efficacy of what the brain is doing when we are making ‘inferences.’ Of course we take credit for it, but only because we’re the only game in town.

    I call this the problem of ‘metonymicry,’ the way the informatic encapsulation of consciousness means that we will congenitally confuse our fragmentary informatic role in the great polity of our brain for something sufficient when it almost certainly isn’t. Given the systematic dependency of our keyhole glimpse on what the greater brain is doing, so long as the brain is effective we will seem effective. We could be entirely wrong about ‘reference,’ for instance, and yet it will apparently ‘function’ quite well given metonymicry. It could even, as in the case of formal semantics, appear as the very basis of insight and understanding: How else would it appear in the absence of information to the contrary? In fact, one of the best ways to tell whether we’re suffering from this particular illusion is the degree to which our second-order attempts to make sense of what we’re doing baffle us – as is most certainly the case with logic and mathematics.

    A great deal of what we call philosophy, I now fear, is simply a symptom of the informatic parochialism of the conscious subsystems of the brain.

    For me, the question has been turned on its head: Given the kinds of depletion, truncation, and parochialism suffered by the information that reaches consciousness, why should we expect any element of consciousness to be ‘just so,’ as opposed to fundamentally deceptive in some respect?

  54. Arnold Trehub says:

    Scott: “Again, it comes down to information horizons: consciousness accesses only a miniscule fraction of the information processed by the gut brain.”

    We agree about this, and I have dealt at length with the preconscious and non-conscious mechanisms of the cognitive brain (see *The Cognitive Brain*, MIT Press 1991), but I don’t see how this explains the unity of consciousness or the phenomena of intentionality.

    Scott: “Now I agree (supposing that consciousness isn’t a spandrel) that evolution assures that this information is causally efficacious, but the problem is that evolution also gives us good reason to assume that consciousness will be pretty much, if not entirely, blind to its neurofunctional role.”

    If I openly propose that the functional role of consciousness is to give us a useful brain representation of the world around us from our own egocentric perspective, would you say that I am blind to the neuro-functional role of consciousness? It seems to me that it’s a matter of belief. Those who agree that the weight of all relevant empirical evidence warrants the belief that consciousness is the brain’s first-person representation of the world it lives in cannot be said to be blind to its neuro-functional role. Those who deny the relevance of the empirical evidence might be described as being blind to the neuro-functional role of consciousness. The task of science is to convince the skeptics. It’s not easy (hence the hard problem) but similar skepticism has been overcome in the past; e.g., heliocentrism, evolution by natural selection, quantum electrodynamics. Why not consciousness as a natural product of a particular kind of brain mechanism?

  55. scott bakker says:

    Arnold: “We agree about this, and I have dealt at length with the preconscious and non-conscious mechanisms of the cognitive brain (see *The Cognitive Brain*, MIT Press 1991), but I don’t see how this explains the unity of consciousness or the phenomena of intentionality.”

    If the question is, What generates conscious unity? then you’re not going to like my answer (which is, nothing). Consciousness appears to be unified, indivisible, for the same reason flickering lights fuse: in the absence of information, identity is the default. As I said, default identity is what needs to be explained. The virtue of my theoretical approach is to show that a variety of (apparently intractable) problems can be seen as expressions of one difficult problem.

    “Those who agree that the weight of all relevant empirical evidence warrants the belief that consciousness is the brain’s first-person representation of the world it lives in cannot be said to be blind to its neuro-functional role.”

    To be precise, you’re not talking about its NEUROfunctional role there, but its environmental role.

    The function of the human brain is to maximize the transmission of genetic material. I can hang my hat on a ‘global function’ like that, but when it comes to consciousness? Not at all. Once again, for a meaning skeptic, saying the function of consciousness is to provide a “first-person representation of the world” is tantamount to saying the function of consciousness is to provide “consciousness of the world.” It doesn’t explain anything.

    But even if I set aside my meaning skepticism, the problem is that consciousness obviously possesses a gaggle of functions, and that in many cases we have discovered that the functions we intuitively ascribe to various components are out and out wrong. Think about ‘motivational transparency’: we like to think we have privileged access to our motivations, when there’s good reason to believe we’re stranded on the outside looking in, making inferences on the basis of observed behaviour just as our friends and family do.

    The list goes on.

  56. Arnold Trehub says:

    Scott: “Once again, for a meaning skeptic, saying the function of consciousness is to provide a “first-person representation of the world” is tantamount to saying the function of consciousness is to provide “consciousness of the world.” It doesn’t explain anything.”

    A couple of points:

    1. If you really agree that having a *first-person [brain] representation of the world* is to be conscious of the world, then you do agree with my working definition of consciousness.

    2. Describing the function of consciousness in the way that I have is not intended as an *explanation* of consciousness. Functional descriptions are not explanations. My explanation of consciousness is based on the biophysical particulars of the neuronal structure and dynamics of the brain’s putative retinoid system which realizes the root function of consciousness. In my opinion the retinoid model naturalizes conscious phenomena.

    Scott: “But even if I set aside my meaning skepticism, the problem is that consciousness obviously possesses a gaggle of functions, and that in many cases we have discovered that the functions we intuitively ascribe to various components are out and out wrong.”

    I agree that consciousness possesses a “gaggle of functions”. But my claim is that none of its multitude of functions can be realized unless its *ruling* function of providing *subjectivity* — a transparent representation of the world from a privileged egocentric perspective — is realized.

    Scott: “What generates conscious unity? then you’re not going to like my answer (which is, nothing).”

    I confess that I am baffled by your explanation of conscious unity. Perhaps we think of conscious unity in completely different ways. For me conscious unity is a single global field of phenomenal features properly bound in spatio-temporal register around a center of perspectival origin (I!). The fact that our various sensory modalities are widely separated in the brain requires that their diverse outputs be combined systematically by some kind of brain mechanism to achieve a useful brain representation of the contents of the world around us. Without such a mechanism (e.g. the retinoid system) our phenomenal world would be a wild and useless mishmash of sensory and imagistic features.

  57. scott bakker says:

    It’s baffling because it explains this unity ‘away.’ Consciousness is not unified: it only appears that way given the severity of the informatic bottleneck of attentional awareness. This is why I began the strange way I did: flicker fusion shows that unity is the default in the absence of information. As do any other phenomena where the cognizing of more information leads us to reappraise the unity of a thing as an externally related assemblage of component parts. Thus the force of my question (which you never answered): In the absence of access to information to the contrary, how ELSE should consciousness appear? If you give it its due, you’ll see that it’s a very tough question to answer. Consciousness as it appears to us is the result of billions of intraneural flickers fusing and fusing.

    The binding problem is the only real problem in my books, not the unity of consciousness. My account doesn’t require a retinoid system, or any other ‘theatre’ that brings everything spatially together. Distinction, complexity, is the accomplishment; unity and identity are the default – and these, I’m guessing, will be shown to be a function of information integration a la something like Tononi (or something more complicated, as Edelman seems to think).

  58. Arnold Trehub says:

    Scott: “Consciousness is not unified: it only appears that way given the severity of the informatic bottleneck of attentional awareness.”

    Here is what you seem to be saying:

    — Conscious experience falsely appears to be unified.

    — Why does conscious experience appear to be unified?

    — Because we are not conscious of any disunity.

    — Why are we not conscious of any disunity?

    — Because for this reason (name your reason) we cannot detect the disunity of conscious experience.

    This argument seems a tautology to me because if we are unable to detect disunity in conscious experience then conscious experience must be a unified experience. What warrants your conclusion that conscious experience is not unified?

    Scott: “The binding problem is the only real problem in my books, not the unity of consciousness.”

    On the contrary, in my book the binding problem is an essential aspect of the unity of consciousness. Without proper spatio-temporal binding in a unified egocentric plenum, consciousness confers no adaptive advantage.

  59. scott bakker says:

    Again, it’s not an explanation of why consciousness is unified, only why it seems to be. Once again, why does the brain as an empirical object, absent detailed knowledge, seem to be a singular thing? Because we lack the information required to make the appropriate distinctions. Is this tautological? Of course not. All I’m saying is that the same relation of information to default identity that pertains between the brain and its environment also pertains WITHIN the brain. The difference is that accessing the brain as an empirical object we can sample information via many different channels, whereas accessing the brain as subject we are hardwired in place, stuck with channels we can’t even perceive as channels for lack of information.

    This way of thinking turns conventional approaches to consciousness on their head, but like I keep saying, once you grasp its Gestalt, it becomes a very parsimonious way to explain (away) many perplexities. While it leaves the generation problem untouched, the stripped-down consciousness that remains seems much more tractable to mechanistic explanation.

    You still haven’t answered my question, Arnold! In the absence of access to information to the contrary, how ELSE should consciousness appear?

  60. Arnold Trehub says:

    Scott: “You still haven’t answered my question, Arnold! In the absence of access to information to the contrary, how ELSE should consciousness appear?”

    Don’t you see that even if we had disconfirming *information*, consciousness would still be a unified experience! Conscious experience exists prior to any of its informational decompositions and subsequent thoughts about it. It is our global occurrent phenomenal world from which we *extract information* and formulate our sentential propositions/beliefs.

  61. Vicente says:

    Arnold,

    “It is our global occurrent phenomenal world from which we *extract information* and formulate our sentential propositions/beliefs.”

    Trying to link with the current post (subliminal).

    Propositions… could be. Beliefs, I doubt it. Probably the sub/un-conscious plays an important role in this game.

    My belief is that one of our goals in life should be to raise as much information and as many mental processes (or at least their output) as possible to the conscious layer. If we then equipped this conscious layer with as many analytical tools and as much objective information as possible, and cleaned it of parasitic processes, your statement would be more meaningful.

    Scott:

    Regarding the other point, what if consciousness does not “appear”, because it is already there, so that part of the brain is just a transponder? This is what I was referring to with the subliminal time threshold. It looks as if the brain needs processing time to produce the “data package/information” required to produce the conscious phenomenal experience. Understanding what this processing requirement is, what constitutes the “subliminal border”, would shed a lot of light on the issue; we could isolate the conscious from the subconscious, at least in neurological mechanical terms. Then we could concentrate on what it is that makes the conscious mechanisms so special, compared with the subconscious “bulk processor”.

  62. Arnold Trehub says:

    Vicente: “It looks as if the brain needs processing time to produce the ‘data package/information’ required to produce the conscious phenomenal experience.”

    It seems to me that here you are talking about the *updating* of conscious content/phenomenal experience. Experimental evidence suggests that the processing time for such recurrent updating from pre-conscious representations (synaptic matrices) to conscious representations (patterns in retinoid space) is approximately 500 milliseconds.

  63. scott bakker says:

    “Don’t you see that even if we had disconfirming *information*, consciousness would still be a unified experience! Conscious experience exists prior to any of its informational decompositions and subsequent thoughts about it. It is our global occurrent phenomenal world from which we *extract information* and formulate our sentential propositions/beliefs.”

    I by no means think that I have anything but an interesting guess at how we need to be reconceptualizing these problems, but it’s statements like these that – perhaps paradoxically – make me think I’m on to something. Go back to the problem of unity as Descartes conceived it, Arnold: it’s basically the problem of internal relationality versus the external relationality of the world. I’m saying that the latter’s external relationality is a function of information, so why not the former? It took astronomy centuries to accumulate the information needed to abandon Aristotle’s spheres. I’m saying we’re in an analogous situation regarding consciousness. Like every other natural phenomenon, it is simply not what it appears to be. Like every other natural phenomenon, our original understanding suffers from a lack of information. The difference, once again, is that it is intraneural: our conscious brain lacks the functional autonomy vis-à-vis our gut brain that our whole brain enjoys vis-à-vis its environment. We’re hardwired into our skewed perspective, in this instance.

    Now you haven’t so much argued against this as asserted contrary commitments – which is all well and fine. I’d be happy if people just recognized this reconceptualization as a valid possibility!

    In your answer, you’re trying to enforce the distinction between the perceived and the cognized, Arnold. First conscious unity, then cognition. What I’m saying overturns the very notion of a Theatre. I’m sure you realize how vexed and controversial this is – I’m with Dennett and Clark, personally. Like I said earlier: the thing I always try to keep in mind is that when we talk ‘consciousness’ we’re talking consciousness as consciously cognized. Given this (and the blurring of cognition and perception more generally) I just don’t find your ‘hyle’ that convincing.

    Even so: When we discuss consciousness we are ‘extracting information and forming propositional commitments’ from your ‘prior phenomenal field,’ are we not? How do you know we’re extracting the ‘right’ information? Given that internal-relationality seems to be the cognitive default in the absence of information, and that INFORMATION IS LOST, how do you know that there is such a thing as a prior unified phenomenal field? Schwitzgebel argues very persuasively that our intuitions are simply a mess in this regard. I’m arguing that we mess things up in systematic ways.

  64. Arnold Trehub says:

    Scott: “When we discuss consciousness we are ‘extracting information and forming propositional commitments’ from your ‘prior phenomenal field,’ are we not? How do you know we’re extracting the ‘right’ information?”

    Sure! We don’t know with certainty that we are extracting the right information. That’s why science is a pragmatic enterprise that forms provisional theories on the basis of the weight of empirical evidence. Science cannot and does not proclaim absolute truths. You and I have essentially different guesses about how to think productively about conscious experience. You seem to think that information is at the foundation of conscious experience. I think that what we call “information” is a cognitive artifact constructed on the basis of decompositions of our conscious experience (our phenomenal world). The utility of our approaches will depend on the body of empirical phenomena that can be successfully predicted/post-dicted on the basis of our theoretical models.

  65. Arnold Trehub says:

    I should add that I don’t believe our phenomenal world, as such, is perceived: it is simply experienced as a unified surrounding manifold. Perception consists in attention-driven decompositions of our occurrent phenomenal world. In my theory, these would be bounded patterns of autaptic-cell activation parsed out of our egocentrically organized retinoid space. For example see “Modelling the World, Locating the Self, and Selective Attention: The Retinoid System”, here:

    http://people.umass.edu/trehub/thecognitivebrain/chapter4.pdf

  66. scott bakker says:

    “Sure! We don’t know with certainty that we are extracting the right information.”

    Given this, doesn’t it behoove you to consider the ways in which you could be misled? You do see this lands you right back into the lap of the explananda problem we originally began with: How do you know that you are explaining the right thing? This is a problem that only the hermeneutic to and fro of science and philosophy can resolve, with the latter being abandoned once the confusions are clarified.

    You do agree that consciousness research bears all the hallmarks of fundamental conceptual confusion? This is pretty much platitudinal for me, which is why I think radical reconceptualizations of the problematic are exactly what the field requires (but which the vast majority are loath to consider, for the predictable reasons).

    Information is my preferred “unexplained explainer” for a variety of reasons. The only thing I’m interested in is the way taking an informatic perspective allows for the systematic interpretation of a wild variety of otherwise apparently disconnected conscious phenomena. Whether information proves key to solving the Hard Problem, I have no idea. But I follow researchers like Tononi because it very well could.

  67. scott bakker says:

    Vicente: “Regarding the other point, what if consciousness does not ‘appear’, because it is already there, so that part of the brain is just a transponder? This is what I was referring to with the subliminal time threshold. It looks as if the brain needs processing time to produce the ‘data package/information’ required to produce the conscious phenomenal experience. Understanding what this processing requirement is, what constitutes the ‘subliminal border’, would shed a lot of light on the issue; we could isolate the conscious from the subconscious, at least in neurological mechanical terms. Then we could concentrate on what it is that makes the conscious mechanisms so special, compared with the subconscious ‘bulk processor’.”

    What I’ve been calling ‘information horizons’ are simply a more general version of the ‘subliminal border’ you’re talking about. This is why I think the attempts by Edelman et al to quantify consciousness are so important (over and above the therapeutic bounty they promise): the ability to neurostructurally map consciousness-capable systems in the brain.

    In a sense, I think the perspective I’m taking will become glaringly obvious once we actually have this detailed map. Why? Because it will force us to start looking at consciousness as a subsystem within a greater neural environment. This perspective makes the question of how the subliminal borders you mention – information horizons – manifest themselves within experience more than a little pressing. And this, I think, will give us the key we need to resolve the explananda problem: it will show us that so much of what makes consciousness seem mysterious is simply an artifact of neurostructurally mandated distortions and illusions.

    My thesis is deceptively simple: That we mistake consciousness for the same reason we mistake objects IN consciousness – For want of information. Flicker fusion is simply a version of the same illusion.
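
    Scott’s flicker-fusion point is concrete enough to simulate. The sketch below is my own illustration, not anything proposed in the thread; the 50 ms “integration window” is an arbitrary stand-in for an observer’s temporal resolution. A square-wave light averaged over such a window reads as perfectly steady once it blinks faster than the window can resolve:

```python
# Toy model of flicker fusion as an "information horizon".
# Illustration only: the window size and blink rates are assumptions,
# not figures from the discussion.

def flicker(t: float, hz: float) -> float:
    """Square-wave light: on for the first half of each cycle."""
    return 1.0 if (t * hz) % 1.0 < 0.5 else 0.0

def perceived(hz: float, window: float = 0.05,
              steps: int = 1000, duration: float = 1.0) -> list[float]:
    """Average the light over sliding integration windows of `window`
    seconds, standing in for an observer of limited temporal resolution."""
    dt = duration / steps
    samples = [flicker(i * dt, hz) for i in range(steps)]
    w = max(1, int(window / dt))
    return [sum(samples[i:i + w]) / w for i in range(steps - w)]

def modulation(percepts: list[float]) -> float:
    """Depth of perceived flicker: 0.0 reads as a steady light."""
    return max(percepts) - min(percepts)

slow = modulation(perceived(hz=2))    # slow blink: flicker clearly visible
fast = modulation(perceived(hz=200))  # fast blink: fuses into a steady glow
```

    Below the resolution threshold the percept still modulates; above it, the modulation collapses even though the stimulus never stops alternating. The “unity” is simply what is left once the distinguishing information is lost.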

    Like I keep saying, my question (which I’ve posed in several versions now) is DEVILISHLY difficult to answer: Why should our attentional awareness of consciousness not suffer the same informatic pitfalls that our attentional awareness of objects in consciousness suffers? If you agree with me that it should (why should evolution design an entirely different system to cognize ‘self-consciousness of’ as opposed to simple ‘consciousness of’?) then you have a very straightforward, very parsimonious, means of explaining the problem of conscious unity away. Full stop.

    Arnold, it’s kind of like Copernicus at this point: an explanation that so profoundly contradicts immediate experience (the illusion of motionlessness) that it strikes most as preposterous. We’re waiting for the map you speak of the same way Copernicus and his contemporaries were waiting for Kepler and Newton. The big difference is that the perspectival illusions afflicting us in this case are wired in.

    Again, the question is, why should the situation with consciousness be any different than with any other natural phenomena? Why shouldn’t we think our ‘view’ is not a skewed artifact of informatic privation? An illusion of our blinkered perspective…

  68. Arnold Trehub says:

    Scott: “You do agree that consciousness research bears all the hallmarks of fundamental conceptual confusion?”

    Yes I do. Which is why I engage in online discussions/arguments with other researchers in forums like this. But the relative value of our different intuitions does need to be tested against empirical findings.

    Scott: “… doesn’t it behoove you to consider the ways in which you could be misled?”

    Yes indeed. It behooves all of us to consider the ways we might be misled.
    The moon illusion is a good example of how we are misled. An examination of how the brain can create this misleading perception can tell us a good deal about the tricks the brain can play on our phenomenal experience.

  69. scott bakker says:

    Arnold: “Yes I do. Which is why I engage in online discussions/arguments with other researchers in forums like this. But the relative value of our different intuitions does need to be tested against empirical findings.”

    The trick is to find which experiments, framed in the right way, will provide the information needed. Not all experiments are equal. So with the heliocentric model, what was needed was information that lay outside our immediate frame of reference, which seemed to argue indubitable motionlessness. Once the heliocentric model was conceptualized as an alternative, those experimental possibilities presented themselves.

    Not before.

  70. Kar Lee says:

    Arnold,
    I would like to go back to Comment 36 in which you quoted your email correspondence with Levine, where you said, “My argument is that the physicalist project with respect to mind will continue to spin in the wind until we recognize that the currently achievable goal is a biophysical explanation of the content of consciousness, not an explanation of the primitive existence of consciousness. This is no more a matter of giving up the physicalist project on mind than physics gives up its project on the physical universe by positing the primitive of space-time.”

    I would like to make this observation: anything primitive, upon which other things are built, is unchanging. The primitivity of space-time is an example. Mass is another. Things that are fundamental cannot be destroyed. If the fundamental building blocks can change or can be destroyed, then they are no longer fundamental, because some explanation is required of why they change. If they can be explained in terms of other things, then they are not fundamental anymore. Therefore space-time can be considered fundamental, something upon which other things are built. Consciousness, on the other hand, is not. One moment you have it, another moment you lose it. And as in bodily death, you lose it forever. Its existence, or non-existence, or the transition between these two states demands an explanation. It is therefore hard to take consciousness as fundamental. If our goal is only to explain the “content of consciousness”, and not consciousness itself, then clearly we are trying to explain different things, and that could explain the confusions that have been generated in many philosophy of mind discussions.

    If you take consciousness as primitive, then the only way out is to claim that in the event of bodily death, consciousness does not cease to exist, only that its content is wiped out. But that will make you a dualist.

    Am I making any sense?

  71. Arnold Trehub says:

    Scott: “Once the heliocentric model was conceptualized as an alternative, those experimental possibilities presented themselves.”

    OK. I’ve conceptualized the retinoid model of consciousness (“wired” in the brain) which presented the possibility of my SMTT experiments (as well as other relevant experiments). What do you think might characterize the experiments that can provide the information needed?

  72. Vicente says:

    Hi Kar Lee,

    To me you are making perfect sense.

    That is what I suspect, and have commented a few times in the past: probably we need to introduce a big dose of ontology into the consciousness problem, since the epistemological line hardly seems to pan out.

    In addition to your great point, my other consideration is that conscious beings are not observer-dependent, while everything else is:

    Consciousness = Existence

  73. Vicente says:

    Scott:

    Well, Newton and Kepler could observe and measure the planets’ orbits… we can’t observe others’ phenomenal worlds.

    I believe that if there’s an answer it will come in a way anything but parsimonious; it has to be a striking conceptual breakthrough.

    Let’s say… Einstein and Schrödinger rather than Newton and Kepler: although the former needed the latter to build upon, their theories were revolutionary.

  74. John says:

    The endless regress of HOT: a HOT is needed to be aware during an instant, but at an instant there are no processes, so a HOT is needed to be aware during an instant, but at that instant there are no processes, so….

    If you are not aware during an instant you are not aware.

    Are we doomed to infinite rehashes of what is known to be fallacious? Doomed to infinite repeats of infinite regresses.

  75. Arnold Trehub says:

    Kar: “Anything primitive, upon which other things are built, is unchanging.”

    I don’t agree. A scientific primitive is a historically contingent concept — something that we don’t bother to explain, or something that we are currently unable to explain. What were primitives hundreds of years ago may no longer be conceived as primitives because today we understand them as products of deeper physical processes. In the future, the broad acceptance of a new physical theory might provide an explanation of space-time that shows that space-time depends on a combination of other physical events and, just like consciousness, does not exist when these necessary events are changed. Science cannot proclaim the immutability of any of its concepts. As I suggested before, science is not omniscient; it is a pragmatic enterprise, not oracular.

    Vicente: “Well, Newton and Kepler could observe and measure the planets’ orbits…”

    This is not quite accurate. Newton and Kepler did observe planets’ orbits that were represented within the perspectival space of their phenomenal worlds, but what they actually measured were patterns of marks on artifacts that represented the planets’ orbits within their own phenomenal worlds. We accepted the implications of their measurements because they accurately predicted subsequent observations.

    Vicente: “… we can’t observe others phenomenal world.”

    Neither can we observe sub-atomic particles, but we can observe the tracks such particles leave in our measuring artifacts. My SMTT experiments are similar in this respect. We can’t observe the subjects’ phenomenal world/perceptual experience, but we can observe the tracks of the perceptual experience in our measuring artifact.

    Vicente: “Consciousness = Existence”.

    So Existence = Consciousness? Can you give us the evidence that supports your claim that your shoes (which I assume really exist) are conscious?

  76. Arnold Trehub says:

    Peter, why does my comment #75 await moderation?

  77. scott bakker says:

    Vicente: “Well, Newton and Kepler could observe and measure the planets’ orbits… we can’t observe others’ phenomenal worlds.”

    Which is one of the things that makes the task so daunting, for sure. There are others besides.

    Vicente: “I believe that if there’s an answer it will come in a way anything but parsimonious; it has to be a striking conceptual breakthrough. Let’s say… Einstein and Schrödinger rather than Newton and Kepler: although the former needed the latter to build upon, their theories were revolutionary.”

    The arrival won’t be parsimonious, or the resulting theory won’t be? I guess I’m trying to say that I at least have a candidate for consideration, a radically different way of theorizing consciousness that explains away a number of baffling phenomena with a single, elegant premise – and so drastically pares the Hard Problem down to size. And it does so in a manner analogous to Einstein at least: it reinterprets a variety of phenomenal features as an expression of fundamental structural constraints.

    I’m less than sanguine about the prospect of bending many ears though! Everyone in this biz is busy honing their pet theories, sinking far too much time and treasure into their hunches to have any hope of escaping the human predilection to cherry-pick, skew, and rationalize. The sheer number of theories out there means that professionals in the field are prone to economize their consideration of alternatives, to adopt an eliminative mindset, in which case the truly radical reconceptualizations are bound to find themselves dismissed before they are even understood. We’re hardwired to confuse agreement with intelligence. And if this wasn’t bad enough, I’m a lay outsider, an amateur, which means that I will inevitably run afoul of the implicit, largely stylistic criteria we unconsciously use to sort the serious from the spurious. ‘Value attribution’ is generally the doom of the genuinely new.

    Which is why I figure my best chance is to keep pressing people on the questions my view raises, questions that should be obvious, then to poke and prod when they are avoided ;)

    How is flicker fusion (the result of exteroceptive resolution thresholds – or information horizons) related to the unity of consciousness? Why should exteroceptive fusion not obtain interoceptively, especially given the likelihood that we’re stranded with the same machinery to cognize both? How are the information horizons of conscious subsystems expressed in (our attentional awareness of) consciousness? We know that consciousness is an informatic bottleneck, so why isn’t anyone considering the ways this might bear on the first person perspective?

  78. scott bakker says:

    Arnold: “What do you think might characterize the experiments that can provide the information needed?”

    Personally, I don’t think we’ll be off to the races until we get the map that Vicente alluded to, until we have a good understanding of the mechanism of consciousness in at least two respects: the conscious subsystem understood as informatically open, which is to say in terms of its neurofunctional context, and the conscious subsystem understood as informatically closed, which is to say, the ways the informatic closure of consciousness generates various phenomenal illusions (for the simple lack of information).

    The most obvious of these illusions would be the ‘soul intuition,’ the way the conscious brain seems prone to categorize itself as something OTHER than itself. Given that it is informatically closed (consciousness only has the information it has), it makes sense that the brain would see itself as a ‘special object,’ a kind of ‘false unconditioned’ that stands apart from the natural order. In fact, I think information horizons can explain the Kantian intuition of the ‘transcendental ego’ (and much else in philosophy besides). This is just the tip of the iceberg. The consciousness we think we have is the result of cognitive systems trying to figure out the museum from the information available in the vestibule.

  79. Arnold Trehub says:

    Scott, is the conscious subsystem that is “informatically open” the brain’s system of neuronal mechanisms in the objective 3rd-person descriptive domain that generate consciousness (3pC)? Is the conscious subsystem that is “informatically closed” the activity within 3pC that constitutes the subjective 1st-person descriptive domain (1pC)? If so, it seems to me that you are describing the metaphysical stance of dual-aspect monism.

    It is a truism in neuroscience that while the brain has many sensory modalities, it cannot sense itself as a biological system (informatic closure?). On this basis we can agree that we have no *direct* evidence that consciousness is anything like the activity of neuronal mechanisms in our brain.

  80. Vicente says:

    Arnold,

    “On this basis we can agree that we have no *direct* evidence that consciousness is anything like the activity of neuronal mechanisms in our brain.”

    That depends on what you understand by “direct”, but I think we can say they are in direct mapping, i.e. NCCs, on an introspective basis; I wouldn’t say on a behavioral-observation basis, for that has a strong subconscious dimension.

    Play with a set of electrodes on your brain and see what happens… or take any drug that has an effect on the CNS… and introspectively observe the impact on your conscious mind.

    To me, that is direct evidence as far as I am concerned.

    The point is that both stimuli, electrical or chemical, modify the contents of consciousness, or its flow rate… but consciousness itself, I don’t know. This leads us to your previous question: can consciousness and its contents be decoupled?

  81. Vicente says:

    Scott,

    “explains away a number of baffling phenomena with a single, elegant premise”

    Yes, but what if that premise cannot be scientifically supported?

    “I’m a lay outsider, an amateur”

    So am I, but in this field the border between amateurs and professionals is pretty blurred. Actually, the writings that have best contributed to my understanding (or not) of the mess were not produced by pros, strictly speaking.

    I believe that the main goal that orthodox neuroscience will achieve is to prove that the brain cannot be the only element responsible for consciousness and that something more is required. Even without knowing what that additional ingredient is, it would mean a huge step forward.

    Anyway, what I want is to understand what it is, not to make others value my opinions.

    “We’re hardwired to confuse agreement with intelligence.”

    ha ha… yes, actually there’s no term more counter-scientific, irrational and illogical than “consensus”, stupid, huh!!

    “The most obvious of these illusions would be the ‘soul intuition,’ the way the conscious brain seems prone to categorize itself as something OTHER than itself.”

    Well, the conscious brain, to begin with, doesn’t know that there is such a thing as a brain… but you are conscious. Just as you don’t need to know that there is a stomach in your belly in order to digest the food, you see.

    There is no illusion… consciousness comes first, it is all for you, all other considerations come after.

    The brain is an observer-dependent object; it does not exist except in the minds of conscious beings. No consciousness, no brain. Now, can consciousness be without a brain?

  82. Kar Lee says:

    Vicente,
    You are definitely right to say that we also need to address the ontological aspect of it. But naturally, we flip-flop between the two.

  83. Kar Lee says:

    Arnold,
    Let me try to see if I can type this in the 10 minutes I have here. You disagree that things considered fundamental cannot change. A challenge for you: please give me an example, aside from consciousness, of something you consider fundamental but that can change, at this point in our state of knowledge.

    While I agree with you that things we once believed to be fundamental turned out to be derived, rendering them non-fundamental in the new state of knowledge, there is nothing that can explicitly change but still be considered fundamental. Unless you completely deny the concept of “fundamental”, you will have to accept that this concept is associated with something that has lasting power, something that is forever, even though you could still argue that fundamental things don’t exist because there will always be something that is even more fundamental. This point is debatable. However, at any state of our knowledge, if something is considered fundamental, it has to be non-derivable at that point in time.

    If you stick with a materialist’s view, consciousness is clearly a derivable phenomenon. So it cannot be fundamental, and it needs explanation. To focus on the content of consciousness while ignoring consciousness itself seems to be ignoring the elephant in the room. The hard problem is the existence of the elephant. I hope you can extend the retinoid model to address the existence of consciousness, instead of just assuming it. A tall order, I know. Maybe you could do it.

  84. Arnold Trehub says:

    Kar: “To focus on the content of consciousness while ignoring consciousness itself seems to be ignoring the elephant in the room. The hard problem is the existence of the elephant. I hope you can extend the retinoid model to address the existence of consciousness, instead of just assuming it. A tall order, I know. Maybe you could do it.”

    Do you believe that we can properly explain the existence of consciousness without agreeing on a working definition of consciousness? My working definition specifies that consciousness IS a particular pattern of spatio-temporal autaptic-cell activity in the brain (activated retinoid space). Absent this activity, consciousness DOES NOT EXIST. Present this activity, consciousness DOES EXIST. So the sheer existence of consciousness, for me, is explained by a particular kind of biophysical content in the brain — an evolutionary gift. But I also recognize that others (like yourself) might legitimately ask “WHY does this *particular content* cause consciousness?” I would say that the question makes sense only if you reject my working definition of consciousness. You and others are free to propose your own working definition of consciousness, build a theoretical model around it, test the model, and present empirical evidence in support of the model. Then we can compare the scientific status of competing models.

  85. scott bakker says:

    Arnold: “Scott, is the conscious subsystem that is “informatically open” the brain’s system of neuronal mechanisms in the objective 3rd-person descriptive domain that generate consciousness (3pC)? Is the conscious subsystem that is “informatically closed” the activity within 3pC that constitutes the subjective 1st-person descriptive domain (1pC)? If so, it seems to me that you are describing the metaphysical stance of dual-aspect monism.”

    Yeah, sure. No, not really. I use perspectival metaphors continuously, but more as heuristics than anything else. I fear I’m a metaphysical opportunist where the question of consciousness is concerned.

  86. scott bakker says:

    Vicente: “Yes, but what if that premise cannot be scientifically supported?”

    Then I literally fall to my knees and weep for joy. I find the consequences of my approach horrifying.

    “I believe that the main goal, that orthodox neuroscience will achieve, is to prove that the brain cannot be the only element responsible for consciousness and something more is required. Even without knowing what is that additional ingredient, it would mean a huge step forward.”

    This is what I WANT as well – more than you can know! So of course, my attempts to solve the riddle led me to the diametrically opposite conclusion: all the things that suggest ‘something more’ – that seem so obviously to set consciousness apart from the rest of the natural world – are ‘tricks of our intraneural perspective.’ And this, I’m afraid, is in keeping with the history of science. The earth is not the centre of the universe. Homo sapiens is just another twig on the tree of life. Human consciousness is…

    “There is no illusion… consciousness comes first, it is all for you, all other considerations come after.

    The brain is an observer dependent object, it does not exist except in the minds of conscious beings. No consciousness, no brain. Now, can consciousness be without a brain?”

    But there is illusion, and plenty of it. The bulk of our sensorium. Our sense of willing. Our sense of rationality. As a matter of empirical fact, all these things and more are positively larded with illusion. The question now is really only one of degree. I think I have some persuasive arguments for why we should brace ourselves for the worst.

    Note how what you’re saying almost exactly fits the ‘soul dilemma’ I mentioned: the inability of consciousness to plug itself into natural (causal) contexts, the way it seems perpetually to ‘come first,’ both practically and ontologically. The enormous body of data we have regarding neuropathology makes the dependence of consciousness upon the brain about as empirically airtight as you could imagine, and yet the INTUITION OTHERWISE persists. BBT explains this intuition: why it is we find idealism so intuitively appealing. It actually predicts that other evolved intelligences would likely have their own similar intuitions. On top of everything, it has the virtue of explaining why consciousness has been such a tough nut for science to crack.

  87. Arnold Trehub says:

    Scott, I agree that our inability to experience the machinations of our own brain from the inside — what you call the BBT — is one reason that idealism is so intuitively appealing. But I think another important reason is that, until recently, there has been no explicit theoretical model of the neuronal structure and dynamics of brain mechanisms that are competent to generate our illusion of being in unmediated contact with the world. It is my expectation that my detailed theoretical model of the cognitive brain and the functional role of the retinoid system as a part of the cognitive brain will help eliminate the cognitive blinker that traps us in the persistent illusion that our conscious experience is something mysterious that is added to the biophysical activity of our brain.

  88. scott bakker says:

    Arnold, this is kind of an aside, but I was wondering what you make of Scattered Brain type thought experiments, where the synaptic gaps between neurons are expanded to some arbitrary distance.

    I generally agree with you: the tighter science demonstrates the fit between consciousness and brain, the more untenable idealism SHOULD become, but the quantum mysteries of the material, alas, still leave the door open.

    As I’ve said all along, your approach will be convincing (which is quite different from being ‘true’) to the degree that it can explain and/or explain away the perplexities that bedevil it. Otherwise people will say you’ve simply isolated the NCCs. To this extent, and because I think BBT is entirely compatible with your approach, I urge you to think through my questions at least, if not my approach more generally.

    No one has stepped up to the plate to answer any of them! But then I’ve resigned myself to the fact that questions I think profound typically only sound peculiar to others ;)

  89. Arnold Trehub says:

    Scott: “Arnold, this is kind of an aside, but I was wondering what you make of Scattered Brain type thought experiments, where the synaptic gaps between neurons are expanded to some arbitrary distance.”

    Scott, I assume that this is what you want me to respond to:

    [– Suppose, now, that the Chinese government orders the Chinese population to do something different. One by one, they start replacing the Chinese citizens with neurons. The neurons are suspended in nutrient baths, and housed in small containers. For each kind of neuron the containers have a supply of the neurotransmitters that that kind of neuron normally responds to. One part of the container takes in electrical signals and releases the neurotransmitters when it receives the appropriate input. Another part takes in the neurotransmitters released by the neuron’s synaptic vesicles and generates an appropriate electrical signal as output. Suppose further that the electrical input and output signals are coded in such a way that, when connected to the radios being used by the Chinese citizens, the signals are indistinguishable from those generated by the Chinese citizens. As the citizens are gradually replaced by the neurons, there should be no change in the activity of the system. After all, in Block’s original thought experiment we supposed that the citizens mimicked the activity of individual neurons, and collectively the citizens were organized so as to mimic the behavior of the brain as a whole. There is thus no reason for the activity of the system to change in any relevant way as the citizens are replaced by the neurons, and the system should continue to implement the same program throughout the transition

    Once this process is complete, we will have a system which functions in the same way as the Chinese Nation and a normal human brain. Let’s call this system the scattered brain, for the only important difference between it and a normal brain is that its neurons are scattered throughout a larger region of space. Block has no doubt that human brains are phenomenally conscious, but he does doubt that the Chinese Nation is phenomenally conscious. (Philosophy of Mind, p. 97) But what about the scattered brain? Is it phenomenally conscious or not? –]

    I agree with Block. The physical spacing between neurons is critical for phenomenal consciousness. The reason is obvious. The autaptic-cell activity in retinoid space is not a simple matter of a pattern of action potentials; close-packed neurons (as in a normal brain) create a continuous electromagnetic field with a spatio-temporal amplitude structure determined by the superposition of neighboring local neuronal field potentials. Since the magnitude of emf amplitude rapidly diminishes with the square of the distance from each neuronal source, the kind of integrated neuronal emf field generated in the brain’s retinoid space would not exist in the Scattered Brain. It is important to recognize that the devil is in the details.
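    A toy numerical sketch of this scaling claim, treating each neuron as a point source whose field contribution falls off with the inverse square of distance (an illustrative simplification of the real biophysics; the spacings and source counts here are invented):

```python
def field_amplitude(sources, probe):
    """Superposed amplitude at `probe` from point-like sources, assuming each
    contribution falls off as 1/r^2 (an illustrative simplification)."""
    total = 0.0
    for x, y in sources:
        r2 = (x - probe[0]) ** 2 + (y - probe[1]) ** 2
        total += 1.0 / r2
    return total

probe = (0.0, 0.0)
close = [(i * 1.0, 0.0) for i in range(1, 5)]         # tightly packed sources (arbitrary units)
scattered = [(i * 1000.0, 0.0) for i in range(1, 5)]  # the same sources, 1000x farther apart

ratio = field_amplitude(close, probe) / field_amplitude(scattered, probe)
print(ratio)  # roughly a million: the superposed field collapses when sources scatter
```

    On these assumptions, scattering the sources a thousandfold reduces the superposed amplitude by a factor of about a million, which is the intuition behind the Scattered Brain response; whether real neural tissue behaves this way is exactly what is disputed below.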

  90. scott bakker says:

    I agree with devils and details. I understand the appeal of emf and synchrony approaches: they seem to provide the unified shell of simultaneity that consciousness appears to exhibit. Is there any experimental evidence of their importance?

  91. Kar Lee says:

    Arnold[84],
    I take your point. Indeed, I don’t feel right about your working definition. Another definition that defines away the hard problem is: “Consciousness IS highly integrated information” (Peter wrote about it here)

    So, I now understand that you are not trying to address the hard problem. Thanks for the clarification.

  92. Arnold Trehub says:

    Scott: “I understand the appeal of emf and synchrony approaches: they seem to provide the unified shell of simultaneity that consciousness appears to exhibit. Is there any experimental evidence of their importance?”

    Experimental evidence is meaningful only within the context of theory. Here is experimental evidence based on the logical implications of the putative retinoid mechanism:

    Because each autaptic neuron is a leaky integrator, its integrated membrane potential is a positive function of its pre-synaptic spike frequency, and its local field potential will be a positive monotonic function of its spike discharge frequency, which lowers the latency of response of its targeted autaptic neuron in retinoid space. The properties of this mechanism predict that if a circular object in lateral back-and-forth motion is viewed through a triangular aperture in an occluding screen, it should be perceived as an egg-shaped object swinging like a pendulum pivoting at the apex of the triangular aperture. This theoretical prediction was tested and confirmed. For a more detailed account see *The Cognitive Brain*, p. 239 and Fig. 14.5, p. 242, here:

    http://people.umass.edu/trehub/thecognitivebrain/chapter14.pdf

    Notice that the phenomenal simultaneity (extended present) of this novel conscious experience depends on the temporal overlap of integrated em patterns at appropriate spatiotopic coordinates, due to the short-term memory properties of autaptic neurons in retinoid space.

    There is much more experimental evidence out there, but this is a particularly striking example.
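    As a toy illustration of the leaky-integrator behaviour described above, here is a minimal Euler-method sketch; the time constant, spike weight, and update rule are illustrative assumptions, not the equations of *The Cognitive Brain*:

```python
def leaky_integrator(spike_times, tau=10.0, weight=1.0, dt=0.1, t_end=100.0):
    """Toy leaky integrator: dV/dt = -V/tau, plus a fixed increment per
    pre-synaptic spike. (Illustrative parameters only.)"""
    spikes = {round(t / dt) for t in spike_times}  # spike arrival steps
    V, trace = 0.0, []
    for step in range(int(t_end / dt)):
        V += dt * (-V / tau)      # passive leak toward zero
        if step in spikes:
            V += weight           # a pre-synaptic spike arrives
        trace.append(V)
    return trace

# Higher pre-synaptic spike frequency -> higher integrated membrane potential.
low = leaky_integrator(spike_times=range(0, 100, 20))   # one spike every 20 ms
high = leaky_integrator(spike_times=range(0, 100, 5))   # one spike every 5 ms
print(max(high) > max(low))
```

    The denser spike train drives the membrane potential to a higher peak, matching the “positive function of its pre-synaptic spike frequency” in the comment.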

  93. Arnold Trehub says:

    Kar: “I take your point. Indeed, I don’t feel right about your working definition.
    … So, I now understand that you are not trying to address the hard problem.”

    OK, now tell us what your working definition of consciousness is so that we can better understand what you mean by “the hard problem”.

  94. Kar Lee says:

    Arnold,
    I don’t believe that you don’t understand the hard problem. But just for clarity, this is what I wrote in the ebook “Where are the zombies”. I hope it makes the problem clear:

    ……quote………
    “A wise man asking a stupid question inside the Matrix”

    The Matrix is a virtual environment simulated by a giant computer. People are hooked up to this giant computer by electrodes inserted into their spinal cords so that the computer can generate all sorts of sensations for them. Given the right electrical signal, you will feel as if you are in a desert, or eating a piece of chicken, or looking at a beautiful flower under a summer sun. Since the computer directly interfaces with your brain, you will be in a dream-like environment and will be unable to tell that it is a virtual environment.

    Inside this virtual common environment, everyone is given a virtual body (you have to be, otherwise you would be bodiless). You can see and feel your virtual hands, your virtual legs, your virtual clothing. At the same time, you can also see other people’s virtual bodies, just as you would see other real bodies in the real world. In fact, this is how different individuals interact inside this virtual world: through their virtual bodies, which are purely computer generated. We can imagine that if the simulation is as good as what is described in the movie, we can be completely immersed in this virtual environment, unable to recognize that it is just a simulated environment, especially if we have been connected to the Matrix since birth. Now, imagine a doctor performing brain surgery on someone inside the Matrix and revealing that one’s brain is really a mechanical structure full of gears and springs, similar to the structure of a mechanical clock, with a pendulum swinging back and forth. And when a certain spring in the head is pulled, the person under this brain operation is given a certain sensation of pleasure by the computer through the real spinal cord in the real world, and the person promptly reports inside the virtual world, “I feel really good…” Since this is a simulated environment, the computer can make any individual feel any way. But if the associations of feelings are applied inconsistently, people inside may eventually recognize the environment as fake by its inconsistencies and self-contradictions. As long as the rules are applied consistently, people won’t be able to recognize the hoax. One rule may well be that whenever anyone’s virtual spring is “touched”, that person is given a sensation of pleasure. So, inside the Matrix, it becomes a well-known “scientific” fact that the spring in the head is a pleasure center, and people publish research papers about this fact.

    Then there comes along a wise person inside the Matrix who, just like everyone else, is completely unaware of the outside reality. But he asks, “Why is my feeling associated with the pulling of this particular spring in my head?” You can imagine people inside the Matrix looking at this wise person with awe and pointing out to him the obvious: “It is your head. It is the pleasure center in your head. What else do you expect?” You can also imagine that there are neuroscientists inside this virtual world, experts in the virtual brain’s functions, who attempt to answer the question seriously by resorting to some deeper level of brain-gear mechanics and publish their findings in research papers. At the end, one question remains: why, when those deeper brain gears are turned, does the person feel a certain way? Of course, we know that it is the computer sending signals to the real spinal cords outside.

    But for someone who is completely unaware of this higher reality where the real spinal cords are located, there can be no answer. There can be no answer from within the Matrix to this wise person’s question. So our neuroscientists in the Matrix, being “materialists” inside the Matrix, have to resort to the final answer: “Of course your feeling is associated with this piece of spring. This is YOUR brain!” Immediately, we see the problem with this answer. These are just virtual bodies. But we also realize that no one inside the Matrix can refute this answer effectively, because the “materialists” can always insist that one is to be identified with one’s (virtual) brain and that there is no problem with that. But being outside of the Matrix and knowing that what is being “touched” is just a simulated virtual body, we know that the wise person is asking a good question inside the Matrix. Indeed, without the “real” reality, one simply cannot explain why touching a spring in one’s “virtual brain” will cause the sensation of pleasure. People inside the Matrix simply cannot know about the higher reality outside, and so their explanation, whatever it is, cannot be the real explanation. Insisting on identifying one’s nature with the virtual body is therefore a serious logical error in reasoning.

    But then, don’t we have the same explanatory gap in our “real” world as well? Why, when some signal reaches a certain part of some (my) brain, do I get this sensation? To explain that, don’t I need to invoke some even higher reality? Otherwise, how else can I explain across this gap?
    …….end quote ………..
    The gap is the HARD PROBLEM. But, Arnold, you already know this, unless I am mistaken.

  95. Arnold Trehub says:

    Kar, yes the GAP is our lack of omniscience — our inability to know the ultimate nature of the real world that we can only simulate. To explain THIS hard problem would require us to leap over the chasm of our own natural ignorance. Science is not omniscient and cannot make the leap. That’s why I say that if we try to explain the ultimate nature of consciousness or anything else (in your terms, the higher reality), we are fated to go round and round through the same revolving door. Science operates by making empirical tests of imperfect theories. This applies to our scientific explanation of consciousness as it does to our scientific explanation of other aspects of our experience. Faith, of course, is not limited in this way.

  96. Vicente says:

    Arnold,

    You wrote [89]: “I agree with Block. The physical spacing between neurons is critical for phenomenal consciousness. The reason is obvious. The autaptic-cell activity in retinoid space is not a simple matter of a pattern of action potentials; close-packed neurons (as in a normal brain) create a continuous electromagnetic field with a spatio-temporal amplitude structure determined by the superposition of neighboring local neuronal field potentials. Since the magnitude of emf amplitude rapidly diminishes as the square of the distance from each neuronal source, the kind of integrated neuronal emf field generated in the brain’s retinoid space would not exist in the Scattered Brain. It is important to recognize that the devil is in the details.”

    This makes no sense at all. We are not talking about isolated point charges in space, but about a complex system of polarised dielectric layers whose electric (not em) field distribution calculations lead to solutions that will not follow the inverse-square law.

    In addition, there is no model of what role the electric fields in the brain could play in phenomenal consciousness, if any.

    Having said this, a current research line is looking at possible direct neuron-to-neuron communication by cross-talk through mutually induced polarization, without requiring a synaptic connection. But this would just be another mechanism, probably much faster and useful for wide-area synchronising, but just that: another mechanism. It is true that this mechanism would require very compact neuron packing, for short-range interactions in the transitory regime, before synchronisation could exert an influence over further regions.

    It is quite promising, actually: despite being at a very empirical stage, electrical brain stimulation is producing good results, probably mostly as a result of the induced currents, but the effect of the field-induced polarization in the area has to be considered too.

    I insist: just another possible physical mechanism working in parallel with chemical synaptic switching, still to be described.

    To summarize, I don’t see any biophysical mechanism in the brain, chemical or electrical, accountable for phenomenal experience. Your argument about ultimate realities doesn’t work for me: it is not the same to be ignorant of the very nature of an electron as to not even have a clue what “redness”, “smoothness” or the “pitch” of a note could be, despite there being physical correlates for them. It is not the same.

  97. Arnold Trehub says:

    Vicente: “We are not talking about isolated point charges in space, but about a complex system of polarised dielectric layers whose electric (not em) field distribution calculations lead to solutions that will not follow the inverse-square law. ….. I don’t see any biophysical mechanism in the brain, chemical or electrical, accountable for phenomenal experience. Your argument about ultimate realities doesn’t work for me: it is not the same to be ignorant of the very nature of an electron as to not even have a clue what “redness”, “smoothness” or the “pitch” of a note could be, despite there being physical correlates for them. It is not the same.”

    This is a good illustration of the difficulty we face in trying to convince investigators that consciousness can be understood within the norms of science, just as we do with other theoretical entities such as electrons. Vicente wants to talk only about the measurement of dielectric charge distributions in a complex system of neurons in the brain. It appears that he thinks we can ignore the fact that neurons are leaky capacitive integrators with semi-permeable membranes, and that any particular pattern of their flux of ions and electrons in the conductive volume of brain tissue systematically shapes the complex electromagnetic field that these bioelectric currents induce. The point is that these electromagnetic brain fields are inextricably and systematically linked to the patterns of autaptic-cell discharge within retinoid space and cannot be ignored in theorizing about the biophysical basis of consciousness. This is why arguments like the Scattered Brain thought experiment are way off target. For more about electromagnetic fields in the brain, see “Magnetic Source Imaging of the Human Brain” by Zhong-Lin Lu and Lloyd Kaufman (2003).

    When Vicente claims that we “don’t even have a clue of what “redness” might be”, he has to recognize that he is echoing Feynman’s claim that he and nobody else really understands quantum electro-dynamics. All that we are able to say (argues Feynman) is that when we do an empirical test of the implications of the theoretical model we discover that it works; i.e., we actually find what the theoretical model says we should find! If a theoretical model of the structure and dynamics of a neuronal brain mechanism predicts that a small square visual stimulus at a particular frequency on the em spectrum will elicit the response “red” from the subject, and that when the stimulus is extinguished the subject will say that he sees a “green” square, and that when the subject fixates a near surface and a far surface while the color after-image persists he will say that the “near” green square looks smaller than the “far” green square, and all of this is what actually happens when we test the theory, then we can legitimately say that we have explained this aspect of consciousness *within scientific norms*. But, of course, we have not explained the ultimate nature of consciousness any more than science has explained the ultimate nature of a photon or an electron. Also, this does not mean that we cannot have a *better explanation* of consciousness or quantum electrodynamics in the future.

  98. Arnold Trehub says:

    “… Feynman’s claim that he and nobody else really understands quantum electro-dynamics.”

    Sloppy wording on my part. Clarification: Feynman claimed that neither he nor anybody else really understands quantum electrodynamics.

  99. Vicente says:

    Arnold,

    At those working frequencies I would talk of quasi-static fields at most…

    I don’t neglect dissipative (current) effects in the media. What I am saying is that those fields are the result of the dynamic behaviour of the charges (polarization or free), as it could not be any other way. For the same reason, you probably have temperature (energy) scalar fields all over the brain; so what? Once you have a potential propagating through an axon, that’s it: you’ll have the corresponding field. Then, if those fields also contribute to communication mechanisms between neurons, so much the better.

    QED is just the quantum version of CED, in which particles and forces (gauge fields) are values of the quantum EM field, and follow the typical quantization, uncertainty, etc., etc. So what does that do to help us understand consciousness…

    I don’t see that you have included any electrodynamic consideration, classical or quantum, in your retinoid system model…

    I don’t claim anything; why do you see any relationship between QED and “redness”?

    The point is:

    We have incorporated into our physical models and theories of the Universe (brains included), entities and concepts such as particles or fields, observable (directly or indirectly) and measurable, at any rate.

    Could you make a similar statement for “redness”?

  100. Arnold Trehub says:

    Vicente, I agree that our standard physical models are far in advance of our current models of conscious phenomena. It was as recently as 1977 that the *Journal of Theoretical Biology* published “Neuronal models for cognitive processes: Networks for learning, perception, and imagination”. And it wasn’t until 1991 that a more detailed description of neuronal mechanisms and systems for generating our pre-conscious and conscious experiences was published in *The Cognitive Brain*. I think that my forthcoming paper in the *Journal of Consciousness Studies* makes a solid case against those who claim that the *hard problem* precludes the possibility of there being a generally accepted science of consciousness. In my view, the retinoid model provides a good foundation for developing a standard scientific model of consciousness.

    Just as an addendum to this discussion, I have claimed in *EDGE* that all of modern science is a product of biology.

    ===========================================

    Modern Science is a Product of Biology

    Arnold Trehub

    The entire conceptual edifice of modern science is a product of biology. Even the most basic and profound ideas of science — think relativity, quantum theory, the theory of evolution by natural selection — are generated and necessarily limited by the particular capacities of our human biology. This implies that the content and scope of scientific knowledge is not open-ended.

  101. Kar Lee says:

    Arnold,
    You just contradicted yourself above by saying,
    “I think that my forthcoming paper in the *Journal of Consciousness Studies* makes a solid case against those who claim that the *hard problem* precludes the possibility of there being a generally accepted science of consciousness. In my view, the retinoid model provides a good foundation for developing a standard scientific model of consciousness.”

    But in Comment 95, you wrote “yes the GAP is our lack of omniscience — our inability to know the ultimate nature of the real world that we can only simulate. To explain THIS hard problem would require us to leap over the chasm of our own natural ignorance. Science is not omniscient and cannot make the leap.”

    And there is only one hard problem, not this or that hard problem. So how can you make a solid case against those who claim that the *hard problem* precludes the possibility of there being a generally accepted science of consciousness, in view of what you said in Comment 95?

  102. Vicente says:

    Arnold,

    Then, follow the chain… or tell me the way to escape reductionism:

    Biology is the result of physics… so physics is the result of physics.

    I always crash against the same wall…

    From a physical point of view it is very difficult to understand how particles manage to self-organise into a tree… or a fly… but OK. The thing is that they have self-organised into a space shuttle carrying other particles (people) into space, because they felt curious about it… :-O

    At the end of the day, this is what we are saying… if the only laws that apply are the laws of nature, there is no way out… the primordial soup of particles and energy (wherever it came from) evolved into the Wall St. stock market under the laws of physics… I don’t deny it; I’m just waiting to see the set of equations, or for Craig Venter to mix some carbon and nitrogen and pull a rabbit out of the hat.

    Don’t read any support for intelligent design ideas (which I despise) into my comment; it is simple ignorance.

  103. Arnold Trehub says:

    Kar: “But in Comment 95, you [Arnold] wrote ‘yes the GAP is our lack of omniscience — our inability to know the ultimate nature of the real world that we can only simulate. To explain THIS hard problem would require us to leap over the chasm of our own natural ignorance. Science is not omniscient and cannot make the leap.’ ”

    Kar: “And there is only one hard problem, no this or that hard problem. So, how can you make a solid case against those who claim that the *hard problem* precludes the possibility of there being a generally accepted science of consciousness, in view of what you said in Comment 95?”

    I don’t see a contradiction. Levine proposed that the explanatory gap (the hard problem) was distinctively problematic for any attempt to provide a scientific explanation of consciousness, whereas science faced no such problem in explaining other natural phenomena, e.g., explaining water as H2O. This argument was picked up by many philosophers (and others) to claim that there could be no standard model of consciousness in science as there is for other natural phenomena. I have argued that this is not the case, because NO explanation in science bridges the gap of our ignorance of the ultimate nature of the phenomenon in question. So why do the traditional physical sciences live comfortably with this explanatory gap while the science of consciousness is tortured by it? This is what has to be explained.

    I think that Scott understands the problem. And this is what my forthcoming book chapter addresses.

  104. Arnold Trehub says:

    Vicente: “Then, follow the chain… or tell me the way to escape reductionism:
    Biology is the result of physics… so physics is the result of physics.”

    No. It goes this way:

    1. Nature produces organic stuff out of inorganic stuff.
    2. From organic stuff, natural evolution produces biological creatures with brains having cognition and consciousness (retinoid systems?).
    3. Creatures with consciousness evolve into *human* creatures with more powerful cognitive/conscious brain systems.
    4. Human creatures *invent* the sciences of physics and biology.
    5. So all of our sciences (including physics and biology) are a product of biology in its evolved human conscious manifestation.

    As for the logical dependencies between the *sciences* of biology and physics, label each on a separate track of a Möbius strip. Follow a track and see where it leads you.

  105. scott bakker says:

    I agree with Arnold ontologically, and with Kar and Vicente epistemologically. Consciousness is another natural phenomenon that will be explained naturalistically. What I was trying to get Arnold to see is that consciousness is not *just* another natural phenomenon: it is the natural phenomenon that is attempting to *explain itself* naturalistically. And this is where the problem becomes an epistemological nightmare – or very, very hard. “So why do the traditional physical sciences live comfortably with this explanatory gap while the science of consciousness is tortured by it?” Arnold writes. “This is what has to be explained.”

    This is exactly my position, which might be called a “Dual Explanation Account of Consciousness.” Arnold’s Retinoid Theory could be entirely correct, but many, very many, would not be able to see this because they disagree on what it is that must be explained. My Blind Brain Theory explains the hardness of the hard problem in terms of the information we should expect the conscious systems of the brain to lack. The consciousness we think we cognize, I want to argue, is the product of a variety of ‘natural anosognosias.’ The reason Arnold and others seem to be barking up the wrong explanatory tree is simply that we don’t have the consciousness we think we do.

    Personally, I’m convinced that has to be the case to some degree. Let’s call the cognitive system involved in natural explanation the NE system. The NE system originally evolved to cognize external environments: this is what it does best. (We can think of scientific explanation as a ‘training up’ of this system, pressing it to its peak performance.) At some point, the human brain found it more and more reproductively efficacious to cognize *onboard information* as well: in addition to continually sampling and updating environmental information, it began doing the same with its own neural information.

    Now if this marks the genesis of human self-consciousness, our perennial confusion becomes the very thing we should expect. We have an NE system exquisitely adapted over hundreds of millions of years to cognize environmental information suddenly forced to cognize 1) the most complicated machinery we know of in the universe; 2) from a fixed (hardwired) ‘perspective'; and 3) with nary more than a million years of evolutionary tuning.

    Given this (and it seems pretty much airtight to me), we should expect that the NE system would have enormous difficulty cognizing consciously available information. (1) suggests that the information gleaned will be *drastically* fractional. (2) suggests that the information will be thoroughly parochial, but also, entirely ‘sufficient,’ given the NE’s inability to ‘take another perspective’ relative to the gut brain the way it can relative to its external environments. (3) suggests the information provided will be haphazard and distorted, the product of kluge mutations.

    In other words, (1) implies ‘depletion,’ (2) implies ‘truncation’ (since we can’t access the causal provenance of what we access), and (3) implies a motley of distortions. This is what your NE has to work with.

    This was the (somewhat Dennettian) point I kept hammering over and over above: our attempts to cognize experience utilize the same machinery we use to cognize our environments – evolution is too fond of ‘twofers’ to assume otherwise, too cheap. Given this, the “hard problem” not only begins to seem inevitable, but something that probably every other biologically conscious species in the universe suffers. The million dollar question is this: if information privation generates confusion and illusion regarding phenomena within consciousness, why should it not generate confusion and illusion when regarding consciousness itself? Do we have some ‘magic information quality control’ filter?

    Of course not. We’re stranded: both with the patchy, parochial neural information provided, and with our ancient, environmentally oriented cognitive systems. The result is what we call consciousness. Before we can begin explaining consciousness, in other words, we have to understand the severity of our informatic straits. In this sense, I am wholly with the monistic naturalists like Arnold: given the success of the natural sciences, and given the informatic constraints outlined above, we have no reason to think that our gut antipathy to naturalistic explanations like his is anything more than an artifact of our ‘brain blindness.’

    (Note: I deliberately simplified this picture to better convey the gestalt of BBT. In addition to ancient NE systems we have less ancient, but still powerful ‘social explanation’ (SE) systems as well (that play a large role, I think, in the shape of the distortions). But the principle still holds.)

  106. Arnold Trehub says:

    Scott, on the page just before my introductory chapter in *The Cognitive Brain*, this quotation, written more than a hundred years ago, is printed:

    “Mind, n. A mysterious form of matter secreted by the brain. Its chief activity consists in the endeavor to ascertain its own nature, the futility of the attempt being due to the fact that it has nothing but itself to know itself with.” — Ambrose Bierce, *The Devil’s Dictionary*

    Though written by Bierce long ago, it captures our epistemological problem re consciousness.

  107. scott bakker says:

    That particular entry in The Devil’s Dictionary is *one* of my favourites. I’ve been coming up with my own satirical definitions for my blog entries with an eye to publishing my own version some day.

    Speaking of which, I took my above response to you and worked it into a post on TPB: http://rsbakker.wordpress.com/2012/07/04/lamps-instead-of-ladies-the-hard-problem-explained/

    What do you make of the way I frame the problem, Arnold?

  108. Kar Lee says:

    Arnold,
    You ask “So why do the traditional physical sciences live comfortably with this explanatory gap while the science of consciousness is tortured by it?”

    It is all because of subjectivity. In the physical sciences, subjectivity is to be eliminated at all cost – in experimental results, in descriptions of phenomena, in theories, in predictions. Subjectivity simply does not exist in the physical sciences. The underlying governing laws are assumed to exist objectively, independent of you and me. Objectivity is assumed. With objectivity, the mechanism of the working of the universe is to be explained with the fewest possible building blocks, and these are the unchanging foundation of the physical sciences.

    If you call the inability of physics to explain its own foundation a gap, it is only because you have to start somewhere. How else do you explain something? Explain in terms of what? In terms of the things that are most self-evident, most fundamental. If you call those most self-evident foundations a gap, it is definitely not at the same level as the GAP we have in consciousness.

    But don’t get me wrong. I am not saying that what was once thought of as foundational will not be better described as derivative when our knowledge expands. Newton’s second law was fundamental because it is the way nature works, but you can derive it from the principle of least action when you appropriately define what “action” is (let’s forget about quantum mechanics for the moment). Then why does nature always follow the path of least action? Because it is the way it is. If you call it a gap, I call it a discovery of one important aspect of nature: natural states tend to be states of some kind of optimization. So I don’t think this kind of discovery should be labeled a gap at all.

    But in consciousness study, the fundamental question is why, when a group of materials comes together in a certain fashion, the result is “subjectivity”. If a theory of consciousness starts by proclaiming that “subjectivity” exists when a group of materials comes together in a certain fashion, I think the theory is already being set up to explain something other than consciousness. The original question remains unanswered, and the gap remains.

  109. scott bakker says:

    I don’t think Arnold disagrees with the substance of what you’re saying so much as the emphasis, Kar. He understands the gap very well, as he should, having hashed out the issue with Levine himself! He understands that the explanatory difficulties posed by consciousness in particular need to be explained (which is why I keep plying him with BBT), but he thinks that when all the incremental empirical work is said and done, the ‘consciousness explanatory gap’ will prove to be as inconsequential as you say the ‘general explanatory gap’ is in the sciences.

    Is that a fair statement, Arnold?

  110. Vicente says:

    Arnold (#104)

    I don’t agree, and regarding the gaps, on one hand you discard them and on the other you produce them artificially.

    In your classification of evolutionary stages – physics, biology, psychology… – you are just painting segments of one and the same rope in different colors, but the rope remains one: physics.

    Unless you identify a fundamental, categorical, qualitative element that differentiates one stage from the next one, e.g. there is some additional element in biology that cannot be explained by physics, then you cannot claim that each stage has a reality of its own, except in our human description of the world.

    If it is possible to trace back the history of a laptop (for instance) from the factory to the creation of the solar system, then physics is the result of physics; if not, you will have to come up with a completely different approach.

    We are all aware that the trouble dwells in the interludes (stage interfaces): nothing->something (big bang?); matter -> life; life -> consciousness.

    So, unless you prove that there is a radical change at those interfaces, your staggered approach is artificial.

    The whole discussion can be summarised in:

    Is it possible to make a physical model, a set of equations, that fully describes the behaviour of each stage, as well as the evolution from one stage to the next? Ultimately, can the brain’s behaviour be fully described by physics? Now we hit the free will problem, and so on; everything is part of the same conundrum.

    The key is to understand whether the “chain of events” you present is a continuum or whether there are real gaps, discontinuities to be explained, that grant each stage a category of its own, not completely based on the preceding one (precursor?). If it is a continuum, then physics is the result of physics.

    As Scott requires, of course, all this could be completely naturalised.

  111. Arnold Trehub says:

    Scott, yes, that (#109) is a fair statement about my view of explanatory gaps. But when Kar (#108) writes “Subjectivity simply does not exist in physical sciences”, I strongly disagree.

  112. Arnold Trehub says:

    Vicente: “Unless you identify a fundamental, categorical, qualitative element that differentiates one stage from the next one, e.g. there is some additional element in biology that cannot be explained by physics, then you cannot claim that each stage has a reality of its own, except in our human description of the world.”

    You are making my point. We are not omniscient so we simply cannot identify the fundamental/ultimate reality of anything that we experience. *Human descriptions of the world* are all that our science has to work with.
    All that we can do is make guesses (theoretical models) about how the world works and then make empirical tests of the logical implications of our guesses. This is what is happening right now in the search for the putative Higgs boson.

  113. Arnold Trehub says:

    Scott, I think I agree with the way you frame the hard problem of consciousness in TPB. But I’m left sort of dangling, wondering what the words “information” and “consciousness” mean to you.

    This is my take on the concept of information.

    My definition of information:

    *Information is any property of any object, event, or situation that can be detected, classified, measured, or described in any way*

    1. The existence of information implies the existence of a complex physical system consisting of (a) a source with some kind of structured content (S), (b) a mechanism that systematically encodes the structure of S, (c) a channel that selectively directs the encoding of S, (d) a mechanism that selectively receives and decodes the encoding of S.

    2. A distinction should be drawn between *latent* information and what might be called *kinetic* information. All structured physical objects contain latent information. This is as true for undetected distant galaxies as it is for the magnetic pattern on a hard disk or the ink marks on the page of a book. Without an effective encoder, channel, and decoder, latent information never becomes kinetic information. Kinetic information is important because it enables systematic responses with respect to the source (S) or to what S signifies. None of this implies consciousness.

    3. A distinction should be drawn between kinetic information and *manifest* information. Manifest information is what is contained in our phenomenal experience. It is conceivable that some state-of-the-art photo-to-digital translation system could output equivalent kinetic information on reading English and Russian versions of War and Peace, but the Russian printed output of the book provides me no manifest information about the story, while an English printed version of the book allows me to experience the story. The explanatory gap is in the causal connection between kinetic information and manifest information.
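    [Editor's note: the latent/kinetic distinction in points 1 and 2 can be illustrated with a minimal sketch. This is an editorial analogy, not Arnold's own formalism: text encodings stand in for the encoder/decoder pair, and the sketch deliberately does not attempt to model *manifest* information, which is the point at issue.]

    ```python
    # Latent vs. kinetic information, by analogy (editorial sketch only):
    # the same structured marks stay "latent" until a decoder that matches
    # their encoding makes them usable by the receiving system.

    latent = "Война и мир".encode("utf-8")  # structured marks: latent information

    # A matched decoder completes the encoder/channel/decoder chain,
    # making the latent structure kinetic for this system.
    kinetic = latent.decode("utf-8")
    print(kinetic)

    # A mismatched decoder fails: without an effective decoder, latent
    # information never becomes kinetic -- like the Russian edition read
    # by a system patterned only for English.
    try:
        latent.decode("ascii")
    except UnicodeDecodeError:
        print("no effective decoder: the information stays latent for this system")
    ```

    The analogy only goes as far as points 1 and 2: nothing in the sketch says anything about how kinetic patterns become a phenomenally experienced story.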

    My working definition of consciousness:

    *Consciousness is a transparent brain representation of the world from a privileged egocentric perspective*

    Here *transparency*, I think, corresponds to what you call brain blindness.

    Are these definitions of information and consciousness consistent with the same terms in your TPB post?

  114. Kar Lee says:

    Hi Scott,
    Is your website http://www.threepoundbrain.com?
    I clicked on the link associated with your name in the comment heading and got nowhere.

  115. Kar Lee says:

    Arnold [111],
    “But when Kar (#108) writes “Subjectivity simply does not exist in physical sciences”, I strongly disagree.”

    Any example of subjectivity in physical sciences?
    But first of all, let me admit that I have been using the word “subjectivity” in different contexts. In the physical sciences, I used it to mean that the conceptual framework of how the world works is independent of the people who are conceptualizing the framework. In consciousness, I used subjectivity to mean the existence of the first-person view.

    But here, let’s talk about subjectivity in the first context: that the framework of how the world works is independent of who is arguing with whom. Arnold, can you give me an example of subjectivity that is built into the conceptual framework of how the world works in the physical sciences? Or is that what you meant when you said subjectivity?

    Mind you that quantum mechanics does not require subjectivity (it is not explicitly in there) but only the act of measurement, which is an objective act.

    Also, our inability to know the “ultimate theory” does not imply subjectivity in the physical sciences either. What is subject to our inability is our state of knowledge, not the nature of the world, which the physical sciences try to get at.

  116. Kar Lee says:

    Arnold,
    But let’s go back to the second context of the word subjectivity, the existence of the first-person view, which is what we should be talking about anyway. Can you think of an example in the physical sciences in which this subjectivity is assumed/demanded?

    My point is, the underlying principles governing how the world/universe works are exactly the same whether or not this is a p-zombie world. Subjectivity is not there in the physical sciences.

  117. Arnold Trehub says:

    Kar: “Any example of subjectivity in physical sciences? …. can you think of an example in physical sciences in which this Subjectivity is assumed/demanded?”

    The entire corpus of the physical sciences consists of artifacts which are external expressions of neuronal models subjectively constructed within the brains of scientists. And these same artifacts that constitute our public science are meaningful only in the subjective/phenomenal world of the scientists who contemplate them. So the first-person perspective (subjectivity) is demanded if what we know as science is to exist.

  118. Kar Lee says:

    Arnold,
    It sounds like idealism to me. But that’s fine. We can agree to disagree. But I am not really at odds with idealism. When I put on my idealist’s hat, I can really come to defend your position. But right now, I have my materialist’s hat on. So, let’s disagree for now.

  119. Arnold Trehub says:

    Kar, I define subjectivity in terms of the activity of a particular kind of brain mechanism with a clearly specified neuronal structure and dynamics (the retinoid model). So in my theoretical model subjectivity is not in the philosophical domain of idealism.

  120. scott bakker says:

    Have you had a chance to read Luciano Floridi’s Philosophy of Information, Arnold?

    Generally, I take information to mean any set of systematic differences capable of making systematic differences – which is to say, in a more general sense than you. Information is structure, pattern. Information processing refers to dynamic interactions between structures/patterns. Really, it’s simply an ‘unexplained explainer,’ but one possessing the *decisive* virtue of admitting both semantic and nonsemantic interpretations.

    I like your tripartite distinction, but I would actually resist the distinction you make between kinetic and manifest information, for two reasons: 1) because I think doing so lands you in the lap of ‘Vicente’s trap,’ if I can call it such! All kinetic information has to be a form of manifest information, since it is arguably only made available to you as it manifests itself; and 2) even if you overlook this problem, it risks locking the concept on either side of the dualism I think information can explain away.

    On my account, information never becomes semantic (what you call ‘manifest’), it only *seems* to become so. The ‘explanatory gap’ is simply an artifact of this seeming.

    The virtue of this approach, I think, can be seen when you apply it to your War and Peace example. The presumption, in your example, is that the English and Russian translations contain the SAME information because they are simply two different linguistic instantiations of the same ‘meaning.’ Since the meaning is the same, but the instantiations are different, it seems that we are dealing with two different kinds or levels of information. On my (nominalist) account, all information is concrete, and ‘sameness’ is simply what happens when information is stripped from similarity. Every reading of War and Peace is *unique,* something that happens in particular brains in particular times and places. Russian information is simply patterned the wrong way for your particular system, though when introduced to different systems, it allows for ‘translation,’ repatterning in terms amenable to your particular capabilities. There is no ethereal, supra-natural ‘information’ that the two versions instantiate, only similar systems producing roughly similar outputs from different linguistic patterns. The apparent ‘gap’ is simply a function of information loss. This is what allows information to explain (away, as in the case of BBT) the semantic as the result of the informatic encapsulation of the brain’s conscious subsystems.

  121. Kar Lee says:

    Arnold,
    Let me give you an illustration of why I have problem with the way you define subjectivity.

    It is like defining a sunny day as a day when your friend goes to the beach. Your friend has been going to the beach whenever the sun comes out for as long as you have known him. (He keeps a personal collection in the beach house he owns, and he needs to appreciate the collection under the sun, so he goes whenever the sun is shining.)

    Instead of choosing to define a sunny day as a day when the sun is not blocked by cloud, and explaining why your friend goes to the beach whenever it is sunny, you choose to define a sunny day as a day when your friend goes to the beach. Obviously this definition, though in practice indistinguishable from what people think of as a sunny day (because when you see him going to the beach, it has to be a sunny day), is different from what people usually mean by a sunny day. There is a missing link between your definition and the real phenomenon. Before the fact that “when the sun is shining, your friend goes to the beach” is established and proven, your definition just defines the problem away, glossing over the question of why your friend goes to the beach when the sun is shining. The same goes for subjectivity here. Instead of trying to explain what makes the retinoid system have a subjective view, you define it as subjectivity.

  122. scott bakker says:

    Kar @ Arnold: Your analogy is fine as far as it goes, but I think Arnold could just as easily reverse the terms and make the same intuitions your analogy appeals to work *for* him. He could say that he’s defining a sunny day as a day that’s not cloudy, and you’re defining a sunny day as a day when friends go to the beach. Just because everyone else defines a sunny day the way you do simply means that everyone else has an inadequate definition. Thus the need for an explanation as to why so many people are wrong. Say you define the earth as the planet around which the heavens revolve, and Arnold defines it as the third planet that revolves around the sun. If everyone agrees with you, then Arnold needs to augment his definition with some account of why everyone is deceived.

    In point of fact, your disagreement with Arnold exemplifies the problem I’ve been harping on all along: the problem of finding a consensus-commanding definition of what consciousness is. There really is no knockdown way out of the impasse you and Arnold find yourselves in. The reason I side with Arnold is simply Occam’s razor. All things being equal, I think it is far more plausible that the tradition of scientific explanation that has served us so well regarding all other natural phenomena will suffice to explain consciousness as well, and that our difficulty accepting this staggeringly successful explanatory approach is far more parsimoniously explained as the result of our cognitive incapacities than as the consequence of the occult properties of consciousness.

    The old vitalism debates provide a rough analogue of the problem.

  123. Vicente says:

    Arnold,

    No, I am not making your point. I claim there is such a difference for consciousness, while you say that the biophysical processes in the brain constitute consciousness – therefore, physics. And you don’t explicitly clarify what makes biology special with respect to physics.

    Then,

    “The entire corpus of the physical sciences consists of artifacts which are external expressions of neuronal models subjectively constructed within the brains of scientists.”

    I see it the other way round:

    The entire corpus of the physical sciences consists of expressions, which are internal neuronal models subjectively constructed within the brains of scientists, of external elements.

  124. Vicente says:

    Kar, Scott, the concept of subjectivity is not to be agreed upon by definition, and that is part of its definition.

    To me, subjectivity is the main property that defines those beings that can exist by their own devices, so that they are observer-independent.

    Subjectivity is the property that entitles an observer to be such.

  125. Arnold Trehub says:

    Vicente: “Subjectivity is the property that entitles an observer to be such.”

    Entitled?!! By whom?!!

    I understand that this is your belief. But, in the pragmatic enterprise of science, when conflicting beliefs are expressed as theories, we choose the theory/belief that is judged to be best able to predict/post-dict relevant empirical findings.

    The concept of *observation* in nature is not simple. In “Where Am I? Redux” I suggest that the common notion of the self as the observer has been a stubborn obstacle in our path to an understanding of subjectivity/consciousness. A detailed neuronal model of the cognitive brain in which the core self (I!) is not an observer, but rather the spatio-temporal perspectival *origin* of our phenomenal world, leads to successful predictions of new empirical findings as well as post-dictions (explanations) of previously puzzling phenomena. In this model, observation is performed by the numerous pre-conscious synaptic matrices in the brain’s sensory modalities. So observations are made *before* they become a part of our subjective experience.

    What competing biophysical model of subjectivity can successfully predict the SMTT findings or the MOON ILLUSION?

  126. Kar Lee says:

    Arnold,
    “What competing biophysical model of subjectivity can successfully predict the SMTT findings or the MOON ILLUSION?”

    Interesting question. Are you suggesting that no algorithm in a digital computer can be written to exhibit the same kind of response a person does in the SMTT experiment?

  127. Arnold Trehub says:

    Kar: “Are you suggesting that no algorithm in a digital computer can be written to exhibit the same kind of response a person does in the SMTT experiment?”

    If you are suggesting that an algorithm/program run on a digital computer can be taken as a *biophysical model* of subjectivity, then we have very different notions about the meaning of a *biophysical model*.

  128. Kar Lee says:

    Arnold,
    Why restrict to “biophysical” anyway? Is a virus biophysical? (Not a computer virus, but a regular virus.)

  129. Arnold Trehub says:

    Scott: “Have you had a chance to read Luciano Floridi’s Philosophy of Information, Arnold?”

    No I haven’t, but maybe I should.

    Your take on information is interesting, Scott. But when you say “The presumption, in your example, is that the English and Russian translations [of War and Peace] contain the SAME information because they are simply two different linguistic instantiations of the same ‘meaning’”, you don’t take proper account of my distinctions between *latent information*, *kinetic information*, and *manifest information*. The book War and Peace is an artifact full of all kinds of latent information, including strings of graphic marks (no story yet) that will produce DIFFERENT patterns of kinetic information in English and Russian machine translations and in the pre-conscious synaptic matrices of a human reader of these translations. In order for the pre-conscious kinetic information to be experienced as a STORY, it must undergo a further transformation by which the pre-conscious (kinetic) patterns in the synaptic matrices are organized into relevant images and events in the reader’s phenomenal world (retinoid space). It is at this stage that the reader has the *manifest information* of a meaningful story. But if the machine translation is in Russian, my brain is not able to use the kinetic patterns in my synaptic matrices to compose a meaningful story in my retinoid space; i.e., there is no manifest story information for me. So it is only if the machine translation is in English that War and Peace becomes manifest information for me *as a meaningful story*.

    Photo-electric door openers and planaria operate solely on the basis of kinetic information. No manifest information. No stories for them.

  130. Arnold Trehub says:

    Kar: “Why restrict to ‘biophysical’ anyway? Is virus biophysical? (Not computer virus, but regular virus.)”

    It is a decision based on the very strong empirical evidence that only a subclass of biophysical entities — certain animals — exhibit behavior that we think reflects consciousness. We have no comparable evidence that non-biological physical entities are conscious.

    I don’t know about viruses. I think there is still a dispute about their status as living organisms.

  131. scott bakker says:

    Floridi’s own semantic conception of information strikes me as more of a reductio than anything, but the arguments he makes for a general “informatic turn” in things like the philosophy of mind I find very compelling. If nothing else, the man can write.

    I think I got you right the first time: information and information processing (or in your terminology, latent and kinetic information) are *all* there is in my account. What we call consciousness is simply an organizational artifact of these two, much the same as life more generally. Organize information and information processing a certain way and you get life. Organize it a different way and you get consciousness.

    What you call ‘manifest information,’ for me, is simply more information processing, only *encapsulated* within the organizational features that generate consciousness. Encapsulation implies limits, information horizons, and these are what make things like meaning and qualia so difficult for our cognitive systems, which are primarily tuned to navigate external environments, to plug into our understanding of the natural world.

    Or in other words, information horizons are not only what makes it seem necessary to posit ‘manifest information,’ but also why we find ourselves flummoxed by consciousness – trapped in the Hard Problem.

  132. Arnold Trehub says:

    Scott: “Organize information and information processing a certain way and you get life. Organize it a different way and you get consciousness.”

    Yes. We are in agreement on this crucial point. I proposed *manifest information* to distinguish the information organized subjectively within retinoid space from the information organized simply as *kinetic information* in synaptic matrices and in many other kinds of living organisms and in lifeless artifacts. I found it a useful concept. Others might not.

  133. Kar Lee says:

    Arnold,
    “It is a decision based on the very strong empirical evidence that only a subclass of biophysical entities — certain animals — exhibit behavior that we think reflects consciousness.”

    In view of the “problem of other minds”, your statement is not scientific.

    But to bring back the illustration of my objection to your definition, let me go back to your original definition of consciousness: “Consciousness is a transparent brain representation of the world from a privileged egocentric perspective” for a moment.

    Here you define it as a brain representation. And you limit it to only brain representation.

    Let me alter it a little bit, and if you object to my alteration, then it will be the same objection I have to your original definition.

    Here it goes:
    “Consciousness is a transparent *male* brain representation of the world from a privileged egocentric perspective”

    Note that I have excluded female brains, just to show the logical error.

    If you object to this definition because it is too restrictive, I will object to your definition because it is too restrictive.

    See my point?

  134. Arnold Trehub says:

    Kar, all definitions are restrictive. But some possible restrictions are generally thought to be non-relevant while other restrictions are contextually important.

    We have no evidence that the sex of a creature determines whether or not the creature is conscious.

  135. Kar Lee says:

    Arnold,
    Similarly, we have no evidence that brain is the only substrate for an egocentric view.

  136. scott bakker says:

    The only real problem I have is the possibility that it’s a conceptual distinction that reinforces prevailing misconceptions. If you just talk information and information processing you immediately see the potential for information horizons to cause profound problems. ‘Manifest information,’ on the other hand, suggests the delivery of some kind of package, something produced *for* conscious consumption. When you look at consciousness as something evolution stumbled toward, and never had time to properly ‘tune,’ then you can see consciousness as a tangle of ad hoc ‘wiretaps,’ mutation driven (which is to say, random) access to fragments of information that may or may not be amenable to existing cognitive systems. This allows you to explain the kinds of things that Schwitzgebel points out, for instance, not to mention the myriad conceptual perplexities that bedevil things like quality and intentionality understood generally.

    What you call ‘manifest information,’ I call opportunistically accessed information and information processing. It’s the opportunism (the very thing we should expect, for several reasons) that explains the explanatory gap.

    So the idea would be that the retinoid system is something that the subsystems behind the evolution of human consciousness (or attentional awareness) have only partially and imperfectly accessed (explaining things like Anton-Babinski or Blindsight perhaps). The access will be better ‘tuned’ (but not perfect – like blindsight once again) depending on the relative importance of the modality involved – which would be why vision figures so importantly in consciousness, while other forms of information access, like affects and so forth, seem to be so horribly ‘low res.’

  137. Arnold Trehub says:

    Kar: “Similarly, we have no evidence that brain is the only substrate for an egocentric view.”

    True, but what is the evidence that something other than a brain contains an egocentric perspectival representation of its surround?

  138. Kar Lee says:

    Arnold,
    I can respond to your question in two ways:
    1) Roomba.
    2) But what is the evidence that something other than a male brain contains an egocentric perspectival representation of its surround?

    What do you count as evidence?

  139. Arnold Trehub says:

    Kar,

    1. Roomba does not contain an egocentric perspectival representation of the space in which it exists. Roomba operates on the basis of sensors which detect obstacles, and a mechanism that controls its motion in accordance with a preset schedule of travel and the pattern of output from its motion sensors. No subjectivity.

    2. Women have brains that contain an egocentric perspectival representation of their surround because their brains contain a *retinoid system*, as evidenced, for example, by the following:

    — Women experience a complete object in lateral motion when tested in the SMTT apparatus.

    — Women experience the moon illusion.

    — Women experience 3D objects in the Julesz random dot test.

    — Women experience an increase in the size of an after-image as a positive function of fixation distance.

    Kar: “What do you count as evidence?”

    Evidence is a normative concept that depends on inter-subjective agreement in particular contexts. In particle physics, for example, evidence for the existence of the theoretically posited Higgs boson is provided by the presence of a particular kind of track in the Cern collider apparatus. Whether or not the Higgs boson will be accepted as a part of the standard model will depend on the statistical strength (5 sigma?) of the evidence. Similarly, in the science of consciousness, appropriate empirical tests must be applied to build a standard theoretical model of what constitutes consciousness.

  140. Kar Lee says:

    Arnold [139],
    Let’s see where this debate leads us…

    1. Arnold: “Roomba does not contain an egocentric perspectival representation of the space in which it exists. Roomba operates on the basis of sensors which detect obstacles, and a mechanism that controls its motion in accordance with a preset schedule of travel and the pattern of output from its motion detectors. No subjectivity.”
    My response: “Women don’t have a *male* egocentric perspectival representation of the space in which they exist. Women operate on the basis of a *female* brain, thus violating the augmented definition (yes, ridiculous, but it is the definition). No subjectivity.”

    2. Arnold: “Women have brains that contain an egocentric perspectival representation of their surround because their brains contain a *retinoid system*, as evidenced, for example,…..”
    My response: “Roomba can navigate its immediate environment and therefore it contains an egocentric perspectival representation of its surround…..”

    On “What do you count as evidence?”:

    Arnold: Evidence is a normative concept that depends on inter-subjective agreement in particular contexts….

    My response: Do you take the ability to navigate one’s environment as evidence of having an egocentric perspectival representation of its surround, or do you require someone to be able to speak English? What do you count as evidence?

  142. Kar Lee says:

    Interesting, Arnold. I know John Searle is big on biological naturalism, and I like him very much. But is [141] a response to my comment in [140]?

  143. Arnold Trehub says:

    You might take #141 as a response to your #140, Kar.

    I confess that I’m unable to grasp the point of your comments 1 and 2. You are free to propose any definitions of your own, test their implications, and see where they lead you.

    On your direct questions to me:

    a. The ability to navigate in one’s environment is insufficient evidence of an egocentric perspectival representation of one’s surround.
    b. I obviously do not require one to speak English.
    c. There are numerous kinds of behaviors that I would count as evidence for subjectivity. I refer you again to my response in #139.

  144. Kar Lee says:

    Arnold,
    This is just for the sake of debate…

    Point # 1 is to show that if you can define subjectivity away from Roomba with your definition, I can define subjectivity away from half of the human population with the augmented definition. Same approach. This shows that defining a problem away, especially the Hard Problem, is problematic.

    Point # 2 does the same thing, except that this time it is to define something in.

    When I question what you count as evidence, I have in mind the interpretation of the evidence. For Galileo, the planets’ occasional wandering backward in the sky was evidence that the sun was at the center. But for the Vatican at that time, it just proved that Galileo was crazy. Same evidence, different interpretations. You interpret people’s having the SMTT experience as their having subjectivity, but I can program a computer to report the same thing, and that won’t count. Same evidence, different interpretations.

    So, when I question what you count as evidence, I am looking for some principles, not a list of what does and does not count.

    But as before, I think we are headed into a dead end. Why don’t we just agree to disagree?

  145. Arnold Trehub says:

    I agree that we disagree.

  146. Vicente says:

    You lazy guys…

    Kar, the difference is that evidence, by definition, is not “subject” to interpretation. That is why they had to force Galileo to publicly recant: they had no argument against him. The evidence was on his side, with no other possible interpretation.

    Now, in our case of study, the evidence is that there is no evidence… because we can observe the planets, but we can’t observe others’ conscious realities…

    This is the real disagreement: Arnold takes NCCs and testimonies as evidence, and others don’t. I accept they are partially important data to take into consideration, but they definitely don’t stand on an equal footing with planets’ orbits, as far as science is concerned.

  147. Kar Lee says:

    Vicente, lazy…guilty as charged…the summer is so nice around here… ;)

    On the interpretation part, the epicycle models “explained” the planetary movements quite well, and those movements could be interpreted as evidence for the existence of epicycles, with the earth at the center of the universe. So in that sense, Galileo did not have a monopoly on the evidence.

    I think it is always the interpretation of evidence that matters, not the evidence itself. With “interpretation”, anything can be called “evidence”.

    The stock market provides another example. Half of the people think it is going up, the other half think it is going down (pressure to sell and pressure to buy have to be equal; otherwise the price would be in free fall or would shoot to the moon…), and both sides have their “evidence”, or rather their interpretation of the same public information that is out there.

    And often, in science, the “correct” interpretation is the one that is simplest, saving people the most effort to grasp, so that even lazy people like it. But then, simplicity is in the eye of the beholder. The heliocentric interpretation fits that bill over the geocentric one. But for the Vatican at that time, the geocentric interpretation was simpler, because God’s position could then be preserved, avoiding the complications of a chaotic, godless world.

  148. Arnold Trehub says:

    Vicente: “This is the real disagreement, that Arnold takes NCCs and testimonies as evidence, and others don’t.”

    Don’t you see that our scientific agreement about a model of planetary motion is based on the combined TESTIMONIES of lots of people using the proper measuring procedures? How is this essentially different from agreeing on a model of conscious content (the retinoid model) based on the combined TESTIMONIES of lots of people using the SMTT measuring procedure?

  149. Vicente says:

    Kar, ha ha, no way, it was only the heliocentric model that could explain the observations. You have to get down to the quantum level if you want me to swallow the “interpretations”.

    Ahhh… summer… that is the point: what current consciousness theory can give any explanation of George Gershwin’s feelings when he composed Summertime, or of mine when I listen to it?

    Summertime, and the livin’ is easy… Fish are jumpin’ And the cotton is high. Your daddy’s rich. And your mamma’s good lookin’ So hush little baby. Don’t you cry …

    Arnold, might you be twisting language a bit too much when you refer to scientific papers as testimonies?

    I can give you the orbital parameters of the planets, within a certain error margin, and you can go home and check them; no need for trust.

    Could you give me the geometrical parameters of the ovoidal “thing” perceived by the subjects in the SMTT experiment? I’ll make it easier: let’s focus on the brain NCC. Could you identify specifically the brain volume (or scattered areas) in which the ovoid is? Considering that the physical substrate is neurons (not homogeneously and regularly distributed, maybe quite scattered), how come the phenomenal figure is continuous? What works in the time domain for increasing frequencies does not work in the space domain for geometries, and so on. All these questions apply to vision in general.

    In any case, the experiment is a great piece of scientific work that sheds a lot of light on the brain machinery, but not so much on the nature of the phenomenal experience related (correlated?) to it.
