Ephaptic consciousness?

The way the brain works is more complex than we thought. That’s a conclusion that several pieces of research in recent years have suggested for one reason or another: but some particularly interesting findings are reported in a paper in Nature Neuroscience (Anastassiou, Perin, Markram, and Koch). The general assumption has been that neurons are effectively isolated, interacting only at synapses: it was known that they could be influenced by each other’s electric fields, but given the typically tiny fields involved, it was generally thought that these effects could be disregarded. The only known exceptions of any significance were certain cases where unusually large fields could induce ‘ephaptic coupling’, interfering with the normal working of neurons and causing problems.

Given the microscopic sizes involved and the weakness of the fields, measuring the actual influence of ephaptic effects is difficult, but for the series of experiments reported here a method was devised using up to twelve electrodes for a single neuron. It was found that extracellular field fluctuations did produce effects within the neuron, at the minuscule level expected. However, although the effects were too small to produce any immediate additional action potentials, induced fluctuations in one neuron did influence neighbouring cells, producing a synchronisation of spike timing. In short, it turns out that neurons can influence each other and synchronise themselves through a mechanism completely independent of synapses.
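For intuition about how such tiny influences can nonetheless align spike times, here is a minimal sketch (a toy of my own, not the authors’ model): two leaky integrate-and-fire neurons receive the same steady drive but start out of phase, and each spike gives the other cell a small depolarising nudge standing in for the field a spike imposes on its neighbours. The nudge is far too weak to fire a resting cell, yet the spike-time gap shrinks until the pair is locked. All constants are illustrative.

```python
import numpy as np

# Minimal sketch (a toy, not the paper's model): two leaky
# integrate-and-fire neurons with identical drive, started out of
# phase. Each spike gives the *other* cell a tiny depolarising kick,
# standing in for the field a spike imposes on a neighbour. The kick
# (1% of threshold) cannot fire a resting cell, but the spike-time
# gap shrinks until the pair fires in lockstep.

dt, tau = 0.1, 20.0            # time step and membrane constant (ms)
v_th, v_reset = 1.0, 0.0       # threshold and reset (arbitrary units)
drive = 0.06                   # v_inf = drive * tau = 1.2 > v_th
kick = 0.01                    # the "ephaptic" nudge

v = np.array([0.0, 0.5])       # start well out of phase
last = np.full(2, np.nan)      # most recent spike time of each cell

for step in range(40000):                      # 4 s of simulated time
    t = step * dt
    v += (-v / tau + drive) * dt               # leak plus common drive
    fired = v >= v_th
    if fired.any():
        last[fired] = t
        v[fired] = v_reset
        v[~fired] += kick * fired.sum()        # neighbour feels the spike
    if step % 5000 == 0 and not np.isnan(last).any():
        print(f"t = {t:7.1f} ms   spike-time gap ~ {abs(last[0] - last[1]):.2f} ms")
```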

So what? Well, first this may suggest that we have been missing an important part of the way the brain functions. That has obvious implications for brain simulations, and curiously enough, one of the names on the paper (he helped with the writing) is that of Henry Markram, leader of the most ambitious brain simulation project of all, Blue Brain. Things seem to have gone quiet on that project since completion of ‘phase one’; I suppose it is awaiting either more funding or the advances in technology which Markram foresaw as the route to a total brain simulation. In the meantime it seems the new research shows that, like all simulations to date, Blue Brain was built on an incomplete picture, and as it stood was doomed to ultimate failure.

I suppose, in the second place, there may be implications for connectionism. I don’t think neural networks are meant to be precise brain simulations, but the suggestion that a key mechanism has been missing from our understanding of the brain might at least suggest that a new line of research, building an equivalent mechanism into connectionist systems, could yield interesting results.
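Purely as an illustration of what ‘building in an equivalent mechanism’ might look like, the toy layer below adds a distance-weighted average of neighbouring units’ activations on top of the ordinary learned weights: a crude stand-in for a shared local field. The sizes, kernel and gain are my assumptions, not anything from the paper.

```python
import numpy as np

# Toy sketch of one way to graft a field-like term onto a
# connectionist layer: each unit is nudged by a distance-weighted
# average of its neighbours' activations, independently of the
# learned synaptic weights. All sizes and constants are illustrative.

rng = np.random.default_rng(0)
n_in, n_units = 8, 16
W = rng.normal(scale=0.3, size=(n_units, n_in))   # ordinary weights

# Gaussian "field" kernel over unit positions (units laid out on a line)
pos = np.arange(n_units)
K = np.exp(-((pos[:, None] - pos[None, :]) ** 2) / (2 * 2.0 ** 2))
np.fill_diagonal(K, 0.0)              # no self-coupling
K /= K.sum(axis=1, keepdims=True)     # normalise neighbour influence

def forward(x, field_gain=0.1):
    a = np.tanh(W @ x)                            # ordinary synaptic pass
    return np.tanh(W @ x + field_gain * (K @ a))  # plus the field nudge

x = rng.normal(size=n_in)
print(forward(x, field_gain=0.0)[:4])   # purely synaptic
print(forward(x, field_gain=0.1)[:4])   # with the toy field term
```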

But third and most remarkable, this must give a big boost to those who have suggested that consciousness resides in the brain’s electrical field: Sue Pockett, for one, but above all Johnjoe McFadden, who back in 2002 declared that the effects of the brain’s endogenous electromagnetic fields deserved more attention. Citing earlier studies which had shown modulation of neuron firing by very weak fields, he concluded:

By whatever mechanism, it is clear that very weak em field fluctuations are capable of modulating neurone-firing patterns. These exogenous fields are weaker than the perturbations in the brain’s endogenous em field that are induced during normal neuronal activity. The conclusion is inescapable: the brain’s endogenous em field must influence neuronal information processing in the brain.

We may still hold back from agreeing that consciousness is to be identified with an electromagnetic field, but he certainly seems to have been ahead of the game on this.

147 thoughts on “Ephaptic consciousness?”

  1. It seems highly likely that the brain’s endogenous em field will bias and modulate neuronal activity in the brain. But what does this tell us about the mechanisms of consciousness? When we are in a deep sleep or in a coma the endogenous em field is in play, yet we are not conscious. I have suggested that we are conscious if and only if we have an active internal representation of *something somewhere* in perspectival relation to a representation of our self. If consciousness requires an em field, the field would have to have this kind of representational spatiotemporal structure. So if we agree that em fields are an essential aspect of consciousness, this necessarily brings us back to ask what kind of neuronal mechanisms can induce the necessary representational structure in the em field.

  2. Arnold,
    It could well be a holographic type of coherent representation. Just a wild guess.

    It is perhaps not too surprising that some previously “overlooked crosstalks” play a role in brain function because we always have this nagging feeling that we have not gotten the full picture. I won’t be surprised even if it turns out that quantum coherency plays a role as well. I actually think coherency is required from other considerations, such as from the approach of personal identity. If that is the case, people who want to upload themselves onto the Internet to live forever will be heading in the wrong direction.

  3. While I buy that em fields in the brain may affect cognitive functioning in terms of modulating neuronal circuits in some way, I find it hard to believe that em fields have a functional or structural effect on the actual information processing. At the very most we have simply not taken into account a weak modulating factor.

    The most well understood portion of the CNS is the retina. Many, many models of the retina have been built on the assumption that the actual information processing functions of the brain are performed and represented by the synaptic connections between neurons. We know enough about the retina to accurately map out the connective patterns between photoreceptor, bipolar, ganglion and horizontal cells. When these connections are translated into artificial neural networks, they perform exactly the same functions as the retina! (edge detection, motion detection etc…)

    If these em fields played a significant role in the actual information processing functions of the brain, then models of brain function which do not include them would not be able to accurately replicate what the brain does.

    Models of retinal function do not take into account em fields, yet do accurately replicate the information processing of a part of the CNS. Therefore, em fields do not play a significant role in the brain’s information processing.

    In short, ANN models work; if we were missing something important, they wouldn’t.
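    Joe’s point is easy to illustrate with the textbook centre-surround arrangement (a toy of my own, not any specific published retina model): a difference-of-Gaussians weighting over a patch of inputs, treated as fixed “synaptic” weights, responds at luminance edges and stays silent on uniform regions — edge detection from connectivity alone.

```python
import numpy as np

# Textbook toy, not any specific retina model: a centre-surround
# (difference-of-Gaussians) receptive field, wired purely as fixed
# "synaptic" weights, responds at luminance edges and is silent on
# uniform regions -- edge detection from connectivity alone.

def dog_kernel(size=9, s_center=1.0, s_surround=2.5):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    def g(s):
        k = np.exp(-(xx ** 2 + yy ** 2) / (2 * s ** 2))
        return k / k.sum()               # each Gaussian sums to 1 exactly
    return g(s_center) - g(s_surround)   # so the kernel sums to 0

def ganglion_responses(image, kernel):
    # valid convolution: each output is one model ganglion cell
    h, w = kernel.shape
    H, W = image.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

image = np.zeros((20, 20))
image[:, 10:] = 1.0                 # a vertical luminance edge
r = ganglion_responses(image, dog_kernel())
print(np.abs(r).max(axis=0).round(3))   # peaks at the edge, 0 elsewhere
```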

    As I said at the beginning, however, em fields may play a general role in the modulation of neuronal circuits, but this can hardly be called a “key mechanism”.

  4. I agree with #1 and #5. In any case, the additional “electric” (not electromagnetic) field-based interaction mechanism contributes as much to solving the hard problem of consciousness as the discovery of a new neurotransmitter or a new membrane channel would… so what.

    Maybe it could help broader areas to synchronise faster, with an important impact on information-integration processes in a global workspace model setup; I don’t know.

    Kar Lee, this is your wildest guess ever…

  5. That was my first thought too – poor old Henry. He must be feeling a bit blue. Blue brain project indeed. What the heck does he do now?

  6. Fair comment, Joe. Of course you could argue that the retina is a bit of a special case in various ways.

    But in general what we’ve got here is influence between adjacent neurons: I might be wrong, but I think it’s unlikely that a set of adjacent neurons would form a functionally coherent entity, which does suggest that the effect would have no specific significance, as you say.

  7. Arnold, if I may say something about your very interesting question even though it is directly addressed to Peter…

    You are using the concepts functional and coherent for a set of pixels. These two terms only make sense if there is an external observer for those pixels.

    – They are functional if they perform the function of creating a symbol, e.g. a letter.

    – They are coherent if there is a relation between their attributes, e.g. color and position, that allows them to become a whole: the set of pixels is a new entity with properties beyond each individual pixel.

    But these considerations are meaningless if there is no observer of the screen.

    My question is: in the case of adjacent neurons, who or what would be the observer (or external entity) for which they behave coherently in order to perform a function? Take the particular case of a sensory cortex. If you say other parts of the brain, then I extend the question one more step, to those parts of the brain.

  8. Arnold:

    I probably would say they’re functionally coherent, though I agree with Vicente that as they’re at the output end of the process it’s not altogether clear-cut. They’re also a special case in that their relative positions are key to their function, which I believe is not the case for most neurons (although I know in the case of visual processing the topology of the retina survives quite a long way in).

    Hope that makes sense.

  9. Peter,

    Yes, that does make sense. But with regard to the relative position of sensory neurons, empirical findings suggest that all exteroceptive and interoceptive sensory modalities are organized in at least a rough spatiotopic fashion centered on the particular body region innervating the modality. Each sensory modality has its own locus of origin. A key problem in understanding phenomenal consciousness is to show how all the separate sensory/feature mappings can be integrated in proper spatiotemporal register (features properly bound) within a global coherent brain representation of our egocentric volumetric space. This problem is compounded by the fact that we have no sensory transducers for the 3D space that we live in.

    This is the fundamental problem that the retinoid model of consciousness solves.

  10. Arnold,

    “I have suggested that we are conscious if and only if we have an active internal representation of *something somewhere* in perspectival relation to a representation of our self.”

    I am not sure what you mean by perspectival relation, but it seems you have extra bits and parts here.

    If you say “internal representation of something somewhere for someone”, fine, now we can discuss who that someone is.

    But if you have the internal representation of something somewhere, then separately the internal representation of our self, you need a third point of view, that observes both representations. A representation can’t exist without an observer, both justify each other.

    You, as everybody else, cannot escape the inconsistency.

    “The eye cannot see directly the eye”; for us the screen, the image and the spectator seem to compose a single entity. Either we fall into the Cartesian homunculus infinite regression, or we produce incomplete, inconsistent statements like yours; no way out, too bad.

  11. Vicente,

    You wrote:

    “‘The eye cannot see directly the eye’; for us the screen, the image and the spectator seem to compose a single entity. Either we fall into the Cartesian homunculus infinite regression, or we produce incomplete, inconsistent statements like yours; no way out, too bad.”

    You seem to be making the very common mistake of assuming that in order for us to be conscious of the world around us we must first observe it. I would call this the observer fallacy. In fact it has things just backwards. My claim is that we must first *experience* our world *before* we can observe anything in our world. The primitive *something somewhere* that constitutes the fundamental ground of conscious experience (our phenomenal world) is the brain’s spatiotopic analog of the volumetric world space around a fixed point of origin (the 0,0,0 coordinate), what I have called the self-locus. The cluster of neurons at the self-locus in the retinoid model constitutes the *core self*. The core self is *not* an observer; it is the *perspectival origin* of all observations, which in my model of the cognitive brain are realized by the unconscious operations of specialized synaptic matrices in the various sensory modalities. The synaptic matrices sense, detect, classify, and image selected features that have been parsed out of the brain’s global world representation (retinoid space) and, in recurrent loops, project these images back into retinoid space in spatiotemporal register (called feature binding) to enrich our ongoing conscious experience of the world. There is no infinite regress and there is no homunculus.

    There is much more to be said about these processes than I can introduce here.

  12. Arnold,

    I never said that in order to be conscious of the world we must first observe it.

    We have to sense it, and then process that information, and then somehow we have a phenomenal experience related to some extent to that process. How to interpret, analyse and understand what we sense and observe is a different issue: what it is to “experience” the world.

    Of course, the “core self” is not an observer, but we are observers, now put that together…. the moment you add an observer, that’s it, you trigger the regression.

    This is the point: one thing is the physiology of vision, and another is to see something…. if not, try to define a mapping between what you call the “brain’s spatiotopic analog of the volumetric world space around” and your mental image. Consider the case of the blindsight condition, for example. On top of that, what happens when dreaming or imagining scenarios is not that clear.

    I am not questioning that the retinoid model provides a good understanding of the brain’s image-processing capabilities; I am just saying that all the “representational” information produced by the retinoid system has to be presented to “something” in order to have a conscious visual experience, for the same reason that the set of pixels cannot be functionally coherent if nobody sees them.

    IMO, your approach makes sense in a world of blindsight people, of phil. zombies, but we are talking about conscious entities.

    Just one clarification: I don’t think that pure representational (contemplative) consciousness makes sense; I believe that the conscious experience has to involve some interpretation, some understanding, and that requires a self. So to me the no-thought state is the no-state. It would be like having tons of raw data: if you don’t perform some data exploitation work on them, they are useless. In summary, the retinoid system can play, or plays, an important role in visual perception and space navigation, but it clarifies little as far as the “experience” of seeing is concerned.

  13. Vicente,

    You said:

    “Of course, the ‘core self’ is not an observer, but we are observers, now put that together… the moment you add an observer, that’s it, you trigger the regression… This is the point: one thing is the physiology of vision, and another is to see something… if not, try to define a mapping between what you call the ‘brain’s spatiotopic analog of the volumetric world space around’ and your mental image.”

    It seems to me that you seriously beg the question when you say that “we are the observers” and then relate this to a problem of infinite regression in an explanation of consciousness. If you don’t make a distinction between *observation* and *conscious content*, you obfuscate the issue. In my view, observation is the sensing, detection, and classification of any particular input pattern. So if you had a photosensitive-mechanical system that “looked at” your computer screen and turned your house lights on when the pixel pattern ON appeared on the screen, and turned lights off when the pattern OFF appeared on the screen, this system would be an observer of your computer’s output; but of course it is not conscious. Observation and response of this kind is found in all kinds of lower organisms, and there is no infinite regress.

    Consciousness, in distinction, is not observation. It is a particular kind of representation, namely a brain representation of something somewhere within a volumetric egocentric surround. This constitutes the full phenomenal scope of the world a conscious creature lives in. Preconscious observational mechanisms in the sensory modalities contribute all kinds of objects and events (mental images) to the brain’s phenomenal world (retinoid space). Where is the infinite regress?

    If you look at *The Cognitive Brain*, Ch. 12, “Self-Directed Learning in a Complex Environment”, you might get a better idea of what I mean. You can read this chapter here:

    http://www.people.umass.edu/trehub/thecognitivebrain/chapter12.pdf

  14. “Consciousness … is a … brain representation of something somewhere within a volumetric egocentric surround.”

    And assuming this is correct, what benefits do you envision this representation providing? As best I can tell, none have been identified so far, at least none necessary for responding to common external stimuli. The unequivocally physical capabilities you collectively call “observation” seem to be sufficient, at least for adequately responding to the external stimuli attendant to quotidian existence.

  15. Arnold: “Consciousness … is a … brain representation of something somewhere within a volumetric egocentric surround.”

    Charles: “And assuming this is correct, what benefits do you envision this representation providing?”

    This brain representation is necessary for us to be able to represent and respond to external stimuli not only on the basis of their intrinsic properties but, crucially, on the basis of their physical location in the world with respect to our own location and the location of other relevant objects. We would be unable to detect the affordances of events in our local world without having a volumetric representation of our egocentric surround.

    The brain’s retinoid system provides the necessary egocentric representation of the world around us.
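    A minimal data-structure sketch of the kind of representation Arnold describes (my illustration, not the retinoid model itself): objects held at coordinates relative to a self-locus at the origin, so that a response can be selected from where things are relative to oneself, and moving the self re-centres the whole scene.

```python
import numpy as np

# Minimal illustration (mine, not the retinoid model itself): objects
# held at coordinates relative to a self-locus at the origin (0,0,0),
# so a response can be chosen from where things are relative to *me*,
# not just from what they are.

world = {                        # egocentric coordinates, metres
    "door":   np.array([ 2.0, 0.0, 0.0]),
    "cup":    np.array([ 0.4, 0.1, 0.9]),
    "window": np.array([-3.0, 1.0, 1.5]),
}

def nearest(objects):
    # the object closest to the self-locus at the origin
    return min(objects, key=lambda k: np.linalg.norm(objects[k]))

def shift_self(objects, step):
    # moving the self shifts every object the opposite way in ego-space
    return {k: v - step for k, v in objects.items()}

print(nearest(world))                                         # -> cup
print(nearest(shift_self(world, np.array([2.0, 0.0, 0.0]))))  # -> door
```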

  16. OK. Since some here seem to tightly couple “consciousness” and “phenomenal experience”, I mistakenly understood your quote to be distinguishing (correctly, IMO) what you are calling “observations” and the latter. So, my question was really about phenomenal experience, the benefits of which seem elusive.

    In any event, although I understand that (per ecological psych) an animal’s model of its environment needs to include its self-centered, dynamic spatial relationships with other entities in the environment, it isn’t obvious to me why that feature of the model warrants the label “consciousness”. Don’t your simulations of presumably “unconscious” retinoid systems behave just fine in their environments? Or do you attribute some concept of “consciousness” to them?

  17. Charles: “So, my question was really about phenomenal experience, the benefits of which seem elusive.”

    Agreed, so asking Arnold: The neural instantiation of the egocentric representation is what’s doing the behavior-controlling work of activating muscles (including speech acts in our case), so what does phenomenal experience add? If experience is identical to its neural correlates, then it doesn’t add any causal power to what they have. If it isn’t identical, then one needs an account of the phenomenal causation of behavioral processes (how qualia influence the physical mechanisms that control behavior), which I don’t see much hope for.

  18. Charles,

    “phenomenal experience, the benefits of which seem elusive.”

    Could be. What are the criteria for judging or assessing whether something is beneficial or not?

    What are the benefits of existing?

    Implicitly you are introducing a goal for the process. But who cares… if a huge meteorite hits the Earth… “The End”… what are the benefits of all that has happened on this planet till then?

  19. Kar Lee, change blindness is perfect for this discussion, since it is related to attention focus, which entails an observer’s involvement (who pays attention to what?). The low-level image-processing functions and their output seem to be controlled and filtered by the observer’s higher cognitive functions. I just read that people with autism are less prone to change blindness, probably due to their different attention mechanisms.

  20. Charles and Tom,

    Charles: “….so what does phenomenal experience add? If experience is identical to its neural correlates, then it doesn’t add any causal power to what they have. If it isn’t identical, then one needs an account of the phenomenal causation of behavioral processes (how qualia influence the physical mechanisms that control behavior), which I don’t see much hope for.”

    I have argued that phenomenal experience *is* our phenomenal world, a transparent brain representation of the world around us from a privileged egocentric perspective. It doesn’t add any causal powers to the system of brain mechanisms that *constitute* our phenomenal world (the retinoid system) because both the subjective 1st-person events and corresponding activity in the relevant brain mechanisms, which are the objective 3rd-person events, are complementary *aspects* of an underlying unknowable reality. This formulation is based on the philosophical stance of dual-aspect monism. Notice that it does not claim that conscious content and brain processes are *identical* because the subjective aspect and the objective aspect reside in two different domains of description. Events in the phenomenal domain and in the biophysical (brain) domain have to be systematically bridged in order to have a proper scientific study of consciousness. To this end I have proposed the following bridging principle:

    *For any instance of conscious content there is a corresponding analog in the biophysical state of the brain.*

    The scientific task is to find brain mechanisms that can generate analogs of salient examples of conscious content. I’ve proposed the retinoid system as such a mechanism.

    Qualia are the phenomenal *features* that have been parsed out of our global phenomenal world by our unconscious cognitive brain mechanisms. So how do qualia influence the physical mechanisms that control behavior, Charles asks? It seems clear to me that since qualia are *constituted* by particular brain events (dual-aspect monism), they *necessarily* influence behavior by the very fact that they are causally effective as internal stimulus patterns that evoke adaptive behavioral responses.

    What are the counter arguments?

  21. Arnold, if I am interpreting what you are saying correctly, your position is that phenomenal experience simply *is* the brain representation of the external world being fed in as input to the brain representation of self (the egocentric representational perspective).

    These brain representations can be observed objectively (e.g. brain scans, cell recording etc…) and described algorithmically (simulations, models etc…)

    You propose the following bridging principle:

    “*For any instance of conscious content there is a corresponding analog in the biophysical state of the brain.*”

    I see this not as a solution to the problem posed in the conversation, but as an assumption necessary for it. Unless you subscribe to some form of radical eliminativism, you must accept that phenomenal experience is an actual part of reality (i.e. not illusory) which requires an explanation. Also, unless you are some kind of dualist, you must accept that phenomenal experience arises as the result of the operation of some physical system.

    These are the two basic assumptions of this conversation (as I understand them):
    1) Phenomenal experience is real and requires explanation
    2) Phenomenal experience arises from physical systems

    The acceptance of the above, leads to the Hard Problem. What is the relationship between physical brain states and phenomenal experience? How do physical brain states result in phenomenal experience? What is the process by which this occurs? Can we bring it under predictive experimental control? Simply stating there is a relationship doesn’t get us anywhere, it’s already been assumed.

    You asked what are the counter-arguments to your position. I think the most problematic is blindsight. If phenomenal experiences are simply the egocentric perspective of a certain type of information processes, then they should be present any time those information processes are. However, in cases of blindsight, the functional information processing is the same (i.e. people afflicted still avoid obstacles and respond to the environment), but phenomenal experience is absent. This shouldn’t be possible if the information process and phenomenal experience are the same thing from different perspectives.

  22. Joe,

    This shouldn’t be possible if the information process and phenomenal experience are the same thing from different perspectives.

    And if that were the case, where is the point of view that defines the phenomenal-experience perspective located? And who or what occupies it? For the information process an external observer of the brain could serve, but for the phenomenal side…

    “dual-aspect monism” he he… and particles and waves too… nice euphemism for dualism. Dualism does not clarify anything anyway, and introduces the binding problem. Exhausting.

  23. Joe,

    You wrote:

    “These are the two basic assumptions of this conversation (as I understand them):

    1) Phenomenal experience is real and requires explanation
    2) Phenomenal experience arises from physical systems

    The acceptance of the above, leads to the Hard Problem. What is the relationship between physical brain states and phenomenal experience? How do physical brain states result in phenomenal experience? What is the process by which this occurs? Can we bring it under predictive experimental control?”

    I agree with the first assumption that phenomenal experience is real and requires explanation. However, I think the wording of your second assumption is problematic. If you posit that phenomenal experience (PE) “arises from physical systems [PS]”, it seems to imply a substantive distinction between PE and PS — substance dualism. I think this is wrong. In my theoretical formulation, PE and PS are simply two different aspects of an unknowable reality — dual-aspect monism. Science, as a pragmatic enterprise, tries to understand the subjective PE in terms of the objective PS in the brain which constitutes PE.

    Joe: “I think the most problematic [counter-argument] is blindsight. If phenomenal experiences are simply the egocentric perspective of a certain type of information processes, then they should be present any time those information processes are.”

    The advantage in having an explicit neuronal brain model of the phenomena in question is that we can point to the structural and dynamic properties of the model to explain puzzling phenomena.

    In “Space, self, and the theater of consciousness” (2007), _Consciousness and Cognition_, pp. 324-325, I described a critical experiment that I conducted as a test of the retinoid model. I present it here because I think it shows how blindsight can be explained. It provides evidence that it is possible to have a brain representation below phenomenal threshold that can provide a subliminal stimulus pattern to which one can make adaptive responses. This is the experiment:
    ………………………………………………………………………

    Seeing-More-Than-is-There (SMTT)

    Procedure:

    1. Subjects sit in front of an opaque screen having a long vertical slit with a very narrow width, as an aperture in the middle of the screen. Directly behind the slit is a computer screen, on which any kind of figure can be displayed and set in motion. A triangular-shaped figure in a contour with a width much longer than its height is displayed on the computer. Subjects fixate the center of the aperture and report that they see two tiny line segments, one above the other on the vertical meridian. This perception corresponds to the actual stimulus falling on the retinas (the veridical optical projection of the state of the world as it appears to the observer).

    2. The subject is given a control device which can set the triangle on the computer screen behind the aperture in horizontal reciprocating motion (horizontal oscillation) so that the triangle passes beyond the slit in a sequence of alternating directions. A clockwise turn of the controller increases the frequency of the horizontal oscillation. A counter-clockwise turn of the controller decreases the frequency of the oscillation. The subject starts the hidden triangle in motion and gradually increases its frequency of horizontal oscillation.

    Results:

    As soon as the figure is in motion, subjects report that they see, near the bottom of the slit, a tiny line segment which remains stable, and another line segment in vertical oscillation above it.

    As subjects continue to increase the frequency of horizontal oscillation of the almost completely occluded figure there is a profound change in their experience of the visual stimulus.

    At an oscillation of ~ 2 cycles/sec (~ 250 ms/sweep), subjects report that they suddenly see a complete triangle moving horizontally back and forth instead of the vertically oscillating line segment they had previously seen. This perception of a complete triangle in horizontal motion is strikingly different from the line segment oscillating up and down above a fixed line segment which is the real visual stimulus on the retinas.

    As subjects increase the frequency of oscillation of the hidden figure, they observe that the length of the base of the perceived triangle decreases while its height remains constant. Using the rate controller, the subject reports that he can enlarge or reduce the base of the triangle he sees, by turning the knob counter-clockwise (slower) or clockwise (faster).

    3. The experimenter asks the subject to adjust the base of the perceived triangle so that the length of its base appears equal to its height.

    Results:

    As the experimenter varies the actual height of the hidden triangle, subjects successfully vary its oscillation rate to maintain approximate base-height equality, i.e. lowering its rate as its height increases, and increasing its rate as its height decreases.

    This experiment demonstrates that the human brain has internal mechanisms that can construct accurate analog representations of the external world. Notice that when the hidden figure oscillated at less than 2 cycles/sec, the observer experienced an event (the vertically oscillating line segment) that corresponded to the visible event on the plane of the opaque screen. But when the hidden figure oscillated at a rate greater than 2 cycles/sec., the observer experienced an internally constructed event (the horizontally oscillating triangle) that corresponded to the almost totally occluded event behind the screen.

    The experiment also demonstrates that the human brain has internal mechanisms that can accurately track relational properties of the external world in an analog fashion. Notice that the observer was able to maintain an approximately fixed one-to-one ratio of height to width of the perceived triangle as the height of the hidden triangle was independently varied by the experimenter.

    These and other empirical findings obtained by this experimental paradigm were predicted by the neuronal structure and dynamics of a putative brain system (the retinoid system) that was originally proposed to explain our basic phenomenal experience and adaptive behavior in 3D egocentric space (Trehub, 1991). It seems to me that these experimental findings provide conclusive evidence that the human brain does indeed construct phenomenal representations of the external world and that the detailed neuronal properties of the retinoid system can account for our conscious content.
    …………………………………………………………………………….

    Notice that when the refresh frequency was less than approximately 2 cycles/sec there was no phenomenal experience of the triangle even though retinoid mechanisms were tracing a triangle in egocentric retinoid space. Notice that the retinoid model requires recurrent feedforward and feedback from the imaging matrices of the synaptic matrix mechanisms (which classify sensory input and influence motor response) to sustain a phenomenal experience of an object. In my SMTT experiment, if a subject had damage to area V1 so that the frequency of refreshing feedback to the retinoid were less than 2 cycles/sec, sensory-motor mechanisms might still receive the retinoid pattern of a triangle and make an appropriate response even though the triangular pattern was below the threshold of phenomenal experience. Hence blindsight.
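    A back-of-envelope reading of the size/rate trade-off reported above (illustrative arithmetic of my own, not Trehub’s model): if the perceived base shrinks in inverse proportion to the oscillation frequency, base = k/f, then holding base equal to height forces the subject to set f = k/height — slower for taller hidden triangles, faster for shorter ones, which is the adjustment the subjects actually made.

```python
# Back-of-envelope reading of the reported size/rate trade-off
# (illustrative arithmetic, not Trehub's model). If the perceived
# base scales inversely with oscillation frequency, base = k / f,
# then keeping base == height means choosing f = k / height.

k = 10.0                      # illustrative constant (cm * cycles/sec)
for height_cm in (2.0, 4.0, 8.0):
    f = k / height_cm         # rate the subject must settle on
    print(f"height {height_cm:4.1f} cm -> oscillate at ~{f:4.1f} cycles/sec")
```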

  24. The next to last sentence in my #29 should read:

    “As in my SMTT experiment, if a subject had damage to area V1 so that the frequency of refreshing feedback to the retinoid were less than 2 cycles/sec, sensory-motor mechanisms might still receive the retinoid pattern of a triangle and make an appropriate response even though the triangular pattern was below the threshold of phenomenal experience.”

  25. Having given some more thought to Tom’s causality argument, I remain skeptical. I understand the argument to be, in essence, that because science is inherently an objective enterprise based on direct 3-POV observations and reports whereas access to phenomenal experience (PE) is indirect via 3-POV observations of behavior and direct only via subjective 1-POV reports, science has nothing to say about PE. Consequently, from science’s perspective PE effectively doesn’t exist and therefore can’t have a causal relationship with the physical subjects of scientific investigation.

    If that is a correct summary of the argument, I think it represents a questionable view of the scientific enterprise. Sellars argued (and many have agreed) that the real power of science is that it is a “self-correcting enterprise” in which a primary role of the 3-POV is to facilitate consideration and discussion within a relevant community of whatever hypotheses, observations, reports, et al, are available, a process that leads to consensus within that community, consensus which in turn provides the justification aspect of knowledge. This is a demonstrably effective process that is nevertheless fallible. That fallibility suggests that rather than summarily dismissing subjective 1-POV reports, we might instead ask how much more fallible must our understanding of PE be because it is based only on indirect 3-POV observations of behavior and direct but subjective 1-POV reports. There is a good deal of subjectivity in even unquestionably “scientific” 3-POV observations (another Sellars message), so it really doesn’t seem to be simply a matter of objective vs subjective. The answer may well be that the resulting understanding of PE is “unacceptably fallible”, but that needs to be convincingly argued, not preemptively assumed.

    (Arnold was perhaps making a related point in comment 37 on the Soul Dust thread re scientific explanation.)

    A somewhat related way of looking at this is to consider the ambiguous intent in describing PE as “ineffable”. One possible intent in so labeling PE is to suggest that no detailed description – or explanation – in physical terms is possible. But that would seem an overstatement of the current situation, which seems to be only that no widely accepted description is currently available. In which case it seems premature to assert anything about the causal relationship between PE and the physical.

    Another possible intent in labeling PE “ineffable” is merely to suggest that it can’t be discussed in the “objective” (in some sense) way required by science. An approach to developing a vocabulary to support such discussion of mental events (including PE) is what Sellars’ mythical “Jones” contributes (in principle – Sellars doesn’t propose a specific vocabulary, only a possible methodology for creating one). And there is, in fact, ample current discussion of PE, although the vocabulary currently available for such discussions appears to be woefully inadequate. In any event, science is replete with discussion of theoretical objects/concepts that aren’t available to direct 3-POV observation – in which case the availability of 1-POV reports would actually seem a possible advantage.

    Note: I’m not familiar in any detail with Dennett’s heterophenomenology idea, but I think it is somewhat along these lines in that it incorporates 1-POV observations as evidence that is accorded appropriately limited credibility. (A possible application of Sellars, of whom I understand Dennett to be a fan.)

  26. Arnold –

    That is indeed an imaginative and interesting experiment with suggestive results. However, I have a problem with this conclusion:

    “It seems to me that these experimental findings provide conclusive evidence that the human brain does indeed construct phenomenal representations of the external world”

    The problem is the addition of “phenomenal”. The experiment does seem to confirm that there is some representation of the environment that has persistence (on the order of seconds) and that the content of that representation gets incorporated (via whatever mechanism) into phenomenal experience. But it isn’t clear (at least to me) that the latter is evidence – conclusive or otherwise – that the former is “phenomenal”. Or even what that word means in that phrase.

    And the experiment appears to suffer the usual defect of arguments suggesting a benefit consequent to the existence of phenomenal experience: the observed capability (in this case, creating a representation with persistence) doesn’t seem to require the attendant phenomenal experience. Whatever mechanism creates phenomenal experience from the neural correlates of immediate visual input could presumably do the same with the neural correlates of the persistent representation. But why?

    Or am I missing something?

  27. Charles: “The problem is the addition of “phenomenal”. The experiment does seem to confirm that there is some representation of the environment that has persistence (on the order of seconds) and that the content of that representation gets incorporated (via whatever mechanism) into phenomenal experience. But it isn’t clear (at least to me) that the latter is evidence – conclusive or otherwise – that the former is “phenomenal”. Or even what that word means in that phrase.”

    On the assumption that “the former” refers to a representation of the environment (correct me if I am mistaken), you seem to miss the point that the vivid conscious experience of a triangle in motion is *not* a representation of the subjects’ visible environment. It is a conscious experience that is constructed by the brain’s putative retinoid mechanisms, and is *completely different from the subject’s visible environment*. In the SMTT experiment, the phenomenal representation of an event (the triangle) that *is not a representation of the subject’s sensory environment*, and is predicted by the structure and dynamics of the retinoid system, is evidence that the causal properties of the retinoid system are the source of the conscious representation, a phenomenal experience of an illusory triangle in front of the subject.

    I use the word “phenomenal” to mean something that is a conscious experience. So, for me, a phenomenal representation specifies a conscious representation as distinguished from a representation which might or might not be a conscious representation. Do you see a problem with this usage?

  28. Arnold –

    As usual in this field, it’s extremely difficult (at least for me) to find a way of saying what is meant since there seems to be no widely accepted standard vocabulary. In direct answer to your question, I use “phenomenal experience” to mean the “visual mental image of the content of the FOV” – what I assume others mean by “qualia”. I think of “representations” as being comprised only of patterns of neural activity, so for me there’s no such thing as a “phenomenal representation”. However, I interpreted your use of that phrase as being equivalent to my use of “phenomenal experience”, and as long as we agree on usage, I’m happy with any vocabulary. Anyway, here’s another try at making my point.

    I am quite aware that the instantaneous content of the FOV is essentially a pair of dots – moving in the slit – against a monochrome background and that any neurological representation of that instantaneous FOV will include only those elements. My unstated (sorry!) assumption is that there is a need (for various purposes) for “representation buffers” (conceptual, although perhaps your retinoids implement something functionally equivalent) that hold recent neurological representations of the instantaneous FOV, thereby making individual representations (in a sense) persistent. And if that is conceptually correct, one can imagine a mechanism that combines sequences of those persistent representations in such a way to create a “virtual” neurological representation of the whole (unseen) triangle.

    Note: In speaking of “persistent” neurological representations, I’m analogizing with cathode ray tubes in which an instantaneous beam creates the illusion of a complete picture owing to the persistence of the individually excited phosphors. But in doing so, I don’t mean to imply any actual “visual” activity, just some kind of merging of representations of the instantaneous FOV. Also, although I’m using the language of discrete events for heuristic purposes, the process presumably is actually continuous.

    If something like that hypothesized process is actually occurring, then there is effectively a neurological representation of the complete triangle. But it isn’t obvious that there must be an accompanying phenomenal experience. Of course, your experiment shows that there is, but that’s just another instance of a (seemingly unnecessary) addition to the actual work, all of which appears to be done by the neurological representations and associated “processing”.
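    The hypothesised buffer is easy to phrase concretely (a sketch of the CRT analogy only, not a neural claim): a buffer that decays a little each step accumulates the instantaneous slit views, each painted at the figure’s current offset; if the sweep is fast relative to the decay, the buffer holds the whole outline at once.

```python
import numpy as np

# Sketch of the CRT-persistence analogy only, not a neural claim: a
# buffer decays each step and accumulates the instantaneous slit view,
# painted at the hidden figure's current offset. Sweep fast relative
# to the decay and the buffer holds the whole outline at once, even
# though each instant contributes only two dots.

H, W = 21, 41

def triangle_column(x):
    # the two contour points of a wide, low triangle at position x
    col = np.zeros(H)
    half = int(round((H // 2) * (1 - abs(x) / (W // 2))))
    col[H // 2 - half] = 1.0
    col[H // 2 + half] = 1.0
    return col

buf = np.zeros((H, W))
decay = 0.98                    # persistence per time step
for t in range(400):
    x = int((W // 2) * np.sin(0.3 * t))   # figure oscillates past slit
    buf *= decay                          # old traces fade
    buf[:, x + W // 2] += triangle_column(x)

# With a much faster decay (shorter persistence) the outline never
# accumulates -- loosely analogous to the ~2 cycles/sec threshold.
print((buf > 0.1).sum(), "cells lit: an outline, not just two dots")
```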

    Hope that makes it clearer.

    Aside: I wonder if the following visual illusion is related to your experiment: loosely hold a pencil by one end and move that end up and down to create the image of a wriggling, snake-like entity. The pencil is, of course, at all times rigid, but presumably some process of combining persistent successive instantaneous images creates the illusion.

  29. Arnold in #25: “It [phenomenal experience] doesn’t add any causal powers to the system of brain mechanisms that *constitute* our phenomenal world (the retinoid system) because both the subjective 1st-person events and corresponding activity in the relevant brain mechanisms, which are the objective 3rd-person events, are complementary *aspects* of an underlying unknowable reality.”

    So it’s the underlying reality that’s doing the causation, in both its subjective and objective aspects, which are non-identical and “reside in two different domains of description” as you say later. Both descriptive stories are therefore equally causal, but in parallel, so it isn’t the case that qualia add extra causal “oomph” to what the neurons are doing. This is very close to the psycho-physical parallelism I suggest at http://www.naturalism.org/privacy.htm#barring , although I don’t think that the idea of an underlying reality with its own unobserved nature adds anything to the account, so I simply talk about ways of representing reality, one qualitative, the other conceptual. Reality gets described (represented) in two basically different ways: from the subjective perspective of a free-standing behavior-controlling representational system like individual human beings (somehow resulting in qualia – the hard problem) or from the intersubjective 3rd-person perspective (resulting in concepts).

    It seems to me the question remains of how the qualitative comes to exist according to a 3rd-person account of mind-independent reality, which by necessity abstracts away from phenomenal qualities in its conceptual and quantitative descriptions of the world. You seem to side-step this question (that of closing the explanatory gap) by saying that qualia (conscious phenomenal qualitative states like pain) are an irreducible aspect of an underlying reality. This sounds like a type of pan-psychism – that reality in its basic components or nature has a qualitative-mental side to it. But the evidence thus far is that we only find qualitative states in association with complex cognitive *systems* exhibiting certain functional, representational capacities. This is to say that consciousness is likely a *system* property, not a property of base reality independent of how its components are organized, http://www.naturalism.org/kto.htm#Neuroscience

    As cognitive neuroscience proceeds, the correlations will become ever more precise between phenomenal experience and the specific characteristics of physically-realized representational processes, so it seems to me this is the way forward to closing the explanatory gap: what is it, precisely, about being such a representational system that entails the existence of qualia for the system alone, and not for outside observers of the system? (speculated on at http://www.naturalism.org/appearance.htm#part5 ) Your retinoid model is an empirically based account which shows how qualia run in parallel with certain types of representations, those which represent the organism itself at the center of a represented external world. But why are qualia present only when this rather complex type of representational system is up and running? To say qualia are simply an aspect of an underlying reality seems to me to ignore the representational complexities you yourself say are correlated with the existence of qualia. But it’s quite likely I’ve misunderstood your proposal.

  30. Charles, you say: “I understand the argument to be, in essence, that because science is inherently an objective enterprise based on direct 3-POV observations and reports whereas access to phenomenal experience (PE) is indirect via 3-POV observations of behavior and direct only via subjective 1-POV reports, science has nothing to say about PE.”

    I think science has a lot to say about PE, since after all we’re investigating its neural correlates and associated behavior. As I say at http://www.naturalism.org/privacy.htm#epiphenomenal :

    The philo-scientific project of doing full justice to the world in our descriptions – what we think of as attaining maximum objectivity – can’t responsibly declare consciousness non-existent, since after all here it is for each of us, an ineluctable, non-illusory 1st person reality that each of us sincerely attests to, a fantastically rich quality space within which we as phenomenal subjects and our phenomenally presented worlds both exist. Moreover, we have made great strides in pinning down the neural correlates of consciousness (see note 10), so it’s to some extent an empirically tractable phenomenon, albeit invisible to intersubjective observation.

    And you say “In any event, science is replete with discussion of theoretical objects/concepts that aren’t available to direct 3-POV observation – in which case the availability of 1-POV reports would actually seem a possible advantage.”

    Agreed about the role in science of theoretical entities not directly available to observation, and the importance of 1-POV reports, but the difficulty is that conscious states are *by their very nature* not observable, or so I argue in Respecting privacy (quoted above). No one has seen, or will ever see, a pain, since pains are things we undergo, that we *consist of* (among many other qualitative states) as experiencing subjects. And what’s observed from outside are pain’s neural correlates, not pain itself. Same goes for all contents of PE.

  31. Tom,

    I agree with most of what you say, but I don’t agree that dual-aspect monism implies panpsychism. Look at it this way:

    1. There is an all-encompassing underlying reality (R) that is unknowable.
    2. R is partitioned into innumerable different kinds of structures.
    3. Some of these structures constitute living organisms.
    4. Some of these living organisms have internal structures that constitute brains.
    5. Some brains have internal structures that constitute retinoid systems.
    6. Retinoid systems constitute subjectivity/consciousness, a dual-aspect entity by virtue of their particular structure.

    So from the 3pp, the retinoid system is a particular kind of biophysical organization, and from the 1pp, the retinoid system is a particular kind of phenomenal experience.

    My 3pp working definition of consciousness:

    *Consciousness is a transparent brain representation of the world from a privileged egocentric perspective.*

    My 1pp working definition of consciousness:

    *Consciousness is an experience of something somewhere with respect to oneself.*

    In the scientific exploration of consciousness, simple correlates of consciousness are too weak to advance our understanding of conscious content. I’ve argued for the bridging principle of corresponding analogs to constrain our investigations.

    Here’s the bridging principle that I’ve proposed:

    *For any instance of conscious content there is a corresponding analog in the biophysical state of the brain.*

    This sets the scientific task — find brain mechanisms that can generate analogs of salient phenomenal events. The retinoid system is my proposal for such a mechanism. See *The Cognitive Brain*, Ch. 4. “Modeling the World, Locating the Self, and Selective Attention”, here:

    http://people.umass.edu/trehub/thecognitivebrain/chapter4.pdf

  32. Arnold: “Retinoid systems constitute subjectivity/consciousness, a dual-aspect entity by virtue of their particular structure. So from the 3pp, the retinoid system is a particular kind of biophysical organization, and from the 1pp, the retinoid system is a particular kind of phenomenal experience.”

    It seems to me that the two different perspectives are creating the *appearance* of two aspects to reality; it isn’t that reality has two aspects *in itself*. I don’t think, btw, that we as conscious subjects have a literal perspective on our own retinoid system, rather we consist of it (among other things) as representational systems. We as individuals have 1st person perspectives on the *world* (including the body), but not on our brains, and the world is given to us qualitatively. Why this should be so is the hard problem.

    Intersubjectively, as in science and other 3rd person descriptions of the world that get expressed in concepts and quantities, these qualities necessarily drop out – they don’t appear in 3rd person descriptions. This could be why we may never understand qualities as being entailed by 3rd person descriptions (but I’m not sure about this – just a conjecture!).

    Re the hard problem, I agree that it’s the *structure* of retinoid systems (and whatever other correlates of consciousness there are) that are likely the key in understanding why it should be the case that *being* such a system makes the system an experiencing subject.

  33. Tom: “Re the hard problem, I agree that it’s the structure of retinoid systems (and whatever other correlates of consciousness there are) that are likely the key in understanding why it should be the case that being such a system makes the system an experiencing subject.”

    It seems that we agree on this point. Can I assume that you no longer suspect that the retinoid structure as a dual-aspect manifestation of a monistic reality implies panpsychism?

  34. Arnold: “Can I assume that you no longer suspect that the retinoid structure as a dual-aspect manifestation of a monistic reality implies panpsychism?”

    Right, I don’t. You’re saying, I think, that the retinoid structure of someone’s brain that we can see from an outside 3rd person perspective is one aspect of an underlying reality and the other aspect is the phenomenal experience associated with that structure. So reality has a 3rd person, objective aspect and a 1st person, subjective aspect, but the 1st person aspect (phenomenal experience) only exists or appears when the 3rd person aspect assumes a certain configuration and does certain things, namely represent the world from an egocentric perspective.

    The natural question to ask is why do only those processes, those bits of reality, have a 1st person aspect? I myself hanker after an explanation, perhaps having to do with what it is about the structure and function of the retinoid system (and other correlates of consciousness) that entails the existence of phenomenal experience. You seem to be saying that reality just has two aspects when it gets organized a certain way, and there’s no deeper explanation needed or forthcoming. But of course it’s quite possible I missed something in your account.

  35. Tom: “You seem to be saying that reality just has two aspects when it gets organized a certain way, and there’s no deeper explanation needed or forthcoming.”

    Yes. Reality has a biophysical aspect and a conscious aspect when it is organized into the brain structure of a retinoid system. There must be a deeper explanation, but here science hits a brick wall. The question at this point is: “Given that the retinoid system can explain the content of consciousness, how can we explain the sheer existence of consciousness?” You might call this an explanatory gap. But notice that it isn’t really a gap — it’s a barrier! And it is the same barrier faced by theoretical physics in trying to explain the sheer existence of the fundamental forces. Science is not omniscient, and is unable to explain the sheer existence of anything.

  36. Arnold: “…how can we explain the sheer existence of consciousness? You might call this an explanatory gap. But notice that it isn’t really a gap — it’s a barrier! And it is the same barrier faced by theoretical physics in trying to explain the sheer existence of the fundamental forces. Science is not omniscient, and is unable to explain the sheer existence of anything.”

    Guess I disagree. For me, the existence of phenomenal experience as arising within the physical, natural realm, and only when the physical realm is organized in certain ways, presents an explanatory puzzle quite distinct from the question of why fundamental physical forces and entities exist and have just the properties they do. You are content to say there’s a subjective, experiential aspect of reality that just happens to be manifest when things in physical, 3rd person reality are ordered in certain ways. If you think you’ve hit the final explanatory wall with this, then you won’t investigate further. Others, including myself, want something more in the way of explanation, perhaps impossibly and unrealistically, but there it is. Have pity on us!

  37. Tom: “For me, the existence of phenomenal experience as arising within the physical natural realm, and only when the physical realm is organized in certain ways, presents an explanatory puzzle quite distinct from the question of why fundamental physical forces and entities exist and have just the properties they do.”

    Interesting. I have the same feeling about this that you have. But when I examine it closely I can’t justify a significant distinction. Can you give us principled justification for an explanatory distinction?

    I should say that, in sympathy with you, I would like something more in the way of explanation as well. And I hope that further investigation will prove me wrong about the elusiveness of finding a scientific explanation for the sheer existence of anything, including, of course, phenomenal experience.

  38. Arnold: “Can you give us principled justification for an explanatory distinction?”

    The principled distinction is that the evidence strongly suggests that consciousness supervenes on very specific states of physically instantiated affairs, namely those that do representational, cognitive work in service to behavior control (see http://www.naturalism.org/kto.htm#Neuroscience ), whereas the fundamental laws/entities of physics don’t supervene on anything, as far as we know – they just *are*. To me this suggests there’s potentially an explanation of why consciousness only supervenes on representation (I explore some explanatory possibilities at http://www.naturalism.org/appearance.htm#part5 ), whereas I don’t see any further explanation as to why what’s physically fundamental just *is* since to answer that would have to appeal to even deeper fundamental principles to which the same question would apply.

  39. Tom –

    I’m working on KTO (currently in my third pass through it), which is very much in line with my thinking. Because I would like to align our views to the extent possible, I’d like to open a dialog about the paper on a section-by-section basis. I’d suggest moving that off-line, but I think the exchange might contribute to your dialog with Arnold and might actually be of general interest in this forum.

    In section 1, I’m totally with you up to the paragraph which addresses blindsight in which you say:

    … cognitive processes involving conscious sensory experience also seem essential to guiding behavior. Despite the fact that, for instance, blindsight experiments show some rudimentary cognitive capacities remain intact with respect to the blindfield in the absence of phenomenal consciousness, the general rule is that if normal consciousness is curtailed, behavior is compromised, often radically

    I think there is a way to interpret blindsight that doesn’t support the conclusions about consciousness. My speculation is that the processing of sensory inputs may be executed by what we might call a “dual processor architecture”. One processor (perhaps something along the lines of Arnold’s retinoid system) performs the kind of basic functions adequate for satisfying primitive requirements – finding shelter, dodging missiles, etc. – employing only extremely simple models of the environment, possibly derived using the totality of sensory input from the retina. It presumably would produce responsive actions quickly, as appropriate for mainly reflexive actions.

    The second processor produces more detailed models of the environment, possibly emphasizing visual sensory inputs from the fovea. It produces more complex responses, presumably a more time-consuming activity involving things such as accessing long-term memory. This detail processor interacts cooperatively with the basic processor, and both are adaptive in that they become more adept at performing their functions as time passes and learning occurs.

    A further speculation is that phenomenal experience is an artifact of the presence in those processors of the information necessary to produce the models, information also used to “paint” a virtual visual (in fact, a verbal) image that constitutes the PE illusion.

    Were there such an architecture, aberrational visual behavior could be explained by various failures in the processors. E.g., blindsight presumably would involve a complete breakdown in the processing that produces the PE illusion, and the “compromised behavior” would suggest failure of the detail processor as well. Similarly, the limited behavior provided by sensory substitution could be interpreted as the result of the new sensory inputs being routed to the basic processor. I’m trying to write this idea up in greater detail, but that’s enough (too much?) for the present purpose.
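    To make the dual-processor speculation concrete, here is a minimal sketch in Python. Everything in it – the class names, the cues, the arbitration rule – is invented purely for illustration; it is a caricature of the idea, not a claim about neural implementation:

        # Toy sketch of the speculated dual-processor architecture.
        # All names and details are illustrative assumptions.

        class BasicProcessor:
            """Fast, coarse processing: whole-retina input, reflexive output."""
            def respond(self, cues):
                if "looming object" in cues:
                    return "duck"      # quick, reflex-like action
                return None            # nothing urgent detected

        class DetailProcessor:
            """Slower, finer processing: foveal input, memory, planning."""
            def __init__(self):
                self.long_term_memory = {"looming object": "a thrown ball"}
            def respond(self, cues):
                for cue in cues:
                    if cue in self.long_term_memory:
                        return "identify " + self.long_term_memory[cue] + " and plan"
                return "keep scanning"

        def behave(cues, basic, detail):
            # The basic processor gets first crack; the detail processor's
            # slower answer refines behavior when nothing urgent preempts it.
            reflex = basic.respond(cues)
            return reflex if reflex else detail.respond(cues)

        basic, detail = BasicProcessor(), DetailProcessor()
        print(behave(["looming object"], basic, detail))  # -> duck
        print(behave(["blue chair"], basic, detail))      # -> keep scanning

    On this caricature, blindsight would correspond to the detail processor (and whatever produces the PE illusion) failing while the basic processor still drives crude responses.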

    The only other problem I have with this section is with this statement:

    … as subjects we don’t have a first-person perspective on experience, even though as persons we most certainly have a first-person perspective on the world

    It seems to me that an important potential benefit of “killing the observer” is the elimination of visual metaphors in these discussions. So, why not just say whatever you have in mind here without them, thereby avoiding any temptation to resurrect the slain observer?

  40. Charles: “I’d like to open a dialog about the paper [KTO: Killing the observer] on a section-by-section basis. I’d suggest moving that off-line, but I think the exchange might contribute to your dialog with Arnold and might actually be of general interest in this forum.”

    Hi Charles,

    I’m of course glad to have KTO discussed and glad you’re sympathetic to it (although I’ve since changed my mind about some things), but we should see whether Peter wants to host such a discussion and if so how he’d like to do it. There’s a thread that mentions my work at http://www.consciousentities.com/?p=689

    Appreciate your interest!

    Tom

  41. Well, Tom, it turns out that we don’t need to worry about thread-jacking – through sections 2-6 of KTO, I find no disagreements requiring alignment. The only comment I have is that your discussion of private “facts” might have benefited from familiarity with Sellars and his followers such as Rorty, Brandom, or Ramberg – any of whom would have pointed out that since “facts” – at least epistemic ones, and I’m not sure what other kinds there can be – can’t be private, the whole concept is incoherent. The existence of such “facts” is, in my understanding, a prime example of the “myth of the given”, the target of “Empiricism and Phil of Mind”.

    I especially liked this from section 6:

    … ostensibly private facts about experience, by virtue of being ineffable, are unspecifiable.

    I have attacked “ineffability” by observing that an amazing amount of verbiage has been attached to supposedly ineffable qualia, but that’s just cheap snark. Your observation that qualia are ineffable even from the supposedly privileged 1pp is the serious observation.

    We do start to diverge when in the penultimate paragraph of section 7 you say:

    sensory qualia don’t include privately given or directly observed facts about my experience, rather they are represented facts about what is represented, what I as an organism directly observe, namely the world

    You have distinguished the public (objective) aspects of qualia – those that are manifest in neural activity – and the private (subjective) aspects that aren’t. I don’t see the subjective aspect of sensory quality as involving “facts” period, whether about experience or the world. I even go a step further and consider them illusory. Also, I’m immediately suspicious of cascades such as “represented facts about what is represented”. And (anticipating a bit), I think I’m not on-board with your dual explanatory spaces idea (I haven’t gone back for a careful reading of Respecting Privacy, so I may be wrong). All those negative reactions stem from my failure to understand why we should bother with the subjective aspect of qualia beyond trying to figure out the (presumably physical) mechanisms behind their emergence – i.e., the facts not about them per se but about why and how we experience them. (Of course, if that’s what you mean to capture with your cascade, then we actually are in agreement on that as well.)

    I think one more brief comment on the rest will wrap it up. But I’ll wait for any responses you may have so far.

  42. Charles, a few replies to what you say in 45 and 47 above, just focusing on our remaining differences:

    “…blindsight presumably would involve a complete breakdown in the processing that produces the PE illusion and the “compromised behavior” would suggest failure of the detail processor as well. Similarly, the limited behavior provided by sensory substitution could be interpreted as the result of the new sensory inputs being routed to the basic processor.”

    Agreed that as the cognitive, behavior-controlling processes associated with consciousness degrade, so does behavior and consciousness. My only quibble is that I wouldn’t call phenomenal experience (PE) an illusion since it’s undeniably real. What is illusory or mistaken, the way I now see it (a change from KTO), is that PE could play a causal role in 3rd person explanations of behavior.

    “It seems to me that an important potential benefit of “killing the observer” is elimination of visual metaphors in these discussions. So, why not just say whatever you have in mind here without them thereby avoiding any temptation to resurrect the slain observer?”

    Although the conscious subject is not in an observational, epistemic relation to experience (a point Arnold makes as well), it’s still the case that as organisms we observe our environment, including a good deal of our bodies. So seems to me we have a 1pp on the world, that is, we observe it from an egocentric spatial and temporal perspective tied to our bodies.

    “I don’t see the subjective aspect of sensory quality as involving “facts” period, whether about experience or the world. I even go a step further and consider them illusory.”

    Seems to me that qualia are almost always presented in experience as tied to specific objects, e.g., the blue of a blue chair, so in that sense it’s fair to say that it’s a public fact about the chair that it’s blue, even though I can’t specify a private fact about blueness as a quale. I know the chair is blue even though I don’t know any private facts about blue. And again, I don’t see my experienced blue as an illusion.

    “All those negative reactions stem from my failure to understand why we should bother with the subjective aspect of qualia beyond trying to figure out the (presumably physical) mechanisms behind their emergence – i.e., the facts not about them per se but about why and how we experience them.”

    Agreed: we’re trying to figure out the why and how of PE. As to whether we’ll get a mechanical, causal, emergentist explanation of it, I’m dubious, since those types of 3rd person explanations typically involve public objects at all stages from initial states of affairs to outputs. But PE is categorically private, so it’s hard for me to see how it can be an output of a mechanism that *is* in the public domain; see “Options for explaining consciousness,” http://www.naturalism.org/appearance.htm#part2 . Because of this, I suggest we try for a non-causal, non-mechanistic explanation based on evidence that consciousness is associated with representational capacities, http://www.naturalism.org/appearance.htm#part5

  43. Tom, how can you say that “the fundamental laws/entities of physics don’t supervene on anything”? Don’t they supervene on the internal representations of the human brain that invents these laws and entities? We believe that they exist independently of any cognizing creature, but because scientific understanding is always provisional, we can only assert that the so-called fundamental laws are our best current model of physical reality.

  44. Arnold: “Tom, how can you say that ‘the fundamental laws/entities of physics don’t supervene on anything’? Don’t they supervene on the internal representations of the human brain that invents these laws and entities? We believe that they exist independently of any cognizing creature, but because scientific understanding is always provisional, we can only assert that the so-called fundamental laws are our best current model of physical reality.”

    We *represent that* the fundamental laws of nature don’t supervene on anything since *in our model of reality* they are fundamental – that on which everything else depends in 3rd person explanations and descriptions. In our model of reality, the fundamental laws don’t supervene on our brains, rather our *model of reality* supervenes on our brains, and thus we understand our model to be fallible, as you say. The idea of a representation-independent reality is built into the very idea of representation itself, and along with it the idea that our models are better or worse representations of that reality.

  45. Tom, I agree that the idea of representation-independent reality is built into the concept of transparent representation. But even the proposition that “the fundamental laws of nature don’t supervene on anything” is itself a brain model of an unexplained reality. It asserts something (e.g., the fundamental forces), but is not an explanation of what it asserts; i.e., there is no logical derivation. It seems to me that justification for proposing the sheer existence of such fundamentals is a pragmatic one; they enable explanation and prediction of things that interest us. As far as I can see, physics is a product of biology and is absolutely constrained (supervenience?) by the structure and dynamics of the human brain.

  46. Tom –

    “I wouldn’t call phenomenal experience (PE) an illusion since it’s undeniably real.”

    I have been rather surprised at the number of people who use that argument. Perhaps I’m missing something, but to me it seems obviously bogus. After all, the definition of “illusion” is:

    Something that deceives by producing a false or misleading impression of reality.

    It’s admittedly tricky to put the idea into clear language, so perhaps we’re actually in agreement but just using different descriptive metaphors. I’m not arguing that we don’t have PE – the sense of watching a “movie” of the ego-centered environment in the Cartesian Theater and acting in response – only that all of that is an illusion. I speculate that as language users, we have an ability to verbally describe the content of whatever models of the environment reside in whatever processors there are, and that ability becomes manifest via the illusion of PE.

    The how and why of that illusion is, of course, TBD. And while it is a fascinating problem, I don’t see that it warrants anything like the borderline religious devotion it is accorded as long as no one can even offer viable speculations about the why, never mind the how. Anyway, since we seem to agree that whatever else PE is or isn’t, it isn’t causal, we can probably move on with no loss.

    “Although the conscious subject is not in an observational, epistemic relation to experience (a point Arnold makes as well), it’s still the case that as organisms we observe our environment”

    Although I think I understand the distinction you want to make, the language just doesn’t work for me. Even ignoring that I don’t know what it would mean to be “in an observational, epistemic relation to experience”, by distinguishing between the stimulus-response processing that doesn’t involve PE and the PE itself we have created an ambiguity in all visual metaphors – which part of the dichotomy does “observing” refer to? Consequently, describing either part using such metaphors seems unnecessarily confusing, since not everyone (apparently, hardly anyone so far) is on board with that dichotomy. Although it’s somewhat awkward to say it without them, it seems well worth the extra effort if doing so avoids such confusion. For example, although I agree with the sentiment of:

    “we have a 1pp on the world, that is, we observe it from an egocentric spatial and temporal perspective tied to our bodies.”

    the visual metaphors undesirably mix the “real” stimulus-response processing and the “illusory” PE while adding nothing to a description purely in terms of sensory inputs, environment modeling, and resultant outputs. If PE isn’t causal, what is gained by language that implies that it is, or at least causes readers mistakenly to infer that it is?

    “it’s a public fact about the chair that it’s blue, even though I can’t specify a private fact about blueness as a quale. I know the chair is blue even though I don’t know any private facts about blue.”

    I think what you’re addressing here is what I tried to address (even by my standards, quite ineptly – mea culpa) in comment 31 above. My problem is with the word “fact”, which for me has to do with knowledge. We can “know facts” about some things – i.e., make assertions about the thing that we can justify within a specified community. As you note, one can assert “That chair over there, viewed from here in this ambient light, appears to be (is) blue” – “appears to be” or “is” depending on the level of confidence in the assertion – and give reasons for believing that assertion to be “true” based on consensus within the community about information and/or observations accessible to everyone in the community. Members of the community can also discuss their own PEs of the chair, but no one in the community can “know facts” (in the stated sense) about any individual’s PE, including their own, because there is no such commonly accessible basis for justifying assertions about that PE.

    Does that capture your position?

  47. It seems to me that the lack of a clear definition of what counts as a causal agent in the consciousness arena, and of how cause-effect chains work as drivers of human behaviour (understanding the decision-making process), is a major impediment to understanding the role that phenomenal experience plays in our existence.

  48. Charles, in #52:

    “which part of the dichotomy does “observing” refer to?”

    In talking about observation, we don’t need to suppose PE is involved, only that a representational system is updating internal representations of what’s external to it. We can usefully talk about a non-conscious system having an egocentric spatial and temporal perspective on the world that helps to control its behavior, see http://www.scholarpedia.org/article/Self_models#Self-models_in_machines In saying *we* have such a perspective I’m not implying (or at any rate don’t mean to imply) that PE plays a causal role in controlling behavior.

    “Members of the community can also discuss their own PEs of the chair, but no one in the community can ‘know facts’ (in the stated sense) about any individual’s PE, including their own, because there is no such commonly accessible basis for justifying assertions about that PE. Does that capture your position?” Yes, that’s about right.

  49. Arnold in 51:

    “As far as I can see, physics is a product of biology and is absolutely constrained (supervenience?) by the structure and dynamics of the human brain.”

    I’d say that as part of our conceptual, scientific model of reality, physics is constrained by intersubjective observational evidence, and in that model biology is constrained by the (observed) fact that organic systems are completely composed of entities described by physics. Of course the brain is what permits us to observe and conceptualize as individuals, but it seems to me the intersubjective, conceptual model of science isn’t directly constrained by the structure and dynamics of our brains, since presumably more or less the same model (mathematically expressed, for instance) would be arrived at by creatures or systems with very different sorts of internal architectures.

  50. Tom: “… but it seems to me the intersubjective, conceptual model of science isn’t directly constrained by the structure and dynamics of our brains, since presumably more or less the same model (mathematically expressed, for instance) would be arrived at by creatures or systems with very different sorts of internal architectures.”

    This doesn’t seem at all obvious to me. For example, the brains of the great apes have internal architectures that are different from ours, but with many structures similar to those humans have. Yet we certainly do not expect an orangutan to arrive at a conceptual model of science, to say nothing of a model similar to ours. Even among humans, the structure and dynamics of some brains result in the invention of scientific models, while others generate no such models.

    This is not to say that intersubjective communication is not important in the development of our scientific models. But in the final analysis what is communicated in each of our phenomenal worlds is absolutely constrained by the particular structure and dynamics of each brain.

  51. Arnold, re 56, I didn’t mean to suggest that any type of creature or system would come up with science, only that the content of scientific models – responsive to how reality works – doesn’t depend on what sort of (sufficiently smart) system is conducting the science. At least that’s what I suppose: that in conceptually modeling reality, constrained by intersubjective evidence and assisted by technology, we are to at least some extent transcending the human sensory and perceptual idiosyncrasies that get expressed in our phenomenology.

  52. Arnold –

    I think I understand the point you are making, but it might help in reaching agreement if we back off from focusing on physics “knowledge” (the word I prefer due to the implicit absoluteness of “facts”) or biology knowledge and think about the acquisition of knowledge in general.

    What Tom and I agree on (per the last paragraph of his comment 54) – and with which it seems that you agree as well – is essentially that knowledge is not absolute but contingent. One way of acquiring knowledge is via the process described in the last paragraph in my comment 52, the process Tom describes as the “intersubjective 3rd-person” methodology of science. (And what Rorty meant by his much maligned – and typically misunderstood – quip “Truth is what your peers let you get away with saying.”) It appears that you are applying that general idea to physics; adding the observation that for humans, intersubjectivity is implemented using the brain, a biological entity; and then “closing the loop” by arguing that this makes physics knowledge dependent, in some sense, on biology. But the essential aspect of intersubjectivity is the process executed among the members of a community, not the detailed structure of those members – which is what Tom is arguing in the last few lines of comment 55 (and the just now appeared comment 57).

    Put another way, while participating in the intersubjective process requires that the members of a community have certain capabilities – ones supported by human brains but not, for example, by orangutan brains – those capabilities don’t necessarily require human brains. It’s easy to imagine non-human entities – perhaps even advanced robots – that could intersubjectively reach consensus on the “truth” of certain assertions and offer reasons for that consensus. Calling the result “knowledge” might be uncomfortable for some, but if the establishment of such “truths” met requirements that we humans generally accept as defining knowledge, it seems that consistency would require that we do so.

  53. Tom and Charles,

    Assuming that the subject matter of science is about the physical world, no entity, no matter how intelligent, can have scientific knowledge unless it first has at least a representation of the volumetric space it lives in — its egocentric surround. This is the minimal requirement for phenomenal experience/consciousness. An infant is conscious but has far less intelligence than he/she will develop during maturation. Bottom line: consciousness is a prerequisite for the growth of intelligence and the acquisition of knowledge.

    Intersubjective consensus is a later social process advancing the formal body of scientific understanding and practice as a human institution. This, of course, requires a collective of brains. The science collective is a knowledge multiplier, and as such, transcends (as Tom has noted) “the human sensory and perceptual idiosyncrasies” of individual brains.

  54. Intersubjective consensus is a later social process advancing the formal body of scientific understanding and practice as a human institution. This, of course, requires a collective of brains.

    I think we are all agreed on this except for limiting the process to a later stage of development (of either individuals or communities) and to science. Sellars argues that:

    … all awareness of sorts, resemblances, facts, etc., in short, all awareness even of particulars is a linguistic affair . . . , not even the awareness of such sorts, resemblances, and facts as pertain to so called immediate experience is presupposed by the process of acquiring the use of language.

    That is, all aspects of mental development involve achieving intersubjective consensus using language, not just the scientific aspect. Of course, this somewhat radical contention can be disputed, but it is incumbent on those who do to provide an alternative argument at the same level of sophistication as Sellars’ argument.

    The science collective is a knowledge multiplier, and as such, transcends (as Tom has noted) “the human sensory and perceptual idiosyncrasies” of individual brains.

    And this is roughly Sellars’ contention, only not limited to “the science collective”.

    I continue to suspect that the disconnect we’re experiencing is due to failing to distinguish process and content. I see the process of achieving intersubjective consensus within a community as being largely independent of the community. But the results can vary dramatically – and often do – for some subject matters and among some communities (e.g., political or religious). The features that distinguish the science community (we hope) are that consensus forms around evidence whose interpretation is relatively consistent within the community and that consensus adapts as new evidence arises.

  55. I believe consensus matters little where science is concerned. Consensus has to do with religion. If science and good scientists have done anything, it is to blow consensus away. In science there is no need for consensus; there are just models that fit data. New, better models replace old ones through an objective assessment and peer “review” process, with no room for “consensus”.

    Science went beyond human sensory and perceptual idiosyncrasies a long time ago, unless we consider modern scientific instrumentation an extension of the human senses. Even more, science has gone beyond human intuitive thinking mechanisms; just have a look at quantum or relativistic concepts.

    In this sense, science shares with consciousness studies the problem of tackling fundamental epistemic and ontological problems. What is a particle? What is a string? What are qualia (*)? Why is there anything? Etc.

    Popper’s falsifiability theory puts science in the right place, far away from relativism (consensus).

    (*) I agree with Shankar in placing the question “what are qualia?” at the same level as “how do qualia interact with the brain?”

  56. Charles, re your #60, I disagree with Sellars’ contention that “all awareness is a linguistic affair”. Effective language necessarily involves semantics as well as syntax, and semantics requires meaningful referents of the linguistic terms (the symbol grounding problem). As far as I can see, the only way to get proper linguistic referents is to parse relevant entities out of the occurrent global phenomenal world. In other words, our primitive awareness is a prerequisite for our use of language. An internal representation of the world around us is a necessary precondition for perception and the development of language.

    You might be interested in reading Ch. 3 “Learning, Imagery, Tokens, and Types: The Synaptic Matrix”, and Ch. 6 “Building a Semantic Network”, in *The Cognitive Brain*.

  57. Arnold –

    I think you are correct in attacking the word “awareness” – and when quoting that passage from Sellars, I thought it an unfortunate choice of words. But I don’t think that slip alters his essential point.

    In Chapter 3 you ultimately get to the point where your “synaptic matrix” system can recognize an image of the object named “Duffy” and respond with a vocalization of that name. And in that sense, the system perhaps could be described as having “awareness of Duffy”. But while the implementation is impressive, conceptually the system seems just a significantly more complex analog of the classical thermostat example. One could certainly design a digital thermostat to “vocalize” the detected temperature’s numerical value as it becomes equal to a stored integer – behavior that I assume the phrase “reliable differential responsive disposition” (RDRD) and its various shortened forms are meant to include.

    But I’m pretty sure Sellars wouldn’t agree that RDRD is “awareness” in his intended sense. In the “Duffy” example, the synaptic matrix does appear to learn to match an immediately detected image to a previously detected and stored image. And having done that, one can certainly imagine that in principle the system could learn to produce a corresponding vocalization. However, I take the focus of Sellars’ claim to be those learning processes and, perhaps more important, the additional process of acquiring the ability to justify a learned pairing of a detected and recognized object with a corresponding word. And those processes, collectively, are what I think Sellars meant by “a linguistic affair”.

    What I take to be the fundamental question of interest has to do with the very beginning of that “affair”: what, if anything, in the process is foundational, i.e., “given”. My impression is that Sellars’ answer – little, if anything – remains a matter of controversy. His general idea (as I understand it, not necessarily correctly) is that acquiring the abilities mentioned above is a multifaceted, integrated, concurrent process that extends over years. For better or worse, consistent with that understanding I’ve begun dipping my toes in the unfamiliar waters of developmental psychology, hoping to find relevant evidence. Any inputs along those lines – pro or con – would be welcomed.

    The abilities displayed in Chapter 6 do seem impressive, but unless I misunderstood the process, it isn’t occurring at a point one could reasonably call “prelingual” (the training inputs are sentences, yes?). And if there’s a justification aspect, I missed it. So, from my perspective the relevance isn’t obvious.

  58. Charles, you make my point when you see that the object recognition processes of the synaptic matrix (ch. 3) and the linguistic process of the semantic network (ch. 6) do not capture our sense of awareness/consciousness. Simply put, the reason this is the case is that outputs from these brain mechanisms do not directly provide a representation of *something somewhere* in relation to our self in our brain’s representation of egocentric space. It is only after neuronal images on the mosaic arrays in our synaptic matrices are projected back into retinoid space that they are experienced in their proper locations with respect to our self in our phenomenal world. But notice that before we can sense and parse objects and events out of our phenomenal world, before they are processed in our sensory modalities, we must first have an activated retinoid system to present us with our occurrent phenomenal world. Subjectivity/consciousness comes before cognition.

    If you look at Fig. 8 in my paper “Space, self, and the theater of consciousness”, you will see the Z-planes of retinoid space above a dotted line. Below the dotted line are boxes labeled by sensory, cognitive, motivational, etc. functions. The retinoid system is synaptically linked to all the preconscious mechanisms under the line via neuronal tokens of the self (I!). Damage to mechanisms below the dotted line will result in cognitive impairment, but consciousness will remain intact. Damage to the Z-planes above the line will result in impairment of consciousness (e.g., hemispatial neglect). Interruption of the synaptic link between retinoid space and I! will result in loss of consciousness. I conjecture that the loss of consciousness that results from a sharp blow to the head is caused by just such a functional synaptic disconnection between retinoid and I!.
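    As a rough functional rendering only (toy code, not the model itself; the component names and outcomes simply follow the text above), the damage predictions can be stated like this:

        # Toy rendering of Fig. 8's damage predictions.
        # Purely illustrative; component names follow the text above.

        def status(z_planes_ok, link_to_I_ok, preconscious_ok):
            if not link_to_I_ok:
                return "loss of consciousness"    # e.g., a sharp blow to the head
            if not z_planes_ok:
                return "impaired consciousness"   # e.g., hemispatial neglect
            if not preconscious_ok:
                return "cognitive impairment, consciousness intact"
            return "normal"

        print(status(True, True, True))    # -> normal
        print(status(True, True, False))   # -> cognitive impairment, consciousness intact
        print(status(False, True, True))   # -> impaired consciousness
        print(status(True, False, True))   # -> loss of consciousness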

  59. Arnold –

    I have just read “Space, self, and the theater of consciousness” and now have a much better understanding of what you’re proposing. Even more than before, I think any disconnects are more vocabulary and emphasis than concepts.

    Your speculation in Chapter 8 about beliefs as stored proposition tokens paired with “I!” tokens marking the “true” ones was especially interesting given my own speculations about language processing, and I’d be interested in your reaction to the following:

    http://onthehuman.org/2010/11/does-consciousness-outstrip-sensation/comment-page-1/#comment-3541

    Knowing almost nothing about brain physiology, I couldn’t address the feasibility of implementing my idea, so I’m pleased to learn that it is consistent with your RS.

    Back to the discussion at hand. Notwithstanding my describing the “self” as an illusion, I have no problem with your concept of I!. From the beginning of my forays into the mysterious world of the mind I have assumed that there had to be a model of the ego-centric surround (I’ve been using the Gibsonian term “environment”, which I assume is roughly the same idea), and it’s clear that we do have a sense of being an entity with a perspective. So, I’m happy to accept the concept of I! so long as it is agreed that it doesn’t literally “see” anything.

    The general idea that there is a “representation of *something somewhere* in relation to our self in our brain’s representation of egocentric space” is also perfectly consonant with my view subject to two qualifications. First, you speak of object location “with respect to our self in our phenomenal world”. I understand you to be using “phenomenal world” in a sense that is related to the “mental movie” (what I have been calling “phenomenal experience”) in that the latter is derived (through a currently unknown mechanism) from the former. I just want to be certain that from I!‘s perspective, that “phenomenal world” continues to exist even if the “mental movie” ceases. You also speak of qualia as “features … parsed out of our … phenomenal world”. For me, the “mental movie” is constituted by qualia in that sense. Agreed?

    If all that is correct, it appears that we need a new name for the “mental movie”. And if we are calling the surround as seen from I!‘s perspective the “phenomenal world”, then in order to avoid confusion, any candidate should not include the word “phenomenal”. “Mental image” appears here and there. Better suggestions?

    Second, you say:

    … we must first have an activated retinoid system to present us with our occurrent phenomenal world. Subjectivity/consciousness comes before cognition.

    I have no problem with the first sentence as long the asserted temporal priority is a processing priority in the functioning of a brain well along on the developmental path. At that stage, it seems clear that your stated priority is required. My previous comments disputing such priorities relate only to the early developmental stage in which the child has minimal mental capability and the temporal priority of the development of mental capabilities seems poorly understood, if at all.

    The problem I have with the second sentence is my continuing doubts that throwing in “conscious”, “consciousness”, et al., helps, and suspicions that it may even confuse. I think I get your point, but I’m not entirely sure. I would put it something like this:

    In order to detect an object in the phenomenal world as being of a certain kind, we need to be able to generate a representation of the contents of that world, extract representations of individual objects, and compare those to stored representations used as templates for identifying objects of particular kinds.

    If that’s more-or-less right, what does describing any of that as “conscious” buy us? And if it isn’t right, then I misunderstood the whole sentence – i.e., using “consciousness” didn’t help. QED.
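    To make the restatement above concrete, here is a bare-bones sketch; every detail (the feature sets, the matching rule) is invented purely for illustration:

        # Bare-bones illustration of: generate a representation of the
        # world, extract object representations, and compare them to
        # stored templates. Every detail is invented for illustration.

        stored_templates = {"chair": {"legs", "seat", "back"},
                            "table": {"legs", "flat_top"}}

        def extract_objects(world_representation):
            # Pretend segmentation: the "world" is already a list of
            # per-object feature sets.
            return world_representation

        def classify(features):
            # Pick the template sharing the most features with the object.
            best = max(stored_templates,
                       key=lambda name: len(stored_templates[name] & features))
            return best if stored_templates[best] & features else "unknown"

        world = [{"legs", "seat", "back", "blue"}]
        for obj in extract_objects(world):
            print(classify(obj))   # -> chair

    Notice that nothing in the sketch needs the label “conscious” to do its work – which is exactly the question I’m putting to Arnold.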

  60. Charles:

    1. The core self (I!) is the perspectival origin of all phenomenal experience but is *not* the observer of experience, so I! doesn’t see anything.

    2. There is a real but directly unknowable world of which we and our brain’s retinoid system are a part.

    3. Retinoid space, when activated, is normally filled with sensory patterns/images from all sensory modalities that have detected and imaged objects and events around us. This is our occurrent phenomenal world. This is consciousness.

    4. How does your “mental movie” differ from the content of the phenomenal world?

    5. You say:

    “in order to detect an object in the phenomenal world as being of a certain kind…. If that is more-or-less right, what does describing any of that as ‘conscious’ buy us?”

    What you describe in this comment are the processes of non-conscious mechanisms, not conscious mechanisms. These are the processes below the dotted line in Fig. 8. Consciousness is a separate brain process that enables the human kind of cognition that you speak of.

  61. Arnold –

    1. Agreed.

    2. I wouldn’t use some of that language, but I think the general idea is fine.

    3 and 5. Just as you don’t understand “mental movie”, I don’t know in what sense you mean “images”. And I don’t know what it means to “image” an event. I envision sensory input sequences being processed to create neurologically-based data structures which processors like your RS can work on to do things like detecting objects, movement, and so on, all in preparation for responsive action. And as I said before, I assume that the results of that processing may be put together into some sort of model of the ego-centric surround.

    I assume that “the occurrent phenomenal world” is the “ego-centric surround” minus the self – I! – but I’m not sure. Which is the problem with adding new terminology, which everyone seems inclined to do. In an arena in which even commonly used words often seem to have no clear and fixed definitions, neologisms clearly can’t help. Similarly with supposed explanations of “consciousness”, which necessarily are no more than new candidate definitions since there appears to be no consensus on that word’s meaning.

    4. This question confirms my suspicion that you are using “phenomenal” in a different way than I, and I think, Tom. We are using “phenomenal experience” to refer specifically to the “visual image in the mind” of the egocentric surround. And we are distinguishing that from a “representation” of that surround that does not necessarily involve any associated visual imaging – it’s essentially just a data structure that can be processed to support many – perhaps all – of the functions “below the line” in your Figure 8. The point of the distinction between a “representation” – in the data structure sense – and a “phenomenal experience” – in my sense – is that the latter, which may well be derived from the former, doesn’t appear to be necessary. Which raises the question “why does it occur?” (As I understand it, “how does it occur?” is the so-called “hard problem”.)

  62. Charles, thanks for your detailed reply. In matters like this, language is always a bit slippery (see “The Pragmatics of Cognition” in TCB, Ch. 16).

    1. The occurrent phenomenal world always includes the core self (I!) as well as the egocentric volumetric surround.

    2. An image, in my theoretical model, is a spatiotopic neuronal analog of some particular sensory pattern. There are two kinds of images: (a) unconscious images, which consist of patterns of excitation on the mosaic arrays of the synaptic matrices (not egocentric), and (b) conscious images, which consist of patterns of excitation on the Z-planes of egocentric retinoid space (the phenomenal world). These conscious images are evoked via direct recurrent axonal projections from the mosaic arrays of the sensory modalities. The source of mosaic-array excitation might be via direct sensory input, or via recalled input from the memory store of the imaging matrices (see TCB, Ch. 3).

    You wrote:

    “We are using ‘phenomenal experience’ to refer specifically to the ‘visual image in the mind’ of the egocentric surround.”

    In my theoretical model, a phenomenal experience can be a spatiotopic retinoid pattern of excitation from ANY sensory source. For example, if you hit your left thumb with a hammer, the pain in your thumb will be represented wherever your left thumb is within the egocentric Z-plane coordinates of your body envelope. If you move your left arm, the pain will move accordingly in your retinoid space. This is how you are able to apply a bandage if your thumb is bleeding.

    Phenomenal experience is needed because it is an *egocentric* “data structure”. Without such a structure we would not experience the world we live in. Incidentally, I have asked many investigators if they know of any artifact that has a volumetric analog of space around a fixed locus of origin. So far, no examples have been given.
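    As a crude computational analogy only (no claim that the brain works like this, and the sizes and coordinates are arbitrary), an egocentric volumetric “data structure” with a fixed locus of origin might be caricatured as:

        import numpy as np

        # Crude analogy: an egocentric volumetric space as a 3-D array
        # whose fixed central index plays the role of the origin (I!).
        SIZE = 9
        retinoid = np.zeros((SIZE, SIZE, SIZE))
        ORIGIN = SIZE // 2   # same central index on every axis

        def place(x, y, z, intensity=1.0):
            """Register *something somewhere* relative to the fixed origin."""
            retinoid[ORIGIN + x, ORIGIN + y, ORIGIN + z] = intensity

        # Pain in the left thumb, at the thumb's current egocentric location:
        place(-3, 0, 1)
        # The arm moves: the event is simply re-placed, so the pain "moves"
        # in the space -- which is what lets you aim the bandage.
        retinoid[:] = 0
        place(-2, 1, 1)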

  63. Arnold –

    Ok, I think we’re getting closer.

    Although I understand that “sensory” isn’t just “visual”, for the moment let’s focus on vision and just think functionally – i.e., ignore implementation – about what happens when a person’s eyes are receiving light from a field of view (FOV).

    Some functions in the person’s brain are: processing visual sensory input to assess activity in the FOV, updating models of the environment, planning responses, etc. But there is also a function that causes the person to have the (illusory) experience of “seeing a picture or movie” of the contents of the FOV. That illusory experience is what a lay person is describing when they say “I see that house over there”. They obviously aren’t thinking about ego-centric surrounds, processing, etc.; they’re just describing the scene before them from the perspective of “the mind’s eye”. That “seeing” is what I mean by “phenomenal experience”. (Although the expression is actually intended to include all sensory modalities and “what it’s like” effects.)

    I assume the experience of “seeing a picture/movie” isn’t an explicit feature of your RS, i.e., isn’t part of what you call the “phenomenal world” that is experienced by I!. But in either case, that experience is central to these discussions and shouldn’t be subsumed under an all-encompassing term like “phenomenal world”. In order to avoid just the confusion we’re dealing with, it needs its own name.

  64. Charles,

    The folk notion of “seeing a picture/ movie” *is* an explicit feature of the retinoid system. It is just experiencing the brain’s neuronal activity in retinoid space from the inside; i.e., the first-person perspective. I defend this claim by showing that the structure and dynamics of the retinoid system successfully predict novel “seeing” experiences. Empirical validation of the theoretical model. Science can do no more. The challenge is for a competing model to explain/predict the same empirical findings.

  65. Well, Arnold, then I don’t understand why you are wasting time with the likes of me instead of preparing your Stockholm acceptance lecture, since my understanding is that your claim is essentially that you have solved the so-called “hard problem”!!

    I should emphasize that I am merely meaning to be humorous, not dismissive. I have never bought the “hard problem” and wouldn’t be at all surprised to hear that someone has solved it. If it’s someone I’ve interacted with, all the better.

    So, let me move on to what I’m really interested in. My conjecture is that language plays a major, underappreciated role in all so-called “cognitive” functionality, in particular in creating that “picture/movie” folk notion. Does your RS confirm or refute that conjecture?

  66. Charles,

    As I see it, there can be no language in the sense you use the word without a retinoid system. However, our direct “moving picture” of the world does not depend on language. But inner speech and the images evoked during inter-subjective and intra-subjective communication do depend on the mechanisms of the semantic networks. Our construction and contemplation of the images in possible worlds and our contingency-based planning depend on language. In TCB, Ch. 6, I introduce the notion of *self query* in semantic networks. This is a very important cognitive function on which the images of science and all social institutions are critically dependent.

  67. Charles,
    I have just started to catch up with the conversations going on here. Now it appears to me that you understand the “hard problem” quite well. The way you explained the second component in 4. of [67] demonstrated the point convincingly. How come you claim in [71] that you have never bought the “hard problem”? But you have a “phenomenal experience” problem of your own, don’t you? You sound like you are not satisfied with something, and that is one of the things that keeps you going on this “consciousness” topic. Could you elaborate on what it is?

    Regarding the role of language in cognitive functionality, if I understand what you meant by cognitive functionality correctly (always a big IF), I believe language plays a very important role: a person who has the appropriate language can recognize a certain attribute that someone who lacks the language cannot.

    Jill Bolte Taylor describes in her book “My Stroke of Insight” that her world was black and white after her stroke. When she was trying to complete some easy jigsaw puzzles as an exercise during her recovery, she was having great difficulty. Color was not something she perceived or could take advantage of (knowingly). But when her mother gave her a hint that she could use color to match the pieces, all of a sudden, her world was in color again. How did that happen?

    One possibility is that after the stroke, she lost the concept of color, and so she was not in a position to perceive it. After her mother’s reminder, she instantly relearned the language/concept (perhaps making an instant connection with her past memory) and suddenly this attribute of the world became something she could notice. So, her perception of the world changed. Before that, since she did not pick up any color attribute, she claimed no experience of color. That’s why her reconstruction of her memory before that point was black and white. What else could she say? She did not remember seeing any color! So I suspect the statement in the book must have been a reconstructed statement.

    We have other examples of this sort as well. If you travel overseas and are holding a stack of foreign currency, and if there is a bill from a third country when you are going through the stack, you may not notice it. They all look alike. But if you are holding a stack of money from your own country, I bet you will notice the foreign bill in it. So, if you were asked whether you experienced seeing that different bill, you may not recall the experience in one case, but you will in the other.

    For some primitive tribes that do not have vocabulary for numbers higher than 3 or 4, nine apples or eight apples make no difference: both 8 and 9 are “many”, and they are the same. Language allows a person to recognize certain things which would otherwise not be recognized, like the difference between “8” and “9”.

    However, I don’t believe language determines one’s phenomenal experience. It probably affects the quality of the experience by affecting the number of attributes that you can detect. But there are certain feelings that I can feel which I have never heard other people describe. I can label them with names if I have to, but I am sure no one will understand.

    Language in this case is irrelevant. I cannot avoid feeling the pain when a hammer lands on my toe even if I have been raised by machines in a non-linguistic environment, and so have no word for it. So, I too object to Sellars’ claim that “all awareness is a linguistic affair” unless “awareness” is used in a very restricted sense, much like the self-initiated ability to differentiate certain attributes, to be aware of certain special attributes.

  68. IMO, language only plays a crucial role in consciousness as far as abstract concepts are involved. The problem is that it’s not that easy to define what an abstract concept is. In many cases abstract concepts are just compounds of a set of underlying non-abstract concepts, and as such can be understood.

    In a past page (can’t find it, as usual), I had an interesting discussion with Lloyd about this topic; I remember that he (being a linguistics pro) made quite relevant comments.

    I have many times tried to achieve non-verbal reasoning, but I have never succeeded.

    I have serious problems understanding the basic role of language in consciousness and awareness, mainly because I still haven’t got a clear idea of what meaning really entails.

    …and the Word became flesh and made his dwelling among us. John 1:14

    Maybe that is what consciousness is: the interaction between pure abstract, conceptual thought (language) and matter….

    After all, the only way we have to communicate our inner world, our phenomenal experience, is through language. We don’t know what redness is but we have a word for it, and we seem to have a consensus about it. Language, at least to some extent, breaks the spell of subjective ineffable qualia… (very little though).

    Note: sorry for the loose and diffuse comment.

  69. Vicente,
    If someone is playing Tetris, he is probably not thinking in “language” in the linguistic sense. All logical deductions are done in imagery. There are many types of thoughts that are of this kind.

    Playing chess is another example. Deep Blue beat the human champ without mastering English, or for that matter, any human languages, except the “language of chess”. But is that a language? I doubt it because it is not designed for communication.

    I believe there are a lot more non-linguistic conscious thoughts going on in our heads. Language is a tool our collective brains invented to communicate with each other, and as a by-product, depending on the design of a particular language, it enhances the individual brain’s capability to carry out more detailed differentiations and cognition, such as recognizing the 60+ different types of snow for the Eskimos. But consciousness precedes language, I think.

  70. Kar Lee, I agree with you. I was referring to logical reasoning. Of course there are many ways of non-verbal thinking.

    Now you’ve made me think: is Tetris a logical game?

  71. Arnold (72) –

    I have no disagreements that aren’t just unsubstantiated guesses.

    Thanks for guiding me through your book.

  72. KL (73) –

    How come you claim … that you have never bought the “hard problem”?

    It’s the aura of mystery attendant to that phrase that I don’t buy – the ontological implications, in what precise sense qualia are “ineffable”, “irreducible”, etc.

  73. “1. The occurrent phenomenal world always includes the core self (I!) as well as the egocentric volumetric surround.”

    Wow!

    “2. An image, in my theoretical model, is a spatiotopic neuronal analog of some particular sensory pattern. There are two kinds of images: (a) unconscious images, which consist of patterns of excitation on the mosaic arrays of the synaptic matrices (not egocentric), and (b) conscious images, which consist of patterns of excitation on the Z-planes of egocentric retinoid space (the phenomenal world). These conscious images are evoked via direct recurrent axonal projections from the mosaic arrays of the sensory modalities. The source of mosaic-array excitation might be via direct sensory input, or via recalled input from the memory store of the imaging matrices (see TCB, Ch. 3).”

    This is an example of the never-ending attempt to turn syntax into semantics that AI people seem to love. “Seeing an image” needs no more language than that, “seeing an image”: its entire semantic content is easily conveyed from one human being to another, unless that person is blind. Trying to turn a subjective mental experience into a purely syntactical representation, using the usual plethora of geometric mathematical objects – matrices, projections, mosaics, etc. – trying to turn it into data – never answers the hard problem, because the hard problem is “why are mental experiences more than just data?”

    And if you tell me “because it’s an illusion” I’ll get your postcode, track down your house and blow you up with some TNT in purely “informational” form.

    (Just kidding)

  74. John Davey (#79): “…the hard problem is ‘why are mental experiences more than just data?’”

    The neuronal activation patterns in the brain’s putative retinoid mechanisms and in the synaptic matrices are not data; they are biophysical events. Only when we measure such events do we collect data. Don’t confuse the two processes.

    The scientific question remains: Can it be demonstrated that the structure and dynamics of a theoretical brain model (3rd-person perspective) generate proper analogs of conscious content (1st-person perspective)? The problem is one of neurobiology and phenomenal experience — not AI.

  75. Arnold
    “The problem is one of neurobiology and phenomenal experience — not AI.”

    … in which case, Arnold, I agree with you. I presume by ‘analogs’ you mean what are generally termed the “neural correlates of consciousness”. I look forward to any success in this area!

  76. John D,

    Analogs of conscious content are a special subclass of the “neural correlates of consciousness” because, rather than being simple correlates, they are subject to the more stringent constraint of exhibiting a similarity relationship to salient features of their phenomenal correlates.

  77. Arnold,

    Could you give us an example of such a similarity relationship?

    Similar in terms of what?

    – Similarity with a color?
    – Similarity with a sound pitch?
    – etc.

  78. Vicente: “Could you give us an example of such a similarity relationship?”

    If we take shape and motion as phenomenal features, for example, we can ask whether a candidate brain mechanism has the structural and dynamic properties to generate proper analogs of these features. An excellent example is given by my SMTT experiment where it is demonstrated that the putative retinoid mechanism constructs (as predicted) a vivid phenomenal experience of an object moving in space without a corresponding image projected to the retinas. See *Seeing-more-than-is-there*, pp. 324-325, here:

    http://people.umass.edu/trehub/YCCOG828%20copy.pdf

  79. Arnold

    “…they are subject to the more stringent constraint of exhibiting a similarity relationship to salient features of their phenomenal correlates.”

    similar in what sense?

  80. John D (#85): “similar in what sense?”

    Neuronal analogs of conscious content are similar to instances of conscious content in the sense of having characteristics in common.

    For example, in the SMTT experiment (see #84), the spatiotemporal pattern of neuronal discharge in the theoretical model of the brain’s putative retinoid space was triangular in shape and moved laterally back-and-forth like the phenomenal correlate reported by each subject. In other words, the objectively measured indices of conscious content were successfully predicted by the structure and dynamics of the hypothesized retinoid system under the conditions of the SMTT paradigm.
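    For readers who want the gist of the paradigm, the temporal-integration idea can be caricatured in a few lines of toy code (a caricature only – not the retinoid mechanism, and the shape and sizes are arbitrary): at each instant only a narrow slice of the figure reaches the “retina”, yet accumulating successive slices yields the whole shape, more than is there at any moment.

        # Caricature of seeing-more-than-is-there: at each instant only a
        # one-column slit of a triangle is "seen", but accumulating the
        # successive slit samples yields the whole figure.
        H, W = 7, 13
        triangle = [[1 if abs(c - W // 2) <= r else 0 for c in range(W)]
                    for r in range(H)]              # the hidden stimulus

        percept = [[0] * W for _ in range(H)]       # integrated "experience"
        for t in range(W):                          # the figure sweeps past the slit
            for r in range(H):
                percept[r][t] = triangle[r][t]      # only the slit column arrives

        for row in percept:
            print("".join("#" if v else "." for v in row))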

  81. Arnold,

    the objectively measured indices of conscious…

    That is what a correlate is: you measure indices, not conscious content. In the particular case of the SMTT, which is basically a geometrical/dynamical construction, you get a similar geometrical/dynamical behaviour from the retinoid model. I am afraid to say this is just a coincidence, due to the geometrical nature of the setup. It cannot be generalized to the case of similarity with qualia, and even in this SMTT case the adjective “vivid” is a bit daring, I would say.

    Nevertheless it is an extremely interesting experiment and a success for the model.

  82. Arnold –

    I don’t know if you are aware of this, but what you have done appears to be analogous to, and an extension of, what Sellars has his mythical hero “Jones” do. As I understand it, Jones creates a theoretical model of “sense impressions” (AKA “qualia”, “phenomenal experience”, “conscious content”, etc) for the purpose of facilitating discussion of them by his fellow “Rylean ancestors”. Jones models sense impressions on objects, hypothesizing that they “are similar … in the sense of having characteristics in common.” From that perspective, it appears that you have taken the conceptual framework of Jones’ project a step or two further in that you have proposed a model and an implementation of that model, and furthermore actually implemented a primitive version of it.

    It is important to emphasize that although the Jones/Trehub project may facilitate conversation about the mechanics of sense impressions/qualia, I assume that you agree that neither explains the mechanism by which the neurological process modeled manifests itself before “the eye of the mind”. However, I’m not really sure because I have found your statements on that issue somewhat confusing, so you might want to try again to clarify what you are and – more important, what you are not – claiming. And I don’t think doing so in terms of “phenomenal experience” helps unless you explain in detail how that term relates to the term “qualia”, seemingly the preferred term in this forum.

    In an attempt to clarify the distinction I’m making, I’ll try to put it in terms of your SMTT experiment. I assume that there is some neural “processing” that (in a sense) interprets visual sensory input and creates a neurological “representation” of the environment. And I assume that that representation is not restricted to just reproducing what impinges on the retina at a fixed point in time but can (in a sense) integrate sensory input over time and thereby expand the representation to “more than is there” at an instant in time. And I accept that your RS does that, matching (at least functionally) what apparently happens in the subjects’ brains.

    But the subjects in the experiment also “had the phenomenal experience of a triangle”/“saw a mental image of a triangle”/“experienced a triangular quale”. And the mechanism of that is what I’m assuming you are not claiming to explain.

  83. Charles,

    Thanks for your nice analysis of my SMTT claims. I’m not familiar with Sellars’ work. At the conclusion you wrote:

    “But the subjects in the [SMTT] experiment also “had the phenomenal experience of a triangle”/“saw a mental image of a triangle”/“experienced a triangular quale”. And the mechanism of that is what I’m assuming you are not claiming to explain.”

    What I suggest is that *qualia*, per se, exist *only* as special kinds of biophysical events generated by the structure and dynamics of the retinoid system. There are no events that might be called “qualia” that are separate from the egocentric biophysical activity within the retinoid system. If there were such non-physical events, we would be confronted with the scientifically insurmountable problems of dualism. Dual-aspect monism dictates that the 3rd-person formulation of retinoid activity is a description/aspect of goings-on in the brain from the *outside* during SMTT, whereas the 1st-person report of the phenomenal experience during SMTT is a description/aspect of goings-on in the brain from the *inside*.

    The scientific problem is one of gathering objective evidence to test the validity of an explanatory model about conscious content. It is for this purpose that I have suggested the bridging principle of corresponding analogs.

  84. Arnold, I have been following your discussions with interest. Charles has raised the very important question that I have also been trying to raise: the existence of a mental image of a triangle/the experiencing of a triangular quale. To rephrase it in your language, as in:

    “Dual-aspect monism dictates that the 3rd-person formulation of retinoid activity is a description/aspect of goings-on in the brain from the *outside* during SMTT, whereas the 1st-person report of the phenomenal experience during SMTT is a description/aspect of goings-on in the brain from the *inside*.”

    let me rephrase it as: can your model explain why there is such an inside view? Who gets to see the inside? Why do you get to see that one inside no one else can see? What causes the association between “you” and that one particular inside that you are privileged to?

  85. Kar asks Arnold:

    “let me rephrase it as: can your model explain why there is such an inside view? Who gets to see the inside? Why do you get to see that one inside no one else can see? What causes the association between “you” and that one particular inside that you are privileged to?”

    Great questions, which raise an additional question: why suppose there is any internal describing or observing going on such that there would be a “view” of neural operations from the inside as opposed to the outside? It could be that we end up with a categorically private subjective qualitative reality precisely because we’re *not* in an observer-like relation to our own representational processes – Killing the observer, http://www.naturalism.org/kto.htm

    Re Arnold’s statement that “There are no events that might be called “qualia” that are separate from the egocentric biophysical activity within the retinoid system. If there were such non-physical events, we would be confronted with the scientifically insurmountable problems of dualism.” This of course doesn’t count against dualism, only against the ability of 3rd person science to accommodate it, on your view. How we should conceive of qualitative states and explain them in a unified 3rd person philo-scientific account is still an open question, seems to me.

    Funny how discussions often gravitate back to the hard problem, but perhaps not surprising given that there’s no consensus on a solution (that I’m aware of).

  86. Kar and Tom,

    Kar’s rephrasing of my model is misleading when he speaks about a *view* from inside the brain. If one speaks of a view from inside the brain, it can only be in a loose metaphorical sense. I was careful to speak of two separate domains of *description/aspect* of brain events.

    It is important to distinguish between the operation of those brain mechanisms that constitute *observation* of events in the world, in the sense of detection and classification, and the operation of those different brain mechanisms that constitute our phenomenal experience of being at the center of a volumetric surround. These latter mechanisms (the retinoid system) are the mechanisms of phenomenal consciousness — they *experience* the world from an egocentric perspective but they do *not* observe the world.

    For an account relating to this issue see:

    http://people.umass.edu/trehub/where-am-i-redux.pdf

    Tom: “This of course doesn’t count against dualism, only against the ability of 3rd person science to accommodate it, on your view.”

    Yes, science is a pragmatic enterprise. So it counts against dualism within a scientific framework. We are not omniscient, so anything — even dualism — must be considered a possibility. It might be comforting to some but it just doesn’t lead to any useful scientific continuation.

  87. Thanks for the clarification, Arnold. The only remaining discomfort I feel with respect to your project is along the lines of this question from Tom:

    why suppose there is any internal describing or observing going on

    I agree with his concern about “observing”, since any ocular metaphor suggests, even if not intended to, a viewer – the dreaded homunculus. I am, however, inclined to accept “describing”, but with a twist.

    We naturally think of “seeing” our mental image of the environment and then describing it. But that leaves completely unexplained how the mental image arises. So, I suggest considering the possibility of the opposite order of events: perhaps we “describe” verbally the contents of our internal model of the environment (eg, a model like Arnold’s RS) and then, somewhat analogous to “painting by numbers”, create a mental image of that description.

    So, what would it mean to “describe verbally the contents of our internal model of the environment”? Consider the process by which we presumably learn the use of a word. A teacher directs our attention to an object and makes a sound (utters a word). The presence of the object in our FOV creates some distinguishable neural pattern. With repeated occurrences of this event, we learn to associate the word and the neural pattern (actually, a class of patterns) and in time develop the ability to utter the word when some member of that class of neural patterns occurs. Ultimately, we develop this ability for many classes of objects and events in our environment sensed in various modes, ie, we come to be able to “describe” our environment by associating words with its content as represented by recognition of neural patterns in our internal model of it.
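
    As a toy illustration of that associative story, here is a minimal sketch in Python (assuming NumPy; the WordLearner class, the prototype-averaging rule, and the feature vectors are all invented for illustration, not a claim about actual neural mechanism). Repeated pairings of a pattern with a word build a stored prototype; afterwards, the nearest prototype yields the word – description without any image:

        import numpy as np

        class WordLearner:
            def __init__(self):
                self.prototypes = {}   # word -> (running mean of patterns, count)

            def teach(self, word, pattern):
                """One teaching episode: object in view (pattern) + teacher's word."""
                pattern = np.asarray(pattern, dtype=float)
                if word not in self.prototypes:
                    self.prototypes[word] = (pattern, 1)
                else:
                    mean, n = self.prototypes[word]
                    self.prototypes[word] = ((mean * n + pattern) / (n + 1), n + 1)

            def describe(self, pattern):
                """Utter the word whose stored prototype is nearest to the pattern."""
                pattern = np.asarray(pattern, dtype=float)
                return min(self.prototypes,
                           key=lambda w: np.linalg.norm(self.prototypes[w][0] - pattern))

        learner = WordLearner()
        for p in ([1.0, 0.1], [0.9, 0.2]):
            learner.teach("ball", p)
        learner.teach("cup", [0.1, 1.0])
        print(learner.describe([0.95, 0.15]))   # -> "ball", no mental image required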

    Nothing in this process requires a mental image, only the abilities to associate words with neural patterns and to use those words to compose verbal descriptions of occurrences of those patterns. But this seems to make the task of explaining the creation of mental imagery due to sensory stimulation somewhat analogous to the task of explaining the creation of mental imagery due to skillful writing, composing, etc. Which is not to suggest that either is easy to explain, just that the latter seems less mysterious. It is in that sense that I have often referred to the mental image/phenomenal experience/qualia as “illusory”. (And I preemptively note that the reply “Of course they’re not illusory, we all experience them” seems to ignore what seems to me the obviously intended meaning of “illusory”, viz, there is no actual picture/sound/odor anywhere in the brain.)

    How might the process of creating a mental image of the environment relate to creating other types of mental images, eg, visual memories, dreams, and hallucinations? Well, if “remembering a visual scene” were implemented by remembering a verbal description of it, that would account for the first type. The other two types could be the result of stored verbal descriptions that were either badly distorted or completely manufactured from more-or-less random combinations of descriptive components. And what would explain the relative lack of detail and credibility in mental images of those three types as opposed to ones created from sensory input? Perhaps the ability to constantly “refresh” and refine the description using inputs from the immediately accessible, highly detailed, and reliable extended memory store provided by the environment – as proposed by Noë and O’Regan.

    This idea is entirely speculative and totally unsupported by any data, and even if credible would at most be a first baby step toward explaining how the “illusion” of the mental image might arise. But perhaps something to think about.

  88. Charles,

    Before going further along this line, I suggest that you take a look at *The Cognitive Brain*, “Linking the Semantic Network to the Real World”, and “The Structure of Representation”, pp.112-115, here:

    http://people.umass.edu/trehub/thecognitivebrain/chapter6.pdf

    and “Self-Directed Learning in a Complex Environment”, pp. 201-214, here:

    http://www.people.umass.edu/trehub/thecognitivebrain/chapter12.pdf

    You might also want to take a look at Fig. 16.1, p. 288, here:

    http://www.people.umass.edu/trehub/thecognitivebrain/chapter16.pdf

  89. Arnold: “Kar’s rephrasing of my model is misleading when he speaks about a *view* from inside the brain. If one speaks of a view from inside of the brain, it can only be in a loose metaphorical sense. I was careful to speak of two separate domains of *description/aspect* of brain events.”

    Ok, but given that you said “the 1st-person report of the phenomenal experience during SMTT is a description/aspect of goings-on in the brain from the *inside*” I think you can see how our misunderstanding about inner observation might have originated. I think the standard way of getting at what you’re referring to (the existence of phenomenal experience) is that there is “something it is like” to be the retinoid system (RS). As you put it, it *experiences* the world from an egocentric perspective – where the perspective is on the *world*, not the brain. Explaining why it experiences is of course the hard problem.

    Re science, since it only deals in public objects available to the 3rd person perspective, it *might* (note the qualification) be impossible to come up with a standard scientific (mechanistic, causal) story about how a categorically private reality of subjective experience arises from a set of physical conditions. In which case we might have to develop additional conceptual resources that can explain the arising of subjective realities – improbable but not impossible. But then again, if I remember correctly you take the correlation between experience and the RS to be a brute fact about dual aspect monism, not subject to further explanation.

  90. Tom,

    Mea culpa. I confess that I’m not very familiar with the standard philosophical terminology.

    Tom: “Explaining why it [RS] experiences is of course the hard problem.”

    I say RS *experiences* because it is an egocentric/subjective representational brain space. Then you ask, “Why should an egocentric/subjective representational brain space have experiences?” Here I’m stuck in the same way that a physicist is stuck when asked “Why should large dense bodies warp the geometry of space-time?” The best answer that we can give is that using this particular theoretical model we can explain and predict phenomena that interest us.

    What I’m suggesting is that all sciences have a “hard problem” at bottom. But I would not suggest that we stop trying to break the conceptual barrier.

  91. Arnold, see 42, 43 and 44 above, in which I suggested in response to you that there’s a principled distinction between our inability to further explain the brute facts of physical law and the puzzle of explaining consciousness.

    You say “RS *experiences* because it is an egocentric/subjective representational brain space” but of course one wants to know *why* and *how* the RS is the locus of experience but not those neural processes not associated with consciousness. To me, but apparently not for you, this cries out for explanation. What is it about a particular sort of representational process going on in RS that entails the existence of phenomenal experience? I of course accept that basic universally applicable physical laws just are what they are and won’t be further explicable, precisely because they’re fundamental: we can’t (thus far) find more basic principles or regularities in nature which explain them.

  92. Arnold, your usage of the word “experience” seems to carry a much more utilitarian sense. That may be the root of the disagreement. Just for clarification, I use “experience” to mean the kind of thing associated with the ways the phenomenal world shows up for me. So, I will never use “experience” in the following sense:

    “The thermostat “experiences” a high temperature condition, so its bimetallic strip bends one way and turns the circuit into an open circuit.”
    or
    “The automatic vacuum cleaner “experiences” a wet spot on the floor so it activates its cleaning mechanism and mops up the water.”
    or
    “My robot helper “experiences” an overheated pan handle so it drops the pan onto the counter top.”

    When you say “RS *experiences* because it is an egocentric/subjective representational brain space.”, you can equally well apply this statement to a properly programmed digital computer (I am sure the Roomba vacuum cleaner also has an egocentric representational space built into its software so that it can navigate the carpet landscape). So, is it true to say that an algorithm-driven digital computer is, or can be, conscious and able to “experience” the world, in your view?

    And if the answer is yes, what kind of threshold in software complexity can be used to draw a line between a computer that can experience the world (a conscious computer) and a computer that cannot (an unconscious one)?

    My chain of thought has led me to this funny scenario: a computer in a coma, which can still do very wonderful calculations, similar to a human brain in a coma that can still maintain vital bodily functions such as keeping the heart beating. And then, how can you tell whether a computer is in a coma if it can still beat you in a chess game? Can the concept of “coma” really be applied to a computer?

  93. Tom, you say:

    “You [Arnold] say “RS *experiences* because it is an egocentric/subjective representational brain space” but of course one wants to know *why* and *how* the RS is the locus of experience but not those neural processes not associated with consciousness. To me, but apparently not for you, this cries out for explanation.”

    To the contrary, I see the need for explanation as well. See my #64 in response to Charles. Here’s why I think the retinoid system is the locus of experience and not other neuronal processes.

    “… you make my point when you see that the object recognition processes of the synaptic matrix (ch. 3) and the linguistic process of the semantic network (ch. 6) do not capture our sense of awareness/consciousness. Simply put, the reason this is the case is that outputs from these brain mechanisms do not directly provide a representation of *something somewhere* in relation to our self in our brain’s representation of egocentric space. It is only after neuronal images on the mosaic arrays in our synaptic matrices are projected back into retinoid space that they are experienced in their proper locations with respect to our self in our phenomenal world. But notice that before we can sense and parse objects and events out of our phenomenal world, before they are processed in our sensory modalities, we must first have an activated retinoid system to present us with our occurrent phenomenal world. Subjectivity/consciousness comes before cognition.”

    Now, you might still ask why an RS brain representation of *something somewhere* with respect to a fixed locus of origin (the core self, I! in my theory) should be the substrate of experience. I guess that I would ask you in return if you have ever had a conscious experience without experiencing *something somewhere*. For me, this is *what it is like* to be conscious.

  94. Arnold, you said:
    “Here I’m stuck in the same way that a physicist is stuck when asked “Why should large dense bodies warp the geometry of space-time?” ”
    I think there is an important difference. Physicists are not stuck. They are given a fact, true today, true tomorrow, true in the infinite future and the infinite past, to serve as the foundation. On the other hand, that the RS experiencing a certain thing will lead you to experience a certain thing is not an unchanging fact, because once a person dies, the experience is no more. Also, among the billions of RSs in the world, only one particular RS’s “experiencing” a certain thing will cause you to experience a certain thing, and by convention, we call that RS Arnold’s RS. So, how was that association established? Facts that stay true forever can be taken as brute facts, and no explanations are required. But facts that change over time demand explanations because, simply put, what causes the change? All the endeavors of physics involve expressing things that change in terms of things that don’t change. That is how predictability is achieved.

    In my usage of the word “experience”, I will claim that the RS does not experience; you do. I guess the question I am trying to address is: why, when a particular RS is in a certain state, do *you* experience something?

    If the question is too vague, let’s look at an example. Let’s say that when someone puts his hand on the wall, you feel angry. The experiment has been repeated many times and it has the same effect each time. Now you wonder why. Then you realize that when his hand touches the wall, the electrical capacitance of the wall changes (remember those touch-sensitive light dimmers?) and there is an electrical wire running behind the wall. So you say: ah, because when he touches the wall, he changes the condition of the wire running behind the wall. One question solved, another arises: why do you feel angry when the wire’s condition changes? So you follow the wire, and discover that it goes under your chair. When the wire’s condition changes, your chair becomes more electrostatically charged. You say: ah, because the chair is more charged. One problem solved, another arises: why do you feel angry when the chair is electrostatically charged? So you investigate the chair. After a lot of work, you realize that the chair is electrostatically coupled to an implant in your brain. When the chair is charged, your brain implant flips to a different state. One problem solved, another arises: why do you feel angry when the implant in your brain flips to that state? After more work, you realize that when the implant flips to that state, some group of neurons in your brain starts a random pattern of firing. One question solved, another arises: why do you feel angry when that group of neurons starts random firing? You look further: ah, when that group of neurons starts firing, an unforeseen signal develops in the brain stem. One problem solved, another arises: why do you feel angry when there is this signal going through the brain stem? You look further, and further, and more problems are solved, and more arise.

    What we have done is go around in the brain, or in some cases just the RS for visual stuff as in your model, and we never get to the point of why you feel angry when those things happen in the brain. We have explanations right up to that big gap, and then we claim: then you feel angry. Why?

    The RS model does a nice job explaining how a human gets his internal model right, but I don’t think it addresses the experience part. On that front, I think we are stuck.

  95. “it *might* (note the qualification) be impossible to come up with a standard scientific (mechanistic, causal) story about how a categorically private reality of subjective experience arises from a set of physical conditions.”

    I’d go a step further: it *will prove* impossible to come up with any convincing explanation of any phenomenon that is described using language like “subjective”, “ineffable”, “1st person private”, “what it’s like to be …”, etc. What would it even mean to “tell a story” about the ineffable?

    Rorty’s Phil & Mirror of Nature – especially Chapter 4 – addressed many of the concepts seemingly still in vogue today in debating phenomenal experience: irreducibility, untranslatability, intentionality, intensionality, ontology, and “what it’s like” (citing Nagel), arguing that they are all irrelevant to what – from his pragmatist POV – is important: how effective each vocabulary used in discussing each specific subject matter is in achieving society-nurturing ends. As late as 2000 (in Rorty & his Critics), he made only minor concessions (especially in response to Ramberg’s essay) on relevant points made 20 years earlier.

    Not to disagree with Tom’s position that attempts to advance the discussion of how phenomenal experience arises should continue, just to raise the question of what vocabulary would be most supportive of those attempts.

  96. “Subjectivity/consciousness comes before cognition.”

    Although I get Arnold’s point in the paragraph in comment 99 from which this quote is extracted, the quote seems to me a paradigm of the target of my comment 101. None of the three nouns is well-defined, so I don’t see that any information could possibly be conveyed by such a statement. Is the slash supposed to suggest equivalence? There is no consensus on what faculties are included in “cognition”, so how is one to decide temporal priority of development relative to it? Ie, that vocabulary seems not “supportive”.

    (Not to pick on Arnold, who just happened to provide a good example readily at hand.)

    “experiencing *something somewhere* … is *what it is like* to be conscious.”

    OTOH, this is the sort of fleshing out that I think could be supportive, only I’d leave out “conscious” and say something like:

    “what it is like to be a person” is to have an internal model of the I!-centered environment and to have reactions to objects and events in that environment that are collectively unique.

    (Just an ad hoc example, neither an interpretation of Arnold’s quote nor a proposed definition – although it does make sense to me as a start on one.)

  97. Kar (#100): “Physicists are not stuck. They are given a fact, true today, true tomorrow, true in the infinite future and the infinite past, to serve as the foundation. On the other hand, that the RS experiencing a certain thing will lead you to experience a certain thing is not an unchanging fact, because once a person dies, the experience is no more. Also, among the billions of RSs in the world, only one particular RS’s “experiencing” a certain thing will cause you to experience a certain thing, and by convention, we call that RS Arnold’s RS. So, how was that association established?”

    Physicists are given what they commonly *believe* are facts. What a physicist believes is true today might not be considered true in the light of later evidence and theory. Even physicists are not omniscient.

    The reason your RS experiencing a certain thing leads you to experience a certain thing is that Kar’s RS exists only in Kar’s brain — it belongs to you in the same way that your head belongs to you, just as my RS belongs to me. That is how the personal association is established. Also, the RS works only in the living brain; the RS does not work after death. This is simple physiology. What is unchanging according to the retinoid theory of consciousness is that neuronal activity in a living RS constitutes phenomenal experience.

  98. Charles,

    You wrote:

    “experiencing *something somewhere* … is *what it is like* to be conscious.”

    “OTOH, this is the sort of fleshing out that I think could be supportive, only I’d leave out “conscious” and say something like:

    “what it is like to be a person” is to have an internal model of the I!-centered environment and to have reactions to objects and events in that environment that are collectively unique.”

    What it is like to be a person may not be the same as being a conscious person. When you are in a deep dreamless sleep you are still the person Charles, but you have no active internal model of an I!-centered environment, i.e., neuronal activity in your retinoid system is below the threshold for consciousness. When you first wake up (become conscious), you will have a minimal/sparse RS representation of your surround. As you become more alert and engage in your daily activities the phenomenal content of your RS gets increasingly enriched.

  99. Here is what I wrote in “Where are the zombies?” to clarify the “your” brain vs. “my” brain association, using the movie The Matrix as a platform for discussion:

    “The concept of a Matrix is a virtual environment simulated by a giant computer. People are hooked up to this giant computer by electrodes inserted into their spinal cords so that the computer can generate all sorts of sensations for you. Given the right electrical signal, you will feel like you are in a desert, or eating a piece of chicken, or looking at a beautiful flower under a summer sun. Since the computer directly interfaces with your brain, you will be in a dream-like environment and will be unable to tell that it is a virtual environment.
    Inside this common virtual environment, everyone is given a virtual body (you have to be, otherwise you would be bodiless). You can see and feel your virtual hands, your virtual legs, your virtual clothing. At the same time, you can also see other people’s virtual bodies, just as you would see other real bodies in the real world. In fact, this is how different individuals interact inside this virtual world: through their virtual bodies, which are purely computer generated. We can imagine that if the simulation is as good as the movie describes, we can be completely immersed in this virtual environment, unable to recognize that it is just a simulated environment, especially if we have been connected to the Matrix since birth. Now, imagine a doctor performing brain surgery on someone inside the Matrix and revealing that the patient’s brain is really a mechanical structure full of gears and springs, similar to the structure of a mechanical clock, with a pendulum swinging back and forth. And when a certain spring in the head is pulled, the person under this brain operation is given a certain sensation of pleasure by the computer, through the real spinal cord in the real world, and the person promptly reports inside the virtual world, “I feel really good…” Since this is a simulated environment, the computer can make any individual feel any way. But if the associations between feelings and events are applied inconsistently, people inside may eventually recognize the environment as fake by its inconsistencies and self-contradictions. As long as the rules are applied consistently, people won’t be able to recognize the hoax. One rule could well be that whenever anyone’s virtual spring is “touched”, that person is given a sensation of pleasure. So, inside the Matrix, it becomes a well-known scientific fact that the spring in the head is a pleasure center, and people publish research papers about this fact.
    Then along comes a wise person inside the Matrix who, just like everyone else, is completely unaware of the outside reality. But he asks, “Why is my feeling associated with the pulling of this particular spring in my head?” You can imagine that people inside the Matrix will look at this wise person with awe and point out to him the obvious: “It is your head. It is the pleasure center in your head. What else do you expect?” You can also imagine that there are neuroscientists inside this virtual world, experts in the virtual brain’s functions, who attempt to answer the question seriously by resorting to some deeper level of brain-gear mechanics and who publish their findings in research papers. In the end, one question remains: why, when those deeper brain gears are turned, does the person feel a certain way? Of course, we know that it is the computer sending signals to the real spinal cords outside. But for someone who is completely unaware of this higher reality, where the real spinal cords are located, there can be no answer. There can be no answer from within the Matrix to this wise person’s question. So our neuroscientists in the Matrix, being “materialists” inside the Matrix, have to resort to the final answer: “Of course your feeling is associated with this piece of spring. This is YOUR brain!” Immediately, we see the problem with this answer. These are just virtual bodies. But we also realize that no one inside the Matrix can refute this answer effectively, because the “materialists” can always insist that one is to be identified with one’s (virtual) brain and that there is no problem with that. But being outside the Matrix, and knowing that what is being “touched” is just a simulated virtual body, we know that the wise person is asking a good question inside the Matrix. Indeed, without the “real” reality, one simply cannot explain why touching a spring in one’s “virtual brain” causes the sensation of pleasure. People inside the Matrix simply cannot know about the higher reality outside, and so their explanation, whatever it is, cannot be the real explanation. Insisting on identifying one’s nature with the virtual body is therefore a serious logical error.
    But then, don’t we have the same explanatory gap in our “real” world as well? Why, when some signal reaches a certain part of some (my) brain, do I get this sensation? To explain that, don’t I need to invoke some even higher reality? Otherwise, how else can I explain across this gap? And then, to explain the higher reality, don’t we need another level of even higher reality? Isn’t it an infinite regress? In the end, we still have this explanatory gap. Welcome to the hard problem of consciousness!”

    Hope this clarifies the problem a little bit.

  100. Arnold, in 99 you say “It is only after neuronal images on the mosaic arrays in our synaptic matrices are projected back into retinoid space that they are experienced in their proper locations with respect to our self in our phenomenal world.”

    On your account, experience is entailed (comes to exist) by virtue of the functions of the retinoid system. One wonders what it is about these functions, as opposed to other sorts of functions, that explains this entailment. The question, as you put it, is:

    “Now, you might still ask why an RS brain representation of *something somewhere* with respect to a fixed locus of origin (the core self, I! in my theory) should be the substrate of experience. I guess that I would ask you in return if you have ever had a conscious experience without experiencing *something somewhere*. For me, this is *what it is like* to be conscious.”

    I agree that conscious experiences ordinarily involve there being something somewhere from an egocentric perspective. But I don’t see how that explains why there’s experience in the first place. That is, you’ve helped to account for some of the standard *content* of experience (the self in its surround), but not experience itself, as far as I can see.

    In 103 you say “What is unchanging according to the retinoid theory of consciousness is that neuronal activity in a living RS *constitutes* phenomenal experience.” (my emphasis) To revisit a question that’s come up before, I don’t quite see how consciousness, a categorically private reality, can literally be *constituted* by some public state of affairs. In any case, as a dual aspect monist you’d say, I think, that it’s just a brute fact that this state of affairs has a private, phenomenal aspect as well as an objective, physical aspect. But folks like me want to know why *just this sort of neural activity* has two aspects, and not other sorts of neural activities.

    Pointing out close analogies and correlations between conscious phenomenal content (e.g., self, surround, pain, red) and neurally instantiated representations is very important, I agree, but still leaves open an explanatory gap, imo. Our disagreement seems to be whether this gap should prompt more thinking about consciousness (my position), or whether the fact that consciousness correlates with the RS and its functions is just a brute fact, like fundamental physical laws (your position).

  101. Arnold,
    Before we get too far off the original line of discussion, let me seek your opinion on computer consciousness. Just to ask the question again: can algorithm-driven digital computers have experiences, in your view?

  102. Tom, in #106 you say:

    “In any case, as a dual aspect monist you’d say, I think, that it’s just a brute fact that this state of affairs has a private, phenomenal aspect as well as an objective, physical aspect. But folks like me want to know why *just this sort of neural activity* has two aspects, and not other sorts of neural activities.”

    My answer is that *just this sort of [3pp biophysical] activity* contains a representation of *something somewhere*, which is the necessary and sufficient condition for any kind of 1pp private phenomenal experience. All other sorts of neural activities do not have the proper structure and dynamics to represent *something somewhere* in our volumetric egocentric space.

    Roger Penrose and Stuart Hameroff are editing a special edition of *The Journal of Cosmology* devoted to the problem of consciousness in the universe. I have a paper in this edition (in press) titled “Evolution’s Gift: Subjectivity and the Phenomenal World”. Perhaps this paper will give you a better idea of my position. You can read it here:

    http://journalofcosmology.com/Consciousness130.html

    Tom: “Pointing out close analogies and correlations between conscious phenomenal content (e.g., self, surround, pain, red) and neurally instantiated representations is very important, I agree, but still leaves open an explanatory gap, imo. Our disagreement seems to be whether this gap should prompt more thinking about consciousness (my position), or whether the fact that consciousness correlates with the RS and its functions is just a brute fact, like fundamental physical laws (your position).”

    Please do not misunderstand me. Given everything that I currently know, I have no principled grounds for suggesting more than what I claim about the retinoid system and consciousness. But I absolutely agree with your position that we should continue to think as hard as we can about the problem of consciousness. Consciousness is a fundamental scientific problem that deserves and demands sustained intellectual effort!

  103. Arnold in 108:

    “My answer is that *just this sort of [3pp biophysical] activity* contains a representation of *something somewhere*, which is the necessary and sufficient condition for any kind of 1pp private phenomenal experience.”

    It’s a pretty strong claim to say we now know the necessary and sufficient conditions for any kind of phenomenal experience. This means, I think, that you could tell whether or not a particular system was conscious, for instance Metzinger’s artificial self-model system at http://www.scholarpedia.org/article/Self_models#Self-models_in_machines which presumably represents itself in relation to its surround, such that there’s “something somewhere” for it. But assuming you’re right, seems to me this still leaves open the question as to why phenomenal experience should accompany, or be constituted by, or be the subjective aspect of, those conditions. But I’ll have a look at your forthcoming paper, thanks.

  104. Tom,

    I agree that my claim about the brain’s retinoid system is a strong claim. I’ve spent many years studying the problem and I think the claim is warranted. Counter-arguments are welcome.

    It is a misconception to think of Metzinger’s example of an artificial self-model system (STARFISH, Bongard et al) as having an internal global representation of *something somewhere* with respect to its core self. STARFISH has no core self, and it has no analog of the volumetric space in which it exists. STARFISH has an adaptive sensory-motor computational system based on stochastic optimization which changes its motor control routines to compensate for limb loss. To call these digital settings and motor-control routines a “body representation” is poetic license. STARFISH doesn’t represent itself in relation to its surround because it has no representation of its surround.

    Tom: “But assuming you’re right, seems to me this still leaves open the question as to why phenomenal experience should accompany, or be constituted by, or be the subjective aspect of, those conditions.”

    In science, questions like this can only be answered by examining the weight of empirical evidence. This is why the evidence I present should be carefully considered.

  105. Kar,

    You wrote: “Just to ask the question again, can algorithm driven digital computers have experiences in your view?”

    My short answer is NO. The reason, in my view, is that algorithm-driven digital computers are strictly propositional structures. The precondition for having a phenomenal experience, in my view, is a spatiotopic 3D representation of a volumetric space with a fixed perspectival “point” of origin (the self-locus). It seems to me that this has to be an analogical structure. I’ve asked many knowledgeable people if they know of any artifact that has this property. So far, no one has been able to point to one.

  106. Arnold, agreed that the starfish ain’t conscious, but following up on Kar’s question, is there a reason an artifact couldn’t eventually have an analogical structure instantiating a spatiotopic 3D representation of a volumetric space with a fixed perspectival “point” of origin, and thus be conscious? Must consciousness be biological? Seems to me no.

    Another question often raised in attributions of consciousness is whether there’s a hard and fast cut-off point in complexity or functions below which we know for sure something isn’t conscious (or above which it is). As machines gain in capacities comparable to ours, I’m wondering if your model will enable us to know for sure whether they are conscious or not. Same question applies to existing organisms too. Any rough answer from your model about which creatures are conscious?

    If you’ve covered these questions in your writings, my apologies, just direct me to the appropriate chapter/paper. Thanks!

  107. Arnold,
    Thanks for the response. It also seemed to me that the retinoid system is something that a digital computer can simulate well, in the way CD music, though digital, simulates analog music very well. So, your answer is a little bit surprising to me. Anyhow, your response is appreciated.

  108. Kar Lee,

    in the way CD music, though digital, simulates analog music very well

    not really, digital music does not simulate analogue music; what you have is digital data storage of a sound signal, and then a D/A converter plus amps, loudspeakers, etc… Music is music and can’t be simulated, only produced or reproduced. Vinyl and tape are the same; they only differ in the storage method.

    Actually the sound (and the music even more so) only exists in your mind as part of the phenomenal experience; out there you have pressure waves resulting from unbalanced molecular distributions, that’s all.

    And then, of course, a computer simulation does not reproduce the physical system. A power station simulation does not produce energy; for the same reason, a brain simulation does not necessarily produce consciousness. I think Arnold is right and the biophysical processes are required; the question is how?

    Yes, most discussions gravitate back to the Hard Problem… probably the most real, non-banal problem of humankind, if we accept that to avoid suffering and live in peace is the goal.

  109. Tom (#112),

    You wrote: “As machines gain in capacities comparable to ours, I’m wondering if your model will enable us to know for sure whether they are conscious or not.”

    If a machine with sensory-motor capability were to incorporate a module with the structure and dynamics of the retinoid system, and also exhibited the same kind of responses as the subjects in my SMTT experiment, then I might be inclined to say that the machine was conscious. However, it might be the case that the normal biophysical properties of the component neurons in the brain’s RS are necessary though not sufficient for consciousness to exist. In this case the machine would not be conscious.

    Tom: “Any rough answer from your model about which creatures are conscious?”

    I would say that all mammals are conscious. I would guess birds. Reptiles, crustaceans, insects, and lower creatures not conscious. But I wonder if octopi might be conscious. Just a guess about this.

    What do you think?

  110. Although I’d say “approximates” rather than “simulates”, KL is essentially correct, as best I recall (I’m dredging up memories from the 60s). There are two potential sources of distortion due to digital sampling: frequency-spectrum fold-over (aliasing) and the quantization of analog amplitudes. Since human hearing is effectively bandwidth-limited, sampling at or above the Nyquist rate essentially eliminates the former. The error due to the latter can presumably be made arbitrarily small by increasing the sample word size. So, for all practical purposes, the “approximation” is exact. Or maybe even better, since my impression is that current digitizing techniques can “clean up” (in some sense) a noisy analog signal.
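
    To make the word-size point concrete, here is a minimal sketch in Python (assuming NumPy; the test tone, sampling rate, and bit depths are illustrative only). The worst-case quantization error shrinks by a factor of two for every added bit:

        import numpy as np

        fs = 44_100          # sampling rate in Hz (the CD standard)
        f = 1_000            # a 1 kHz test tone, well below the Nyquist limit fs/2
        t = np.arange(0, 0.01, 1 / fs)
        signal = np.sin(2 * np.pi * f * t)

        def quantize(x, bits):
            """Round x (assumed in [-1, 1]) to a signed grid with `bits` bits."""
            levels = 2 ** (bits - 1)
            return np.round(x * levels) / levels

        for bits in (8, 16, 24):
            err = np.max(np.abs(signal - quantize(signal, bits)))
            print(f"{bits}-bit worst-case quantization error: {err:.2e}")
        # The error bound is 2**-bits, so the digital "approximation"
        # can be made arbitrarily close to the analog signal.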

    But in any event, I don’t see the relevance to the RS. Arnold will correct me if I’m wrong, but my understanding of the RS is that effectively it is a digital “computer” and that he has “simulated” an RS (ie, implemented a primitive version of it on a digital computer).

  111. Arnold –

    I think your J of C article is a very clear, concise intro to the RS. Since by now I’ve read much – maybe even most – of your book chapters, it’s hard to be sure, but my guess is that someone starting with the article would have a much easier time getting the essential ideas than I did.

    Having chided you for some uses of “consciousness”, I am obliged to compliment you on presenting a (mostly – “transparent”?) clear definition of your use of “consciousness”, which is all I’m really asking of anyone who uses the word.

    Having reread the SMTT description, I wonder: since the subject sees only the two dots in the slit, are the slit and the shape behind the screen necessary? If the experiment were set up in such a way as simply to make two dots move on a uniform background and in more complex interrelationships – both moving, different relative movement, non-constant rates, etc, could various shapes be “created” in the subject’s brain? Whether it corresponds to the actual RS functioning or not, I envision t-planes analogous to your Z-planes, each with two dots, all of which over some time interval get superimposed to produce a composite image. Has anything like that been tried?

  112. Charles (116)

    The brain is definitely not a digital computer; nothing in it is digital.

  113. Digital sampling and the Nyquist theorem have nothing to do with simulation; they have to do with signal processing and signal data storage. In this case we should talk about simulation accuracy and fidelity, but that doesn’t apply either. You neither simulate nor approximate music, and after all you need a final analogue stage.

    If you simulate a musical instrument, then you could have an approximation of its sound. Still, you need a D/A converter and an amplifier in order to listen to it, unless you can have a look at the output data representing the air pressure waves and imagine the sound.

    In the case of RS simulation, down to what level of the brain architecture should we model in order to simulate the emergence of consciousness, and with what precision: whole modules, neurons… individual synapses, molecular processes, tubule structures…

    What level? Do you know? The blocks model definitely doesn’t help.

    And again you wouldn’t have the physical system, so what would be the analog of the D/A converter in this case, in order to check the resulting phenomenal experience?

    Simulation is useless for studying consciousness.

  114. Vicente,
    I think I was just trying to point out that the difference between digital and analog can be made arbitrarily small. But I agree with you that a CD isn’t simulating anything.

  115. Arnold:

    “However, it might be the case that the normal biophysical properties of the component neurons in the brain’s RS are necessary though not sufficient for consciousness to exist. In this case the machine would not be conscious.”

    Interesting. I would have thought that since your model is essentially functional, the properties of the system’s components wouldn’t make any difference regarding the existence of consciousness so long as they allowed the retinoid system (RS) to do its representational work. So I’m not sure why, for instance, a functionally isomorphic but silicon-instantiated version of my RS wouldn’t support subjective phenomenal states.

    “I would say that all mammals are conscious. I would guess birds. Reptiles, crustaceans, insects, and lower creatures not conscious. But I wonder if octopi might be conscious. Just a guess about this. What do you think?”

    On your model we’d make these judgments strictly on the basis of whether the organism in question incorporates the necessary and sufficient conditions of consciousness that you claim to have identified. On the assumption you’re right, I don’t know enough about the differences between, for instance, reptiles and mammals to judge whether one meets the conditions and the other doesn’t.

    Intuitively, I tend to grant sensory consciousness, such as the feeling of pain, to any creature that manifests what looks like pain-related behavior and that has roughly similar internal processing to ours which mediates that behavior. I’d want to be very sure that there were good reasons *not* to attribute consciousness to an organism before treating it as if it were not conscious. Descartes famously thought dogs didn’t feel pain. I wouldn’t want to make Descartes’ error with respect to any organism, or system for that matter. In Being No One and The Ego Tunnel, Metzinger warns of the dangers of unknowingly creating AIs with capacities for suffering, and suggests we hold off on such projects until we have a better understanding of consciousness.

  116. Charles (#117),

    You wrote: “Whether it corresponds to the actual RS functioning or not, I envision t-planes analogous to your Z-planes, each with two dots, all of which over some time interval get superimposed to produce a composite image. Has anything like that been tried?”

    If I understand what you are suggesting correctly, it sounds like *retinal painting*. If the time interval between the t-planes is short enough, retinal receptor persistence (positive after-image) will cause all superposed dots to compose a complete excitation image on the retina. This process is a simple sensory event, completely different from the post-retinal phenomenal image constructed in the retinoid system during SMTT.

  117. Tom (#121),

    You wrote: “Interesting. I would have thought that since your model is essentially functional, the properties of the system’s components wouldn’t make any difference regarding the existence of consciousness so long as they allowed the retinoid system (RS) to do its representational work.”

    True that my model is functional, but it is also material; you must remember that it is explicitly a putative *brain* system. Here, I’m being conservative in that there is abundant evidence that some creatures with brains, notably humans, are conscious, while there is no comparable evidence that anything without a biological brain is conscious. So I remain agnostic about the possibility of non-biological systems having consciousness.

    Tom: “Intuitively, I tend to grant sensory consciousness, such as the feeling of pain, to any creature that manifests what looks like pain-related behavior and that has roughly similar internal processing to ours which mediates that behavior.”

    Here you are being conservative on compassionate grounds. Given this outlook, you probably would not choose to go fishing. But how would you distinguish between a reflexive escape response and a phenomenal experience of pain on the basis of “what looks like pain-related behavior”?

    Tom: “Descartes famously thought dogs didn’t feel pain. I wouldn’t want make Descartes error with respect to any organism, or system for that matter. In Being No One and The Ego Tunnel, Metzinger warns of the dangers of unknowingly creating AIs with capacities for suffering, and suggests we hold off on such projects until we have a better understanding of consciousness.”

    I can understand your concern. As for Metzinger’s warning, I’d say that at our present state of knowledge there is little chance that we will create artifacts with a capacity for suffering. It is a matter to ponder though.

  118. Arnold –

    My point is really about heuristics rather than implementation, so ignore the t-planes.

    If on my first introduction to SMTT you tell me there’s a triangle behind a screen and that it is partially visible through a slit in the screen, I will inevitably think (incorrectly) that what the experiment is showing is that an image of the triangle is being in some sense “reproduced” in the subject’s brain. But if you instead tell me that two dots are moving against a background according to a programmed pattern and the shape of the image “seen” by the subject depends on the specific program, I’ll be inclined to think harder about what’s going on. And if you further tell me that the behavior is replicated by an RS – which obviously doesn’t do “retinal painting” – then I’ll really think hard about how it might work.

    Except that I’ll tend to think about it in terms of function, not implementation. I’m a retired systems engineer and tend to think almost exclusively in functional terms (eg, generic buffers as opposed to “neural matrices” and “spatio-temporal planes”), not detailed implementation (to describe my knowledge of neurology as “scant” would be generous). Which is why I would have preferred to start with your JofC overview essay instead of your book, using the latter only to fill in where some understanding of implementation detail is necessary.

    “you are being conservative on compassionate grounds. … But how would you distinguish between a reflexive escape response and a phenomenal experience of pain”

    Rorty argues that deciding which entities we do and don’t assume moral obligations toward is merely a matter of social convention, and that big factors in making the decision are facial similarity to us and the ability to imagine conversation with the entity. He notes that there are animals we slaughter and eat that are objectively more “like us” than others we bring home as pets or put on endangered species lists. And he argues that since it is social convention, what we decide with respect to future robots is likely to depend on how they look and converse.

    Think of HAL: “he” (ie, the ominous blinking red light) looks, sounds, and does evil, and lies to boot. We ultimately don’t just want him turned off, we want him “dealt with”.

  119. Arnold,

    “I’m being conservative in that there is abundant evidence that some creatures with brains, notably humans, are conscious, while there is no comparable evidence that anything without a biological brain is conscious.”

    If we knew the necessary and sufficient conditions of consciousness, it would seem that the question of whether it requires a biological substrate would already be answered. If the model invokes properties of biological components that play roles only they can play, then only systems like us in that respect can be conscious. If it doesn’t, then it seems to me the model is substrate-neutral, in which case the fact that we’ve only seen consciousness supported by brains doesn’t count in favor of the hypothesis that only brains can be conscious.

    “But how would you distinguish between a reflexive escape response and a phenomenal experience of pain on the basis of ‘what looks like pain-related behavior’?”

    The more a system has internal processing similar to ours which mediates that behavior, the more, it seems to me, we’re justified in attributing consciousness, since after all it’s the processing that matters. But until we have a complete theory of consciousness, such attributions are necessarily guesswork. Given, as you say, our present limited state of knowledge, we shouldn’t assume that we won’t inadvertently create a locus of suffering as we build more and more complex cybernetic systems.

  120. Tom –

    “The more a system has internal processing similar to ours which mediates that behavior …”

    Could you elaborate a bit on what you mean in this context by “mediates that behavior”?

    “… then it seems to me the more we’re justified in attributing consciousness … But until we have a complete theory of consciousness, such attributions are necessarily guesswork.”

    But wouldn’t “necessary and sufficient conditions of consciousness” amount to a definition? And if so, aren’t attributions to ourselves premature as well?

    And I’m not playing word games here – that’s a real question I’ve had ever since my early exposure to consciousness (as an area of study) via Susan Blackmore’s “Conversations on Consciousness” in which most if not all of the 20+ interviewees had their own definitions (or no definition) of the concept under discussion.

  121. Charles:

    “Could you elaborate a bit on what you mean in this context by “mediates that behavior”?”

    There’s neurally-related internal processing in us which correlates with phenomenal pain (the NCC of pain) and which supports pain-related behavior, including reports of pain, wincing, withdrawal from the stimulus, learning to avoid it, etc. It’s likely that central, not peripheral, processes are the NCC of pain, given phantom-limb pain. As a system gets closer and closer to us in having that sort of processing (neural or otherwise), it seems to me we’d be more and more justified in supposing that it too experiences pain.

    Arnold claims he’s identified the necessary and sufficient conditions of consciousness in his RS model. I’m not sure he has, so that’s why I think attributions of consciousness of non-human systems are still guesswork, based on similarity-to-us of internal processing.

    As for defining consciousness, I mean qualitative, phenomenal subjective states like sensory qualia, e.g., pain, sensations of color, sound, touch, emotions, etc. I take it that’s what’s at issue when thinking about the hard problem.

  122. Tom,

    You wrote: “As for defining consciousness, I mean qualitative, phenomenal subjective states like sensory qualia, e.g., pain, sensations of color, sound, touch, emotions, etc.”

    Do you mean to suggest that the phenomenal color of an alphabetic character on your computer screen is a quale, whereas the phenomenal shape of the same character should not be considered a quale?

  123. Arnold, I don’t tend to think of shapes as qualia since they can usually be specified in terms of more basic qualities, e.g., boundaries of contrasting colors or shades of a color as mapped on x-y coordinates. We can specify and describe shapes intersubjectively no problem. I tend to think of a quale – a basic phenomenal quality like red – as non-composite, not further reducible or describable in more fundamental terms and thus intersubjectively unspecifiable except via ostension – http://en.wikipedia.org/wiki/Qualia#Definitions , http://www.naturalism.org/kto.htm#Qualia

  124. Tom,

    You wrote: “I tend to think of a quale – a basic phenomenal quality like red – as non-composite, not further reducible or describable in more fundamental terms and thus intersubjectively unspecifiable except via ostension –”

    Is our thinking about a quale as something essentially different from our phenomenal experience of an object a conceptual trap? It seems to me that when we try to specify in an intersubjective way our *phenomenal experience* of a shape as simple as a circle, we can’t simply say “it’s something round”, or give a geometric equation as a specification. We are reduced to pointing to an exemplar of a circle (ostension) and saying “this is what it is like for me”. Doesn’t this suggest that our philosophical notion of “qualia” might be a bit misleading?

  125. Suppose I draw a circle per the definition – ie, a curve all points of which are equidistant from a fixed point – and then say “yes, my phenomenal experience of that figure [ie, the mental image it causes] appears to me consistent with the definition of a circle.” If you can say the same, then we have some confidence that we are having the same phenomenal experience.
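
    (In symbols, that definition is just the standard Cartesian locus

    $$C = \{(x, y) \in \mathbb{R}^2 : (x - a)^2 + (y - b)^2 = r^2\},$$

    fully determined by a centre $(a, b)$ and a radius $r$, which is what makes it intersubjectively specifiable.)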

    I don’t see how we can do that for a color. There presumably are technical definitions of the members of color standards, but I assume they aren’t in terms of parameters we can experience phenomenally. And although I may have learned to associate my phenomenal experience when seeing a certain sample of a standard set of colors with a certain word (eg, “cerulean”), I can’t further describe my phenomenal experience when seeing the sample. And the same goes for you. So, the fact that we might both agree that the color sample corresponds to “cerulean” provides no evidence to the effect that we are having the same phenomenal experience – or even a similar one: the inverted spectrum problem. (Remark 50 in Wittgenstein’s “Phil Investigations” seems relevant to this issue.)

  126. Our phenomenal experience of a particular circle is not “a curve all points of which are equidistant from a fixed point”. It is an internal image, bounded in size, and at a particular location in our own egocentric space. The best that we can do to convey this experience in intersubjective communication is to present a sample of circles of various sizes and thickness of contour and say (pointing) this example is the closest of all to my subjective experience.

    I suggest that the same is true for our phenomenal experience of a particular color. It, too, is an internal image, bounded in size and shape, at a particular location in our egocentric space. The best we can do to convey this color experience in intersubjective communication is to present a sample of colors of various hues and saturation and point to the example that is closest to our subjective experience.

    In neither case can the person who sees the other’s chosen exemplar of phenomenal shape or color be *certain* that the other person experiences the same thing that he experiences. Pragmatism rules.

  127. Arnold, let me respond to your “Pragmatism rules”, because this approach is highly debatable.

    Q: Should we as a society unplug the guy who has been lying on a hospital bed for the last 5 years, unresponsive to everything? Is he experiencing anything the way we do?

    Color stimuli probably have more similar effects across individuals than drugs do. I have never taken LSD, and I can only try to imagine what taking it is like. If someone tells me “it feels like taking LSD”, I will not have the faintest idea what he is talking about. And suppose I want to find out and take some myself: can I be sure that what I experience is really what he was talking about? I have my doubts, because alcohol seems to work differently on me as well.

    Up to this point in my life, I still cannot imagine why people get addicted to alcohol, because it just feels so bad to be drunk. But apparently many folks like alcohol, among them many of my good friends. So if someone tells me “it feels as good as getting drunk”, I will not have any idea what this person is talking about either. How do I describe my feeling of being drunk? It just feels strange and uncomfortable. This is one type of quale I can try to describe without necessarily being able to get to the bottom of it. Privacy of experiences. Privacy of qualia.

    Another example is sex drive. Try to explain that to a three-year-old and see if you get anywhere. You can tell a three-year-old that it is like wanting to eat ice cream. But you know it is not. This is another type of quale that either you “understand” or you don’t. No description can make you understand it if you don’t; and if you do, then even if no one describes it to you, you still do. To the experiencer, all experiences are real.

    But to a third party, the experiences are not real; all that stands in their place is the behavior of the one who claims to have them. It is this individual-dependent reality that sometimes gets pragmatism into trouble. Pragmatic by whose standard? Our discussion has a long way to go.

  128. The drawn figure is an abstract entity defined by equidistance from a point and has nothing to do with phenomenal experience (“the internal image”). The point of assuming it drawn to a specification was to separate the characteristics of the physical entity and those of the phenomenal experience so that they could be compared. To make that clearer, perhaps I should have assumed the circle to be computer-generated.

    The “internal image” represents that abstract figure and either appears to meet the same specification or doesn’t, and clearly one can convey that “yes-or-no” decision verbally. If the drawn circle is tilted for one viewer, that viewer’s decision should at least be “no”, but may even be “no, it appears elliptical”.

    I don’t understand the part of your comment about pointing at multiple circles. The viewer is already looking at a figure that is drawn according to the specification of a circle. The question (as I see it) is whether or not the corresponding internal image appears to meet the specification. What would be the point of introducing another circle for comparison since it would inevitably result in an equivalent internal image? And why is size relevant? I understood the issue to be only about shape, not size. I would assume that just as with parameters defining a color, size is not a parameter we can extract from the internal image absent some context from which relative size might be inferred.

    I agree with your comment about color. But my contention remains that it is not equivalent to shape.

    And I agree that we can’t be *certain* about either case, although in the case of shape we may be able to achieve “some confidence”. (If the stars around “certain” were intended to suggest a quote from my comment, the word was taken out of context. I used it in its dictionary.com definition 6 sense, not definition 1.)

  129. Kar,

    I understand and am in full sympathy with your concern about the broad implications of social pragmatism. What I suggested in #132 is that science is a pragmatic enterprise that takes explanation and successful prediction of “objective” events as its standard of success. This is a limiting factor in trying to understand the biophysical mechanisms responsible for the private 1pp experience of any phenomenal content, qualia of any kind.

    In the context of cultural norms, the standards of societal utility are always at issue. Here questions of resolving conflicting values and standards are paramount.

  130. Kar Lee,

    A genetic deficiency of aldehyde dehydrogenase, an enzyme necessary for alcohol metabolism, is quite common in the Chinese community (>50%), causing the symptoms you refer to.

    To me, a very interesting introspective experience related to (moderate) alcohol intake is that somehow you can still observe and monitor, with some perspective, the effects it has on your perception, feelings, mood, scenario-evaluating skills, etc. It seems there is a higher-order state, immune (to some extent) to the effects of alcohol. Somehow, I can differentiate between “a self” and brain processes that interact with it but are not part of it.

    Aldous Huxley has interesting writings about his experiences with LSD and other substances.

    Anyway, as you said, to convey 1PP experiences is almost impossible…

  131. Charles (# 134): “The “internal image” represents that abstract figure and either appears to meet the same specification or doesn’t, and clearly one can convey that “yes-or-no” decision verbally. If the drawn circle is tilted for one viewer, that viewer’s decision should at least be “no”, but may even be “no, it appears elliptical”.”

    In the case of phenomenal experience, we can’t say what the viewer’s decision *should* be. The “yes-or-no” decision is made on the basis of subjective standards, not on the basis of abstract geometric specifications. If I look at a screw-on jar cover with its face orthogonal to my line of sight, I judge it to be round. If I tilt the jar cover it still looks round to me, and I still judge it to be round even though it projects an elliptical image on my retinas. Consider the moon illusion. When a full moon is near the horizon it looks like a large round object; when it is at its zenith it looks like a much smaller round object. Yet in both cases the projection of the moon’s image on your retinas is the same, about 0.5 degrees of visual angle.
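
    (A quick back-of-the-envelope check of that 0.5 degree figure – a sketch using round mean values for the moon’s diameter and distance, nothing more:)

```python
import math

MOON_DIAMETER_KM = 3474.0    # mean lunar diameter
MOON_DISTANCE_KM = 384400.0  # mean Earth-Moon distance

# Visual angle subtended by the moon, in degrees. The geometry is the
# same whether the moon sits at the horizon or at the zenith.
angle = math.degrees(2 * math.atan((MOON_DIAMETER_KM / 2) / MOON_DISTANCE_KM))
print(round(angle, 2))  # ~0.52
```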

    Charles: “I understood the issue to be only about shape, not size. I would assume that just as with parameters defining a color, size is not a parameter we can extract from the internal image absent some context from which relative size might be inferred.”

    I should have been more careful in my choice of words. In speaking about the shape of an object in phenomenal experience, I take size to be an inseparable aspect of the conscious experience of shape. All features of a phenomenal image (shape, size, color, location, etc.) are bound in a unity of experience as realized in egocentric space.

  132. I, of course, meant “should” for a typical and credible subject under controlled conditions, ie, viewing the figure with eyes centered on the perpendicular to the viewed figure’s plane that intersects the figure’s “center”. I assume that most subjects image standard figures in more-or-less the same way, so that a subject who, in describing the internal image of a figure meeting the specification of a “circle”, provided a description of a “square” would be assumed to be abnormal or a liar.

    If you include size in “shape”, then I agree that nothing can be said about the “sameness” of that feature of two subjects’ internal images, and for the same reason as in the case of color: you can’t specify the size of something in a way that can be extracted from the internal image for comparison. A one foot diameter circle on a plane that is unbounded within the subject’s FOV can be made to create an internal image of any size by placing the plane at some specific distance.

    But at this point, I think we need to revisit Tom’s original issue, viz, whether the “qualia” status of a circle and a color are the same. And Tom suggests that one determinant of the answer is whether the phenomenal experience (internal image) of each is:

    not further reducible or describable in more fundamental terms and thus intersubjectively unspecifiable

    I think we all agree that the answer to the question for a color is “yes” and the answer with respect to the generic shape “circle” is “no”: the latter can be described in terms of a locus of points equidistant from a point, a description that can be used intersubjectively. Whether size is a necessary part of the description seems a debatable issue, but since I’m not a fan of the qualia concept, I have no strong opinion, although my inclination is toward no.

    On the other hand, consider a hexagon. Like a circle, a hexagon is specifiable in more fundamental terms, and a subject’s internal image of the shape can be described in those terms. So it doesn’t meet Tom’s requirement for a quale. But what about a 19756-gon? It, too, is specifiable in more fundamental terms, but in practice a subject can’t actually describe the shape of the internal image accurately for intersubjective use. So, is the primary determinant of a quale the intersubjective use? Ie, is there an N such that an n-gon is a quale for n > N but not otherwise?
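
    (For concreteness, here is a minimal sketch in Python – the function name is my own invention – showing that a regular n-gon is exhaustively specified by a handful of numbers for any n, even when n is far too large for anyone to verify from their internal image:)

```python
import math

def regular_ngon_vertices(n, radius=1.0):
    """Return the vertices of a regular n-gon centred on the origin.

    The whole shape is pinned down by just two numbers (n and radius),
    no matter how large n gets.
    """
    return [
        (radius * math.cos(2 * math.pi * k / n),
         radius * math.sin(2 * math.pi * k / n))
        for k in range(n)
    ]

# A hexagon and a 19756-gon are equally specifiable in these terms,
# though only the hexagon's sides could ever be counted by inspection:
print(len(regular_ngon_vertices(6)))      # 6
print(len(regular_ngon_vertices(19756)))  # 19756
```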

  133. Charles (# 138),

    You wrote: “But at this point, I think we need to revisit Tom’s original issue, viz, whether the “qualia” status of a circle and a color are the same. And Tom suggests that one determinant of the answer is whether the phenomenal experience (internal image) of each is:

    _not further reducible or describable in more fundamental terms and thus intersubjectively unspecifiable_ ”

    Doesn’t the fact that color experience depends on the relative activation pattern of the retinal L-cones (long wavelength), M-cones (medium wavelength), and S-cones (short wavelength), over the range of ~ 400 nm to ~ 700 nm, mean that the phenomenal experience of color is describable in more fundamental terms?

    How is this different from describing the phenomenal experience of a circle in terms of a particular pattern of retinal-cell activation?
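
    (To make the comparison concrete, here is a toy sketch in Python. The Gaussian curves are made-up stand-ins for the real cone sensitivity functions – the peak wavelengths are roughly right, but the numbers are purely illustrative:)

```python
import math

# Approximate peak sensitivities of the three human cone types, in nm.
# The Gaussian shape and the shared width are illustrative assumptions,
# not real colorimetric data.
CONE_PEAKS_NM = {"L": 565.0, "M": 535.0, "S": 445.0}
WIDTH_NM = 50.0

def relative_activation(wavelength_nm):
    """Normalised L/M/S activation pattern for a monochromatic light."""
    raw = {cone: math.exp(-((wavelength_nm - peak) ** 2) / (2 * WIDTH_NM ** 2))
           for cone, peak in CONE_PEAKS_NM.items()}
    total = sum(raw.values())
    return {cone: value / total for cone, value in raw.items()}

# Two lights with clearly different "more fundamental" descriptions:
print(relative_activation(470))  # S-cone dominated (blueish)
print(relative_activation(620))  # L-cone dominated (reddish)
```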

    My own view is that *neither* event can be said to *describe* (or be a reduction of) the phenomenal experience. All that we are justified in saying is that these are both *correlates* of the subjective experiences.

    But I go a step further in claiming that simple biological correlates are too weak to ground a science of consciousness. We have to uncover salient analogs of target phenomenal events and formulate plausible brain mechanisms that can generate such analogs.

  134. Interesting, Vicente. I looked and I found Huxley’s description of taking LSD in “The Doors of Perception and Heaven and Hell”:
    [ When administered in the right kind of psychological environment, these chemical mind changers make possible a genuine religious experience. Thus a person who takes LSD or mescaline may suddenly understand- not only intellectually but organically, experientially – the meaning of such tremendous religious affirmations as “God is love,” or “Though He slay me, yet will I trust in Him.” ]
    But still, it hardly tells you what it is.

    Regarding “the self” in a drunk state, I even had a similar experience one time when I had a headache. The self seemed not to be affected by the pain, but was monitoring it. I would say, given sufficient training, this state of a “clean self” can happen more often, especially if you are trying to focus on yourself experiencing the pain instead of focusing on the pain itself.

  135. “Doesn’t the fact that color experience depends …”

    I assume that Tom means “describable” and “intersubjectively specifiable” by the subject. The details of opponent color processing, like the details of any other internal processing, are presumably transparent to the subject. Ie, they constitute a 3pp on the biological “mechanics” of phenomenal experience, not the 1pp on (for example) the “internal image” that is the content of that experience. The latter does appear to be “describable” by the subject for standard geometric shapes (where, to repeat, a “description” is something like “locus of points equidistant …”, “four sided, equilateral, all right angles”, etc, not just a name like “circle”, “square”, etc.).

    “My own view is that *neither* event can be said to *describe* … the phenomenal experience.”

    I don’t understand this sentence. We always seem to come back to the question of what exactly is under discussion. Sticking to the visual sensory modality, I infer that the “event” to which you refer is a subject’s experiencing a mental image. But if that’s right, I don’t understand what it means to say that such an “event” can or can’t “describe … the phenomenal experience.” A subject experiences the mental image of a specified (generic) shape or color in her FOV and either can or can’t describe the image adequately to allow another subject to accept or deny that the description applies to the internal image occurring when the same shape or color is in his FOV.

    At this point, I’m not even sure we agree on a description of the scenario, never mind the answer to any questions about it. And therefore, with respect to:

    “… both [are] *correlates* of the subjective experiences”.

    I have no idea what – if anything – in my description of the scenario corresponds to either “subjective experiences” or their “correlates”. And I don’t see what I’m talking about as being likely to do anything as ambitious as to “ground a science of consciousness”.

    So, what has happened to Tom? I find myself in the anomalous position of the amateur trying to mediate between two pros. Help!

  136. Charles in 142, you’re doing a great job and I share your puzzlement. Arnold appears (to me) to contradict himself in 139. He says “Doesn’t the fact that color experience depends on the relative activation pattern of the retinal L-cones (long wavelength), M-cones (medium wavelength), and S-cones (short wavelength), over the range of ~ 400 nm to ~ 700 nm, mean that the phenomenal experience of color is describable in more fundamental terms?” But then he says “My own view is that *neither* event can be said to *describe* (or be a reduction of) the phenomenal experience. All that we are justified in saying is that these are both *correlates* of the subjective experiences”. I agree with his second statement, not the first. (Btw, by event Arnold means the correlates of experience such as the activation pattern, not the experience itself.)

    Mostly to repeat myself (which is why I bowed out) I’d say that experienced shapes aren’t simple, irreducible qualities (qualia), although they are of course experienced *in terms of* qualia, e.g., contrasting colors like black on a white background. Shapes as we experience them are specifiable by means of geometric quantification, thus uncontroversially amenable to 3rd person description. But an experience of a color isn’t amenable to quantification or decomposition within experience, which is *why* it’s a quale – a basic experienced quality that can’t be further specified. Of course the publicly observable neural correlates of qualia *are* further specifiable, in terms of spatial and physical quantities which don’t reference anything qualitative. Why those neural processes should entail the existence of categorically private qualia – the basic, ever-present elements of subjective phenomenal consciousness – is the hard problem.

    Btw, I wouldn’t call myself a pro since I don’t get paid to ponder the HP. But it’s the best problem going if you ask me.

  137. Tom –

    Thanks for the encouragement – and for qualifying Arnold’s use of “event”. I thought I agreed with him on that issue but wasn’t really sure, given my confusion about his use of “events”.

    “Pro” was just intended as shorthand for “actually knows what they’re talking about”, not necessarily meant to imply getting paid. The exchange between you and Arnold has been very beneficial to me, and I appreciate the opportunity to eavesdrop and ask questions.

    BTW, in the course of reviewing some of the earlier comments in this thread I went back and reread KTO and was reminded that I liked that paper a lot – notwithstanding that I know you subsequently changed your position a bit.

  138. Charles and Tom,

    I think this has been a fruitful discussion. I can understand Tom thinking that I contradict myself in #139. What I was trying to convey (apparently unsuccessfully) was this: If Charles is willing to accept an objective formulation (e.g., all points equidistant from … ) as a description of a (subjective) phenomenal experience of a circular object, then he should not object to taking the (objective) activation pattern of L, M, and S retinal cones as a description of the (subjective) phenomenal experience of the object’s color. I express my own view when I say that neither proposal is valid. This is why I believe we need the principle of corresponding *analogs* to bridge 1pp with 3pp in a science of consciousness.

    The pendulum illusion is another example of how the structure and dynamics of a theoretical model (3pp) predict a novel phenomenal experience (1pp) on the basis of corresponding subjective and objective analogs. You can read a description of the illusion and instructions for experiencing it yourself in *The Cognitive Brain*. See p. 239 and p. 242, Fig. 14.5, here:

    http://people.umass.edu/trehub/thecognitivebrain/chapter14.pdf

  139. Whenever people of good will seem to be at an impasse, I begin to suspect some unrecognized underlying problem. So I decided to review the issue of primary and secondary properties (à la Roger Scruton’s “Modern Philosophy”, very useful as a mini-encyclopedia) and was reminded that this has been for centuries, if not millennia – and I gather remains – a hot topic of debate. So it seems unlikely that we will resolve it here.

    On the bright side, I think it interesting and encouraging that we three seem to agree on the general idea we variously describe as “dual-aspect monism”, “psycho-physical parallelism”, “dual descriptions”, and “goal-specific vocabularies” (my term, derived from a Ramberg essay in “Rorty and His Critics”).

    Hopefully we’ll revisit this issue here sometime. It’s been a real pleasure for me.

  140. Pingback: Preserving the Self for Later Emulation: What Brain Features Do We Need? – Ever Smarter World
