Are we aware of concepts?

Are ideas conscious at all? Neuroscience of Consciousness is a promising new journal from OUP, introduced by its editor Anil Seth here. It includes an interesting opinion piece by David Kemmerer which asks: are we ever aware of concepts, or is conscious experience restricted to sensory, motor, and affective states?

On the face of it, a rather strange question. According to Kemmerer there are basically two positions. The ‘liberal’ one says yes: we can be aware of concepts in much the same way we’re aware of anything else. Just as there is a subjective experience when we see a red rose, there is another kind of subjective experience when we simply think of the concept of roses. There are qualia that relate to concepts just as there are qualia that relate to colours or smells, and there is something it is like to think of an idea. Kemmerer identifies an august history for this kind of thinking, stretching back to Descartes.

The conservative position denies that concepts enter our awareness. While our behaviour may be influenced by concepts, they actually operate below the level of conscious experience. While we may have the strong impression that we are aware of concepts, this is really a mistake based on awareness of the relevant words, symbols, or images. The intellectual tradition behind this line of thought is apparently a little less stellar – Kemmerer can only push it back as far as Wundt – but it is the view he leans towards himself.

So far so good – an interesting philosophical/psychological issue. What’s special here is that, in line with the new journal’s orientation, Kemmerer is concerned with the neurological implications of the debate and looks for empirical evidence. This is an unexpected but surely commendable project.

To do it he addresses three particular theories. Representing the liberal side he looks at Global Neural Workspace Theory (GNWT) as set out by Dehaene, and Tononi’s Integrated Information Theory (IIT); on the conservative side he picks the Attended Intermediate-Level Representation Theory (AIRT) of Prinz. He finds that none of the three is fully in harmony with the neurological evidence, but contends that the conservative view has distinct advantages.

Dehaene points to research that identified specific neurons in a subject’s anterior temporal lobes that fire when the subject is shown a picture of, say, Jennifer Aniston (mentioned on CE – rather vaguely). The same neuron fires when shown photographs, drawings, or other images, and even when the subject reports seeing a picture of Aniston. Surely then, the neuron in some sense represents not an image but the concept of Jennifer Aniston? In defence of the conservative view, Kemmerer argues that while a concept may be at work, imagery is always present in the conscious mind; indeed, he contends, you cannot think of ‘Anistonicity’ in itself without a particular image of Aniston coming to mind. Secondly, he cites further research showing that deterioration of this portion of the brain impairs our ability to recognise, but not to see, faces. This, he contends, is good evidence that while these neurons are indeed dealing with general concepts at some level, they are contributing nothing to conscious awareness, reinforcing the idea that concepts operate outside awareness. Kemmerer makes a similar point against Tononi: according to Tononi we can be conscious of the idea of a triangle, but how can we think of a triangle without supposing it to be equilateral, isosceles, or scalene?

Turning to the conservative view, Kemmerer notes that AIRT places awareness at a middle level, between the jumble of impressions delivered by raw sensory input on the one hand, and the invariant concepts which appear at the high level on the other. Conscious information must be accessible but need not always be accessed. It is implemented as gamma vector waves. This is apparently easier to square with the empirical data than the global workspace, which implies that conscious attention would involve a shift into the processing system in the lateral prefrontal cortex, where there is access to working memory – something not actually observed in practice. Unfortunately, although AIRT has a good deal of data on its side, the observed gamma responses don’t in fact line up with reported experience in the way you would expect if it were correct.

I think the discussion is slightly hampered by the way Kemmerer uses ‘awareness’ and ‘consciousness’ as synonyms. I’d be tempted to reserve awareness for what he is talking about, and allow that concepts could enter consciousness without our being (subjectively) aware of them. I do think there’s a third possibility being overlooked in his discussion – that concepts are indeed in our easy-problem consciousness while lacking the hard-problem qualia that go with phenomenal experience. Kemmerer alludes to this possibility at one point when he raises Ned Block’s distinction between access and phenomenal consciousness (a- and p-consciousness), but doesn’t make much of it.

Whatever you think of Kemmerer’s ambivalently conservative conclusion, I think the way the paper seeks to create a bridge between the philosophical and the neurological is really welcome and, to a degree, surprisingly successful. If the new journal is going to give us more like that, it will definitely be a publication to look forward to.

 

68 thoughts on “Are we aware of concepts?”

  1. According to Tononi we can be conscious of the idea of a triangle, but how can we think of a triangle without supposing it to be equilateral, isosceles, or scalene?

    What about objects for which we don’t have any easy way of representing them? Say, four dimensional hypercubes? Calabi-Yau manifolds? The Monster group?

    I can certainly, in some sense, contemplate hypercubes, but I have no mental image of them. I’m not at all sure that this means I’m somehow having an experience related to the concept of a hypercube, though—one could probably construe my thinking about it as thinking about some minimal set of properties that suffice to pick it out exclusively (that four-dimensional convex polytope that relates to the cube in the same way as the cube relates to the square, perhaps).

    But I suppose in order to think about whether I ever am conscious of concepts, I’d first have to get clear about what sort of thing a concept even is. Perhaps I’m just thinking about the concept of ‘concept’?

  2. Studies of the brain are secondary. When we talk about concepts we’re aware of them.
    Mental phenomena have a reference and a sense; the sense part is the concept. Perhaps there is no pure concept – that’s arguable – but I’d say it’s a mental phenomenon without a unique reference

  3. According to the retinoid model of consciousness, concepts as such exist only in the pre-conscious synaptic matrices of the brain. Our conscious experience is of imagistic and linguistic (inner speech) descriptions or exemplars of relevant concepts. For example, see “Learning, Imagery, Tokens, and Types” in *The Cognitive Brain* (MIT Press 1991).

  4. Arnold –

    concepts as such exist only in the pre-conscious synaptic matrices of the brain

    In functional terms, what form do concepts take in those matrices? Eg, is the form in any way compatible with Sellars’ “to grasp a concept is to master the use of a word”?

  5. Jochen: “…in order to think about whether I ever am conscious of concepts, I’d first have to get clear about what sort of thing a concept even is. Perhaps I’m just thinking about the concept of ‘concept’?”

    I’d hazard the definition that a concept is a descriptive category as applied to objects and processes, so it’s a cognitive tool deployed in representing reality, not something we’re going to find out there in the world. To be conscious of the concept “cat” is just to apply it or think about it consciously, in which case there will be associated phenomenology. But of course we’re always categorizing things without explicitly applying a concept, e.g., when seeing a cat but not having the conscious episode of thinking of it as a cat. So I guess this is somewhat to agree with Peter that “concepts could enter consciousness without our being (subjectively) aware of them.” Perhaps more precisely: the cat as unconsciously categorized – that is, conceptualized – by me as a cat enters my consciousness when it walks by.

  6. Charles: “In functional terms, what form do concepts take in those matrices? Eg, is the form in any way compatible with Sellars’ ‘to grasp a concept is to master the use of a word’?”

    They take the form of images in the imaging matrices that are mapped to class-cell tokens, which in turn are mapped to words in our brain’s lexicon. The words are inputs to our semantic networks which form our lexical propositions. But the entire system of images, words, and propositions is activated pre-consciously as you read this comment. What you consciously experience are associated images and inner speech induced by the pre-conscious mechanisms of the brain. The only thing we are conscious of before our senses and unconscious semantic processing add content to our phenomenal world is our being at the perspectival origin of a surrounding space (the world around us).

  7. Thanks, Arnold. So there’s a linkage between a concept (which in the RS is an image) and the use of a word, at least when the word is the name of an object that can be imaged. This suggests to me that Sellars might be on the right track in relating a concept and “mastering the use of a word”.

    What about Jochen’s hypercubes, which can’t be imaged? When I was learning about such spatial abstractions, in working some problems I’d first try to visualize a one, two, or three dimensional instance of the problem, formulate a solution in that lower dimension, then restate the steps generalized to the abstract space in question. Thus, although images played a role in the process, they did not constitute the concept itself but were merely tools used in learning to say things about the abstract entity – ie, in learning the use of a word.

    But what’s involved in “grasping a concept”? Concepts seem to emerge rather than suddenly popping up fully formed – a process, as is acquisition of facility with a word. In which case a better word than “grasping” might be “acquiring”.

    Based on these and other considerations, I’d elaborate Sellars’ statement something like:

    To acquire a concept relevant to a context-dependent community is to develop facility in the use of a word in a vocabulary appropriate to that context at a level adequate for effective communication within that community.

    From that perspective, what would it mean to “be aware” of a concept? In general, it would not require an ability to image a named object: use of the word “cat” doesn’t look like a cat (or like anything else). And talking about or using a word is a manifestation of facility with the word, not the facility itself – which presumably is a complex neural structure. In principle, one could even acquire such facility – say, by extensive reading – yet never actually employ it overtly.

    I find the vocabulary in Kemmerer’s article – not only his but also that used in the various quotes – so ambiguous as to preclude being sure, but it seems that what he means by “being aware of a concept” is to (in some sense) “recognize a concept as being present in a phenomenal experience”. And I’d agree that one can’t be “aware of a concept” in that way, since a facility – implemented in a neural structure – can’t in any sense be “present in a phenomenal experience”.

    OTOH, if “being aware of a facility” is interpreted as knowing that one has that facility, then I’d say one can be aware of a concept. Although because of the “beetle-in-a-box” problem, it may be necessary to successfully exercise the facility within the relevant community in order to be sure.

    All of which seems to point to a recurring theme, viz, that a more precise vocabulary would be a tremendous help in addressing such issues. Or at the very least paring down the existing vocabulary by eliminating synonyms and not, as Kemmerer does (and others often do), listing several synonyms and then using them in seemingly arbitrary and inconsistent ways.

  8. Charles: “All of which seems to point to a recurring theme, viz, that a more precise vocabulary would be a tremendous help in addressing such issues.”

    I agree, but when we try to sharpen our understanding of such issues, we face the problem that they involve two different domains of description, the private 1st-person description, and the public 3rd-person description couched in terms of the underlying brain mechanisms. It is only recently that we have been able to express our understanding in the latter terms.

  9. OK, Arnold, I have to go back to basics. Suppose a subject is undergoing visual sensory stimulation by a uniform monochromatic field of light filling the FOV. Asked to describe what’s being “seen”, the subject responds “red”.

    Is that a first person “description”? If so, what specifically is it a “description” of? Certainly not of any associated neural activity, which presumably is what is described from a third person perspective.

    Or is it the phenomenal experience accompanying the sensory stimulation event that is “described”? If so, aren’t the first and third person “descriptions” of entirely different entities?

    And in any event, if the third person description is of the phenomenal experience E, doesn’t that make E causally effective contrary to the causal sequence I thought we had agreed was more likely, viz:

    sensory stimulation -> neural activity -> utterance of a word -> phenomenal experience

  10. Charles,

    That certainly is first-person description. It is a description of the retinoid system’s response to a uniform field of retinal stimulation of a particular wavelength (the third-person description).

    The correct causal sequence:

    sensory stimulation -> retinoid activity -> phenomenal experience 1 -> preconscious perceptual classification -> utterance of a word (“red”) -> phenomenal experience 2 -> preconscious perceptual classification, etc, etc.

  11. Sellars distinguishes non-epistemological and epistemological experiences: the former apparently has the sense of “undergoing” – the bare event of having sensory receptors stimulated – while the latter he calls an “experiencing” – the cognitive event of recognizing (with some degree of confidence) the source of the stimulation as being some named object or event. I take it that those more or less fit with your PE1 and PE2. If so, when “PE” is used here (ie, at CE), which do you assume is usually the referent?

    I don’t get your second “preconscious perceptual classification”. Is it a typo that should read “conscious” – which seems the logical successor to a “classification” event? If so, doesn’t that make PE2 causally effective?

    If it’s not a typo, how do the two classification events differ? And what follows the second? A new sensory stimulation event? In which case, do your arrows signify only time sequence, not causality?

  12. As I see it, at the moment of PE1 there is no referent because a perceptual classification has not been made. It is not yet an epistemological experience. PE2 takes PE1 as its referent for the inner speech predicate “Red”. The arrows signify time sequence and possible causal processes induced by successive PE states and perceptual feedback.

  13. It’s me being a bit of a broken record, but regardless of whether you think you are aware of concepts, are you aware of being aware of concepts? What about being [aware of [being aware of [being aware of concepts]]]?

    You can see the recursive pattern where eventually you just run out of awareness. I put it in brackets above to better show the recursive pattern as we pull out from one bracket into a larger encompassing bracket, and from that into an even larger encompassing bracket that surrounds the first two… and there’s presumably an even larger one that we’re always blissfully unaware of, even though, as you can see from the pattern, it makes up our awareness.
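
    To make the bracket pattern concrete, here’s a toy sketch in Python (purely illustrative, nothing more):

        # Purely illustrative: wrap a state in successive layers of "awareness".
        # Whatever depth we stop at, the loop doing the wrapping is never
        # inside any of the brackets it produces.
        def aware_of(state):
            return "aware of [" + state + "]"

        state = "concepts"
        for _ in range(3):
            state = aware_of(state)
        print(state)  # aware of [aware of [aware of [concepts]]]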

  14. Arnold – Now that I understand the PE1-PE2 distinction in comment 10, I like it a lot, especially the separation of PE1 and PE2 since it makes clear how a neural activity pattern consequent to sensory stimulation (PE1) points to a learned color word which then plays a role in producing PE2. I gather that the “etc” in your sequence indicates that for multichromatic stimulation from the FOV the process repeats for each identifiable monochromatic region. If so, that seems consistent with my speculation that production of PE2 might be analogous to a paint by numbers picture with each distinguishable color word playing a role similar to that of a quale. I can see the value of matching combinations of spatially positioned color words to saved combinations that have been associated with named objects. But it still isn’t clear why we experience the mental imagery since it seems to add nothing. Once the objects in a FOV have been identified and positioned relative to the subject, the affordances presumably also can be identified and responsive action can be initiated without the need for mental imagery.

    Re terminology, I assume your PE1 is the “neural activity” in my comment 9. I’m not clear on exactly what your PE2 is. Visual mental imagery of each monochromatic patch? of a pattern of patches (AKA objects)? of something else? In any event, the latest post by Stephen Butterfill at the Brains blog seems to make a distinction similar to yours. He calls what I take to be roughly your PE1 “phenomenal expectation” and calls your PE2 “perceptual experience”. I prefer Sellars’ terminology (“epistemic experience” vs “non-epistemic experience”) since it makes clear what I see as the essential difference between PE1 and PE2. But I guess “a rose by any other name …” applies. As long as they’re defined and applied consistently, any two labels will do.

  15. Arnold Trehub, in #10: “The correct causal sequence:

    sensory stimulation -> retinoid activity -> phenomenal experience 1 -> preconscious perceptual classification -> utterance of a word (“red”) -> phenomenal experience 2 -> preconscious perceptual classification, etc, etc.”

    Does it follow from this sequence that one must say “Red” in order to classify the color seen as an instance of red, or in order to activate the concept of redness, or something else? What if one simply looks and doesn’t speak (or think a word)? Then is part of the experience missing?

  16. Cognicious: “Does it follow from this sequence that one must say “Red” in order to classify the color seen as an instance of red, …?”

    No. The classification of color in the brain is made as soon as the sensory excitation pattern is mapped to a dedicated class cell in a synaptic matrix of the observer’s brain. Saying “Red” is simply the linguistic report in the public domain. See “Learning, Imagery, Tokens, and Types: The Synaptic Matrix” on my ResearchGate page.
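
    As a bare-bones sketch of that mapping chain (a stand-in illustration only – the pattern keys and token names below are invented, and real synaptic matrices are learned neural structures, not lookup tables):

        # Stand-in lookup tables for the two mappings described above.
        excitation_to_class_cell = {"excitation_620nm": "class_cell_RED"}
        class_cell_to_word = {"class_cell_RED": "Red"}

        def classify(excitation_pattern):
            # Classification is complete at the class-cell step; the word
            # is only the linguistic report in the public domain.
            class_cell = excitation_to_class_cell[excitation_pattern]
            return class_cell_to_word[class_cell]

        print(classify("excitation_620nm"))  # prints: Red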

  17. Saying “Red” is simply the linguistic report in the public domain.

    I’m delighted to see “report” instead of “description” because the latter suggests to me a homunculus in the Cartesian Theater “viewing” PE2 and describing what it “sees” – which leads us to say things like (Cognicious 15)

    one … say[s] “Red” in order to classify the color seen as an instance of red

    instead of

    one … reports “Red” in response to a neural activity pattern PE1 that is among those patterns associated with the word “red”

  18. Responding to Trehub #16 and Wolverton #17: It seems to me that by the time you can say “Red,” your phenomenal experience of perceiving a color is complete. I therefore have no idea what PE2 is. Why are two phenomenal experiences associated with identifying a seen color?

  19. Cognicious: “It seems to me that by the time you can say ‘Red’, your phenomenal experience of perceiving a color is complete. I therefore have no idea what PE2 is.”

    True, but PE2 is the following conscious experience of naming the perceived color in the appropriate lexicon.

  20. Arnold Trehub #19: “PE2 is the following conscious experience of naming the perceived color in the appropriate lexicon.”

    In the sequence given in #10, saying “Red” precedes PE2. I still don’t understand. Does the subject name the color before he or she consciously experiences doing so?

    Suppose that the experimental protocol is a little different (I’ve been imagining a lab situation), and instead of speaking, the subject pushes a particular button if red appears. Now is PE2 the dawning of awareness that one has pushed the button?

  21. Yes, the subject utters “Red” before he/she is conscious of having done so.

    “Now is PE2 the dawning of awareness that one has pushed the button?”

    Yes, one must act before one is conscious of having acted. No?

  22. Arnold Trehub: “Yes, the subject utters “Red” before he/she is conscious of having done so.”

    Strictly speaking, of course, uttering “Red” precedes becoming conscious of *having* done so. But does uttering “Red” precede becoming conscious of *doing* so? If people learn that they’ve said something only after saying it, the injunction “Think before you speak” is universally disobeyed.

    Speech is indeed sometimes involuntary, but this is rare, at least when speakers are awake. Normally, a person would *decide* to say “Red,” would say it deliberately, and would be aware of saying it, with no waiting period between speaking and awareness of speaking.

    I originally asked “Does the subject name the color before he or she consciously experiences doing so?” (not “. . . consciously experiences having done so”) and similarly for pushing a button.

  23. Cognicious,

    The decision to say “Red” is made pre-consciously. The activation of the motor routine to utter “Red” is made pre-consciously. And the conscious image of the vocalization “Red” follows its performance by some milliseconds. This seems to be what the evidence tells us.

  24. Cognicious,

    In scientific discussions, I doubt if everyone ever agrees on a particular theoretical claim. But I think that the current standard view is that decisions are made in the brain before one is conscious of the decision. There is a wealth of evidence in support of this view. Attempts to relate this issue to the “problem” of free will are a “red herring”.

    Also, if you are troubled by the distinction between 1st-person and 3rd-person descriptions, tell us why and we can discuss it.

  25. Arnold Trehub #25: “I think that the current standard view is that decisions are made in the brain before one is conscious of the decision.”

    I wasn’t aware that this was the standard view among academics. If it is, it contradicts the standard view among laypeople, which, I believe, is that people make decisions consciously, using their brains. If laypeople are wrong, then we’re all automatons, and consciousness is an epiphenomenon. But if consciousness is an epiphenomenon, why did it evolve? If consciousness doesn’t enable an animal to act deliberately and even, at later phylogenetic stages, thoughtfully, what advantage did it have in natural selection?

    “Also, if you are troubled by the distinction between 1st-person and 3rd-person descriptions, tell us why and we can discuss it.”

    Decision making is felt to occur in the first-person realm. (In exceptional situations, such as some emergencies, people do act without thinking, or they think so fast that later they only remember having acted.) To say that the brain makes decisions on its own puts deciding in third-person territory and removes deciding from awareness and personal control. My deciding is then analogous to the activity of my liver and kidneys: it’s a bodily function that a scientist can describe in a third-person way. This result is counterintuitive, and it makes hash of the idea of responsibility.

    The way the distinction between 1P and 3P descriptions is used in explanations troubles me more generally. I haven’t articulated the problem well enough to convey it.

  26. Cognicious: “…it contradicts the standard view among laypeople, which, I believe, is that people make decisions consciously, using their brains. [1] If laypeople are wrong, then we’re all automatons, and consciousness is an epiphenomenon. [2]”

    1. People do make decisions using their brain, but they are conscious of their decisions only after they have been pre-consciously made.

    2. This conclusion is wrong. Consciousness is not an epiphenomenon. Consciousness is our phenomenal world that we make decisions about in order to adapt and thrive!

  27. Cognicious –

    The reason you’re having trouble is that Arnold’s proposed sequence is counterintuitive. So, it’s necessary to think in terms of what goes on inside the brain (at least in an architectural sense). Introspection won’t help (what I take to be one of the take-aways from Scott Bakker’s Blind Brain Theory). Neither will trying to analyze overt behavior as in Libet-type experiments.

    First, a couple of observations. In vision, light stimulates sensory receptors, and preliminary processing of their response results in neural activity in the brain. Whatever information about the outside world is contained in the light, and survives that preliminary processing, is contained in that neural activity in the brain. Further processing can’t create additional information (the Data Processing Theorem from information theory). Therefore, however visual mental imagery (what I take to be Arnold’s PE2) is produced, it can’t contain any additional information available for use in any decision making.
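
    For reference, the standard statement of that theorem (the data processing inequality): if X -> Y -> Z form a Markov chain, i.e. Z is computed from Y alone, then

        % Data processing inequality: downstream processing cannot
        % increase mutual information with the source.
        I(X; Z) \leq I(X; Y)

    Reading X as the light at the retina, Y as the consequent neural activity, and Z as anything computed from Y (mental imagery included), Z can carry no more information about the scene than Y already does.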

    The intuitive understanding of how visual mental imagery relates to responsive action is that via mental imagery we “see” what’s going on out in the world, based on that we “decide” how to react, and then we react. But if you accept the description in the previous paragraph, there can’t be any benefit in basing the reaction on the output of the additional processing which produces the mental imagery. Worse, the additional processing that produces the imagery introduces a delay relative to formulating and initiating the reaction directly from the neural activity.

    This suggests that mental imagery doesn’t play the role assumed in the intuitive understanding. In the particular case of a subject exposed to a uniform field of red light being asked to describe what is “seen”, it would then seem possible, in fact likely, that the response to the question is determined either prior to, or concurrent with, the production of the mental imagery. In Arnold’s Retinoid System, it’s prior. And that suggests the possibility that rather than being a consequence of the mental imagery, a color word instead plays a role in its production. Hence, Arnold’s sequence.

    FWIW, I have independently speculated on that possibility based only on these architectural considerations. But I don’t have relevant specialized knowledge, hence couldn’t argue for that speculation. I still don’t fully understand Arnold’s Retinoid System and so can’t assess it, but obviously I’m inclined to accept its architecture despite the unconventional conclusions to which it leads.

    Hope this helps.

  28. Charles Wolverton: “The reason you’re having trouble is that Arnold’ proposed sequence is counterintuitive. . . . Introspection won’t help . . . Neither will trying to analyze overt behavior as in Libet-type experiments.”

    Well, a proposal shouldn’t be so counterintuitive as to contradict introspection and behavior unless the clashes can be explained, as in, for example, explaining how an optical illusion works.

    “Therefore, however visual mental imagery (what I take to be Arnold’s PE2) is produced, it can’t contain any additional information available for use in any decision making.”

    He said that PE2 is the conscious experience of naming the color. By the way, you seem to use the phrase “visual mental imagery” for what I would call a visual sensation or a visual perception. Imagery, to my understanding, is generated without an external stimulus (i.e., light). Not trying to quibble; agreement on meanings of terms is important.

    “In the particular case of a subject exposed to a uniform field of red light being asked to describe what is “seen”, it would then seem possible, in fact likely, that the response to the question is determined either prior to, or concurrent with, the production of the mental imagery. In Arnold’s Retinoid System, it’s prior. And that suggests the possibility that rather than being a consequence of the mental imagery, a color word instead plays a role in its production. Hence, Arnold’s sequence.”

    The response to the experimenter’s question requires a color word because that’s how we answer such questions, but if you haven’t already identified the color to yourself, you won’t know what word to say.

    I can also ask how the retinoid model accounts for color perception by children who haven’t yet learned any color names. Surely they don’t see flowers in shades of gray until a helpful adult tells them “That color is called purple”! And if they did, they’d never learn what purple was.

  29. Cognicious –

    … unless the clashes can be explained

    Which is what Arnold’s RS purports to do.

    agreement on meanings of terms is important

    Absolutely. Or in the absence of agreement, explicitly define terms to be used. Unfortunately, I haven’t found much agreement on the terms used in this field of study, in particular, a term for the picture formed in a subject’s mind due to light reflected from a scene entering the subject’s eyes. “visual mental imagery” seemed descriptive and non-problematic, but perhaps you’re right that it really isn’t. So, how about “picture in your mind”?

    He said that PE2 is the conscious experience of naming the color. [in comment 19]

    True, and I too find the wording of that rather terse comment confusing. My guess is that Arnold meant something like:

    PE2 is the step in which the subject becomes conscious of having previously executed the “preconscious perceptual classification” (ie, naming) of the neural activity constituting PE1

    Perhaps Arnold will clarify for us.

    The response to the experimenter’s question requires a color word … but if you haven’t already identified the color to yourself, you won’t know what word to say.

    In Arnold’s sequence, the determination of a color word that has become associated (via previous learning) with a pattern of neural activity (which is what I take PE1 to be) occurs in the step he calls “preconscious perceptual classification”, which is indeed prior to uttering the word. But if by “identified the color to yourself” you mean “having become conscious that the picture in your mind (what I take to be PE2) is of a uniform red field”, then your statement is clearly not consistent with Arnold’s sequence.

    I can also ask how the retinoid model accounts for color perception by children who haven’t yet learned any color names. Surely they don’t see flowers in shades of gray

    Implicit in this quote is the assumption that prelingual children have “color perception” and “see” something, by which I take it you mean “can produce a picture in the mind”. Whether that assumption is correct is what I take to be the issue at hand, and Arnold’s sequence seems to suggest that the assumption isn’t correct.

    In my previous comment I put “see”, “seen”, etc in quotes in order to suggest that care should be exercised in using those words in this kind of discussion. When speculating on exactly what it means to “see” something, we obviously can’t assume the meaning of “see”.

  30. Charles: “PE2 is the step in which the subject becomes conscious of having previously executed the “preconscious perceptual classification” (ie, naming) of the neural activity constituting PE1”

    Yes, PE2 is the conscious experience of uttering the color name. Becoming conscious of having previously executed the preconscious perceptual classification may or may not happen, depending, for example, on the sophistication of the subject.

  31. Charles Wolverton #30: “I haven’t found much agreement on the terms used in this field of study, in particular, a term for the picture formed in a subject’s mind due to light reflected from a scene entering the subject’s eyes. “visual mental imagery” seemed descriptive and non-problematic, but perhaps you’re right that it really isn’t. So, how about “picture in your mind”?”

    The conventional term for it in psychology (perhaps not in philosophy) is “percept.” Percepts may be visual, auditory, tactile, and so on. “Mental imagery” means to me what you see in your mind’s eye, not in reality. Percepts are also defined as occurring in the mind – I checked two dictionaries – but this seems an odd use of the word “mind.” Percepts are items in sensory experience.

    “In Arnold’s sequence, the determination of a color word that has become associated (via previous learning) with a pattern of neural activity (which is what I take PE1 to be) occurs in the step he calls “preconscious perceptual classification”, which is indeed prior to uttering the word. But if by “identified the color to yourself” you mean “having become conscious that the picture in your mind (what I take to be PE2) is of a uniform red field”, then your statement is clearly not consistent with Arnold’s sequence.”

    By “identified the color to yourself” I mean “perceived the color,” with or without thinking its name.

    “[Me: “I can also ask how the retinoid model accounts for color perception by children who haven’t yet learned any color names. Surely they don’t see flowers in shades of gray”]

    Implicit in this quote is the assumption that prelingual children have “color perception” and “see” something, by which I take it you mean “can produce a picture in the mind”. Whether that assumption is correct is what I take to be the issue at hand, and Arnold’s sequence seems to suggest that the assumption isn’t correct.”

    I simply don’t believe that children are born color-blind and are relieved of this condition only when they learn the names of colors. For one thing, if you never had a sensation (percept) of red in your visual field, you wouldn’t learn what the word “red” meant. The word attaches to the quale. Further, it’s implausible that a child’s visual world would consist of black, white, gray, and red when the child picked up “red” as the first color word and then, if he or she forgot the word the next day, would revert to black, white, and gray.

    In Western societies, which, unlike some nonliterate societies, have names for many colors, I suppose the names of the spectral colors are learned first, along with black and white and brown; perhaps gray and pink come along soon. I don’t remember learning any of those words. But there are other color names that enter one’s vocabulary later, such as chartreuse, magenta, beige, and turquoise. Even later: ecru, mauve. There probably are adult English speakers who can’t match the words “ecru” and “mauve” to the correct paint swatches, although they can see those colors.

    I don’t recall knowing what cyan was until adulthood, when I encountered it in connection with work. (Cyan is important in color printing.) The color cyan hadn’t been missing from my sensorium; I just hadn’t known it had a name.

    The point is, knowing the name of a color enriches your vocabulary but doesn’t enrich your visual experience. You see the color with or without having a word for it. If your mind generates the word, it can do so only because you perceive the color. That’s why I don’t believe that naming a seen color (or pushing a button, or writing the name, or tapping it out in Morse code) occurs before observing how it looks.

  32. Arnold –

    In the context of this thread I’m having trouble with “transparent” in your definition of “consciousness”. As I understand it, the visual part of your RS is effectively a 3-D model of the subject’s FOV, and its points (cells) are patterns of retinoid (essentially, neural) activity. Each preconscious classification step assigns a cell’s activity pattern a color word based on previously learned pattern-word associations which results in “patches” of cells with uniform color word assignments. The result can be viewed as a rendering relative to I! of the subject’s FOV in the medium of either neural activity patterns or color words. And it’s this final result that I take to be the basis of your “transparent representation”.

    In this context, I take “transparent” to mean that the conscious experience of the representation doesn’t involve the rendering process but only the result (content). But since neither neural activity patterns nor color words (or other set of pattern labels) seems to fit that definition, I don’t understand how to understand the “conscious experience of the representation”.

  33. Charles,

    The conscious experience of the representation is given by the fact that each “pixel” of the experience is located at some distinct 3D coordinate in perspectival relation to the self locus (I!) in retinoid space. This is in accordance with my working definition of consciousness.

  34. Understood, Arnold, but my specific question is: what is the nature of a “pixel”?

    It can either be a set of features (what I mean by a “pattern”) of the neural activity at that point in virtual 3-D space, a color word (or any other scheme of indexing distinguishable sets of patterns), or something else. In the case of neural activity patterns, the representation isn’t “transparent” if that word means that the subject can’t introspect underlying neural activity (what I take to be a major point of Scott’s BBT). And it isn’t obvious to me what it would mean for the subject to experience an indexing scheme. That leaves “something else”.

    The word “pixel” suggests some experience of “color” in the everyday sense of a “picture in the mind”. But since as far as I know there’s actually no such thing in the brain, what we think of as the mental picture of a color must be just some currently unexplained manifestation of the neural activity. And it’s not clear to me whether any such manifestation could be reasonably described as “transparent”.

    It always seems to me (perhaps mistakenly) to come down to the question “how and why do we experience what we call (misleadingly) ‘seeing a color’?”

  35. Charles,

    I would say that we do not “see” a color; we *experience* a color. And the experience is transparent because we are oblivious to the complex physiological pattern of neuronal activity that constitutes the color experience which is out there in our phenomenal world — the egocentric pattern of autaptic-cell activation in retinoid space.

  36. Charles Wolverton #35: “It always seems to me (perhaps mistakenly) to come down to the question “how and why do we experience what we call (misleadingly) ‘seeing a color’?””

    Are you asking (with your “why”) why animals developed color perception at all and (with your “how”) how incoming light of so-and-so-many nanometers gets translated into a color sensation?

  37. Cognicious –

    Based on Arnold’s last comment, I think that modulo some vocabulary he would more-or-less agree with something like:

    For a subject “to see a color” is to experience neural activity consequent to visual sensory stimulation by light with spectral power distribution from a certain family, to be able to associate the experience with a color word, and to locate the neural activity constituting the experience relative to the subject’s position.

    Based on that, I think we can resolve some issues you’ve previously raised.

    An infant experiences the neural activity (or has the “phenomenal experience”, as it’s often called) but can’t associate the experience with a color word and has no sense of the experience’s relative location. Hence, to describe the infant as “seeing a color” is misleading according to the above definition. In that technical sense, the infant is indeed born “color-blind”.

    In time, the infant/child is trained to associate such experiences with color words. However, there is still no sense of the experience’s relative location. This state of development is captured in Arnold’s sequence (comment 10) by the steps up to and including the ability to utter a color word.

    Finally, at some point the child develops the ability to locate such experiences in space relative to the child’s position (the PE2 step in Arnold’s sequence), at which point “consciousness” emerges (according to Arnold’s definition), and it then becomes meaningful to describe the child as “seeing a color” in the sense of the above definition.

    Are you asking … why animals developed color perception at all …

    Yes. It isn’t obvious (to me and some others) that there’s practical benefit in producing conscious experience. As I argued in comment 28, all the information that can be recovered from the visual sensory stimulation is present in the consequent neural activity, and no subsequent processing of that neural activity can add more. Hence, any response to the stimulation can be produced without the additional processing needed to get to consciousness. Of course, I’m open to being dissuaded from that position.

  38. Charles Wolverton #38: “An infant experiences the neural activity (or has the “phenomenal experience”, as it’s often called) but can’t associate the experience with a color word and has no sense of the experience’s relative location. Hence, to describe the infant as “seeing a color” is misleading according to the above definition. In that technical sense, the infant is indeed born “color-blind”.”

    I had thought that in #35 you said we *misleadingly* call an experience “seeing a color” because a color isn’t something outside us that we see in the same sense in which we see an object. (When you see a bluebird, there’s a real bluebird out there. When you see blue on the bird’s feathers, the blue is generated by your visual system.) Now it seems that your reason for using “misleadingly” was different: you don’t want to say the infant sees a color unless the infant knows the color word. This seems to me an arbitrary requirement. The business about the experience’s relative location, I don’t understand. Its location relative to what? An infant knows where the color is in his or her visual field. Only if we study the brain do we get a sense of where our neural activity is located – which lobes, for example – and we never feel that activity as it occurs in the brain.

    “In time, the infant/child is trained to associate such experiences with color words. However, there is still no sense of the experience’s relative location. This state of development is captured in Arnold’s sequence (comment 10) by the steps up to and including the ability to utter a color word.”

    Again, I’m unsure what “relative location” means here. Binocular vision develops in the first few months after birth, long before color words are learned, so if you mean that the child can name colors while yet unable to locate colored objects in space, no, that sequence is reversed.

    I understood the sequence that Arnold Trehub gave in #10 as describing what happens every time a person sees a color. Perhaps this was a mistake. You’re apparently talking about that sequence as an account of stages in a child’s development.

    “Finally, at some point the child develops the ability to locate such experiences in space relative to the child’s position (the PE2 step in Arnold’s sequence), at which point “consciousness” emerges (according to Arnold’s definition), and it then becomes meaningful to describe the child as “seeing a color” in the sense of the above definition.”

    Well, yes, given that this definition of “seeing a color” includes requirements that go beyond the conditions that justify the ordinary use of the term, then when those requirements are met, it becomes meaningful to apply the term as thus defined. But that’s tautological. Why object to the ordinary use, in which infants – and adults – see colors in the world around them without verbalizing anything? If I’m choosing paint, I can easily distinguish lemon yellow from cadmium yellow, and many shades in between, without thinking of color words.

    “It isn’t obvious (to me and some others) that there’s practical benefit in producing conscious experience.”

    Conscious experience gives animals more control over their behavior. Without consciousness, the information that impels responsive behavior is missing. For example, a fish that isn’t aware of a predator has no reason to swim away from it, and a predator that can’t locate prey by means of its senses will starve. An animal also needs to be conscious of its own behavior, such as fleeing or hunting, if the behavior is to be voluntary.

    Conscious experiences, both sensory and emotional, also provide motivation. If animals weren’t troubled by itching, they wouldn’t scratch, and scratching is adaptive because it removes skin parasites. Rarely, a person is born who can’t feel pain; keeping such people alive is difficult. Fear is a helpful motivator when something in the environment must be avoided. Feeling hungry motivates us to eat, and feeling no longer hungry motivates us (ideally) to stop.

    “As I argued in comment 28, all the information that can be recovered from the visual sensory stimulation is present in the consequent neural activity, and no subsequent processing of that neural activity can add more. Hence, any response to the stimulation can be produced without the additional processing needed to get to consciousness.”

    You’re applying a third-person view to a first-person process. If the neural activity takes place unnoticed, nothing will come of it. The “information” implied in neural activity informs no one if it isn’t perceived. Without consciousness, *no* response to the stimulation will be produced. I’ll even say that the kind of neural activity we’re talking about has the purpose of creating a conscious experience. It doesn’t make sense without doing so. Natural selection wouldn’t have acted on neural activity that had no consequences for the organism.

    Color perception specifically, as an item of experience, has a role for some animals in identifying food and in breeding.

  39. Cognicious –

    This topic is highly technical and specialized, so “common sense”, “ordinary use”, and what “most people” think are totally irrelevant. That’s why I keep harping on vocabulary. The everyday vocabulary of vision is misleading, but so far there seems to be no generally accepted technical alternative. So, we’re left with either using it – but carefully redefining the words – or making up new words. Most of the time we do the former but unfortunately often without the critical step of carefully redefining the words.

    All the spatial references are to the 3-D model of external space that is internal to, and a key feature of, Arnold’s Retinoid System (RS). They are from the perspective of his hypothesized position I! in that internal space. The RS is complex, so there’s no shortcut to understanding it (even vaguely, as in my case); you have to read about it in the material he frequently cites here. (Do a CE site search for “researchgate”.)

    The external world is irrelevant to my comments since my assumed input is just the information-bearing light incident on the retina. In principle, that light could be artificially generated and used to directly illuminate the subject’s retina, in which case there would be no external world in the relevant sense.

    As defined in my last comment, the ability “to see” something isn’t innate, it’s a cognitive achievement. (A difficult concept – at least for me – addressed in Sellars’ essay “Empiricism and the Philosophy of Mind”, especially section 24. “Knowledge, Mind, and the Given” by deVries and Triplett is an explication of that notoriously difficult essay.) The innate-cognitive achievement distinction is similar to that between the ability to distinguish black print from a white background and understanding the print’s meaning. There are lots of steps in getting from the former to the latter, just as there are in getting from the ability to distinguish among experiences of neural activity to the ability to identify specific ones as being “of colors”. And that’s where learning color words comes in – an infant can’t identify an experience as “seeing red” until the ability to reliably say “red” when the experience occurs has been acquired. The experience doesn’t change but one’s relationship to it does.

    The question about causal efficacy of conscious experience is whether the behaviors you describe require it. Your arguments in favor are just assertions that they do. An argument pro or con has to address processing in the brain.

    And I have no idea what your final paragraph re 1pp vs 3pp is about. The quote to which you seem to be responding has to do with processing in the brain. The neural activity in question is (conceptually) input to one or more processors, so terms like “perceived” and “unnoticed” seem inapplicable.

  40. Mr. Wolverton, if this discussion assumes that one already understands and subscribes to the retinoid model, then the topic is narrower than I thought, and I don’t qualify to participate.

    This part, however, does not seem to involve the retinoid model: “The question about causal efficacy of conscious experience is whether the behaviors you describe require it. Your arguments in favor are just assertions that they do. An argument pro or con has to address processing in the brain.

    And I have no idea what your final paragraph re 1pp vs 3pp is about. The quote to which you seem to be responding has to do with processing in the brain. The neural activity is question is (conceptually) input to one or more processors, so terms like “perceived” and “unnoticed” seem inapplicable.”

    The points I tried to make are hard to articulate. To confine all remarks to “processing in the brain” is to take a third-person view, which leaves out consciousness altogether; consciousness is a first-person phenomenon. It’s no surprise that excluding the conscious being’s mental activity as experienced supports the position that this mental activity is unnecessary. One can then construe people as more like robots than like biological entities that have goals of their own, not goals programmed in by outsiders. But then the question arises how such a population could have evolved.

  41. Cognicious,

    The retinoid model explains how first-person phenomena are experienced in the brain. It rejects the proposition that such subjective mental events are unnecessary. Without them we would be left without our phenomenal world and human adaptation would be impossible.

  42. Arnold –

    How do “first-person phenomena”, “subjective mental events”, and “our phenomenal world” relate to PE1 and PE2 per your comment 10 sequence?

  43. Charles,

    All first-person phenomena depend on the activation of the primitive phenomenal world (minimal consciousness); call this PE0 — the initial activation of retinoid space upon awakening. PE1 and PE2, following your query about RED and naming RED, are subjective mental events, i.e., elaborations, qualia, *added to our phenomenal world* via visual perception and semantic-lexical productions. The point is that our primitive phenomenal world/subjectivity (activation of the RS) constitutes consciousness, and without it we cannot experience PE1 or PE2.

  44. Arnold –

    OK, got it (I think).

    I assume that when people talk about “phenomenal experience” being causal, they mean at the PE1/PE2 stage: we “see” the part of the environment in our FOV and make decisions based on what we “see”. My hypothesis is that the information available in the neural activity that constitutes PE0 is adequate for many, probably most, possibly all responsive actions. Maybe “not causally efficacious at all” is too strong, but it seems clear that in situations where quick response is critical – say, returning a high speed tennis serve – the response can’t be based on “seeing” in the PE1/PE2 sense. The additional processing needed to produce PE1/PE2 introduces unnecessary delay.

    Independent of whether or not you agree with that, using your comment 10 terminology/symbology seems to be very helpful in discussing these very difficult (for me, anyway) issues. Thanks.

  45. Charles: “returning a high speed tennis serve – the response can’t be based on “seeing” in the PE1/PE2 sense.”

    Absolutely. Reflexive, sensory-motor action bypasses seeing in the PE1/PE2 sense.

  46. Arnold Trehub #46: “Reflexive, sensory-motor action bypasses seeing in the PE1/PE2 sense.”

    Are you sure you want to call that kind of response reflexive? In baseball, expert batters don’t aim at the approaching ball. The ball comes too fast for that. Instead, they adjust their posture and start their swing earlier, having intuitively plotted the ball’s trajectory by watching the pitcher’s movements as he winds up. This response occurs too fast to be verbalized (and probably couldn’t be verbalized completely), but it isn’t rigid and stereotyped like a true reflex, such as a knee jerk or a contraction of the pupil.

  47. Cognicious,

    I don’t know about expert batters, but in my own past experience in batting and as a hockey goalie, I would strike out every time and allow dozens of opponent goals if I based my reactions on how the pitcher winds up, or on the skater’s action just before he shoots the puck. It is where the baseball goes or where the puck is flying that automatically/reflexively governs where I would swing or where I would intercept the puck.

  48. Cognicious –

    It’s no surprise that excluding the conscious being’s mental activity as experienced supports the position that this mental activity is unnecessary.

    That’s not quite what I’m doing. I’m asking for examples of things we can do with visual conscious phenomenal experience PE2 that we couldn’t do with only preconscious phenomenal experiences PE0 (see Arnold’s comment 44) and PE1. So far, no one I’ve asked has suggested any that are convincing. My data processing argument suggests to me a reason to be skeptical that there are any such examples, although I admit that there being none seems unlikely, since the nature of evolution suggests that there should be.

    Are you sure you want to call that kind of response reflexive?

    Yes, because in this context Arnold probably is, and I definitely would be, using the word “reflexive” ala dictionary.com’s meaning #2, not meaning #1. (Actually, I consider all behavior to be reflexive per meaning #1 as well, since I’m skeptical about volition, but that’s a separable issue.)

    The example of a subject whose retina was stimulated by a uniform field of red light and was asked to respond with a color word was chosen to be as simple as possible in order to avoid complexities irrelevant to the immediate point. In general, the FOV content could be a white surface with a black square on it, or a baseball bat, etc – even a pitcher in the process of throwing a pitch which the subject is trying to hit. And the pitcher and the batter could have a long history of facing each other. Etc, etc. The processing is obviously much more complex in such multifaceted environments, but at base the model is still sensory stimulation in (including interoceptive inputs such as the batter’s physical well-being and remembered experiences), responsive behavior out.

    And the question remains: does PE2 play a role? Your intuition tells you that it does, and so does mine. But IMO, a convincing argument in support of that intuition will have to address what goes on inside the brain.

    BTW, “Charles” is fine. Unlike Arnold, I have no status in this arena (or any other, for that matter!)

  49. Charles Wolverton #49: “I’m asking for examples of things we can do with visual conscious phenomenal experience PE2 that we couldn’t do with only preconscious phenomenal experiences PE0 (see Arnold’s comment 44) and PE1. So far, no one I’ve asked has suggested any that are convincing.”

    I take PE0 to be a conscious state, though without any definite content, simply the state of being awake and ready for experiences:

    “All first-person phenomena depend on the activation of the primitive phenomenal world (minimal consciousness); call this PE0 — the initial activation of retinoid space upon awakening” (Arnold Trehub #44).

    Accordingly, PE0 isn’t “preconscious,” it’s conscious!

    Then comes PE1, which I think is the experience of seeing a color, the word “seeing” having its ordinary, nontechnical sense:

    “sensory stimulation -> retinoid activity -> phenomenal experience 1 -> preconscious perceptual classification -> utterance of a word (“red”) -> phenomenal experience 2 -> preconscious perceptual classification etc, etc.” (Arnold Trehub #10).

    Seeing red, with or without naming the color, is a conscious experience. Again, why call PE1 preconscious?

    Finally, in PE2, we remember the name of that color and say “Red.” Speaking is also a conscious experience. You know you’re doing it while you do it.

    Now, “And the question remains: does PE2 play a role? Your intuition tells you that it does, and so does mine” (Charles #48).

    My intuition (or perhaps my understanding of the meaning of words) says that PE2 isn’t necessary for consciousness or for perception. Nonverbal animals and preverbal children are aware, to various degrees depending on their visual apparatus and their deployment of attention and so on, of what’s in their visual fields.

    What is it that PE2 plays a role in?

  50. Arnold: All first-person phenomena depend on the activation of the primitive phenomenal world (minimal consciousness); call this PE0 — the initial activation of retinoid space

    Cognicious: Accordingly, PE0 isn’t “preconscious,” it’s conscious!

    I don’t know why Arnold added the phrase “minimal consciousness” to the definition of PE0, since my understanding is that “consciousness” per his formal definition requires not only “activation of retinoid space” but also the subsequent production of a representation of the subject’s 3-D FOV, including recognition of colors, shapes, objects, etc – which is what I interpret PE2 to be. So, I ignored that phrase. But whatever it’s intended to indicate, “minimal X” obviously isn’t equal to “X”.

    why call PE1 preconscious?

    Arnold follows the PE1 step with “preconscious perceptual classification”. I infer that all prior steps are preconscious as well.

    in PE2, we remember the name of that color and say “Red.”

    Not according to Arnold’s sequence. A pattern of activity in retinoid space (PE1, the first step in the production of the representation of the subject’s 3-D FOV) is matched to a color word with which it has been associated via previous training (the “preconscious classification” step), which makes possible the utterance, or other use, of the color word.

    Suppose that a subject has said “I’m seeing red” (in its ordinary sense) and the experimenter follows up with a request for the subject to explain in detail what “seeing red” means. A typical subject will have no idea what to say. We’re attempting to develop a technical response to that request. Obviously, injecting “seeing red (in its ordinary sense)” into the attempt introduces a circularity, eg,

    PE1, which I think is the experience of seeing a color

    In comment 36 Arnold reduces this to “we *experience* a color”, thereby avoiding the circularity. I’d elaborate this a bit and say that consequent to visual sensory stimulation by monochromatic light we have a phenomenal experience with which we learn to associate a color word. I prefer this because it avoids having to address exactly what it means to “experience a color”.

  51. Charles: “So, I ignored that phrase. But whatever it’s intended to indicate, “minimal X” obviously isn’t equal to “X”.”

    Minimal consciousness (PE0) is our basic/primitive conscious experience without sensory content. It is our essential precondition for perceiving and naming anything.

  52. OK, then let’s make this as explicit as possible for the simple case of a static FOV content:

    1. PE0 – RS is activated and ready to receive sensory input

    2. PE1 – Sensory input begins, RS’s cells start responding

    3. In an iterative process akin to foveating from the perspective of I!, distinguishable and recognizable areas of uniform RS cell activity (rate, amplitude, etc) are used to select color words based on previously learned associations

    4. Additional features – if any – of the recognizable areas of uniform RS cell activity (shape, orientation, etc) are used to select appropriate words (line, square, arc, etc) based on previously learned associations

    5. Steps 2 and 3 may result in more complex and recognizable 3-D areas of non-uniform RS cell activity being used to select the names of objects based on previously learned associations (eg, “cat”)

    6. PE2 – the iterative processes of steps 3-5 result in a 1pp describable representation of the FOV content from the perspective of egocentric point I!. This constitutes consciousness per your formal definition except for the addition of “describable”. For present purposes, let’s ignore that difference.

    If this sequence (minus “describable”) is more or less accurate, I don’t see what is gained – and in fact think clarity is lost – by using “minimal consciousness” and “preconscious” in labeling steps. The value of a precise definition is that whether or not something satisfies it is binary – yes or no. Adding qualifiers seems to detract from that and to cause confusion (eg, the last dozen or so comments).
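
    To make sure I’m picturing the data flow correctly, here is a minimal toy sketch in Python. It is purely illustrative: the grid, the lookup table, and names like pe0, pe1, and LEARNED are my own inventions, not anything in your model.

        import numpy as np

        GRID = (64, 64)                              # RS as a toy 2-D grid
        LEARNED = {"uniform-red-area": "red",        # previously learned
                   "circular-boundary": "circle"}    # word associations

        def pe0():
            # Step 1: RS activated and ready to receive sensory input.
            return np.zeros(GRID)

        def pe1(rs, stimulus):
            # Step 2: sensory input begins; RS cells start responding.
            return rs + stimulus

        def classify(features):
            # Steps 3-5: distinguishable areas of RS activity select words
            # (colors, shapes, object names) via learned associations.
            return [LEARNED[f] for f in features if f in LEARNED]

        def pe2(words):
            # Step 6: a 1pp describable representation of the FOV content.
            return "FOV contains: " + " ".join(words)

        rs = pe1(pe0(), stimulus=np.ones(GRID))      # eg, a red disc in view
        print(pe2(classify(["uniform-red-area", "circular-boundary"])))
        # -> FOV contains: red circle

    The only point of the sketch is the ordering: activation, then sensory response, then association-based word selection, then the describable PE2 representation.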

  53. Charles:

    “1. PE0 – RS is activated and ready to receive sensory input
    2. PE1 – Sensory input begins, RS’s cells start responding”

    Your PE1 at step 2 is misleading. In step 1 (PE0), all RS autaptic neurons are already firing above threshold, even without sensory input. This is the baseline for consciousness. Sensory input to RS then raises the excitation level in RS for its particular perceptual quale. If instead of a ganzfeld of red, there is a circumscribed circle of red, that particular region of RS would be targeted by the heuristic self-locus (selective attention) and cellular activation at those particular “coordinates” would be raised. This is how perception, as distinct from its uniform conscious background, works.
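
    As a toy numerical picture of that distinction (the grid size, numbers, and variable names below are invented for the example, not part of the model):

        import numpy as np

        GRID = (64, 64)
        rng = np.random.default_rng(0)

        # PE0: all RS autaptic neurons already firing above threshold,
        # even without sensory input – the baseline for consciousness.
        baseline = 1.0 + 0.05 * rng.random(GRID)

        # A circumscribed circle of red raises excitation in that region of RS.
        yy, xx = np.mgrid[:GRID[0], :GRID[1]]
        in_circle = ((yy - 32) ** 2 + (xx - 32) ** 2) < 10 ** 2
        rs = baseline + 0.5 * in_circle

        # The heuristic self-locus (selective attention) targets the
        # coordinates where activation stands out against the uniform
        # conscious background.
        attended = np.argwhere(rs > baseline + 0.25)
        print(len(attended), "RS coordinates raised above baseline")

    A red ganzfeld is the degenerate case in which in_circle covers the whole grid, leaving nothing for attention to single out against the background.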

  54. So, for the case of a red circle in the subject’s FOV, sensory input raises the activity level in a circular area in RS space, and that pattern of activity constitutes PE1 – in which case the classification step selects the whole descriptive phrase “red circle”?

    If that’s correct, is my original step 6 also correct (assuming obvious changes to step numbering)?

  55. Charles,

    Your step 6 is almost correct, except for your statement that the detection of a red circle “constitutes consciousness”. The distinction between *consciousness* and *perception*, the detection and classification of the red circle, is important. Consciousness is a necessary *precondition* for perception. This view conforms very well to the neuronal mechanisms that do the whole job in the cognitive brain, at least in terms of the brain model that I have proposed. Many empirical findings support the model.

  56. Now I’m even more confused. If

    Consciousness is a necessary *precondition* for perception

    and

    perception [is] the detection and classification of the red circle

    why do you call the classification step in your sequence “preconscious”?

    Let’s simplify even further. Suppose RS is up and running (ie, the subject is having PE0) and a red circle is placed against a white background in the subject’s FOV. I assume the subject will then have PE1. But also suppose that the subject has NOT previously learned to associate the raised activity in RS caused by the circle with either of the words “red” or “circle”. I would not describe the subject’s relationship vis-a-vis the red circle as “being conscious of a red circle” since the subject isn’t able to apply those words (what I meant by saying that “seeing” is a cognitive achievement, and what I assumed was the significance of PE2).

    How would you express the distinction vis-a-vis consciousness between being able to associate words with PE1 and not being able to do so, and how does the distinction affect PE2?

  57. Charles: “why do you call the classification step in your sequence “preconscious”?”

    Because the events in the synaptic matrices and semantic networks that do the classification are *not part of our conscious experience until they are projected in proper spatio-temporal register into retinoid space.* You have to understand the sequence of brain events that compose our distinct perceptions as they are highlighted within our global phenomenal world (retinoid space). Our mental concepts have to be understood in terms of the sequences of events in the cognitive mechanisms of our brain.

  58. Charles, what about this hypothetical case? The subject is a member of a nonliterate tribal culture, being tested by an English-speaking fieldworker for ethnographic research. The subject doesn’t know the words “red” and “circle” but can point to the correct figure among several on a test card, the figure that matches what was on the screen. Do you say he saw the red circle? Do you say he was conscious of it?

  59. Arnold:

    events in the synaptic matrices and semantic networks that do the classification are *not part of our conscious experience until they are projected in proper spatio-temporal register into retinoid space.*

    Understood, and I thought the result of those projections for all objects in a static FOV (in our simple example, only a stationary red circle) is PE2 as described in step 6 of my comment 52.

  60. Charles, I was answering your question “why do you call the classification step in your sequence “preconscious”?” You seem to accept my answer. If you agree that PE0 and PE1 are preconditions for PE2, then I’m not sure what you see as a problem now.

  61. Maybe this will help identify my problems. Subject S is sitting with eyes closed. There are two objects before S. Object A is classifiable by S as a “red circle”. Object B is not classifiable by S.

    S’s eyes open and RS activity begins, resulting in S having PE0. The activity levels in the areas of RS space occupied by A and B rise above threshold and RS begins “attending” (right word?) to them, resulting in S’s having two instances of PE1. Call this event “S becoming aware of” both object A and object B. Object A can then be classified as being a “red circle”.

    And this is where I get confused. I assume that at this point both the classified object A and the unclassifiable object B can be projected into RS space. But I’m unclear on whether it is before or after that projection that S can have a PE2 (I assume the latter), and whether there is a PE2 for each object individually or a single PE2 for both objects collectively. Your sequence in comment 10 seems to suggest the former. And in comment 31 you say:

    PE2 is the conscious experience of uttering the color name.

    The current scenario doesn’t involve uttering names, but in any event S can’t utter a name for the unclassified object B. This suggests that PE2 occurs for individual objects, but it also seems to imply that there is no PE2 for object B, which presumably is not true.

    I’m also unclear whether the transparent representation in your definition of consciousness includes only object A but not object B, or includes both. Again, I assume the latter. But then it seems that we need to somehow distinguish being only “conscious of an object” from being “conscious of object A as being a red circle” (the latter is what I’ve been describing as a “cognitive achievement”). Otherwise, isn’t Cognicious correct that the ability to name an object is arguably irrelevant (which I seriously doubt)?

  62. Charles: “S’s eyes open and RS activity begins resulting in S having PE0.”

    I think this is the source of your confusion. PE0 (the minimal conscious state) starts as soon as S wakes up in the morning. A congenitally blind person has PE0.

    I will have to unpack the rest of your example before I respond further.

  63. Arnold –

    I’ve revisited our exchange and may have identified one of my problems.

    the events in the synaptic matrices and semantic networks that do the classification are *not part of our conscious experience until they are projected in proper spatio-temporal register into retinoid space.*

    I assume that the inputs to those “synaptic matrices and semantic networks” are parameters of the cell activity in a region in the RS’s virtual 3-D space. The location of the region with respect to I! presumably is “known” to the RS. Therefore, once those cell activity parameters have been used to classify an object, the only remaining step is to “attach” the classification result – a name – to the region in RS space that provided the input. Is that step what you mean by the classification being “projected in proper spatio-temporal register into retinoid space”? And if so, is it that step that converts a PE1 into a PE2?

  64. Charles: “Is that step what you mean by the classification being “projected in proper spatio-temporal register into retinoid space”? And if so, is it that step that converts a PE1 into a PE2?”

    The diverse preconscious sensory images (shape, color, etc.) are first projected together in proper spatio-temporal register into retinoid space (PE1), then PE1 is analyzed and classified pre-consciously before the classification/name (e.g., “red car”) is projected into retinoid space to give PE2, an elaboration of PE1. Does this help?
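
    Put as a toy sequence (the Python names below are invented for illustration only, not part of the model):

        def project_into_rs(images):
            # Preconscious sensory images (shape, color, etc.) projected
            # together in proper spatio-temporal register into retinoid
            # space -> PE1.
            return {"register": tuple(images), "labels": []}

        def classify(pe1):
            # PE1 analyzed and classified pre-consciously, eg by synaptic
            # matrices trained on past associations.
            learned = {("red", "car-shape"): "red car"}
            return learned.get(pe1["register"])

        def elaborate(pe1, name):
            # The classification/name is projected back into retinoid
            # space, giving PE2 as an elaboration of PE1.
            pe1["labels"].append(name)
            return pe1

        pe1 = project_into_rs(["red", "car-shape"])
        pe2 = elaborate(pe1, classify(pe1))
        print(pe2["labels"])    # -> ['red car']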

  65. Yes. As I suspected, I had the general idea more or less right but was off on the details of the sequencing.

    But I want to return to the case of an object for which the subject has no associated name. How would you describe the subject’s relationship with that object: “aware of but not conscious of”, “conscious of”, other? I’m still uncertain about how being able to name an object factors into your definition of “consciousness”.

  66. Cognicious –

    Didn’t mean to ignore your last comments, but as you can tell from my intervening exchange with Arnold, I was having some difficulty meshing my ideas with the inner workings of his Retinoid System.

    In your comment 58, you ask “Do you say he saw the red circle?” What I’m trying to do is determine what it means to “see an object” in terms of what actually goes on in a person’s brain, at least at an architectural level. My interest in understanding Arnold’s Retinoid System is to see if it offers support at a much more detailed level for my claims at the architectural level. I think it does. But for purposes of the current discussion, I don’t need to get into the details of RS implementation.

    Stimulation of a subject’s visual sensors causes neural activity that may result in responsive behavior by the subject. The type of behavior (if any) will depend on the subject’s history of previous experiences of neural activity that are sufficiently similar (in some sense) to the current activity. In simple cases, that response can be non-cognitive in the sense that the subject need not be able to say anything about the content of the FOV that causes the stimulation. That’s the way I view the pointing behavior of the subject in your comment 58 – a non-verbal response based on merely matching current and previous patterns of neural activity and reacting in a way consistent with that matching. One can call that “seeing the object”, but only in that non-cognitive sense of “seeing”. (Relating this to Arnold’s sequence of events, a non-cognitive “seeing” requires only that the subject have the experience he labels “PE1”.)

    A subject with more extensive previous experience may have learned to associate a word with neural activity sufficiently similar to the current activity. That allows the subject to describe an object, in which case one could call that “seeing the object as an X” (in your example, X = “red circle”), a cognitive sense of “seeing”. (This is the additional ability gained when the subject can have the additional experience Arnold labels “PE2”.)

    The idea that a nonverbal subject can’t “see a surface as red” in the cognitive sense of “see” is obviously counterintuitive, but that’s because in non-technical discourse we typically don’t need to distinguish the cognitive and non-cognitive senses of “see”. If an infant’s behavior is described by family members as showing that the infant is “seeing a red ball”, no one is going to quibble about whether that’s technically accurate.
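
    To nail down the two senses, one last toy sketch (everything in it, the similarity measure included, is invented for illustration):

        def similarity(a, b):
            # Crude stand-in for "sufficiently similar patterns of
            # neural activity".
            a, b = set(a), set(b)
            return len(a & b) / max(len(a | b), 1)

        def point_to_match(current, remembered):
            # Non-cognitive "seeing": match current activity to remembered
            # patterns and respond (eg, point); no words required – PE1
            # is enough.
            return max(remembered, key=lambda past: similarity(current, past))

        def see_as(current, vocabulary):
            # Cognitive "seeing as X": a learned word attaches to activity
            # sufficiently similar to the current pattern – this needs PE2.
            for pattern, word in vocabulary:
                if similarity(current, pattern) > 0.8:
                    return word
            return None   # subject can respond, but can't say what it sees

        stimulus = ["red", "round", "small"]
        print(point_to_match(stimulus, [["red", "round"], ["blue", "square"]]))
        print(see_as(stimulus, [(["red", "round", "small"], "red circle")]))

    The nonliterate subject in your example succeeds at point_to_match but gets None from see_as; both count as “seeing” in everyday talk, but only the second is the cognitive achievement.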
