This paper on ‘Biology of Consciousness’ embodies a remarkable alliance: authored by Gerald Edelman, Joseph Gally, and Bernard Baars, it brings together Edelman’s Neural Darwinism and Baars’ Global Workspace into a single united framework. In this field we’re used to the idea that for every two authors there are three theories, so when a union occurs between two highly-respected theories there must be something interesting going on.

As the title suggests, the paper aims to take a biologically-based view, and one that deals with primary consciousness. In human beings the presence of language among other factors adds further layers of complexity to consciousness; here we’re dealing with the more basic form which, it is implied, other vertebrates can reasonably be assumed to share at least in some degree. Research suggests that consciousness of this kind is present when certain kinds of connection between thalamus and cortex are active: other parts of the brain can be excised without eradicating consciousness. In fact, we can take slices out of the cortex and thalamus without banishing the phenomenon either: the really crucial part of the brain appears to be the thalamic intralaminar nuclei.  Why them in particular? Their axons radiate out to all areas of the cortex, so it seems highly likely that the crucial element is indeed the connections between thalamus and cortex.

The proposal in a nutshell is that dynamically variable groups of neurons in cortex and thalamus, dispersed but re-entrantly connected, constitute a flexible Global Workspace where different inputs can be brought together, and that this is the physical basis of consciousness. Given the extreme diversity and variation of the inputs, the process cannot be effectively ring-mastered by a central control; instead the contents and interactions are determined by a selective process – Edelman’s neural Darwinism (or neural group selection): developmental selection (‘fire together, wire together’), experiential selection, and co-ordination through re-entry.

This all seems to stack up very well (it seems almost too sensible to be the explanation for anything as strange as consciousness). The authors note that this theory helps explain the unity of consciousness. It might seem that it would be useful for a vertebrate to be able to pay attention to several different inputs at once, thinking separately about different potential sources of food, for example: but it doesn’t seem to work that way – in practice there seems to be only one subject of attention at a time; perhaps that’s because there is only one ‘Dynamic Core’. This constraint must have compensating advantages, and the authors suggest that these may lie in the ability of a single piece of data to be reflected quickly across a whole raft of different sub-systems. I don’t know whether that is the explanation, but I suspect a good reason for unity has to do with outputs rather than inputs. It might seem useful to deal with more than one input at a time, but having more than one plan of action in response has obvious negative survival value. It seems plausible that part of the value of a Global Workspace would come from its role in filtering multiple stimuli down towards a single coherent set of actions. And indeed, the authors reckon that linked changes in the core could give rise to a coherent flow of discriminations which could account for the ‘stream of consciousness’. I’m not altogether sure about that – without saying it’s impossible that a selective process without central control can give rise to the kind of intelligible flow we experience in our mental processes, I don’t quite see how the trick is done. Darwin’s original brand of evolution, after all, gave rise to speciation, not coherence of development. But no doubt much more could be said about this.

Thus far, we seem on pretty solid ground. The authors note that they haven’t accounted for certain key features of consciousness, in particular subjective experience and the sense of self: they also mention intentionality, or meaningfulness.  These are, as they say, non-trivial matters and I think honour would have been satisfied if the paper concluded there: instead however, the authors gird their loins and give us a quick view of how these problems might in their view be vanquished.

They start out by emphasising the importance of embodiment and the context of the ‘behavioural trinity’ of brain, body, and world. By integrating sensory and motor signals with stored memories, the ‘Dynamic Core’ can, they suggest, generate conceptual content and provide the basis for intentionality. This might be on the right track, but it doesn’t really tell us what concepts are or how intentionality works: it’s really only an indication of the kind of theory of intentionality which, in a full account, might occupy this space.

On subjective experience, or qualia, the authors point out that neural and bodily responses are by their nature private, and that no third-person description is powerful enough to convey the actual experience. They go on to deny that consciousness is causal: it is, they say, the underlying neural events that have causal power. This seems like a clear endorsement of epiphenomenalism, but I’m not clear how radical they mean to be. One interpretation is that they’re saying consciousness is like the billows: what makes the billows smooth and bright? Well, billows may be things we want to talk about when looking at the surface of the sea, but really if we want to understand them there’s no theory of billows independent of the underlying hydrodynamics. Billows in themselves have no particular explanatory power. On the other hand, we might be talking about the Hepplewhiteness of a table. This particular table may be Hepplewhite, or it may be fake. Its Hepplewhiteness does not affect its ability to hold up cups; all that kind of thing is down to its physical properties. But at a higher level of interpretation Hepplewhiteness may be the thing that caused you to buy it for a decent sum of money. I’m not clear where on this spectrum the authors are placing consciousness – they seem to be leaning towards the ‘nothing but’ end, but personally I think it’s hard to reconcile our intuitive sense of agency with anything less than Hepplewhite or better.

On the self, the authors suggest that neural signals about one’s own responses and proprioception generate a sense of oneself as a separate entity: but they do not address the question of whether and in what sense we can be said to possess real agency: the tenor of the discussion seems sceptical, but doesn’t really go into great depth. This is a little surprising, because the Global Workspace offers a natural locus in which to repose the self. It would be easy, for example, to develop a compatibilist theory of free will in which free acts were defined as those which stem from processes in the workspace, but that option is not explored.

The paper concludes with a call to arms: if all this is right, then the best way to vindicate it might be to develop a conscious artefact: a machine built on this model which displays signs of consciousness – a benchmark might be clear signs of the ability to rotate an image or hold a simulation. The authors acknowledge that there might be technical constraints, but I think they can afford to be optimistic. I believe Henry Markram, of the Blue Brain project, is now pressing for the construction of a supercomputer able to simulate an entire brain in full detail, so the construction of a mere Global Dynamic Core Workspace ought to be within the bounds of possibility – if there are any takers…?

27 Comments

  1. Arnold Trehub says:

    From the Edelman, Gally, and Baars paper:

    “It should be added that consciousness itself is not causal (Velmans, 1993; Kim, 2000). It is the neural structures underlying conscious experience that are causal. The conscious individual can therefore be described as responding to a causal illusion, one that is an entailed evolutionary outcome of selection for animals able to make plans involving multiple discriminations.”

    I confess that I don’t know what to make of this statement. According to this view, consciousness is caused by the activity of particular kinds of neuronal mechanisms in the brain, but is itself some kind of event that is unable to cause anything. Is it some kind of non-physical event that is generated by the biophysics of neuronal activity? If consciousness is not causal, why would I be unable to write this comment if I were unconscious? Put another way, why do I need to BE conscious to do it? What is *consciousness itself*?

  2. kczat says:

    I took note of that same passage.

    Doesn’t this sort of premise that qualia are non-causal lead you to the view that qualia are therefore non-physical, via Chalmers’ zombie argument? Maybe that’s true, but it’s a conclusion many would not accept, including the authors probably. So I don’t see how they can accept that premise.

    It seems like this premise also falls to another easy argument as follows. If qualia are non-causal, then I should not be consciously aware of them because, if I am aware of them, then I can use them as part of my decision making process and they become causal, contradicting my assumption. On the other hand, if I am not consciously aware of qualia, then how could I be talking about them right now? Hence, qualia should be causal.

  3. Lloyd says:

    I’m not so much concerned about the philosophical issue of the relationship between the neuronal activity and the awareness I have of what’s going on. To say that it was “my consciousness” that led to a certain action as opposed to the neuronal activity that gave rise to that consciousness seems a moot point.

    I am more concerned that so much of the “explanation” seems to be replete with recitations of Edelman’s old, old ideas of neural darwinism and the dynamic core. I don’t say those ideas are wrong. I only fear it will still be many, many years before we really know just what those intralaminar neurons are really doing that results in my awareness.

  4. Tom Clark says:

    kczat:

    “On the other hand, if I am not consciously aware of qualia, then how could I be talking about them right now? Hence, qualia should be causal.”

    If qualia are causal, then they’d have to be identical to some set of neural events, since there’s no account on offer of how something non-physical interacts with something physical, such as neurons, to help produce behavior. So when the authors say that “consciousness itself is not causal,” they must be referring to something non-physical. As Arnold points out, they think that consciousness is somehow caused by neural activity. But here again, there is precisely no account on offer about how something non-physical is produced by neural activity. Traditional epiphenomenalism thus fails (or at least is radically incomplete) as a scientific theory of consciousness.

    But of course identifying qualia with neural activity runs into its own problems, one of which is that neurons are available to public observation, and qualia are not, so the two sides of the identity relation don’t have all properties in common, so they can’t be identical. Arnold will likely say that qualia are the subjective aspect of a single phenomenon, and neurons the public aspect of that phenomenon, so we can justly say that qualia are not epiphenomenal. I take a psycho-physical parallelism approach, with two parallel non-interacting causal stories, hence no causal illusions involved (http://www.naturalism.org/privacy.htm). But we’ve gone around on this basic question a lot, with Peter’s patient indulgence.

  5. Arnold Trehub says:

    Tom: “Arnold will likely say that qualia are the subjective aspect of a single phenomenon, and neurons the public aspect of that phenomenon, so we can justly say that qualia are not epiphenomenal.”

    Yes. But I also say that in order to pursue the *scientific explanation* of qualia/phenomenal content in terms of the activity of brain mechanisms we need to adopt this bridging principle between 1pp events and 3pp events: For any instance of conscious content (1p-aspect) there is a corresponding analog in the biophysical state of the brain (3p-aspect). This suggests that we need to find brain mechanisms that can generate proper analogs of objective indices/expressions of phenomenal content/qualia. If one were to adopt the stance of psycho-physical parallelism would this research strategy make sense?

  6. Arnold Trehub says:

    From the Edelman, Gally, and Baars paper:

    “Consciousness is a concomitant of dynamic patterns of reentrant signaling within complex, widely dispersed, interconnected neural networks constituting a Global Workspace.”

    A Google server center as a part of the internet is a Global Workspace with a vast number of feedforward and feedback loops (reentrant signaling?). Is this system conscious? If not, is it because it isn’t a biological system? Or is it because it is a system without a privileged perspective — a subjective core? Should any entity, no matter how complex its signaling architecture and its integration of information, but without subjectivity, be considered a conscious entity?

  7. Lloyd says:

    Tom, I don’t believe a Google server farm organizes its models of the world of information in a way that would be advantageous to its own survival. If one could say that it has, in any sense, a “model” or representation of its world, then that model is not organized in a way that would allow the server system to be conscious of the world or of itself.

    I believe that consciousness is the (subjective) effect of the operation of a world model when that model is organized and functions to the benefit of the survival of the mechanism in which it operates. It is not the operation of the model per se, but rather the way in which the model’s function “appears” to the operating organism. That “appearance” must arise if the model is complete and functioning.

  9. Tom Clark says:

    Arnold,

    Seems to me your approach is essentially the same as the authors of this paper: to look for the neural correlates of consciousness, what you call the 3rd person neural analogs of 1st person experience. That there are such analogs is clear, e.g., the spatiotopic and visuotopical maps they discuss on page 4, plus sensorimotor signaling discussed on p. 5 correspond to the consciously experienced sense of an embodied self located in space.

    What’s somewhat hilarious is that they take themselves to have dispatched (“cured”) the hard problem by saying that there is a reliable entailment from the dynamic core to qualia (p. 5, 2nd paragraph and last paragraph on left column). But of course they haven’t specified what that entailment is, only described correlations. Then, on the same page (right column at the top) they say “The truly hard problem is to provide a biologically based mechanistic account of how, at the molecular, cellular, and systemic levels, the brain functions to actually entail consciousness.” But this *is* the hard problem! What *is* the nature of the entailment from neural activity to qualia? Why should only certain sorts of cognitive processes, those of the thalamo-cortical system, “give rise to” qualitative phenomenal experience?

    In the model I currently espouse, at http://www.naturalism.org/appearance.htm#part5 , I suggest the nature of that entailment very likely won’t be causal, since qualia aren’t visible products of neural states sitting in public space the way neurons are. The brain doesn’t give rise to qualia as something distinct from neural activity the way the liver secretes bile as something distinct from the processes that produce it. So the nature of the entailment, I suggest, is *non-causal*, having to do with the nature of being a recursive but necessarily limited representational system whose front line representations can’t themselves be represented, hence are cognitively impenetrable, hence qualitative (non-decomposable) for the system itself. In this, I follow Metzinger and others, so I don’t claim it’s original and I don’t suppose that it’s necessarily correct. But it’s at least a stab at specifying what the entailment is, something that Baars, Edelman and Gally acknowledge is what’s needed to solve the hard problem.

    This is to try to say *why and how*, as Lloyd puts it, the appearance of reality (phenomenal experience) must arise for the system if its self-world model is complete and functioning.

  10. Arnold Trehub says:

    Tom, you wrote:

    “Arnold, Seems to me your approach is essentially the same as the authors of this paper [Biology of Consciousness]: to look for the neural correlates of consciousness, what you call the 3rd person neural analogs of 1st person experience. That there are such analogs is clear, e.g., the spatiotopic and visuotopical maps they discuss on page 4, plus sensorimotor signaling discussed on p. 5 correspond to the consciously experienced sense of an embodied self located in space.”

    The essential point that must be recognized is that the spatiotopic and visuotopical maps discussed on page 4 and the sensorimotor signals discussed on page 5 in “Biology of Consciousness” do *NOT* correspond to “the consciously experienced sense of an embodied self located in space”. They are just the *preconscious* sensory mechanisms that are part of the cognitive brain (I discuss these in considerable detail in *The Cognitive Brain*, 1991). Without the egocentric spatio-temporal integration of these sensory features provided by the neuronal mechanisms of the retinoid system, subjectivity does not exist. A theoretical model that does not account for subjectivity might be a theory of cognition, but it is not a theory of consciousness.

    Here’s part of a comment I posted a while ago on the PHIL PAPERS forum. Maybe this will clarify the difference between what has been proposed by Edelman, Gally, and Baars and what I propose:

    I approach the problem of subjectivity/consciousness as a scientist, not as a professional philosopher. In this enterprise, I thought it necessary to have a working definition of consciousness. I suggested that consciousness is a transparent brain representation of the world from a privileged egocentric perspective. This implied subjectivity — a sense of something somewhere in relation to oneself. The scientific goal was to formulate credible brain mechanisms that have the structural and dynamic properties needed to represent something somewhere in relation to a fixed point of perspectival origin — an egocentric space. Taking this research strategy, how is one to judge if the putative brain mechanisms are reasonable models of the biological mechanisms whose activity constitute consciousness/phenomenal experience? We look to the causal properties of the theoretical models and test their logical implications with respect to objective indicators of salient phenomenal experience. The SMTT experiment is a good example of this approach. On the basis of the structural and dynamic properties of the retinoid system, I was able to induce and systematically control novel conscious experiences that were logically generated by the particular biophysical properties of the retinoid mechanisms.

    It seems to me that the original working definition of consciousness/subjectivity, the theoretical model of the retinoid mechanisms, and the subsequent successful empirical tests of these brain mechanisms, together provide strong presumptive evidence of the validity of this approach in pursuit of an explanation of consciousness. I think this is an advance in consciousness studies. What are your counter arguments?

  11. Lloyd says:

    Tom: If you have a box that can experience seeing, why not just accept that it can see. I think you’re making too much of the “hard problem”. I agree completely that it’s devilishly hard to know (from the outside) whether or not your box can see. If there’s a hard problem at all, that’s it, not what “seeing” is.

  12. Tom Clark says:

    Arnold:

    “A theoretical model that does not account for subjectivity might be a theory of cognition, but it is not a theory of consciousness.”

    Ok, but it isn’t as if they are ignoring subjectivity in their paper; they just have a different theory of the neural processes that correlate with it and maybe eventually explain it, and the evidence will eventually decide what theory wins the day.

    “…consciousness is a transparent brain representation of the world from a privileged egocentric perspective.”

    Here you’re identifying consciousness (phenomenal qualitative experience) with something the brain does, namely representation – that’s your basic hypothesis about consciousness, with which I largely concur (although I don’t draw an identity as you know). I take it that by “transparent” you mean what Metzinger and Van Gulick and others mean, that the representation isn’t seen *as a representation* by the system, such that all the system is privy to is the *content* of the representation, not the process of the representation. The system “looks through” the process so the process is in effect transparent or invisible to/for the system, which makes the content impenetrable for the system, hence (perhaps!) qualitative. I mention Metzinger’s proposal of “Transparency as efficient data-reduction” at http://www.naturalism.org/appearance.htm#part5

    Your theory then goes on to specify the brain mechanisms that could actually instantiate this sort of representation, and you ask:

    “…how is one to judge if the putative brain mechanisms are reasonable models of the biological mechanisms whose activity constitute consciousness/phenomenal experience?”

    Note here again that you’re *identifying* consciousness with biological mechanisms: the mechanisms’ activity constitutes phenomenal experience – there’s no other sort of non-physical stuff that consciousness is constituted by; it’s a fully physical process on your account. (Actually, I think you’d say that consciousness is the subjective aspect of something that is not properly either subjective (1st person) or physical (3rd person), since we can’t have knowledge of whatever it is these things are aspects of. This seems to me importantly different than saying that consciousness is constituted by the activity of biological processes.)

    Your answer to the question above:

    “We look to the causal properties of the theoretical models and test their logical implications with respect to objective indicators of salient phenomenal experience.”

    If I understand you correctly, you’re saying we should look at the causal properties of the neural processes (e.g., the retinoid system) correlated with reports (and other indicators) of phenomenal experience and see if those properties logically entail phenomenal experience. And you say you’ve done this:

    “I was able to induce and systematically control novel conscious experiences that were logically generated by the particular biophysical properties of the retinoid mechanisms.”

    The natural question to ask at this point is: what’s the nature of the logical entailment going from the biophysical properties of the retinoid mechanisms to the existence of conscious experience? Why is it that just this system “logically generates” conscious experience (e.g., qualia), and how is the generation done, logically? (And note again that the idea that consciousness is logically (or causally) generated by mechanisms is importantly different than the claim that consciousness is *constituted* by those mechanisms. Whatever is constituted by something isn’t generated by it as an extra thing or effect, seems to me.)

    In any case, I’ve sketched a proposal about the entailment between being a representational system and having conscious experience at http://www.naturalism.org/appearance.htm#part5 and would be interested to hear your ideas about what the entailment might consist of.

  13. Arnold Trehub says:

    Tom, thanks for your comments.

    You wrote:

    “Ok, but it isn’t as if they are ignoring subjectivity in their paper; they just have a different theory of the neural processes that correlate with it and maybe eventually explain it, and the evidence will eventually decide what theory wins the day.”

    They don’t ignore subjectivity. It’s just that they don’t give a useful account of how subjectivity is realized in the brain. The problem is that there are innumerable neural processes that correlate with subjectivity. As best I can tell, their approach to subjectivity is summarized in their following statement:

    ” … the self emerges from brain responses to bodily signals arising in the sensorimotor system of an individual agent.”

    There is no attempt to describe the necessary brain mechanisms that would have the competence to make a self “emerge” as a response to one’s sensorimotor signals. Moreover, according to their account, one would think that Jean-Dominique Bauby (*The Diving Bell and the Butterfly*), as a locked-in patient, should have suffered a severely diminished sense of self. His vivid account of his own experience testifies to a very strong sense of self with the near absence of sensorimotor signals.

    Tom: “I take it that by ‘transparent’ you mean what Metzinger and Van Gulick and others mean, that the representation isn’t seen *as a representation* by the system, such that all the system is privy to is the *content* of the representation, not the process of the representation.”

    Exactly so. In terms of the retinoid model, the transparent representation is the global pattern of excitatory activation among the egocentrically organized autaptic cells in retinoid space. It is this kind of representation (3pp) that constitutes phenomenal consciousness (1pp).

    Tom: “Note here again that you’re *identifying* consciousness with biological mechanisms: the mechanisms’ activity constitutes phenomenal experience – there’s no other sort of non-physical stuff that consciousness is constituted by; it’s a fully physical process on your account. (Actually, I think you’d say that consciousness is the subjective aspect of something that is not properly either subjective (1st person) or physical (3rd person), since we can’t have knowledge of whatever it is these things are aspects of. This seems to me importantly different than saying that consciousness is constituted by the activity of biological processes.)”

    As a scientist, I say that consciousness is the subjective aspect (1pp) of biophysical activity in retinoid space (3pp). The structural and dynamic properties of the retinoid mechanisms are potentially knowable, but we probably cannot know the underlying *essential* physical nature of these knowable retinoid mechanisms. We can only explain phenomena in terms of what we can know, e.g., the brain’s retinoid system. So I’m saying that consciousness is the subjective aspect of something that is properly physical.

    Tom: “Why is it that just this [retinoid] system “logically generates” conscious experience (e.g., qualia), and how is the generation done, logically? (And note again that the idea that consciousness is logically (or causally) generated by mechanisms is importantly different than the claim that consciousness is *constituted* by those mechanisms. Whatever is constituted by something isn’t generated by it as an extra thing or effect, seems to me.)”

    The structural and dynamic properties of the retinoid system have the biophysical *competence* to control the global state of autaptic cell activation within its egocentrically organized retinoid space. It is the global supra-threshold state of activation within retinoid space that constitutes consciousness (not an extra effect of this biophysical state), and it is the operation of the specialized mechanisms of the retinoid *system* together with the brain’s preconscious cognitive mechanisms that generate the particular phenomenal contents of consciousness — patterns of activation — in retinoid space. Bottom line, according to this theory, there is no consciousness and there can be no qualia without a retinoid system.

    Tom: “In any case, I’ve sketched a proposal about the entailment between being a representational system and having conscious experience at http://www.naturalism.org/appearance.htm#part5 and would be interested to hear your ideas about what the entailment might consist of.”

    Thanks. I’ll read it, think about it, and respond in a later post.

  14. Arnold Trehub says:

    Tom, you wrote:

    “I’ve sketched a proposal about the entailment between being a representational system and having conscious experience at http://www.naturalism.org/appearance.htm#part5 and would be interested to hear your ideas about what the entailment might consist of.”

    Here are a couple of problems that I see in your concept of entailment between representational systems and conscious experience:

    You claim: “Conclusion re logical entailments. Any representational system (RS) will have a bottom level, not further decomposable, unmodifiable, epistemically impenetrable (unrepresentable) hence qualitative (non-decomposable, homogeneous) and ineffable set of representational elements. [1] These elements are arbitrary with respect to what they represent since the RS only needs reliable co-variation, not literal similarity.” [2]

    1. It seems to me that your basic premise here precludes the possibility of a representational system (RS) composed of biophysical components like neurons. Assuming we were able to isolate the retinoid system (RS) of the human brain, would you really claim that the retinoid’s 3D array of autaptic neurons (which I propose is the egocentric representational system that generates our phenomenal world) is non-decomposable or ineffable from the 3rd-person perspective?

    2. “Reliable co-variation” works OK for actuarial enterprises like insurance companies, but it doesn’t work for a representational system that must have properties which capture *phenomenal features* of conscious content. If our representational system consisted only of sets of reliable *tokens* (physical correlates) of what is represented, with no pattern similarities, there would be no semantics. We would be able to *name* what we experience, but we would be unable to depict it or analyze it! This is why I have proposed the bridging principle of corresponding *analogs* between 3pp descriptions (objective) and 1pp descriptions (subjective).

    I think the concept of *entailment* between the activity of brain mechanisms (3pp) and phenomenal experience (1pp) is too weak. As I see it, there is a *causal* connection between the activity of a particular neuronal system in the brain (e.g., the retinoid system) and conscious/phenomenal experience. The supra-threshold pattern of egocentric activity in retinoid space — the phenomenal world in which we are centered — *constitutes* our conscious experience (see Fig. 1 here: http://journalofcosmology.com/Consciousness130.html).

  15. Tom Clark says:

    Arnold, thanks for the clarifications in 13 and 14. A few remarks, more or less in order as they come up in your comments:

    Glad that we agree on transparency. I guess one question your theory could address (and maybe you’ve addressed it) is why the global pattern of excitation among the egocentrically organized autaptic cells in retinoid space becomes transparent to the system, which in turn might help clarify the entailment to the existence of qualia.

    “…we probably cannot know the underlying *essential* physical nature of these knowable retinoid mechanisms.”

    I wonder if we need to posit an underlying essential nature of reality that’s unknowable, given that reality necessarily appears to knowers in terms of representations and thus is logically barred from appearing as it is “in itself.” The very nature of knowledge rules out representation-free apprehension on the part of knowers (the point about “epistemic perspectivalism” I make at http://www.naturalism.org/appearance.htm#part1 ) so we can’t really complain that we’ll never see reality “as it really is.” In which case, the idea of an underlying essential nature of reality (in your case, of physical reality) as being unknowable should perhaps be dropped. On the other hand, it’s hard to drop it, given that the very idea of representation involves the notion of the object being represented as existing independently of the representation, and thus having its own nature that we capture accurately or inaccurately via our representations.

    To say that “consciousness is the subjective aspect…of biophysical activity in retinoid space” seems to me different than saying the “supra-threshold pattern of egocentric activity in retinoid space…*constitutes* our conscious experience.” And both expressions leave me wanting more in terms of an explanation of why it is that only certain physical processes either 1) have a subjective aspect or 2) constitute consciousness. By way of providing an explanation, you say

    “…it is the operation of the specialized mechanisms of the retinoid *system* together with the brain’s preconscious cognitive mechanisms that generate the particular phenomenal contents of consciousness — patterns of activation — in retinoid space.”

    Here you seem to *identify* phenomenal contents, that is, qualia, with patterns of activation generated by the retinoid system in concert with preconscious cognitive mechanisms. Let’s say that you amend this (very slightly, if at all, on my view) to the effect that such patterns *constitute* qualia. What is it about only these patterns of activation, visible in principle to outside observers, that makes them constitute something only available to the system itself? Or, if you say that consciousness is the subjective aspect of these patterns, why is it that only these patterns have a subjective aspect?

    “…would you really claim that the retinoid’s 3D array of autaptic neurons which I propose is the egocentric representational system that generates our phenomenal world is non-decomposable or ineffable from the 3rd-person perspective?”

    No, I wouldn’t. It’s qualia – the basic phenomenal contents of consciousness – that are non-decomposable and ineffable. The representational system and its activity are composed of parts and processes, hence is decomposable, and can be described in terms of those parts and processes, hence is not ineffable.

    “If our representational system consisted only of sets of reliable *tokens* (physical correlates) of what is represented, with no pattern similarities, there would be no semantics. We would be able to *name* what we experience, but we would be unable to depict it or analyze it!”

    I agree there are pattern similarities between representation and what’s represented, and presumably that’s what your theory describes: analogs (pattern similarities) between representational activity in retinoid space (plus associated non-conscious processes) and what’s reported in conscious experience. Conscious experience is (normally) about the world that representational activity represents, e.g., my current experience has, among other contents, the content of typing on my laptop. The question that I’m trying to answer is why there should be any experience *at all* that co-varies with that representational activity.

    “As I see it, there is a *causal* connection between the activity of a particular neuronal system in the brain (e.g., the retinoid system) and conscious/phenomenal experience. The supra-threshold pattern of egocentric activity in retinoid space — the phenomenal world in which we are centered — *constitutes* our conscious experience.”

    This seems to me a contradiction. If our phenomenal world is constituted by the supra-threshold pattern of egocentric activity in retinoid space, then there isn’t a causal relationship between the two, such that the activity produces or generates phenomenal experience. See http://www.naturalism.org/appearance.htm#part2

  16. Arnold Trehub says:

    Tom, I guess the difficulty we have in arriving at a mutual understanding is not surprising, given how elusive consciousness has historically been for investigators to capture in scientific terms.

    Tom: “I guess one question your theory could address (and maybe you’ve addressed it) is why the global pattern of excitation among the egocentrically organized autaptic cells in retinoid space becomes transparent to the system, which in turn might help clarify the entailment to the existence of qualia.”

    Representational transparency follows naturally from the fact that the activity of autaptic neurons in retinoid space *constitutes* our conscious experience (1pp). Just as we are unable to directly see the interior of our own body, we are unable to see the patterns of neuronal activity of the autaptic-cell array that constitute our conscious experience/qualia.

    Tom: “To say that “consciousness is the subjective aspect…of biophysical activity in retinoid space” seems to me different than saying the “supra-threshold pattern of egocentric activity in retinoid space…*constitutes* our conscious experience.” And both expressions leave me wanting more in terms of an explanation of why it is that only certain physical processes either 1) have a subjective aspect or 2) constitute consciousness.”

    The “certain physical processes” to which you refer are the global autaptic-cell activation patterns in retinoid space.

    1. Retinoid space has a subjective aspect because it represents a volumetric space with a *fixed locus of perspectival origin* — the 0,0,0 neuronal coordinate in 3D space (the 0,0,0,0 coordinate in space-time). According to my theory, this locus of perspectival origin is the core self (I!) of any creature with a brain having a retinoid system. Any representation in retinoid space is in a perspectival relationship with respect to the core self at the center of its volumetric surround. So the physical processes of representation in retinoid space justify the claim that these processes have a subjective aspect. Notice that my claim that a *core self* is a real biological part of the brain differs from Metzinger’s claim that there is no such thing as a self. I argue that Metzinger’s phenomenal self model (PSM) cannot exist without the *prior* existence of a core self.

    2. My claim that neuronal activity in retinoid space *constitutes* consciousness follows from my working definition of consciousness:

    *Consciousness is a transparent representation of the world from a privileged egocentric perspective.*

    The retinoid system generates a transparent representation of the world from the perspective of the core self, the “privileged egocentric perspective”.

    Are these formulations about consciousness scientifically justified? I think so because the structural and dynamic properties of the retinoid model enable one to explain many previously inexplicable phenomenal experiences, and to successfully predict novel phenomenal experiences.

    Tom: “By way of providing an explanation, you say

    ‘…it is the operation of the specialized mechanisms of the retinoid *system* together with the brain’s preconscious cognitive mechanisms that generate the particular phenomenal contents of consciousness — patterns of activation — in retinoid space.’

    “Here you seem to *identify* phenomenal contents, that is, qualia, with patterns of activation generated by the retinoid system in concert with preconscious cognitive mechanisms. Let’s say that you amend this (very slightly, if at all, on my view) to the effect that such patterns *constitute* qualia. …. Or, if you say that consciousness is the subjective aspect of these patterns, why is it that only these patterns have a subjective aspect?”

    It is because only patterns of autaptic-cell activity in egocentric retinoid space have a perspectival relationship with a fixed point of spatial origin, a core self (I!) centered within this very space (representing the phenomenal world around us). From inside the brain, this is the subjective 1pp aspect of the 3pp neuronal activity within retinoid space potentially observed from the outside.

    Tom: “This seems to me a contradiction. If our phenomenal world is constituted by the supra-threshold pattern of egocentric activity in retinoid space, then there isn’t a causal relationship between the two, such that the activity produces or generates phenomenal experience”

    Be careful with the language here. I said that there is a causal connection “between the activity of a particular neuronal system in the brain (e.g., the retinoid system) and conscious/phenomenal experience. The supra-threshold pattern of egocentric activity in retinoid space — the phenomenal world in which we are centered — *constitutes* our conscious experience.” This is to say that certain neuronal mechanisms which are a *part* of the retinoid *system* cause supra-threshold patterns of autaptic-cell activity in *retinoid space*, which is a particular *part* of the retinoid system. And these patterns of activity in this *part* (module?) of the retinoid system which represents egocentric space *constitute* our conscious experience. I see no contradiction here. If I’m wrong, can you elaborate a bit more?

  17. Tom Clark says:

    Arnold,

    Thanks for the additional clarifications. In explaining subjectivity you say “Any representation in retinoid space is in a perspectival relationship with respect to the core self at the center of its volumetric surround.”

    I get that there’s a structural analog between the experience of being a self at the center of one’s experienced world (the experience of subjectivity) and the “patterns of autaptic-cell activity in egocentric retinoid space [which] have a perspectival relationship with a fixed point of spatial origin.” What isn’t clear to me is how one gets qualitative experience itself out of this activity (plus any necessary contributions from unconscious neural processes outside the retinoid system). One can always map *structure* from one domain to another (e.g., neural to experiential), but the puzzle is what sort of entailment gets us from quantifiable, public neural processes and their structural and dynamic properties to the private, irreducible qualitative basics that make up phenomenal experience (qualia). Absent qualia, there would be no experience, including the experience of subjectivity.

    “This is to say that certain neuronal mechanisms which are a *part* of the retinoid *system* cause supra-threshold patterns of autaptic-cell activity in *retinoid space*, which is a particular *part* of the retinoid system. And these patterns of activity in this *part* (module?) of the retinoid system which represents egocentric space *constitute* our conscious experience. I see no contradiction here. If I’m wrong, can you elaborate a bit more?”

    No, this seems fine, since you’ve spelled out the causal story within the retinoid system. You’re not saying that experience is an extra thing produced by the system; rather it’s constituted by part of the system’s activity. Of course the claim that experience is constituted by the activity of part of the retinoid system seems to place conscious experience in the publicly observable domain, which I think isn’t right. But you’ll say experience is the *subjective aspect* of that part’s activity, hence not observable from the outside. And it has a subjective aspect because, as you put it, the “patterns of autaptic-cell activity in egocentric retinoid space have a perspectival relationship with a fixed point of spatial origin.” At which point I’ll reiterate the question about qualia posed above.

  18. Arnold Trehub says:

    Tom,

    I think we are at the crux of the problem. You wrote (#17):

    “I get that there’s a structural analog between the experience of being a self at the center of one’s experienced world (the experience of subjectivity) and the “patterns of autaptic-cell activity in egocentric retinoid space [which] have a perspectival relationship with a fixed point of spatial origin.” What isn’t clear to me is how one gets qualitative experience itself out of this activity (plus any necessary contributions from unconscious neural processes outside the retinoid system).”

    Your question about GETTING *qualitative experience* (qualia) OUT OF neuronal activity in egocentric retinoid space suggests that you are unwilling to accept the theoretical proposition that *qualitative experience* just IS the complementary aspect of egocentric/subjective neuronal activity in retinoid space. Why should consciousness have both a biophysical aspect (3pp) and a phenomenal aspect (1pp)? I don’t think that we can answer this question any more than we can explain why light appears as both particle and wave. Shouldn’t we acknowledge the fact that the “explanatory gap” is not peculiar to the so-called mind-body problem, but is a gap common to all attempts to explain the *sheer existence* of any phenomenon? Science is a pragmatic enterprise, and its progress is judged by how well its theories are able to explain/predict the features (not the sheer existence) of interesting phenomena. By this criterion, the fact that the retinoid model is able to successfully explain/predict what were formerly inexplicable phenomenal experiences is evidence for its validity and a challenge for competing theories of consciousness.

    Tom, do you know of any theoretical alternative to the retinoid model that might explain the SMTT findings?

  19. Tom Clark says:

    Arnold:

    “Shouldn’t we acknowledge the fact that the “explanatory gap” is not peculiar to the so-called mind-body problem, but is a gap common to all attempts to explain the *sheer existence* of any phenomenon?”

    You posed this question in another thread (Ephaptic Consciousness?):

    “…how can we explain the sheer existence of consciousness? You might call this an explanatory gap. But notice that it isn’t really a gap — it’s a barrier! And it is the same barrier faced by theoretical physics in trying to explain the sheer existence of the fundamental forces. Science is not omniscient, and is unable to explain the sheer existence of anything.”

    And I replied at http://www.consciousentities.com/?p=742#comment-166183 :

    “Guess I disagree. For me, the existence of phenomenal experience as arising within the physical, natural realm, and only when the physical realm is organized in certain ways, presents an explanatory puzzle quite distinct from the question of why fundamental physical forces and entities exist and have just the properties they do. You are content to say there’s a subjective, experiential aspect of reality that just happens to be manifest when things in physical, 3rd person reality are ordered in certain ways. If you think you’ve hit the final explanatory wall with this, then you won’t investigate further. Others, including myself, want something more in the way of explanation, perhaps impossibly and unrealistically, but there it is. Have pity on us!”

    My tentative suggestions for how to bridge the explanatory gap between function and phenomenology are at http://www.naturalism.org/appearance.htm#part5 which I think you’ve looked at. Our discussion then took up the issue of the relation between the possibility of theoretical, conceptual knowledge and consciousness itself.

    In the present conversation, you ask:

    “Tom, do you know of any theoretical alternative to the retinoid model that might explain the SMTT findings?”

    I don’t, but I’m not in a position to judge the virtues of the retinoid model compared to its competitors as an account of the structural aspects of consciousness (it doesn’t claim to explain qualia). Of course what I think a theory of consciousness should do is to explain qualia, since that to me is the primary explanatory target.

  20. Arnold Trehub says:

    Tom,

    You wrote: “You are content to say there’s a subjective, experiential aspect of reality that just happens to be manifest when things in physical, 3rd person reality are ordered in certain ways. If you think you’ve hit the final explanatory wall with this, then you won’t investigate further. Others, including myself, want something more in the way of explanation … ”

    Yet in your discussion of appearances in (http://www.naturalism.org/appearance.htm#part5), you wrote:

    “Reality, including the experienced reality of being a self to whom the world appears, phenomenally appears [certainly *with all of its qualia*] by virtue of our being *certain sorts of representational systems*.” [emphasis mine]

    How do qualia which depend on our being certain “sorts of representational systems” differ from qualia which depend on our being a system in which things “… in physical, 3rd person reality are ordered in certain ways”? Is a *certain sort of representational system* different from a *representational system in which things are ordered in a certain way*? If qualia do not depend in some way on some difference in the organization of physical processes, then it seems to me that qualia must depend on differences in basic physical *substance*. Would you propose that there is a special substance for each color, a substance for pain, a substance for each pleasure? I suppose the possibility exists. After all, science is a pragmatic enterprise in which most of us recognize that behind each answer there is another puzzle. The best we can do is formulate theories that are able to explain/predict interesting phenomena. The retinoid model explains/predicts many previously inexplicable phenomena. When a better theory comes along we either modify, discard, or fold the old theory into the new. But I certainly don’t foresee a cessation of investigation into the fundamental problem of phenomenal consciousness.

  21. Tom Clark says:

    Arnold:

    “If qualia do not depend in some way on some difference in the organization of physical processes, then it seems to me that qualia must depend on differences in basic physical *substance*. Would you propose that there is a special substance for each color, a substance for pain, a substance for each pleasure?”

    No, I think it’s the organization of representational systems (RSs), not their instantiating physical substrates, that’s likely the key to explaining the existence of qualia in all their variety. I sketch some hypotheses about what sort of organization might entail qualia (see http://www.naturalism.org/appearance.htm#part5). Only certain sorts of RSs are host to phenomenal experience.

    “I certainly don’t foresee a cessation of investigation into the fundamental problem of phenomenal consciousness.”

    Glad we agree on this!

  22. Arnold Trehub says:

    Tom, you list three logical entailments of being a representational system (RS):

    1. Root representational vocabulary
    2. Representational self-limitations
    3. Limits of resolution

    It seems to me that the retinoid model/system satisfies all of these entailments. Am I wrong?

    In addition to the conditions listed above, I would add the condition of *subjectivity*, the representation of a fixed locus of representational origin (the *core self*/I!) within a volumetric surround (the phenomenal world). My theoretical proposal is that this additional entailment changes a non-conscious RS into a conscious RS.

    You discuss the problem of distinguishing the thermostat from the kind of natural RS with conscious experience. In my view, it is just the absence of the property of subjectivity in the thermostat (and in many lower organisms) that distinguishes conscious entities from non-conscious entities. See Fig. 1, here: http://journalofcosmology.com/Consciousness130.html.

  23. Tom Clark says:

    Arnold:

    I’m proposing, in a very preliminary and tentative way, that only certain sorts of representational systems have the right sort of organizational characteristics to non-causally entail the existence of qualia. The retinoid system very likely has the first three you mention (root vocabulary, representational self-limitations, and limits of resolution), but on my proposal that gets it only *part way* toward entailing qualia, since there are other organizational constraints enumerated later on in that section that I suggest are needed to get all the way. Whether the retinoid system meets these constraints I don’t know, and of course I don’t pretend these are anything more than a gesture toward what might get us “in the vicinity of qualia,” as I put it. But something is better than nothing at this stage of the game, I hope.

    As for subjectivity, I see that as part of the (normal) *structure* of consciousness, with both the self and its surround as contrasting *contents* of experience, composed of qualitative elements (although as Hume pointed out, it’s notoriously difficult to pin down the essential quale of the self – it’s kind of like a phenomenal posit that you can’t find when you look for it). In some situations, e.g., some sorts of meditative or transcendent experiences, the sense of self is said to disappear, so subjectivity is perhaps not an absolutely necessary element of consciousness (Metzinger agrees with this).

    Of course, the idea of an experience undergone by no experienced self is pretty weird, but not impossible. This point comes up in my review of Metzinger’s The Ego Tunnel (http://www.naturalism.org/metzinger.htm), where I ask:

    “…how far can a self-maintaining representational system go in directly appreciating the fact that it simulates reality, *including itself*, instead of directly encountering it? Metzinger says late in the book:

    ‘The bigger picture cannot be properly reflected in the Ego Tunnel – it would dissolve the tunnel itself. Put differently, if we wanted to *experience* the theory as true, we could do so only by radically transforming our state of consciousness.’ (p. 209, emphasis added).”

  24. John says:

    Hi, are comments off?

  25. Peter says:

    No, they’re on OK.

  26. Mark Munro says:

    Hello. A different approach: the base state of the brain is disorganization. Disorganization is energy-consuming and dysfunctional. Cognition results from the work done to mitigate the liabilities of a disorganized brain; hence the rationale and incentive for the brain to create its unique activity.
    Consciousness is a by-product of cognition. Solve cognition and consciousness will follow.

    Mark

  27. Marta Pastor says:

    Clear and concise. Thank you very much. Kisses!
