Electric Brain

Sue Pockett is back in the JCS with a new paper about her theory that conscious experience is identical with certain electromagnetic patterns generated by the brain. The main aim of the current paper is to offer a new hypothesis about what distinguishes conscious electromagnetic fields from non-conscious ones; basically it’s a certain layered shape, with negative charge on top followed by a positive region, a neutral one, and another positive layer.

Pockett sets the scene by suggesting that there are three main contenders when it comes to explaining consciousness: the thesis that consciousness is identical with certain patterns of neuron firing; functionalism, the view that consciousness is a process; and her own electromagnetic field theory. This doesn’t seem a very satisfactory framing. For one thing, it doesn’t do much justice to the wild and mixed-up jungle which the field really is (or so it seems to me); for another, there isn’t really a sharp separation between the first two views she mentions – plenty of people who think consciousness is identical with patterns of neuronal activity would be happy to accept that it’s also a neuronal process. Third, it rather elevates Pockett herself from the margins to the status of a major contender; forgivable self-assertion, perhaps, although it may seem a little ungenerous that she doesn’t mention that others have also suggested that consciousness is an electromagnetic field (notably Johnjoe McFadden – although he believes the field is causally effective in brain processes, whereas Pockett, as we shall see, regards it as an epiphenomenon with no significant effects).

Pockett mentions some of the objections to her theory that have come up. One of the more serious ones is the point that the fields she is talking about are in fact tiny and very local: so local that there is no realistic possibility of the field from one neuron influencing the activity of another neuron. Pockett is happy to accept this; conscious experience doesn’t actually have the causal role we usually attribute to it, she says: it’s really just a kind of passenger accompanying mental processes whose course is determined elsewhere. She cites various people in support of this epiphenomenalist view, including Libet – she has some interesting things to say about his experiments (but doesn’t note that he too liked the idea of a mental field of some kind).

The new thesis about the shape of conscious fields appears to spring from observation of neuronal structure in the neocortex, and in particular the fact that in its fourth layer there are no pyramidal cell bodies. This is a feature which appears to be characteristic of the parts of the neocortex often associated with consciousness, and it implies that there is a gap or neutral region in the fields generated there, helping to yield the layered structure mentioned above. Pockett proposes a number of experiments which might support her view, some of which have already been carried out.

So what do we make of that? I see a number of problems.

First is the inherent implausibility of the overall theory. Why on earth would we identify conscious experience with tiny electromagnetic fields? I think the attraction of the theory comes from thinking about two kinds of consciousness, more or less the two kinds identified by Ned Block: the a-consciousness that does the work of useful, practical cogitation, and the p-consciousness which simply endows it with subjective feeling. Looked at from this angle it may be appealing to think that the actual firing of neurons is doing the a-consciousness while the subjectivity, the phenomenal experience, comes from the subsidiary electric buzz.

But when we look at it more closely, that doesn’t make any particular sense. The problem is that conscious experience doesn’t seem like anything in physics, whether a field or a lump of matter. If we’re going to identify it with something physical, we might as well identify it with the neurons and simplify our theory – dropping the field entity in obedience to Occam’s Razor. Nothing about the electrical fields helps to explain the nature of phenomenal experience in a way that would motivate us to adopt them.

Second, there’s the problem of Pockett’s epiphenomenalism. Epiphenomenalism is very problematic because if consciousness does not cause any of our actions, our reports and discussions about it cannot have been caused by it either. Pockett’s own account of consciousness could not have been caused by her actual conscious experience, and nothing she writes could be causally related to that actual experience: if she were a zombie with no consciousness, she would have written just the same words. This is a bit of an uncomfortable position in general, but it also means that Pockett’s ambitions to test her theory experimentally are doomed. You cannot test scientifically for the existence of a supposed entity that has no effects because it makes no observable difference whether it’s there or not. All of Pockett’s proposed experiments, on examination, test accessible aspects of her theory to do with how the fields are generated by neurons: none of them test whether consciousness is present or absent, and on her theory no such experiment is possible.

While those issues don’t technically prove that Pockett is wrong, they do seem to me to be serious problems that make her theory pretty unattractive.


105 Comments

  1. Eric Thomson says:

    Given the localized nature of the implementation, does she think “binding” is not an issue? E.g., does she discuss how a visual and auditory experience can be part of the same experience?

    Field theorists, who are not in general epiphenomenalists, will be interested in the new paper in Nature on causally efficacious fields in Drosophila (here).

    I am frankly surprised that someone would still use Libet to support epiphenomenalism. People who do this typically make an illicit slide from the vanilla claim, ‘Nonconscious processes can influence behavior and experience,’ to the much stronger claim that ‘Only nonconscious processes influence behavior and other neural processes.’ The former claim is innocuous, known to be true in the study of every sensory modality (and even the study of diaphanous phenomena like consciousness of intentions to act). The latter is a contentious claim with no strong empirical support; the two are definitely not equivalent, and I have trouble understanding why so many people slide from one to the other.

    Finally, once your field theory gets fine-grained enough, it becomes tough to empirically differentiate from more standard neural theories. Action potentials in individual neurons generate EM fields, after all.

  2. Peter says:

    Eric – no, so far as I can see she has no account of how binding comes about.

  3. Scott says:

    “The problem is that conscious experience doesn’t seem like anything in physics…”

    Not quite true. This is beside the point, but ever since reading The Wraparound Universe I’ve been struck by the way consciousness, in a peculiar structural manner, does resemble the universe itself insofar as both are finite and yet possess no discrete boundaries: the limits of the universe are not directly observable within the universe in the same way that the limits of consciousness are not directly observable within consciousness. Now, when I watch programs on cosmology, I can’t help but see astronomy as a form of introspection, and keep waiting for Nature to pronounce that physicists have discovered the ‘representational content’ of the visible universe!

    Field theories are fascinating, otherwise, but they never seem to do any real explanatory work. Epiphenomenalism just strikes me as giving up. But then so do things like p-consciousness!

  4. Tom Clark says:

    Thanks Peter, I share your skepticism about Pockett’s thesis. Some remarks which will no doubt sound familiar:
    “The problem is that conscious experience doesn’t seem like anything in physics, whether a field or a lump of matter.”

    Right. Moreover, all things that figure in physics and other 3rd person theories and explanations are potential public objects of observation, whereas experiences aren’t observable (not even by the experiencer), but undergone or had by singular individuals, so are categorically private and unavailable to science as explanatory entities. You can in principle observe the NCC of my pain, but not the pain itself. This makes it hard to draw an identity between experience and its NCC or any physical state of affairs. Indeed, phenomenal qualities – subjective feelings – don’t (and I would say can’t) figure in scientific explanations of behavior precisely because they are nowhere to be found in the world of public objects that’s the domain of science.

    “Epiphenomenalism is very problematic because if consciousness does not cause any of our actions, our reports and discussions about it cannot have been caused by it either”.

    I’m not sure that we need to suppose experiences add to what neurons are doing in causing actions, for instance the reporting of pain. We form the concept of pain and report having it, which seems to me explicable in neuro-behavioral terms: we can in principle trace the neural and muscular story all the way from the painful stimulus to concept formation (if a concept is deployed) to the speech act or other behavior. The concept of pain of course derives from having experiences of pain, but the experience itself need not cause the concept by means of interacting with neurons, and indeed there’s no story on offer of how that could occur anyway (the problem of dualist interactionism). Rather the concepts of experience and subjectivity arise in conjunction with our concepts of the external world, bodies, what’s mental, what’s physical, etc. Of course if I cry out when hurt I’m not deploying a concept, but the crying out is explicable within a neuro-functionalist description that, again, doesn’t appeal to phenomenal qualities.

    However, despite all this I’d say that pain (and all conscious experience) isn’t epiphenomenal with respect to action since as suggested above it doesn’t and can’t appear in 3rd person explanations. Only something that is potentially observable could either play a causal role or fail to play a causal role (be epiphenomenal) with respect to behavior. So it would be unfair and inaccurate to call experience epiphenomenal. Rather it’s explanatorily *orthogonal* with respect to 3rd person explanations, while playing a robust causal role in subjective explanations which routinely cite phenomenal feels as causes of behavior (I eat the cake because it tastes so good.) More at Respecting Privacy, http://www.naturalism.org/privacy.htm

  5. Jay says:

    Firstly let me state how much I love this site. It is my loss that I can’t follow it more frequently.

    Concerning epiphenomenalism; I don’t understand what is so wrong with it. The main difficulty I hear is that it is a physically untestable claim (with at least one caveat). However, this difficulty exists only for physicalists. If you lean towards neo-dualism you would not expect all assertions about subjectivity to be physical and therefore wouldn’t have an issue with untestable epiphenomena. Some physicalists may not realize that they are arriving at dualist theories.

    Tom’s statement above resonates with me – particularly the notion of orthogonality (he is, I believe, referring to the deeper linear-algebra meaning of the word). I have thought about that word myself in relation to this topic. I agree, subjectivity in general seems orthogonal to physical explanations. This seems more than just mathematical metaphor.

  6. Eric Thomson says:

    In my admittedly limited understanding of her work, Pockett is not a dualist, which makes her epiphenomenalism inexplicable. If she were a property dualist, it would be at least understandable.

    To risk derailing the discussion, this view that these explanatory patterns are orthogonal seems false, or at least dualistic. We often mix them together as in ‘He drove to the dentist because of the pain in his tooth.’

  7. Arnold Trehub says:

    If conscious content were truly orthogonal to physical explanation then, it seems to me, it could play no role in arriving at a scientific explanation of consciousness. I have proposed that there is an *analogical* relationship between certain biophysical states of the brain and salient features of phenomenal consciousness. This suggests that the explanatory task for science is to detail the brain mechanisms that can generate proper analogs of relevant conscious experience.

  8. Jay says:

    Eric – I too have limited understanding of Pockett’s work, and I agree with you that if she is in fact a devoted physicalist, then epiphenomena are a problem. However, it is not true that all scientists must be physicalists.

    I see at least one way that she could be both a dedicated empiricist and forward an epiphenomenal hypothesis. Imagine a possible future world of completed science, where we can explain the entire external universe through cause and effect to within the limits prescribed by quantum mechanics and chaos. If in this world her descendants cannot yet explain consciousness, then epiphenomenalism would seem true through the empirical exclusion of every other possible explanation. In that case science would argue for dualism and not physicalism.

    Concerning your second point: I think that the intuition of many is that ‘orthogonality’ in our explanations must be false. Note, however, that your sentence makes sense if the speaker is referring to the phenomenal experience of driving to the dentist.

    If ‘orthogonality’ exists then it certainly implies dualism…but what is necessarily wrong with that? The difficulty we have describing qualia in words (which would be necessary for hypothesis formation) speaks to something akin to ‘orthogonality’ within our syntax. I think that is the whole debate put differently. Physicalists believe they can solve these syntax ‘problems.’ Scientists who are dualists believe they can discover new syntax that resolves these ‘problems.’ Dualists don’t believe these issues are ‘problems’ at all, but rather a line of evidence in favor of something else.

  9. Jay says:

    Arnold – “If conscious content were truly orthogonal to physical explanation then, it seems to me, it could play no role in arriving at a scientific explanation of consciousness.” I think that I mostly agree with this…as far as I can tell. Consciousness may provide the curiosity, meaning and purpose of science, but it does not take part in scientific explanations. Science deliberately removes subjectivity from its process. This seems to go both ways. If science removes subjectivity then it seems impossible for science to describe subjectivity.

  10. Arnold Trehub says:

    Jay, here is science that does not remove subjectivity from its explanatory processes:

    http://theassc.org/documents/where_am_i_redux

  11. Jay says:

    Arnold, Thanks for the paper, it was clearly written and clever – as usual.

    Through history we have been able to manipulate the conscious experience of another and have them report on that change; whether it be poking out a subject’s eyes or exposing them to sophisticated virtual reality machines that give an out of body experience. Few would argue that information received through the senses and processed in the brain would not influence conscious sensation. Similarly, I don’t think anyone would argue that subject reports in brain experiments don’t provide us important information about brain processes.

    However, these reports always reference some functional status of the subject (I can’t see, I am not here but rather over there,…), not the subjective ‘what it feels like’ of the experience. So to me subject reports do not reference subjectivity, or consciousness.

  12. Arnold Trehub says:

    Jay: “However, these reports always reference some functional status of the subject (I can’t see, I am not here but rather over there,…), not the subjective ‘what it feels like’ of the experience.”

    This remark has an echo of Stevan Harnad with whom I have had many debates. Jay, if your subjective feeling/conscious experience is not some functional state of your brain, then what is it?

    Here’s an excerpt from my forthcoming chapter titled “A Foundation for the Scientific Study of Consciousness” (in press, Cambridge University Press). This is how I see the problem:

    …………………………………………………………………………….
    Dual-Aspect Monism

    Each of us holds an inviolable secret — the secret of our inner world. It is inviolable not because we vouch never to reveal it, but because, try as we may, we are unable to express it in full measure. The inner world, of course, is our own conscious experience. How can science explain something that must always remain hidden? Is it possible to explain consciousness as a natural biological phenomenon? Although the claim is often made that such an explanation is beyond the grasp of science, many investigators believe, as I do, that we can provide such an explanation within the norms of science. However, there is a peculiar difficulty in dealing with phenomenal consciousness as an object of scientific study because it requires us to systematically relate third person descriptions or measures of brain events to first person descriptions or measures of phenomenal content. We generally think of the former as objective descriptions and the latter as subjective descriptions. Because phenomenal descriptors and physical descriptors occupy separate descriptive domains, one cannot assert a formal identity when describing any instance of a subjective phenomenal aspect in terms of an instance of an objective physical aspect, in the language of science. We are forced into accepting some descriptive slack. On the assumption that the physical world is all that exists, and if we cannot assert an identity relationship between a first-person event and a corresponding third-person event, how can we usefully explain phenomenal experience in terms of biophysical processes? I suggest that we proceed on the basis of the following points:

    1. Some descriptions are made public; i.e., in the 3rd person domain (3 pp).
    2. Some descriptions remain private; i.e., in the 1st person domain (1 pp).
    3. All scientific descriptions are public (3 pp).
    4. Phenomenal experience (consciousness) is constituted by brain activity that, as an object of scientific study, is in the 3 pp domain.
    5. All descriptions are selectively mapped to egocentric patterns of brain activity in the producer of a description and in the consumer of a description (Trehub 1991, 2007, 2011).
    6. The egocentric pattern of brain activity – the phenomenal experience – to which a word or image in any description is mapped is the referent of that word or image.
    7. But a description of phenomenal experience (1 pp) cannot be reduced to a description of the egocentric brain activity by which it is constituted (there can be no identity established between descriptions) because private events and public events occupy separate descriptive domains.

    It seems to me that this state of affairs is properly captured by the metaphysical stance of dual-aspect monism (see Fig.1) where private descriptions and public descriptions are separate accounts of a common underlying physical reality (Pereira et al 2010; Velmans 2009). If this is the case then to properly conduct a scientific exploration of consciousness we need a bridging principle to systematically relate public phenomenal descriptions to private phenomenal descriptions.

    Bridging Principle

    Science is a pragmatic enterprise; I think the bar is set too high if we demand a logical identity relationship between brain processes and the content of consciousness. The problem we face in arriving at a physical explanation of consciousness resides in the relationship between the objective 3rd person experience and the subjective 1st person experience. It is here that I suggest that simple correlation will not suffice. I have argued that a bridging principle for the empirical investigation of consciousness should systematically relate salient analogs of conscious content to biophysical processes in the brain, and that our scientific objective should be to develop theoretical models that can be demonstrated to generate biophysical analogs of subjective experience (conscious content). The bridging principle that I have proposed (Pereira et al 2010) is succinctly stated as follows:

    For any instance of conscious content there is a corresponding analog in the biophysical state of the brain.

    In considering the biological foundation of consciousness, I stress corresponding analogs in the brain rather than corresponding propositions because propositions, as such, are symbolic structures without salient features that are similar to the imagistic features of the contents of consciousness. Conscious contents have qualities, inherent features that can be shared by analogical events in the brain but not by propositional events in the brain (Trehub 2007).
    Notice, however, that inner speech, evoked by sentential propositions, has analogical properties; i.e., sub-vocal auditory images (Trehub 1991).
    ………………………………………………………………………..

  13. Tom Clark says:

    Eric: “this view that these explanatory patterns are orthogonal seems false, or at least dualistic. We often mix them together as in ‘He drove to the dentist because of the pain in his tooth.’”

    Right, we do mix them, since as persons we are both conscious subjects and observable physical entities, so we are simultaneously situated in both subjective and intersubjective realities (Arnold’s two aspects, about which more below). As a result, we are well acquainted with the elements that participate in what I call their respective explanatory spaces, 1st and 3rd person: private phenomenal experiences on the one hand and public observables such as physical objects on the other. We are, therefore, naturally inclined to combine elements from both explanatory spaces in our 3rd person accounts of behavior. This is harmless enough so long as we don’t suppose (as some folks do) that conscious states add extra “causal oomph” to what their NC accomplish in behavior control. And importantly, we shouldn’t draw the conclusion that because they don’t add causal oomph they are epiphenomenal: they’d be epiphenomenal only if consciousness were an observable phenomenon like the NCC, which it isn’t. The conclusion to draw is that phenomenal feels are a parallel subjective reality that, as Arnold suggests, can’t be reduced to or literally identified with brain states (which participate in objective reality described by science). As he puts it, we’re not going to find “a logical identity relationship between brain processes and the content of consciousness.”

    Dual-aspect monism as Arnold describes it has it that phenomenal feels are subjective descriptions of an underlying physical reality. But my sensations of pain and red aren’t descriptions in and of themselves; they are basic qualitative elements of subjective reality that reliably correspond to 3rd person states of affairs, both in my brain and in the world (including my body) as modeled by my brain. Pain doesn’t *describe* the corresponding brain state; it’s what I experience when that brain state exists, and the same goes for all phenomenal feels. Of course I often mention phenomenal feels (experiences such as the chair looking red, my back as aching) when describing the world, including my body, but I don’t think we should think of experience as describing the underlying physical reality of the NCC, which is what dual-aspect monism seems to claim. Underlying physical reality gets described in terms of quantitative (non-qualitative) metrics, while experiences are the subjective reality of the way the world privately appears to suitably organized systems such as ourselves that instantiate a behavior-controlling model of the system in the world. That model is not of the underlying physical reality that constitutes the system, but of that part of reality that it’s been adaptive for us as individual systems to model, e.g., largish objects (including our bodies) and intentional agents, especially con-specifics. What we have, I suggest at http://www.naturalism.org/appearance.htm#part1 , is a parallel modeling of the world from two epistemic perspectives, one pertaining to individual systems that results (somehow) in subjective experience, the other pertaining to the intersubjective community of science, which uses quantitative descriptors that (speaking to Jay’s point) *by necessity* leave out phenomenal qualities because qualities aren’t, as Arnold acknowledges, intersubjectively available.

  14. Arnold Trehub says:

    Tom,

    I don’t think that “phenomenal feels are subjective descriptions of an underlying physical reality”. I think that phenomenal feels are subjective (from within the brain) *aspects* of an underlying reality that are described by first-person descriptors.

  15. Tom Clark says:

    Arnold,

    “I think that phenomenal feels are subjective (from within the brain) *aspects* of an underlying reality that are described by first-person descriptors.”

    On your view we have two different things: subjective aspects of underlying reality (certain sorts of brain states) and first-person descriptors of those aspects. Is it that the first-person descriptor “I’m in pain” describes the subjective aspect – the experience of pain – of the underlying reality of having a certain brain state? Or is the experience of pain itself a first-person descriptor of that aspect, which is something other than pain itself?

  16. Arnold Trehub says:

    Tom, in my view the first-person descriptor “I’m in pain” describes the subjective aspect (aspect 1) – the experience of pain, which is a certain brain state that can have a public/scientific third-person description (aspect 2) – a perspective from outside a brain and its putative mechanisms.

  17. Tom Clark says:

    Arnold, if I understand it correctly, your dual aspect monism has it that there are two aspects to the “common underlying *physical* reality” of brain states (my emphasis). One of these aspects, the subjective (conscious experience), isn’t accessible intersubjectively. But, physical reality as presented by science just is that which is intersubjectively accessible and gets described in non-qualitative terms. So instead of saying that conscious experience is an aspect of physical reality, it’s perhaps more apposite to say that it constitutes its own subjective reality, since after all we can’t deny the existence of (the reality of) experience, whether or not it corresponds to anything outside the head. As much as (undeniable) physical reality appears to us via consciousness, there is the undeniable *reality of appearances* (phenomenal experience) for the conscious system.

    There are of course dependence relations between physical and subjective realities, since as far as we can tell experience only exists when certain physically specifiable conditions exist. So in that sense the physical seems prior to the subjective, which is why the project of explaining consciousness has understandably been biased toward varieties of physicalism. But given that (as you and I agree) the subjective can’t be reduced to or logically identified with the physical, it seems legitimate to accord it its own (qualitative) ontology, which again marks it off as its own sort of reality.

    Dual aspect monism speaks to the desire for theoretical unification, which I share, but it seems to me it can’t successfully assimilate subjectivity to the physical given the very nature of the physical. Instead, the master unifying concept, I would suggest, is representation, in which two different epistemic, representational perspectives on the world, one public and collective (science), the other individual (being a brain and body), entail their respective realities, physical and qualitative.

    All this seems to have gotten away from Peter’s concerns about Pockett’s thesis that I first picked up on (epiphenomenalism and the fact that experience doesn’t seem like anything physical) but one thing led to another and here we are again, sorry!

  18. Arnold Trehub says:

    Tom, you wrote: “But, physical reality as presented by science just is that which is intersubjectively accessible and gets described in non-qualitative terms. So instead of saying that conscious experience is an aspect of physical reality, it’s perhaps more apposite to say that it constitutes its own subjective reality… ”

    A couple of comments on this. First, I think it’s wrong to say that science *presents* physical reality. Instead, we should say that science *describes* what it provisionally takes to be a part of physical reality. If we say, as you propose, that conscious experience “constitutes its own subjective reality”, aren’t we then separating the subjective from the physical? Dualism?

    Tom: “But given that (as you and I agree) the subjective can’t be reduced to or logically identified with the physical, it seems legitimate to accord it its own (qualitative) ontology, which again marks it off as its own sort of reality.”

    Your proposal might be justified by some criteria. But if the goal is to naturalize consciousness — arrive at a physical explanation of consciousness — then, it seems to me, that your dualist formulation leads us into an explanatory tar pit.

  19. Tom Clark says:

    Arnold,

    “If we say, as you propose, that conscious experience “constitutes its own subjective reality”, aren’t we then separating the subjective from the physical?”

    As I pointed out, there are empirically observed dependency relations between the two realities, physical and subjective, so there’s clearly a relation or entailment from one to the other. I suggest what the (non-causal) entailments might be at http://www.naturalism.org/appearance.htm But I don’t think we can assimilate the subjective (conscious experience) to the physical by saying it’s an aspect of the physical since it 1) isn’t observable in the way physical objects are (it isn’t intersubjectively available) and 2) has a qualitative ontology (phenomenal feels, qualia) that physical objects don’t.

    “Dualism?”

    As I’ve said a few times, my proposal is that there are two epistemic perspectives that generate their respective realities, with representation being the unifying concept, a concept that’s as they say “topic neutral” – http://plato.stanford.edu/entries/mind-identity/#Phe . The dualism of dual aspect monism seems forced for reasons I’ve stated above and earlier in this thread.

    “…if the goal is to naturalize consciousness — arrive at a physical explanation of consciousness — then, it seems to me, that your dualist formulation leads us into an explanatory tar pit.”

    I don’t see that the project of naturalizing consciousness needs to limit itself to physicalism or physical explanations along the lines of standard mechanistic or causal accounts. Rather, conceptually transparent explanations that do justice to both physical and subjective natural phenomena *might* involve what I think are usefully characterized as non-causal entailments of being representational systems, as suggested in “The appearance of reality” at the link above. But of course I could be dead wrong!

  20. Arnold Trehub says:

    Tom: “But I don’t think we can assimilate the subjective (conscious experience) to the physical by saying it’s an aspect of the physical since it 1) isn’t observable in the way physical objects are (it isn’t intersubjectively available) and 2) has a qualitative ontology (phenomenal feels, qualia) that physical objects don’t.”

    1. Fundamental physical particles are *not* observable. For example, if the physical existence of the Higgs boson is confirmed, it will not be because the Higgs boson was observed, but because the predicted *observable effects* of the theoretical properties of the Higgs boson were empirically confirmed.

    2. To claim that a biophysical system can *not* have a phenomenal feel is to arbitrarily rule out a physical explanation of phenomenal consciousness on the basis of a compelling intuition. Perhaps this is a key difference between philosophy and science. In science, intuition is trumped by evidence.

    Tom: “I don’t see that the project of naturalizing consciousness needs to limit itself to physicalism or physical explanations along the lines of standard mechanistic or causal accounts.”

    If you stipulate that to naturalize consciousness does not require a causal account (which I take to be an explanation of consciousness within scientific norms) then you are justified in your stance of psychophysical parallelism. Correct me if I misunderstand your argument.

  21. Tom Clark says:

    Arnold,

    Re 1: Agreed that some well-confirmed physical phenomena aren’t directly observable, but they exist independently of a conscious system, whereas conscious experience exists only for the conscious system having it.

    Re 2: I don’t claim that “a biophysical system can *not* have a phenomenal feel” since after all we are such systems and have them.

    Lastly, I don’t *stipulate* “that to naturalize consciousness does not require a causal account” but suggest that causal accounts face problems (see http://www.naturalism.org/appearance.htm#part2 ) and that there might be a non-causal entailment between being a certain sort of representational system and having experience (see http://www.naturalism.org/appearance.htm#part5 ). To pan out, a non-causal account of consciousness has to be consistent with all the empirical evidence regarding psycho-physical co-variation, so would be in good scientific standing. I’d say that the basic naturalistic philo-scientific norm is a commitment to empiricism, not to causation, in which case we can’t rule out the possibility that the most apt characterization of the mind-body relation is that of psycho-physical parallelism.

  22. Arnold Trehub says:

    Tom, you wrote:

    #19: “2) [conscious experience] has a qualitative ontology (phenomenal feels, qualia) that physical objects don’t.”

    #21: “Re 2: I don’t claim that “a biophysical system can *not* have a phenomenal feel” since after all we are such systems and have them.”

    Is a biophysical system a physical object? If so, then #19 and #21 are in conflict. In any case, I’m glad to agree with #21.

    Tom: “I’d say that the basic naturalistic philo-scientific norm is a commitment to empiricism, not to causation, in which case we can’t rule out the possibility that the most apt characterization of the mind-body relation is that of psycho-physical parallelism.”

    I agree with you that, absent a valid causal explanation of the mind-body relation, psycho-physical parallelism might be the most apt characterization. True, pure empiricism, purged of any explanatory goal, is content with no more than honest description of the natural world. But I take the project of naturalizing consciousness to imply an effort to arrive at a causal understanding of conscious experience. I think encouraging progress has been made in this effort. It seems that we don’t disagree on basic matters but that, for you, the safest bar is set at the descriptive level of psycho-physical parallelism.

  23. Tom Clark says:

    Arnold:

    “Is a biophysical system a physical object?”

    The system is of course composed of physical objects, e.g., atoms, molecules, neurons, etc., but as far as we know none of them have phenomenal properties or host conscious experience. Rather, the evidence suggests that consciousness only exists when certain representational functions instantiated by the system are active. So it’s likely being a *system* that matters for consciousness, not being biophysical. Presumably other substrates, suitably organized, could entail the existence of phenomenal feels for the system.

    “But I take the project of naturalizing consciousness to imply an effort to arrive at a causal understanding of conscious experience.”

    Whether or not representational functions can cause consciousness in any standard sense seems to me a very open question (see the first link in #21). Moreover, the causal relation view doesn’t seem consistent with your dual aspect view: does the physical aspect of being a brain state cause the subjective aspect? In any case, your claim that only causal relations can be explanatory seems to me a little premature. Everyone here is looking to explain consciousness within a naturalistic evidentiary framework and we shouldn’t take any explanatory options off the table that are consistent with the evidence.

  24. Arnold Trehub says:

    Tom: “Rather, the evidence suggests that consciousness only exists when certain representational functions instantiated by the system are active. So it’s likely being a *system* that matters for consciousness, not being biophysical. Presumably other substrates, suitably organized, could entail the existence of phenomenal feels for the system.”

    As I stressed in *The Cognitive Brain*, the structure and dynamic characteristics that account for the competence of any physical system are critically dependent on the properties of its components. Of course it is *possible* that non-biophysical components in a suitably organized system might generate phenomenal feels for the system. But to date, we have undeniable evidence that the biophysical substrate of a human brain exhibits consciousness, whereas there is no credible evidence that any artifact is conscious.

    Tom: “Whether or not representational functions can cause consciousness in any standard sense seems to me a very open question (see the first link in #21).”

    One of the problems in tackling questions of this kind is that there are few explicit working definitions of consciousness, so we can easily go round-and-round through the same revolving door in our arguments about consciousness. My working definition of consciousness is this:

    *Consciousness is a transparent brain representation of the world from a privileged egocentric perspective*

    What is your working definition of consciousness?

    Tom: “Moreover, the causal relation view doesn’t seem consistent with your dual aspect view: does the physical aspect of being a brain state cause the subjective aspect?”

    Being a *particular kind of brain state — an activated retinoid space* causes the subjective aspect. This is surely consistent with the dual-aspect view. Moreover, this is not merely an untested theoretical claim. In my SMTT experiments, I was able to successfully induce in all subjects the qualitative features of a vivid subjective experience without a corresponding sensory input. These novel conscious experiences were successfully predicted by the biophysical structure and dynamics of the retinoid model. For example, see *Evolution’s Gift: Subjectivity and the Phenomenal World*, here:

    http://evans-experientialism.freewebspace.com/trehub01.htm

  25. Jay says:

    Arnold – My principal difficulty with many forms of dual aspect monism is that they do not attempt to capture the ‘what it is like.’ I certainly agree that the descriptions of the conscious mind (1p) are different than objective, scientific descriptions (3p). Such incommensurability of description is even common in engineering among networks of unconscious devices. However, this incommensurability is not the end. Certainly if we are seeking a theory of consciousness the ‘what it is like’ part is what we are most interested in explaining. By setting the bar lower we leave out what it is we seek to understand.

    If we try to explain the ‘what it is like’ part of consciousness it is very hard not to ascribe some phenomenal quality to brain processes and, in so doing, drift into property dualism.

  26. Eric Thomson says:

    Jay wrote:
    Such incommensurability of description is even common in engineering among networks of unconscious devices.

    This is interesting. What do you have in mind here? Something like Fourier vs time-domain representations of the same transform?

    We shouldn’t be all that bothered by dualism at the level of descriptions or concepts. We can have two concepts that apply to the same thing. The best hypothesis I have seen, to the point where I don’t really consider it a hypothesis anymore, is that these subjective experiences are brain states of a certain type (with caveats and qualifications I could add). There is really no good evidence to suggest this is false, plenty of evidence to suggest it is true, and the main lines of argument against it are based on introspective conceptual analyses, which have never been a good guide to ontology.

  27. Eric Thomson says:

    Note, however, I don’t think everyone should summarily dismiss the conceptual issues (google ‘aliens versus materialists’ for my most recent, ongoing, foray), but I do think that people tend to give them too much weight. Especially philosophers, for obvious reasons.

  28. Jay says:

    “This is interesting. What do you have in mind here? Something like Fourier vs time-domain representations of the same transform?”

    Take, for instance, a central processor connected to smart sensor nodes through a limited protocol, limited bandwidth network. Each node on the network has its own hw/sw architecture, internal protocols, and its own sophisticated processing that takes the primitive sensor data (radar returns, light patterns, etc…) and generates from it a detailed description about the state of its surroundings. Following your suggestion, some architectures could interpret frequency domain descriptions and others only descriptions in the time domain, etc… Each node passes only actionable summary descriptions onto the network for other nodes to interpret. If the computational content of the universe is reduced to this network (the designer is removed), then the internal descriptions of sensory information within the nodes are not public descriptions as they cannot be passed to other components. Even if they could be then the other components would not have the architecture to interpret those ‘descriptions.’ This truncated universe has both internal and external descriptions that cannot be reconciled by the system or its components.

    Note, however, that consciousness is not a necessary condition for the incommensurability of information among nodes. Neither a node nor the system produces a ‘what it is like’ in this example. We have only streams of electrons moving in the ways prescribed by their design. The descriptions produced are without meaning or any form of subjective sensation like qualia. If, somehow, one of the nodes produced qualia, or a ‘what it is like’ sensation, then we would want to know how, and this would be a different, more basic question.
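    The truncated universe above can be made concrete in a few lines of code. The Python sketch below is purely illustrative — every class name, threshold, and message field is invented for the example, not part of any commenter’s proposal. Two nodes hold incommensurable internal descriptions of their sensor data (one spectral, one time-domain), but the shared protocol carries only small summary messages, so neither the central processor nor the other node ever receives, or could interpret, those internal descriptions.

    ```python
    import cmath
    from dataclasses import dataclass

    @dataclass
    class Summary:
        """The only message type the limited-bandwidth protocol can carry."""
        node_id: str
        label: str          # actionable summary, e.g. "loud"
        confidence: float

    class FrequencyDomainNode:
        """Holds a private frequency-domain description of its sensor data."""
        def __init__(self, node_id):
            self.node_id = node_id
            self._internal = None  # rich internal description; never sent

        def sense(self, samples):
            # naive DFT magnitudes: the node's detailed spectral picture
            n = len(samples)
            self._internal = [
                abs(sum(s * cmath.exp(-2j * cmath.pi * k * i / n)
                        for i, s in enumerate(samples)))
                for k in range(n // 2)
            ]

        def publish(self):
            # only a lossy summary fits through the protocol
            peak = max(self._internal)
            label = "strong-periodic-return" if peak > 5 else "quiet"
            return Summary(self.node_id, label, min(1.0, peak / 10))

    class TimeDomainNode:
        """Holds a private time-domain (raw amplitude) description."""
        def __init__(self, node_id):
            self.node_id = node_id
            self._internal = None

        def sense(self, samples):
            self._internal = list(samples)

        def publish(self):
            peak = max(abs(s) for s in self._internal)
            return Summary(self.node_id, "loud" if peak > 0.5 else "quiet",
                           min(1.0, peak))

    class CentralProcessor:
        """Sees only Summary messages; it has no architecture for
        interpreting either node's internal description."""
        def __init__(self):
            self.log = []

        def receive(self, msg: Summary):
            self.log.append((msg.node_id, msg.label))
    ```

    Run end to end, the central processor’s log contains only protocol-level labels; neither node’s private `_internal` description ever crosses the network, so the internal and external descriptions cannot be reconciled by any component — and, as the example is meant to show, nothing here has a ‘what it is like.’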

  29. Arnold Trehub says:

    Peter, my comment #24 was submitted yesterday but has not yet been posted. Is there a problem with this comment?

    [Sorry, Arnold, Akismet didn't like it for some reason - should be OK now. -Peter]

  30. Eric Thomson says:

    Jay I find that a very interesting analogy, one that deserves a paper-length treatment. Do you have one, or are you aware of one? I will have to chew on it a bit before I say more, and I don’t want to derail the thread. Maybe a good subject for philosophyofbrains blog….

  31. Arnold Trehub says:

    Jay: “My principal difficulty with many forms of dual aspect monism is that they do not attempt to capture the ‘what it is like.’”

    Yes, dual-aspect monism, as such, doesn’t attempt to capture the ‘what it is like’ character of conscious experience/subjectivity. But the retinoid model of consciousness, which is justified within the scope of dual-aspect monism, does deal with ‘what it is like’ because it provides a neuronal analog of *what it is like to be conscious*. Is there a primitive irreducible general content of ‘what it is like’ that characterizes any and all particular instances of conscious experience? I believe that there is. Imagine the moment of awakening from a deep dreamless sleep before the content of your experience is elaborated by contributions from your exteroceptive and interoceptive sensory modalities. My claim is that the moment of gaining consciousness is just like *being here* — a fundamental sense of being at the origin of a spatio-temporal surround. This is your primitive subjective/perspectival (1st-person) world that is later filled up with all other kinds of phenomenal content — an unbounded collective of what-it-is-likes. This implies that one is conscious if and only if one has an internal 3D spatial representation of *something somewhere* in perspectival relation to one’s self (in my terms, the core self, tokened as I!). We can think of this as one’s phenomenal world. So the question is: “What kind of credible biophysical system can generate an internal representation of one’s phenomenal world?” I believe that the neuronal structure and dynamics of the retinoid model can do the job, and I have provided a large body of empirical evidence that supports the validity of the theoretical model. The most decisive evidence, in my opinion, is given by the SMTT experiments in which subjects have a vivid conscious experience of an object moving in space when there is no such object in the subject’s visual field. The detailed phenomenal features of this experience were predicted by the neuronal structure and dynamics of the putative retinoid mechanisms.

  32. Tom Clark says:

    “The detailed phenomenal features of this experience were predicted by the neuronal structure and dynamics of the putative retinoid mechanisms.”

    What can’t be predicted, seems to me, is the phenomenal “what it is like” of the experienced colors of the object and its surround. Because the experience is private, there’s no objective referent by which the success of the prediction can be judged. I know by acquaintance what my red is like, but not yours.

    Another way to see this is if someone were suddenly equipped with the visual apparatus necessary to experience colors in the infrared or ultraviolet spectrum. Based on the neuronal structure and dynamics of the perceptual mechanisms, I don’t see how we’d be able to predict what such experiences are like. But of course any phenomenal features involving lines, shapes, durations, movement, etc. (anything quantifiable) are in principle predictable since these aren’t single basic qualia.

  33. Arnold Trehub says:

    If we accept the validity of the retinoid model of consciousness and the bridging principle of corresponding analogs between phenomenal content and biophysical states of the brain, then we are forced to rethink our concept of “what it is like”.

    Tom: “What can’t be predicted, seems to me, is the phenomenal “what it is like” of the experienced colors of the object and its surround. Because the experience is private, there’s no objective referent by which the success of the prediction can be judged.”

    I have shown that primary phenomenal qualities such as shape, size, motion, and location *can* be predicted on the basis of the properties of the putative retinoid mechanisms. What about secondary phenomenal qualities such as color, sound, emotions? In my model of the cognitive brain, there is just one innate phenomenal quality that is definitive of consciousness, that is a sense/representation of being at the center/origin of a spatial surround (our phenomenal world). All phenomenal elaborations (qualia) of this primary conscious experience are contributed by pre-conscious *sensory patterns* from imaging matrices in the sensory modalities that are projected in recursive loops into retinoid space. According to the retinoid theory, this is as true for the color of a rose, or a sense of elation, as it is for the shape of a triangle. Given this assumption, there should be some objective referents by which the success of a prediction based on the neuronal structure and dynamics of the cognitive brain model can be judged. Obviously, more work is needed to extend the retinoid model to cover the full range of secondary “what-it-is-likes”. However, it seems to me that color-matching and primary-color mixing experiments could provide objective referents by which to judge the success of predictions about color qualities that are based on the retinoid model.

  34. Arnold Trehub says:

    Tom, there was a delay in the posting of my #24. Have you seen it? I would be interested in your response, particularly your own definition of consciousness.

  35. Tom Clark says:

    Arnold in 24:

    “…we have undeniable evidence that the biophysical substrate of a human brain exhibits consciousness…”

    I’d say it isn’t the substrate that exhibits consciousness, but the behavior-controlling functional organization. Non-conscious brain processes are instantiated using the same substrate, but don’t entail consciousness because they aren’t performing those functions empirically found to be associated with it.

    “What is your working definition of consciousness?”

    My definition of consciousness is the explanatory target involved in the hard problem: phenomenal experience, the basics of which we call qualia.

    “Being a *particular kind of brain state — an activated retinoid space* causes the subjective aspect. This is surely consistent with the dual-aspect view.”

    I was under the impression that the two aspects, subjective and physical, had equal ontological status in dual aspect monism, so thanks for correcting my misunderstanding. If the subjective aspect is caused, produced or generated by the physical aspect, then the physical has ontological priority, and indeed you think the “underlying physical reality” is really all that exists. But if consciousness is a causal product of brain states, we would expect to be able to observe it as something distinct from brain states, something further those states produce. Since we don’t, since consciousness merely *accompanies* certain types of representational functions and since it exists only *for the system* as a subjective reality, I’m skeptical of the causal product hypothesis.

    Arnold in 33:

    “…there should be some objective referents by which the success of a prediction based on the neuronal structure and dynamics of the cognitive brain model can be judged… it seems to me that color-matching and primary-color mixing experiments could provide objective referents by which to judge the success of predictions about color qualities that are based on the retinoid model.”

    What’s needed is an objective measure of the *experience* of color which is exactly what can’t be had since experience is private and unobservable. There’s nothing out in the world by which we could judge the success of a prediction that my experience of red is like *this*. The objective referents you’re suggesting, e.g., a red card, are physical objects with various light reflectances that participate in producing (waking) experience, but they can’t tell us anything about my experienced quality of red, or yours. We can see the red card, but not our experiences.

  36. Eric Thomson says:

    There is this property X that is a complex biological process of a certain sort. When I study X I do not thereby instantiate X myself. This is not interesting ontologically or epistemically, but a fairly vanilla fact–we don’t become what we study.

    Let X=’photosynthesis’ or ‘pregnancy’ and this is obvious. But when X=’experience of red’ people think we should be able to be in that state (i.e., to have the experience of red ourselves).

    Tom, why aren’t you simply implementing this fallacy? If subjective experiences are complex brain processes of a certain type, then I don’t expect to have such experiences unless I am in that brain state. And to “know what it is like” to have that experience is simply to have the experience (and perhaps instantiate certain reactions to having that experience).

    This is indeed a limitation of science, but I would submit it isn’t particularly interesting or specific to subjectivity, but is just to say that we are not identical to what we study.

  37. Arnold Trehub says:

    Tom, you wrote:

    “I was under the impression that the two aspects, subjective and physical, had equal ontological status in dual aspect monism [1], so thanks for correcting my misunderstanding. If the subjective aspect is caused, produced or generated by the physical aspect, then the physical has ontological priority, and indeed you think the “underlying physical reality” is really all that exists. But if consciousness is a causal product of brain states, we would expect to be able to observe it as something distinct from brain states, something further those states produce [2]. Since we don’t, since consciousness merely *accompanies* certain types of representational functions [3] and since it exists only *for the system* as a subjective reality, I’m skeptical of the causal product hypothesis [4].”

    1. It seems to me that under dual-aspect monism the subjective and the physical are simply two different aspects/perspectives on a monistic physical ontology. I’m not clear on what the misunderstanding is.

    2. In #24 I wrote: “Being a *particular kind of brain state — an activated retinoid space* causes the subjective aspect.” Retinoid space is a particular kind of brain mechanism, and I claim that activation of this brain mechanism causes us to become conscious (the subjective aspect; e.g., we wake up from a deep sleep), and consciousness/subjectivity is *constituted* by the activated state of retinoid space, so consciousness is *not distinct* from the brain state of an activated retinoid space. The problem is that our *descriptors* of the subjective (1st-person) and of the physical (3rd-person) are distinct.

    3. Consciousness could be said to merely *accompany* the representational function of retinoid space only if it were ontologically separate from the activation pattern of the brain’s retinoid space. But in the retinoid theory of consciousness, this is not the case. Consciousness *is* the global pattern of autaptic-cell activation in retinoid space. At the same time, we can’t assert a *formal* identity between consciousness and an activated retinoid space because subjective descriptions and physical descriptions are in separate domains. Hence the need for the bridging principle of corresponding analogs to enable us to conduct empirical tests of our candidate models of consciousness.

    4. Consciousness exists *in* the system and *for* the system as a physical reality (from the 3rd-person perspective) and has real causal properties for the system. At the same time, consciousness exists as a phenomenal reality (from the 1st-person perspective) and accounts for our notions of psychological explanation.

    Tom: “What’s needed is an objective measure of the *experience* of color which is exactly what can’t be had since experience is private and unobservable.”

    I certainly agree that we can’t measure subjective experience as such. But we *can* measure objective *analogs* of subjective experience. That’s why I propose the bridging principle of corresponding analogs. The cards that are used to test color blindness, as a simple example, give us a useful measure of what a person’s color experience is like.

  38. Eric Thomson says:

    Arnold, I’m a bit uncomfortable with saying that this brain process causes consciousness. Is this important to you, or are you fine with saying that this brain process is conscious? This is not a big issue, but when we start talking about the brain causing consciousness, that makes people think they are two separate things, like a whistle and the sound it produces. I know it is fairly common to write and talk that way, especially for some reason among naturalists, but it has always rubbed me the wrong way.

  39. Arnold Trehub says:

    Eric, I understand why you feel some discomfort in saying that a brain process *causes* consciousness. But is this any different from saying that powering up your computer causes photonic patterns on its screen? In the case of the brain mechanism of retinoid space, if the activation of its autaptic cells is below a critical threshold, it is not conscious; but if diffuse priming excitation from the ascending reticular activating system is strong enough (turning the power on?), it causes autaptic-cell activity to reach the critical threshold and retinoid space becomes conscious (*is* conscious; wakes up?) in the same sense (analogically) as powering the computer causes photonic patterns on its screen. Of course, your display screen is not a biophysical mechanism subjectively organized as a 3D spatiotopic structure with a fixed locus of perspectival origin, as is retinoid space.

    But regardless of our intuitions, what really matters is if the structure and dynamics of the theoretical model enable us to explain/correctly predict relevant conscious experience.

  40. Jay says:

    Arnold – I like the retinoid theory as a brain model. To the extent that the theory can predict rudimentary experiments, this should give you some confidence that you are on track to describe certain brain processes. I am much more skeptical that the model explains phenomenal consciousness. For me an analogue representation of a 3D surround that interacts with an ego-centric representation of ‘self’ is, in principle though not in detail, not different from the mechanistic processes that describe a robot, flight control system, etc…. These are instruments for processing and reacting to data that lack the subjective character we are seeking in an explanation of consciousness. It is not clear how this issue is alleviated by further complexity, recursion, ‘self-concept’, etc…

    Just as the pixelated images you are reading on the screen have phenomenal meaning to you but are meaningless to your computer, our mind has a tendency to endow our brain models with a set of phenomenal properties. It is a difficult matter to separate our mind from the models it is thinking about. In the past I have referred to this as anthropomorphization of a model. To me this is what the retinoid model is doing when it makes phenomenal claims.

  41. Jay says:

    Eric – Referring to your earlier post…thanks for the encouragement to publish this pedagogical example. I will consider including it in something I am working on.

  42. Arnold Trehub says:

    Jay: “For me an analogue representation of a 3D surround that interacts with an ego-centric representation of ‘self’ is, in principle though not in detail, not different from the mechanistic processes that describe a robot, flight control system, etc….”

    I disagree. This is the essential difference between all existing robotic artifacts that I know of and the neuronal analog of a volumetric space with a fixed center of perspectival origin (I!) that is our brain’s putative retinoid space. For example, here is an advanced robotic “mule” that does some very interesting navigation:

    http://www.defensenews.com/article/20121219/DEFREG02/312190012/DARPA-Robot-Growing-Smarter-Tougher-8212-Preparing-RIMPAC

    If you look at the frame at 1’55″ in this video, you will see a display of what is supposed to be the robot’s perception as it detours around an obstacle in following its leader. But this contrived 2D image is not an analog structure within the “mule” that is used to control its behavior; it is an image that is created through digital addressing of charges on a photo-sensitive surface, which is essentially a *propositional* process, not an *analogical* one.

    Jay: “… our mind has a tendency to endow our brain models with a set of phenomenal properties. It is a difficult matter to separate our mind from the models it is thinking about. In the past I have referred to this as anthropomorphization of a model.”

    But I don’t propose that the retinoid model, as such, has phenomenal properties. What I claim is that the putative retinoid system in our living brains, which the retinoid model *describes*, does indeed have phenomenal properties. I surely do not “anthropomorphize” my model.

  43. Tom Clark says:

    Arnold in 37:

    “1. It seems to me that under dual-aspect monism the subjective and the physical are simply two different aspects/perspectives on a monistic physical ontology. I’m not clear on what the misunderstanding is.”

    It seems strange to say that a physical ontology has a physical aspect. Physical ontology or reality (physical objects and their properties) is just that which is intersubjectively available, while the subjective ontology/reality of phenomenal experience is available only to individual conscious systems. Perhaps you mean that two perspectives on brain states produce the two aspects, but as individuals we don’t have a perspective on our brain states, so I don’t see how that could produce conscious experience. What *might* (non-causally) entail the existence of consciousness as a private reality (what I call “system reality”) is the fact that complex behavior-controlling representational systems like ourselves need a root representational vocabulary which perforce ends up being qualitative (cognitively impenetrable, non-decomposable) for the system and only for the system; plus the facts that representational systems like ourselves are recursively limited and have limited resolution, see http://www.naturalism.org/appearance.htm#part5

    2. “consciousness/subjectivity is *constituted* by the activated state of retinoid space, so consciousness is *not distinct* from the brain state of an activated retinoid space.”

    So we agree that consciousness is not caused or produced as a separate effect, for instance the effect of having a perspective on a brain state. But if consciousness is the same thing as (not distinct from) brain states, then it would be publicly observable, but of course it isn’t. Consciousness (the private subjective reality of phenomenal feels) is distinct from its neural correlates (public physical reality) because it only exists for the subject, but as I think we agree it isn’t a causal product of its NC either.

    3. “Consciousness *is* the global pattern of autaptic-cell activation in retinoid space. At the same time, we can’t assert a *formal* identity between consciousness and an activated retinoid space because subjective descriptions and physical descriptions are in separate domains. Hence the need for the bridging principle of corresponding analogs to enable us to conduct empirical tests of our candidate models of consciousness.”

    Seems to me the separate domains (subjective and physical) are not just matters of description, but are parallel realities, one private/qualitative, one public/quantitative. So yes, models of consciousness can look for analogs (correspondences and co-variances) between them to help pin down the physical-functional correlates of phenomenal feels, such as in your retinoid model and SMTT experiment. The models can predict co-variances by (as you say below) measuring *objective* analogs of (reported) *subjective* experience, but as I think we agree they can’t predict what phenomenal qualities are like since there are no public criteria to judge the success of the prediction. The fact that we are always and only dealing with correspondences and analogs between the two domains, private/qualitative and public/quantitative, militates against the single reality thesis, the idea that, as you put it, “consciousness *is* the global pattern of autaptic-cell activation in retinoid space.”

    4. “Consciousness exists *in* the system and *for* the system as a physical reality (from the 3rd-person perspective) and has real causal properties for the system. At the same time, consciousness exists as a phenomenal reality (from the 1st-person perspective) and accounts for our notions of psychological explanation.”

    Consciousness certainly exists for the system as a *phenomenal* reality, but not, it seems to me, as a physical reality for observers (the 3rd person perspective) in the way that brain states do, since, as we agree, phenomenal states aren’t observable. As you put it in your closing paragraph, “I certainly agree that we can’t measure subjective experience as such.”

  44. Arnold Trehub says:

    Tom, maybe I didn’t express myself clearly. Look at it this way:

    1. Everything in the universe is physical (monism).

    2. Brains are biophysical objects that can be observed/experienced from the outside (physical/3pp/first aspect).

    3. Some brains contain a neuronal *mechanism* called a retinoid system which contains a neuronal module called retinoid space.

    4. When the autaptic cells in retinoid space are activated by the other mechanisms of the retinoid system, they *constitute* consciousness, which is an experience that can be had only from *inside* a brain (subjective/1pp/second aspect).

    5. Therefore, both an activated retinoid space and the subjective experience that it *constitutes* exist as two aspects of a common physical reality.

    6. *Descriptions* of #2 (physical) and #4 (subjective) exist in separate *epistemic* domains.

    7. *Descriptions* of #2 and #4 are physical and intersubjectively available.

    This makes sense to me as a neuroscientist. Is it a philosophical puzzlement?

  45. Roy Niles says:

    Arnold, if, for example, the so-called wave functions in the universe are only or merely physical, then you might have an argument. Many physicists, however, will argue that they aren’t. And some neurologists will argue that our mental strategies, without which we arguably could not operate our mechanistic systems, are functional elements that also are not physical.
    The distinction seems to be quite important if we’re to understand why any of the things in our systems work, rather than assuming that if we can find out how they work, then we will somehow automatically and magically know why. Or know as much of the why as we need to know, apparently.
    But wasn’t it the initial and strategic purpose of philosophers to ask about that more revealing type of why?

  46. John Davey says:

    Roy

    ” If for example the so-called wave functions in the universe are only or merely physical, then you might have an argument. Many physicists however will argue that they aren’t. ”

    Who ? What else are they ? And if not, why are physicists studying them ? They may have no readily identifiable physical analogue – something that’s never mattered much in physics since Newton proposed action-at-a-distance 300 years ago – but they are only ever used to describe physical systems.

    I don’t see why wave functions need to be “so-called” wave functions. What else would they be called, what else do they do ? It strikes me that wave functions are nothing other than mathematical functions that describe waves. Nothing hidden there.
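John’s claim that wave functions are just mathematical functions describing (physical) waves can be illustrated with a toy one-dimensional example; this is a sketch of my own, not anything from the thread. A Gaussian wave packet’s squared amplitude integrates to 1, which is precisely what lets it be read as a probability amplitude over positions:

```python
import math

def psi_squared(x, sigma=1.0):
    # |psi(x)|^2 for a Gaussian wave packet centred at x = 0
    return math.exp(-x * x / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Plain Riemann sum over [-10, 10]; no physics library assumed.
dx = 0.001
total = sum(psi_squared(-10 + i * dx) * dx for i in range(20000))
print(round(total, 4))   # approximately 1.0: the amplitude is normalised
```

Nothing hidden there, as John says: it is an ordinary function on ordinary numbers, used to describe a physical system.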

  47. John Davey says:

    Arnold

    “5. Therefore, both an activated retinoid space and the subjective experience that it *constitutes* exist as two aspects of a common physical reality.
    6. *Descriptions* of #2 (physical) and #4 (subjective) exist in separate *epistemic* domains.
    7. *Descriptions* of #2 and #4 are physical and intersubjectively available.
    This makes sense to me as a neuroscientist. Is it a philosophical puzzlement?”

    I think you’ve pointed out Arnold that the mind-body problem isn’t a problem at all, and it takes a great deal of education of the wrong type to think that it is.

    I think it is as simple as you make out – mental phenomena have an objective natural existence and a subjective character.

  48. John Davey says:

    “So we agree that consciousness is not caused or produced as a separate effect, for instance the effect of having a perspective on a brain state. But if consciousness is the same thing as (not distinct from) brain states, then it would be publicly observable, but of course it isn’t.”

    Atoms are not “publicly observable”, but they exist don’t they ? There is a theory of atoms so it is possible to look at them – albeit indirectly (based upon a bed of theory that atoms exist to start with). When there is a theory of consciousness there is no reason why it too shouldn’t be measurable. Anaesthetists measure consciousness every day, albeit somewhat crudely.

  49. Roy Niles says:

    John Davey asks, re my purportedly non physical wave functions:
    “Who ? What else are they ? And if not, why are physicists studying them ?”

    Who? Lawrence Krauss for one, but you’ll have to ask him why he studies them. Otherwise I’d say that’s a dumb question to ask any of them.

    What else are they? Some would say they’re probability amplitudes in quantum mechanics, and then some would not say that at all. They’d say that actually wave functions are sometimes used to simply describe how physical forms must follow their functions. But in your case, of course, they’re used to show how these so-called “mathematical” functions must supposedly follow forms.

    What is a mathematical function anyway? Something that continuously proves itself?
    With ironic humour then? Nothing hidden there? Indeed.

  50. Roy Niles says:

    By the way, Krauss also wrote that the laws of physics existed in the proto-universe before the big bang produced the physical elements of the universe from nothing (or a something that was nothing physical, in any case). So, in addition to the wave function or functions that existed as prior nothings, the laws of physics were also nothings as far as the physical properties of the universe were concerned.
    So I suppose John Davey will tell us that these laws are also physical entities, even though they can supposedly exist as nothing outside of the forms and forces that they regulate. Well yes, Krauss also didn’t say that physical forces outside of the elements they service are nothing, but if wave functions were the genesis of those lawful forces, I must caution Davey that Krauss might well have.

  51. Kar Lee says:

    John [48],
    “Atoms are not “publicly observable”, but they exist don’t they ?”
    Not to take sides in this debate, but John, have you checked out the 1989 Nobel Prize for physics?

    A single atom is *publicly observable* in an ultra-cold atom trap, and it appears as a dot of light. I cannot claim to have seen it myself, but the experiments were performed one floor below where I was.

    But I take your point. Quarks are not “publicly observable” because they are “confined” (confined as a technical term), and yet they are physical, well, kind of.

    So, this is a recurring theme: what is physical?

    Happy New Year everybody!

  52. Vicente says:

    Happy New Year Kar Lee et al.

    Kar Lee, physical is what can be treated physically, in theoretical and experimental terms (i.e. it is part of a physical model of the Universe or of one of the subsystems in it). Maybe.

    Arnold, regarding your question about magnetic fields and retinoid spaces in the previous post. A magnetic field is physical for the reason above; the neural retinoid system is physical for the same reason; consciousness (the whole of the phenomenon) is not, for the moment, for the same reason. With a magnetic field I can “explain” (to some broad extent) the repulsion between magnets, or the propagation of EM waves, or the interaction between the retinoid system and the MEG SQUID probes. With the retinoid system I cannot explain the “feeling” of empty space, and with the visual cortex I cannot explain the red color. I have been trying to find an original answer for us, to avoid revisiting known ground and commonplaces, but I have failed, sorry.

    Anyway, as has been said many times already, once you find an answer it is just a matter of digging a bit deeper, and that’s it: other questions remain unanswered.

  53. Arnold Trehub says:

    Vicente: “With the retinoid system I cannot explain the “feeling” of empty space, and with the visual cortex I cannot explain the red color.”

    But the point I want to make is that with the retinoid model you *can* explain the feeling of an empty space and a red color. Whether or not we should *believe* that the retinoid model explains consciousness/feeling (within scientific norms) will depend on how well it is able to predict objective measures of relevant conscious phenomena. For example, if you look at Fig. 3 here: http://theassc.org/documents/where_am_i_redux, the feeling of empty space between the central foreground pattern and the background pattern is explained by the Z-plane structure of retinoid space and the excitatory priming properties of the heuristic self-locus (I!*).

  54. Tom Clark says:

    “But the point I want to make is that with the retinoid model you *can* explain the feeling of an empty space and a red color. Whether or not we should *believe* that the retinoid model explains consciousness/feeling (within scientific norms) will depend on how well it is able to predict objective measures of relevant conscious phenomena.”

    But this is the difficulty noted before in this thread: there are no objective measures of the hard problem’s explanatory target – qualitative experiences, e.g., the subjective redness of red – since they are only experienced, not observed. On the other hand, structural relations reported within experience – shapes, lines, motion, etc. – are predictable since qualities drop out in their specification; they are essentially quantitative. Scientific explanatory norms require relating quantifiable observables to observables, so can’t get a purchase on subjective qualities, hence the hard problem.

  55. Richard J R Miles says:

    Categorizing, which is so necessary for both science and philosophy, can sometimes cause problems and restrictions to thought and explanation. It is not surprising that the problems caused by such disciplined categorisation continue to recur. An example is the 1st and 3rd P.O.V. Perhaps it is time to consider, as I try to, how these P.O.V.s are connected, just as white turns to black via grey and vice versa, or as all the colours of the spectrum blend, like a rainbow. Another example of this is your red or my red, and again, as Arnold pointed out, there exist ways of determining colour-blindness differences, e.g. red/brown, blue/green etc.
    Finding words is sometimes difficult for me, but that is the wonder of this forum, as others do not find such difficulty. I would like to thank Peter for his efforts, plus all who have considered my ramblings and contributed positively to this superb site, which, if nothing further were added, would still be a formidable site of reference from Peter and all his knowledgeable contributors.

  56. Arnold Trehub says:

    Tom: “On the other hand, structural relations reported within experience – shapes, lines, motion, etc. – are predictable since qualities drop out in their specification; they are essentially quantitative [1]. Scientific explanatory norms require relating quantifiable observables to observables, so can’t get a purchase on subjective qualities [2], hence the hard problem.”

    1. I don’t agree that the subjective qualities of shape “drop out” when they are specified. In the SMTT experiments, the subjective qualities of a changing triangular shape, egocentric location, and lateral motion are all part of the subjects’ experience as well as being quantifiable qualia for the experimenter (observables).

    2. Consider the following thought experiment:

    a. You are seated in front of a projection screen with three graduated manual controllers, each controlling the intensity of one of three different light projectors, one projecting blue, one green, and one red, overlapping on the screen.

    b. You are given a colored chip, one at a time, and instructed to adjust the intensity of the colored light from each projector so that the final color on the screen matches the color of your current chip sample.

    c. After two chips have been presented, this is the observed result:

    Chip #1: Blue = 2, Green = 0, Red = 8.

    Chip #2: Blue = 5, Green = 4, Red = 1.

    I take these results as providing an observable quantitative measure of your subjective experience of the color of the two chip samples. On what grounds would you reject this conclusion?
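Arnold’s protocol does yield numbers one can compute with. As a minimal sketch (my own illustration, using the settings from the thought experiment), the three projector intensities per chip can be treated as coordinates, so matches from different chips, or different subjects, can be compared quantitatively:

```python
import math

def match_distance(a, b):
    """Euclidean distance between two sets of projector settings,
    treating the three intensities as coordinates."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

# The numbers come from the thought experiment in comment 56.
chip1 = {"blue": 2, "green": 0, "red": 8}
chip2 = {"blue": 5, "green": 4, "red": 1}

# The two chips yield clearly different settings, so the procedure
# produces an observable, quantitative record of matching behaviour.
print(round(match_distance(chip1, chip2), 3))
```

Whether such a record measures the *experience* of color, rather than the matching behaviour and the stimuli, is exactly what Tom disputes below.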

  57. Kar Lee says:

    Arnold,
    What do you make of the “inverted spectrum problem”, such as the one depicted here:
    http://en.wikipedia.org/wiki/Inverted_spectrum ?

  58. Arnold Trehub says:

    Kar, I agree with Hofstadter:

    *Douglas Hofstadter argues that the inverted spectrum argument entails a form of solipsism in which people can have no idea about what goes on in the minds of others …. He presents several variants to demonstrate the absurdity of this idea: the “inverted political spectrum”, in which one person’s concept of liberty is identical to another’s concept of imprisonment …*

    Remember that science is a pragmatic enterprise in which the status of a theory depends on the weight of relevant empirical evidence. I wouldn’t worry about an inverted phenomenal color spectrum.

  59. Eric Thomson says:

    Tom Clark wrote:
    This is the difficulty noted before in this thread: there are no objective measures of the hard problem’s explanatory target – qualitative experiences, e.g., the subjective redness of red – since they are only experienced, not observed.

    But the materialist will just say (pace my previous comment) that I would no more expect to know what it is like to see red when I study your red-seeing brain, than I would expect to photosynthesize when I study plants. Given that experiences are brain states of a certain type, I am not shocked that when I study those brain states in others I do not go into the state myself. Subjectivity itself is this brain state, so I don’t expect to have subjectivity unless I am in that state. What is the difficulty?

    Tom wrote:
    On the other hand, structural relations reported within experience – shapes, lines, motion, etc. – are predictable since qualities drop out in their specification; they are essentially quantitative. Scientific explanatory norms require relating quantifiable observables to observables, so can’t get a purchase on subjective qualities, hence the hard problem.

    Even assuming this is true, it doesn’t seem to specifically bring up a hard problem of consciousness, but a hard problem for any variable that is only defined implicitly within an equation. E.g., what is mass? Well it is F/a. OK, but that doesn’t tell me what mass “intrinsically” is.

    Similarly, psychology already includes experiences in its “equations” (e.g., frequency of shifting between two percepts during binocular rivalry). Such variables are typically not a stopping point where we pack our bags and declare there is a hard problem, but an invitation to do more research into their basis. As we have done more research, it seems experiences are turning out to be complex brain states (just as mass has turned out to have a somewhat strange description when you really push it).

    It seems we’d need an additional argument that conscious experiences are somehow special or uniquely impossible to dissect scientifically.

    I’m not talking conceptually here. I am not saying that my concept ‘experience of red’ is the same as my concept ‘brain state y,’ but that this is an identity we have to empirically discover. So in that sense there is a ‘hard problem’ that is a semantic issue. There is a conceptual gap between experience-talk and brain-talk, but not an ontological gap. I still have a semantic gap between ‘life’ and physico-chemical processes, but the ‘hard problem’ of life has all but disappeared.

  60. Eric Thomson says:

    Just to make more clear my response to the first bit above: if subjectivity is a brain process, then saying ‘there is no objective measure of subjectivity’ would be false. Neuroscience provides the objective measure, but it is limited in that it doesn’t make your brain go into that state (i.e., a brain’s states aren’t replicated in its students). So if to ‘measure’ a subjective experience is to experience it, Tom you would be right. But I would submit that having the experience isn’t really to measure a thing.

  61. Vicente says:

    Maybe we can present the inverted spectrum problem in black and white, or grey scale. There are people who suffer from this condition, and I believe there are animals with B/W vision. Now take that case, and swap negative and positive.
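Vicente’s black-and-white variant is easy to make concrete. As a sketch (my own illustration, not from the thread): on an 8-bit grey scale, swapping “negative and positive” is just the mapping v → 255 − v, and applying it twice restores the original:

```python
def invert(pixels):
    # Swap black and white: each grey value v becomes 255 - v.
    return [255 - v for v in pixels]

strip = [0, 64, 128, 192, 255]   # black through white
inverted = invert(strip)
print(inverted)                  # -> [255, 191, 127, 63, 0]
assert invert(inverted) == strip # the swap is its own inverse
```

The philosophical question is whether the corresponding swap of *experiences* could occur without any change in the underlying brain processes, which is what Eric denies in the next comment.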

  62. Eric Thomson says:

    Vicente: while a conceptual possibility at this point in time, if we take seriously the possibility that such experiences are brain processes of a certain type, then they should not be invertible. E.g., if process X is experience A, and process Y is experience B, then you cannot simply swap A and B without swapping X and Y. Sure, conceptually you can imagine this, imagine experience A removed and stapled onto process Y, and B to process X, but this would be like imagining removing ‘life’ from my physico-chemical organization and stapling it to a tuba. It’s just not how it works even if at a conceptual level it is coherent for philosophers to imagine and make paychecks from.

  63. Tom Clark says:

    Arnold in 56:

    1. “I don’t agree that the subjective qualities of shape ‘drop out’ when they are specified. In the SMTT experiments, the subjective qualities of a changing triangular shape, egocentric location, and lateral motion are all part of the subjects’ experience as well as being quantifiable qualia for the experimenter (observables).”

    We disagree over what counts as a quality or qualia. Experiences of lines, shapes and motion are defined by contrasts (e.g., a red line on a white background), so don’t depend on particular colors and are amenable to description by adverting to their features (length, orientation, angles, speed, etc.). Whereas experiences like the sensation of red, sweetness, pain, etc. involve singular phenomenal feels (what are standardly called qualia) that are ineffable: we can’t describe what they are like by pointing to any further features since there are none. It’s explaining these that constitutes the hard problem.

    2. “Consider the following thought experiment…”

    You suggested this experiment in 33 above:

    “…it seems to me that color-matching and primary-color mixing experiments could provide objective referents by which to judge the success of predictions about color qualities that are based on the retinoid model.”

    And I replied as follows, and it applies equally to the thought experiment in 56 [text in brackets refers to that]:

    “What’s needed is an objective measure of the *experience* of color which is exactly what can’t be had since experience is private and unobservable. There’s nothing out in the world by which we could judge the success of a prediction that my experience of red is like *this*. The objective referents you’re suggesting, [a chip sample and color matched screen], are physical objects with various light reflectances that participate in producing (waking) experience, but they can’t tell us anything about my experienced quality of [the matched colors], or yours. We can see the [chip sample, color matched screen], but not our experiences.”

    In your 56 experiment, what’s being measured isn’t our experience, but the characteristics of the external physical objects that participate in generating our experience. And you agree, saying in 37 “I certainly agree that we can’t measure subjective experience as such.” That I can match my experience of the screen color with my experience of the chip color doesn’t give information about what those (matched) experiences are like. It gives information about the light reflectances coming off the chip.

  64. Arnold Trehub says:

    Tom: “We disagree over what counts as a quality or qualia.”

    Apparently we do disagree on the definition of qualia. I believe that any selected aspect of subjective/phenomenal experience is a quale in that it is always *like something*; i.e. it bears a systematic relationship to something that has features that are similar or analogous to the mental/brain state.

    From Wikipedia:

    *There are many definitions of qualia, which have changed over time. One of the simpler, broader definitions is “The ‘what it is like’ character of mental states.”*

    Tom: “Whereas experiences like the sensation of red, sweetness, pain, etc. involve singular phenomenal feels (what are standardly called qualia) that are *ineffable* [emphasis added by me] … ”

    Something “ineffable” is something that cannot be described in words. And I agree that what it feels like to perceive red (or any other color) is poorly described in words. But depictions or exemplars of qualia are not like words; they are overt, intersubjectively accessible images that bear some kind of similarity relationship to the qualia being communicated. Exemplars of qualia can be objects of study and can contribute to the scientific explanation of subjective qualities.

    It seems to me that if our private qualitative experiences were truly ineffable and beyond intersubjective communication by overt exemplars, human society could not exist as we know it.

  65. Vicente says:

    Eric,

    It is clear, and heavy evidence supports it, that there is a relation between brain processes or states and those experiences, i.e. NCCs.

    Now, how does this scheme work? I bet on dualism as a first approach: dualism understood as the need for an additional component of reality, not yet identified or formally described, in order to understand consciousness. Once that new element has been incorporated into our models of reality, dualism will no longer make sense, as it does not in any global or total model of reality, whatever that is.

    I will try to make a simple comparison for the inverted spectrum problem. If you have a computer connected to a screen by three RGB cables, the images on the screen are the result of the processes in the video card. You could then plug the cables into different sockets, mismatching the output for the same video-card processes. The question for this comparison is: if the video card corresponds to the visual cortex, what are the cables and the screen? And, most importantly, who is watching the screen? Of course, I have no answers, not even a weak hypothesis.
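The cable-swapping analogy can be sketched in a few lines (a toy illustration of my own; the names and values are made up): the “video card” output is untouched, only the routing of the channels changes.

```python
def miswire(pixel, order=("g", "b", "r")):
    """Route each output socket to a different cable.

    `pixel` is the (r, g, b) value produced by the video card;
    `order` says which cable each socket ends up carrying."""
    r, g, b = pixel
    channels = {"r": r, "g": g, "b": b}
    return tuple(channels[c] for c in order)

# A mostly-red pixel comes out mostly-blue on the miswired screen.
print(miswire((200, 50, 10)))   # -> (50, 10, 200)
```

The analogy’s open questions remain open: nothing in the code says what, in the brain, plays the role of the cables, the screen, or the viewer.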

    I don’t buy dual-aspect monism. It is like a Victorian euphemism – “…as hard-core scientists we would be ashamed and embarrassed to speak about dualism…” – so let us come out with this dual-aspect monism pantomime, which means nothing and explains nothing.

    N.B. This is a general comment, with no personal reference intended.

  66. Charles Wolverton says:

    Tom -

    I have long felt that we are largely in agreement on many relevant issues, but I continue to have difficulties interpreting some of your statements because we use different vocabularies. So, I’d like to try to translate “subjective redness of red” into terms that I prefer and see if the result captures some or all of the intent of that phrase.

    For me, to “see red” is for a particular pattern of neural activity (or brain state, if you will) to occur directly consequent to sensory stimulation by light that has certain spectral characteristics. The only thing that makes this neural activity “red” for a subject is that the subject has learned to respond to that neural activity by signifying the word “red” (eg, by uttering, writing, or pointing at it). There may be concomitant neural activity that results from associations with things like related words (eg, “blood”, “flame”) or events in memory; call responses to this neural activity “E-phenomenal” (AKA “emotions” or “feelings”). Normally, there will also be concomitant neural activity that results in an attendant so-called “mental image”; call responses to this neural activity “V-phenomenal”. In general, the total response to all this neural activity will include behavioral components. I assume that the E- and V-phenomenal responses are part or all of what you mean by the “subjective redness of red”.

    The neural activity per se is 3rd person observable, and attempts to find causal chains between it and components of a subject’s total response can be fairly described as taking place in an “explanatory space” – although I prefer to say that they employ a particular vocabulary in framing explanations, in particular the composite vocabulary of the physical sciences. The problem is in dealing with the fact that the subject “experiences” the whole stimulus-response process, and that “experiencing” by its nature isn’t directly 3rd party accessible. But even though I can’t directly access your experience of “seeing red”, I can understand almost any common use you make of the word “red” because we share the concept of “red” in its role as a member of Eric’s “experience-talk” vocabulary. But that vocabulary isn’t being used in an “explanatory space” – it’s not appropriate for scientific explanations. Nonetheless, as suggested by Donald Davidson, the fact that an experience is private doesn’t preclude a useful interpersonal discussion of it by the subject and others, since being able to interpret a language speaker requires that the interpreter be mostly right in assuming that what’s going on in the speaker’s mind is much the same as what’s going on in the interpreter’s mind. So, in that limited sense I can indirectly access your experience.

    In short, I agree with the thrust of Eric’s comment 59, especially the last paragraph, which is essentially a point I’ve raised before: discussing the mental and discussing the physical each requires a “vocabulary” tailored to purposes specific to its topic and therefore not necessarily reducible to the other. But here, irreducibility is just irreducibility – there is nothing exciting to be inferred from it. I also agree with the thrust of his comment 36, viz, it isn’t clear why such great importance is being attached to the privacy of “experience”. (Which isn’t to say that I don’t think that explaining the mental image is indeed a very “hard problem”.)

    Arnold -

    I agree with Tom that it seems that your three-color experiment merely identifies a feature of the light incident on the retina (namely, that it’s a metameric match to the apparent color of the chip) but says nothing about the phenomenal experience consequent to the incident light.
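The “metameric match” point can be made concrete with a toy calculation (my own sketch; the sensitivities and spectra are invented purely so the arithmetic is easy to check): two physically different spectra can produce identical cone responses, so a successful match constrains the incident light, not the experience.

```python
def cone_response(sensitivities, spectrum):
    # Each cone's response is the dot product of its sensitivity
    # curve with the light's spectral power distribution.
    return tuple(sum(s * p for s, p in zip(row, spectrum))
                 for row in sensitivities)

S = [[1, 0, 0, 1],   # toy "long" cone
     [0, 1, 0, 1],   # toy "medium" cone
     [0, 0, 1, 1]]   # toy "short" cone

light_a = [2, 3, 4, 1]
light_b = [1, 2, 3, 2]   # a physically different spectrum

assert light_a != light_b
assert cone_response(S, light_a) == cone_response(S, light_b)  # metamers
print(cone_response(S, light_a))   # -> (3, 4, 5)
```

Because only three numbers survive the projection, infinitely many spectra collapse onto the same match, which is why the matching data say nothing about the phenomenal experience consequent to the light.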

    One can, of course, define a vague concept like qualia however one likes, but my impression is that most concepts of qualia include some version of irreducibility. So, suppose that a subject is viewing a square and asked to say something about its color per se. The response can be no more than the utterance of a color word that the subject has learned to associate with the relevant neural activity – no elaboration is possible. In that sense the verbal response to viewing a color can be said to be “descriptively irreducible” [but see * below]. However, if asked to say something about the shape per se, although the uttered response could be simply “square”, it could instead be something like “equilateral, quadrilateral, perpendicular adjacent sides”, and in that sense the verbal response to viewing a square can be said to be descriptively reducible.

    * In my view, these utterances are not really “descriptions”, in particular not descriptions of the mental image; they are just learned responses to the neural activity consequent to the occurrence of certain visual sensory stimulation (and the experimenter’s directive, ie, aural stimulation). One’s mental image attendant to viewing a square may be square-like or not, or even non-existent (as in blindsight). In any case, I agree with Tom that it’s causally inert (and I additionally speculate that causality may even go from the learned verbal responses to the mental image).

  67. Arnold Trehub says:

    Charles: “So, suppose that a subject is viewing a square and asked to say something about its color per se. The response can be no more than the utterance of a color word that the subject has learned to associate with the relevant neural activity – no elaboration is possible.”

    How about “red with a slight tinge of purple”, or, in the case of achromatically induced colors (e.g., spinning radial lines), “a faint blue near a faint red”, or “bright red here; dull red there”, etc.?

    Charles: “One’s mental image attendant to viewing a square may be square-like or not, or even non-existent (as in blindsight). In any case, I agree with Tom that it’s causally inert (and additionally speculate that causality may even go from the learned verbal responses to the mental image).”

    If mental images were to have no causal efficacy, what do you think would guide the painting of an artist? As for causality going from a learned verbal event to a mental image, I surely agree. See p. 325, “Interaction between analogical and symbol/token representations”, here:
    http://people.umass.edu/trehub/YCCOG828%20copy.pdf

  68. john davey says:

    “What else are they? Some would say they’re probability amplitudes in quantum mechanics, and then some would not say that at all.”

    As I expected, you didn’t have the remotest idea what quantum wave equations are.

    “They’d say that actually wave functions are sometimes used to simply describe how physical forms must follow their functions.”

    Reverting to the original question, the suggestion on your part was that wave equations were “not physical”. So the question still stands – if wave equations are “not physical”, what are they ?

    “But in your case, of course, they’re used to show how these so-called “mathematical” functions must supposedly follow forms.”

    Why “so-called” ? What else would they be described as ? What is the hidden agenda with these mathematical objects ? Explain.

    “What is a mathematical function anyway? Something that continuously proves itself?”

    Ignorance of a subject certainly doesn’t seem to bar your expert comment upon it.
    If you don’t know what mathematical functions and quantum mechanics are, why comment ?

  69. john davey says:

    “By the way, Krauss also wrote that the laws of physics existed in the proto universe before the big bang produced the physical elements of the universe from nothing”

    The big bang did not emerge from nothing. Empty space is not nothing, it is something. In a kind of standard vernacular it is nothing, but empty space is not a nullity: it has properties. It is physical without being material. In fact I believe that is the basis of Mr Krauss’s advocacy of a certain unproven hypothesis of the universe’s genesis – that vacuums can have instabilities.

    “So, in addition to the wave function or functions that existed as prior nothings, the laws of physics were also nothings as far as the physical properties of the universe were concerned.”

    Explain how the laws of physics do not relate to physical entities such as space, time or mass-energy. You’ll struggle.

  70. john davey says:

    Kar Lee

    “A single atom is *publicly observable* in an ultra cold atom trap and it appears as a dot of light. ”

    You are proving my point: an “ultra-cold” atom trap is a highly derived notion built upon layer upon layer of mathematical theory. The “point of light” is an experimental outcome: exciting, and we must believe the dot of light corresponds to one atom, but we still aren’t “seeing an atom”. We’re seeing the outcome of a very complicated experimental process.

  71. 71. Roy Niles says:

    John Davey asks, “If you don’t know what mathematical functions and quantum mechanics are, why comment ?”
    Because of course I do know, and your evasive answers show clearly that you don’t. There are of course no mathematical functions that function outside of the realm of mathematics. There are mathematically measurable functions outside of that realm of course, with such measurements always inexact and in and of themselves inexplanatory where the otherwise purposefully strategic functions that we hope to measure are concerned.
    Davey also comments: “empty space is not a nullity: it has properties. It is physical without being material.”
    Krauss would beg to differ that the physical laws I mentioned (the essence of which Davey is ignoring) are neither physical nor material. So as usual Davey has irrelevantly missed the point.
    But never mind, he goes on to disingenuously ask: “Explain how the laws of physics do not relate to physical entities such as space, time or mass-energy.”
    But John, that’s a deliberate perversion of the question I asked you. Which was for you to at least try to explain why Krauss has said these laws were the non physical something that existed before the big bang, and in what he saw as a space that was something with no physical substance. (But then I’ll admit you’d have had to understand what you’ve read of Krauss to know that.)
    The mystery then being how and why the immaterial laws came before the material they were meant to regulate. (And again, I know that’s a question that you can’t get your mind around, but in a way that’s why I had to ask it.)

  72. 72. Charles Wolverton says:

    Arnold -

    “Descriptively irreducible” means that the “description” (in the sense I laid out) of the neural activity can’t be reduced to descriptions of components. The utterance “a faint blue near a faint red” clearly doesn’t satisfy that requirement.

    “Red with a slight tinge of purple” confuses description with identification. “The color of light with the spectral power density function illustrated on the sheet of grey paper in the yellow folder in the dark green file cabinet” is descriptively irreducible despite being complex and using multiple color words.

    The optical illusion of the spinning achromatic line plays on the ambiguity of the word “color”. Is color “on” the viewed surface? Or “in” the light – or “in” the mind? As I’m using “color”, it’s “in” the brain – the neural activity that evokes a color word (or other identifier). It doesn’t matter how that neural activity is produced, so in a sense the spinning line is just another way of creating a metameric match.

    Nice tries, but I think they all fail.

    I’m pleased to see that the idea of T’ -> S’ isn’t completely off the wall – or at least if it is, I have company in my delusional thinking!!

    Because considering the creative activity of an artist involves too many distracting but irrelevant considerations, let’s consider something simpler like catching a ball. It seems pretty clear that in principle a robot could be (probably has been) programmed to do that – or something equivalent – using only the digital input from a camera-like visual system, i.e., with no internal “mental image”. That suggests that the mental image can at most enhance what can be done without it. But once the brain has the neural activity consequent to the visual sensory stimulation, it seems that in principle the brain, using only that input “data”, should be able to effect any responsive action that it could by first creating a mental image and using that image to effect the action.

    It certainly seems that from an evolutionary perspective the mental image should have some such benefits, but so far I’ve been unable to convince myself that it does. Although counterintuitive, the conclusion that it doesn’t may have information-theoretic support. The “data processing theorem” states that processing received data can’t enhance its information content. I.e., processing the neural activity to produce a mental image can only decrease the information available for decision making.
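    Charles’s appeal to the data processing theorem can be illustrated with a small numerical sketch: for a Markov chain X → Y → Z, where Z is computed from Y alone, I(X;Z) ≤ I(X;Y). The binary source and the particular flip probabilities below are arbitrary illustrative choices, not anything from the discussion itself.

    ```python
    # Sketch of the data processing inequality: X -> Y -> Z is a Markov
    # chain (Z depends only on Y), so I(X;Z) <= I(X;Y) -- post-processing
    # cannot add information about the source.
    import itertools
    import math

    def mutual_information(joint):
        """I(A;B) in bits, given a dict {(a, b): probability}."""
        pa, pb = {}, {}
        for (a, b), p in joint.items():
            pa[a] = pa.get(a, 0.0) + p
            pb[b] = pb.get(b, 0.0) + p
        return sum(p * math.log2(p / (pa[a] * pb[b]))
                   for (a, b), p in joint.items() if p > 0)

    # X is a fair bit; Y is X through a binary symmetric channel with
    # flip probability 0.1; Z is a lossy reprocessing of Y alone (0.2).
    px = {0: 0.5, 1: 0.5}
    def flip(eps):
        return {(0, 0): 1 - eps, (0, 1): eps, (1, 0): eps, (1, 1): 1 - eps}
    ch_xy, ch_yz = flip(0.1), flip(0.2)

    joint_xy = {(x, y): px[x] * ch_xy[(x, y)]
                for x, y in itertools.product((0, 1), repeat=2)}
    joint_xz = {}
    for x, y, z in itertools.product((0, 1), repeat=3):
        p = px[x] * ch_xy[(x, y)] * ch_yz[(y, z)]
        joint_xz[(x, z)] = joint_xz.get((x, z), 0.0) + p

    i_xy = mutual_information(joint_xy)
    i_xz = mutual_information(joint_xz)
    print(f"I(X;Y) = {i_xy:.3f} bits, I(X;Z) = {i_xz:.3f} bits")
    assert i_xz <= i_xy  # processing Y cannot increase information about X
    ```

    Here the extra processing stage strictly loses information (I(X;Y) ≈ 0.53 bits vs. I(X;Z) ≈ 0.17 bits), which is the formal point behind Charles’s remark: a mental image computed downstream of the sensory neural activity can’t contain more information about the world than that activity already does.
    
    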

  73. 73. Arnold Trehub says:

    Charles: “I agree with Tom that it seems that your three-color experiment merely identifies a feature of the light incident on the retina (namely, that it’s a metameric match to the apparent color of the chip) but says nothing about the phenomenal experience consequent to the incident light.”

    What you call “the apparent color of the chip” is in fact the unobservable phenomenal experience of the subject. The RGB components that the subject creates are in fact a set of objective measures of an overt exemplar (on the projection screen) of the subject’s private color experience. When words fail us, exemplars can do the job of communicating features of qualia. As I stated earlier, if our private qualitative experiences were truly ineffable and beyond intersubjective communication by overt exemplars, human society could not exist as we know it.

    Charles: “‘Descriptively irreducible’ means that the ‘description’ (in the sense I laid out) of the neural activity can’t be reduced to descriptions of components.”

    If I understand you correctly, you have formulated *description* in a way to ensure that there cannot be any neuronal explanation of qualitative experience.

    Charles: “Because considering the creative activity of an artist involves too many distracting but irrelevant considerations, let’s consider something simpler like catching a ball.”

    This is amusing. In a forum like this you can’t just answer your own questions. The creative activity of an artist is surely a relevant example to consider in discussing color qualia.

    Charles: “It certainly seems that from an evolutionary perspective the mental image should have some such benefits, but so far I’ve been unable to convince myself that it does.”

    My claim is that without a mental image (an activated retinoid space) you could not possibly have a subjective experience of the world around you. For example, see “Evolution’s Gift: Subjectivity and the Phenomenal World”, here:
    http://evans-experientialism.freewebspace.com/trehub01.htm

  74. 74. Tom Clark says:

    Charles in 72:

    “It certainly seems that from an evolutionary perspective the mental image should have some such benefits, but so far I’ve been unable to convince myself that it does.”

    Agreed. If mental images are construed as being subjective accompaniments to neural activity, but not neural activity itself, then there’s no reason to suppose they confer benefits since there’s no account on offer about how they could add to what neural activity accomplishes in behavior control (the problem of dualist interactionism). On the other hand, if mental images/subjective accompaniments *are* identical with or constituted by neural activity, then they don’t add to what that activity accomplishes. There’s no special role for consciousness above and beyond the physicalist neuro-biological functional story that natural selection needs to or could explain.

  75. 75. Eric Thomson says:

    But if it is identical to the neural, there is no additional property of consciousness that we would even want to give a “special” different role, above the neural.

    Seems to be a lot of cryptodualism in this thread. Or maybe not very crypto?

  76. 76. Arnold Trehub says:

    John, I’d be interested in your thoughts about my post in #81, here: http://www.consciousentities.com/?p=1016.

  77. 77. Tom Clark says:

    Eric in 75:

    “But if it is identical to the neural, there is no additional property of consciousness that we would even want to give a ‘special’ different role, above the neural.”

    Agreed. The question is whether it *is* identical, which seems problematic given that experiences can’t be observed (are private, exist for the conscious system only) but neural states can (are public, exist intersubjectively).

  78. 78. Arnold Trehub says:

    Tom, what would you say to the claim that the world itself is *never* directly observed, that it is only our direct/immediate global *phenomenal experience* of the world (retinoid consciousness?) that is decomposed for observation by our brain’s neuronal mechanisms?

  79. 79. Tom Clark says:

    Arnold, I’d say that we observe the world (not experience) using our sensory-perceptual system. This is direct as observation can get, since observation necessarily involves the modeling/mapping of one part of the world by another using representational elements that co-vary with what is represented. In our case the brain models the world outside the head, which results (somehow) in conscious experience. But we don’t observe that experience, rather we have or undergo it and it’s the qualitative terms in which the world is presented to us as subjects.

  80. 80. Eric Thomson says:

    Tom, yes, I addressed that a few times above. I see no good reason to opt for dualism.

  81. 81. Arnold Trehub says:

    Tom: “Arnold, I’d say that we observe the world (not experience) using our sensory-perceptual system.”

    Tom, do you believe that *sensing* and *perceiving* are identical brain processes? If not, what is the difference between sensing and perceiving?

  82. 82. Vicente says:

    When was the precise evolutionary instant in which phenomenal consciousness happened to appear?

    Was it a certain mutation that made that particular offspring have phenomenal experiences? What did that particular mutation cause?

    At what point could this happen?

  83. 83. Arnold Trehub says:

    Vicente: “When was the precise evolutionary instant in which phenomenal consciousness happened to appear?”

    I don’t think we can identify “the precise evolutionary instant” when consciousness came into existence. But I would say that it happened with the evolutionary emergence of a particular kind of brain mechanism that provided a creature with an internal representation of its surrounding space including a fixed locus of its perspectival origin; i.e. a primitive retinoid space. As a guess, I would be comfortable in saying that prehistoric bird-like creatures that navigated and hunted in flight would need to have a retinoid system and should therefore be considered to be conscious creatures.

  84. 84. john davey says:

    Roy

    ” There are mathematically measurable functions outside of that realm of course,”

    Sorry Roy, but this is gibberish.

    “Krauss would beg to differ that the physical laws I mentioned (the essence of which Davey is ignoring) are neither physical nor material. ”

    No he wouldn’t.

    The “laws of physics” are mathematical physics. Expressions in algebra equating two quantities relating time, space and matter (or matter-energy) – as well as a force field or two. Tell me how that does not constitute a physical description ?

    Lawrence Krauss would not suggest that the laws of physics are not mathematical physics. That is your imagination/lack of understanding of what Lawrence Krauss says. Lawrence Krauss discusses the idea of “nothing” in some depth, and suggests that a universe could have cropped up from empty space and even no space at all – but still requiring gravity. That’s still not ‘nothing’.

    But that doesn’t mean that physical laws wouldn’t be about physical things. They couldn’t be about anything else. The laws of quantum mechanics are given no genesis in Krauss’s terms – they are taken as a given – but they are most definitely relations between time, space and matter. Nothing else.

    ” Which was for you to at least try to explain why Krauss has said these laws were the non physical something and in what he saw as a space that was something with no physical substance.”

    Roy, I just said to you that Lawrence Krauss does NOT say that the big bang came from nothing. If you don’t believe me, watch any one of the dozens of videos available from him. Empty space is NOT A NULLITY. It has properties. You also have to remember that the genesis arguments he gives are based upon a particular, as yet unproven but feasible, theory of quantum gravity.

    This theory states that empty space is full of energy (not nothing) and in certain circumstances this energy converts to matter in the shape of basic particles. Matter and space are interchangeable, because matter and energy are interchangeable.

    The laws of physics are still physical laws even before matter exists and even before space exists. It’s the nature of physics to be about the physical, which is, within the scope of physics, more than the material.

  85. 85. Charles Wolverton says:

    Tom -

    “If mental images are construed as being subjective accompaniments to neural activity, …”

    As I’ve indicated before, I don’t much care for “subjective” because it seems to have multiple related but distinguishable meanings:

    1, private, in the sense of ownership. No one else can “have” my experience of THIS pain.

    2. in Davidson’s view of scheme-content, the scheme component in accordance with which one divides up the flow of sensory input into objects, events, etc. This is not entirely private in that others may employ a similar scheme – and in the case of language users, must employ similar schemes in order to communicate.

    3. “created” in the individual brain (possibly by something like Arnold’s RS) but having no independent existence

    In which, if any, of these senses were you using the word in the quoted phrase?

    FWIW, in the case of a mental image, I’m inclined toward a definite 1, a probable 3, and that its content is a consequence of 2.

  86. 86. Arnold Trehub says:

    Charles: “2. in Davidson’s view of scheme-content, the scheme component in accordance with which one divides up the flow of sensory input into objects, events, etc.”

    But it is important to understand that our brain does not *directly* divide *sensory* input into the objects, events, etc. of our conscious experience. The output of our separate sensory modalities must first be represented (projected and properly bound) within our phenomenal world (retinoid space) before this global neuronal activity is parsed, filtered, and recognized in terms of *objects* and *events*, etc., occurring somewhere with respect to ourself. The latter operations are performed by unconscious and pre-conscious brain mechanisms which then back-project into retinoid space (via recurrent axonal loops) to update our phenomenal world with inner speech, images, and other kinds of conscious experience. The biophysical properties of retinoid space explain why we have illusory experiences such as the moon illusion. The putative brain mechanisms that do this are described in *The Cognitive Brain*.

  87. 87. Roy Niles says:

    John Davey with his own gibberish:
    “The “laws of physics” are mathematical physics. Expressions in algebra equating two quantities relating time, space and matter (or matter-energy) – as well as a force field or two. Tell me how that does not constitute a physical description ?”
    The laws of physics themselves are regulatory, not mathematically arranged or mathematically inspired. We humans, as philosophers, invented mathematics to measure what we saw initially here as nature’s laws, but we invented logic first to try to understand how such powerful regulatory systems came to exist and evolve in the first place.
    Try to describe the regulatory aspects of these apparent laws as physical substances. You can’t. And more importantly, the actual scientists involved with physics know they can’t.

    And as usual you’ve slipped in your sly deceptive tactics to make it seem that my initial positions were actually yours:
    “Roy, I just said to you that Lawrence Krauss does NOT say that the big bang came from nothing.”
    And where did I say that he did? What I said was that Krauss said that the universe’s space before the big bang was something with no physical substance. That was his explanation on many forums as to why the book had referred to a universe from nothing. (And otherwise what else do you contend that he meant by “nothing?”)
    I’ve discussed this subject extensively on other forums, and there’s general agreement overall with what I’ve tried to point out to you.
    I’d suggest that it’s you who should go back and reexamine the book, videos, etc., if I didn’t also see that as being, for you, a hopeless task.

    You quoted me as follows and called it gibberish:
    ”There are mathematically measurable functions outside of that realm of course,”
    But of course you had to take that section out of the context of the rest of the comment to do so – another of your favorite sneaky tactics.

    Here’s what I actually wrote that includes its context: “There are of course no mathematical functions that function outside of the realm of mathematics. There are mathematically measurable functions outside of that realm of course, with such measurements always inexact -”

    Although I’ll grant the possibility that you actually saw all of this as gibberish, and didn’t know there was a different if not smaller role for mathematics than you’d always thought there was. Another example of the ignorance that doesn’t know that the “learning” that reduces ignorance may require a readjustment of one’s even more ignorant earlier beliefs.

  88. 88. Charles Wolverton says:

    Understood. I can’t address the implementation (being incompetent to do so), but for the purposes of my comment I don’t see that it matters.

    You presumably are familiar with Edelman’s Theory of Neural Group Selection. Any opinions? I assume your “recurrent axonal loops” and his “reentrant” connectivities are intended to capture the same architectural idea. (Although “reentrant” seems a bit misleading since as I recall, that word has a precise meaning in programming that doesn’t apply in this context.)

  89. 89. Vicente says:

    Arnold, that could work for organs and functions like wings and flying, where you could expect a gradual or staggered process… But phenomenal consciousness you either have or you don’t. So yes, we could think of a precise evolutionary moment at which phenomenal consciousness was “turned on”. I admit that there could be prior proto-neuro-organs, but at a certain point there was a change (morphological, anatomical, physiological, biochemical,…) that switched the machine on. When was that? And what were the critical changes (new features, tuned parameters) involved?

    The other possibility is that phenomenal consciousness is related to all sentient beings right from the beginning, and that neural complexity just serves for conscious content enhancement.

  90. 90. Richard J R Miles says:

    If you have an understanding of how awareness might have evolved, then it should not be much of a progression to see how self-awareness occurred. If self-awareness is linked with conscious memory of past experience, you have the advantage of phenomenal conscious planning, which has proved so advantageous for human evolution.

  91. 91. Vicente says:

    Ok Richard, but I don’t yet fulfill step one in your chain. Besides, I am struggling at this moment at a much lower level of reality description/understanding, conscious planning lies a long way ahead.

    Consider that for something to evolve, it has to come into existence in the first place.

  92. 92. Arnold Trehub says:

    Charles: “You presumably are familiar with Edelman’s Theory of Neural Group Selection. Any opinions? ”

    Yes. His theory does not explain consciousness because it does not account for subjectivity. Consider these shapes:

    http://people.umass.edu/trehub/Rotated%20table-2.pdf

    From an immediate subjective perspective, the two tables look distinctly different even though objective measurement shows that their dimensions are essentially equal. An explanatory theory of consciousness must be able to account for phenomena like this by some kind of credible biophysical mechanism. Neural Group Selection doesn’t do the job. The retinoid model of consciousness details a neuronal brain mechanism that can cause subjective experiences like this.
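    Arnold’s point that “objective measurement shows that their dimensions are essentially equal” can be checked with a small sketch. The parallelogram coordinates below are illustrative stand-ins, not taken from the linked figure: a table top drawn in perspective, and the same shape rotated, as in Shepard-style two-table illusions.

    ```python
    # Sketch: two "table tops" that are the same parallelogram in
    # different orientations have identical side lengths, even though
    # perspective cues make them look like tables of different shapes.
    import math

    def side_lengths(corners):
        """Sorted lengths of the sides of a quadrilateral (corners in order)."""
        out = []
        for i in range(len(corners)):
            (x1, y1), (x2, y2) = corners[i], corners[(i + 1) % len(corners)]
            out.append(math.hypot(x2 - x1, y2 - y1))
        return sorted(out)

    def rotate(corners, degrees):
        """Rotate points about the origin (a rigid motion: lengths preserved)."""
        t = math.radians(degrees)
        c, s = math.cos(t), math.sin(t)
        return [(x * c - y * s, x * s + y * c) for x, y in corners]

    # Illustrative "long narrow table top" drawn as a parallelogram.
    table_left = [(0, 0), (3.0, 1.2), (3.4, 0.8), (0.4, -0.4)]
    table_right = rotate(table_left, 90)  # same shape, different orientation

    # Objective measurement: the side lengths agree to rounding error.
    assert all(math.isclose(a, b, abs_tol=1e-9)
               for a, b in zip(side_lengths(table_left),
                               side_lengths(table_right)))
    print("side lengths:", [round(v, 3) for v in side_lengths(table_left)])
    ```

    The interest of the illusion is precisely the gap this exposes: the measured dimensions coincide, yet the immediate subjective appearance does not, which is the explanandum Arnold says a theory of consciousness must handle.
    
    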

  93. 93. Arnold Trehub says:

    Vicente: “The other possibility is that phenomenal consciousness is related to all sentient beings right from the beginning, and that neural complexity just serves for conscious content enhancement.”

    This would depend on your definition of *consciousness* and *sentient*. My working definition of consciousness posits a first-person perspective/subjectivity. If you think that “sentient” implies subjectivity, I would agree with you; otherwise I would disagree.

  94. 94. Vicente says:

    Arnold, yes, this time you caught me. Now I will have to appeal to the intellectual (non-spatial I!) definition of subjectivity. I wouldn’t be able to answer without getting myself entangled in all kinds of subjective considerations and experiences…

  95. 95. Charles Wolverton says:

    But Arnold, in your phrase “subjective experiences like this”, what is the “experience” and in what sense is it “subjective”? Is the experience the sensory input, the neural activity consequent to it, the formation of the mental image, the emotional reaction to either or both, other? And what does the word “subjective” contribute?

    Here’s my ad hoc, implementation-independent speculation on what may be going on in this illusion. As I understand it, the visual processing system can “identify” lines, by which I mean that in response to viewing a line drawing, the subject can produce utterances that could be used – if, for example, prompted by the test conductor – in describing the drawing. (Recall that by “describe” I mean simply “learned utterances in response to specific neural activity”.) Similarly for the angles between lines, for polygons, et al.

    Your example is basically just a complex line drawing, so a sufficiently geometry-literate subject should be able to accurately describe it “objectively”. But a subject who is additionally perspective-literate may quite naturally elaborate the basic verbal responses based on obvious clues that suggest perspective: the converging background lines and correspondingly foreshortened polygons, the opaque surface, the frame and legs (suggested by heavy lines). [See note below.] In any event, all of this taken together (AKA, integrated via “recurrent axonal loops” or “reentrant connections”) may result in a description something like “long narrow table on the left, approximately square and wider table on the right”. That something along those lines is going on can be confirmed by covering up the background and the four lines of the “tables” that are not pairwise equal in length. Seen out of context, the remaining two right angles might very well be described by a subject as “apparently having the same dimensions”, as suggested by my resulting mental image (recall my assumption of reverse causality: the latter is caused by the former).

    The illusion is created by misdirecting the straightforward “objective” description using perspective clues, making it an example of my “type-3 subjectivity”. But I see nothing added by so labeling it rather than simply trying to explain the phenomenon (and justify the explanation with experimental evidence, of course).

    I should add that by making this kind of argument I don’t mean to minimize your achievement in creating a credible model of what may be going on at the implementation level. In fact, I’ve found it useful at times to think in RS terms. But often discussions of these matters seem confusing (and confused) because of seemingly loose language, in particular mixing of vocabularies intended to serve different purposes (as addressed in a couple of earlier comments by others). In support of this position, I’ll note that in my regular but infrequent visits to this forum, I often find much the same debate going on among mostly the same parties using much the same (IMO) inappropriate mixture of vocabularies. Since at base there seems to be considerable agreement among the parties, the absence of convergence suggests that something is amiss. (None of which is intended to suggest that I have the “right” vocabulary; any that is precise, coherent, and consistent should be an improvement.)

    ======================
    Note: These clues seem like possible examples of what Edelman calls “values”, apparently rules that constrain which of the many possible responses to the sensory stimulation is selected – except that he describes them as inherited, and I’d be somewhat surprised were responsiveness to perspective clues inherited.

  96. 96. Arnold Trehub says:

    Charles: “But Arnold, in your phrase ‘subjective experiences like this’, what is the ‘experience’ [1] and in what sense is it ‘subjective’? Is the experience the sensory input, the neural activity consequent to it, the formation of the mental image, the emotional reaction to either or both, other? [2] And what does the word ‘subjective’ contribute? [3]”

    1. The subjective experience that I “pointed to” is the *conscious* experience that you have when you look at the drawings of the two tables. Every person to whom I have shown this (including those who are geometry illiterate) reports that the tables clearly appear to have a different shape.

    2. The experience is *subjective* in the sense that it is an experience of an object in one’s egocentric space, that is, in one’s brain representation of *something somewhere* in relation to the locus of a fixed perspectival origin (I!) within the neuronal space. This is retinoid space, and I have described its minimal structure and dynamics in several publications.

    3. The word “subjective” identifies the experience as a *conscious* brain experience to distinguish it from an unconscious or pre-conscious brain experience. We have vastly more non-subjective experiences than subjective experiences during the course of a day.

    One way to reduce loose language in this context is to have a detailed system of causal mechanisms to point to for linguistic referents.

    Charles: “The illusion is created by misdirecting the straightforward “objective” description using perspective clues, making it an example of my “type-3 subjectivity”. But I see nothing added by so labeling it rather than simply trying to explain the phenomenon (and justify the explanation with experimental evidence, of course).”

    I label the illusory experience “subjective” because it is caused by the particular kind of brain mechanism (the retinoid system) that realizes our subjectivity/consciousness. I fail to see the problem here. I might add that in the biological sciences, the question of implementation is extremely important.

  97. 97. Richard J R Miles says:

    Vicente, re 91. Consider…
    I do not think Peter or others would appreciate my ramblings on how life might have got started and evolved. Plus it would be an epistle before I got to awareness, and would only be my opinion.

  98. 98. Vicente says:

    Richard, I bet Craig Venter knows as much as we do about the origin of life. In my comment I was assuming that life was already there and evolving and then considering at what point phenomenal consciousness could have appeared. Probably an insect is somehow conscious, not so sure about bacteria or even jellyfish. Here is where Arnold pointed out that sentient beings require a proper definition in order to relate them to consciousness. Otherwise the idea is a tautology. But of course, how life started is an issue.

    I wouldn’t mind learning what your idea is about the origin of life: e.g. how did the first prokaryotic bacteria appear?

  99. 99. Arnold Trehub says:

    Tom #79: “In our case the brain models the world outside the head, which results (somehow) in conscious experience. *But we don’t observe that experience* [emphasis mine], rather we have or undergo it and it’s the qualitative terms in which the world is presented to us as subjects.”

    If I understand this statement correctly, you agree that our brain model of the world is the way that the physical world outside of our head is presented to us (our conscious experience). Yet you also say that we don’t observe our brain model of the world. This implies that we observe the unmediated physical world and seems to suggest that you endorse direct/naive realism. Do you?

  100. 100. Tom Clark says:

    Hi Arnold,

    The brain isn’t in a position to observe itself, so its behavior-controlling, neurally instantiated model of the world goes unobserved as well. Rather the process of observation (of the world) is how that model adjusts itself to the world via sensory input. Part of that model represents the organism itself and its relationship to the world, including the representationally recursive fact that it’s modeling the world, that the world appears to it by means of and in terms of a representational process.

    Some part of this neural activity has its experiential analog in that in consciousness we discover ourselves as a phenomenal subject centered in the world, looking out at it and being presented with it in terms of qualitative experience. So there’s a veridical conscious experience of presentation,** of observation, that corresponds to the fact that the model is adjusting itself to variation in the world, which is what observation consists in. But there’s no observation of experience going on, or of the model itself, http://www.naturalism.org/kto.htm

    Since the world as it appears to representational systems is always in terms that the system itself deploys, as shaped by its needs and limitations, I don’t see this as direct/naive realism. The world always and necessarily appears to cognitive systems in terms of a model, what I call epistemic perspectivalism, http://www.naturalism.org/appearance.htm#part1 (a cousin perhaps to Hawking and Mlodinow’s model-dependent realism, http://en.wikipedia.org/wiki/Model-dependent_realism ). But still, it’s the *world* that appears to the system via the model, not the model that appears.

    ** Metzinger calls this the PMIR, the phenomenal model of the intentionality relation, about which see http://phantomself.org/a-special-form-of-darkness-metzinger-on-subjectivity/

  101. 101. Richard J R Miles says:

    Vicente, although I would not want to belittle Craig Venter’s achievements, he has, like Edison, taken the credit for everything connected with his laboratories. I think there may have been an even simpler form of energy/life prior to the prokaryote; after all, nothing has been static in the micro/macro universe since time began. I think the size plus ability of the living entity, and of course the environment it is evolving in, plays a huge role in its need to evolve a brain at all – e.g., as I am sure you know, the sea squirt digests its brain when it no longer needs to move around. Many small creatures’ brains react purely on information from their senses with no other reason, and, like small flying insects, they are moved around by their environment like a leaf blowing in the wind, or like trying to grab a small bar of soap in the bath. Larger creatures also follow their senses for sustenance and living conditions. I think creatures eventually evolved consciousness with the addition of memory, encouraged to do so when food became scarce and they were confronted by other creatures doing the same and also trying to eat them. Planning with memory is phenomenal consciousness.

  102. 102. Roy Niles says:

    Sorry boys but all of earth’s living creatures had a form of trial and error intelligence and a functional repository that remembered what they had learned and reminded them of what to expect from prior learning. The difference between living and non-living is that very ability to learn, remember and make expectant choices. They all did “planning with memory” and were conscious of whatever they observed that fed their memories.

  103. Niklas Grebäck says:

    Disclaimer: I just stumbled on this awesome site while googling consciousness and mirror neurons. I haven’t read everything, and I’m not a professional philosopher or scientist – just a curious psychologist passing time. Therefore, the following comment is nothing more, nothing less, than a quick note on the fly.

    First, my position is that consciousness cannot be directly conscious of itself or observe itself in action – no more than the eye can see itself or have a direct view of “watching” as a process.
    Now, I happened on the term epiphenomenalism and found it consistent with my experience in Zen and otherwise. I was thus prompted to re-think that “problematic” theory of mine/mind.
    So here goes…

    “Epiphenomenalism is the view that one’s actions are not caused by one’s thoughts:”

    Of course action is not directly caused by our thoughts.
    I can falsify that on the spot like this: in a moment I will try to produce the thought “I will now pat myself on the head”. If the idea that thoughts cause action is true, I will act accordingly. Here we go……
    …nope, didn’t happen. Conclusion: thoughts do not cause action.
    In what way thought might, or rather must, influence action is another question.

    “that we are really just passive spectators under the illusion that we control our own behaviour.”

    Not causing our actions by means of thinking does not mean we’re passive, nor spectators. Every day I act a lot in the absence of directing or controlling thoughts. If every activity I undertook required an initial conscious thought, not much would get done. If someone calls my name, I turn around and answer “Yes, that’s me”. Like so many other actions, that is a no-brainer.
    As for being a spectator, that implies some sort of dualistic doer-observer quality. There is of course no “me” watching my “self”.
    The illusion could of course go beyond “control”. It could be that the concept/experience of “me” is illusory too, as we know from the Buddhist tradition as well as some contemporary neuroscience.

    “The idea appeals to those who want to accept that physics explains the causes of all events, including the behaviour of human beings, while still regarding consciousness as something over and above this merely mechanical process.”

    I have yet to come across a solid argument that there is something that is not either of physical nature or in the absence of such. Physical matter, down to quarks and photons, likely plays a role in anything that happens. That doesn’t mean physics can explain everything, or anything for that matter. The top dogs in physics are themselves made of the stuff they observe, including Heisenberg.

    “The idea is lent some (much-needed) plausibility by Libet’s famous experiments which seem to show that decisions are effectively made before we become aware of having made them: and by many others which demonstrate people’s remarkable tendency to invent reasons, and accept responsibility for, behaviour which was actually caused by a post-hypnotic suggestion, a brain malfunction, or other factors they were not consciously aware of. Strictly, of course, speech is another form of behaviour, so a rigorous epiphenomenalist would actually have to hold that these rationalisations and confabulations were also nothing to do with the person in themselves.”

    “Nothing to do with the person” would be strange indeed. How can something coming out of a person’s mouth have NOTHING to do with him/her? Surely, the position must be that the person does not have control over what happens, albeit having some or much influence.
    I suggest the main factor is “other factors they were not consciously aware of”. If I try to determine what causes me to write this, I cannot do it. I can build a grand chain of cause and effect, but it will end up extremely speculative and whimsical. The truth is, I do not know. What will soon be obvious is the fact that I didn’t choose to do it. Perhaps I could’ve chosen NOT to do it, but the active action is caused by “other factors” such as your text, and what made you write it, and what made that happen, and so on ad infinitum.
    Even if I do not have control over this here “writing”, I have a lot to do with it. Bottom line: we cannot control what happens AND we cannot escape influencing what happens.

    “Philosophically, epiphenomenalism is hard to maintain because it makes the person-in-themselves so completely irrelevant.”

    That implies that the only relevant piece of the universe is the human’s sense of “self”. Although seemingly important for our respective senses of “self”, isn’t it a bit arrogant to label everything outside the self-experience as “irrelevant”?

    “It seems better to locate personhood somewhere in the material process, or even do without it altogether. Epiphenomenalists are faced with the task of explaining how our conscious thoughts and our physical actions remain in such close harmony, which is impossible to do without raising further difficulties. Philosophically, therefore, the idea is generally regarded as untenable (though it has some appeal for David Chalmers, for example): in psychological discussions, a looser sense of the word is sometimes used, to mean a doctrine that people merely have much less conscious control over their actions than they believe.”

    Conscious thought and physical action are – might be – in close harmony because both are subject to the same, ever-present, ever-changing context. There’s no causal link between them, and there need not be one. There’s a co-variation due to the good ol’ “other factors (they were) we are not consciously aware of”.

    “We could illustrate the distinction between these two readings of epiphenomenalism by drawing an analogy between a human being and a large corporation. A ‘corporate epiphenomenalist’ would assert that the chief executive actually had no control over the organisation. On the strict philosophical reading, this would have to mean he had no influence on the organisation at all; on this view we should find ourselves forced to admit he drew no salary (because that would affect the balance sheet, however slightly), had no office, and, in fact, was effectively invisible and intangible. On the looser reading, these difficulties would not arise: the chief executive could be a substantial member of the corporate community: he could even write the Annual Report and issue convincing press releases; he just wouldn’t normally be able to get any internal memos or instructions acted upon.”

    The executive has indeed no control, but surely he will influence what happens (he cannot avoid influencing any situation he appears in). To make a distinction: the physical entity The Executive has this direct influence, but the physical entity/movement within him that creates the executive’s “sense of self” does not. The sense of self can go on 24/7 – “I am in control of this corporation and will direct it towards infinite success” – but that won’t change anything outside of him. Not until he actually does something with the physical body that holds this sense of self. Then influence is inevitable, but control…nahh…

    Well, that’s my two cents. It will not make anyone embrace epiphenomenalism. More likely, it will trigger the myriad of objections already presented.
    One thing is for sure: I remember a sense of excitement when typing.
    Thanks for being part of making that happen.
    How could it have been otherwise?

  104. Peter says:

    Thanks, Niklas

  105. Niklas says:

    Oh, I suddenly realized consciousness is a mirror that makes copies not reflections.
