Hard Problem not Easy

Antti Revonsuo has a two-headed paper in the latest JCS; at least it seems two-headed to me – he argues for two conclusions that seem to be only loosely related; both are to do with the Hard Problem, the question of how to explain the subjective aspect of experience.

The first is a view about possible solutions to the Hard Problem, and how it is situated strategically. Revonsuo concludes, basically, that the problem really is hard, which obviously comes as no great surprise in itself. His case is that the question of consciousness is properly a question for cognitive neuroscience, and that equally cognitive neuroscience has already committed itself to owning the problem: but at present no path from neural mechanisms up to conscious experience seems at all viable. A good deal of work has been done on the neural correlates of consciousness, but even if they could be fully straightened out it remains largely unclear how they are to furnish any kind of explanation of subjective experience.

The gist of that is probably right, but some of the details seem open to challenge. First, it’s not at all clear to me that consciousness is owned by cognitive neuroscience; rather, the usual view is that it’s an intensely inter-disciplinary problem; indeed, that may well be part of the reason it’s so difficult to get anywhere. Second, it’s not that clear how strongly committed cognitive neuroscience is to the Hard Problem. Consciousness, fair enough; consciousness is indeed irretrievably one of the areas addressed by cognitive neuroscience. But consciousness is a many-splendoured thing, and I think cognitive neuroscientists still have the option of ignoring or being sceptical about some of the fancier varieties, especially certain conceptions of the phenomenal experience which is the subject of the Hard Problem. It seems reasonable enough that you might study consciousness in the Easy Problem sense – the state of being conscious rather than unconscious, we might say – without being committed to a belief in ineffable qualia – let alone to providing a neurological explanation of them.

The second conclusion is about extended consciousness; theories that suggest conscious states are not simply states of the brain, but are partly made up of elements beyond our skull and our skin. These theories too, it seems, are not going to give us a quick answer in Revonsuo’s opinion – or perhaps any answer. Revonsuo invokes the counterexample of dreams. During dreams, we appear to be having conscious experiences; yet the difference between a dream state and an unconscious state may be confined to the brain; in every other respect our physical situation may be identical. This looks like strong evidence that consciousness is attributable to brain states alone.

Once, Revonsuo acknowledges, it was possible to doubt whether dreams were really experiences; it could have been that they were false memories generated only at the moment of awakening; but he holds that research over recent years has eliminated this possibility, establishing that dreams happen over time, more or less as they seem to.

The use of dreams in this context is not a new tactic, and Revonsuo quotes Alva Noë’s counter-argument, which consists of three claims intended to undermine the relevance of dreams; first, dream experiences are less rich and stable than normal conscious experiences; second, dream seeing is not real seeing; and third, all dream experiences depend on prior real experiences. Revonsuo more or less gives a flat denial of the first, suggesting that the supporting evidence is thin to non-existent: Noë simply hasn’t cited enough of it. He thinks the second counter-argument just presupposes that experiences without external content are not real experiences, which is question-begging. Just because I’m seeing a dreamed object, does that mean I’m not really seeing? On the third point he has two counter-arguments. Even if all dreams recall earlier waking experiences, they are still live experiences in themselves; they’re not just empty recall – but in any case, that isn’t true; people who are congenitally paraplegic have dreams of walking, for example.

I think Revonsuo is basically right, but I’m not sure he has absolutely vanquished the extended mind. For his dream argument to be a real clincher, the brain state of dreaming of seeing a sheep and the brain state of actually seeing a sheep have to be completely identical, or rather, potentially identical. This is quite a strong claim to make, and whatever the state of the academic evidence, I’m not sure how well it stands up to introspective examination. We know that we often take dreams to be real when we are having them, and in fact do not always or even generally realise that a dream is a dream: but looking back on it, isn’t there a difference of quality between dream states and waking states? I’m strongly tempted to think that while seeing a sheep is just seeing a sheep, the corresponding dream is about seeing a sheep, a little like seeing a film, one level higher in abstraction. But perhaps that’s just my dreams?

65 thoughts on “Hard Problem not Easy”

  1. I thought it was accepted that lucid dreaming is possible?

    Doesn’t this undermine the idea that dreams are always recollections of earlier experiences? Or is the assertion that there are no new qualia in dreams?

    I think neuroscience can at least keep us grounded, though I don’t think it can solve the real Hard Problem about *why* certain arrangements of matter correlate so heavily with consciousness. That said, it’s arguable that that problem is insoluble.

  2. I’m assuming most readers here don’t accept the evidence for psi, so I’ll leave that aside.

    As far as the richness or vividness of dreams vs waking, I suspect the position of the individual on this matter may predict their attitude toward psi, NDEs and the hard problem in general.

    As a lifelong lucid dreamer, I would say the majority of lucid dreams I’ve experienced are far more substantial, more “real”-feeling, more vivid, than almost anything I’ve experienced in what is normally referred to as “waking.”

    I’m baffled by the point about the difference between a dream state and an unconscious state – if anyone could explain further I’d appreciate it. I don’t even see it as significant for the hard problem if it could be shown that the brain state of dreaming vs waking “seeing” a shape were identical. This could easily be taken to show that what we call “waking” is as much an experience “IN” consciousness as what we call “dreaming”.

    The problem here, I think, is the word “physical”, which seems to me to have devolved to the point of complete uselessness.

    To get a sense of this, try a thought experiment, if you dare – assume the evidence for the “big four” in psi (remote viewing, telepathy, precognition and psychokinesis) is as good as 60% of physicists in the United States are willing to believe it to be (I know – 90%+ of neuroscientists don’t accept it – but who would you trust on this – neuroscientists or physicists? I’ll take the physicists).

    Now, if that’s the case (remember – you don’t need to fight this – we’re just doing a thought experiment), then shared dreaming is possible.

    If that’s the case – if you’re still with me – how would you empirically identify the difference between shared dreaming and what we call “waking”?

  3. A follow-up point: “If that’s the case – if you’re still with me – how would you empirically identify the difference between shared dreaming and what we call ‘waking’?”

    And if you are unable to empirically identify the difference, what becomes of the word “physical”?

    (and to anticipate one more objection, one psychologist – intense anti-psi fellow – told me the only possible explanation for not being able to distinguish waking from dreaming is if someone is psychotic. If you think this, look up “false awakenings” – Bertrand Russell reported 100 successive false awakenings; I think his count may have been off; but I’ve experienced at least 4 or 5 successive ones many times, and this is quite commonly reported)

  4. Bias warning: I have a soft spot for Revonsuo’s work. It’s always easy for me to favourably interpret what he’s saying.

    I’ll try my own reading of his paper; I don’t see it as two-headed as you do. Below is a schematic representation of how I read the flow of his argument (ignoring that most of the paper is concerned with what I synthesise in point 5).
    1. Cognitive Neuroscience is committed to the study of Consciousness, so it doesn’t make much sense to try side stepping it. The problem is hard and he is going to show why.
    2. In terms of mechanistic explanations, one needs to look inside the phenomenon and produce a “multi-level, causal-mechanical network that shows what constitutes the phenomenon”. This is the “downward-looking, constitutive explanation”.
    3. This approach looks difficult because we currently don’t have a list of alternatives on “how possibly” consciousness is generated by brain activity. [Science normally proceeds by narrowing down such lists]
    4. One possible explanation of why we don’t have such a list is that Externalism is true: the brain alone can’t generate conscious experience. If that’s the case, then the hypothetical list of how the brain [alone] may generate consciousness only contains false hypotheses.
    5. However, externalism is false, because while dreaming we do have conscious experiences: they are somewhat different from waking experience, but not radically so. Ergo, the brain in itself has some way to generate conscious experiences.
    6. The real problem is that we don’t have the technical means to find and describe possible internal mechanisms: we currently lack an empirical way to generate the list we need.
    7. Thus, “Until cognitive neuroscience reaches that point [where a list of possibilities is generated], those weak in faith will continue to be tempted to make radical metaphysical conclusions about the nature of consciousness”.

    I’d subscribe to all of the above, but I have conveniently left implicit the point I don’t agree with: namely that the only way to generate downward-looking hypotheses is to have the technological means to probe the internal mechanisms in enough detail. Yes, of course we will need such tools, but we can still try to generate reasonable hypotheses. However, saying so somewhat undermines his take-home message, which is that imagining inventive radical solutions (Koch’s recently acquired panpsychism being the explicit example) is futile. I sort of agree, but would also say that imagining inventive hypotheses to populate the currently empty list (3) is fun and may kick-start the whole process while we are still waiting for the perfect brain scanner.

    Peter: you say that point 5 rests on “dreaming of seeing a sheep and the brain state of actually seeing a sheep hav[ing] to be completely identical, or rather, potentially identical”. I don’t think that’s the case. In our local parlance, all Revonsuo needs to show is that some kind of phenomenal experience is instantiated entirely within the brain (dreams). Describing the difference in detail would be a question for later stages, but there is no need to deny that some difference exists. Once you accept that there is something it is like to dream of a sheep, his point is made.

    In short, Revonsuo is making a very similar point to Boden (in the interview you linked recently): we are collecting data, but we lack viable ideas on how to interpret them.

  5. Thanks Arnold,
    will give it a look as soon as I can (probably not soon, sorry!).

    Don: ESP is not my thing. I’m happy to accept the very well-informed position of people I trust, like Prof. Blackmore. As for the evidence, I don’t have access to Revonsuo’s article from home, but he cites plenty, and you could just as well find much more on your own.

  6. @Sergio:

    I would agree that the search for mechanisms should continue to look for the exact brain wiring which allows humans subjective experience & intentionality (which is not to say they’ll necessarily be the same.)

    That said I do wonder about this point:

    “However, saying so somewhat undermines his take-home-message, which is that imagining inventive radical solutions (Koch’s recently acquired panpsychism being the explicit example) is futile.”

    The idea of what constitutes an “inventive radical solution” would seem to depend on the priors one is coming in with. If I start with Rosenberg’s claims that all thoughts are illusory due to the indeterminate nature of the physical, it seems to me materialist theories which are not ultimately eliminativist would be radical? My explicit example would be Graziano’s idea that puppets are conscious entities.

    Of course Rosenberg’s eliminativism itself would, to just about everyone, be seen as radical.

    @Arnold:

    I think you might have touched on this in the past, but do you have a paper criticizing Koch and Tononi’s IIT? I honestly can’t recall. Also thanks for the review of Inner Presence; it clarified some things for me.

  7. @Sci,

    I don’t have a paper on the Tononi-Koch IIT model, but I have argued in several online forums that IIT is not an explanation of consciousness because it gives no principled account of subjectivity, which I consider to be an essential aspect of conscious experience.

  8. @Arnold: I would agree with that assessment. It seems to me either IIT is materialist, in which case it seems to make – as IMO all computationalist theories do – magic out of “information” or it’s panpsychic in which case it runs into other conceptual problems.

    That said if the theory has made successful predictions or is close to providing some useful application* I think it’s worth keeping around regardless of what philosophical paradigm it leans toward.

    *I recall something about the treatment of coma patients coming out of IIT? Possibly from the Atlantic article on IIT from a few years back?

  9. @Sci #9

    The idea of what constitutes an “inventive radical solution” would seem to depend on the priors one is coming in with.

    Of course it does. In this case, I would take it for granted that the starting point is a fairly orthodox version of Cognitive Neuroscience (if there is one). So you would be assuming that consciousness is the result of some physical mechanism, which can be understood in terms of signal processing (or thereabout), is likely to be based on what neurons do, etc. Pretty much what we’ve been discussing in the previous post comments, but without worrying too much about the theoretical “essence” of “computation”.
    Whatever departs from this area of assumptions (details do vary) would be seen as radically inventive. At least that’s the take-home message I’ve got from reading the paper.

    @Sci #11
    I recall reading about a family of related studies all based on the same (or similar) hypothesis. Unfortunately I didn’t take notes, so I had to search for them again and found this one:
    Casali, A. G., Gosseries, O., Rosanova, M., Boly, M., Sarasso, S., Casali, K. R., Bruno, M., Laureys, S.,
    Tononi, G. and Massimini, M. (2013). A theoretically based index of consciousness independent of sensory processing and behavior. Science Translational Medicine, 5(198), 198ra105.

    The specific assumption is that “information integration” correlates with the complexity of the signals passed between brain areas. You can measure the latter, so you can compare patients on a scale that estimates information integration. IIT roughly assumes that more information integration equals more consciousness, and the technique piloted in the study does seem to allow one to estimate how conscious a patient is, based on measures of neural activity (Warning: I haven’t read the paper).
    This is all good, but would work just as well if our basic assumption was less axiomatic and simply stated: more information integration correlates with more consciousness. I find the latter statement perfectly reasonable while I see the former as unsubstantiated (I see neither evidence nor argument to prefer it to the less grandiose assumption).
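
    To give a concrete flavour of the kind of measure involved, here is a minimal sketch in Python. It is not the published algorithm from the paper cited above (which, as I understand it, perturbs the cortex with TMS, does source modelling, statistical thresholding and an entropy normalization before arriving at its “perturbational complexity index”); it just binarizes a toy evoked response and counts Lempel–Ziv phrases. Every function and variable name below is made up for illustration.

    ```python
    import numpy as np

    def lz76_phrases(s: str) -> int:
        """Count phrases in a simple Lempel-Ziv (1976) parsing of a binary string."""
        phrases, i, n = 0, 0, len(s)
        while i < n:
            length = 1
            # grow the candidate phrase while it already occurs in the preceding text
            while i + length <= n and s[i:i + length] in s[:i + length - 1]:
                length += 1
            phrases += 1
            i += length
        return phrases

    def toy_complexity_index(response: np.ndarray) -> float:
        """
        response: (channels, time) array of evoked activity.
        Binarize each channel against its own mean, flatten, and divide the
        Lempel-Ziv phrase count by the string length (a crude stand-in for
        the normalization used in the published measure).
        """
        binary = (response > response.mean(axis=1, keepdims=True)).astype(int)
        s = "".join(map(str, binary.flatten()))
        return lz76_phrases(s) / len(s)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        uniform = np.tile(np.sin(np.linspace(0, 6, 200)), (8, 1))  # every channel identical
        varied = rng.normal(size=(8, 200))                         # spatially differentiated
        print(toy_complexity_index(uniform))  # lower: highly compressible response
        print(toy_complexity_index(varied))   # higher: response is harder to compress
    ```

    The intuition is only that a response which is both widespread and differentiated compresses badly, and incompressibility is then treated as a proxy for integration; the sketch says nothing about which of the two readings of IIT discussed here is the right one.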

  10. Hi Peter, Interesting topics and congratulations on the book. I’m looking forward to reading it.

    Cardiology, pulmonology, kinesiology etc. are all fields of medical science but do not fundamentally explain the problem nature was solving: distributing nutrients/disposing of waste, exchanging gases, movement etc. The science of consciousness likewise does not always make clear the fundamental problem nature solved, which is modeling the external environment. The “fossil record” for simpler creatures from amoebas through insects makes this conclusion obvious. Reptiles, amphibians and of course mammals have the more advanced secondary brain structures which nature added on to the more basic structures.

    As secondary structures they may resemble secondary brains but are still well integrated into fundamental brain structures. The secondary structures may build higher models, and models we can manipulate so we can anticipate movements of predators etc. The congenitally paraplegic have these same modeling structures, so they too can dream of their own movements. Dreams closely resemble waking reality because they use the same structures, and as you pointed out in an earlier post awareness appears to get distributed throughout the brain, so what’s to say that in our sleep our more fundamental emotional brain does not activate the images, sounds and of course other emotions.

    As far as integration of the cortex goes, the thalamocortical loop structures closely resemble the structures of the visual system, or better put, the fossil record for the advanced visual system probably follows the pattern for the earlier structures. In other words the neocortex is a lot like the eye and visual cortex because it is layered and integrates into other structures. The brain itself ‘sees’ in a similar manner, which is why I like Scott Bakker’s Blind Brain Theory. He may have caught the proper way to ‘look’ at brain structure, or philosophically he has caught the error of science with its computational fallacy. As an engineer with a computer background I know where computationalism fits and where it does not.

  11. Are these comments getting posted? Is there a rule about posting I’m unaware of?

    [Don – I’m not aware of any current problems and there’s nothing from you in the spam filter. Is stuff disappearing? If so, drop me an email. Peter.]

  12. If you’ve had a lucid dream, you know with as much certainty as you can know anything that dreams are experiences, since you can think to yourself, “Wow, this is a very vivid, ongoing experience I’m having, even though I’m asleep” (I’ve had many lucid dreams and have had exactly this thought). This defeats the extended mind hypothesis pretty definitively.

    Arnold, I think you’re being a bit premature to say “The theoretical approach of Tononi and Koch, it seems to me, is a dead end.”

    IIT actually starts with key characteristics of phenomenal consciousness, then hypothesizes causal mechanisms that could instantiate informational states, which, from the system’s “internal perspective” (and only from that perspective) might be phenomenal states. Although talk of a perspective here might be a little misleading, I like this since it rightly suggests that phenomenal, qualitative states exist only for the system, not for outside observers, so (I would conclude, not sure about Tononi et al.) they can’t be identical to observable physical brain states. Here are two recent IIT papers worth a look, the second (with Koch) less technical than the first, but still a good overview:

    http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003588

    http://arxiv.org/abs/1405.7089
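
    A toy numerical illustration may help make “information integration” concrete. This is not Tononi’s Φ (which involves partitions, perturbations and much more machinery); it is only the simpler quantity earlier work in this line called “integration” (essentially the multi-information): the gap between the summed entropies of the parts and the entropy of the whole. The code and names below are illustrative assumptions of mine, not anything taken from the papers.

    ```python
    import numpy as np

    def entropy(p: np.ndarray) -> float:
        """Shannon entropy (bits) of a probability vector."""
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    def distribution(states: np.ndarray) -> np.ndarray:
        """Empirical distribution over the observed joint states (rows)."""
        _, counts = np.unique(states, axis=0, return_counts=True)
        return counts / counts.sum()

    def toy_integration(states: np.ndarray) -> float:
        """
        states: (T, N) array of binary node states over time.
        Returns the sum of per-node entropies minus the whole-system entropy:
        zero if the nodes are statistically independent, larger the more
        they constrain one another.
        """
        whole = entropy(distribution(states))
        parts = sum(entropy(distribution(states[:, [i]])) for i in range(states.shape[1]))
        return parts - whole

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        independent = rng.integers(0, 2, size=(1000, 3))                    # three unrelated coin flips
        coupled = np.repeat(rng.integers(0, 2, size=(1000, 1)), 3, axis=1)  # three copies of one flip
        print(toy_integration(independent))  # ~0 bits
        print(toy_integration(coupled))      # ~2 bits
    ```

    The point of the sketch is just that “integration” can be cashed out as an ordinary statistical quantity; whether any such quantity, from the system’s “internal perspective”, amounts to phenomenal experience is exactly what is in dispute above.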

  13. Tom,

    IIT may not be a dead end in the context of philosophical discussion, but if you set subjectivity as a necessary condition for consciousness, and you realize that Tononi-Koch have no principled explanation of subjectivity, then I think that IIT, from the standpoint of science, is a dead end. In a recent short book from MIT Press, Koch reaffirms that IIT commits him to panpsychism. All bets are off in this finesse of the problem of consciousness.

  14. Arnold I agree that panpsychism is pretty much a non-starter, but the Tononi and Koch paper distinguishes IIT from panpsychism in several respects.

    If by subjectivity you mean the I-centeredness of experience (the experience of being a self that has experience), you may be right that IIT hasn’t addressed that specifically. I’m not completely sold on IIT by any stretch, just pleased that it takes the phenomenal seriously and tries to get in the vicinity of qualia using ideas of information and representation.

    @Don – Are you posting links? Sometimes a post with links gets eaten by the spam filter. On the subject of Psi, since that’s a catch-all word I’m amenable to it. If, after all, Rosenberg is right about materialism meaning we don’t have thoughts then it’s easier for me to fathom some non-materialist alternative than it is to grasp the kind of eliminativism Rosenberg – and if I understand him correctly Bakker – proposes.

    Possibly ironic, but this may just be a lack of imagination on my part – I can’t conceive that my conceiving is an illusion. 🙂

    @Sergio – Thanks for the reply and the info on IIT. I guess I’m just a bit wary of science cleaving too close to any philosophical paradigm, especially when phrases like “weak in faith” come up.

    Arnold had a good line in one of his papers; I can’t do it justice, but basically he was saying that a theory that predicts something unlikely gains credence. That seems like a better standard to me, though this might be my bias toward Orch-OR as a good starting point talking…

  16. A brief correction – I said IIT was a computational theory but as I recall there was a paper stating if IIT is true the brain must be noncomputational? Can’t seem to find it now though.

  17. Experience while awake and experience while dreaming MUST be structurally different when you look at the big picture of the mind, as opposed to just “consciousness”. Come on.

    What changes is the amount of information. The reason why we don’t realize we are dreaming is a basic form of anosognosia: we lack information that would let us detect a dream. Or expose the fact that this fictional dream reality is made of absurdity, holes, things that just won’t make sense if we stopped and thought about them.

    I’ve dreamed very often of having very long hair and a long beard. But not once did I realize in the dream that this was odd. I just took it for granted, the same as I take for granted that my hair is short when I’m awake.

    Always: to detect an illusion you need more information. Detecting an error requires more information.

    When we wake up, we suddenly access information that was occluded while dreaming (like memory: while dreaming you can’t remember what you did two hours before), so we are certain we are now awake, and certain we were dreaming.

    A dream state is just one world trapped inside another. As long as you don’t get information coming from the world that is outside the one you’re currently in, you’ll always believe that the world you’re currently in is the “real” and ultimate one.

    To distinguish between “fantasy” and “reality” you need to access the information coming from reality while being trapped inside fantasy. That creates awareness. But usually the brain PREVENTS consciousness from accessing that data while dreaming. It’s information that is normally sealed away.

    “more information integration correlates with more consciousness”

    This also looks true from my point of view, at a very basic level. As long as we also believe that consciousness is only a tiny part of the activity of the brain.

    Being awake integrates more information than being asleep, and that’s what creates the difference. Always in the relationship between consciousness and overall brain activity (of which consciousness is only a tiny bit).

    “In other words the neocortex is a lot like the eye and visual cortex because it is layered and integrates into other structures.”
    And:
    “As an engineer with a computer background I know where computationalism fits and does not.”

    This is about second-order observation, right? Observing the observer. Meaning that it is different from computation where there’s usually a “blind” algorithm that doesn’t self-observe and self-correct (like evolution).

    If so I understand and agree with that too 🙂

  18. Btw, an important point is this:

    the principle of increasing the information that is integrated, and so obtaining an “improved consciousness”, would look like a nice goal to have. A sort of positive progress.

    But if all theories are correct, that doesn’t look like a very useful scenario. It leads to a mental process that is much slower, and if really improved it will lead to that thing called “analysis paralysis”.

    Which is very close (if not identical, in my opinion) to non-computability.

    Basically: an improved consciousness means losing what makes it powerful and effective. An improved consciousness eliminates it.

    So, in general, being more conscious might as well not be so great and useful.

  19. The part I quoted above from comment 12.

    More information integration.

    In a more discursive way it would mean being more aware of actual brain processes. Being able to track them more reliably.

    Or: have the light of consciousness shine a little brighter so that it can show stuff further away. More information.

  20. @Abalieno: “In a more discursive way it would mean being more aware of actual brain processes. Being able to track them more reliably.”

    Ah, so being more conscious of meta-cognitive processes then? The extreme eliminativist angle is rather fascinating, admirable in the sense that it doesn’t beat around the bush about how our intentionality as well as our subjectivity must be illusory if materialism as currently conceptualized is true.

    Of course, this also runs into the issue that extraordinary claims require extraordinary evidence.

  21. The problem with eliminativism is that it eliminates something that was NEVER there.

    That doesn’t mean we can’t measure a consciousness as a bundle of information that a certain area of the brain can self-track, nor does it mean that this amount of information can’t be altered.

    As Bakker would say, “eliminating” something implies intentionality, but of course this is a theory that describes something. It doesn’t act.

    So eliminativism says that something we see doesn’t exist. Yet it means it DOES EXIST, but as a cognitive illusion. Which is a thing, made of data, it can be altered in various ways. And so on.

    Eliminativism doesn’t eliminate consciousness, it only reduces it to an illusion. The illusion exists, as an illusion you can see and can measure 🙂

  22. Doesn’t this just take us back to the tagline at the top of this blog? Who is being fooled by this illusion?

    Let me know if I’m misunderstanding, but from Rosenberg’s work the problem is that we don’t have thoughts at all. Not that we are mistaken about what our thoughts are, or their origin, but the very aboutness of our mental content. For someone to think they are thinking about Paris is to be mistaken, because this would imply – as he puts it – physical wiring in the brain would require a quality that lets it intrinsically represent Paris.

    I think this is the extraordinary claim eliminativists must provide evidence for by showing us the actual mechanisms that lead to our mistaking false intentionality for the real thing. I admit I don’t see how this could be possible unless I’m misunderstanding the claim eliminativists are making about intentionality.

    Well, the description I use is the one I’ve written about in the lengthy comments that you can find if you look a few posts back on this blog, down to the one about the book. Scroll down and you’ll find my comments 🙂

    I generally believe that the consciousness cognitive illusion is easily understood if you think of the example of the weather, or the “skyhook” of Dennett:

    Imagine a skyhook, but due to a “perceptive problem” the skyhook is surrounded by mist, and the only part you actually see is the very topmost. What does it seem to you? That the hook up there floats in the sky, and MOVES! Since it seems suspended and not attached to anything, it might look like a UFO, flying around with a will of its own. Why? Because the crane that links the hook to the ground was covered by the mist.

    The weather works in the same way. A thunderstorm is “caused” by a high number of factors; it may even “begin” because of a butterfly! But what do you see? That suddenly a storm arrives, there’s lightning and so on. In the past, people looking at something like that would take it as a sudden, unexpected manifestation. A gestalt. This is because, as in the skyhook example, they were unable to track the chain of cause/effect that leads to a thunderstorm. So they see something like an apparition. And what is it? The will of a god! The god must be upset! Why? Because if there’s no cause you can track, you’d assume the cause is right in what you see. An independent manifestation. A will, a god.

    A “will” is literally something that has an independent thought. Why is it independent? Because you cannot see the actual causes. So you assume it’s independent.

    Consciousness is again like these other two examples. Consciousness only sees the tail end of the whole brain process. So what does it look like? A truncated process, whose chain of actual cause/effect vanishes, like the thunderstorm, like the skyhook in the mist. So, unable to track the origin of a thought, consciousness sees a gestalt. An apparition! A ghost in the shell. Consciousness looks to consciousness like a “ghost”, because like the skyhook it is seen as suspended, with a will of its own. MY WILL!

    It’s like if you suddenly find yourself with a dead body covered in blood at your feet, a knife in your hands. Oh shit! I must have killed someone!

    Because consciousness only sees a truncated process, and so can’t see its origin, it “appropriates” the process itself. Consciousness did it.

  24. @Abalieno: Thanks for the response – I did read your comments under that other post. I admire the eliminativist’s honesty about what materialism must explain away yet I don’t see how the arguments you presented suffice as a convincing argument for the illusory nature of subjectivity or intentionality. (This also means will is left unexplained since it follows from intentionality, something Peter deals with a bit in his book.)

    I don’t see how the appeal to information loss + (cryptic IMO) complexity provides an explanation for who is fooled by consciousness and intentionality if “I” am illusory as well, or how either qualifies as an illusion. I think this skyhook argument suffers from the very cognitive leap you offer as an example with the corpse at my feet and me covered in blood.

    Basically it seems to turn on the idea that since the physical world has been explained by reductionism so should the mental. We have all this success in harnessing the physical world (the corpse), we’ve found all these correlations between mind & brain (the blood covering my body), so the conclusion should be obvious (murder was the case that they gave me, as Snoop would say).

    But isn’t it too early to assume that physicalism is correct, when it’s the characteristics of subjectivity & intentionality that make the mental differ from the physical that have made them the focus of the “Hard Problem”? The fundamentally qualitative nature of our subjectivity is different from a hook in the mist, since the latter is experienced in consciousness. Our intentionality is different from holding mistaken beliefs, because intentionality pertains to the question of how we can have beliefs *about* anything at all, whether those beliefs are grounded in what’s true or not.

    What I feel is missing is something no one can yet provide – an actual explanation for how the mental arises from the physical when the physical is assumed to possess no subjective experiences of its own nor have any intrinsic representation (aboutness) of any other aspect of reality. Of course no one has explanations for how mind-souls interact with bodies, or how panpsychism overcomes the combination problem, and so on. I’m not advocating another paradigm here.

    Philosophy has been useful in helping to show the challenges science faces in attempting to understand the mental, but the easy yet unsolved part of the Hard Problem – finding the physical wiring that allows for subjectivity/intentionality – has to be handled by science. Riding on John Davey’s suggestion, really the best place to start is biology and the varied questions we’ve yet to answer about worm memory, how we smell, whether birds use quantum entanglement to help navigate, and so on. The answers could prune the number of acceptable theories down, perhaps by a great deal. Best of all, none of this really depends on whether some philosophical faction is correct/incorrect.

    After we’ve learned more about these simpler questions & pruned away theories rejected due to scientific findings rather than philosophy we might, ideally within the next few decades, be able to turn our attention toward solving the problems of intentionality and subjectivity.

  25. “I don’t see how the arguments you presented suffice as a convincing argument for the illusory nature of subjectivity or intentionality.”

    Ask yourself what “intentionality” means for you.

    Intentionality is the answer to the question “who did it?”

    If you believe a human being has “choice”, then you believe he has intentionality. He did it. If not, you have to look at some prior cause. In a deterministic world, who did it is the Big Bang itself. After that you only have a cascade of cause/effect.

    So intentionality is merely the origin of a process. What or who jumpstarted it.

    Complexity only explains why we are unable to backtrack cause and effect. And so we wrongly attribute an intention to stuff that doesn’t have it.

    “how the mental arises from the physical when the physical is assumed to possess no subjective experiences of its own”

    The mental doesn’t arise 😉 In the sense it is a cognitive illusion if you look at it from that perspective. So it cannot exist. There’s no contradiction.

    You see a contradiction because you see the same thing from two points of view at the same time. Let’s say I’m on a train, and you are outside. The train moves. I’ll say that to me it looks like you’re moving away, you’ll say that for you I’m the one moving away. Why the contradiction? Aren’t we looking at the same thing?

    Of course, but we are looking at the same thing from two different perspectives, so the same thing looks different.

    Subjective/objective are merely the same as inside/outside. Points of view, on the same thing. But from the outside the description you have is complete without any inside. So subjectivity is explained away. It never existed.

  26. In this comment I will try to address some interesting points raised by many of you.

    @Peter,

    We know that we often take dreams to be real when we are having them, and in fact do not always or even generally realize that a dream is a dream: but looking back on it, isn’t there a difference of quality between dream states and waking states? I’m strongly tempted to think that while seeing a sheep is just seeing a sheep, the corresponding dream is about seeing a sheep, a little like seeing a film, one level higher in abstraction. But perhaps that’s just my dreams?

    Maybe your memory is about seeing a sheep, but it is weird to say that a dream is about seeing a sheep.
    I have experienced a few lucid dreams and I must say that the quality of dreams varies. Similarly, the quality of waking life varies. When I’m really tired, I feel that my consciousness is less rich, less vivid than when I’m running or riding a bicycle. When I’m dreaming I also experience varied strengths of consciousness. Sometimes after I wake up I remember only bleak fragments; other times I am amazed by the richness of experience: visuals, sounds, even pain.

    About IIT:
    – IIT indeed leads to some weird conclusions. For example earthquakes should be superconscious.
    – IIT states that “consciousness IS information integration”. We should ask then: integration of what exactly?…

    @VicP:

    In other words the neocortex is a lot like the eye and visual cortex because it is layered and integrates into other structures.

    That, my dear friend, leans dangerously toward making the neocortex a form of homunculus.

    Extended mind, mainly to Tom Clark:
    It is important to differentiate between two “extended mind” ideas. One is a bogus psi magical story pushed by Rupert Sheldrake. The other one is a philosophical idea by Andy Clark and David Chalmers. The main notion is that mind can extend beyond the brain and the body, because a mind is a “doing”, a function that can encompass for example tools, which are an integral part of this enminded activity.

    About panpsychism,
    Why exactly are we denying that consciousness is a property of all things? Is it so unreasonable to say that consciousness is in a way universal, like mass is a property of atoms?

    Abalieno,
    Nice summary of Dennett’s anticonsciousnessism (:p). However, when you write things like “Consciousness looks to consciousness like a ‘ghost’”, then you already presuppose a conscious experience of “seeing something like a ghost”. [Free] will or causes of experiences are totally irrelevant for the problem of consciousness. The problem is, how does consciousness happen. Saying that we are deluded into thinking that consciousness has no cause is not even on topic, I’m afraid.

  27. “However when you write things like “Consciousness looks to consciousness like a “ghost””, then you already presuppose a conscious experience of “seeing something like a ghost”. [Free] will or causes of experiences are totally irrelevant for the problem of consciousness. The problem is, how does consciousness happen.”

    I don’t 100% understand where the objection is.

    My point was about why we see intentionality. So “consciousness has no cause” means consciousness looks like it has an intention of its own.

    The analogy with a ghost was merely to underline that the apparition is sudden and without motive. Something that looks like it’s there even if it’s not.

    “The problem is, how does consciousness happen.”

    But the thesis is that consciousness DOESN’T happen. What is to explain is why it LOOKS LIKE it happens. We have to explain the illusion.

  28. @Abalieno: Thanks for taking the time to reply – I don’t agree but I do appreciate the engagement.

    “Intentionality is the answer to the question “who did it?””

    Not really? The question is the fixed nature of mental content, which differs from the indeterminate nature of the physical. As Ihtio notes, the question of origin and providing an actual explanation for mental content are not the same.

    “Complexity only explains we are unable to backtrack cause and effect. And so we wrongly attribute an intention to stuff that doesn’t have it.”

    But if there’s no intentionality there’s no “we” to attribute anything to anything? IIRC it was Putnam who noted this in a discussion with Hayek, that you run into a problem when trying to provide a causal explanation for intentionality precisely because intentionality is necessary to zero in on the cause of tokening.

    “In the sense it is a cognitive illusion if you look at it from that perspective. So it cannot exist. There’s no contradiction.”

    But who precisely is looking when mental content is illusory? I feel like there is some kind of Zen koan type argument where the light bulb realizes there are no light bulbs ever, but I prefer waiting for the actual scientific account. Philosophy is important but this is a question that requires the extraordinary evidence to be rooted in scientific explanation.

    @Ihtio

    “Why exactly we are denying that consciousness is a property of all things. Is it so unreasonable to say that consciousness is in a way universal, like mass is a property of atoms?”

    I think the problem with proposing panpsychism from a perspective of scientific exploration is it can allow a theory that explains little to nothing to make claims about accessing a fundamental consciousness. Admittedly this is a problem with materialist theories since those make much of “complexity”.

    Really it’s too early for academia to take a hard stance on any metaphysical paradigm, unless of course you’re a philosopher desperate for recognition and $$$.

  29. “But who precisely is looking when mental content is illusory? I feel like there is some kind of Zen koan type argument where the light bulb realizes there are no light bulbs ever”

    It’s a counter-intuitive argument that reveals the nature of the paradox. Once again you put in the same line two descriptions that are in contradiction.

    There’s “someone looking” when you are in the illusion. “Mental content” is illusory (doesn’t really exist) when you are outside the illusion.

    I’ll try to explain this other way:

    1st person = mythology, religion, philosophy (subjectivity)
    3rd person = science (objectivity)

    For example, “dualism” is a concept that exists in 1st person, but is gibberish in 3rd (a reason why I think Arnold Trehub’s account is undermined, since it’s based on dualism).

    When you are in the 3rd perspective (Science) you have a 100% complete description of the world, and in this description no subjectivity exists, no dualism, no intentionality, no consciousness. No cognitive illusion, either. It’s an impersonal, mechanical account where all these aspects are “explained away”. It’s all erased.

    No one is looking, no one is deluded. Metzinger’s “Being No One”. Okay?

    But if instead we switch to the 1st person we switch to a partial description of reality that is only based on a chunk of the totality. Whereas in 3rd person we had the whole deal, in 1st person we access only a slice of the pie. Because this is a slice, and not the whole, we get all the cognitive illusion and “appearances”.

    Once you lift the curtain of the 1st person and move to the third, all that stuff goes away.

    Take it as Theory 1 versus Theory 2. They are in open contradiction. When you make a claim you need to know if you are in field 1 or 2, because if you keep confusing them then you end up creating more and more contradictions because you crossed over to the other, and they are incompatible.

    If you are looking for non-contradictory claims and for strictly knowing the Truth, then make sure you are strictly in the 3rd-person perspective. But if you do that, then don’t ask “what is consciousness”, because from that perspective consciousness has already been eliminated. It just isn’t there. You just have a brain and its physicality. That physicality does its job without needing or creating any intention, any consciousness, anything abstract or unexplainable.

  30. Abalieno: “For example, “dualism” is a concept that exists in 1st person, but is gibberish in 3rd (a reason why I think Arnold Trehub’s account is undermined, since it’s based on dualism).”

    My account is certainly not based on dualism. It is based on dual-aspect physical monism. The problem of providing a causal physical explanation of consciousness is an epistemological problem arising from the separate domains of reference for 1st-person vs 3rd-person descriptions. See “A foundation for the scientific study of consciousness” on my ResearchGate page.

    The self that Metzinger denies in “Being No One” is a metaphysical self. I think he now agrees with me that the “phenomenal self model” (PSM) cannot be constructed without the spatiotemporal perspectival origin of a core self in an evolved brain.

  31. “The problem of providing a causal physical explanation of consciousness is an epistemological problem arising from the separate domains of reference for 1st- person vs 3rd-person descriptions.”

    Well, we seem to agree on that, at least.

    My bad on dual-aspect monism; I wrongly thought it was a variation on some form of dualism.

    And wow, I arrived at monism on my own. I thought there wasn’t any -ism that actually represented my idea, but dual-aspect is just it.

    On the wiki: “In the philosophy of mind, double-aspect theory is the view that the mental and the physical are two aspects of, or perspectives on, the same substance.”

    Yep, that’s what I wrote up there in the comments. Details might be slightly different, but it’s in the ballpark of what I came up with.

  32. …or I might have found a discrepancy with my interpretation.

    It seems the official theory works in a context of “neither/nor”, meaning that the underlying truth is something else compared to the two points of view. Some kind of joint perspective that is complete only when you have both (and that’s why that paper makes me skeptical when it tries to account for both in one model).

    Instead I’d say that the physical already contains everything. There’s literally nothing unexplained. So when you pick one perspective, that perspective already contains its own model or interpretation of the other. With the only caveat that the 3rd person is authoritative, while the 1st one is only very partial and “false”. So I always see a hierarchy that is clearly established and that gives a priority of ontology to the 3rd person (a bit like we know a dream is a lesser realm compared to the reality of being awake: one is always inside the other and you can’t invert this priority).

    Like, if you imagine a picture, it’s one circle within the other, but where the smaller one goes away when you look from the bigger one.

    Following my idea when you deal with a perspective you have to discard completely the other one.

  33. Abalieno:

    But the thesis is that consciousness DOESN’T happen. What is to explain is why it LOOKS LIKE it happens. We have to explain the illusion.

    You say that we have to explain the illusion, but the illusion is a conscious experience. Therefore we have to explain consciousness. There doesn’t seem to be any way around it.

    Your response to Sci reveals an interesting presupposition, namely that you believe that somehow science produces “objective” knowledge. It most certainly doesn’t. All knowledge is acquired by people in a given time, culture, using some language and tools. All this makes all scientific measurements, explanations, theories intrinsically perspectival. The perspective of science is generally that of homo sapiens. There is nothing objective in it. There is no such thing as “objective” or “objectivism”, just as there is no such thing as the “nothing” that some people say was before the big bang. There is no objectivity because all that can in any way, by anyone or anything, be perceived, felt, understood, contacted (by this I mean even basic mechanical interactions) is perceived, etc. in a way that is specific to the perceiver (regardless of whether it is a person or a particle or whatever else). Objectivity is just an abstraction.

    Sci:

    I think the problem with proposing panpsychism from a perspective of scientific exploration is it can allow a theory that explains little to nothing to make claims about accessing a fundamental consciousness. Admittedly this is a problem with materialist theories since those make much of “complexity”.

    Many theories have restricted domains. Biology is about ~0.000000001% of the universe or even less. Even if some panpsychism-based metaphysics explains just a little, it still may be valuable.

    Really it’s too early for academia to take a hard stance on any metaphysical paradigm, unless of course you’re a philosopher desperate for recognition and $$$.

    Academia really does take a hard stance on metaphysics. It just doesn’t talk about it that much. The prevalent metaphysics taken for granted, without much deliberation or scrutiny, is some form of materialism or object-oriented physicalism. Of course the latter is slowly diffusing due to quantum mechanics becoming more popular.

  34. @Arnold:

    Isn’t dual-aspect monism assuming that neither physicalism nor idealism is true? That there is some kind of “stuff” from which both mind and matter derive?

    @ihtio:

    I’ve no problem with panpsychism as a possibility, though I don’t see how you test for it with our currently available technology? Out of curiosity, how might we distinguish between that paradigm and any others? I see the path may go through what the IQOQI physicist Zeilinger has called the coming fall of local realism, but I confess my physics isn’t strong enough to grasp what this actually means in relation to consciousness.

    And I definitely agree that academics often assume materialism, but I think setting this aside doesn’t change the path that should be taken. Find the cut-off points – my go-to example is quantum biology as it relates to the brain – that prune off whole classes of theories. Another might be Arnold’s retinoid theory. Then if those bear fruit we’ve drastically reduced the search space.

    “you believe that somehow science produces “objective” knowledge. It most certainly doesn’t. All knowledge is acquired by people in a given time, culture, using some language and tools. All this makes all scientific measurements, explanations, theories intrinsically perspectival.”

    Well, of course a distinction exists between the practice of science and science as an ideal.

    Somehow science DOES produce objective knowledge. That’s the whole point that distinguishes it from just fantasy. If you have a theory and it is proven science, you can use it to obtain a concrete result. If it’s just a fantasy you don’t obtain much.

    I do believe in bullets, you don’t, you die anyway. I don’t believe in fireballs, you do, I’m still alive.

    Science, because it is objective, doesn’t conform to what you wish for. Fantasy does. We use science only because the data comes from an objective point of view. That data may not be completely pure, but at least it leads you toward objectivity the more you figure out about it to correct perception. Practical science is just a path, “on the way” to objectivity.

    Anyway, you were expressing the perspective of constructionism when you stated that you cannot bypass experience.

  36. Btw, that claim about the impossibility of bypassing perception can also be extended to its extreme: “bla bla bla… but we might always be just a brain in a vat”.

    Science will never be able to prove that possibility wrong.

    The counterpoint to that is that it’s a thesis with zero usefulness. Believing that won’t bring you more food, provide shelter or make you happier. It’s just totally useless. You will never know, it will never make any difference.

    So you are forced to focus on the now, exactly the way it looks (this part is well explained in a little book called The Aristos, by John Fowles).

  37. @Abalieno:

    I think Ihtio’s point is not to worry about the Matrix but rather to point out that we ourselves are collectively bound to limited frames. Culturally we can find ourselves believing, due to the agreement of our peers and/or experts, that we’ve explained/understood an aspect of reality without that being the case. A good example would be the laws of nature, which only recently have come to be generally seen as requiring explanations themselves. (Also time, causality, realism, all that good stuff.)

    But worse than this is the realization that we might be limited by the sensory apparatus gifted unto us by evolution. This is the argument made by Mysterians like Chomsky and McGinn – that we simply can’t understand how consciousness/intentionality work because we lack the necessary physical forms to do so.

    I believe Bakker has made a similar argument, that we don’t possess the necessary machinery to model our selves the way we model the world. Where it seems Bakker differs – if I read him correctly – from the above two more agnostic (AFAICTell) philosophers is to suggest that while mysteries remain we know enough about matter (namely that it lacks consciousness/intentionality) to safely infer what doesn’t exist in the world outside our minds doesn’t exist. Thus the mind itself must be illusory.

    As I note above I’m not convinced we do know enough about the external world outside our individual mental selves to make such a bold claim.

    =-=-=

    On intentionality being illusory ->

    It seems to me – following Nagel & iirc Popper – the erasure of intentionality leads to issues about how reason is grounded. (Or, at the least, it further shows what has to be scientifically explained by the eliminative account.)

    If the aboutness of mental content is illusory then it would seem that our chains of thought are, in truth, not actually based on their content but on the collision of atoms. Yet for a conclusion to be grounded in reason is, AFAICTell, dependent on the semantic content of these supposedly illusory thoughts. It would seem an odd coincidence that the collision of atoms coincided with our use of logic?

    I suppose one could argue that we evolved to be rational, but it’s not clear how this would be the case when the thoughts of our animal ancestors are as illusory as any other kind? In fact Bakker’s BBT, as I recall, suggests our brains have evolved to utilize heuristics that get the job done rather than give us a true account of reality. (Similar work has been done by Donald Hoffman, who argues natural selection “drives true perception to swift extinction”).

    Thus it seems the question of rationality’s existence is a further issue the eliminativist must include in the final scientific account.

  38. “I believe Bakker has made a similar argument, that we don’t possess the necessary machinery to model our selves the way we model the world. Where it seems Bakker differs – if I read him correctly – from the above two more agnostic (AFAICTell) philosophers is to suggest that while mysteries remain we know enough about matter (namely that it lacks consciousness/intentionality) to safely infer what doesn’t exist in the world outside our minds doesn’t exist. Thus the mind itself must be illusory.”

    Bakker says (well, I think) that science will provide the data. And that data produces change.

    He thinks the illusion can be actively conquered and removed, once you have the data. There isn’t any abstract inference. Just experiments to find out the truth. Bakker very rigorously sticks to science.

    “It seems to me -following Nagel & iirc Popper – the erasure of intentionality leads to issues about how reason is grounded.”

    Well, of course. We discussed that recently on Bakker’s blog. If you go there you can’t say anything. That perspective, once intentionality is removed, is a perspective without a voice.

    Interestingly enough: isn’t the whole issue coming up even in genetics?

    I mean, evolution proceeds “blindly”, yet evolution also created human brains, which supposedly work as “intelligent design”. So evolution should still contain within itself a complete account of the intelligent design too.

    It’s just a different formulation of the mind/body problem, just in a different field.

  39. Sci #41:
    As with any conceivable metaphysics there usually is no easy way to test it. You cannot test solipsism, materialism or panpsychism. We can however look at the consequences of these views, particularly how they push our scientific endeavors.
    See for example:
    Cobb, J. B. (1988). Ecology, science, and religion: Toward a postmodern worldview. The Re-enchantment of Science, SUNY Press, New York.
    Earley Sr, J. E. (2012). An invitation to chemical process philosophy. In Llored, J.-P., editor, Epistemology of Chemistry: roots, methods and concepts, pages 529–539.
    Rieppel, O. (2009). Species as a process. Acta biotheoretica, 57(1-2):33–49.
    Stein, R. L. (2004). Towards a process philosophy of chemistry. Hyle: International Journal for Philosophy of Chemistry, 10(4):5–22.

    Abalieno #42:
    The main line of demarcation between science and non-science is the testability of scientific theories, hypotheses, and predictions, not “objectivity”. You can check whether an experiment falsifies a hypothesis; you cannot prove a theory (outside of the formal sciences, mathematics and logic).
    Whether you believe in bullets or not doesn’t make those bullets “objective”, as “bullet” is a human-made concept, and thus not objective.
    Saying that a 3rd-person view of human behavior is objective is, according to the philosophy of science, a metaphysical view, hardly a scientific fact. And it is a 1st-person view, for that matter.
    Objective knowledge is not possible, because all knowledge is somebody’s knowledge.
    And maybe we will someday be able to make a good theory about human behavior and brain processes. But I don’t see how dismissing the problem of consciousness will satisfy our hunger for explaining it.

  40. “Objective knowledge is not possible, because all knowledge is somebody’s knowledge.”

    Yeah, semantics 😉

    Strictly speaking, that’s right. But we can certainly use a weaker sense of the word, where it means we refer to data that isn’t directly subjective.

    It’s one thing to say you can never be sure of objectivity; it’s another to say that anything goes and it’s all arbitrary.

    Testability relies upon the principle that there’s a fixed external world. And the principle of the external world exists out of… convenience.

    http://www.preposterousuniverse.com/blog/2007/04/11/what-i-believe-but-cannot-prove/

  41. I liked the Sean Carroll essay…though part of me feels he betrays the scientific ideal when he hawks the multiverse so much.

    I think science is best when it proceeds in smaller steps, recognizes what hasn’t been answered, and makes an effort to be self-correcting. While I think philosophy is useful in showing how what seems like a good scientific answer may in fact be inadequate, it seems to me the hawking of metaphysical paradigms is a case where philosophy moves toward the kind of bizarre certainty that produces adherents of MWI.

    Psychology studies have suffered from selection bias, the decline effect reveals potentially deep problems with the culture of science, it’s questionable how many published findings would be successfully replicated now, and so on. There’s enough data for philosophers to cherry-pick and self-promote a paradigm, but not enough – IMO anyway – for anyone to be certain that their theory will ultimately pass muster.

    IMO we’ve jumped the gun, but we can return to a sensible exploration of the human mind by figuring out some basic things about animals and about how our senses work, and then coming back to the big questions once the foundations have been properly shored up.

  42. Abalieno,
    yup, the question of whether there is something outside of our minds is a separate one, and an irrelevant one in the context of our discussion.

    The relevant question is: “can we know something objectively?” And the simple answer is “No – for to know is to know from a certain subjective perspective”.

    If by “objective world” we mean “all that exists”, then certainly a subjective perspective (+ experience) is a part of it. It requires an explanation. If it is an “illusion” (in the sense that it is something other than we think it is), then we would like to explain how this “illusion” comes about and what a better way to talk about the phenomenon in question (here: consciousness) would be.

  43. Hi ihtio,

    In simple terms, what a simple plant, cell or even particle ‘feels’ when externally stimulated is still a ‘feeling’. What complex brains like ours do is convert the ‘feeling’ into a complex model of the stimulus. So what we take to be an objective world is actually a model built out of our subjective self, but separated by the physiological structure of the brain from a core subjective self. It’s not a hard model to arrive at, given the fossil record from lower species.

    “#13 VicP: In other words the neocortex is a lot like the eye and visual cortex because it is layered and integrates into other structures.

    #32 ihtio: That, my dear friend, leans dangerously toward making the neocortex a form of homunculus.”

    Dangerously close is ok.

  44. @Sci: “Isn’t dual aspect monism assuming that neither physicalism nor idealism is true? That there is some kind of “stuff” from which both mind and matter derive?”

    Dual-aspect *physical* monism assumes that the universe is physical, but that we have only indirect/biologically-filtered access to the universe of which we are a part. The dual aspects derive from our epistemological dilemma where our 1st-person (subjective) descriptions and our 3rd-person (objective) descriptions occupy separate descriptive domains. I have proposed a bridging principle to enable us to understand the relationship between the subjective perspective and the objective perspective.

  45. VicP,

    There is a great difference between a [conscious] experience and a [complex] model of the stimulus. I don’t really see how thinking about the neocortex as an “eye” could help us here.

    Also, the neocortex is not like an eye, because there are far too many backward projections from the neocortex to other areas, and between different parts of the neocortex. So the neocortex doesn’t just wait for stimuli – it is very, very active: it initiates neuronal actions, modulates oscillations, etc.

  46. Ihtio: As I stated, in a simple system the stimulus and response (SR) are closely linked. You then reinforced my argument by stating that the brain has internal feedback and looping, or an internalized system of self-generated stimulus and response. The SR becomes internalized into the system.

    I draw the similarity between overall brain structure and the eye because nature follows the same methodology, i.e. layering, serial-parallel architecture, the optic nerve vs. the rooting of the neocortex into the old brain via the thalamus and hippocampus, etc.

  47. VicP, Yes, I understand your point. The jump from making models of stimuli to “making” conscious experience is a grand one, nonetheless.

  48. Of course, this also runs into the issue that extraordinary claims require extraordinary evidence.

    The problem is it’s usually insisted that the evidence must fall within the human range of regular sensory/feeling perception and can’t be outside that. If, say, the evidence were in the infrared range, it seems it gets rejected.

  49. @Callan: “The problem is it’s usually insisted that the evidence must fall within the human range of regular sensory/feeling perception and can’t be outside that. If, say, the evidence were in the infrared range, it seems it gets rejected.”

    I have no idea what you’re talking about here. Are you referencing a historical example?

  50. Ihtio: The armature in a motor does not “make” a magnetic field but causes its physical emergence. Likewise, neural networks do not make qualia; rather, their structure and neural activation cause qualia as a physical emergence.

    ******THEORY ALERT******

    The panpsychist perspective holds because biological cells contain huge, fixed patterns of physical particles: physical particles locked into patterns in these relatively huge molecular structures and relatively huge cellular spaces, the S L O W patterns of nature. Metabolically lock those patterns across cells, with the forces of nature still at work, and qualia emerge.

  51. VicP, I’m not sure what you are getting at. We know that neural activity somehow causes consciousness. The question is “how?” And the analogy between the cortex and an eye doesn’t seem to be opening any new doors.

    I’m sorry, but I can’t see the relevance of your mentioning panpsychism to the topic. Could you please explain?

  52. Arnold, I get the impression that there is little agreement as to what empirical data amounts to “evidence”, and even less when the “relevance” of that data is at stake.

  53. ihtio: “I get the impression that there is little agreement as to what empirical data amounts to “evidence”, and even less when the “relevance” of that data is at stake.”

    If this is the case, then the entire enterprise is no more than a venting of intuitions!

  54. ihtio: neurons no more ‘somehow cause’ consciousness than an electric motor’s armature ‘somehow causes’ a magnetic field that does work. If you are an engineer, you know that magnetic properties are inherent in all matter, so ‘panmagnetism’ is analogous to panpsychism. However, panpsychism is a placeholder for an emergent property of neurons which possibly entails all of the physical forces.

    The biological descriptions which suffice for all cells except neurons entail only chemical bonding, energy exchange, or limited chemophysical properties. The consciousness mystery points to the missing properties occurring inside cells and across neurons. You are correct that the panpsychist perspective is the same as the élan vital perspective in this respect: both stand in for missing properties.

  55. In science, evidence trumps intuition. If a theoretical model of consciousness is able to successfully explain/predict previously unexplained conscious experiences it should be taken provisionally as the best explanation despite contrary intuitions.

  56. @Arnold: ‘If this is the case, then the entire enterprise is no more than a venting of intuitions!’

    Which is why I feel we need to start smaller, deciphering how the various qualia (sight, smell, etc.) are delivered/translated from the environment to the body. It seems odd to me to try to figure out consciousness when at least some of the sensory systems resulting in qualia have yet to be fully explained.

  57. @Sci,

    I doubt that a FULL explanation of any biophysical process is achievable. But science makes progress on the basis of partial explanations, even if we cannot grasp the essential nature of what we experience. A key question: what transforms a sensation into a perception (a quale)? There is a theoretical model that proposes an answer, and there are relevant observations and experimental findings that lend credence to the model.
