By now the materialist, reductionist, monist, functionalist approaches to consciousness are quite well developed. That is not to say that they have the final answer, but there is quite a range of ideas and theories, complete with objections and rebuttals of the objections. By comparison the dualist case may look a bit underdeveloped, or as Paul Churchland once put it:

Compared to the rich resources and explanatory successes of current materialism, dualism is less a theory of mind than it is an empty space waiting for a genuine theory of mind to be put in it.

In a paper in the latest JCS, William S. Robinson quotes this scathing observation and takes up the challenge.

Robinson, who could never be accused of denying airtime to his opponents, also quotes O’Hara and Scott’s dismissal of the Hard Problem. For something to be regarded as a legitimate problem, they said, there has to be some viable idea of what an answer would actually look like, or how the supposed problem could actually be solved; since this is absent in the case of the Hard Problem, it doesn’t deserve to be given serious consideration.

Robinson, accordingly, seeks to point out, not a full-blown dualist theory, but a path by which future generations might come to be dualists. This, in his eyes, is the Hard Problem problem: how can we show that the Hard Problem is potentially solvable, without pretending it’s any less Hard than it is? His vision of what our dualist descendants might come to believe relies on two possible future developments, one more or less scientific, the other conceptual.

He starts from the essential question: how can neuronal activity give rise to phenomenal experience? It’s uncontroversial that these two things seem very different, but Robinson sees a basic difference which causes me some difficulty. He thinks neuronal activity is complex while phenomenal experience is simple. Simple? What he seems to have in mind is that when we see, say, a particular patch of yellow paint, a vast array of neurons comes into play, but the experience is just ‘some yellow’. It’s true that neuronal activity is very complex in the basic sense of there being many parts to it, but it consists of many essentially similar elements in a basically binary state (firing or not firing); whereas the sight of a banana seems to me a multi-level experience whose complexity is actually very hard to assess in any kind of objective terms. It’s not clear to me that even monolithic phenomenal experiences are inherently less complex than the neuronal activity that putatively underpins or constitutes them. I must say, though, that I owe Robinson some thanks for disturbing my dogmatic slumbers, because I’d never really been forced to think so particularly about the complexity of phenomenal experience (and I’m still not sure I can get my mind properly around it).

Anyway, for Robinson this means that the bridge between neurons and qualia is one between complexity and simplicity. He notes that not all kinds of neural activity seem to give rise to consciousness; the first part of his bridge is the reasonable hope that science (or mathematics?) will eventually succeed in characterising and analysing the special kind of complexity which is causally associated with conscious experience; we have no idea yet, but it’s plausible that this will all become clear in due course.

The second, conceptual part of the bridge is a realignment of our ideas to fit the new schema; Robinson suggests we may need to think of complexity and simplicity, not as irreconcilable opposites, but as part of a grander conception, Complexity-And-Simplicity (CAS).

The real challenge for Robinson’s framework is to show how our descendants might on the one hand, find it obvious, almost self-evident, that complex neuronal activity gives rise to simple phenomenal experience, and yet at the same time completely understand how it must have seemed to us that there was a Hard Problem about it; so the Hard Problem is seen to be solvable but still (for us) Hard.

Robinson rejects what he calls the Short Route of causal essentialism, namely that future generations might come to see it as just metaphysically necessary that the relevant kind of neuronal activity (they understand what kind it is, we don’t) causes our experience. That won’t wash because, briefly, while in other worlds bricks might not be bricks, depending on the causal properties of the item under consideration, blue will always be blue irrespective of causal relations.

Robinson prefers to draw on an observation of Austen Clark, that there is structure in experience.  The experience of orange is closer to the experience of red and yellow than to the experience of green, and moreover colour space is not symmetrical, with yellow being more like white than blue is. We might legitimately hope that in due course isomorphisms between colour space and neuronal activity will give us good reasons to identify the two. To buttress this line of thinking, Robinson proposes a Minimum Arbitrariness Principle, that in essence, causes and effects tend to be similar, or we might say, isomorphic.
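The objective side of Clark’s observation is easy to check. As an illustration only (the colour names, sRGB coordinates, and plain Euclidean metric below are my own crude stand-ins, not Robinson’s or Clark’s apparatus; a perceptual space such as CIELAB would be more faithful), even naive colour coordinates reproduce the resemblances and asymmetries in question:

```python
import math

# Crude sRGB coordinates for a handful of colours. sRGB is only a
# rough proxy for perceptual colour space, but the structure Clark
# points to already shows up at this level.
COLOURS = {
    "red":    (255, 0, 0),
    "orange": (255, 165, 0),
    "yellow": (255, 255, 0),
    "green":  (0, 255, 0),
    "blue":   (0, 0, 255),
    "white":  (255, 255, 255),
}

def dist(a, b):
    """Euclidean distance between two named colours."""
    return math.dist(COLOURS[a], COLOURS[b])

# Orange sits closer to red and to yellow than to green...
assert dist("orange", "red") < dist("orange", "green")
assert dist("orange", "yellow") < dist("orange", "green")
# ...and the space is asymmetric: yellow lies nearer white than blue does.
assert dist("yellow", "white") < dist("blue", "white")
```

Of course, showing that these relations hold in an objective coordinate system is exactly what is at issue between Robinson and the sceptic: it leaves open whether the *phenomenal* experiences share the same structure.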

For me the problem here is that I think Clark is completely wrong. Briefly, the resemblances and asymmetries of colour space arise from the properties of light and the limitations of our eyes; they are entirely a matter of non-phenomenal, materialist factors which are available to objective science. Set aside the visual science and our familiarity with the spectrum, and there is no reason to think the phenomenal experience of orange resembles the phenomenal experience of red any more than it resembles the phenomenal experience of Turkish Delight. If that seems bonkers, I submit that it seems so in the light of the strangeness of qualia theory if taken seriously – but I expect I am in a minority.

If we step back, I think that if the descendants whose views Robinson is keen to foresee were to think along the lines he suggests, they probably wouldn’t consider themselves dualists any more; instead they would think that with their new concept of CAS and their discovery of the true nature of neuronal complexity, they had achieved the grand union of objective and subjective – and vindicated monism.

37 Comments

  1. Philosopher Eric says:

    When I entered these talks about a month ago I submitted that I was in need of education about what is practically happening in our field, and that in payment I might be able to present some useful perspectives which should naturally elude established thinkers. So, Peter, I must ask: why would a discipline which has achieved no accepted understandings to date even attempt to answer a question which is perceived as “hard”? Why not put the focus on questions which are relatively “easy”? Notice that if such understandings were then to occur, “hard” aspects of reality should at least gain better positions from which to be answered.

    I do not mean to imply by this, however, that I have no associated opinions. I presume that reality occurs through a “cause and effect” process (rather than through “dualist,” or “magic” mechanisms) simply given standard observations. But to the extent that I happen to be wrong about this, observe that science itself does become obsolete. So my presumption is that it is indeed possible to “build a conscious entity,” simply because evolution seems to have done exactly this. Of course evolution may actually have invoked “magic,” as our dualist friends seem to believe, but I do take the opposite position for no better reason than to preserve the means by which humanity effectively explores reality, or the institution of science. Furthermore I do not presume that humanity will ever build such things itself, simply given our tremendous relative ignorance.

    One related but overlooked point from our last discussion is that I consider “the hard problem” to also be “inconsequential” — other issues were dealt with instead. But to exchange “an irrelevant hard problem” for “an important easy problem,” consider this: The logical way to determine the essence of “good/bad” for humanity is to 1) build a great list of that which seems good and bad for the human, 2) find the common denominator between these circumstances, and then 3) use this idea as a functional definition. For any philosopher who would like an example of how to practically do this, my associated work is provided here under my name. Furthermore I was then able to use this understanding in order to (among other things) build a functional model of the conscious mind.

  2. Callan S. says:

    I think if you switch the basic deal, it’ll help – imagine if you adjusted, from within their brain, someone’s capacity to see so that they see red when actually they are seeing yellow.

    That’s basically it – the thing is it would look utterly red to that person.

    We don’t think of our senses as being deceptive. That’s part of the ‘hard’ of talking about qualia and how dang yellow that yellow is – the discussion carries along the baggage of the senses just being ‘true’.

    Once you hit a ‘relativism’ that yellow can be red, then yellow could be blue, yellow could be green, yellow could be the scent of a rose (yes, cross-sensory deception), and ‘yellow’ ceases to seem such an honest salesman after all.

    When seen straight on though, you can only compare – yellow seems more white than blue. I.e., it seems more like one (potential?) deception than another deception? And so discussion is stuck in the terms of the potential deceptions, unable to build a ladder to escape it because the only things to build a ladder out of are other potential deceptions. So how could qualia/phenomenal discussion escape such when every time it tries to take a hand that would lead it out, that hand is that of its deceiver? Mild poeticism in that description – forgive me for not giving a disclaimer beforehand! Anyway, maybe that’s part of why it’s a hard problem?

  3. Vicente says:

    Peter, in the question:

    how can neuronal activity give rise to phenomenal experience?

    Does he use the expression “give rise to”? If so, he is already taking a position, and not a completely dualist one, btw.

    We should start with something like: how is neuronal activity related to phenomenal experience? since we are giving a chance to dualist positions, let’s be generous and “open minded”.

    then: Briefly, the resemblances and asymmetries of colour space arise from the properties of light and the limitations of our eyes; they are entirely a matter of non-phenomenal, materialist factors which are available to objective science

    It is possible to induce visual phenomenal experiences by direct stimulation of the cortex. I believe that we should distinguish between the sheer relationship of neural activity and phenomenal experience, and the evolutionary appearance of senses (sight).

    Finally: O’Hara and Scott’s dismissal of the Hard Problem. For something to be regarded as a legitimate problem, they said, there has to be some viable idea of what an answer would actually look like, or how the supposed problem could actually be solved; since this is absent in the case of the Hard Problem, it doesn’t deserve to be given serious consideration.

    Because they say so!

    problem /ˈprɒbləm/ n

    1. anything, matter, person, etc, that is difficult to deal with, solve, or overcome
    2. (as modifier): a problem child
    3. a puzzle, question, etc, set for solution
    4. a statement requiring a solution usually by means of one or more operations or geometric constructions
    5. (modifier) designating a literary work that deals with difficult moral questions: a problem play

    Etymology: 14th Century: from Late Latin problēma, from Greek: something put forward; related to proballein to throw forwards, from pro-² + ballein to throw

  4. Jim Z says:

    Eric you do not seem clear to me. Why should we equate “dualism” with “magic”? Furthermore, why should we assume that dualist mechanisms, if they do exist, would also make science obsolete?

  5. Eric Thomson says:

    Why isn’t he just committing the basic content-vehicle confusion? E.g., because a quale (content) seems simple, the actual vehicle doing the experiencing must be simple? Isn’t that like saying you cannot write the word ‘red’ unless you use red ink?

  6. Philosopher Eric says:

    The thing about me, Jim, is that I’m not just a “hard physicalist,” but also a “hard determinist.” I see reality as something which functions such that there is reason, or foundation, for effects to actually happen. So when people theorize that there may be a “non-physical” aspect of reality, representing sensations of “color” or “pain” for example, to me this can only be termed “magic” — there is no foundation for such things without a physical platform. If someone puts an ice cream cone in your hand, this will obviously occur through cause and effect dynamics. But if it comes by just wishing for it, this would presumably be through magic. Such things would be beyond science to understand, simply because when effects happen without causes, or causes without effects, we should not be able to develop functional models from which to describe them.

    Vicente momentarily brought up Quantum Mechanics last time, and while I might be in the majority on physicalism, I am certainly in the minority here. But to me the same logic does apply. Most physicists today seem to interpret Heisenberg’s Uncertainty Principle such that “natural uncertainty” exists, or what I term “magic.” While I grant that there is a statistical probability that an ice cream cone will indeed materialize in my hand after I wish for it, if this occurred I would not see it as “natural uncertainty,” but rather as a cause and effect dynamic, most of which I have very little potential to actually comprehend.

  7. scott bakker says:

    Eric Thomson: “Why isn’t he just committing the basic content-vehicle confusion?”

    Because hopefully he realizes that vehicle and contents are every bit as mysterious as the mystery they are taken to resolve! ;) Otherwise it is a bona fide mystery why consciousness possesses the cartoon grain it does – and from the sounds of it, Robinson takes it to be the *central* mystery. A dualistic twist on Jackendoff’s intermediate level hypothesis?

    For me, information asymmetry (which lies at the heart of my approach) is one more thing that recommends taking a hard look at EMFs. It provides a good candidate for a mechanism that straddles the simple/complex divide, at least.

  8. Arnold Trehub says:

    When we ask “How can neuronal activity give rise to phenomenal experience?”, we are asking how events that enjoy publicly agreed upon descriptions (objective) can be demonstrated to cause events that have no such descriptions (subjective). An approach to this problem is given in these papers:

    “Evolution’s Gift: Subjectivity and the Phenomenal World”

    “A Foundation for the Scientific Study of Consciousness”

    On my Research Gate page here:

    https://www.researchgate.net/profile/Arnold_Trehub/?ev=hdr_xprf

  9. Eric Thomson says:

    Scott: even granting for argument that both contents and vehicles are mysterious, that doesn’t mean it is ok to conflate them. I visually experience snow happening outside my home. Does that mean the mechanism of experience is outside too? Beware this seduction so many field theorists have fallen for, to think that simple (seeming) experience implies simple vehicle (or that “field-like” experience implies “field-like” vehicle).

    At any rate, if that is his reasoning, then it seems a pretty vanilla instance of content-vehicle confusion, which means this.

  10. scott bakker says:

    Eric Thomson: “I visually experience snow happening outside my home. Does that mean the mechanism of experience is outside too? Beware this seduction so many field theorists have fallen for, to think that simple (seeming) experience implies simple vehicle (or that “field-like” experience implies “field-like” vehicle).”

    But on a position like mine this only seems impressive because of the way metacognition neglects the machinery involved. What’s *actually* going on is that there’s weather systematically interacting with ambient light systematically interacting with a neural system adapted to generate behavioural responses to things like weather. When that neural system’s metacognitive systems attempt to problem-solve this larger weather-light-brain system (when the philosopher ‘reflects’ on her ‘experience’), its position within the greater system forces it to go radically heuristic (how could it not?), to solve that system without any real access to its mechanical details. It remains a blindspot, as does all the machinery of the brain, leaving access only to whatever systematicity our ancestors required to adapt their behaviour, communicative or otherwise. The philosopher, blind to the parochial, heuristic nature of their metacognitive access, mistakes what they do access for a plenum, then spends millennia trying to explain what is actually a form of metacognitive anosognosia, a cognitive illusion in effect, as another mechanism in nature – an inexplicable one.

  11. calvin says:

    materialist approaches are inherently dualist. There is simply no way to get to the idea of materialism itself from a material starting point.

    I think it’s fascinating that Robinson is contrasting the issue of simplicity and complexity problem spaces as a path toward solving the hard problem, when the “complexity” of neurons and neurological systems is itself an idea. We only say they are complex because it is our idea that they are complex. The idea of complexity is clearly not a physical phenomenon. The reason we experience complexity when thinking about brains, and neurons, and dendrites, and neurotransmitters, and neurotransmitter vesicles etc. is that these are complex ideas. “Complexity” isn’t a feature attached to the neurons, or molecules of the neuron’s cell wall.

    this approach, and others like it, can never escape the hard problem because the approach is inherently dualist. thinking the hard problem can be solved with any kind of materialist angle that solves a subset of the hard problem ignores the simple fact that ideas exist. the fact of ideas is the hard problem.

  12. Vicente says:

    Calvin,

    we only say they are complex because it is our idea they are complex.

    Well, I don’t know if Robinson has given any definition of what he understands by complex and simple.

    Complex systems are a scientific discipline in their own right, and complexity is well defined in engineering depending on the application field, usually relating to number of parts, independent or coupled, risk of failure, common mode failures, uncertainty in outputs, etc. In such a way you can say that an Airbus 380 is much more complex than a glider or that algae are simpler than elephants.

    The physiological machinery of cells, including neurons, can be said to be much more complex than the working of the brain as an organ, depending on the definition.

    Regarding the complexity or simplicity of phenomenal experience, I believe it is not possible to define an objective criterion to quantify it. Are there any identifiable parts, interactions, outputs…?

    There might be a statistical simplification, for example, some phenomenal features are the average “values” of their neural correlates (or counterparts), I don’t know.

    Without a clear definition of complexity it is very difficult to progress in this reasoning.

    My guess is that since both systems are in very different domains it doesn’t make much sense to compare the complexity of them.

  13. scott bakker says:

    Vicente/Calvin:

    Imagine you were born with two very different visual systems, the one HD, adapted to resolving innumerable details, the other myopic in the extreme, adapted to resolving blurry gestalts at best, blobs of shape and colour. Both are exquisitely adapted to solve their respective problem-ecologies, the problem-ecologies just happen to be divergent.

    Now imagine that it belonged to the myopia of this second visual system that it could not signal its myopia, so that it signalled the *same degree of fidelity* as the first.

    Now imagine the eye of this second system is placed on the back of your head.

    And that you are rooted in place like a plant. Save for the odd storm, which blows you about from time to time, there is very little overlap in their respective visual fields, even though each engages (two different halves of) the same environment.

    Dualism certainly becomes an attractive option in this scenario. Since our second visual system insists (by default) that it sees everything there is to be seen, you have no reason to think resolution is a problem. So you think you are a special being that dwells on the interstice of two very different worlds. You might even begin speculating on the differences in complexity between the two worlds, come to the conclusion that this is the key difference…

  14. Eric Thomson says:

    Scott:
    I largely agree, and that helps me make my point: why the focus on EMFs? There’s much more to the brain than EMFs, so why single them out as special?

  15. scott bakker says:

    Eric Thomson: Generally speaking I would be surprised if the brain doesn’t take advantage of field effects somehow. CNS’s have been around a long, long bloody time, and I find it hard to believe that evolution wouldn’t have stumbled on a mechanism possessing so much potential power at some point.

    More specifically, they plausibly answer the problems neural identity accounts have with explaining integration. Even Barrett has recently started to warm up to the possibility (check out “An integration of Integrated Information Theory with fundamental physics”). You should expect that EMFs will find their way back to the ‘serious’ research agenda, at any rate.

    More specifically still, they seem to provide my own account with what I call ‘default identity.’ Why is it, for instance, that crossing frequency detection thresholds in flicker experiments leads to ‘fusion,’ the perception of continuous stimuli (like the screen before you)? For me, I think this cuts to the root of concerns (like Peter’s here or Eric Schwitzgebel’s more generally) regarding the arbitrary relationship between ‘information’ and consciousness. In the case of flicker fusion you have a positive percept – continuity – arising out of a processing *incapacity.* On my Blind Brain account, give me default identity and I can explain (away) a great number of conundrums regarding intentionality and consciousness. Insofar as my view has abductive warrant, EMFs do as well.

  16. Eric Thomson says:

    Scott: so you aren’t saying they are *the* essential component, but *one* important component? I’m fine with that. That said, firing rates and information integrated across more traditional networks of neurons are pretty crucial. Ephaptic communication may be real, but synaptic communication is definitely and demonstrably important, and I would guess you can see the signatures of flicker fusion frequencies therein (just as you can see the hallmarks of phenomena like backward masking in lower visual areas in monkeys if you look at spikes).

  17. scott bakker says:

    Eric Thomson: Of course. But I’m curious: if you’re sympathetic to the picture I paint above regarding representation, how can you see it as a solution to the informational asymmetry of experience and neurophysiology?

  18. Eric Thomson says:

    I just meant I am sympathetic specifically to 1) The caution you urge about getting too attached to our reliance on folksy intuitions and metaphors that we use to get things off the ground, and 2) The claim that field effects are probably more important than people think. I didn’t mean to endorse any specific theory of representation you might be advocating. My understanding of your view is that it is much more radical and eliminativist than mine (hell, these days I am sympathetic to dualism 2 out of every 10 minutes).

  19. scott bakker says:

    Eric Thomson: “(hell, these days I am sympathetic to dualism 2 out of every 10 minutes).”

    I’ve stamped my pineal gland pretty much to mud by this point – for better or worse. I don’t buy the kind of run-of-the-mill dismissals of the hard problem you find in the likes of Dennett or Flanagan, simply because I think any interpretivist/eliminativist approach needs some way of explaining – as opposed to waving – away all the intuitions that seem to have such a deathgrip on so many brilliant people. But aside from that, I’m pretty much convinced the worst case scenario is true. Check out: http://rsbakker.wordpress.com/2013/05/27/the-something-about-mary/

    And, given science’s track record of recontextualizing our prescientific intuitions… Mother Nature has a heart of flint.

  20. Hunt says:

    “Set aside the visual science and our familiarity with the spectrum, and there is no reason to think the phenomenal experience of orange resembles the phenomenal experience of red any more than it resembles the phenomenal experience of Turkish Delight. If that seems bonkers, I submit that it seems so in the light of the strangeness of qualia theory if taken seriously – but I expect I am in a minority.”

    You’re saying we would all be the guy who tastes colors had we not properly familiarized ourselves with the visual spectrum and how it’s distinct from taste sensation?

    Otherwise, surely there is at least a basic structure to experience, that provided by our separate senses.

  21. Philosopher Eric says:

    In comment 13, Scott, you built a scenario where something might be “fooled into dualism,” though I would like to refine it. We essentially had a conscious tree thing that has a good eye on one side and a bad eye on the other that it believes is a good one. Using such a scenario to expose a “dualism illusion,” however, seems overstated.

    So also consider a very similar subject in the vicinity that is “just a computer,” or essentially has no consciousness dynamic. Note that visual “input” exists for them both, that they should each “process” such information, and furthermore that they should each have the potential for associated “mental output.” Personal experience tells us that the conscious entity will have what I call vision “sensations,” while the other is defined to not have this dynamic (though it may indeed still have good visual perceptions). The associated question is, are the “visual sensations” which one of them has but the other does not, a physical element of reality? Though all of us here seem to accept this to be the case, the dualist says that there must be “something more,” or “magic” associated with “sensations.”

    Peter invites everyone to his parties, and my perception so far is that the people who “stay the latest and drink the hardest” are essentially trying to understand how to build a conscious entity. In these efforts, archaic computer/neurology discussions take place, with profuse citations to the work of favored theorists. I, however, bring none of this to the party. In fact, “the hard problem” will always remain relatively inconsequential to me. Instead my interest lies in helping a discipline which has achieved no accepted understandings of reality to date, enter the realm of science. Not only would I then become “the father of science based philosophy,” but Psychology, Psychiatry, Sociology, and cognitive science in general, would finally gain themselves “a Newton.” Furthermore as far as “drinking hard and staying late” goes, this is indeed what I live for!

  22. Vicente says:

    Scott, regarding your question, I don’t think so. I believe that if I can understand the statement of the problem then I can understand the solution. Whether I can solve the problem or not is a different issue, but if someone were to tell me the answer I would understand it. I am not sure there is really a cognitive closure, but a procedural, methodological, algorithmic closure, you name it. Mind you, many people don’t really understand the hard problem statement.

    Finally, you are taking for granted that we only have the cognition resulting from environmental pressure… but that is not the way evolution works… evolution applies to survival and selection. Evolution allows species to have many features and traits useless for survival in their niche. So we could have developed cognitive capacities beyond our ancestors’, useless for hunting but useful for understanding and solving the hard problem. Who knows.

  23. Philosopher Eric says:

    Vicente I just can’t let your last comment go unchallenged. If someone simply tells me that the answer is “4,” I will understand this answer to the extent that I know what this number means. But until I know the question that this number is supposed to solve, I will have no potential to comprehend any such uncertainty. Telling people answers without their associated questions, will obviously not teach them much. But given that you’ve also mentioned that some people do not understand what “The Hard Problem” is, and right after I gave an alternative to Scott’s associated scenario, perhaps this comment was actually directed at me? So here I must ask, do you have a different interpretation of this problem than the one that I’ve just presented?

  24. scott bakker says:

    Vicente: “Finally, you are taking for granted that we only have the cognition resulting from environmental pressure… but that is not the way evolution works… evolution applies to survival and selection. Evolution allows species to have many features and traits useless for survival in their niche. So we could have developed cognitive capacities beyond our ancestors’, useless for hunting but useful for understanding and solving the hard problem. Who knows.”

    But I’m sure you’ll agree that the more surplus capacity required for a brain to solve the inverse problem of itself, the lower the probability that we have happily evolved that capacity. Setting aside the fact that our present abject inability to solve the problem of consciousness is precisely what you might expect given the lack of such surplus capacity, I just don’t see how any evolved brain could possibly solve itself the high-dimensional way it solves its environments. It has to constitute a cognitive blindspot in several fundamental respects, one requiring radically schematic, heuristic means of managing. As it so happens, intentional phenomena bear all the hallmarks of just such specialized heuristic modes. Since we have no metacognitive access to this fact, these modes just seem part of our general problem solving capacity, and we find ourselves perennially baffled by our inability to integrate them with environmental cognition.

    In other words, the surplus capacity you mention could just as easily be the very thing leading us astray! In the meantime, the realist owes a story of just how it is the brain could come to know itself other than in a haphazard, task-specific way. I mean, you read something like Deacon, gobsmacked that someone could put so much ingenuity and work into trying to get to the point where one might *begin* to argue for original intentionality – and never consider the out-and-out implausibility that the truth of this original intentionality, were it even possible, is not something any brain could ever intuit in the first place.

    In other words, I think we can be pretty certain there is no original intentionality simply because intuition is so bent on saying there is! Neuro-conspiracy theory… ;)

  25. Philosopher Eric says:

    One “heuristic” (or rule of thumb) way to approach consciousness, Scott, might be “Thou shalt not build… that which is not functionally defined.” Can’t we at least attempt to “start from the start” on this? Must we answer “The Hard Problem” right now? Shouldn’t we at least try to understand this thing which is commonly known as “consciousness,” before actually trying to build it?

    I don’t know what Peter thinks here, though his next move might be to kick me out for annoying his regulars (and I think you know that I’ve been kicked out of far less classy joints). As an outsider looking in, however, my assessment of the situation has indeed been stated: at this point we have merely “scratched the surface.”

  26. Hunt says:

    “Finally, you are taking for granted that we only have the cognitive outcoming from environmental pressure… but that is not the way evolution works… evolution applies to the survival and selection. Evolution allows species to have many features and traits useless for survival in their niche. So we could have developed cognitive capacities beyond our ancestors, that are useless to hunt, but useful to understand and solve the hard problem. Who knows.”

    If panpsychism happens to be true (!) then consciousness was just something that evolution needed to “handle” before it got in the way of survival. This could also easily include epiphenomenalism. Consciousness, the troublemaking evolutionary spandrel, just needed to be given some busy work while the real input/output processing of the brain’s survival mechanisms did its important work. Note, I’m not saying I believe this, but with each scientific report countering free will, or saying I’ve already “made up my mind” before I even did so, it starts to sound more believable that “I” am just along for the ride.

    There’s rich irony in the idea that consciousness is just an albatross around the neck of evolution.

  27. Richard J R Miles says:

    Humans have long since bypassed evolution by natural selection; instead it is more like what we have done with dogs: unnatural.

  28. scott bakker says:

    Eric: The fact that so many are split on the actuality of the hard problem means we don’t have a consensus view on what the explanandum even is. Not only do we not know what the answer to the riddle is, we can’t even agree on the riddle.

    Hunt: Epiphenomenalism has never struck me as credible. If you’re interested, I actually spend quite some time on this topic in my recent review of Dehaene’s Consciousness and the Brain (http://rsbakker.wordpress.com/2014/02/17/the-missing-half-of-the-global-neuronal-workspace-a-commentary-on-stanislaus-dehaenes-consciousness-and-the-brain/).

  29. Philosopher Eric says:

    That sounds about right to me Scott, so thanks!

    You’ve raised some interesting ideas Hunt that I’d like to address from my own “Physical Ethics” perspective:

    Yes, panpsychism seems quite silly, simply given that living humans do appear to lose their consciousness from time to time. We won’t be attributing consciousness to rocks any time soon regardless. Furthermore, “epiphenomenalism” becomes a lost cause once we define mind as a physical (or “non-magic”) entity. Thus we are left to assume that evolution found it useful to bestow consciousness upon the human and the fish, and perhaps the ant, and perhaps not the tree, and certainly not the virus. But then what about those remaining philosophical questions? I’ll set aside the one “hard” issue, though I do have answers for what is important.

    Consider “self” as a punishment/reward “sensations” idea. Here my personal value is based upon the instantaneous sensations which I experience at a given moment. When these are compiled over a given period of time, however, this defines how negative, neutral, or positive my existence is over that period. Furthermore, “social welfare” becomes a matter of arithmetic increase. But then what about Derek Parfit’s “mere addition paradox” or “repugnant conclusion”? If reality can indeed be repugnant, why would we assume that an accurate theory which describes reality would not also be repugnant? The concept of free will is relatively simple under my physical perspective. Consider it as something which only exists within the limits of a given perspective – so while there is not “ultimately” any, in practice our vast ignorance lets us use this concept quite liberally.

    There is only so much that I can say right here (though the rest is under my name), but please understand this: philosophical elements of reality are no less real, or certain, than the rest of reality. We idiot philosophers have now fumbled these issues for so long that science has risen up with a gaping hole right where our theory should naturally reside. (In a conceptual sense, how does one propose to practice Psychology with no functional model of consciousness?) Philosophical questions will not be resolved because philosophy demands this, but rather because science does.

  30. Hunt says:

    “Philosophical questions will not be resolved because philosophy demands this, but rather because science does.”

    Where science can, science usually does. This is the old “philosophy begins the conversation, science ends it” idea, or that philosophy is just a scrapheap of unanswered scientific questions. Outside those devoted to “scientism” I think this is usually just a way to bash philosophy. At other times, it’s exactly how things should be viewed, as with consciousness studies.

    I pretty much share everyone’s disdain for epiphenomenalism, if only because epiphenomenal consciousness doesn’t look like even a biological spandrel trait. You don’t see some mammals running around that seem less conscious than others (do you?), not the way you see variation in tail or snout length. The one compounding factor would be panpsychism: if consciousness HAD to exist concomitant with neurological complexity, then I could see it being something evolution simply had to deal with and perhaps sweep under the carpet. Fortunately, I think they’re both false.

  31. Vicente says:

    Eric #23: My first assumption was that I do understand the problem statement, i.e. I understand the question for which the answer is “4”.

    Regarding the second point, I am not very sure it makes sense to bring up the hard problem for somebody who has not realised it through his own reflections.

  32. Vicente says:

    Scott, what I am trying to say is that the cognitive closure should apply to the statement of the problem also. When we say the eye cannot see the eye, why do we not question how the eye comes to think there is an eye in the first place? So if the eye has the capacity to suspect there is an eye, I believe it is possible, at any rate, to build a mirror that helps the eye to see the eye, or at least its reflection. Now, how to produce that mirror?

    It would be different if the eye did not even suspect its own existence; then probably even if it saw itself it would make no sense of it.

    One nice thing about algebra is that you can apply “existence and uniqueness” theorems, so that you don’t waste your time looking for something that doesn’t exist, or looking for better solutions once you’ve got one….

    A useful thing philosophers could do is to try to prove whether the hard problem can be stated in logical terms, and whether it has a possible solution…. If this were not possible, think of the kind of problem we are facing….

  33. scott bakker says:

    Vicente: But it all depends on the domain closed, doesn’t it? In fact, it has to be the case that some domains are open for us to be able to state that some domains are closed. When you add complementary/competing modalities of access, the situation becomes even more complicated. This is why some Extreme Walleyes are able to insist on environmental monism despite the ‘evidence of their senses.’ They have discursive access to certain facts they cannot intuit. They can chalk up their intuitions to the parochialism of their perspective.

    The big point of the little thought experiment is to show how systems can mistake differences pertaining to information privation for differences pertaining to ontology.

    You have to admit, Blind Brain Theory has a pretty parsimonious way of shrugging away what seem to be insuperable problems otherwise…

  34. Philosopher Eric says:

    Thanks for the clarification, Vicente. At the time I did actually wonder if you meant “can” rather than “can’t” in that statement, but then I decided that you must be saying something far more profound that needed challenging. And I also agree that there are varying degrees of comprehension. You would expect a person who figures something out to be far more proficient than one who is just told the answer. As I perceive it, this is actually my current problem.

    I’ve spent untold hours over the last half of my life deriving answers to “important” philosophical questions (unlike “the hard problem,” in my view). At this point, however, all I can do is describe the questions and answers as I see them. I still need others to travel some of the paths that I have taken, but so far (and it has only been around a month) I think I’m viewed as a bit of an arrogant bully. This situation is certainly NOT in my own interest, so I do hope to remedy it soon.

    I know that like Peter, you are highly versed in existing philosophical theory (as I, intentionally, am not) so your perspective is one that I am keen to develop. I seek to end philosophy’s culture of failure — but I cannot do this alone. Others must be willing to travel some of the roads that I have taken. I now seek this with my open question in the next article (#9).

  35. Vicente says:

    Scott, I’m afraid I don’t fully follow you in #33. One remark: it seems that in BBT you compare the number of operations at the subconscious and conscious levels as the basis for quantifying the lost information.

    The point is that “operations” are not defined for the brain’s architecture and “switching”. Operations make sense for a processor flow at assembler or machine code level. An operation is each step in the algorithm: for example, move the content of register A to register B, or erase memory position H0000B, etc. For the brain there is no such thing, no real algorithm or operations defined… so this information processing and loss, at different levels, can’t be handled straightforwardly. IMO this is a very serious weakness of the BBT.

    Maybe, just as a lot of information is filtered when passing from the subconscious to the conscious level, the conscious level also produces new relevant information not available at the subconscious level; could it be?

  36. scott bakker says:

    Vicente: “so this information processing and loss, at different levels, can’t be handled straightforwardly. IMO this is a very serious weakness of the BBT.”

    How is this a weakness particular to BBT? We know that the brain is modular and heuristic, that it solves problems via selective information uptake – which is to say, via neglect. We also know that brains have evolved to solve the ‘curse of dimensionality’: whether they use something like random projections or something radically different, the take-away lesson seems pretty clear: metacognition simply cannot be trusted to provide ‘ontological accuracy.’ A fortiori, any position that depends on metacognitive data (such as ‘qualia’) is in deep, deep trouble. They need to explain how, given all the machinery required to cognize our environments with ontological accuracy, it’s remotely possible for ANY evolved brain to metacognize itself with ontological accuracy.
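    [For readers unfamiliar with the “random projections” aside: the sketch below is my own illustration of the general technique, not anything specific to BBT. The idea, from the Johnson–Lindenstrauss lemma, is that a random linear map can crush a very high-dimensional problem down to a much smaller one while approximately preserving pairwise distances, which is one way a resource-limited system could cope with dimensionality. All sizes and names here are illustrative choices.]

```python
import numpy as np

rng = np.random.default_rng(0)

# 100 points living in a high-dimensional space (d = 10,000).
d, k, n = 10_000, 500, 100
points = rng.normal(size=(n, d))

# A random projection down to k dimensions: a k x d Gaussian matrix,
# scaled by 1/sqrt(k) so squared lengths are preserved in expectation.
proj = rng.normal(size=(k, d)) / np.sqrt(k)
low = points @ proj.T  # same 100 points, now in only 500 dimensions

# Pairwise distances survive the projection to within a small
# relative error, even though 95% of the coordinates are gone.
i, j = 0, 1
orig = np.linalg.norm(points[i] - points[j])
new = np.linalg.norm(low[i] - low[j])
rel_err = abs(new - orig) / orig  # small (a few per cent here)
```

    [The relevant point for the debate above: the projection is generic and cheap – it knows nothing about the data – yet it preserves the structure needed for many tasks, which is the sense in which heuristic compression need not track the underlying ontology.]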

    The fact is, the contemporary muddle cognitive science and consciousness research finds itself in is the very one you should expect, given the way BBT construes the brain. Meanwhile, the more we learn about the brain, the more empirically obvious it becomes that we suffer numerous, profound forms of neglect with regard to ourselves: http://rsbakker.wordpress.com/2014/02/17/the-missing-half-of-the-global-neuronal-workspace-a-commentary-on-stanislaus-dehaenes-consciousness-and-the-brain/

    Looking at the sheer complexity of the brain, it’s actually hard to imagine how it could be any other way.

  37. Vicente says:

    Scott, yes, but those are very general statements; I need more concrete and specific ones. The curse of dimensionality is again a derivation from the computer/AI world that doesn’t necessarily apply to brains. There was a time, many years ago, when analogue computers were used – even more, “emulation”, not simulation, was used. For example, in order to solve EM field distribution problems (the ones that draw your attention now), an equivalent system made with electrodes and wires was built, and iron filings were put on an oil pool around the system. Then the right voltages were applied and the iron filings got organised along the field lines. The experimenter then tried to get figures from the filing geometry observed and measured. This was also a computer. I think the brain is much closer to this kind of system than to digital computers – just imagination. Or maybe brains evolved for you to produce as many offspring as possible with the same kind of brains… the problem is that now you are also using your brain to write novels… and people are reluctant to have children… and the brain becomes a mess and doesn’t know what to do…
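    [Vicente’s iron-filings emulation has a straightforward digital counterpart, which may clarify the contrast he is drawing. The sketch below (grid size, electrode placement and voltages are my own illustrative choices) solves Laplace’s equation for the potential between two electrodes by relaxation; the field lines the filings traced out are the gradient of the resulting potential.]

```python
import numpy as np

# Electrostatic potential on a 50x50 grid, boundary held at 0 V.
n = 50
v = np.zeros((n, n))
left_electrode = (slice(20, 30), 5)    # a plate held at +1 V
right_electrode = (slice(20, 30), 44)  # a plate held at -1 V

for _ in range(2000):
    # Pin the electrode voltages, like the applied potentials in the rig.
    v[left_electrode] = 1.0
    v[right_electrode] = -1.0
    # Jacobi relaxation: each interior point moves toward the average of
    # its four neighbours, the discrete form of Laplace's equation.
    v[1:-1, 1:-1] = 0.25 * (v[2:, 1:-1] + v[:-2, 1:-1] +
                            v[1:-1, 2:] + v[1:-1, :-2])

v[left_electrode] = 1.0
v[right_electrode] = -1.0

# The field the filings aligned with is the gradient of the potential.
ey, ex = np.gradient(-v)
```

    [The contrast Vicente is after: the analogue rig “relaxes” to the solution in one shot because its physics *is* the equation, whereas the digital version must iterate an explicit update rule thousands of times – there is no step-by-step algorithm in the oil pool.]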

    Qualia depend on metacognitive data? I don’t see how.

    And yes, I agree that epistemic processes very rarely lead to ontological conclusions.
