The Hard Problem may indeed be hard, but it ain’t new:

Twenty years ago, however, an instant myth was born: a myth about a dramatic resurgence of interest in the topic of consciousness in philosophy, in the mid-1990s, after long neglect.

So says Galen Strawson in the TLS: philosophers have been talking about consciousness for centuries. Most of what he says, including his main specific point, is true, and the potted history of the subject he includes is good, picking up many interesting and sensible older views that are normally overlooked (most of them overlooked by me, to be honest). If you took all the papers he mentioned and published them together, I think you’d probably have a pretty good book about consciousness. But he fails to consider two very significant factors and rather over-emphasises the continuity of discussion in philosophy and psychology, leaving a misleading impression.

First, yes, it’s absolutely a myth that consciousness came back to the fore in philosophy only in the mid-1990s, and that Francis Crick’s book The Astonishing Hypothesis was important in bringing that about. The allegedly astonishing hypothesis, identifying mind and brain, had indeed been a staple of philosophical discussion for centuries.  We can also agree that consciousness really did go out of fashion at one stage: Strawson grants that the behaviourists excluded consciousness from consideration, and that as a result there really was an era when it went through a kind of eclipse.

He rather underplays that, though, in two ways. First, he describes it as merely a methodological issue. It’s true that the original behaviourists stopped just short of denying the reality of consciousness, but they didn’t merely say ‘let’s approach consciousness via a study of measurable behaviour’, they excluded all reference to consciousness from psychology, an exclusion that was meant to be permanent. Second, the leading behaviourists were just the banner bearers for a much wider climate of opinion that clearly regarded consciousness as bunk, not just a non-ideal methodological approach. Interestingly, it looks to me as if Alan Turing was pretty much of this mind. Strawson says:

But when Turing suggests a test for when it would be permissible to describe machines as thinking, he explicitly puts aside the question of consciousness.

Actually Turing barely mentions consciousness; what he says is…

The original question, “Can machines think?” I believe to be too meaningless to deserve discussion.

The question of consciousness must be at least equally meaningless in his eyes. Turing here sounds very like a behaviourist to me.

What he does represent is the appearance of an entirely new element in the discussion. Strawson represents the history as a kind of debate within psychology and philosophy: it may have been like that at one stage: a relatively civilised dialogue between the elder subject and its offspring. They’d had a bit of a bust-up when psychology ran away from home to become a science, but they were broadly friends now, recognising each other’s prerogatives, and with a lot of common heritage. But in 1950, with Turing’s paper, a new loutish figure elbowed its way to the table: no roots in the classics, no long academic heritage, not even really a science: Artificial Intelligence. But the new arrival seized the older disciplines by the throat and shook them until their teeth rattled, threatening to take the whole topic away from them wholesale.  This seminal, transformational development doesn’t feature in Strawson’s account at all. His version makes it seem as if the bitchy tea-party of philosophy continued undisturbed, while in fact after the rough intervention of AI, psychology’s muscular cousin neurology pitched in and something like a saloon bar brawl ensued, with lots of disciplines throwing in the odd punch and even the novelists and playwrights hitching up their skirts from time to time and breaking a bottle over somebody’s head.

The other large factor he doesn’t discuss is the religious doctrine of the soul. For most of the centuries of discussion he rightly identifies, one’s permitted views about the mind and identity were set out in clear terms by authorities who in the last resort would burn you alive. That has an effect. Descartes is often criticised for being a dualist; we have no particular reason to think he wasn’t sincere, but we ought to recognise that being anything else could have got him arrested. Strawson notes that Hobbes got away with being a materialist and Hume with saying things that strongly suggested atheism; but they were exceptions, both in the more tolerant (or at any rate more disorderly) religious environment of Britain.

So although Strawson’s specific point is right, there really was a substantial sea change: earlier and more complex, but no less worthy of attention.

In those long centuries of philosophy, consciousness may have got the occasional mention, but the discussion was essentially about thought, or the mind. When Locke mentioned the inverted spectrum argument, he treated it only as a secondary issue, and the essence of his point was that the puzzle which was to become the Hard Problem was nugatory, of no interest or importance in itself.

Consciousness per se took centre stage only when religious influence waned and science moved in. For the structuralists like Wundt it was central, but the collapse of the structuralist project led directly to the long night of behaviourism we have already mentioned. Consciousness came back into the centre gradually during the second half of the twentieth century, but this time instead of being the main object of attention it was pressed into service as the last defence against AI; the final thing that computers couldn’t do. Whereas Wundt had stressed the scientific measurement of consciousness, its unmeasurability was now the very thing that made it interesting. This meant a rather different way of looking at it, and the gradual emergence of qualia for the first time as the real issue. Strawson is quite right of course that this didn’t happen in the mid-nineties; rather, David Chalmers’ formulation cemented and clarified a new outlook which had already been growing in influence for several decades.

So although the Hard Problem isn’t new, it did become radically more important and central during the latter part of the last century; and as yet the sheriff still ain’t showed up.

42 Comments

  1. Christophe Menant says:

    Interesting post.
    Let me however look at the question a bit differently. It is true that the problem about the nature of human consciousness is not new. But the hard problem is new in that it focuses human consciousness on phenomenal consciousness (PC).
    Before such a focus on PC, the problem of human consciousness was mostly about self-consciousness (SC), as with Kant’s empirical and transcendental SC. Phenomenology, as a dominant philosophical school during the last half of the twentieth century, has erected PC as being the reference for human consciousness. And SC ‘has fallen on hard times. Though once regarded as the very essence of mind, most philosophers and psychologists today treat it as a marginal and derivative phenomenon’ (Van Gulick, 1988). So the hard problem, as tightly linked to phenomenology, looks to me like quite a recent concept.

  2. Sergio Graziosi says:

    Peter,
    wonderful stuff: the original essay, duly complemented with your addendum, makes for the best reading I’ve had in a long time.
    I would add my own: behaviourism has indeed been a calamity (see the previous thread of comments for hints as to why it might have been a necessary calamity) and we are not entirely over it quite yet. The consequence is that the coincidence of Crick’s commitment to the scientific study of consciousness, and of Chalmers’ efficacious way of isolating the hard core of the philosophical problem, does indeed make for a good watershed signpost. What happened next is that (neuro)science found (was shown?) a way to break the ugly shackles of behaviourism and started gathering increasing amounts of data on consciousness itself.

    Of course this was possible because of all the work and thinking that happened before… Beware of oversimplifications: life is complicated.

    As you know, I have enough reservations on the direction taken by many (most?) of the scientific efforts, but that doesn’t make me less happy to see that they are happening (and growing).
    Even if I wasn’t listening at the time, I also agree that the arrival of AI had a disruptive effect. I like your “saloon” image 🙂 – we can still hear the echoes of many dirty brawls.

  3. Sci says:

    I’m not sure if Turing can be inferred to be a behaviorist from a reading of Computing Machinery and Intelligence, linked here for anyone who isn’t familiar with the text:

    http://www.loebner.net/Prizef/TuringArticle.html

    I think that particular quote was saying the question “Can Machines Think?” is meaningless because the word “thinking” is ill-defined. Part of my reasoning is that he suggests AI might be thinking in a different way than humans would.

    He also acknowledges the mystery of consciousness within the paper (section 6.4), though he sees it as irrelevant to the “imitation game”.

    On the other hand his comparison of the Turing Test to the Problem of Other Minds to me suggests sympathy with behaviorism, though the final accounting of where his sympathies lie might depend on whether one thinks he was joking about telepathy in section 6.9….

  4. Vicente says:

    At the end of the essay he points out: “The mistake is to think we know enough about the nature of physical reality to have any good reason to think that consciousness can’t be physical. It seems to be stamped so deeply in us, by our everyday experience of matter as lumpen stuff, that not even appreciation of the extraordinary facts of current physics can weaken its hold. To see through it is a truly revolutionary experience.”

    He’s inventing this mistake; the problem is how to extend the scientific models of the Universe to include consciousness. The models of the Universe are conscious contents themselves, and maybe this is part of the problem. For example, does the question of how to extend spacetime to include the Universe make sense?

    The issue is meaningless: everything is physical as long as you have the adequate physical models to account for it (reality is reality irrespective of your models about it). But consciousness is the frame wherein all physical models exist. In fact, physical models are not physical; maybe somebody can tell me the size, weight, or temperature of any physical model. Some people would say it is the weight, size and temperature of the brain matter supporting the processes that take place when mulling over a certain model, or those of the tissue that stores the info coding the model, if any. For those who think in that way, good luck.

    So when we try to solve the problem of consciousness we have to take a different approach from the one we use when we try to understand what dark energy is. Now whether that approach is considered a physical theory or not depends both on the approach and on the current paradigm of physics. If the XX century guys swallowed quantum physics and relativity, why can’t our XXI century fellows open their minds a bit? Because they can’t measure! They don’t know what they are talking about! There is no theory to be proven wrong in the lab! No discrepancies! No Mercury orbit anomaly! No absorption spectrum disaster. THERE IS NOTHING TO BE PROVEN WRONG. This is why our XXI century Einsteins are lost and disoriented.

    Do we want to understand consciousness or be nice to the establishment?

  5. Scott Bakker says:

    Vicente: “But consciousness is the frame wherein all physical models exist.”

    It is? Sounds like you’re assuming that consciousness is in fact representational and self-sufficient when these are the very questions we need to answer.

    In dialectical terms at least, the problem is best understood, I think, as a matter of where we should beg the question. Do we presume that consciousness *as it appears to introspection* comes first, or that nature as characterized by science does? I think this is actually an easy question to answer. We have no reason whatsoever to trust introspection, and science remains the only source of reliable theoretical cognition we have ever known.

    Wonderfully written piece, Peter.

  6. Vicente says:

    Scott, I don’t know how it sounds, the statement is simple and clear. Maybe far too simple?

    Perhaps, you could prove it wrong by referring to and telling us something about any physical model without being conscious about it.

    If you want, later, I can tell you why my statement is not watertight.

    Regarding science reliability… you must be joking. That is not the point.

  7. Richard J.R.Miles says:

    Peter, I love the drawing; for a moment I thought it was me.

  8. Tom Clark says:

    Galen Strawson, via Vicente: “The mistake is to think we know enough about the nature of physical reality to have any good reason to think that consciousness can’t be physical.”

    We may well not know the full specification of the physical in terms of properties, entities, laws, structure, etc. But whatever that specification is, what’s physical participates in a potentially observable, mind-independent, quantifiable, spatially extended reality that we model using physics and the special sciences. Whereas phenomenal experience is neither observable, mind-independent, spatial, nor quantifiable using any physical metrics: it’s a private, qualitative reality that exists for the subject alone.

    To expect to find the subjective reality of consciousness out there in the world as described by physics – Strawson’s expectation – seems to me to misconstrue the basic character of the explanatory target; plus it’s somewhat of an argument from ignorance: we don’t know all of physics yet, so we can’t rule out that consciousness might be physical.

    But at least Strawson acknowledges the reality of the phenomenal as something in need of explanation, not an illusion of introspection (a quintessentially conscious process). To say that we only *seem* to be conscious (Dennett, Graziano) invokes phenomenal experience itself, so can’t eliminate consciousness as a concrete reality to be reckoned with. Strawson makes this point against the eliminativists.

  9. Scott Bakker says:

    Vicente: “Perhaps, you could prove it wrong by referring to and telling us something about any physical model without being conscious about it.”

    Well, I have been known to mumble philosophy in my sleep! And my knowledge seems to relate me to the world whether I’m thinking about it or not.

    “Regarding science reliabilty… you must be joking.”

    Compared to what? Philosophy? Religion?

  10. Scott Bakker says:

    Tom Clark: “But at least Strawson acknowledges the reality of the phenomenal as something in need of explanation, not an illusion of introspection (a quintessentially conscious process). To say that we only *seem* to be conscious (Dennett, Graziano) invokes phenomenal experience itself, so can’t eliminate consciousness as a concrete reality to be reckoned with. Strawson makes this point against the eliminativists.”

    The eliminativist only asks that you bracket the definitions/explanations under dispute, so as to avoid begging the question. Saying that we ‘only seem to have phenomenal experience (as you theorize it) invokes phenomenal experience (as you theorize it)’ pretty clearly assumes the very thing at issue: the proper account of phenomenal experience. Graziano and Dennett have different accounts.

    Eliminativism’s Achilles’ heel is abductive, not logical. But if the eliminativist is on the hook to explain why consciousness seems the way it does (and I think I can), then the realist is on the hook to explain how we could possibly metacognize any of these extraordinary things they claim to metacognize, given the draconian straitjacket of bounded cognition. Think of all the ancient machinery required to accurately apprehend simple systems in our environment: how did our brains evolve the capacity to accurately apprehend their own vastly more complicated operations in a few hundred thousand years?

    How could ‘mind,’ ‘consciousness,’ ‘experience,’ or any of these things be anything but simple heuristics?

  11. Tom Clark says:

    Scott, I guess one point you’re making is that the illusion of being conscious, an illusion which most of us labor under, is a function of our cognitive limitations. In a way that’s along the lines of Metzinger’s approach, except that he doesn’t think experience is illusory. Rather, it results from what he calls “a special form of darkness”: the fact that the representational character of certain representations isn’t available to the system (see http://phantomself.org/a-special-form-of-darkness-metzinger-on-subjectivity/). Thus the representational content is directly, transparently available to us as phenomenal, qualitative content. And all this happens in the context of our being a phenomenal subject that too is transparently represented. We can’t see our self-model *as* a model: we “look through” the representational character onto the represented content ‘me’.

    Experience, on your view, is merely a heuristic, not a felt reality. There’s nothing substantive or real about intense pain, or rapture, or any experience. These are *necessarily* illusions on your view since what’s real can only be physical. But, the realist replies, the illusion of experience in all its intensity and variety is experience itself, since the illusion has all the characteristics of experience. Explain the illusion and you’ve explained experience, not eliminated it. So I don’t think I’m begging any questions about the proper account of phenomenal experience in the mere claim that it exists and isn’t illusory.

  12. Vicente says:

    Scott,

    REM phase is a conscious state of mind. Also, lucid dreaming is a very interesting technique through which intense concentration and focusing can be achieved (I have some experience… very interesting, believe me).

    my knowledge seems to relate me to the world whether I’m thinking about it or not

    I’m sorry I don’t fully understand what you mean. The subconscious provides inputs non-stop, but this is another issue.

    Compared to what?

    Compared to nothing. Reliable as itself. Simple engineering principles indicate that a system’s reliability is to be verified against a standard or a set of requirements and specifications. Just have a look at the history of science: check the number of ideas that were accepted and then dismissed. But this is the great value of science (à la Popper).

    I insist this is not the point. I am the first one to defend science as the only frame for a common understanding, as well as the only guidance for rules and decision making. No objection to the scientific method; I just believe that in order to study consciousness the current paradigm will have to evolve, or be complemented by other means, in order to tackle the complete range of problems we face.

  13. Scott Bakker says:

    Vicente: “I insist this is not the point. I am the first one to defend science as the only frame for a common understanding, as well as the only guidance for rules and decision making. No objection to the scientific method; I just believe that in order to study consciousness the current paradigm will have to evolve, or be complemented by other means, in order to tackle the complete range of problems we face.”

    I see. But then my parsimony point applies just the same, doesn’t it? If we can understand the consciousness that you’re looking for as the product of certain, predictable metacognitive illusions, then our task is considerably simpler. We just need to rethink our approach to consciousness, not to reality.

    “I’m sorry I don’t fully understand what you mean. The subconscious provides inputs non-stop, but this is another issue.”

    I mean our relation to our environment, cognitive or otherwise, is far more an unconscious one than a conscious one. To say that consciousness is the ‘frame to our cognitive relation to the world’ is to assume that consciousness is what makes that relation possible, that it hogs ‘cognitive efficacy,’ as opposed to, say, being a kind of parallel mediating mechanism for updating pre-existing relations in specific ways.

  14. Scott Bakker says:

    Tom: “Thus the representational content is directly, transparently available to us as phenomenal, qualitative content. And all this happens in the context of our being a phenomenal subject that too is transparently represented. We can’t see our self-model *as* a model: we “look through” the representational character onto the represented content ‘me’.”

    I’ve debated Thomas on this point before: he’s open to the possibility that representationalism is fundamentally flawed, and thinks I might have a viable alternative. Back when I first read Being No One, auto-epistemic closure dropped my jaw because it was the closest anyone I had read (aside from Dennett) had come to my own thesis. But it remains a representational concept, and so one buying into the notion that we possess the metacognitive capacity required to intuit some special, inexplicable order/level of inner reality. I’ve spent years now travelling the web asking people how such a capacity is possible: no one knows. It just feels that way.

    Do away with representations and you do away with the need for a PSM, replacing it with post hoc metacognitive judgments. This approach has many virtues, not the least of which is explaining the wild divergence in accounts of the self between offices, let alone across times and cultures.

    Representational thinking is heuristic thinking, powerful in certain problem solving contexts, pernicious in others. Like other varieties of heuristic thinking it allows us to cognize locally perceived efficacies while neglecting the systems globally responsible. (If you’re interested check out: https://rsbakker.wordpress.com/2014/11/02/meaning-fetishism/ )

    “Experience, on your view, is merely a heuristic, not a felt reality. There’s nothing substantive or real about intense pain, or rapture, or any experience. These are *necessarily* illusions on your view since what’s real can only be physical. But, the realist replies, the illusion of experience in all its intensity and variety is experience itself, since the illusion has all the characteristics of experience. Explain the illusion and you’ve explained experience, not eliminated it. So I don’t think I’m begging any questions about the proper account of phenomenal experience in the mere claim that it exists and isn’t illusory.”

    ‘Mere’ heuristic? No. Experience engulfs me as much as anyone: I just don’t think it possesses a special metacognitively tractable ‘nature.’ What I think is that pain possesses a limited number of (non-natural) predicates (‘is intense,’ ‘is located,’ ‘is like x’) because our metacognitive capacity/access vis-à-vis pain is limited to practical contexts–the very thing we should expect, given any plausible account of bounded cognition.

    The ‘illusion-is-something-too’ complaint is one of the more common ones I receive. The claim is that ‘experience’ is a heuristic (and no less profound for it), a way to communicate something opaque to deliberative metacognition (philosophical reflection), but solvent in any number of other problem-ecologies. When I call the product of philosophical reflection on experience an ‘illusion,’ I’m simply saying that it is systematically insolvent. Thus the millennial inability of philosophers to agree on experience (or ‘illusion,’ for that matter!).

    Essentially, I’m accusing you all of metacognitive dogmatism. When you reflect on pain you need to consider–and here’s the thing, regardless of any self-validating ‘feel’–what information is available for reflective problem-solving, and whether this information is adequate given the resources available. Accurate cognition is expensive. Our brain, meanwhile, is the most complicated thing we know of. Now why should any brain evolve the capacity to accurately (as opposed to usefully) metacognize something like pain? What I’m saying is that the apparently inexplicable characteristics of phenomenality are a function of mistaking metacognitive incapacity for positive ontological features. Our brain possesses an ancient, high-resolution cognitive relationship to its external environments, and a far more youthful, fragmented, low-resolution cognitive relation to itself. Given this ‘blindness,’ we have no inkling that we’re missing anything at all, and so assume we have everything we need to make reliable ontological determinations.

  15. Sci says:

    @Scott: Are you still planning to rewrite your initial paper on BBT? I’m not necessarily in agreement with you but I do think this last comment is one of the clearest formulations I’ve seen yet.

  16. Tom Clark says:

    Scott:

    “…I call the product of philosophical reflection on experience an ‘illusion,’…”

    So experience *itself* isn’t an illusion on your view. As you put it: “Experience engulfs me as much as anyone.” So experience on your view is a perfectly real, legitimate explanatory target. What’s illusory on your view is the idea that experience has “apparently inexplicable characteristics of phenomenality”, an idea bequeathed to us by the brain’s “fragmented, low-resolution cognitive relation to itself.”

    The characteristics of phenomenality are what constitute experience – its qualitativeness, privacy, unity, modalities, and so on, so I don’t think these are illusions if one grants the existence of experience. And, according to Metzinger at least, these are necessary entailments of what the brain is doing in terms of representing the world, the self, and the representational relation itself (PMIR).

    Since you don’t think the brain represents anything, your theory of experience has to do with our “cognitive relationships” to the world and to ourselves. We don’t have “adequate information given the available resources” about what the brain is doing, so end up with experience as a felt reality. To me this sounds a lot like Metzinger’s account, but tries to avoid what you see as the pitfalls of representationalism.

  17. Sergio Graziosi says:

    @Scott #14

    Essentially, I’m accusing you all of metacognitive dogmatism.

    Well, I’d like to try working my way out of this particular pit.

    The starting point is in my last comment in the previous post (was delayed by the spam filter – some might have missed it). In particular:

    Thus, the proto-modelling regulatory mechanisms, at the level of the whole organism, got segregated on a specialised system (or organ, following Hohwy). This in turn facilitates the decoupling (as now we have something that even Arnold may recognise as a model, and we can change the effects it has on the organism by simply sending an axon in another direction), and we know some decoupling is useful because it allows adaptability. Importantly, it is also dangerous, because it may generate dis-adaptive behaviours.

    In my own understanding, the core of BBT, would be something like: metacognition can only grasp faint glimpses of what happens before and within conscious elaborations. Therefore, we can be sure that introspection systematically deceives us. This is because we (introspective metacognitioners) only have access to faulty/incomplete information. Such information is not designed to properly inform metacognition, but to reliably guide adaptive behaviour instead.

    In this interpretation of BBT, I am fully on board, and actually believe I’m running ahead. The addition I would want to make is that, in evolutionary terms, well-informed metacognition is potentially an existential risk. Consider the following: you fall in love, and have access to (most – it can never be all) of the internal mechanisms that this implies. Thus you can dissect the feelings that you experience, and potentially find a way to dissociate from them. If this happens, you won’t be compelled to seek the company and approval of your lover, and would fail to bond, mate and stick around long enough to provide for any eventual children. In fact, even the limited metacognitive abilities that we do possess allow us to do just that, so I don’t really think there can be much disagreement on this point. [I am always open to be challenged, though]

    In short, I appear to be on Metzinger’s side, and claim that “the representational character of certain representations isn’t available to the [conscious] system”, more than that, I claim that this representational character can’t be fully available (due to straightforward computational constraints: no system can comprehensively simulate itself in real-time) and that availability itself would be undesirable in evolutionary terms (the ability to do so in full can never be stably encoded in the genome of any species).

    However, aside from (my own) understanding of the core of BBT (how can one strive to charitably interpret other people’s theories without risking reading one’s own beliefs into them? Is it even possible?), I do not understand your argument about (radical or not) heuristic and (against?) representational thinking.
    1. Are perceptions (at some stage) representational? If not, how do you negate the core of my argument in the comment I link above? Photons bounce off an apple and hit my retina. This produces a cascade of electrochemical phenomena passing through neurons in my brain. Can these phenomena, that transiently travel across brain areas, be seen as symbolic representations (encoded signals) of the (perceived) apple? If not, why not? And more importantly, how do you describe them otherwise?
    2. What kind of reasoning, perception and/or knowledge is not heuristic? I won’t explain why (you know, it would require a full book, not just a full essay), but to me all knowledge, perceptions and all types of cognition are inescapably heuristic (with weak caveats).

    A side note for Sci: do you see how this comment and the one linked above start building an entirely naturalistic/physicalist account of intentionality(/aboutness)? (“seeing how” would not imply “agree with”!)

  18. Vicente says:

    Scott, I have to admit (shamefully) that for the moment I have not been able to completely grasp your position.

    One simple question: why is the brain blind to its own processes but not blind to, let’s say, kidney processes?

    You say: “Our brain possesses an ancient, high-resolution cognitive relationship to its external environments, and a far more youthful, fragmented, low-resolution cognitive relation to itself.”

    Why such an asymmetry? Why should the brain have different relationships with itself, with the liver, or with any other item? Why, in principle, should there be any difference between brain processes and any other processes?

    What is an external environment? The retina, for example?

  19. Scott Bakker says:

    Tom: “So experience *itself* isn’t an illusion.”

    Experience as traditionally conceived is illusory. Recall Wittgenstein’s bit about pain being neither a something nor a nothing? I’m saying much the same (the difference being I can actually explain what the hell I’m talking about). Experiences are obviously something–perhaps even the most obvious thing–in certain ecologies, and nothing substantive in others (those we happen to be interested in here). The term systematically relates us to our environments when deployed in practical contexts, but begins to jam gears (generate illusions) as soon as we try to drive it up the theoretical hill.

    Think of visual imagery, how it simply does not support visual questions the way visual experiences can. Conjure an image of your house: What’s reflected in the windows? Is the lighting direct or indirect? Can you see any rust on the eaves? What kind of shadow does it throw on the front lawn? The low dimensionality of visual imagery typically renders these questions pointless: these kinds of visual questions do not belong to whatever problems entertaining visual imagery allowed our ancestors to solve. Likewise, theoretical questions regarding the high-dimensional nature of experience do not belong to whatever pondering experience is adapted to solve. Our blindness to the heuristic structure and limits of our metacognitive resources dupes us into thinking we have all the information we need, and thus leaves us perpetually perplexed. Here’s this thing, the most pervasive, obvious thing imaginable, and yet it vanishes every time we go rooting for it. We forget that there are a lot of things you can do in a darkened room, so long as you keep it simple.

    As a figment of information scarcity, experience has to evaporate on any high-dimensional (naturalistic) account. Which brings me to qualitativeness, privacy, unity–the apparent positive characteristics of experience that you cite. Just turn them upside down, view them through the lens of metacognitive *incapacity* rather than capacity. Qualitativeness refers to an inability to quantify, privacy to an inability to compare reports, unity to an inability to make high dimensional distinctions. What is ‘quality’? A special purpose (severely bounded) cognitive relation possessing very few degrees of cognitive freedom, but enough to have allowed our ancestors to solve some high-frequency/high-impact problem. Take this upside down eye to any topic in philosophy of mind, and the mangled terrain actually begins to make eerie, and very troubling, sense.

  20. Scott Bakker says:

    Sci: “Are you still planning to rewrite your initial paper on BBT? I’m not necessarily in agreement with you but I do think this last comment is one of the clearest formulations I’ve seen yet.”

    I gotta. I cringe whenever I see how much traffic it receives. I have a collection of posts coming out under the title Through the Brain Darkly in the near future. The introduction I’m working on is meant to be a primer to the theory–that’s where I cribbed the comment from!

  21. Scott Bakker says:

    Vicente: “Why such an asymmetry? why should the brain have different relationships with itself, with the liver or with any other item? why, in principle, should there be any difference between the brain processes and any other processes?”

    The environment is what does the selecting. But you’re right: we’re ourselves part of our environment, which is why troubleshooting ourselves actually possesses evolutionary consequences. The problem the brain faces cognizing its own systems, as opposed to external systems, is at least threefold: First, brains are simply too complex to be cognized in high-dimensional causal terms, and so must be cognized otherwise. Second, human metacognitive capacities are very young, developmentally speaking. And third, the brain possesses a bound, parochial perspective on itself: it lacks the functional independence to solve the inverse problem, for instance (imagine an anthropologist studying a troop of chimpanzees by following them in the field versus being tied in a sack with them).

    Given these, we should expect to have a cartoonish heuristic self-understanding.

  22. Callan S. says:

    We aren’t blind to our kidney processes? I thought even science hadn’t nailed down all the methods of filtering it does.

    Raise a baby on an island by themselves to adulthood: would they even know they had a kidney?

  23. Vicente says:

    Scott & Callan,

    So, as I see it, it is not the brain that is blind to anything in particular; it is our methodology. Our brain has the same mechanisms to investigate the stars, the kidney, or its own processes, i.e. intelligence and consciousness. And amazingly, we have been able to construct physical models for the stars, the kidney, and the spinal cord neuronal pathways responsible for funny, “unconscious” reactions crucial for survival or reproduction. These models are not complete and perfect (you are right, Callan), but they are sufficient to understand what’s going on to some extent. But for consciousness, and this is where the asymmetry pops up, we have not. So when individual A’s brain looks at individual B’s kidney, it comes up with a model for filtering, but when individual A’s brain looks at individual B’s brain, it goes blind and produces nothing for consciousness. I don’t see why that should happen. Materialism, eliminativism, and representationalism cannot account for this fact by any means. To summarize, the brain is as capable (or as blind) when it introspectively ‘metacognizes’ (I would just say understands) its own conscious processes as when it ‘cognizes’ (I would also say understands) the filtering processes of the kidney, but it succeeds in the latter and fails in the former. There must be a reason for that.

  24. Arnold Trehub says:

    Vicente: “To summarize, the brain is as capable (or as blind) when it introspectively ‘metacognizes’ (I would just say understands) its own conscious processes as when it ‘cognizes’ (I would also say understands) the filtering processes of the kidney, but it succeeds in the latter and fails in the former. There must be a reason for that.”

    This is a good way to put it. But I disagree that we are now failing to cognize/understand conscious processes, even though it has long been assumed that we can’t. The reason is that the brain is much more complicated than the kidney. I suggest that the retinoid model of consciousness is a good example of our current ability to cognize/understand our own conscious processes.

  25. Tom Clark says:

    Scott: “Experience as traditionally conceived is illusory.”

    That is, you’re saying the traditional *conception* of experience is false, so doesn’t refer to anything that actually exists. Experience itself – e.g., pain – isn’t illusory. So the question is, what’s a true account of how experience comes to exist? Nothing can evaporate or eliminate it.

    Your suggested answer to the question: “…qualitativeness, privacy, unity–the apparent positive characteristics of experience… Just turn them upside down, view them through the lens of metacognitive *incapacity* rather than capacity. Qualitativeness refers to an inability to quantify, privacy to an inability to compare reports, unity to an inability to make high dimensional distinctions.”

    I think this is on the right track (like Metzinger’s “special form of darkness”; and see Sergio in #17), but you end up marginalizing experience as a real phenomenon by saying its positive characteristics are only *apparent* (as opposed to real). Consciousness is only a *figment*, born of information scarcity, you say. But whatever the true account or conception of experience, it isn’t as if that will render it unreal, since the phenomenal characteristics of experience constitute our subjective reality – our ego tunnels – within which we operate in our (conscious) attempts to understand consciousness itself.

  26. Vicente says:

    Arnold,

    I don’t think it’s really a matter of complexity. First, defining a system’s complexity is not easy: what criteria would you follow? Number of parts, relations between parts, number of functions, accuracy, operational aspects? The kidney is an extremely complex organ. I realized this some years ago, when I participated in a project on the control of artificial kidneys (dialysis machines).

    Even appealing to complexity can be contradictory from your own point of view. I remember that we discussed that a basic retinoid space could be implemented using a reduced number of neurons, on a microscope slide, and that if stimulated with the right “input signals”, according to your theory, it would produce a primary conscious experience. There you have a simple system (much simpler than the whole brain or a kidney) giving rise to consciousness.

    I think we fail to understand consciousness because of some very fundamental aspect.

  27. Arnold Trehub says:

    Vicente,

    I think that we can understand consciousness. But I also think that it is very hard for us to understand consciousness because of the epistemological problem presented by the gap between 1st-person and 3rd-person descriptions of consciousness. This is what I discuss in my book chapter “A foundation for the scientific study of consciousness”.

  28. Scott Bakker says:

    Tom: “That is, you’re saying the traditional *conception* of experience is false, so doesn’t refer to anything that actually exists. Experience itself – e.g., pain – isn’t illusory. So the question is, what’s a true account of how experience comes to exist? Nothing can evaporate or eliminate it.”

    Not quite: I’m saying that experience as it appears to reflection–which is to say, intrinsically first-personal, intentional–doesn’t exist. This experience-as-metacognized is best understood as a kind of cognitive illusion, as the product of neglect (not as a PSM). There is literally no such thing as a phenomenal first-person, no now, no personal identity, and so on. These are all figments of neglect, blind applications of specialized metacognitive capacities recycled to solve general problems absent the requisite information.

    So if by ‘experience’ you mean something non-intentional geared to the solution of certain practical problems, possessing only limited theoretical application, then that is as ‘real’ as can be.

    Think of ‘duration neglect,’ the way individuals, in the wake of something painful, tend to recall only the intensity of their suffering, not the duration. As a result they’re inclined, when given a choice to repeat a miserable experience, to pick less intense, prolonged pain over ripping the band-aid off quickly. Or think (to use one of Metzinger’s examples) of so-called Raffman qualia, our blindness to the comparative poverty of our diachronic versus synchronic capacity to make distinctions. In each case, the incapacity does not exist for us: we run afoul of what Kahneman calls ‘WYSIATI,’ and assume that all the information required is available. Information that isn’t conserved for conscious report simply plays no role in our conscious determinations. As a result crucial distinctions are consigned to oblivion, and we intuit identities where none are to be found. The now always seems to be the same now, even though it is obviously different. Experience seems to unfold within a frame of identity. And so on.

    The experience available to reflection is nothing but a postcard, a synopsis that reads like a novel for neglect. The hardness of the hard problem resides in our blindness to our metacognitive straits, and thus in our tendency to see the peculiarities stemming from our blinkered self-relation as extraordinary properties belonging to the ‘mental,’ etc. There are no such properties.

  29. Scott Bakker says:

    Vicente: “But for consciousness, and this is where the asymmetry pops up, we have not. So when individual A’s brain looks at individual B’s kidney, it comes up with a model for filtering, but when individual A’s brain looks at individual B’s brain, it goes blind and produces nothing for consciousness. I don’t see why that should happen. Materialism, eliminativism, and representationalism cannot account for this fact by any means. To summarize, the brain is as capable (or as blind) when it introspectively ‘metacognizes’ (I would just say understands) its own conscious processes as when it ‘cognizes’ (I would also say understands) the filtering processes of the kidney, but it succeeds in the latter and fails in the former. There must be a reason for that.”

    I’m not sure how I didn’t answer this question, but here’s another go. The easy response is to ask why we’ve been able to figure out the kidneys but not the brain, or world peace, or what have you. Different problems present different challenges for different reasons, so I guess I’m not clear on the significance of your point.

    But here’s a more provocative way to respond: Why was it that we were able to figure out the movement of the planets across the sky and predict them with amazing accuracy (with the exception of Mercury), but required a conceptual revolution (which still didn’t solve the Mercury problem–that took Einstein) to figure out the earth was just another planet?

    The simple answer is that we were *simply too close* to earth to sense its movement, and too far from the heavens to see, contra Aristotle, that they did not possess an ontology fundamentally different from the terrestrial. Until Galileo’s ‘Dutch spyglass’ we lacked the information required to decisively solve the problem of earth’s place in the solar system. Why? Because what little information we had available, absent the proper interpretative approach, seemed to decisively argue otherwise.

    Our ability to detect motion is bounded, not absolute. We had no evolutionary need to intuit the earth’s position in the heavens, and so we evolved lacking any such capacity. So when we posed the problem, we naturally assumed the earth was the motionless frame of movement. We could solve other planets, predict their positions with great accuracy, but we could not solve the earth.

    The question really is yours, Vicente: Why should we assume we can intuit any of the information we need to do anything other than systematically mistake what we are when we seek to theorize our nature? I don’t need to prove anything to make the question stick: You clearly are presuming the accuracy/reliability of your metacognitive intuitions regarding consciousness. It’s up to you to explain why, especially if you wish to endorse the extraordinary thesis that we comprise some kind of exception to nature as we presently conceive it. The easy answer is that we’re just too close to ourselves, too blind to our details, to cognize ourselves as another bit of nature (another planet), and so evolved specialized ways to cognize ourselves otherwise.

    Why should the brain be able to track its own operations just as easily as it tracks objects in its environments? I’ve given three big reasons why not (if kidneys aren’t less complex than the brain, then why is deciphering the brain so much more difficult?). How does it manage this? If this were the case, then why does introspection lack neural information?

  30. Vicente says:

    Scott,

    The reason is that that conceptual revolution was the introduction of the scientific way of thinking at all levels and in all domains. Before that, models were accepted on philosophical and religious grounds (ask poor Galileo). But consciousness has been approached scientifically for quite some time already, with little success.

    Still, the origin of relativity is related to much more profound conceptual aspects, absolute spatial references and the invariance of laws, rather than to observations of orbits. Einstein’s capacity for abstraction is overwhelming. How is it that his brain, evolutionarily blind to most of the required concepts, was able to see the system so clearly?

    In any case, the problem would not be that we are too close to the system, but that we are the system, or in the system. If you want, take this example, also related to astronomy: you can’t provide absolute positions or velocities. If consciousness were your reference system, it could not be referred to itself; you could refer anything but consciousness.

    You are assuming that consciousness has to rely on very complex mechanisms. Why?

    The brain tracks its own operations as much as it tracks any other hidden or microscopic natural process. Actually, all the brain does is track its own processes. In a way, we are in continuous introspection, since we have no direct access to the external world; access is indirect, through the senses. We are not blind; rather, we are locked in a room with some communication and surveillance systems, not very reliable ones…

    I don’t understand what is meant by “lack of neural information”.

    I have the impression that you are trying to explain why we lie before explaining how it is that we have language in the first place.

    Thanks for your patience.

  31. Scott Bakker says:

    Vicente: “Thanks for your patience.”

    You can’t peddle counter-intuitive claims the way I do without having some kind of zest for explanation! I relish these opportunities.

    “You are assuming that consciousness has to rely on very complex mechanisms. Why?”

    Because it’s brain-bound.

    “The brain tracks its own operations as much as it tracks any other hidden or microscopic natural process. Actually, all the brain does is to track his own processes.”

    As a biological mechanism, all the components of the brain do is systematically respond to other components. But what we’re talking about is the brain’s ability to generate/select felicitous behaviours via conscious endo-environmental inputs (what Tom (and most everyone else) would call the brain’s ability to ‘represent’ itself and its functions). Surely you don’t think that animals have the same capacity as humans to troubleshoot problems given access to their own brain states. Obviously they’re ‘blind to themselves’ in a way that humans are not. The question is how much *more* humans see. I’m saying (with many others), not much. Unlike others, I’ve come up with a way of understanding this ‘not much’ that allows us to unravel a bunch of very old problems. Surely you’re not saying ‘everything,’ are you?

    “In a way, we are in continous introspection since we have no direct access to the external world, it is indirect through the senses. We are not blind, rather locked in a room with some communication and surveillance systems, not very reliable…”

    ‘Blindness,’ like your locked room, is just a metaphor for information scarcity. If we’re not blind to ourselves, then why cognitive science, why the need to spend billions discovering what we are? Why, for that matter, do we remain blind to consciousness?

    “I have the impression that you are trying to explain why we lie before explaining how it is that we have language in the first place.”

    Perhaps this is because you still trust the deliverances of reflection, still assume that consciousness-as-metacognized is the ‘frame,’ the condition of possibility of even entertaining these issues. This is my challenge: explain why anyone should trust these intuitions. How could any brain have possibly evolved the capacity to cognize itself in the same high dimensional way it cognizes its environments?

  32. Arnold Trehub says:

    Scott: “How could any brain have possibly evolved the capacity to cognize itself in the same high dimensional way it cognizes its environments?”

    By evolving a retinoid system integrated with the kinds of cognitive mechanisms detailed in *The Cognitive Brain* (MIT Press 1991), a brain can model itself and cognize as well as model other brains in the way it models and cognizes its environments.

  33. Callan S. says:

    Vicente,

    “…but when individual A’s brain looks at individual B’s brain, it goes blind and produces nothing for consciousness.”

    Yes, but it’s failing to live up to the reportage of B.

    If B was reporting he saw dragons in the sky all the time, is A failing to identify how B can see the dragons OR is the reportage of B full of rubbish?

    What if the reportage of consciousness is pretty much rubbish? Like Scott’s example of people reporting their state in the world and the rest of the cosmos: they report that the sun revolves around them. They report that they are still, not spinning at 600 miles per hour, for sure! Etc. That all came from a lack of information; lots of rubbish reporting comes from a lack of information.

    We generally figure things out by triangulation – but what if both your measuring points are in the same position (like the ancient person who could only measure how the sun acted from their one position)?

    How do we gain distance on our own mind for that second triangulation measurement point? Surely not more introspection – isn’t that no distance at all?

    How do we get distance? Or does it seem no distance is needed? But if we’re spinning at 600mph (so to speak), we won’t know without a distant second measure. It’ll seem we are perfectly still. It’ll seem we introspect everything.

  34. Callan S. says:

    Arnold, I can’t even cognize the computer in front of me. I don’t know how it works (to give a contrast: it’s not impossible to personally build a basic processor and memory arrangement, in which case you’d understand what you’re working with a lot more intimately). I’m pretty sure other brains are far more complex than the computer in front of me – and I’ve walked around with my sunglasses propped up on my head, looking for my sunglasses. Maybe other people have too.

  35. Arnold Trehub says:

    Callan,

    It takes a particular kind of education to model and cognize a computer. And it takes a particular kind of education to model and cognize other brains. Science has norms for deciding if our models/cognizings (?) are valid.

  36. Callan S. says:

    Well, actually, there’s an interesting question: what norms does science have for saying we are accurately (or somewhat accurately) cognizing other brains?

    Or do we still just go off folk standards?

  37. Vicente says:

    Callan,

    The point is that the “reportage” should be irrelevant once the model has been established, or, even more, should not be required to develop it.

    I don’t know of any physical model that requires “reportages” in order to be elaborated.

  38. Sci says:

    Hutto argues rather clearly against reductionism/physicalism:

    http://uhra.herts.ac.uk/bitstream/handle/2299/537/900137.pdf?sequence=1

    The critique was good, but the solution, a kind of “absolute idealism”, left me scratching my head. Specifically, I’m not sure how widely this concept differs from the pluralism he critiqued/rejected. It’s also a bit odd to see him refer to his strategy as “tax avoidance” rather than “tax evasion” in the conclusion.

  39. Arnold Trehub says:

    @Sci,

    Hutto: “The diagnosis of this failure is connected to the fact that consciousness cannot be treated in its own terms while being simultaneously fitted into an object-based conceptual schema.”

    This epistemological problem is discussed, and a resolution suggested, in “A foundation for the scientific study of consciousness” on my ResearchGate page. Also, it seems to me that “consciousness is treated in its own terms while being simultaneously fitted into an object-based conceptual schema” in my SMTT experiments.

  40. Vicente says:

    Sci, I think he’s diverting attention from the main difference.

    Hutto claims:

    “the difference is that avoiding trouble here requires a change in our basic philosophical framework”

    First, there is no such thing as a philosophical framework, no big deal; but the issue is that the change is required in our basic *scientific* framework.

  41. Sci says:

    @Arnold – thanks, I’ve had your stuff on the back burner for a while. Guess it’s time to take a look.

    @Vicente – so you’d agree with the Fodor quote Hutto mentions? But even if we have no idea what it is like for materialism to be true, does this necessarily mean we should have a new scientific framework?

    Admittedly, it’s also not clear to me what this new framework in philosophy or science would look like, and I’m not sure Hutto really clarified it. I suspect it would be better to accept consciousness as a mystery not amenable to abstraction, though doing so at this stage seems premature.

    All that said, I did enjoy Hutto quoting Fodor about how the philosophy of consciousness has yielded no fruit…

  42. Scott Bakker says:

    Sci/Vicente: That must be one of his earliest publications. In Radicalizing Enactivism (which I highly recommend for its critique of content) he takes a facile conceptual view, arguing that the Hard Problem is not a problem at all, given that it’s insoluble in principle. Not a whisper of any Hegelian madness. Anyone who makes belief in scientific theoretical cognition contingent on philosophical theoretical cognition is a de facto mysterian, if you ask me.

    From an institutional standpoint, the problem pretty clearly involves an inability to determine the explanandum. From your perspective, I’m simply changing the subject, not explaining consciousness at all. From my perspective, you’re committing a version of Edwin Jaynes’s ‘mind projection fallacy,’ confusing epistemological effects for ontological features.

    Note that the only way to resolve this impasse is to take a hard look at human metacognitive capacity. Before impugning science for its inability to make sense of your (extraordinary) explanandum, you need to justify your explanandum. How could humans develop anything other than a blinkered perspective on themselves? (This is certainly in keeping with what we’ve learned about cognition more generally!)

    The tendency of realists is to roll the mystery of intuiting consciousness into the mystery of consciousness, but this just doubles the argumentative burden, placing them even further out on the argumentative branch.
