The Meta-Problem

Maybe there’s a better strategy on consciousness? An early draft paper by David Chalmers suggests we turn from the Hard Problem (explaining why there is ‘something it is like’ to experience things) and address the Meta-Problem of why people think there is a Hard Problem; why we find the explanation of phenomenal experience problematic. While the paper does make broadly clear what Chalmers’ own views are, it primarily seeks to map the territory, and does so in a way that is very useful.

Why would we decide to focus on the Meta-Problem? For sceptics, who don’t believe in phenomenal experience or think that the apparent problems about it stem from mistakes and delusions, it’s a natural piece of tidying up. In fact, for sceptics, why people think there’s a problem may well be the only thing that really needs explaining or is capable of explanation. But Chalmers is not a sceptic. Although he acknowledges the merits of the broad sceptical case about phenomenal consciousness which Keith Frankish has recently championed under the label of illusionism, he believes phenomenal experience is indeed real and problematic. He believes, however, that illuminating the Meta-Problem through a programme of thoughtful and empirical research might well help solve the Hard Problem itself, and is a matter of interest well beyond sceptical circles.

To put my cards on the table, I think he is over-optimistic, and seems to take too much comfort from the fact that there have to be physical and functional explanations for everything. It follows that there must indeed at least be physical and functional explanations for our reports of experience, our reports of the problem, and our dispositions to speak of phenomenal experience, qualia, etc. But it does not follow that there must be adequate and satisfying explanations.

Certainly physical and functional explanations alone would not be good enough to banish our worries about phenomenal experience. They would not make the itch go away. In fact, I would argue that they are not even adequate for issues to do with the ‘Easy Problem’, roughly the question of how consciousness allows us to produce intelligent and well-directed behaviour. We usually look for higher-level explanations even there; notably explanations with an element of teleology – ones that tell us what things are for or what they are supposed to do. Such explanations can normally be cashed out safely in non-teleological terms, such as strictly-worded evolutionary accounts; but that does not mean they are dispensable, or that we could understand properly without them.

How much more challenging things are when we come to Hard Problem issues, where the claim that they lie beyond physics is of the essence. Chalmers’ optimism is encapsulated in a sentence when he says…

Presumably there is at least a very close tie between the mechanisms that generate phenomenal reports and consciousness itself.

There’s your problem. Illusionists can be content with explanations that never touch on phenomenal consciousness because they don’t think it exists, but no explanation that does not connect with it will satisfy qualophiles. But how can you connect with a phenomenon explanatorily without diagnosing its nature? It really seems that for believers, we have to solve the Hard Problem first (or at least, simultaneously) because believers are constrained to say that the appearance of a problem arises from a real problem.

Logically, that is not quite the case; we could say that our dispositions to talk about phenomenal experience arise from merely material causes, but just happen to be truthful about a second world of phenomenal experience, or are truthful in light of a Leibnizian pre-established harmony. Some qualophiles are similarly prepared to say that their utterances about qualia are not caused by qualia, so that position might seem appealing in some quarters. To me the harmonised second world seems hopelessly redundant, and that is why something like illusionism is, at the end of the day, the only game in town.

I should make it clear that Chalmers by no means neglects the question of what sort of explanation will do; in fact he provides a rich and characteristically thorough discussion. It’s more that in my opinion, he just doesn’t know when he’s beaten, which to be fair may be an outlook essential to the conduct of philosophy.

I say that something like illusionism seems to be the only game in town, though I don’t quite call myself an illusionist. There’s a presentational difficulty for me because I think the reality of experience, in an appropriate sense, is the nub of the matter. But you could situate my view as the form of illusionism which says the appearance of ineffable phenomenal experience arises from the mistaken assumption that particular real experiences should be within the explanatory scope of general physical theory.

I won’t attempt to summarise the whole of Chalmers’ discussion, which is detailed and illuminating; although I think he is doomed to disappointment, the project he proposes might well yield good new insights; it’s often been the case that false philosophical positions were more fecund than true ones.

32 thoughts on “The Meta-Problem”

  1. To me, this discussion, like so many on the subject, avoids the real central issue, which is to explain, to sort through mechanisms, to get into the structure. This, on the other hand, is all about the procedures, the methods, the rationale. To make more specific what I am saying, does anybody ever talk about Igor Aleksander’s ideas, about a world model? To me, the processing that must go into maintaining and using such a model goes a long way toward conceiving of what the experience would be like to the organism that has and uses such a model. For me, that’s the essence of the “hard problem”. That is, to understand why the organism is “aware” of the operation of the model.

  2. I think we can only know we’ve solved these problems by building a system that demonstrably does the same and understanding what features of its architecture give rise to, in the context of this blog, phenomenal awareness.

    I don’t think it’s particularly hard, just that arguments in words, rather than in implementable architectures, lose their way because they can struggle to deal with iterative loops, and twists of the nature of information around those loops.

    Briefly outlining an architecture in words, the brain sets up processes, persistent for a while, that enable us to rapidly translate inputs into outputs in a given situation. That’s what it is to know that thing – to have set up in the brain a control process that can deal with it.

    Then, by looping back the identifiers and parameters of those control processes as though they were an additional sensory input, I know that I know, which gives us conscious awareness.

    When one of those control processes tracks my own state now, in the past, and potentially in the future set against my potential actions, and how that will make me feel, my conscious awareness includes my own continuing existence.

    Therefore all this boils down to the different sets of information that are now available to us to act upon (or communicate about) alongside sensory data. These include our own current interpretation of and forecasts about that sensory data, and our own current knowledge and expectations about ourselves.

    More, with pictures, at: https://www.amazon.co.uk/Mechanisms-Consciousness-How-consciousness-works-ebook/dp/B01N4LPLAD
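
    For what it’s worth, here is a minimal Python sketch of the loop described above. All the names (ControlProcess, Agent, self_report) are mine, purely for illustration; it only shows the shape of the feedback, not a working model of a brain.

    # Toy sketch: control processes translate inputs into outputs, and their
    # identifiers and parameters are looped back as a further "sensory" input.

    class ControlProcess:
        def __init__(self, name, params, handler):
            self.name = name        # identifier that can be looped back
            self.params = params    # parameters that can be looped back
            self.handler = handler  # translates an input into an output

        def act(self, stimulus):
            return self.handler(stimulus, self.params)

    class Agent:
        def __init__(self, processes):
            self.processes = processes   # persistent-for-a-while control processes

        def step(self, sensory_input):
            # ordinary loop: inputs -> outputs via the current control processes
            actions = [p.act(sensory_input) for p in self.processes]
            # extra loop: the identifiers and parameters of the active processes
            # are re-presented as if they were another sensory channel
            # ("I know that I know")
            self_report = [(p.name, p.params) for p in self.processes]
            return actions, self_report

    The only point of the sketch is that self_report is the same sort of thing as sensory_input, so it can be fed into further control processes on the next step.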

  3. I think I’m on the same page as Peter. Let me put my view another way. I don’t care so much what people think about it or what they say about me for thinking about it. I just want to know how it works.

  4. I’ve just read the “preview” of Peter’s book, referenced from his note. I do not have a Kindle, so I cannot go any further. But I do have a comment. The main issue I see so far, is that I see no mechanism for continued awareness when perception shuts down — as in dreaming, etc. To my view, that’s the function of the world model. He may well get into this later in the work.

  5. Looks like Chalmers is trying to go for a ride on the dead horse I’ve been beating… 😛

    Less facetiously, I think that focusing on the ‘meta-problem’ is a fruitful strategy. To me, the apparent mystery of consciousness comes about simply because the models we use in explanation are essentially computational; however, the way they’re hooked up to the world isn’t (and can’t be). Hence, we can’t explain this hooking up; but we can derive some of its properties: first of all, it must bring us into contact with the properties of things in the world (else, it wouldn’t be hooking up); second, it would seem mysterious, and ineffable (as we can’t talk about things we can’t describe). So there: the sort of stuff that composes our experience of the world is mysterious, intrinsic, and ineffable.

    I’ve got no idea if any of that’s right, but it seems at least a simple and not obviously false explanation.

  6. #4 Lloyd
    Continued awareness (of a sort) happens during dreaming because the feedback loops that I mention continue to function even if the sensory input and motor output are suppressed during sleep.

    Should you wish to read the book at the link I gave, on the same page (bottom left) is the link to the Kindle app which enables it to be read on any device.

  7. My understanding of illusionism is not that qualia don’t exist but that they are a sort of magician’s trick. Something happened but your perception of it is wrong. You saw the lady cut in half but that isn’t what really happened. Something did happen though.

  8. Peter #6, What you’re saying would require refreshing loops of sensory input that possibly happened years ago, if ever. What about hallucinations? Stuff like that has to get conjured up somehow, not just “refreshed”. I believe the model gets built from memory and only touched up second-by-second during consciousness. Evidence suggests that visual input only paints in swatches. It’s nowhere nearly sufficient to keep the entire visual scene working continuously.

    Thanks for the book tip. I will look into it.

  9. by building a system that demonstrably does the same

    How on earth will you demonstrate that your system has phenomenal awareness when it can’t even be done for human beings?

  10. Peter #9: Quite true. Of course, if you deny zombies, that goes a long way toward believing what people say they experience.

  11. #9 Peter “How on earth will you demonstrate that your system has phenomenal awareness when it can’t even be done for human beings?”
    You can only do it by a white box analysis, not a black box analysis, that is to say you have to work out what a system exhibiting phenomenal awareness needs to be able to represent internally, and check whether your architecture provides that. You can’t do it definitively by a black box analysis (which is what the Turing test is), although it might well be able to communicate quite persuasively about its inner life.

  12. Peters #9 and #11: Hod Lipson has gone some way toward that goal in being able to watch his desktop crawler detect its broken leg.

  13. Phenomenal awareness isn’t really something you can exhibit. According to Chalmers’ own classic zombie twin thought experiment, I could conceivably have a perfect physical twin who was exactly like me but simply lacked phenomenal experience. If you think that’s nonsense, you’re a sceptic about the whole thing, which is a perfectly respectable position; if you think it sounds right then there’s really no way you could ever tell whether your system had it or not.

  14. #13: “If I believe the zombie thing is nonsense, then I’m a skeptic about the whole thing” !! What whole thing? I certainly believe consciousness is possible, but just that zombies are nonsense. Suppose Hod Lipson’s graphics proceeds for another decade or so. You could watch the interior picture proceed in concert with the verbal report. It might not be definitive, but it would certainly put the mechanisms of consciousness in stark view. Of course, Lipson may never get there.

  15. What whole thing?

    Phenomenal consciousness (as opposed to access consciousness). The Hard Problem (as opposed to the Easy Problem). Qualia.

  16. Sorry I’m fuzzy on the philosophical terminology. I’m talking about qualia. I am not sure what access consc. would be. Is it what you can tell another person about? Anyway, it seems to me that a graphic window into a being’s world model in concert with that being’s verbal report would say a lot about what that being might experience. I especially cannot understand how a “zombie” could possibly talk about its internal model which it does not have.

  17. I personally think the meta-problem, along with its most straightforward answers, makes the hard problem moot. Yes, it’s possible to find answers to the meta-problem that keep the hard problem alive, as Chalmers does, but they seem like question begging to me.

    That said, I’m not personally comfortable characterizing phenomenal consciousness as not existing. There’s something to the argument that if raw experience is an illusion, then the illusion *is* the experience. I think it’s more productive to characterize experience as something constructed by the brain, and the mechanics of the construction are beyond our introspective reach.

    This limitation makes primal experience subjectively irreducible, and seemingly magical, something that can’t arise from physics. But this is only due to limitations in our internal perspective, not any deep metaphysical principle.

  18. Peter #11. I would put the world model in the middle of your Transient module, as the main component of the mechanism for making sense of sensory inputs. I have little argument with anything you say throughout and completely agree with your final section.

    I would love to talk more about this. My web site is horribly out of date, but the link at “Contact” seems to sort-of work.

  19. One way you can think of qualia is as formatting. If you accept that sensory inputs are mechanical impingements (photons for vision, sound waves for hearing, molecules for smell and taste, and mechanical deformations for touch) which our sensory apparatus encodes as neural activity (action potentials, neurotransmitter releases and uptakes, et cetera), then it stands to reason that different brain centers (memory, planning, et cetera) need to access sensory percepts formatted in ways they can use. A quale can be thought of as a format for a sensory percept. As Wikipedia puts it

    “Olfaction occurs when odorants bind to specific sites on olfactory receptors located in the nasal cavity. Glomeruli aggregate signals from these receptors and transmit them to the olfactory bulb, where the sensory input will start to interact with parts of the brain responsible for smell identification, memory, and emotion.”

    In effect, the sense of smell encodes the chemical composition of molecules that bind to olfactory receptors. A smell, considered as a quale, is a format by which that chemical composition is made available for cognitive processing by other areas of the brain upstream, so to speak, from the olfactory bulb.

    A quale, considered as a format for a sensory percept, can be seen as a merely biological phenomenon, produced by evolution in much the same way as the formats by which photographs are encoded for transmission over the internet are produced by engineering. To put it another way, if Mary knows everything there is to know about vision, then she knows how her brain will generate qualia/formatting for any visual percept her eyes are actually capable of perceiving. The quale generated by her first experience of seeing red will be the quale she expected her visual apparatus to generate given what she knows about how her visual apparatus formats percepts for use by other brain processes. She will not learn anything new from her first time seeing red.
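
    Purely as an illustration (the function names and numbers below are made up, not taken from the neuroscience), the ‘format’ idea might look like this in Python: a raw pattern of receptor activity is re-encoded into a percept record that several downstream consumers each read in their own way.

    # Illustration of "quale as format": raw receptor activity is encoded
    # into a percept record that downstream processes can use.

    def encode_smell(receptor_activity):
        # receptor_activity: dict mapping receptor id -> firing strength
        return {
            "modality": "smell",
            "profile": sorted(receptor_activity.items()),
            "intensity": sum(receptor_activity.values()),
        }

    def identify(percept, known_profiles):
        # "smell identification" consumer: matches the formatted profile
        for label, profile in known_profiles.items():
            if profile == percept["profile"]:
                return label
        return "unknown"

    def emotional_valence(percept):
        # "emotion" consumer: reads only the intensity field of the format
        return "pleasant" if percept["intensity"] < 2.0 else "overpowering"

    On this picture, the downstream processes never get the molecule itself, only the formatted record of it, which is the analogue of the smell quale.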

  20. In the OP, Peter said

    Certainly physical and functional explanations alone would not be good enough to banish our worries about phenomenal experience. […] We usually look for higher-level explanations even there; notably explanations with an element of teleology – ones that tell us what things are for or what they are supposed to do. Such explanations can normally be cashed out safely in non-teleological terms, such as strictly-worded evolutionary accounts;

    Maybe I’m mis-reading this, but it seems like Peter is saying functional explanations lack an element of teleology. If so, I would challenge this idea. I would suggest that a purpose is an absolute necessity in a functional explanation. A function is always created/situated for a purpose. That purpose may be a naturally arising purpose such as that which leads to natural selection, and the purpose may not apply at some distant time, but the purpose is always there at the creation (or repurposing) of the function.

    I believe this distinction is important because I further suggest that this functional aspect is absolutely central to solving the hard problem as well as the meta problem. Whatever physical event is responsible for the generation of a phenomenal experience, there will be a functional description of that event. There will also be a functional point of view of that event. That is, the subjective experience is the functional experience. The functional description of the meaning of the input for that event is the qualia.

    *

  21. Ah, the sweet smell of unnecessary work! As Dilbert (yes, the Scott Adams cartoon) would say. Peter challenges us with

    there have to be physical and functional explanations for everything. It follows … that there must indeed at least be physical and functional explanations for our reports of experience …. But it does not follow that there must be adequate and satisfying explanations.

    The unnecessary work is to come up with satisfying explanations (of qualia and reports of qualia).

    Instead, the meta-problem allows us to come up with a satisfying meta-explanation of *why we can never have that satisfying object-level explanation*.

    And the way to do that is to run with the ideas gestured at by Peter Martin (caveat – I haven’t read the linked item yet), Jochen at #5, and Michael Murden with qualifications on the last part about Mary. Actually Mary will learn a new idea by which to refer to old objects and properties.

    To put it in my own words: qualia concepts are gotten from *being* in certain brain states, while physical concepts of interest here are gotten by *observing* brain states with microscopes, MRI machines, etc. And moreover, neurology shows interesting overlaps between the brain states of merely *deploying* qualia concepts and the brain states of actually *undergoing* those experiences. While on the other hand, it shows no significant overlap between, for example, seeing red or tasting sweetness on the one hand, and peering at an MRI scan on the other. So, the physicalist hypothesis *predicts* that we won’t ever be able to look at a brain description and have an intuitive feeling that *of course* anyone in that brain state must experience the relevant qualia. We would of course have the abstract intellectual expectation that the qualia will happen, if we’ve learned our neuroscience well, but that’s different.

  22. I take your point, James; we might be using words slightly differently. Perhaps because I tend to think of functionalism and computationalism as practically the same thing, I think of a functional explanation as approximately one that gives the algorithm, or sets out the process. We usually want more explanation than that, I feel.

  23. Dr. Delgado… A new Chalmers paper? Is it in free pdf form or only behind a pay wall? I am a Chalmers fan, read several books and not only his PoM. Not that I am in much agreement with him other than on the logical impossibility of the map. He calls himself a property dualist but I am a substance dualist, though I hope of a more sophisticated form than Descartes.

    But even in your view it doesn’t sound like he addresses the problem suggested by your title… who or what is the “I”, the PERSON having the experience? Even property dualists dismiss it or attribute it to illusion, so far as I know… I do not. But does Chalmers say anything about this in this essay or is he still focused on qualia?

  24. Peter in the OP:

    “I say that something like illusionism seems to be the only game in town, though I don’t quite call myself an illusionist.”

    If we say that we don’t really have experiences of color, then wouldn’t it follow that it’s an illusion that objects are colored? Or feel, smell, taste or sound a particular way? For the illusionist, we don’t really discriminate objects in terms of experienced qualities that we attribute to them; it only seems like we do. Putting phenomenal experience in question puts the qualitatively discriminated world in question too, seems to me.

  25. Peter (#24),

    I also treat functionalism and computationalism as the same. And I appreciate that you want more explanation than the functional description, and I think that such further explanation is available. Here is some of it:

    While a “functional description” lays out the algorithms and their functions, specifying which inputs will generate which outputs and how those outputs serve the function, that description does not explicitly provide the “point of view” of the software that is implementing those algorithms. Let’s call the software executing those algorithms the “functional agent”.

    Take Amazon’s AI-ish assistant Alexa as an example. Somewhere there is a high level description of Alexa’s functions. Similarly, there is a lower level functional description which is the actual code. None of these is a functional agent. The functional agent is the software running on my shelf as I type this. The software running on your shelf would be a very similar functional agent, but it would be a separate functional agent.

    My claim is that qualia are about the functional descriptions of inputs to experiential events, but from the perspective of the functional agent.

    Let’s say a functional agent has access to exactly three possible binary inputs. Each input is either on or off. The physical nature of these inputs is arbitrary, unknown, and irrelevant to the functional agent, but let’s say it is a voltage above or below a threshold. Let’s say that these inputs are tied to a camera watching a room. Specifically, the functional description says if something red is in the room, one of the inputs is on. We’ll call that one the “red input”. The other inputs are for something blue and green respectively. The functional agent can distinguish each input. Let’s say the functional specification says to record which inputs are on once per second: “r” when the red input is on, and “b” and “g” for the blue and green inputs respectively.

    So the question is how would the functional agent describe a red input? The answer is certainly not a voltage in a certain place. It’s not even “red”. The answer is more like “that one that makes me record r”. The point being, how it discerns which input is “on” is ineffable, but the discernment is real. I claim this discernment is the qualia of red. It’s not so much a feeling of redness as a feeling of “that one”ness.
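
    A minimal Python sketch of that agent, just to pin the idea down (the names are mine, purely for illustration):

    import time

    # Toy three-input agent: it never sees voltages or colours, only which
    # of its inputs is on and what it is disposed to record in response.

    class FunctionalAgent:
        def __init__(self, read_inputs):
            # read_inputs() returns three booleans; what physically realises
            # them (voltages, a camera, etc.) is invisible to the agent
            self.read_inputs = read_inputs
            self.log = []

        def describe(self, index):
            # the agent's only handle on an input: what it makes the agent do
            return "the one that makes me record " + "rbg"[index]

        def run(self, seconds):
            for _ in range(seconds):
                for i, on in enumerate(self.read_inputs()):
                    if on:
                        self.log.append("rbg"[i])
                time.sleep(1)   # record once per second, per the spec

    Nothing inside the agent mentions voltage or redness; the discernment it has is captured entirely by describe(), which is about as close as it can get to “that one”ness.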

    *

  26. If I’m reading it correctly, what Michael #20 is talking about is pretty much the same thing that vision and hearing software people used to call sensory fusion, the step of combining a variety of sensory inputs into a single, unified representation. If I have that right, then I totally agree that this is the final necessary step in producing qualia. It is that final step that Aleksander’s models are based on.

  27. #27 James
    I think your analysis and blogged ideas are in the right direction, eg “It’s not so much a feeling of redness as a feeling of “that one” ness”, and by its nature this is subjective.

    Generalising this, we only have to show that something would exist to the conscious entity on its own terms. Once the brain has a mechanism for determining whether something exists for its own purposes, knowledge, awareness and consciousness are off and running.

  28. He’s right, but the meta-problem is “why do people think that physics can predict the existence of everything, even though it’s just a human invention and therefore likely to be scoped and limited”

    If you don’t have the above cultural (most definitely NOT scientific) bias, then consciousness isn’t a mystery – it’s just another phenomenon of interest

    It’s abundant, it’s everywhere and it’s no more magical or curious than the existence of space and time themselves.

    JBD

  29. Pingback: Meta-problem vs. Scandal of Self-Understanding | Three Pound Brain
