Don’t Sweat the Hard Problem

Set the Hard Problem aside and tackle the real problem instead, says Anil K Seth in a thought-provoking piece in Aeon. I say thought-provoking; overall I like the cut of his jib and his direction of travel: most of what he says seems right. But somehow his piece kept stimulating the cognitive faculty in my brain that generates quibbles.

He starts, for example, by saying that in philosophy a Cartesian debate over mind-stuff and matter-stuff continues to rage. Well, that discussion doesn’t look particularly lively or central to me. There are still people around who would identify as dualists in some sense, no doubt, but by and large my perception is that we’ve moved on. It’s not so much that monism won, more that that entire framing of the issue was left behind. ‘Dualist’, it seems to me, is now mainly a disparaging term applied to other people, and whether he means it or not, Seth’s remark comes across as having a tinge of that.

Indeed, he proceeds to say that David Chalmers’ hard/easy problem distinction is inherited from Descartes. I think he should show his working on that. The Hard Problem does have dualist answers, but it has non-dualist ones too. It claims there are things not accounted for by physics, but even monists accept that much. Even Seth himself surely doesn’t think that people who offer non-physics accounts of narrative or society must therefore be dualists?

Anyway, quibbling aside for now, he says we’ll get on better if we stop worrying about why consciousness exists at all and try instead to relate its features to the underlying biological processes. That is perfectly sensible. It is depressingly possible that the Hard Problem will survive every advance in understanding, even beyond the hypothetical future point when we have a comprehensive account of how the mind works. After all, we’re required to find it conceivable that my zombie twin could be exactly like me without having real subjective experience, so it must be possible that we could come to understand his mind totally without having any grasp on my qualia.

How shall we set about things, then? Seth proposes distinguishing between level of consciousness, contents, and self. That feels an uncomfortable list to me; perhaps this is uncharacteristically tidy-minded, but I like all the members of a list to be exclusive and similar, whereas, as Seth confirms, self here is to be seen as part of the contents. To me, it’s a bit as if he suggested that in order to analyse a performance of a symphony we should think about loudness, the work being performed, and the tune being played. That analogy points to another issue: ‘loudness’ is a legitimate quality of orchestral music, but we need to remember that different instruments may play at different volumes and that the music can differ in quality in lots of other important ways. Equally, the level of consciousness is not really as simple as ten places on a dial.

Ah, but Seth recognises that. He distinguishes between consciousness and wakefulness. For consciousness it’s not the number of neurons involved or their level of activity that matters. It turns out to be complexity: findings by Massimini show that pulses sent into a brain in dreamless sleep produce simple echoes; sent into a conscious brain (whose overall level of activity may not be much greater) they produce complex reflected and transformed patterns. Seth hopes that this can be the basis of a ‘conscious meter’, the value of which for certain comatose patients is readily apparent. He is pretty optimistic generally about how much light this might shed on consciousness, rather as thermometers transformed…

“our physical understanding of heat (as average molecular kinetic energy)”

(Unexpectedly, a physics quibble; isn’t that temperature? Heat is transferred energy, isn’t it?)
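(For the record, the standard kinetic-theory relations bear the quibble out: temperature tracks average molecular kinetic energy, while heat is energy in transit. In textbook shorthand, for an ideal monatomic gas:)

```latex
% Temperature: a measure of average molecular kinetic energy
\langle E_k \rangle = \tfrac{3}{2} k_B T
% Heat Q: energy transferred; it changes internal energy U along with work W
\Delta U = Q - W
```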

Of course a complex noise is not necessarily a symphony and complex brain activity need not be conscious; Seth thinks it needs to be informative (whatever that may mean) and integrated. This of course links with Tononi’s Integrated Information Theory, but Seth sensibly declines to go all the way with that; to say that consciousness just is integrated information seems to him to be going too far; yielding again to the desire for deep, final yet simple answers, a search which just leads to trouble.
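As an aside, the ‘complexity’ in Massimini’s measure is essentially compressibility: a simple echo compresses well, a rich reflected pattern does not. Here is a toy Python sketch of the general idea (my own illustration, not Massimini’s actual pipeline, which binarises real EEG responses and normalises the result), counting Lempel-Ziv-style phrases in a binarised signal:

```python
import random

def lz_phrase_count(bits):
    """Crude Lempel-Ziv-style complexity: count the distinct phrases
    encountered scanning a binary string from left to right."""
    phrases, phrase = set(), ""
    for bit in bits:
        phrase += bit
        if phrase not in phrases:  # new phrase: record it and start afresh
            phrases.add(phrase)
            phrase = ""
    return len(phrases) + (1 if phrase else 0)

simple_echo = "01" * 100  # regular, highly compressible
rich_response = "".join(random.choice("01") for _ in range(200))  # irregular

print(lz_phrase_count(simple_echo))    # small: few distinct phrases
print(lz_phrase_count(rich_response))  # larger: many distinct phrases
```

The periodic string yields few distinct phrases; the irregular one yields many, which is the sense in which a conscious brain’s response counts as more ‘complex’.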

Instead he proposes, drawing on the ideas of Helmholtz, that we see the brain as a prediction machine. He draws attention to the importance of top-down influences on perception; that is, instead of building up a picture from the elements supplied by the senses, the brain often guesses what it is about to see and hear, and presents us with that unless contradicted by the senses – sometimes even if it is contradicted by the senses. This is hardly new (obviously not if it comes from Helmholtz (1821-1894)), but it does seem that Seth’s pursuit of the ‘real problem’ is yielding some decent research.
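To make the prediction-machine idea concrete, here is a toy sketch (my own illustration, not anything from Seth or Helmholtz) of the precision-weighted error correction at the heart of predictive processing: the estimate moves toward the sensory input only in proportion to how much the senses are trusted relative to the prior, so a confident prior can override what the eyes report.

```python
def update_estimate(prior, observation, prior_precision, sensory_precision):
    """One step of precision-weighted prediction-error correction.

    The new estimate moves from the prior toward the observation in
    proportion to how much the senses are trusted relative to the prior.
    """
    gain = sensory_precision / (prior_precision + sensory_precision)
    prediction_error = observation - prior
    return prior + gain * prediction_error

# A strongly held prior barely budges even when the input disagrees:
print(update_estimate(prior=10.0, observation=20.0,
                      prior_precision=9.0, sensory_precision=1.0))  # 11.0
# Trusted senses pull the estimate most of the way to the input:
print(update_estimate(prior=10.0, observation=20.0,
                      prior_precision=1.0, sensory_precision=9.0))  # 19.0
```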

Finally Seth goes on to talk a little about the self. Here he distinguishes between bodily, perspectival, volitional, narrative and social selves. I feel more comfortable with this list than the other one – except that these are all deemed to be merely experienced. You can argue that volition is merely an impression we have, that we just think certain things are under our conscious control – but you have to argue for it. Just including that implicitly in your categorisation looks a bit question-begging.

Ah, but Seth does go on to present at least a small amount of evidence. He talks first about a variant of the rubber hand experiment, in which said item is made to ‘feel’ like your own hand: it seems that making a virtual hand flash in time with the subject’s pulse enhances the impression of ownership (compared with a hand that flashes out of synch), which is indeed interesting. And he mentions that the impression of agency we have is reinforced when our predictions about the result are borne out. That may be so, but the fact that our impression of agency can be influenced by other factors doesn’t mean our agency is merely an impression – any more than a delusion about a flashing hand proves we don’t really have a hand.

But honestly, quibbles aside this is sensible stuff. Maybe I should give all that Hard Problem stuff a rest…

67 thoughts on “Don’t Sweat the Hard Problem”

  1. I agree that Seth’s article was a high quality piece, definitely one of Aeon’s better ones in this area.

    Explicit dualism has largely been left behind in both the philosophy and science of mind, but I think implicit dualist thinking still infects a lot of the discussion. It seems to me that one of the reasons theories like IIT get so much attention is that they kind of uphold a type of backdoor dualism, an implied idea that maybe we can figure out how the ghost is generated, although no one explicitly says that.

    On the hard problem, a better name might be “the hard truth”, or even better, “the hard truths”. The first truth is that experience is subjectively irreducible. We can’t experience the mechanics of experience; the brain just isn’t structured to do that, but that doesn’t mean it’s *objectively* irreducible. The second truth is that an observer can never access the subjective experience of another conscious entity. (We can never know what it’s like to be a bat.) This subjective / objective divide can’t be closed, only clarified. But we shouldn’t see it as an obstacle to progress.

    But seeing consciousness as a prediction mechanism strikes me as an eminently plausible answer to the question, “Why do we have experience?” We evolved awareness, the ability to model our environment and our self, to in effect generate an inner world, as a predictive guide to action.

    Incidentally, this prediction idea isn’t a new one from Seth. I recently read in “The Ancient Origins of Consciousness” by Todd Feinberg and Jon Mallatt (a book I highly recommend) that it’s a viewpoint held by other neurobiologists.

  2. SAP: “But seeing consciousness as a prediction mechanism strikes me as an eminently plausible answer to the question, ‘Why do we have experience?’ We evolved awareness, the ability to model our environment and our self, to in effect generate an inner world, as a predictive guide to action.”

    Seth himself doesn’t think his work has answered this question, which he sets aside near the beginning of the article. His project is “how to account for the various properties of consciousness in terms of biological mechanisms; without pretending it doesn’t exist (easy problem) and without worrying too much about explaining its existence in the first place (hard problem).”

    There’s nothing wrong (and a lot right) with the idea of having an inner model that works as a predictive guide to action, but the question remains of why and how private, qualitatively irreducible states (experiences) are entailed by the neural processes found to be associated with them (the NCC). I think we can make progress on this question by discovering exactly what the NCC do – what Seth and others are working on.

  3. Peter,
    I’ll trade some quibbles with you:

    It is depressingly possible that the Hard Problem will survive every advance in understanding, even beyond the hypothetical future point when we have a comprehensive account of how the mind works.

    At first I thought I caught you being unusually liberal with words/concepts, but then you write:

    we could come to understand [the zombie’s] mind totally without having any grasp on my qualia

    This second sentence suggests that perhaps you are choosing your words carefully (as always). The quibble: on the first sentence, I thought you should have used the word “brain” instead of “mind”. The second sentence instead suggests that I’m mistaken: I’ve always used the word “mind” to indicate what contains my conscious thoughts and experience. Am I doing it wrong? IOW, I would argue that p-zombies don’t have anything like a mind, just a brain. The feeling that I’m missing something is strong now!

    Apart from the above, I thought Seth’s article was unusually good for a couple of reasons:
    1. He presents many of the latest scientific developments that I also find very promising. It’s a short, non-technical essay, but manages to cover a big proportion of what is worth keeping an eye on (IMVHO).
    2. He did so without coming across as hopelessly naive on the philosophical side. This is extremely rare in essays written by full-on scientists and is a breath of fresh air for me personally.

    I did not spot the shortness of the initial list (level of consciousness, contents, and self), perhaps because I was already enjoying the piece a little too much (or was too interested in reading on). In this respect, another quibble: your doubts are sensible, but perhaps the rest of the discussion, with its additional distinctions, does help? I could say that we need to start by proposing some initial, rather crude distinctions to get any scientific programme started, and then refine them as theories evolve…

    Now, I’ll try to exercise a little critical thinking and see what it is that I don’t like about Seth’s essay. There isn’t much, but I can still quibble:

    a. In the comments someone made a similar remark, but I’ll try my own take:

    the real problem: how to account for the various properties of consciousness in terms of biological mechanisms; without pretending it doesn’t exist (easy problem) and without worrying too much about explaining its existence in the first place (hard problem).

    The problem here is that the easy problem doesn’t require one to pretend that consciousness doesn’t exist, while the hard problem isn’t merely about explaining its existence (in my view, it’s about explaining how it can be produced by mere mechanisms – similar, but not identical). The result is that I don’t see a meaningful distinction between Seth’s “real problem” and the easy one. I’ve always thought that the easy problem is prohibitively difficult and that a complete-enough solution of it will also dissipate the hard problem.

    b. A lot of the things he mentions would require much more time and attention. He gets many additional points for using IIT without buying into its metaphysical claims, but other areas get merely a mention (if at all). Perhaps it’s just me: I enjoy this kind of essay because it is not bogged down by the academic requirements of peer-reviewed journal articles. Thus, it has the potential of being a clear, fairly accessible “snapshot” of where research is right now. I’d like to see more, and more in-depth, efforts of this kind.

    @SelfAwarePatterns, Just checking: are you aware that “brain as prediction machine” research programmes are big and well developed? The link to consciousness is being developed right now, but Predictive Processing is big and firmly established. I ask because your comment makes me think it’s fairly new to you, which would be somewhat surprising.

    End of quibbles. Not much “content”, I’m afraid.

  4. It is obvious from reading the article that Anil is not just a researcher but also a student of popular philosophy: he addresses the popular theory of enactivism (the prediction machine), and his invocation of the rubber hand is a nod to the work of Metzinger, whom every researcher has on his bookshelf.

    The brain is a beautiful prediction machine but prediction is another invocation of objective science along with IIT, with information being another objective human invention along with mathematics.

    Actually, brains don’t predict and are not prediction machines; in reality nature is a learning and shortcut machine. The brain builds its own repertoire or library of objects so our ancestors did not have to relearn an apple from a pear from a banana each time they perceived one. Only in this day and age do we worry that some evil demon may have substituted plastic fruit for what’s growing on the tree.

    The whole root of science is a dualism of language itself, which applies secondary stimuli, or words, to the objects of reality. Dualisms are true because language (and thought) is the secondary system of communication (and stimuli) naturally created by the advanced mammalian brain. Because we are blind to the language mechanism, it is little wonder all the quibbles arise.

  5. @Sergio,

    I take your point about zombies, but I’d be inclined to say they have minds of some kind. You can discuss philosophy with them, after all! The topic gets difficult for me because I don’t really believe they are possible, so in discussing their properties I tend to lose track of what I think is true and what I’m merely allowing for the sake of argument…

    I’ve always thought that the easy problem is prohibitively difficult

    The Easy Problem isn’t easy and it isn’t one problem either…

  6. Uh, oh. The sky is about to fall on our heads: did I just score half a point or something? Wasn’t supposed to happen ;-).

    Seriously, though: I get your point as well. I don’t think zombies are possible and I even struggle to see how the thought experiment might be useful. To me, if accepted, it just moves the problem uncompromisingly outside the reach of empirical science, which has to be premature: until the (so-called) easy problems (yes, there are many – one more agreement) are solved, it remains possible that their solution will extend to the hard problem.
    IOW, accepting the p-zombie argument at this stage is a sure way to muddy the waters. For example (using our current case): if zombies “have minds of some kind”, then why not the PC I’m using now? If both zombies and PCs are just a bunch of mechanisms, why not a mill? You know the drill: we end up strolling away from the interesting/answerable questions. By accepting p-zombies, the difference between patients in deep coma and locked-in ones becomes impossible to discern empirically, because our theory posits that there needn’t be any measurable difference. Not my cup of tea!

    One thing that frequently looms in our discussions is that you keep referring to zombies, or qualia of the supernatural kind, as if it was established that they both should be taken as “accepted arguments”. Then you remind me that you don’t actually believe they make sense, but I don’t think I’ve ever asked: OK, so why do you keep dragging them in?
    I ask because I expect that you’ll have a good answer. [i.e. I don’t think that “merely allowing [zombies/qualia] for the sake of argument” counts as a very good answer, sorry!]

  7. Tom,
    It seems like we agree more than we disagree here. I’d just note that to have any chance of answering the question about experience, I think we have to ask what experience actually *is*, to essentially unpack the word “experience”. The problem, as I mentioned above, is that experience is subjectively irreducible.

    But there’s no reason to conclude that it’s *objectively* irreducible. Of course, as soon as we attempt that reduction, many people troubled by the hard problem insist we’re not addressing the actual problem. This conundrum leads me to agree with a tweet that Sean Carroll made, that we’ll never “solve” the hard problem, just eventually stop seeing it as an actual problem.

  8. Sergio,
    I had heard the prediction idea discussed before, notably as an aspect of intelligence in AI research. But I have to admit that, prior to reading Feinberg and Mallatt’s book, I hadn’t considered it as being what consciousness is essentially about. (F&M’s book shook me out of a lot of blind spots, particularly out of what was, in some ways, an overly anthropocentric focus.)

    I have to further admit that until your comment, I wasn’t conscious that there had been a lot of work on this idea in particular. So I guess it is fairly new(ish) to me. Any recommendations on additional reading?

  9. SAP: I find the work of Andy Clark the most accessible; it was what I myself used to catch up (he has a new-ish book out, “Surfing Uncertainty: Prediction, Action, and the Embodied Mind”, which I confess is still in my ever-growing “to-be-read” pile).

    Gentle starters online are:
    1. (shameless self-promotion) my own two little posts, the first in particular. At the bottom you’ll find the link to the FT of “Whatever Next? Predictive Brains, Situated Agents, and the Future of Cognitive Science” which is the article that got me started.
    2. Clark’s series on the Brains Blog, which includes a number of posts describing what’s in the latest book. Starts here.
    3. Still on the philosophy side, there is also Jakob Hohwy’s book The Predictive Mind, which does help in familiarising oneself with the approach. I did think it goes too far in many respects, though.

    I guess the above should give you all that’s needed to then dive into the actual scientific literature, if you’re so inclined (the links/books above will contain pointers to get started: as always, there is a lot out there!).

  10. The zombie conjecture was very good at separating the behavioral aspects, or the objective science of what brains do, from the aspect of inner experience. For complex brains, inner experience is actually multiple senses from within the body and without, which get enveloped by the system into all forms of observable behavior.

    The less technically trained people are, the more philosophical they can be. It is true that nobody “solved” liquid water; it was simply shown to be the same H2O molecules behaving in a unique way within a set energy range that enhances the dipole effect. In other words, it is how the forces of nature behave inside of and between neurons when they activate. Network views are just objective mathematical data that do not capture what is occurring inside of them. The flat earthers had no need for forces either, because all of the data proved everything heavy moved from up to down.

    People were blind to the engineering of early TVs because nobody saw them as two radio receivers in the same box: one receiver for audio and one for visual. The complexity is what escaped us. The point of the article is that the uncovering of complexity gives us better understanding, or the ability to throw away misconceptions or bad data. Of course, the greatest aspect of brain complexity is how we derive the objective world, along with language and eventually science, which is a universal human language and culture within the scientific group.

  11. OK, so why do you keep dragging them in?

    Well, I suppose:

    a) out of respect for the wider dialogue and range of views, and
    b) because even propositions that are wrong can be illuminating and interesting.

    If I only discussed my own opinions then instead of depicting the magnificently chaotic phantasmagoric circus that is consciousness studies, I should have a snap of a tubby bloke on a squeaky unicycle.

  12. Singularity of existence is a linguistic myth.

    In philosophy, one very important question needs to be resolved before we can get rid of confusion. It is the “Boundary Problem”, which is linked to the genesis of self.

    For anything to exist it must

    #1. Occupy space
    #2. Have internal structure as cause of being
    #3. Have an oscillator: a sustained source of work
    #4. Have external structure to make an effect upon, or to be affected by.

    The unity of name is not unity of existence. Everything is endlessly divisible. Everything is made up of its constituents. The seeming unity is a matter of observational or social convenience of reference. All ontological entities are “many-in-one” entities.

    Where does internal structure end, and where does external structure begin?

    Let us think of a human baby in a mother’s womb, or dog “babies” in the womb of a bitch. Does the mother’s self include the baby’s self? Or does the baby have its own self?

    The Hard Problem is a myth created on the basis of an anthropocentric worldview.

  13. #14 Shaikh Raisuddin

    For anything to exist it must:

    #1. be what it is
    #2. not be what it is not

    🙂

    The first condition establishes self-referential and hence ineffable stuff, a special case of which are the qualia of consciousness.

    The second condition establishes relations of that stuff to everything else (including part-whole relations), enabling relational/structural descriptions of the stuff.

  14. Peter,
    that’s fair enough! Hopefully after quibbling with you about the same issue for the Nth time, I can finally stop asking similar questions over and over…
    I most definitely do not wish you to stop engaging with the “chaotic phantasmagoric circus”, you are providing a very valuable service which I think we all appreciate more than you would think. [We/I would also very much love to see the unicycle snap! ;-)]
    Overall: I will try to stop asking for equivalent clarifications in the future (knowing my stubbornness, I fear that I can’t promise I won’t).

  15. Seth’s understanding of philosophy seems weak, but at the same time his idea that we should have a better understanding of reality via science before committing to philosophical positions is the right one.

  16. “Conceptual Singularity” is NOT “Ontological Singularity”

    “Knowing” is NOT “Feeling”

    Knowing is social, linguistic and communicable.

    Feeling is individual, mechanical and non-communicable.

    There is no Hard Problem.

  17. real subjective experience

    I wonder if that’s dualism’s new clothes? Instead of mind/material, the borderline has shrunk back until there’s real subjective experience/zombie? Never mind a whole mind anymore, not asking that much, just real subjective experience?

    Have any thought experiments been floated where a zombie has half real subjective experience? Like half of it they just don’t have? Or is it unhalvable?

    Because what if you could split it further – down and down, into pinpoints of real subjective experience scattered amongst a lot of zombie perception? And indeed, zombie non-perception that the zombie can’t see, any more than we can see the edge of our vision? What if the half zombie took all the pinpoints and its incapacity to see the lack of perception, and treated the pinpoints as all there is to itself?

  18. Peter


    After all, we’re required to find it conceivable that my zombie twin could be exactly like me without having real subjective experience, so it must be possible that we could come to understand his mind totally without having any grasp on my qualia.

    Who requires it ? Or are you just explicating the specific Chalmers viewpoint ?


    (Unexpectedly, a physics quibble; isn’t that temperature? Heat is transferred energy, isn’t it?)

    Think you’re right there Peter. Average kinetic energy will increase with an increase in temperature. Heat flows over temperature differentials.


    Indeed, he proceeds to say that David Chalmers’ hard/easy problem distinction is inherited from Descartes. I think he should show his working on that.

    Yes, that view itself is probably inherited from Searle .. I think in contemporary terms the most likely people to be mired in substance quandaries are mathematicians/philosophers who have done a bit of physics but not enough. They tend to be the ones who can’t distinguish stuff from the model of the stuff, which is a root of the hard/easy issue.

    SelfAware:


    We can’t experience the mechanics of experience; the brain just isn’t structured to do that, but that doesn’t mean it’s *objectively* irreducible.

    I think a better way perhaps of saying this is that although mental phenomena are irretrievably irreducible – in any way – that doesn’t mean that they don’t have causes.
    I may be assuming .. did you mean something else ?

    JBD

  19. John,
    That’s not quite what I meant, although I definitely agree that all mental phenomena have causes.

    What I actually meant is that experience is composed of lower level components, which we’re unable to experience. For example, if I see a tiger charging toward me, that’s an experience. But that experience is composed of me taking in sensory patterns and building image maps, which are then associated with previously learned models of what type of animal I’m looking at, what its capabilities and inclinations are, and what its location and motion are relative to my body (which I also have a model for), all of which in turn trigger emotional reactions felt as affective states (in this case, most likely adding up to panic).

    In the instance of having this experience, I’m not aware of all the lower level cognitive machinery building that experience. I’m only aware of the experience itself. The brain has sensory apparatus to take in information about the environment and the state of its body, but doesn’t have any sensory apparatus in itself. It evolved to be the perceiver, not the perceived, so its ability to know itself is profoundly limited.

    That’s why, subjectively, experience is irreducible. But that *subjective* irreducibility isn’t evidence for it being *objectively* irreducible. Due to the nature of how the mind evolved, studying the components of experience will always feel like we’re studying the other, it will never feel like it adds up to our actual felt experience, because we can’t feel the lower level components of feeling, or experience the lower level processes that compose experience. Said another way, we can’t directly experience our internal machinations of experience.

  20. Or are you just explicating the specific Chalmers viewpoint ?

    Yes, I’m really setting out the Chalmers view (though he sort of owns the argument, I suppose). In my own view the switch from possibility of zombies to conceivability is a dubious move that tends to mislead our intuition, BTW.

  21. SAP


    But that experience is composed of me taking in sensory patterns and building image maps, etc

    Are we talking here about a genuine case of “reducibility” or merely acknowledging the fact that day-to-day experience is composite ?

    I think you’ve made a little informational leap in your analysis.

    I think that your definition of ‘objectively reducible’ relates only to those components of experience that are informational in nature. Information will always be a component of experience but that information is distinguishable from the non-informational aspect of experience. To say ‘I am scared’ is information but to feel it is not information. Of course we tend to treat feelings as being “as good as” information in our private lives and that usually leads to disaster .. but information they are not and they are not reducible.

    I don’t think that anybody would suggest that experience wasn’t reducible, at least as far as the information of that experience was concerned.

    J

  22. John,
    If I’m understanding the distinction you’re making between genuine reducibility and composite, I’m not talking about things we consciously group together, such as grouping all the experiences of an event under a memory of the overall event. I’m talking about primal conscious experience.

    I don’t agree on the distinction you’re drawing between information and feeling. Just as we have to ask what “experience” is, we also have to ask what “feeling” is. Feeling is, again, subjectively irreducible, but that doesn’t mean it’s objectively irreducible.

    Consider a feeling of disgust. Maybe sensory information flows in from the visual and olfactory systems, which brings up learned or instinctive models, which in turn generates a response from the limbic system of an emotional reaction, which is registered as a negative affect in the affective awareness system. We’re not aware of all this processing when we, say, see a sewer system malfunctioning. We just experience the unified feeling. In other words, we can’t feel the mechanics of feeling, but that doesn’t mean those mechanics aren’t there.

    All of which is to say that I think feelings are electrochemical signalling – in other words, information. Although I’m totally open to considering alternate theories of what feeling might be.

  23. I think SAP’s distinction between subjective irreducibility and objective irreduciblility is a powerful and pivotal one. The idea that ‘I can’t reduce it any further – therefore no one and nothing can reduce it any further’ needs to be questioned.

  24. SAP


    I don’t agree on the distinction you’re drawing between information and feeling.

    Why ? Can you explain why they are the same ?


    Just as we have to ask what “experience” is, we also have to ask what “feeling” is

    This doesn’t amount to an explanation of why information is the same thing as feeling ..


    Feeling is, again, subjectively irreducible, but that doesn’t mean it’s objectively irreducible.

    OK. Let’s say I want to eat, let’s say I’m hungry. Decompose that feeling for me – and I’m not interested in biological/dietary issues as they are irrelevant. I’m also not interested in your computational representations of hunger, as they are also irrelevant (as well as being speculative).

    Decompose the feeling

    Your analysis of disgust is, I think, not a reduction of “disgust” – in any sense of the word – but a speculative account of the mechanics that, some way down the line, eventually lead to the feeling of disgust. If I were to reduce the sense of “pain” caused by hitting my thumb with a hammer I wouldn’t start by including the hammer in any “reduction”, but that seems to be what you are doing – “a leads to b leads to c leads to a feeling”.

    JBD

  25. SAP


    “All of which is to say, that I think feelings are electrochemical signalling, in other words, information. ”

    The analysis of a speculative model of the causes of feeling is informational. You are describing a potential mechanism. That is what you have provided.

    But the information of an analysis of the causes of a feeling is not the same thing as a feeling.

    If I can think about something then I produce information in relation to it. There is no other way to be.

    A brick, for instance. A brick is not “information” – it is real matter – but nonetheless I, as a conscious being, cannot help thinking about a brick when I experience it. My brain generates information about that brick, otherwise it will not fit into my consciousness. But it is the brain that creates that information. It is not in the brick.

    It is exactly the same with feelings. To experience them necessitates that your consciousness tracks them and creates information in relation to them, as in matter interactions. But feelings themselves are not information, any more than a brick is.

    JBD

  26. John,
    As I finished in my last comment, I’m open to alternate theories about what feeling is. More specifically on information, if there is evidence of something there besides electrochemical signaling, I’m totally prepared to change my mind. But having read my share of neuroscience books, I haven’t seen any such evidence, certainly nothing rising to the level of the empirical evidence of a brick.

    On decomposing hunger, I fear any attempt to do so will only end with you saying the same thing you’re saying here, that I haven’t explained hunger, only what causes it. No matter how detailed an explanation I researched, no matter what scientists are ever able to discover about it, you’ll always have that rejoinder.

    But let’s say I took that tactic with temperature. I could challenge you to explain temperature. You might discuss the speed of molecular interactions and how temperature emerges from the kinetic energy of all those particles. But I could insist you haven’t explained temperature, only what causes it. You could point out that there’s no evidence that temperature is anything other than that molecular kinetic energy. But if I’m stubborn about this, if I’m determined to keep a dualistic conception of temperature, what explanation could you give to convince me you were really explaining temperature?

    If we’re not willing to at least attempt to pierce the veil of what experience and feelings are, if we just fold our arms and insist that they’re fundamental and any attempt to figure out their components is invalid, then no progress is possible. Maybe there is something to feeling besides information, but it’s not clear how we’re ever going to figure that out if we’re not willing to explore what a feeling actually is.

  27. SAP


    “More specifically on information, if there is evidence of something there besides electrochemical signaling, I’m totally prepared to change my mind.”

    I don’t understand. Are you denying the existence of mental phenomena ? Sounds like it.


    “But having read my share of neuroscience books, I haven’t seen any such evidence, certainly nothing rising to the level of the empirical evidence of a brick.”

    That is clearly accountable as a lack of scientific knowledge. As no third-party explanation of mental phenomena exists, you won’t find one. If there was no explanation of a sunset, would you deny its existence ?


    “I could challenge you to explain temperature. ”

    Hardly comparable. Temperature is not a mental phenomenon, it’s a thermodynamic, epistemically objective metric. It’s clearly related, as are all metrics, to aspects of consciousness – namely the feeling of ‘hotness’.

    Kinetic theory is a theory of what causes heat and hence the feeling of hotness. Nobody would suggest that the kinetic theory was a tautological equivalent of the feeling of hotness – seems a strange thing to want to suggest. I’m sure most people see it as an explanation of hotness nonetheless.


    If I’m determined to keep a dualistic conception of temperature

    That is not possible, as temperature is not a mental phenomenon.


    If we’re not willing to at least attempt to pierce the veil of what experience and feelings are, if we just fold our arms and insist that they’re fundamental and any attempt to figure out their components is invalid, then no progress is possible.

    Feelings are irreducible : they can’t be decomposed. But that doesn’t mean to say they don’t have epistemically objective causes, despite being mentally subjective and irreducible in nature. There is no contradiction, except to a certain computationalist mindset which wants to believe qualities can become quantities if you do enough mathematical modelling.

    J

  28. The fallacy of hotness lies in our hidden notion about language and memory.

    #1. Is language “non-physical”?

    NO !

    Each word of language has its “physical correlate”. If we define hotness in terms of its physical correlate we will abandon the notion.

    There is no contradiction, except to a certain computationalist mindset which wants to believe qualities can become quantities if you do enough mathematical modelling.

    It’s not a matter of wanting to believe, it’s a matter of being stuck there. Someone sitting on their car bonnet in the middle of nowhere doesn’t want to believe their car is out of fuel. They are just stuck there.

    If some damage to the brain causes you to be blind, blindness is occurring not because you want to believe you are blind – you’re just stuck with it. Unless you engage in Anton–Babinski syndrome.

    I’m not sure SAP wants qualities to become quantities – it’s just by his estimate that that is where he is stuck. It’s not a matter of wanting to believe it.

  30. Callan,
    That’s a good way to put it. I’m not arguing about the way I *want* things to be, only about the way I think the data points to them being.

    I do have to admit that I find attempting to quantify things to be one of the most productive ways to understand them, which effectively makes me guilty as charged on John’s accusation of a computational mindset.

  31. SAP: “I do have to admit that I find attempting to quantify things to be one of the most productive ways to understand them…”.

    Quantification is the objective mode of description par excellence, which is why qualities don’t figure in physics and why I think it’s a mistake to go looking for qualia as physical phenomena. Qualities really are irreducible – that’s one of their defining characteristics (subjective privacy is another), so any attempt to cash them out in terms of objective quantities simply ends up talking about something else, e.g., the quantifiable physical characteristics of the neural spike trains *associated with* having qualia.

    In my experience this advisory will have precisely no effect on physicalists who suppose that to exist is necessarily to be physically specifiable. But that’s ok because their attempts to physicalize the phenomenal will certainly help clarify the NCC (so long as they don’t end up in the cul-de-sac of panpsychism).

  32. Tom,
    Interestingly, qualities, as in attributes or properties, actually are part of physics theories, but physicists often refer to them as “baggage”, since they feel bolted on to the mathematics, and those qualities have a history of eventually being reducible. Max Tegmark, a proponent of the MUH (mathematical universe hypothesis), speculated that the much sought after theory of everything will be a purely mathematical one, in other words, free of that baggage. I’m agnostic on that and a bit skeptical of the overall MUH, but it’s hard to argue with the “unreasonable effectiveness” of mathematics.

    Just so you know, while I’d own up to the label of physicalist (provisionally), I’m not a panpsychist. I see panpsychism as attempting to explain our failure to find the ghost in the machine by positing that the ghost is everywhere. To me, it’s much simpler to conclude there is no ghost, just the machine, but I suspect that conclusion goes to the heart of the difference between how we see consciousness.

  33. I think it’d be easier to just explain how a machine can end up seeing a ghost/an illusion.

    Qualities really are irreducible – that’s one of their defining characteristics

    Defined by whom? It’s irreducible and the evidence for that is…that it’s defined as irreducible?

    I get that there may seem no other option but irreducibility, therefore qualities have to be irreducible for having no other option. But why does there seem no other option?

  34. SAP & Callan:

    By qualities I meant phenomenal qualities, aka qualia, not material attributes or properties that figure in science (e.g., mass, charge, spin, etc.). To talk about physical characteristics is to change the subject, the way I understand consciousness. In a recent Tucson talk, Dennett doesn’t change the subject when he describes the irreducibility of the basic qualities in our experience, where we reach the limits of discrimination:

    “If you take a wine tasting course or an ear training course you can learn new vocabularies and you can come to describe heretofore ineffable parts of your experience, but there’s always a limit, and wherever that limit is reached, you’re stuck with unwitting metaphor and fiction.”

    Actually, what we’re stuck with are the things in experience that can’t be further decomposed or described, thus only named, like red, pain, sweetness and other basic phenomenal qualities (qualia). There’s nothing metaphorical or fictional or ghostly about any of these, e.g., searing pain, and whether they are reducible to physical goings-on is a very open question. We of course won’t find them in the machine of the brain (no ghost in there) since they aren’t public objects like brains, but rather the qualitative terms in which public objects appear to conscious beings like us. For science, public objects (what’s intersubjectively available) get described quantitatively, and equally we won’t find numbers and equations out there in spacetime.

    As a general rule, one shouldn’t expect to find the terms of one’s model of reality, whether those terms are qualitative or quantitative, in reality as thus modeled. But, wanting a complete model, we naturally go looking for them there, and end up baffled when we don’t find them. So it’s understandable that some folks are skeptical about the reality of consciousness (Dennett says qualia are “fictional”), even though it’s the qualitative representational terms in which the world appears to us as individual conscious subjects.

    Dennett’s talk starts at about 44 minutes here: https://www.youtube.com/watch?v=JoZsAsgOSes

  35. Tom,
    I for one wasn’t trying to change the subject, but was trying to respond to what I thought you were talking about. On qualia, in the sense of being instances of conscious experience, my response would be the same that I used above for experience and feeling, as it would be for any other synonym of conscious experience.

    I take Dennett, in that quote, to be talking about subjective irreducibility, and I think the rest of his talk agrees with what I’m saying about objective reducibility (he mentions neural spikes repeatedly), although I sometimes find his language overly strident. But in that video, I definitely agree far more with him than the other speakers.

    On the ghost, I’m a bit confused by what your actual position here is. It sounds like you think the ghost (soul, spirit, quantum consciousness, etc) doesn’t exist (which I agree with) but also that attempts to reduce experience to the workings of just the machine are invalid. I’m curious what you think fills the gap.

  36. I perceive an explanatory gap there as well, which seems to indicate a problem in the reasoning rather than something that can be filled. How can it feel satisfactory to say that something can’t be explained in material terms, but isn’t something supernatural either?

    And there doesn’t seem to be an answer to why there seems to be no other option than irreducibility. I mean, ‘I can’t describe it at a further reduced level, therefore it’s not describable’ clearly isn’t true in plenty of cases – we can’t describe the inner workings of the computers we use, for example.

    and equally we won’t find numbers and equations out there in spacetime.

    I agree – but simply saying we won’t and leaving it at that hardly seems to have a spirit of enquiry to it.

  37. SAP & Callan, I take it that the hard problem of explaining consciousness is to explain the subjective irreducibility and privacy of qualia, the basic elements of experience. Why and how do only certain combinations of neural spike trains (or whatever the NCC are found to be) end up entailing the existence of qualia for the system? If you can answer that, then the hard problem is solved, and if you want to call this an “objective reduction,” that’s fine with me.

    In the talk I linked to, where Dennett denies there’s a hard problem, he poses what he calls the hard *question*:

    “How could we capture the enjoyment, the disgust, the pain of a person’s reaction to an experience in terms of spike trains in neural tracts? That is the hard question, and it’s not a rhetorical question. We should answer it. That’s where we’re making progress.”

    This sounds a lot like the hard problem, since enjoyment, disgust and pain are all qualities. Qualia remain the prime explanatory target in explaining consciousness, even for Dennett, even though he says at the beginning of the talk there are no qualia, but only seem to be.

    As to what might close the explanatory gap (but not be supernatural), I follow Metzinger and others in suggesting that any representational system at our level of recursive complexity perforce ends up with representational primitives which the system can’t decompose (they are cognitively impenetrable) and which therefore necessarily end up as qualitative for the system. There are further considerations which point in more or less the same direction, discussed in part 5 of “The appearance of reality” at http://www.naturalism.org/philosophy/consciousness/the-appearance-of-reality

    As Dennett says, we’re not going to get a causal story of qualia being produced by neural spike trains, which is why we’re not going to discover them as physically characterized phenomena in the brain or anywhere else (so there’s no objective reduction forthcoming in that *physicalist* sense). Yet they are perfectly real as the qualitative terms in which the world is given to us. The physical, on the other hand, is how we *represent* the world to be using those terms as well as concepts and numbers. So representation comes first, the way I see it.

  38. Tom,
    I didn’t realize you were the proprietor of naturalism.org. I’ve come across that site a few times over the years when researching various things and often found it to be useful, so I’m glad to have the chance to “meet” you.

    Reading your comment and the abstract at the link, it feels like you, Callan, and I are not that far apart, that we may be tripping over language differences. The epistemic perspectivalism you discuss may be another way of labeling the distinction I’m making between subjective and objective. (Admittedly I haven’t read the full article, so maybe the details make this untenable.)

    On representational primitives that the system can’t decompose, would you say that they don’t have a composition? Or that they do but that it’s simply unavailable to the system? Or are you saying that speaking of those representations from a perspective outside of the system is invalid?

  39. I understand “Don’t Sweat the Hard Problem”.

    But it is a little difficult to tell whether the writer understands Seth’s thought. As long as the argument is scientific, there is no room for dualism. It seems there is a bad effect from Chalmers’ Hard Problem.

    Though the article says “the level of consciousness is not really as simple”, the level seems to be affected by “contents”. Whether or not the writer accepts IIT, that would be possible.

    Though it is not shown precisely in IIT, Φ cannot show contents. (I think that is natural, if IIT pays attention to the richness of consciousness.) Though it rather should show them, it could not do so, due to the effect of Chalmers’ incorrect (unfortunate) Hard Problem theory.

    As the writer notes that “Seth sensibly declines to go all the way with” IIT, there seem to be some conflicts on this point.

    Attached are my comments to Seth.
    ________________________________
    I understand your worry about the Hard Problem. I have some big issues with Chalmers.

    Many people understand qualia as intuitive things. But I think that is too broad. Qualia should be separated into unit qualia and association (relevance). If there are some minimum units of consciousness (unit qualia), larger consciousness should be shown as a combination of smaller ones. Or if there are small Φ consciousnesses and a relevant large Φ consciousness, it should be natural to try to separate the large one into smaller ones.

    IIT might be affected by Chalmers, and it tries to measure a large Φ naively. But on my hypothesis, unit qualia and association are separate. Meanwhile, at this point IIT has the value of thinking about the ‘ability of measurement’, and might keep its distance from the Hard Problem. I would like to support IIT’s good points.

    http://mambo-bab.hatenadiary.jp/entry/2014/12/09/005711

  40. SAP


    but physicists often refer to them as “baggage”, since they feel bolted on to the mathematics, and those qualities have a history of eventually being reducible.

    What distinguishes physics from mathematics is precisely this “baggage”. It seems to me the analysis of this baggage is physics !!

    Maybe it’s Tegmark’s extensive AI funding that persuades him that the “baggage” can’t be allowed to get in the way.

    I can’t think of many other physicists who assume this belief, and precisely zero before the digital era.


    but it’s hard to argue with the “unreasonable effectiveness” of mathematics.

    Except where it doesn’t work ? Mathematics is a tool of physics – it always has been and always will be. The subject matter is not mathematical itself, but the “baggage” we otherwise refer to as nature.

    None of the biological sciences are strong on mathematics. Darwinism has no mathematics – does that make it bad science ? Nor of course is Darwinism reducible to physics, which would not recognise or distinguish biological agents, being instead focussed on matter in motion.


    To me, it’s much simpler to conclude there is no ghost, just the machine,

    It may be simpler but it’s fantastically dishonest. The “ghost” – otherwise known as the widespread, indisputably mundane and unspectacular proliferation of mental phenomena – is exactly what the argument is about. It seems it is deemed legitimate to turn the mundane into the supernatural, the world beyond explanation, precisely because the toolkit of choice is incapable of dealing with it.

    It’s what I’ve always maintained : computationalism is a fantastic anti-scientific cop-out. Pretend the phenomena are not there because otherwise it gets complicated.

    J

  41. Tom


    ..for Dennett, even though he says at the beginning of the talk there are no qualia, but only seem to be.

    Good old DD. Fantastic linguistic tricks abound, as per his ‘proof’ of the compatibilism of free will and computationalism, which ‘succeeds’ only by sneaking in a fully formed free-will agent half way through.

    Mental phenomena are no more than “seeming”, so you can’t disprove their existence on the basis of ‘seeming’, as ‘seeming’ is all they ever do. It’s the ‘seeming’ that is the qualia, and it wants for an explanation. Maybe somebody should hammer Dennett on the pinky with an ice pick, and reassure him there’s no pain, there just seems to be.

    J

  42. SAP: “On representational primitives that the system can’t decompose, would you say that they don’t have a composition? Or that they do but that it’s simply unavailable to the system? Or are you saying that speaking of those representations from a perspective outside of the system is invalid?”

    Here’s a bit of section 5 of “The appearance of reality” on some logical considerations about representation which speaks to your questions (there’s more on *adaptive* considerations that I didn’t include here since the length already might deter you from reading anything!). These considerations might get us at least in the vicinity of qualia. Re your third question, in building a representational account of consciousness we necessarily have to speak about representations from a third-person, objective, hence non-qualitative perspective from outside the system.

    Here you go:

    1. Root representational vocabulary. Logically, a representational system (RS) must have a basic, irreducible, reliable set of elements that can’t be further broken down, a root combinatorial vocabulary which gets leveraged in representing complex states of affairs (language, math are such systems, does the brain or its sub-systems have basic neural vocabularies?). Representational systems need basic inter-defined, contrastive elements in some format with which to operate, and these must be non-decomposable and unmodifiable for the system. The properties of the basic elements are arbitrary for the RS, hence not represented facts for it, not facts about the world it represents using those basic elements (e.g., there’s nothing actually red in the world, rather red is how the RS arbitrarily experiences its capacity to discriminate one part of the world from another via light reflectances). Qualia seem to be just such root representational, contrastive, content-bearing elements within conscious experience.

    2. Representational self-limitations. Logically, the RS can’t be in a direct representational, epistemic relationship with its own front-line, basic elements of representation; it doesn’t and can’t (directly) represent them (they represent the world for the RS), so they won’t be facts about which it could be wrong or right, but just counters (pieces) in the game of representation (see http://www.naturalism.org/philosophy/consciousness/killing-the-observer#Limits ). It can’t misrepresent them, hence they are direct irreducible givens for the RS as it constructs its reality model (Metzinger). This again is exactly how qualia (red, pain), the basic elements of experience, present themselves in consciousness: as impenetrable, untranscendable elements about which we have no say, that can’t be second guessed, that are “irrevocable” (Ramachandran and Hirstein in “3 laws of qualia” http://www.imprint.co.uk/rama/qualia.pdf , p. 437). Qualia are non-inferential, immediately given, non-conceptual (Michael Tye), system-real elements that get combined in our experience of objects, scenes and the world, whether veridical or not. What’s system-real is just that which the RS can’t transcend in its representational work, what it can’t modify or control as an element in its representational economy.

    3. Limits of resolution. The RS will necessarily have limits of resolution as entailed by behavioral requirements and physical limitations – there’s no need or capacity to discriminate beyond a certain level. This suggests a basic neural vocabulary for each sensory channel keyed to tracking differences in the external world and body as determined by the resolution of neurally instantiated state-spaces, e.g., hue discrimination, pitch discrimination, touch discrimination. The resolution is also set by constraints of smallest physical units being responded to, e.g. photons, wavelengths, chemical compounds. These limits of resolution parallel and perhaps entail the irreducible phenomenal elements of conscious experience that get defined in experimental setups by just noticeable differences (JNDs) between consciously experienced colors, pressures, temperatures, tastes, smells, etc.

    Conclusion re logical entailments. Any representational system (RS) at our level of recursive complexity will have a bottom level, not further decomposable, unmodifiable, epistemically impenetrable (unrepresentable) hence *qualitative (non-decomposable, homogeneous and ineffable)* set of representational elements. These elements are arbitrary with respect to what they represent since the RS only needs reliable co-variation, not literal similarity. They therefore appear as irreducible and indubitable phenomenal realities for the RS. Of course, these logical criteria by themselves imply that consciousness might attend very simple representational systems like thermostats. We can’t perhaps empirically rule out such consciousness, but we have no prima facie good reason to think thermostats entertain phenomenal states given the empirical evidence that consciousness correlates with certain higher level capacities, see http://www.naturalism.org/philosophy/consciousness/killing-the-observer#Neuroscience . The adaptive considerations set forth below will narrow the range of plausible candidates for consciousness to more complex systems such as ourselves and some other animals.

  43. Tom,
    I appreciate the explanation. If I’m understanding correctly, you’re saying that the system itself can only work in terms of its most primitive representations. I have some possible quibbles, but I mostly agree.

    My point would be that those representations, at some rubber-meets-the-road point, would have to be composed of something. The system itself can’t access that composition. In other words, they are irreducible to it, which to me is another way of saying they are subjectively irreducible. But that shouldn’t mean that they are ontologically irreducible.

    Let me make a very inexact comparison. Consider a binary bit (1 or 0, true or false, etc.) in computer software. Generally, the bit is the lowest level of representation that software can work with. (Many high-level computer languages can’t even work with individual bits, but C or assembler can.) A bit is effectively irreducible to the software.

    But a bit itself is physically a transistor in one of two states (or one of two ranges of states) allowing more or less electricity to pass through it. The transistor has components. No software has access to those components, but they are there.

    As I said, a very inexact comparison since primitive representations would no doubt be far more complicated than a bit. But my point is that, just as a bit is irreducible to software but is reducible in terms of hardware, a primitive representation may be irreducible for the conscious portion of a system but reducible to the non-conscious portions. And if the physical reality of that representation can be found, it should be reducible in terms of that physical reality (neurons, synapses, proteins, etc).
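    To make the two levels explicit, here’s a minimal Python sketch – my own toy, with the class, the 0.5 V threshold and the voltages all invented for illustration. Code using the bit can read only 0 or 1; the voltage that realizes the bit is real but invisible at that level:

        class HardwareBit:
            # A "bit" realized by a hidden analog state (a stand-in for a
            # transistor). Software-level code can only ever call read().
            def __init__(self, voltage):
                self._voltage = voltage  # the substrate, inaccessible "from inside"

            def read(self):
                # Invented convention: anything above 0.5 V counts as 1.
                return 1 if self._voltage > 0.5 else 0

        bit = HardwareBit(voltage=0.73)
        print(bit.read())  # 1 -- the only fact available at the software level
        # read() returns the same 1 for voltage=0.51 or voltage=4.9: that
        # difference is ontologically real but invisible to the software.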

    We can already do this to some extent. For example, my experience of seeing something white in color begins with the red-sensitive, blue-sensitive, and green-sensitive cone cells at a certain spot on my retina all being excited at the same time. I don’t perceive being exposed to red, blue, and green all at once, just white. White is the representation my lower-level cognitive machinery constructs for that input.

    No matter how long I stare at that white object, no matter how hard I concentrate, I’ll never have access to those mechanics. I’ll never be able to reduce the experience of white. But that doesn’t mean it isn’t reducible to someone studying my nervous system. (Of course, an outside observer will never be able to access my personal subjective experience of white. The subjective / objective divide works both ways.)
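    In sketch form – again a toy of mine, with the labels and thresholds invented – the percept function hands the subject only a label, and nothing in the label lets you recover the cone triplet that produced it:

        def percept(l_cone, m_cone, s_cone):
            # Collapse three cone activations (0..1) into one color label.
            # The subject gets only the label; the triplet is discarded.
            if min(l_cone, m_cone, s_cone) > 0.8:
                return "white"  # all three strongly excited at once
            if l_cone > max(m_cone, s_cone):
                return "reddish"
            return "other"

        print(percept(0.9, 0.9, 0.95))  # "white" -- not "red+green+blue"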

    This is why I think we’re really not that far apart in our views. At least, unless you assert that a representation is not only subjectively irreducible, but ontologically irreducible.

  44. The RS will necessarily have limits of resolution as entailed by behavioral requirements and physical limitations – there’s no need or capacity to discriminate beyond a certain level.

    I think what’s important to the discussion is where that lack of need comes from – a sense of contentment, I’d estimate. I’d suggest ‘need’ is being used here in the sense that, when one has eaten until full, there’s no need to eat more.

    I agree with using the word ‘need’ in that application (and I agree with the claim about the end of capacity as well). And yet, in contrast, corporate-funded neuroscience is gluttony!

  45. SAP, thanks for these thoughts. I agree that the representational vehicles, which we describe in objective physical terms, are composite. But as you point out, the subjective representational content, e.g., the experience of white, can’t be decomposed into further phenomenal primitives.

    You say:

    “I’ll never be able to reduce the experience of white. But that doesn’t mean it isn’t reducible to someone studying my nervous system.”

    My experience of white isn’t reducible to someone looking at the nervous system, since my experience isn’t something that can be seen. As you put it, “an outside observer will never be able to access my personal subjective experience of white”. Only the representational vehicles can be seen when looking at my nervous system. So there’s no reduction of the experience, but rather a description of the neural processes associated with it. As you put it, “…if the physical reality of that representation can be found, it should be reducible in terms of that physical reality (neurons, synapses, proteins, etc).” But the physicalist project is that of reducing *experience* to physical goings-on, so I’m not sure we agree.

    When you say “White is the representation my lower-level cognitive machinery constructs for that input”, note that in this single sentence you slide from the experiential content (white) to the vehicular description (lower-level cognitive machinery). This makes it sound as if there’s a reduction, but what’s going on is that “representation” is being used in two senses simultaneously: the conscious-content sense and the vehicular sense. The ontology of the latter is straightforwardly physical, whereas the ontology of the former is widely seen as posing a problem, since there’s no obvious reduction on offer.

    As a conscious subject, I consist of qualitative content and as you say have no access to the vehicles. The hard problem/question remains as to why the content carried by certain sorts of representational vehicles ends up being conscious, that is, qualitative and subjective. That’s what the considerations in my previous reply were trying to get at. Even if there’s no reduction to the physical in the cards (still an open question I hasten to add), there may be discoverable naturalistic reasons why certain sorts of representational systems end up with phenomenology.

  46. My experience of white isn’t reducible to someone looking at the nervous system, since my experience isn’t something that can be seen. As you put it, “an outside observer will never be able to access my personal subjective experience of white”. Only the representational vehicles can be seen when looking at my nervous system.

    Seems an argument based around encryption.

  47. “As a conscious subject, I consist of qualitative content”: This is where the problem is, I feel. Surely you consist of structural relations between a finite number of qualitative entities, the nature of which is pretty irrelevant? The way our brains work doesn’t really allow us to pretend that a quale is not a very complex result of processing – viz. the white example. So in cross-modality sensing, e.g. visual-to-auditory sensory devices, the visual cortex (at least in some people) adapts, and presumably some of those individuals end up with a quale that is similar to what they had via the original modality. And in persons with one hearing aid and one cochlear implant, are there different qualia coming in at the same time, or do they eventually get joined up? For us hearing people, we might imagine the experience as being akin to having a crappy, distorted sound quality in one headphone but not the other.

  48. Tom,
    I think you’re right that we do disagree. I’m comfortable seeing what you call a representation and the physical vehicle of that representation as one and the same thing. Just as I think the software bit *is* the transistor, despite it often being pragmatic to view them separately, I also think the representation *is* its physical vehicle.

    For me, if two things are correlated in time and space, then I think it’s reasonable to conclude that they’re one and the same. Of course, many will point out that we’re a long way from a full accounting of that correlation yet, and that’s certainly true. But all the data we do have seem to point to them being the same, and none that I know of point to them being different. (Of course, there could be data published at any time that changes this and I’d then be obliged to reconsider my conclusion.)

    On the hard problem, I think there are many potential answers, but those troubled by the problem don’t seem to find any of them acceptable. My question to them is, what kind of answer would be?

  49. SAP: “For me, if two things are correlated in time and space, then I think it’s reasonable to conclude that they’re one and the same…all the data we do have seem to point to them being the same, and none that I know of point to them being different”

    If the conscious content and its vehicles were one and the same (identical), then both would have the exact same properties, so that when viewing the neural correlates of my pain you would actually be seeing my pain. But you don’t. Pains, unlike brains, aren’t ever observed, but are undergone, and unlike their neural correlates they are qualitative and subjective. Besides the manifest difference in properties, the question for the identity theorist is why only certain sorts of neural goings-on, and not others, are identical to experiences. If you can answer that question, then you’ll have a good handle on the hard problem (or hard question, as Dennett calls it; see #40 above).

    In part 5 of “The appearance of reality” I’ve suggested a potential answer to the hard problem/question having to do with representation which I find at least somewhat acceptable (otherwise I wouldn’t suggest it). What do you think might be some potentially acceptable answers?

  50. SAP,

    Well said in comments #1 and #51. Especially this from #1: “a better name might be ‘the hard truth’ … that experience is subjectively irreducible.” What’s worth adding is that *this is what physicalism predicts*. If we consider the brain states involved in *looking at* brains that are in pain, and at neural spike trains and so on, we will quickly notice that they are different from those involved in *being in pain*. So of course the system’s representation of being in pain differs from that of seeing a brain in pain, and there is no direct inferential route between those two representations. No duh.

    Most of the time, nobody would criticize a theory for making a *successful* prediction. However, it is all too easy to (mis)attribute a different prediction to the theory and then complain that it fails.

  51. Tom,

    If the conscious content and its vehicles were one and the same (identical), then both would have the exact same properties, so that when viewing the neural correlates of my pain you would actually be seeing my pain. But you don’t.

    Consider a speculative model on which saying that is like saying that when we see the magic show from side-on (which reveals the tricks), we don’t see the magic you see as the audience.

    I know, how can pain be just an illusion? Well, under the speculative model, all pain is, is a positive voltage on a nerve wire that massages the responses of a neural net, in a way that has correlated with Darwinian survival (there are rare cases of people born without much capacity to feel pain – and presumably that hasn’t been a prime survival trait, Darwinistically speaking).

    I get that, under the model you are working from, the fact that we can’t somehow see your pain means the only option is that pain isn’t material, nor something reduction can get to. Quite the opposite of an illusion – more real than real.

  52. Tom and Callan,

    “If the conscious content and its vehicles were one and the same (identical), then both would have the exact same properties, so that when viewing the neural correlates of my pain you would actually be seeing my pain. But you don’t.”

    I think my question on this would be, what should the experience of pain look like to an observer? I think the answer, from the perspective of observing the brain, is that it will look like what it is, neural firing patterns. If we disrupt those patterns, we disrupt the pain. If someone’s anterior cingulate cortex is damaged, their ability to perceive pain can be destroyed. (Which isn’t as pleasant as it sounds. The life expectancy of such people isn’t good.)

    Tom, on your question about hard problem solutions, some that have caught my eye include Michael Graziano’s attention schema theory, Antonio Damasio’s theories of self, and the theory discussed in the linked Aeon piece. (These aren’t necessarily exclusive of each other.) The usual response from those concerned by the hard problem is that these address the information processing, but not experiences, feelings, qualia. As I said above, I think to address those things, we have to be willing to ask what they actually are.

    Currently, my own preferred answer is that consciousness is a system modeling its environment and itself as a guide to action. Such a mechanism seems to have originally evolved because it increased the chances of survival in a world of prey and predators. An individual experience, feeling, or quale is the act of forming, updating, and utilizing one or more of those models. (Obviously I’m leaving out a lot of detail here. I recently did a series of blog posts on this: https://selfawarepatterns.com/2016/09/21/the-range-of-conscious-systems-and-the-hard-problem/ )

    All that said, I’m always interested in alternative models, so I definitely plan to read your article.

  53. Thanks Paul. I wonder if you wouldn’t mind elaborating a bit more on how physicalism predicts the divide. I definitely agree that the divide is compatible with physicalism, but the prediction aspect seems like an important point I’d like to make sure I understand.

  54. SAP: “I think my question on this would be, what should the experience of pain look like to an observer? I think the answer, from the perspective of observing the brain, is that it will look like what it is, neural firing patterns.”

    I’d simply reiterate that looking at the firing patterns associated with my experience of pain is not to look at the experience, since only I undergo it – it isn’t a public object like my brain is. Experiences don’t look like anything since they aren’t possible objects of observation, rather they are qualitative terms with which we as conscious subjects model the public (intersubjective, observable) domain.

    “An individual experience, feeling, or quale is the act of forming, updating, and utilizing one or more of those models.”

    OK: given that there are unconscious processes that do these things, the question is what’s different about the conscious processes that do them, such that those processes are identical to experiences. I’ll have a look at your blog posts to see if there are any clues.

  55. Tom,
    I have to admit that my series doesn’t cover that particular boundary. (It’s mostly concerned with the evolution of primary consciousness in animals within the scope of the book I mentioned in #1.)

    This is admittedly very speculative, but my current thoughts are that it has to do with the degree of involvement of the prefrontal cortex – in other words, with the degree to which simulations are currently being run as trade-off processing on which impulse from the limbic system to follow.

    For example, when I’m driving to work on my regular route, despite the fact that there is indeed a lot of modeling happening, there’s very little trade-off processing. My limbic system’s reactions to what’s in the current models can mostly simply be followed in a habitual slumber. So I’m not very conscious of most of it.

    But if I suddenly encounter severe traffic and have to figure out alternate routes – to run simulation models of various courses of action – I suddenly “wake up” and am more conscious of what’s going on. It seems to me that this is what consciousness brings to the table: the ability to do this trade-off processing, essentially the root of reasoning.
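    As a toy sketch of that boundary – entirely my own invention, with made-up urgency numbers and margin, nothing like real neural traffic – the idea is: follow the strongest limbic impulse habitually unless the options are close, in which case run simulations and “wake up”:

        CONFLICT_MARGIN = 0.1  # made-up threshold for "close enough to need thought"

        def choose(impulses, simulate_outcome):
            # impulses: dict of action -> limbic urgency (0..1)
            # simulate_outcome: callable scoring an action by simulating it
            ranked = sorted(impulses, key=impulses.get, reverse=True)
            best, runner_up = ranked[0], ranked[1]
            if impulses[best] - impulses[runner_up] > CONFLICT_MARGIN:
                return best, "habitual"  # no real trade-off: stay on autopilot
            # Close call: simulate each candidate ("waking up")
            return max(ranked, key=simulate_outcome), "deliberated"

        # Normal commute vs. sudden traffic:
        print(choose({"usual route": 0.9, "detour": 0.2}, lambda a: 0))        # habitual
        print(choose({"usual route": 0.5, "detour": 0.45}, lambda a: len(a)))  # deliberated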

    That said, I’m not tightly married to this view, and remain open to alternate explanations.

  56. SAP, thanks, good stuff. Thomas Metzinger talks a lot about simulations in connection with explaining consciousness, see the precis to his book Being No One at http://www.theassc.org/files/assc/2608.pdf

    So the question is: why might running simulations (perhaps along with conducting other cognitive business) end up entailing the existence of qualitative subjective states (experiences) for the system running them? The way I see it, such states will likely turn out to be a side effect of certain sorts of representational processing like running simulations, so it isn’t that consciousness per se adds to behavior control. But if your identity thesis ends up vindicated, then it will turn out that qualities, which just *are* neurally instantiated behavior-controlling processes, do bring something important to the table.

  57. Experiences don’t look like anything since they aren’t possible objects of observation

    I can’t imagine myself saying that without asking ‘Why aren’t they observable?’

  58. Callan, you’re not in an observational relationship to your experience, rather you consist of it as a conscious subject. Nor can anyone else observe your pain or any other of your experiences, only their physical and behavioral correlates. If they could, then the problem of other minds wouldn’t exist, and indeed no problems about consciousness would have arisen. About not being in an observational relationship with consciousness, see Killing the observer at http://www.naturalism.org/philosophy/consciousness/killing-the-observer

    Experiences don’t look or appear like anything, rather the *world* appears in various ways in terms of experience – colors, shapes, sounds, smells, etc. Which is to say that we’re in an observational relationship to the world, not our experience of it.

  59. Which is to say that we’re in an observational relationship to the world, not our experience of it.

    I agree, but the conclusion could be the other way around entirely: that indeed we are in an observational relationship with the world, and that means others can track our experiences far better than we can track ourselves. That it’s not that others are locked out, but that oneself is locked out from one’s own experience – just getting glimpses through a keyhole and, never having experienced the whole, treating the part as the whole.

    I mean, if we are in an observational relationship with the world, wouldn’t it make sense that we have a tenuous grasp on experience itself?

    Callan: “…we are in an observational relationship with the world, and that means others can track our experiences far better than we can track ourselves…oneself is locked out from one’s own experience.”

    I don’t see that this follows, since for instance you can’t tell when I’m in pain nearly as well as I can, or what thoughts I’m thinking. I’m not locked out of my experiences just because I can’t observe them since I consist of them as a subject.

    “…if we are in an observational relationship with the world, wouldn’t it make sense that we have a tenuous grasp on experience itself?”

    In a way, yes, since we ordinarily don’t experience experience itself as the medium via which the world appears to us, even though that’s always the case. Mostly we’re naive realists who proceed under the assumption that the world is directly given to us, not mediated by conscious experience. It’s only when we encounter visual illusions and other sensory artifacts, such as afterimages, that we start to realize that we are systems that represent reality via experience.

    This realization is most forcefully driven home by lucid dreams, in which one is confronted with the fact that very vivid, complex and coherent experience can happen without one’s being in contact with the world at all. Then you understand that your waking experience constitutes the world you perceive in every respect. But even then you don’t see experience, rather you see the world as modeled by experience, as sensory input from the environment constrains experience from moment to moment (unlike in dreams).

  61. Tom,

    I’m not locked out of my experiences just because I can’t observe them since I consist of them as a subject.

    You might consist of them, but the question is how much you know of them – and as we both say, we’re in an observational relationship to the world, not our experience of it. In that case, why would we really have 100% knowledge of our own experience? Under a different model of consideration, the question is whether what you perceive as your experience and what is actually going on with your experience correlate 100%.

    I mean, if someone were to take a rather hot metal rod in an experiment and apply it to your skin, you’d feel pain. But say they then took a rod whose diameter was a few millimeters wider: what if you were asked to rate it (without seeing either rod) but couldn’t actually distinguish the larger pain source from the previous rod? It would appear the observer has a better understanding of your pain than you do – exactly as you might expect when we’re in an observational relationship to the world. I recall there being experiments where two toothpicks are applied to someone’s back (not for pain but for sensory response), and they logged the distance between the toothpicks at which the subject could not tell whether he was being poked with one toothpick or two. The observer clearly knew how many were being applied, but the subject did not.

    I’m thinking you’d argue that there’s a difference there between the sense and the experience itself.

  62. SAP,

    Most of the work in predicting the non-intuitiveness of equating brain activity with pains is done by neurology itself. Physicalism just adds “and that’s the whole story”. Neurologically, looking at brain activity excites the visual cortex. Feeling a pain in one’s toe excites the thalamus, hypothalamus, sensory cortex, etc. These brain activities have little overlap and don’t tend to cause each other. If, as physicalism contends, that’s the whole story, then *of course* looking at brain activity isn’t going to bring to mind pain in your toe. If it did – if pain in your toe could happen without the characteristic brain activity that usually accompanies it, triggered instead by mere visual cortex activity associated with looking at brain scans – then physicalism would be in dire trouble.

  63. Thanks Paul. That makes sense.

    It is interesting, though, that if I see you stub your toe, it will make me think about pain in my own toe. But that’s the result of mirror neurons firing, an adaptation for empathy in a social species. Those same neurons don’t fire when looking at a brain scan of toe pain in the somatosensory cortex, both because the brain has no pain receptors of its own and because brain scans weren’t a part of our evolutionary history.

  64. Surely this reliance on experience differing from the brain scan reading gets undermined pretty quickly?

    For example, what if you had a device that turned up the pain someone feels from stubbing their toe – and you did this to a child, so that their whole life they’ve only known extreme pain from stubbing their toe.

    If they stubbed their toe, they’d say we couldn’t know their pain, just as people are saying that from a brain scan of synaptic activity we wouldn’t know the person’s pain. But how does a muted knowledge somehow count as not having any knowledge at all? Or take seeing the brain scan without associating any particular bloom of activity with pain – how does that ignorance somehow count as not having any knowledge at all of what is happening?
