More philosophy?

Why is it that we can’t solve the mind/body problem? Well, if we define that problem as being about the capacity of mental events to cause physical events, there is a project in progress at Durham University which says it’s about the lack of good philosophy, or more specifically, that our problem stems from inadequate ontology (ontology being the branch of metaphysics that deals with what there is). The project has been running for a few years, and now a substantial volume of corrective metaphysics has been published, with a thoughtful and full review here. (Hat-tip to Micha for drawing this to my attention.)

The book is not a manifesto, because the authors do not share a single view: it’s more like an exhibition. What’s on offer here is a variety of philosophical views of mental causation, all more sophisticated than the ones we typically encounter in discussions of artificial intelligence. The review gives a good sense of what’s on offer, and depending on your inclinations you may see it as a collection of esoteric and unhelpful complications, or as a magic sweetshop whose every jar holds the way to a new world of possibility and enlightenment. I suspect the average visitor will see it as a bookshop with many volumes of dull sermons and outdated almanacs which might nevertheless, somewhere in the dusty back room, be holding the one book that makes sense of everything.

Is it likely that better philosophy will deliver the answer? There is a horrid vision in my mind in which the neurologists and/or the AI people produce a model which seems to work; we’re able to build machines which talk to us in the same way as human beings, and we can explain exactly how the brain does its stuff and how it is analogous to these machines: but the philosophers go on doubting whether this machine consciousness is real consciousness. No-one else cares.

Moreover, there are some identifiable weaknesses in philosophy which are clearly on display in the current volume. First is the fissiparous nature of philosophical discussion. I said this was an exhibition rather than a manifesto; but wouldn’t a manifesto have been better? It’s not achievable because every philosopher has his or her own view and the longer discussion goes on the more possible views there are. In one way it’s a pleasing, exploratory quality, but if you want a solution it’s a grave handicap. Second, and related, there’s no objective test beyond logical consistency. Experiments will never prove any of these views wrong.

Third, although philosophy is too difficult, it’s also too easy. Someone somewhere once said that Aristotle’s problem was that he was too clever. For him, it was always possible to come up with a theory which justified the outlook of a complacent middle-aged Ancient Greek: theories which have turned out, so far as we can test them, to be almost invariably false or incomplete. Less clever pre-Socratic philosophers like Heraclitus or Parmenides were forced to adopt weirder points of view which in the long run might actually tell us more.

The current volume, I think, might contain many instances of clever people making cases that broadly justify common sense while the real truth may be out there in the wild regions beyond. E.J. Lowe, one of the editors of the book and a champion of the project, has a view about the powers of the will. He characterises powers as active or passive on the one hand, and causal or non-causal on the other. This leaves open the possibility of a power which is both active and non-causal. He wants the human will to have these properties, so that it is spontaneous and yet not causally inefficacious, without the agent per se thereby bringing about any sort of effect (if I’ve got that right). The spontaneity is supposed to resemble the spontaneity of the decay of a specific radium atom, and hence be consistent with physics, while the causal efficacy is of a kind that does not require an interruption of normal physics while still being an important corollary of our status as rational beings.

This is clever stuff, no doubt, but it looks like an attempt – you may consider it a doomed attempt – to explain away the problems with our common sense views rather than correcting them. We’re being offered loopholes which may – debatably – let us carry on thinking what we’ve always thought, rather than offering us a new perspective. It leaves me feeling the way I might feel after a clever lawyer has explained why his client should not be convicted; yeah, but did he do it? There’s no ‘aha!’ moment on offer. In her review Sara Bernstein suggests that sceptics may be inclined to turn back to reductionism, and I must confess that is indeed my inclination.

Still, I can’t shake my hope that somewhere in that dusty old bookshop the truth is to be found, and so I can’t help wishing the project well.

62 thoughts on “More philosophy?”

  1. Sara Bernstein writes: “Nonreductive physicalists face several extant challenges, such as providing a metaphysically satisfying picture of the relationship between the mental and the physical, vindicating the efficacy of the mental while remaining true to physicalism, and avoiding systematic causal overdetermination.”

    This comment seems to assume that the mental and the physical are ontologically different kinds of events. As I see it, mental events are particular kinds of physical events, but each occupies a distinct epistemological domain. Physical events are described in objective (3rd-person) language, whereas mental events are described in subjective (1st-person) language. Since objective descriptors and subjective descriptors occupy separate descriptive domains, there can be no formal identity reduction from the physical to the mental. This situation is captured by the metaphysical stance of dual-aspect monism. For more about this, see Ch. 7, “A Foundation for the Scientific Study of Consciousness” in *The Unity of Mind, Brain and World*, Cambridge University Press, 2013.

  2. Arnold, if as you say mental events are particular kinds of physical events, then it seems that the mental *qua mental* plays no causal role that isn’t already played by the physical. So is there any mental causation on your view?

    I thought Bernstein’s summary of the problems faced by non-reductive physicalists was telling:

    “Strategies such as Shoemaker’s (this volume) that posit mental properties that are numerically distinct but not causally distinct from physical properties don’t vindicate the efficacy of the mental qua mental. Views that posit mental properties causally distinct from physical properties are troublingly dualistic. And views that hold that the mental and the physical are causally distinct but do not systematically overdetermine mental properties just push the metaphysical bump further under the rug: if the causal power of the mental is fully independent from that of the physical, it is causally redundant, and if it not fully independent, the efficacy of the mental qua mental hasn’t been established in the first place. As it seems unlikely that naturalistically inclined philosophers will be friendly to dualism, the only option for skeptics is to beat a retreat into reductionism. For this reason, I expect a resurgence of reductive physicalism and type identity of the sort that Robb (this volume) defends.”

    Re identity claims (such as yours), I don’t think it’s so easy to dismiss the difference between mental and physical as merely epistemic, since experiences aren’t *descriptions* of anything, rather they present themselves as their own private qualitative ontological category, distinct from publicly available, quantifiable physical objects/events. Accepting that the twain shall never meet, I think we should give up on mental causation, and accept a psycho-physical parallelism in which mental and physical causation are both descriptively useful and valid in their respective domains, but the domains don’t interact: http://www.naturalism.org/privacy.htm

  3. Tom: “Arnold, if as you say mental events are particular kinds of physical events, then it seems that the mental *qua mental* plays no causal role that isn’t already played by the physical. So is there any mental causation on your view?”

    Of course there is mental causation in my view. According to the way I see it, an absence of mental causation could only be the case if mental events were NOT particular kinds of physical events. But this is precisely what retinoid theory denies.

    Tom: “I think we should give up on mental causation, and accept a psycho-physical parallelism in which mental and physical causation are both descriptively useful and valid in their respective domains, but the domains don’t interact,”

    This statement seems incoherent to me. If you “give up on mental causation”, how can you say that “mental and physical causation are both descriptively useful and valid in their respective domains”?

    If the “psycho” events in what you call “psycho-physical parallelism” are *not* physical events, then you embrace dualism. If the “psycho” events in “psycho-physical parallelism” *are* physical events, then we have dual-aspect monism where the distinction between subjective/conscious events and objective/physical events is an epistemological one depending on the difference between 1pp and 3pp terms of description.

  4. Tom, if mental state X just is neural state Y, then there is no mystery about how the mental state could be causally efficacious. This is not to say there aren’t other problems with identity theories…

  5. Eric:

    Yes, if mental events just are physical events, then mental causation is real and is a subset of physical causation. When thinking about mental causation, I had in mind how phenomenal states (experiences) could play causal roles not already accounted for by physical events. What’s the causal role of qualia (“the mental qua mental” as I, following Bernstein, put it), if any? That question disappears if one accepts an identity thesis as does Arnold.

    Arnold,

    In saying we should give up on mental causation, I was saying we should give up on the idea that qualia, conceived of as being non-identical to their neural correlates (as I currently conceive them), play a causal role above and beyond those correlates in 3rd person explanations of behavior. In the paper I linked to, I suggest that the causal role of experiences, e.g., pain and other sensations, is taken for granted in the 1st person domain: it seems to me, as a matter of my own experience, that I wince because I feel pain. The assumption of mental (phenomenal) causation works as an efficient subjective heuristic in explaining behavior, even if experiences don’t add to what the brain does in 3rd person physicalist accounts.

    Re dualism, I don’t think there’s evidence for any non-physical mental substance or basic phenomenal property (a la proto-panpsychism) that we need add to physics in our 3rd person ontology. Yet for reasons discussed in the paper, I think it’s difficult to sustain the identity claim between experiences and their physical correlates. So, like many folks facing the hard problem, I’m looking for a naturalistic path between substance dualism on the one hand and unsustainable identity claims on the other. I don’t think your “1pp and 3pp terms of description” dual aspect account works because experiences are realities, not descriptions of anything.

  6. Tom: “So, like many folks facing the hard problem, I’m looking for a naturalistic path between substance dualism on the one hand and unsustainable identity claims on the other. I don’t think your “1pp and 3pp terms of description” dual aspect account works because experiences are realities, not descriptions of anything.”

    The problem is that naturalistic accounts must be consistent with scientific accounts, and in scientific accounts descriptions of one kind or another are all that we have to work with. This is why I have proposed that the “hard problem” is a problem of 1pp vs 3pp language, an epistemological problem rather than an ontological problem.

  7. “[R]ecent *advances* in metaphysics” says it all! Poor blokes.

    Arnold: The problem Type-B materialisms face is one of justifying the identity claims at issue. Short of that justification, it just sounds like foot-stomping. Adducing a case for that identity is well and fine, but short of some explanation *why* there seems to be such a fundamental divide, it becomes unclear how any of the standard arguments justify identity as opposed to, say, the kind of ‘parallelism’ that Tom advocates. This is why I think the empirical question of human metacognition has to be answered before the empirical question of human consciousness can be *institutionally* overcome. Like I’ve said to you so many times before, your Retinoid Theory could be right as right could be, but short of settling the issue of what’s being explained, a skewed perspective (as you and I think) versus a distinct (deflationary or inflationary) ontological mode (as most philosophers of mind still think), not many are liable to be convinced.

    Tom: I’m not sure how to make sense of your use of ‘privacy’ or ‘experience.’ Typically, it seems to me, privacy denotes some kind of proprietary access, or contingent restriction on *public* availability. In your account, however, it becomes a necessary condition of experience. You’re literally arguing that for an experience to be an experience, it must be yours and yours alone. This certainly contradicts everyday usages of both ‘privacy’ (which is always a contingent matter, and never ‘categorical’ (let alone ‘constitutive’!)) and ‘experience’ (which is often shared). If I go to a concert with a friend, what am I sharing if not an ‘experience’? Does this latter sense become degenerate or derivative in some way? The excitement of the crowd, the beauty of the music – are these things we cannot share?

    Otherwise, could you tell me what a ‘first person explanatory space’ is without presuming such a space to begin with?

  8. Scott: “Arnold: The problem Type-B materialisms face is one of justifying the identity claims at issue. Short of that justification, it just sounds like foot-stomping.”

    I’m not a philosopher and I don’t know what “Type-B materialism” is. It seems that you miss my point. I have argued that mind-brain identity can NOT be formally justified (see my Ch. 7 in *The Unity of Mind, Brain and World*, Cambridge University Press, 2013). This is why I advocate the stance of dual-aspect monism — matching 3pp descriptions with 1pp descriptions via corresponding analogs.

  9. Arnold: So you don’t think our inclination to ontologize the ‘mental’ isn’t something that can be empirically explained? If so, you’re more of a philosopher than you think! 😉

    The term belongs to Chalmers, whose typology has become, for better or worse, a theoretical Rosetta stone of sorts in the philosophy of mind community. At the very least it allows philosophers to keep their confusions clear! You can find its canonical statement here: http://consc.net/papers/nature.html

  10. Scott: “Arnold: So you don’t think our inclination to ontologize the ‘mental’ isn’t something that can be empirically explained?”

    I’m still working my way through the double negative here.

  11. Scott: “Do you think the ontological puzzle posed by consciousness can be empirically explained?”

    Yes I do. I think that my chapter referenced above points the way.

  12. Scott: “I’m not sure how to make sense of your use of ‘privacy’ or ‘experience.’ Typically, it seems to me, privacy denotes some kind propriety access, or contingent restriction on *public* availability. In your account, however, it becomes a necessary condition of experience. You’re literally arguing that for an experience to be an experience, it must be yours and yours alone.”

    The privacy of experience is what drives the problem of other minds: how do we know for sure that a system that we suspect might be phenomenally conscious is actually so? We can only observe its behavior and physiology, not its experience, should it have any. So experience is categorically private in that sense – it’s intersubjectively unavailable.

    We can share a situation (going to a concert), which is intersubjectively available, but we can’t share our separate, individual experiences of the situation, which perforce remain private in the above sense.

  13. Tom: So the first order, or everyday use, of ‘experience’ is faulty (insofar as it assumes experiences can be shared), and your second order (S2, deliberative, etc) metacognitive use of ‘experience’ is correct (insofar as it assumes they cannot be so shared)? We say ‘experience’ but what we *really mean* is ‘situation’ or ‘event’ or somesuch?

    It’s trivially true, it seems to me, that we share all kinds of types, the same vocabularies, DNA, beliefs, morphology, attitudes, and so on without sharing any of their tokenings. So there’s a trivial sense in which nothing whatsoever is shared given the spatiotemporal quiddity of our individual tokenings. So even if experiences were things that could be shared, their tokenings could not. So by ‘privacy of experience’ you can’t be referring to this trivial fact. It has to be impossible to ‘share experiences’ in some sense over and above the fact of their tokenings. The question is, What sense?

    When I see you feel pain, many of the same parts of our brains light up. This is just an empirical fact, so as good naturalists we can tuck the ‘problem of other minds’ into bed with the ‘problem of the external world.’ I say, ‘I feel your pain.’ This makes perfect sense to all involved – but only at some ‘naive’ level, you want to say, because there’s some special sense, beyond the trivial fact that the pain is yours, in which it *essentially* cannot be shared.

    So again, what is this special sense? Experiences in the everyday sense are regularly shared. Our brains, we now know, regularly mirror one another, in some sense confirming this everyday usage, but…

  14. Arnold:

    “The problem is that naturalistic accounts must be consistent with scientific accounts, and in scientific accounts descriptions of one kind or another are all that we have to work with. This is why I have proposed that the “hard problem” is a problem of 1pp vs 3pp language, an epistemological problem rather than an ontological problem.”

    I agree about naturalism having to be consistent with science, but in being conscious we’re presented with (or better, we subjectively *exist as*) a first person reality of phenomenal experience that isn’t obviously the same thing as what neurons are doing. Since we don’t have a perspective on experience (since we subjectively consist of it), the first person organismic perspective is on the *world*, and experience is the qualitative terms in which the world, including the body, appears to us (qualia). The intersubjective, 3rd person perspective of science is also on the world, but its terms are non-experiential, conceptual, and quantitative, e.g., as in a description of what my brain is doing when I have an experience of red.

    So the way I see it, the prima facie ontological difference between private subjective experience and public objects can’t be finessed by appeal to the difference between first and third person perspectives. It’s rather that each perspective (on the world) entails its own ontology of basic representational elements, one qualitative (the undeniable reality of private subjective experience as the way the world appears to me as a representational system), the other quantitative (the undeniable reality of physically extended intersubjectively available objects as the way the world appears to science as a representational system). This view, what I call epistemic perspectivalism, naturalistically respects the deliverances of both experience *and* science: http://www.naturalism.org/appearance.htm#part1

  15. Tom: “So the way I see it, the prima facie ontological difference between private subjective experience and public objects can’t be finessed by appeal to the difference between first and third person perspectives. It’s rather that each perspective (on the world) entails its own ontology of basic representational elements, one qualitative (the undeniable reality of private subjective experience as the way the world appears to me as a representational system), the other quantitative (the undeniable reality of physically extended intersubjectively available objects as the way the world appears to science as a representational system).”

    I don’t see a “prima facie ontological difference between private subjective experience” and a brain having this experience. To be clear about this, if autaptic-cell activity in the retinoid space of a biophysical brain (3pp) and its subjective/qualitative experience (1pp) were to have separate *ontological* existences, wouldn’t this imply that subjective experience is NOT a physical event? If you claim that subjective experience is not ontologically physical, aren’t you endorsing dualism? If you agree that subjective experience IS a physical event, what principled objection do you have to the proposal that our *understanding* of the relationship between our subjective/qualitative experience and the biophysical activity in a particular kind of brain mechanism (retinoid space) is limited by the epistemic problem posed by a mismatch of 1pp and 3pp descriptions of the same physical event?

  16. Scott,

    You suggest that I’m saying “there’s some special sense, beyond the trivial fact that the pain is yours, in which it *essentially* cannot be shared.”

    No, it’s this sense that I had in mind: that my pain is mine alone and therefore can’t be shared – you can’t feel it. I don’t see this fact as being trivial since it sets up the problem of other minds and is what primarily distinguishes experience as a subjective reality from the intersubjective reality of public objects and situations.

    Any neural mirroring you do in response to seeing me grimace in pain (a public situation) results in your numerically distinct and unsharable experience. Of course the most vivid illustrations of experiential privacy are dreams, afterimages and hallucinations, in which there’s no shared public object or situation being modeled in our respective consciousnesses. But all experiences are private in this not so trivial sense.

  17. Arnold:

    “If you claim that subjective experience is not ontologically physical, aren’t you endorsing dualism?”

    See my answer to this in #5, last paragraph.

    “…what principled objection do you have to the proposal that our *understanding* of the relationship between our subjective/qualitative experience and the biophysical activity in a particular kind of brain mechanism (retinoid space) is limited by the epistemic problem posed by a mismatch of 1pp and 3pp descriptions of the same physical event?”

    As I’ve said several times, experiences aren’t descriptions of physical events but rather qualitative states, some of which (qualia) can’t be described since they are the basic terms in which we couch our descriptions of experience (e.g., the experience of pure red, of sweetness).

    The relationship of experience to biophysical activity isn’t that of two different and mismatched vocabularies describing the same thing (which assumes the truth of physicalism), but of a qualitative state arising coincident with neural activity. Our understanding is at a loss since it isn’t obvious how a brain mechanism like retinoid space could generate or cause experience for the system instantiating that mechanism. But I think there are plausible *non-causal* considerations that might explain why a representational system at our level necessarily has experience; see part 5 of “The Appearance of Reality”.

  18. Tom, in your #5, last paragraph, you wrote:

    “Re dualism, I don’t think there’s evidence for any non-physical mental substance or basic phenomenal property (a la proto-panpsychism) that we need add to physics in our 3rd person ontology.”

    But this is not responsive to my concern because, if I’m not mistaken, you claim a separate *ontology* for subjective/qualitative experience.

    Tom: “Our understanding is at a loss since it isn’t obvious how a brain mechanism like retinoid space could generate or cause experience for the system instantiating that mechanism.”

    Exactly so! Our quotidian *understanding* depends on a logically coherent set of sentential propositions. When we try to *understand* how our subjective experience/qualitative state (1pp) is constituted by the activity of our brain mechanisms (3pp) we confront a logical disconnect — the hard problem — because our 1pp descriptions/propositions and our 3pp descriptions/propositions are in different descriptive domains.

  19. Arnold, perhaps you could show, giving an example, how “1pp descriptions/propositions and our 3pp descriptions/propositions are in different descriptive domains.” If they are, how does this establish the identity of experience and brain mechanisms, or is there further explanation needed to support the identity claim?

  20. Tom: “No it’s this sense that I had in mind: that my pain is mine alone and therefore can’t be shared – you can’t feel it. I don’t see this fact as being trivial since it sets up the problem of other minds and is what primarily distinguishes experience as a subjective reality from the intersubjective reality of public objects and situations.”

    This is what I thought. Given your affinities with Dennett in other regards, I’m assuming that you accept that metacognition is both fractionate and heuristic (what else would it be?). For my own part, this is why I’m inclined to think the application of metacognitive heuristics governing usages of ‘experience’ is bound to be far more reliable in first-order everyday contexts. Given this is the way we talk about experiences, it seems fair to assume that experiences, whatever they are, are things that can be either more or less public or private.

    But the ‘experience’ you’re referring to is not the ‘experience’ referred to in the everyday sense. Yours, you think, is the ‘real experience,’ one defined on the basis of several, theoretical metacognitive intuitions. Now given that you agree that the mind-reading heuristics governing our first-order usages of ‘experience’ are reliable (if ontologically naive), why do you assume those heuristics should possess *any* reliability in second-order, theoretical metacognitive contexts? On my account, the ‘problem of other minds’ is partially an artifact of applying our mind-reading heuristics out-of-school, which is to say, to problem-ecologies they simply are not adapted to solve. The information and cognitive resources available are far too impoverished – obviously so, I think – to fix claims regarding the ‘essential nature’ of experience (which is why no one can agree).

    Now you don’t have to buy my account, but it does pose a pretty obvious question: What guarantees the accuracy of your second-order reflections on the nature of experience (leading to the intuition of categorical privacy), such that they epistemically trump our first order usages (and the default assumption of experiential publicity)? In the first order case it seems clear we’re using our mind-reading heuristics in their adaptive problem-ecology. The history of philosophy suggests that something hinky *has* to be going on in the second-order case.

  21. Scott,

    “What guarantees the accuracy of your second-order reflections on the nature of experience (leading to the intuition of categorical privacy), such that they epistemically trump our first order usages (and the default assumption of experiential publicity)?”

    I don’t think there’s a default assumption of experiential publicity, rather the opposite. We can see someone’s pain behavior, but the pain itself isn’t the behavior and isn’t available to us. This distinction is commonsensical and first-order since everyone realizes people can be in pain but not show it (as was my situation in an emergency room recently, when I sat there quietly with a rather painful foot injury). The everyday sense of conscious experience is that we each have our subjective worlds of thoughts, feelings, sensations, emotions, etc. that are ours alone, so I don’t see myself as being particularly theoretical or second-order here.

  22. Tom: This was why I was careful to distinguish trivial from nontrivial privacy. Remember my point is that publicity/privacy is give and take, more or less, in first order discourse. People speak of sharing ideas, experiences, and so on all the time. They also regularly talk about their ‘innermost thoughts’ and the like. And, as Sellars points out, there’s quite a bit of philosophy that’s been uploaded into our attitudes over the centuries as well. But the ‘problem of other minds,’ which you use to establish the *categorical* nature of the privacy of experience, is certainly not part of our daily first-order interactions – even as philosophers!

    So, to rephrase the question, why do you think the metacognitive intuitions underwriting ‘the problem of other minds’ are reliable enough to trump the first-order assumption that experiences and so on can be shared (not categorically private)?

  23. Well, as I said before, I don’t think the privacy of my pain (that only I feel it) is trivial, nor do I think that the notion of such privacy is a theoretically-driven second-order assumption. Seems to me it’s part of the standard folk concept of consciousness: that we are each privy to a hidden personal reality of thought, sensation, emotion, etc. that we try to communicate via language and gesture precisely because it isn’t on public display.

    The problem of other minds is increasingly part of daily first-order interactions as people start contemplating at what point our machines will become conscious.

  24. My previous comment was missing some content because of some symbols I used. Here is a revised comment that includes the original ideas. [Arnold – I removed the earlier one, but if you want it back or other edits, let me know – Peter]

    Tom: “Arnold, perhaps you could show, giving an example, how ‘1pp descriptions/propositions and our 3pp descriptions/propositions are in different descriptive domains.’”

    OK, take this fundamental case. Suppose you think this signal thought immediately after you awake:

    a. Tom’s 1pp description: ( I AM HERE )

    b. Science’s 3pp description re Tom’s brain: “Upon awakening, Tom’s retinoid space has been stimulated to a supra-threshold level by increased activity of the ascending reticular activating system (ARAS) resulting in activation of the cluster of autaptic neurons that constitute his core self (I!) together with its volumetric surround of autaptic neurons (Z-planes). This biophysical state of Tom’s retinoid space is a proper biophysical analog of the thought ( I [I!] AM HERE [the perspectival origin of Tom’s core self [I!] in his Z-plane surround] ). In the retinoid theory, this is how conscious content is generated in the brain.”

    The descriptors of (a) (1pp) and (b) (3pp) are clearly in different conceptual domains because the referent of (a) is a personal feeling, whereas the referent of (b) is a biophysical state of the brain. Hence the “hard problem”.

    Tom: “… how does this establish the identity of experience and brain mechanisms, or is there further explanation needed to support the identity claim?”

    The identity of conscious experience and the patterns of autaptic-cell activity in retinoid space is a theoretical claim that cannot be justified by formal logic for the reason I have given. The validity of the retinoid theory of consciousness is not established by further explanation, but by the weight of empirical evidence. Does the theory successfully explain/predict previously inexplicable conscious phenomena? Does the theory successfully predict novel conscious phenomena? The available evidence on both counts lends strong support to the retinoid theory. In science, evidence trumps intuition.

  25. Many thanks Arnold.

    So one statement refers to a feeling (phenomenal experience), the other to a brain mechanism, and the hard problem is that we don’t see how a feeling could just *be* a brain mechanism, even though (let’s hypothesize) there’s a perfect correlation. You’re saying that it’s the conceptual divide – how we conceive of feelings vs. brain mechanisms – that prevents us from seeing their identity. So something must be wrong with our concepts if in fact feelings and brain mechanisms are identical. Since there are only physical things on your view, the conceptual mistake has to be about feelings. They don’t, as it might seem and as we ordinarily conceive of them, have apparently non-physical properties such as being qualitative and being private (all physical properties are in principle public and quantitative).

    But of course I’ll reply that our concept of phenomenal experience as qualitative and private is in good order – we’re not mistaken in supposing those properties are essential to what experience is: qualia really do have those properties; they can’t be eliminated as some sort of illusion. So then the question becomes how something that really has those properties could be the same thing as something that really doesn’t (a brain mechanism). To draw the identity claim, one has to show that both sides of the identity relation have the same properties. And in this case they don’t.

    Evidence might end up showing a perfect covariation between experiences and a brain mechanism like the retinoid space. But unless it’s also shown how experiential properties are *entailed by* the mechanism, we don’t have an explanation of consciousness. It seems you want to say that because of the conceptual mismatch, it’s unfair to ask a theory of consciousness for such an explanation. But that’s the primary, essential objective of such a theory: to show how a physical system, suitably arranged and operating, entails the existence of experience for the system. Since (imo) this is the explanatory burden of a theory of consciousness, I don’t think you can fairly say that your theory “successfully explains/predicts previously inexplicable conscious phenomena”. It might predict that I experience a certain qualitative state when a brain mechanism is active, but it doesn’t explain why or how that state is entailed by the mechanism. And asserting an identity between phenomenal feels and brain mechanisms (as you do) seems to me premature absent such an explanation.

  26. Tom, it seems to me that you are claiming that even though the biophysical structure and dynamics of a particular kind of brain mechanism (the retinoid system?) can explain the measurable content of consciousness, there must be something non-physical and non-measurable that exists in parallel with the activity of the critical brain mechanism, and that this non-physical something (psyche) is what consciousness really consists of. Am I wrong?

  27. I find this discussion between Scott and Tom quite interesting, I hope it continues.

    May I offer an interpretation of Scott’s point? Not that Scott is incapable of expressing himself accurately, but more to get a sense of what he is saying, perhaps for myself, too.

    Scott seems to me to be saying that we don’t really experience, in an everyday sense, strictly private minds or selves. In everyday experience, our sense of inside and outside is far more fluid. So, I think Scott is asking on what basis *outside* of this more-or-less fluid, everyday experience are you making the claim that our experiences *are* private? It can’t be based on our everyday experience, it seems Scott is saying, because where is a strictly private self there? So, you must be making this judgment on some other level which you, obviously, think is more secure or accurate than this everyday experience. I think Scott will say that this other level distinguishing a privately-experienced self has all the same problems of the everyday experience. That is, they are both more or less heuristic.

    Is that about right, Scott?

  28. Arnold,

    “Tom, it seems to me that you are claiming that even though the biophysical structure and dynamics of a particular kind of brain mechanism (the retinoid system?) can explain the measurable content of consciousness, there must be something non-physical and non-measurable that exists in parallel with the activity of the critical brain mechanism, and that this non-physical something (psyche) is what consciousness really consists of. Am I wrong?”

    The way I see it, what consciousness really consists of are experiences that have qualitative particulars available only to the system that’s conscious. We’re trying to figure out how and why such particulars are entailed by the system’s configuration and operations. I agree that measurable conscious content, e.g., the reported discriminations made in psychophysics experiments, can be accounted for by brain mechanisms, since that’s a matter of the brain modeling differences in the world. The qualitative nature of consciousness, however, is another story, since the essential character of qualia (red, pain, etc.) doesn’t get captured in such experiments. But my suggestion is that since a system has to have bottom line, irreducible representational elements with which to map differences in the world, these elements necessarily end up as qualitative for the system, that is, cognitively impenetrable, non-decomposable, and literally indescribable because they are themselves basic units of representation that the system can’t further represent.

    So (speculating here) we could say perhaps that there’s a representational ontology of qualia available to the system alone that presents the world to it in qualitative terms. So it isn’t as if qualia are a separate non-physical class of existents (“a non-physical something” as you put it) that are in the world as modeled by science. It’s that they are what happens for a sufficiently complex system (and that system alone!) when it carries out certain classes of representational operations, those that integrate information from multiple representational streams in the context of a self-model, http://www.naturalism.org/kto.htm#Neuroscience

    Conscious, qualitative representations are a funny class of thing, since they are what we use to characterize the world and divide it up into ontological categories, e.g., mental vs. physical (which sets up the hard problem). So the idea of a first-person qualitative representational ontology is perhaps in a way prior to the mental-physical distinction, http://www.naturalism.org/appearance.htm#part4 The same point could perhaps be made about the *quantitative* representational ontology of the sciences: metrics are what’s used to characterize the physical, so are themselves not physical (or mental).

  29. Tom: The presumption of all theoretical reflection on the first-person is that it simply makes what is implicit explicit, sure. But as the jungle of explicitations called ‘philosophy of mind’ shows, there’s nothing ‘simple’ about this assumption at all – which is why I’m asking you to justify it. Like it or not the explicit *expression* of the ‘problem of other minds’ is due to philosophers. There’s a substantial difference between people thinking their thoughts private (and yet shareable) and your notion of *categorical* privacy, which has the effect of ontologizing experience in a manner contrary to everyday uses of the term, as well as delivering it to all the traditional philosophical conundrums. I appreciate you think it’s obvious, Tom, but for me and others there’s nothing obvious about it all. Given the costs it entails – defection from everyday uses, downstream discursive intractability – it seems pretty clear that you owe *some* kind of story, some way of assuring us that you aren’t, for instance, applying a ‘fast and frugal’ heuristic outside of its adaptive – everyday – problem ecology.

    We have very good reason to suppose that theoretical metacognition suffers a variety of rather severe constraints, after all – more than enough to warrant questions such as my own. And you have to admit that the philosophy of mind, meanwhile, exhibits all the characteristics of information underdetermination.

  30. Tom: ” But unless it’s also shown how experiential properties are *entailed by* the mechanism, we don’t have an explanation of consciousness.”

    When you demand the *logical entailment* of a real property X (conscious experience) by another real property Y (a brain mechanism), it seems to me that you go beyond the normal pragmatic scientific standard of explanation of X by a theoretical model (Y) that successfully meets empirical tests. You now pass over to the rarefied demand of *explanation by pure reason*. This, it seems, would lead to endless verbal conflict.

    A philosopher’s delight?

  31. Arnold:

    “You now pass over to the rarified demand of *explanation by pure reason*.”

    Not at all. It’s just that since (despite what Scott says :-)) we don’t see experience out there in the world as a further effect of what the brain does, standard scientific causal models of its provenance likely aren’t going to work, imo. So I’m looking for other sorts of entailments, e.g., non-causal, that are consistent with and motivated by the evidence of how experience is associated with neurally-instantiated cognitive processes, see http://www.naturalism.org/appearance.htm#part5 and see Metzinger’s books to see where I’m coming from. The search for an explanation of consciousness seems to me a philo-scientific project in which we need a variety of conceptual resources, some of which, perhaps yet to be developed, might end up being somewhat outside the standard scientific explanatory box. But we won’t know until we get there.

  32. Scott,

    “Given the costs it entails – defection from everyday uses, downstream discursive intractability – it seems pretty clear that you owe *some* kind of story, some way of assuring us that you aren’t, for instance, applying a ‘fast and frugal’ heuristic outside of its adaptive – everyday – problem ecology.”

    I don’t think it’s a defection from the ordinary conception of experience to claim that only I feel my pain, experience my taste of chocolate, etc. That’s a pretty commonsensical heuristic that we apply all the time, seems to me. Short of conducting some experimental philosophy to see what everyday intuitions are on the status of experience on the public vs. private dimension, it looks like we’re destined to disagree on this.

  33. As I understand it, the distinction between 1pp and 3pp descriptions is the distinction between verbal responses from a subject to the experience of observing an object or event in the world (1pp) and verbal responses from an observer of the subject’s behavioral responses, including verbal responses and responses observed indirectly via devices like fMRIs (3pp). Tom killed the 1pp observer, thereby precluding “1pp descriptions”. I want to finish the job by killing the 3pp observer, or at least the idea of a “3pp description”.

    Assume we have asked subjects to describe a painting, say, Picasso’s “Guernica”. Each subject undergoes visual sensory stimulation by light reflected from the painting, and that results in neural activity in the subject’s brain. The neural activity together with various other inputs that reflect personal history, context, emotional state, etc, activate one or more context-dependent dispositions to respond. The combination of neural activity and responsive dispositions is specific to the observer (I assume that by experience “tokens” Scott means something like such combinations). In the assumed scenario, among the responses will be the utterance of certain sentences that may be of a type that in quotidian discourse would be called “descriptive” (eg, “It’s in grey tones”, “it depicts a grieving mother, a bull’s head, a wounded horse”, etc). However, in formal discussions I think the utterances should be thought of as essentially reflexive responses to the neural activity, although possibly quite complex responses composed of simpler ones learned during the course of the observer’s life.

    Tom and I seem to agree that calling these verbal responses “1pp descriptions” is misleading (although possibly for different reasons). I find that in formal discussions, the use of the visual vocabulary (“see”, “perspective”, “description”, et al) is frequently misleading. In particular, use of “description” raises the question of what entity is being described. In the example scenario, participants would normally assume that it’s the painting. However, if the brain’s visual processing has access only to the neural activity (possibly indirectly via the outputs of other modules), then the verbal responses aren’t really descriptions of the painting or even of the stimulation – they’re just responses to neural activity. Another candidate for the described entity is the visual phenomenal experience (“mental image”) consequent to the processing. But Tom has argued that such experiences are causally inert, and I mostly agree. If so, then the verbal responses can’t be descriptions of those experiences either. At which point it seems that they aren’t “descriptions” at all (see note below).

    In this view, “3pp descriptions” have disappeared. I think “3pp” as it’s being used in this context is merely a somewhat misleading epistemic reminder that publicly accessible objects and events can provide essentially equivalent sensory stimulation to multiple observers (not necessarily simultaneously), which in certain circumstances can facilitate justification of a response interpreted as assertion of a proposition. I agree with Arnold’s claim that there is an epistemic distinction to be made, although not between different “descriptions”. For example, in comment 25 Tom’s hypothesized thought (the result of internal sensory stimulation) if verbalized as an asserted proposition (“I think I’m here”) couldn’t be justified since no one else can experience his neural activity and concur with his response. Not so with response b (although as Arnold notes later in his comment, as written it’s a statement of a theory rather than a “description” in the sense relevant to the discussion). However, the next to last sentence in his comment is descriptive, hence could replace response b. And that new response can be justified because other researchers can also respond to the “available evidence”, possibly leading to a consensus response and the emergence of new “knowledge” (in a Sellarsian sense).

    All just IMO, of course.

    Note: One may argue that this is merely a quibble about how “description” is defined. And were we engaged in quotidian discussion, I’d agree. But in formal discussions, I think such words should be avoided. For example, many may agree that there’s actually no “picture in the head” to be described by a subject, but probably fewer would agree that neither is there a “picture outside the head” to be described. However, the light stimulating the visual sensory receptors could in principle be generated artificially in the absence of any external object in the FOV, in which case calling the uttering of descriptive sentences a “description of the painting” would seem clearly an abuse of language.

  34. Tom: “Short of conducting some experimental philosophy …”

    Once again, more or less private isn’t the issue. The question is about *categorical* privacy. And as it turns out, there’s already a fair amount of empirical research out there (beginning with Luria). For what it’s worth, abstract-categorical thinking, as it is sometimes called, pretty clearly isn’t something we naturally do. I’m not sure how else I can convince you this issue is a real one.

  35. This is a fascinating discussion. I appreciate the premise; sure, it doesn’t come down to one “manifesto”, but it does encourage a lot of creativity, which is what such a difficult subject needs. Ideas like ‘active-noncausal’ seem like they are getting closer, but as mentioned, this all seems like a work in cleverness.

    “There are some identifiable weaknesses in philosophy which are clearly on display in the current volume.”

    I see one you did not mention. What if to find the answer, we have to step outside the tool of logic and find some other tool? I don’t know what it would look like or if we even have the capacity for it as humans. Not that logic should be abandoned, but would it not be worth trying to experiment with this in some other way?

    The limits of language are certainly all over this. How could we communicate a non-logic approach? Just a question. Could be impossible, but perhaps gleaning a little more on consciousness, the unobservable, requires something altogether different.

  36. Joseph (Tom): Sorry, I didn’t see this, Joseph. Yes, pretty much. I literally think the ‘experience’ Tom is attempting to delineate is a kind of cognitive illusion, what happens when we submit information adapted to heuristic resources keyed to social problem-solving to heuristic resources keyed to environmental problem-solving, which transforms ‘experience’ into the proverbial beetle in the box. I’m still not sure what Tom’s counterargument is.

  37. Scott,

    The categorical privacy of experience is illustrated when we think about failures of anesthesia as discussed at http://www.nytimes.com/2013/12/15/magazine/what-anesthesia-can-teach-us-about-consciousness.html I don’t think there’s anything theoretical or metacognitive about the idea that someone can be conscious and we might have no idea what the content or phenomenology of the experience is. When normal channels of communication are open, we can be told about the content (e.g., I’m hallucinating a red unicorn), but the phenomenology (what my red is like) can’t be communicated.

  38. Tom: “When normal channels of communication are open, we can be told about the content (e.g., I’m hallucinating a red unicorn), but the phenomenology (what my red is like) can’t be communicated.”

    No subjective experience can be perfectly communicated to another. But communication can be effective within pragmatic limits. For example, if you are hallucinating red, the quality of your color experience can be communicated usefully by your adjusting the overlapping contributions of primary colors to match your internal red color quality. The component wavelengths are one kind of objective description of your quale.

  39. Arnold,

    “The component wavelengths are one kind of objective description of your quale.”

    If you are hallucinating, how are you going to compare and report matching? And if you compare with your memories of the hallucinatory experience, how reliable would that be?

    Would you be in the position to at least bound the error? This is the minimum required for any experimental data treatment.

  40. Arnold:

    “the quality of your color experience can be communicated usefully by your adjusting the overlapping contributions of primary colors to match your internal red color quality.”

    Say I’m having a persistent afterimage of red. You show me a color sample that I say matches the color of the afterimage. What *you* see, of course, is the color sample, not my experience, so it isn’t as if you’re getting a fix on the quality of my experience except insofar as what particular mix of primary colors, presented in a particular format and situation, *corresponds* to my experience. So we’re not matching anything in the world to my experience, since my experience isn’t out there in the world; it’s only available to me. Rather, I’m matching my experience of the afterimage with my experience of the color sample so we have an idea of what public object would normally trigger that experience in me.

  41. Vicente, even when we bound error we have some error of measurement. Can’t get away from it. Science, being pragmatic, just does the best it can. Consider the SMTT hallucination. In this case we can give error figures on the basis of repeated measurements in which the subject sets triangle height – width phenomenal equality over N trials.

  42. Arnold:

    “There is no question that subjective experience is not DIRECTLY accessible by any other person.”

    Right, this is what I’ve been driving at: experiences aren’t publicly available. But I think Scott and perhaps others here disagree about this.

  43. Arnold,

    Precisely, we can’t get away from error, but it is mandatory that we provide a bound for it.

    In this case, not even a bound can be provided. In the case of the SMTT, the error in the reported measurement would be provided, in principle, by the scale accuracy of the device used by the subject to set the triangle height, in arbitrary units. It is not the error in the measurement of the phenomenal triangle itself.

    If I want somebody to inform me of the color (RGB) of an object he dreamt about last night, you could bound the error of the input RGB parameters used to report the remembered experience, based on the available precision of the three parameters in a typical computer application (0–255, ±1). But this accuracy is not directly related to the error involved in trying to report the actual color experienced, which is subject to many other psychological factors that can’t be modeled and estimated.

  44. Vicente: “In the case of the SMTT, the error in the reported measurement would be provided, in principle, by the scale accuracy of the device used by the subject to set the triangle height, in arbitrary units. It is not the error in the measurement of the phenomenal triangle itself.”

    Yes, but what some have a hard time trying to understand is that the same limitation holds in our physical measurements of our theoretical subatomic particles. When we decide about the validity of the Higgs particle, we do so, not on the basis of the particle itself, but on the basis of the EFFECTS of the theoretical properties of the particle as revealed in the LHC. We perform repeated trials and measurement of its physical effects and set an error tolerance bound to decide whether the Higgs has or has not been found. If we narrow our error tolerance, we increase the risk of *rejecting* a “true” Higgs. If we broaden our error tolerance, we increase the risk of *accepting* a “false” Higgs. Same thing holds in SMTT for testing the validity of the retinoid model of consciousness.

  45. Arnold,

    “but on the basis of the EFFECTS of the theoretical properties of the particle as revealed in the LHC”

    In fact, of the ACTUAL PHYSICAL EFFECTS, predicted by the theoretical models…

    Precisely, which is not the case in the SMTT experiments or in the color description example. I am having no hard time; it is crystal clear to me that we are considering two different cases.

    If you follow the instructions in an instructions book to assemble a certain artifact, would you say that the instructions book is having a PHYSICAL EFFECT on you?

    Is it possible (in physical terms) that the subject in the SMTT experiment lies on purpose?

    Is it possible that the LHC provides wrong data on purpose?

  46. Vicente: “If you follow the instructions in an instructions book to assemble a certain artifact, would you say that the instructions book is having a PHYSICAL EFFECT on you?”

    Of course it is having a physical effect on you.

    Vicente: “Is it possible (in physical terms) that the subject in the SMTT experiment lies on purpose?”

    Anything is possible, even in LHC experiments. But in the SMTT experiment there is every reason to believe that the subject is NOT lying, because if the scientist looks over the naive subject’s shoulder, the scientist experiences the same kind of vivid hallucination that the subject hallucinates.

    Vicente: “Is it possible that the LHC provides wrong data on purpose?”

    The LHC has no cognitive mechanisms; therefore it has no intrinsic purpose. However, it certainly can provide wrong data given a design flaw!

  47. Tom: “I don’t think there’s anything theoretical or metacognitive about the idea that someone can be conscious and we might have no idea what the content or phenomenology of the experience is.”

    And I agree, which leads me to think you’re missing the structure of the challenge I’m presenting. If I bury a car in a glacier, does that make it ‘categorically private’ or just more or less inaccessible? The fact that my experiences are mine or yours is trivial. The fact that people ‘share their experiences’ with us is trivial. In these trivial senses we can (and do) speak of various experiences being more or less private or public without contradiction all the time. It is trivially true that you can ‘feel my pain’ – our brains are built to mirror other human brains after all – and it is also trivially true that you can never token my pain. But then this just amounts to saying you are ‘you’ and I am ‘me.’

    What you want to do is make private definitive of what experience is ‘essentially,’ and the latter public sense derivative, to redefine it as ‘situations.’ The challenge is to explain what you mean in a manner that doesn’t simply collapse into mere token privacy (since that would seem to make *everything* categorically ‘private’!). In what sense, over and above the fact that your pain is your pain (just as your arm is your arm), does it make sense to assert that pain is something that can *never* be shared?

    For my part I think it’s plain as day that I know certain kinds of pain that people feel quite well, some not at all. My brain’s ability to mirror the brains about me exhausts what it means to ‘share experiences.’ As soon as philosophers use epistemological categories to ground ontological claims, talk about experiences as some exotic order of super-private entities, heuristic alarm bells go off… as I think they should. Just for instance: Why should the private categorization of experiences count as a positive property, as opposed to an admission of ignorance?

  48. Scott: “As soon as philosophers use epistemological categories to ground ontological claims, talk about experiences as some exotic order of super-private entities, heuristic alarm bells go off… as I think they should.”

    This, I agree with.

  49. Scott:

    “In what sense, over and above the fact that your pain is your pain (just as your arm is your arm), does it make sense to assert that pain is something that can *never* be shared?”

    In the sense that my experiences, unlike my arm or my brain, aren’t public, observable objects. As Arnold put it, experiences aren’t directly accessible, in contrast to things that are. So what can be shared is the behavior associated with experiences, including verbal descriptions, not the experiences themselves. But you think this is a trivial or non-existent distinction, nothing that could motivate the claim that there’s a hard problem of consciousness, the problem of other minds, the possibility of inverted/absent qualia and other long-standing philo-scientific puzzles.

    We seem to have a basic disagreement about the nature of consciousness itself, the target of explanation. It isn’t that consciousness is exotic, since we’re all conscious; it’s that I see it as being unavailable to third person observation (in contrast to its neural correlates, http://www.naturalism.org/privacy.htm ) whereas you seem to think it is available.

  50. Arnold,

    You are referring, for sure, to the weight put on your arms when you hold it. And, of course, there is no question about the absolute sincerity of all scientists.

    Irrespective of any design flaw, the LHC output is public for everybody, in equal terms. And we don’t need any intermediary to tell us what is being depicted on the computer screens.

  51. Vicente, not only the weight of the manual on your arms, but much more importantly, the brain activity evoked by the printed words in your semantic networks. Like the biophysical effect of this text on your own brain as you read these words.

    Vicente: “Irrespective of any design flaw, the LHC output is public for everybody, in equal terms.”

    The same is true for the SMTT output. The basic phenomenon has been publicly experienced and reported since the late eighteen hundreds. It was just an inexplicable conscious experience then. Now it appears to be a natural consequence of the neuronal structure and dynamics of our brain’s retinoid system. By the way, without the phenomenal world of retinoid space, LHC and Higgs particles/fields would never be part of our science story.

  52. Arnold, being serious, the activity evoked depends very little on the physics involved, which is the same whether the instructions are written in English or Arabic. Likewise, the physics supporting your laptop is the same irrespective of the application you are using, Word or Excel, or the language used for the applications’ interfaces.

    Your SMTT experiment remark confirms that it is about an experience being reported, not a system being observed and measured, thus supporting my point. Regarding the discovery of the underlying neurological mechanism: brilliant!! An outstanding achievement. The neurophysiology of vision also explains a few things, including illusions and delusions; still, it is only you who experiences your sight.

    YES !! nothing at all exists without consciousness.

  53. Vicente: “… being serious, the activity evoked depends very little on the physics involved, that are the same whether the instructions are written in English or Arabic.”

    Please remember we are talking about physical EFFECTS of different patterns of photic input (English /Arabic) on a biophysical system (your cognitive brain and its semantic mechanisms). We are not discussing the basic physical PRINCIPLES (the physics involved) that give rise to these relevant biophysical effects.

  54. Arnold, the pattern has a logical effect (pattern translated to info), which is independent of the photic input; the letters could be in black or blue ink, in one font or another, or in Braille, or somebody could be reading the instructions to you, causing the same effect. There is no direct physical effect. You are assuming that the cognitive processes that support reading just are the biophysical processes that underlie them, and I believe that they are one (or several) layers above. In a microprocessor it is very important to differentiate between the physical layer, e.g. silicon transistors or Gallium Arsenide ones, and the logical architecture, even though the performance of the latter relies on the capacities of the former.

    The neurophysics of a chimp’s brain, or of an illiterate young boy, is the same as that of an ordinary reader, but the processes triggered by recognition of the written pattern are very different.

    So the cases of the SMTT and the LHC are different. Both are great pieces of knowledge and technology.

  55. Vicente: “Arnold, the pattern has a logical effect (pattern translated to info), which is independent of the photic input … ”

    Vicente, I am stunned! You seem to be claiming that the physical information from the printed pattern exists in the brain as some kind of non-physical (logical?) state (unabashed dualism), and that this information from the text is actually unrelated to the particular photic pattern of the printed text. If you stand by this then we are not engaged within the norms of science.

  56. Why should we have a problem with the privacy of conscious experience when we have no problem with the privacy of our other bodily functions like digestion?

    Dualisms are a natural aspect of our nature and we usually have no problem resolving them in everyday experience, e.g. I’m constantly hearing footsteps outside my cubicle during the workday. So when do they become a problem?

  57. Arnold, I used the rest of my comment to explain the statement.

    There is no physical information. There is a physical substrate that supports a pattern that codes the information. The information only becomes meaningful when read by a mind with the right decoder (i.e. one that knows how to read English). I suppose that neural patterns code the same information (BTW I am waiting for a fully detailed model explaining this coding; that would really be a step forward in the understanding of the brain), and these patterns are directly related to the experience of reading. Subsequently, and with many other additional inputs, your brain issues the motor orders to follow the instructions. There is no physical information, and there is no direct physical interaction (as far as the behaviour is concerned).

  58. Peter,

    The early pioneers of radio in the twentieth century were able to take the real-time human voice, superimpose it on high-frequency electromagnetic waves, and transport it across space so that it could appear at a different point in space. While travelling through space the information was undetectable without a radio receiver; we could say the voice had been transformed into a different domain of reality. Consciousness has eerie similarities, or we could say we are dealing with the reality problem, not the consciousness problem: namely, how do events in the physical and biological domains get transformed into this time domain of reality? The answer lies in the fact that complex physical organisms have sensorimotor systems that respond to physical reality; or, to put it another way, how does evolution build a system of eventfulness to match the outer reality?

  59. Vic,

    So you’re suggesting the sensorimotor systems are like the radio transmitters/receivers, translating real events into the other domain (not a dualist one, any more than radio waves require dualism), in this case one of conscious experience?

    Not quite sure I’m reading you correctly.

  60. Peter,

    I was drawing an analogy. In radio transmission the higher frequency is always present, but the audible information modulates the wave, so the audible information is the real-time information. For cells, the particle world and the cells themselves are always present, but we exist in the analogous audible or information world; we could say our bodies operate at this information level. Looking backward from this information level to the particle and cellular level yields its share of bafflement.

    I’m optimistic that we don’t have the full picture because we are overlooking many areas, e.g. overall brain structure: the cortical sheet is actually multilayered, and the layers root into lower brain structures to yield other informational states, etc.
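    The modulation picture in Vic’s analogy can be sketched numerically: a fast carrier that is always present, with the slow “audible” information riding on its amplitude. The frequencies and modulation depth below are illustrative only, not tied to any real broadcast standard.

    ```python
    import math

    # A minimal sketch of amplitude modulation: the carrier is always
    # there; the information lives in the slowly varying envelope.
    CARRIER_HZ = 1000.0   # the "higher frequency" that is always present
    SIGNAL_HZ = 50.0      # the slow "audible" information riding on it
    DEPTH = 0.5           # modulation depth (fraction of carrier amplitude)

    def modulated(t: float) -> float:
        """Carrier wave whose amplitude is shaped by the signal."""
        envelope = 1.0 + DEPTH * math.sin(2 * math.pi * SIGNAL_HZ * t)
        return envelope * math.sin(2 * math.pi * CARRIER_HZ * t)

    def envelope_peak() -> float:
        """Recover the peak of the envelope by dense sampling over one
        second, playing the role of the receiver in the analogy."""
        samples = [abs(modulated(n / 100000.0)) for n in range(100000)]
        return max(samples)

    print(round(envelope_peak(), 2))  # close to 1 + DEPTH = 1.5
    ```

    Without a decoder that tracks the envelope rather than the raw oscillation, the “voice” is invisible, which is what makes the radio receiver a suggestive stand-in for sensorimotor systems in the analogy.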
