Intellectual Catastrophe

Scott has a nice discussion of our post-intentional future (or really our non-intentional present, if you like) here on Scientia Salon. He quotes Fodor saying that the loss of ‘common-sense intentional psychology’ would be the greatest intellectual catastrophe ever: hard to disagree, yet that seems to be just what faces us if we fully embrace materialism about the brain and its consequences. Scott, of course, has been exploring this territory for some time, both with his Blind Brain Theory and his unforgettable novel Neuropath; a tough read, not because the writing is bad but because it’s all too vividly good.

Why do we suppose that human beings uniquely stand outside the basic account of physics, with real agency, free will, intentions and all the rest of it? Surely we just know that we do have intentions? We can be wrong about what’s in the world; that table may be an illusion; but our intentions are directly present to our minds in a way that means we can’t be wrong about them – aren’t they?

That kind of privileged access is what Scott questions. Cast your mind back, he says, to the days before philosophy of mind clouded your horizons, when we all lived the unexamined life. Back to Square One, as it were: did your ignorance of your own mental processes trouble you then? No: there was no obvious gaping hole in our mental lives; we’re not bothered by things we’re not aware of. Alas, we may think we’ve got a more sophisticated grasp of our cognitive life these days, but in fact the same problem remains. There’s still no good reason to think we enjoy an epistemic privilege in respect of our own version of our minds.

Of course, our understanding of intentions works in practice. All that really gets us, though, is that it seems to be a viable heuristic. We don’t actually have the underlying causal account we need to justify it; all we do is apply our intentional cognition to intentional cognition…

it can never tell us what cognition is simply because solving that problem requires the very information intentional cognition has evolved to do without.

Maybe then, we should turn aside from philosophy and hope that cognitive science will restore to us what physical science seems to take away? Alas, it turns out that according to cognitive science our idea of ourselves is badly out of kilter, the product of a mixed-up bunch of confabulation, misremembering, and chronically limited awareness. We don’t make decisions, we observe them, our reasons are not the ones we recognise, and our awareness of our own mental processes is narrow and error-filled.

That last part about the testimony of science is hard to disagree with; my experience has been that the more one reads about recent research the worse our self-knowledge seems to get.

If it’s really that bad, what would a post-intentional world look like? Well, probably like nothing really, because without our intentional thought we’d presumably have an outlook like that of dogs, and dogs don’t have any view of the mind. Thinking like dogs, of course, has a long and respectable philosophical pedigree going back to the original Cynics, whose name implies a dog-level outlook. Diogenes himself did his best to lead a doggish, pre-intentional life, living rough, splendidly telling Alexander the Great to fuck off and less splendidly, masturbating in public (‘Hey, I wish I could cure hunger too just by rubbing my stomach’). Let’s hope that’s not where we’re heading.

However, that does sort of indicate the first point we might offer. Even Diogenes couldn’t really live like a dog: he couldn’t resist the chance to make Plato look a fool, or hold back when a good zinger came to mind. We don’t really cling to our intentional thoughts because we believe ourselves to have privileged access (though we may well believe that); we cling to them because believing we own those thoughts in some sense is just the precondition of addressing the issue at all, or perhaps even of having any articulate thoughts about anything. How could we stop? Some kind of spontaneous self-induced dissociative syndrome? Intensive meditation? There isn’t really any option but to go on thinking of our selves and our thoughts in more or less the way we do, even if we conclude that we have no real warrant for doing so.

Secondly, we might suggest that although our thoughts about our own cognition are not veridical, that doesn’t mean our thoughts or our cognition don’t exist. What they say about the contents of our mind is wrong perhaps, but what they imply about there being contents (inscrutable as they may be) can still be right. We don’t have to be able to think correctly about what we’re thinking in order to think. False ideas about our thoughts are still embodied in thoughts of some kind.

Is ‘Keep Calm and Carry On’ the best we can do?

 

 

115 thoughts on “Intellectual Catastrophe”

  1. That Blind Brain Theory paper makes me cringe when I read it anymore, not because I disagree with it, but because I had written it on the heels of a decade of full time fiction writing, and it shows. A better companion piece to the Scientia Salon article, I think, would be http://rsbakker.wordpress.com/2014/11/02/meaning-fetishism/ .

    As always, Peter, your reading reminds me of the ideal, even-handed perspective to take to these things. I sometimes think I was a Pastor in a previous life.

    I guess the thing I would emphasize is the fact that I’m not saying that intentional cognition doesn’t exist, or that our everyday, Square One idiom will be eliminated, only that all the intentional posits advanced by philosophy over the ages are very likely chimerical effects of our metacognitive shortcomings, that we have only ever duped ourselves into thinking we’ve reached Square Two.

    The great mystery of intentional idioms lies in the way they so effectively solve a variety of social problems, even though the posits used – beliefs, intentions, hopes, meanings, and so on – stubbornly resist natural explanation. My argument here resembles Dennett’s interpretivism in certain respects, but differs in that it eschews the notion of ‘intentional stances,’ opting instead for an account in terms of heuristic neglect. This allows me to advance hypotheses regarding the ecological limits of various heuristics in a way Dennett never could. As it turns out, a good number of the conundrums afflicting cognitive science and consciousness research can be diagnosed in terms of heuristic overreach. As per the ecological rationalist line pursued by Gerd Gigerenzer and the ABC Research Group, different problem ecologies trigger the application of different heuristic systems, some evolved, some learned, all keyed to the solution of problems given only certain kinds of information. Intentional cognition is keyed to the solution of everyday social contexts: our ancestors needed to solve a great many social problems given very little information. Given this, and given metacognitive neglect of the heuristic nature of intentional cognition, we should expect the application of intentional cognition to the kinds of problems that vex philosophers to generate pseudo-knowledge at best. We should expect, in other words, a radical difference between the everyday problem-solving efficacy and the theoretical problem-solving efficacy of intentional posits.

    And indeed, this seems to be the case.

    So ‘thinking about thoughts’ is something we regularly do, for instance, when trying to understand how we feel about a given event. And indeed it helps. As soon as we begin thinking about the *nature* of our thoughts, however, we should expect to run into troubles because the term ‘thought’ does not ‘pick out’ anything in nature, so much as it ‘locks into’ our nature in certain, advantageous ways that will eventually be understood by cognitive science. Just think of how much work ‘lies’ do, leading nations into war, whole societies into houses of worship, and so on. The idea here is that our ancestors evolved a toolbox of special purpose ‘lies’ capable of solving a specific set of problems regarding themselves and one another. An evolved ‘informal ideology of the human,’ in effect.

    So the unnerving idea is that, taking the high-dimensional view of the life sciences as our cognitive baseline, ‘false ideas about our thoughts’ are embodied in astronomically complicated neural processes, and are not still thoughts of some kind.

    To paraphrase Nietzsche: It thinks, therefore ‘we’ are.

  2. Pingback: Post-Intentionality Redux | Three Pound Brain

  3. It is true that ‘human beings uniquely stand outside the basic account of physics, with real agency, free will, intentions and all the rest of it’. But animals can be part of the story if we take away free will (and human consciousness). In other words I feel that intentionality (‘aboutness’) should not be considered a human privilege. ‘Aboutness’ also exists for animals (presence of acid for a paramecium, small moving object for a frog, …). And looking at a possible future for intentionality implicitly introduces an evolutionary perspective, which brings us to consider human intentionality backwards, relative to a sort of pre-human bio-intentionality. Such animal aboutness deals with the satisfaction of ‘stay alive’ constraints related to animal nature. This brings us (again…) to the field of meaning generation for internal constraint satisfaction which, I feel, could provide a thread to an evolutionary approach to intentionality (see the short paper http://philpapers.org/archive/MENITA-5.pdf). Perhaps intentionality positioned in an evolutionary background could offer a framework to address ‘post-intentionality’. But a lot remains to be done, as it is true that our awareness of our own mental processes is narrow and error-filled. And I feel that a bottom-up evolutionary approach should be a natural frame for the study of intentionality and of self-consciousness, as it allows us to start from the simplest in terms of mental state.

  4. These themes have been raked over so many times before, I can’t see that anything new is being added here.

    As I said at Scientia Salon, if we are mistaken about our cognitive processes then what on earth makes you think that we would be any less mistaken about any problem we see with this?

    I know Scott thinks he has a way out of the self-defeating nature of such claims, but I can’t see it.

  5. Christophe: So what about all the information the ‘aboutness relation’ neglects? Doesn’t it stand to reason that ‘aboutness’ is a *heuristic* way of conceiving our environmental relations absent any detailed causal information regarding those relations?

  6. Robin: “if we are mistaken about our cognitive processes then what on earth makes you think that we would be any less mistaken about any problem we see with this?”

    The question can be flipped on its head: How, given that metacognition is (as pretty much a matter of empirical fact) heuristic and fractionate, could we be anything but misled by our intuitions regarding the nature of cognition?

    Otherwise, I entirely concede I could be likewise misled by my metacognitive intuitions. This is why I place so much emphasis on the theoretical virtues evinced by my approach, by the fact that it accords with what we’re learning about consciousness and metacognition, by its theoretical parsimony and explanatory reach, and by the kinds of cognitive deficits it predicts we should regularly run afoul of. This is why my position is something that cognitive science will ultimately sort through.

    But you seem to think my position has to be automatically self-defeating: I don’t see how that follows.

    >> “The question can be flipped on its head: How, given that metacognition is (as pretty much a matter of empirical fact) heuristic and fractionate, could we be anything but misled by our intuitions regarding the nature of cognition?”

    Again, how exactly does “heuristic and fractionate” fail to jell with our intuitions about the nature of cognition?

    As I pointed out earlier, “heuristic” is a very broad term, but surely cognition is, by definition, heuristic?

    And was there any time in our lives that we thought of cognition as being anything but fractionate? I can’t remember that time.

    Was there a time in history when people thought of cognition as anything but fractionate?

    You talk of “neglect”, but I am not sure that I appreciate what you are saying. “Neglect” is a key component of knowing. It is one of the great outstanding problems of AI as to how our brains manage this “neglect”. It is one of the reasons we can’t build intelligent machines yet. And cognitive scientists cannot work this out either. For all the sound and fury coming out of the various areas of mind science, we are still in the alchemy stage there.

    “Neglect” is the only way we have of knowing and it is, as far as we know, the only way of knowing. It is not just any old kind of “neglect” that we have; just arbitrarily passing over information does not work, it has been tried.

    We can neglect creatively and that is the complement of attention. I can’t supply any more details because, as I said, nobody knows how this works yet.

    But, leaving that aside, and supposing that one day in the future science does manage to contradict our intuitions: even then your “turning it on its head” does not work.

    Here:

    1) Our intuitions about our cognitive processes are mistaken
    2) It is a problem that these intuitions are mistaken
    3) 2) is an intuition about our cognitive processes and, by 1), is mistaken.

    So, no problem.

    It follows that any theory that claims to address an unstatable problem is orphaned.

    In any case if science provides reliable information about our cognitive processes, then what more do we need? How does your approach add to what science can tell us?

  8. 1) Our intuitions about our cognitive processes are mistaken
    2) It is a problem that these intuitions are mistaken
    3) 2) is an intuition about our cognitive processes and, by 1), is mistaken.

    Have to say, if one could have an intuition that was correct about other intuitions being mistaken, this’d be an excellent way of locking it out. It ‘proves’ the validity of the other intuitions simply by the way it locks out any disproving intuition (kind of a neglect). All resting on the basis that only intuitions matter in regards to how the world works (ie, the idea that the world cannot work in X way that we have no intuition of)

    I mean, why would there be no problem, if our intuitions were actually mistaken regardless of the apparent paradox of 1, 2 & 3 above? If our intuition is that the emperor is clothed, but actually he is naked, 1 to 3 doesn’t make him clothed. So why is there no problem? Is it because the entire extent of the world is just our intuitions? And trying to prove our intuitions wrong is just an intuition and so proves our intuitions right… never mind if the emperor is actually naked?

    That’s how seductive our intuitions are – they easily compel us to think they are the entirety of the world, based merely on that they compel us to think so.

  9. Robin: I’m beginning to understand why you’ve chased me across three boards now, Robin.

    Most everyone agrees that our cognitive processes are fractionate and heuristic, now. But how many agree that our *meta*cognitive processes are such? And who considers the role played by neglect?

    Otherwise the argument you give simply isn’t cogent. If you had argued something along the lines:

    1) ALL intuitions about our cognitive processes are mistaken.
    2) (1) is a claim about cognitive processes based on intuition.
    So, 3) (1) is mistaken.
    So, 4) Not all intuitions about our cognitive processes can be mistaken.

    you would have had something more workable.

    The problem of course is that I’m not arguing that ALL intuitions are mistaken. I’m arguing that most metacognitive intuitions about the general nature of our cognitive processes are mistaken.

  10. Arnold, I’m sure Scott will write a better response, but I’d like to hijack your question anyways. Apologies in advance to you both.

    Given that ‘aboutness’ and ‘self’ might not exist at all in any defensible way, I’d say that’s probably a good example of a probably mistaken metacognitive intuition (within the framework of BBT).

    Like, sure, I do think that a big part of what it means to be “conscious” is to have a model of space that includes self (0,0,0 coordinate or ‘locus of thought’). But my intuition goes further, and says that the self also has to include a model of itself and the things it perceives so that I can even be having this discussion. You can follow that intuition right into a Cartesian infinite homunculus regress.

    BBT avoids that by saying something like “even by assuming you exist in any unified way that is compatible with traditional concepts of self/soul you are misapplying cognitive resources in a futile infinite regress loop with increasing resource commitment at every iteration”. You can get as meta as you want, but you’ll always end up with a headache.

    The neat (or horribly flawed depending on opinion and mood) thing about BBT is how it predicts that people will have a hard time believing BBT, since what we neglect becomes completely incomprehensible, as in cases of anosognosia where a patient literally cannot conceive that his arm no longer works (I consider anosognosia to be the strongest empirical evidence that Scott can bring to buttress his scary theory, but there is no denying that generalizing a pathological case to all of cognition might be a problem).

  11. Hi Scott,

    >> “Robin: I’m beginning to understand why you’ve chased me across three boards now, Robin.”

    You extended an invitation across to your board and linked this one as being about the same article.

    What is this “chased” nonsense?

  12. Hi Scott,

    >> “The problem of course is that I’m not arguing that ALL intuitions are mistaken.”

    OK, so you have answered your own earlier question:

    >> “The question can be flipped on its head: How, given that metacognition is (as pretty much a matter of empirical fact) heuristic and fractionate, could we be anything but misled by our intuitions regarding the nature of cognition?”

    So you have answered: yes, we could be something other than misled by our intuitions about the nature of cognition. According to you, we might be correct about some of them.

    So we come back to the same issue – you have an intuition that this poses some sort of a problem for us. Is that intuition one of the mistaken ones? Or one of the good ones? How would you tell?

    So we have:

    1) We might be mistaken about some of our intuitions about the nature of cognition.
    2) 1) is a problem
    3) 2 is an intuition about the nature of cognition and so may be mistaken.

    So we are still left with the task of working out whether the problem that you intuit is real or mistaken, long before the task of finding a solution to that problem can begin.

    It might be necessary at this point to try to express more clearly the nature of the problem that you foresee.

    As I pointed out earlier, “fractionate and heuristic” may well be necessary attributes of any cognitive system. They certainly appear to be necessary attributes of any cognitive system that we have ever heard of, or can imagine.

    And as I have pointed out earlier “fractionate and heuristic” jells pretty well with our intuitions about our cognitive processes, at least on reflection.

    So we have learned something about ourselves. We are, in some ways, more effective cognisers than our intuitions would allow.

  13. Scott,
    There is indeed a huge amount of information that the ‘aboutness relation’ neglects because knowledge cannot be total (different subjectivities, and, ultimately, the “why is there something” question).
    And also because most of the time we naturally position the ‘aboutness relations’ as being about our human thoughts, beliefs, intuitions…
    The point I would like to highlight is that aboutness about humans implicitly addresses all the mysterious performances related to our human nature (self-consciousness, free will, …). This makes the story very complex, with elements specific to humans that we do not understand. And as these human-specific performances do not exist within animals, it looks logical to begin by analyzing animal cognition, where far fewer ‘aboutness relations’ will be neglected. Hence my interest in an evolutionary approach to cognition and a ‘bio-intentionality’.
    If you are interested, there is a presentation on that subject at http://cogprints.org/7584/2/C.Menant-Presentation.pdf

  14. Christophe: “There is indeed a huge amount of information that the ‘aboutness relation’ neglects because knowledge cannot be total (different subjectivities, and, ultimately, the “why is there something” question).”

    So aboutness is heuristic. As heuristic, then, we know that it leverages solutions via specific information structures in its environments – that it possesses a problem ecology. The question, then, becomes one of what that problem ecology might be. What characteristics do solvable problems share? What characteristics do unsolvable problems exhibit? And so on.

    My question to you is, What kind of problems could aboutness solve for nonhuman animals?

  15. Robin: “So we are still left with the task of working out whether the problem that you intuit is real or mistaken, long before the task of finding a solution to that problem can begin.”

    Exactly. This has been my point all along: science offers the only way (that we know of, at least) to sort through these issues – to make any advance on Square Two. The ‘fractionate and heuristic’ nature of metacognition makes sense once we know enough to interpret our intuitions thus. We’re quite good theoretical cognizers in scientific contexts.

    The default, however, has been to assume our capacity to reflect was unitary and general – as indeed we should expect, given metacognitive neglect. This is what has stranded us at Square One all this time.

    It’s nice to see you finally concede the tu quoque, Robin!

  16. Jorge: “I consider anosognosia to be the strongest empirical evidence that Scott can bring to buttress his scary theory, but there is no denying that generalizing a pathological case to all of cognition might be a problem.”

    It’s certainly not the greatest marketing tool, but it’s doubtless the case that we suffer ‘natural anosognosias’ of some kind. Combine this with the parsimonious way the notion explains so many of our travails in trying to understand ourselves, and I think you have a theory that demands serious attention, at the very least.

  17. Arnold: “Scott, is the metacognitive intuition that consciousness is always about *something somewhere* in relation to our self mistaken?”

    On my account, the high-dimensional, natural scientific view of consciousness as simply another system causally embedded in larger systems is all we need. If we keep in mind that our metacognitive capacities are likewise another system nested within larger systems, then the question of what we can and cannot metacognize can be answered by asking what kind of access and resources we might expect such a system to possess. My answer: We should expect such a system to be blind to the high-dimensional, natural scientific view – to be forced, in effect, to adopt some kind of heuristic that can solve for our environmental relations while neglecting the high-dimensional facts of those relations. Aboutness is that heuristic. Consciousness appears to be ‘intrinsically about’ simply because: 1) metacognition is blind to the myriad ways it actually plugs into our environments; and 2) metacognition is blind to this blindness.

    So to answer your question: consciousness is about nothing at all, even though, upon reflection, it seems as if aboutness *has to be* its central structural feature. But this is just a metacognitive glitch in the system. By replacing representation with neglect, BBT cleans ontological house, allows us to see all these mysterious entities in terms that are entirely natural.

  18. Scott: “By replacing representation with neglect, BBT cleans ontological house, allows us to see all these mysterious entities in terms that are entirely natural.”

    But here is my problem: My scientific explanation of consciousness sees it as an advantageous evolutionary adaptation. As such, it acknowledges a physical world all around each of us. If we were blind to relevant aspects of our world, we would not be able to adapt to it in the way that we do. So consciousness gives us a useful brain *representation* (not a copy) of the world with which we must cope. I take this kind of representation, which is *about* the world around us, as an entirely natural process. Why replace this natural kind of *aboutness* with the idea of *neglect*. In short, I disagree with your contention that “consciousness is about nothing at all”.

  19. Arnold, since you seem to be a phenomenal representationalist, I am genuinely curious how you approach the issue of infinite regress and the homunculus argument.

  20. Arnold: The brain has to be blind to any number of ‘relevant aspects of the world’ no matter what, simply because cognition is both expensive and subject to any number of structural limitations. Evolution had to pick and choose, cheat and steal. On an evolutionary view, we should expect to find those limitations expressed throughout every aspect of cognition, should we not?

  21. Jorge: “Arnold, since you seem to be a phenomenal representationalist, I am genuinely curious how you approach the issue of infinite regress and the homunculus argument.”

    Our brains’ transparent construction/representation of a phenomenal world does not lead to an infinite regress nor require a homunculus because the self (I!) is not an observer. Phenomenal representation is explained in the *retinoid model* of consciousness. For details, see “Space, self, and the theater of consciousness”, “Where Am I? Redux”, and “A foundation for the scientific study of consciousness”, here:
    http://www.researchgate.net/profile/Arnold_Trehub

  22. Scott: ‘My question to you is, What kind of problems could aboutness solve for nonhuman animals?’

    An animal has ‘stay alive’ constraints to satisfy. The aboutness of a predator is, for the animal, the meaningful representation of that predator relative to the constraints (networks of meanings from sensed information & memory, action scenarios). Aboutness participates in constraint-satisfaction processes. It embeds the animal in its environment.
    Regarding heuristics, it is clear that the meaningful representations need to be updated quasi-permanently via try-for-fit simulations (with some parts remaining as they are). Heuristics are part of aboutness but are not all of aboutness.

  23. Scott: “The brain has to be blind to any number of ‘relevant aspects of the world’ no matter what, simply because cognition is both expensive and subject to any number of structural limitations.”

    Sure, I agree that the brain is unable to detect many relevant aspects of the world. It is able to “see” only those aspects that nature enables it to be conscious of. But what it is able to consciously represent is certainly not “about nothing at all”. BTW science is our enterprise aimed at constantly expanding what we can consciously represent about the natural world.

  24. Arnold: “But what it is able to consciously represent is certainly not “about nothing at all”.”

    If you’re so certain, then you must have some kind of evidence warranting that certainty. So how do you know that you’re not running afoul metacognitive neglect in this instance?

    “It is able to “see” only those aspects that nature enables it to be conscious of.”

    Exactly. We can cognize only those things we possess the capacity to cognize. Why do you presume that consciousness belongs to those things nature allows us to accurately cognize as the explananda of scientific research? The history of ideas argues the exact opposite. The latest research on metacognition suggests that ‘accurate, high dimensional cognition of consciousness’ was among the last things evolution was worried about. Why would evolution just happen to grant us this miraculous metacognitive ability?

    What I’m saying is that we are natural in such a way that we cannot intuitively cognize ourselves as natural, and so are forced to cognize ourselves in non-natural ways – that is, heuristically. Since we’re blind to the heuristic nature of our metacognitive access and capacity, we run afoul of the only-game-in-town effect and presume things like ‘consciousness must be intentional.’

    My approach not only drains the swamp of problems generated by intentionality, it also promises to explain why there’s such a swamp in the first place.

  25. Christophe: “An animal has ‘stay alive’ constraints to satisfy. The aboutness of a predator is, for the animal, the meaningful representation of that predator relative to the constraints (networks of meanings from sensed information & memory, action scenarios). Aboutness participates in constraint-satisfaction processes. It embeds the animal in its environment.”

    The animal is already embedded in the environment. What you mean, I take it, is that it somehow allows the animal to cognize its environment.

    So the mechanisms of meaning are? The mechanisms of ‘constraint satisfaction’? The processes by which information acquires content are? The causal explanation of evaluation is?

    These are all rhetorical questions, of course. No one can answer these questions because no one even has a clue as to how to *begin* to naturalize ‘meaning,’ ‘content,’ ‘evaluation,’ and so on. The easy thing to do here is assert they are intrinsically inexplicable, ‘irreducible.’

    Is this what you think, Christophe?

  26. Arnold: “But what it [your brain] is able to consciously represent is certainly not ‘about nothing at all’.”

    Scott: “If you’re so certain, then you must have some kind of evidence warranting that certainty. So how do you know that you’re not running afoul metacognitive neglect in this instance?”

    You have just provided evidence for that certainty. Surely your current response to my previous comments is not a result of your conscious representation of them as being about nothing at all. I have no idea of how one might respond if my comments were cognitively represented as nothing at all.

  27. Arnold: “You have just provided evidence for that certainty. Surely your current response to my previous comments is not a result of your conscious representation of them as being about nothing at all. I have no idea of how one might respond if my comments were cognitively represented as nothing at all.”

    But this just begs the question, Arnold. If I were right, this is precisely how it would seem to you or to anyone duped into thinking representation is more than a heuristic.

    Say the argument was over the intrinsic value of money. I say it’s heuristic, that money does what it does by virtue of a complicated system of relations, and you say, no, money is intrinsically valuable, that it does what it does simply because that’s what money is. As we’re having this debate, I buy you a cup of coffee. You say, ‘Aha! I told you so!’ and all I can do is scratch my head and say, ‘No, I told you so.’

    My argument isn’t that we don’t rely on aboutness as a heuristic, it’s simply that it is a heuristic. Since the ‘complicated system of relations’ is always inaccessible in any given instant of communication, we have no choice but to rely on aboutness/representation to solve problems involving this system. It’s the assumption that aboutness/representation must therefore be something real (despite the endless controversies, perennial confusions that surround it) that’s the problem.

    I know this likely feels like sideways philosophical chicanery to you, but if you look at all the problems that intentionality has caused, you have to admit that something sideways is going on, and that we should expect a sideways answer.

  28. Scott,

    Speaking of anosognosia, seems a person without access to other people dying would be able to weave around indications of death’s inevitability until the very end. But people with that access surely do as well. Heaven. Along those lines, what’s the difference between unknown unknowns like the inaccuracy of geocentrism for pre-moderns and knowable unknowns like “hitting kids is bad for them,” as interpreted by the abused (say, Adrian Peterson), where corrective information is systematically excluded?

  29. Scott: “It’s the assumption that aboutness/representation must therefore be something real (despite the endless controversies, perennial confusions that surround it) that’s the problem.”

    My assumption/heuristic is that “aboutness/representation” are words that *describe* the activity of real biophysical mechanisms in the human cognitive brain. The retinoid theory of consciousness details the neuronal structure and dynamics of the brain mechanisms that constitute subjective aboutness/representation. Activated autaptic-cell patterns in retinoid space are subjective *representations* of objects and events — what the representations are *about* in the real world. Do you disagree with this way of framing the problem?

  30. Scott: ‘So the mechanisms of meaning are? The mechanisms of ‘constraint satisfaction’? The processes by which information acquires content are? The causal explanation of evaluation is?
    These are all rhetorical questions, of course. No one can answer these questions because no one even has a clue as to how to *begin* to naturalize ‘meaning,’ ‘content,’ ‘evaluation,’ and so on. The easy thing to do here is assert they are intrinsically inexplicable, ‘irreducible.’
    Is this what you think, Christophe?’

    Animals are naturally embedded in their environments through cognition, their perceptions and actions serving to maintain their nature (stay alive). The mechanisms of meaning generation exist so that the animal can implement actions that allow its survival (escape dangers, …). The built-up meaning (‘danger’) is the connection of received information with the constraint that triggers an action (hide or escape). Constraint => meaning => action. The action comes at the end. Meaning generation is constraint-satisfaction driven. The action is a consequence of the generated meaning.
    Closer to your Scientia Salon paper: stating that ‘humanity is just another facet of nature’ looks to me a bit biased. The abiotic universe was about ubiquitous physico-chemical laws. Life brought in local constraints (‘stay alive’ does not exist in the water close to the paramecium). Then came the capability for a living entity to represent herself as existing in the environment, just as conspecifics were represented (self-consciousness). I take these steps as characterizing an increasing complexity in the universe (on earth at least), where we humans are the most evolved stage. What is to come next in that evolutionary thread? The Fodor perspective looks to me closer to Sisyphus finishing a climb than to a Promethean action. I prefer looking to our future through the latter. The next step could be a self-consciousness able to build up an understanding of its own nature (opening a possible path to an end of the ‘metacognitive incapacity’). I feel this will/can happen and lead to a repositioning of the hard problem.
    But it is true that today’s reality does not seem to illustrate such a perspective. As said, the point I feel we should focus on is human nature, our young human nature strongly rooted in a huge and terrible anxiety-limitation constraint that we manage, mostly unconsciously, through life and death drives. Analyzing and understanding that process should shed some light on our motivations. Could such a gnothi seauton foreshadow a post-intentionality? I’d take the bet.
    But we will of course “continually stumble across solutions we cannot decisively explain”. Again, the ‘decisive explanation’ is about the nature of being, the pre-big-bang story. Accepting this and shelving the question for a time should help our questions about how/why energy became matter, matter life, life self-consciousness, and self-consciousness….

  31. https://www.youtube.com/watch?v=WloXdTi8Ymg

    From the Naturalism Workshop, where they discuss consciousness. At 28:30 Sean Carroll asks the BBT question. At 31:20 David Poeppel explains that the brain can “see” its facial recognition area’s functions but cannot “see” the primary visual cortex’s functionality because the granularity is too fine.

    Well, it’s not that the granularity is too fine; rather, it goes back to the very beginning of the session, where he talks about the beads of time. There are no beads of time; rather, the nervous system represents time. As a stimulus–response system, those beads are actually biological response windows of time, which we knit together as reality.

    Likewise, the discussion of sentence word reordering points to the SR paradigm: the response R (meaning) can be the same even if the stimuli S are reordered.

    There is a hierarchy here which they are not “seeing” as well.

  32. Devin: “Speaking of anosognosia, seems a person without access to other people dying would be able to weave around indications of death’s inevitability until the very end. But people with that access surely do as well. Heaven.”

    The interesting thing to note is how ‘death neglect’ doesn’t require any ontological chicanery: death is simply unthinkable. A kind of ‘existential Square One.’

    ‘Death denial,’ however, requires rationalization.

  33. Arnold: “Activated autaptic-cell patterns in retinoid space are subjective *representations* of objects and events — what the representations are *about* in the real world. Do you disagree with this way of framing the problem?”

    We’ve been down this road before, Arnold. If consciousness has intrinsic aboutness, then you owe us a theory of naturalized meaning that explains the peculiar features of meaning. There’s no doubt the brain is systematically interrelated to its environments, or that it recapitulates certain structural features of its environment, but aboutness entails that the brain also be semantically related to its environments – and understanding how this can be so is one of the holy grails of philosophy of mind. As of a couple years ago at least, your account seemed to just help itself to semantic concepts without providing any real explanation of them.

    Christophe: “Animals are naturally embedded in their environments through cognition, their perceptions and actions serving to maintain their nature (stay alive). The mechanisms of meaning generation exist so that the animal can implement actions that allow its survival (escape dangers, …). The built-up meaning (‘danger’) is the connection of received information with the constraint that triggers an action (hide or escape). Constraint => meaning => action. The action comes at the end. Meaning generation is constraint-satisfaction driven. The action is a consequence of the generated meaning.”

    How does this explain meaning? Mechanisms stand in certain dispositional relations to other systems: the big question is one of how this or that system of mechanisms acquires ‘aboutness relations,’ and what these extraordinary relations consist in. If they’re natural, then what kind of instruments are required to detect them?

  35. Scott:”There’s no doubt the brain is systematically interrelated to its environments, or that it recapitulates certain structural features of its environment, but aboutness entails that the brain also be semantically related to its environments – and understanding how this can be so is one of the holy grails of philosophy of mind.”

    I explained this back in 1991. For example, see “Building a Semantic Network” here:

    http://people.umass.edu/trehub/thecognitivebrain/chapter6.pdf

  36. For what it’s worth, it seems like the exchange between Arnold and Scott is more like: money has already been outed as virtual, but maybe gold still carries intrinsic worth? Scott aims at the money object, which ends up leading to the uncontested grasp of the coffee, but Arnold has already acknowledged that money is virtual. However, maybe the virtual value of money is backed by a gold with intrinsic value? Scott again tries to go right from the beginning, inward – i.e., starting with the money and outing it as virtual as the first step – but that hits the loop where Arnold has already acknowledged that; thus the argument gains no traction at any deeper level and fails to out gold/representation itself as virtual as well. Scott tries to start from the outset again… and so the loop repeats. For what it’s worth *shrug*, might be way off.

  37. @Arnold: I don’t see how semantic networks solve the problem of intentionality. I can’t help but feel you’re missing what Scott is getting at but perhaps you can walk us through it.

  38. Scott,
    ‘How does this explain meaning? Mechanisms stand in certain dispositional relations to other systems: the big question is one of how this or that system of mechanisms acquires ‘aboutness relations,’ and what these extraordinary relations consist in. If they’re natural, then what kind of instruments are required to detect them?’

    As introduced above, I want to believe that meaning came into the universe with life, when local ‘stay alive’ constraints were added to ubiquitous physico-chemical laws (local constraints to be satisfied => local meaning generation). So an explanation of meaning generation and aboutness assumes that we understand what life is. And the problem is that we do not know what life is (the best definition I know is the circular ‘set of functions that resist death’). So if we do not take life as a given usable to address performances that came with/after it, we are stuck…
    If this is what you mean, I of course agree with you. But I also feel that we can use life as a given while still working on the understanding of its nature. Then aboutness/intentionality and meaning generation make sense and can be used for animals, humans and artificial agents (understanding that we must be clear about intrinsic vs derived constraints, meanings and intentionality).
    And it may also be worth noting that using life as a given we can work with is not that different from using matter as a given we do work with. Physics (Newtonian & quantum) does not explain the nature of matter (still the bottom-line question about the origins of what exists).

  39. Having followed this particular series of exchanges at three sites from the sidelines, it seems clear that for many it is emotionally difficult to accept the emerging reality: mankind’s attempts at self understanding, either through religion or philosophy, have been severely complicated (rendered wrong) by the fact that we happily create foundational narratives based on the scantiest of information. Philosophy used to occupy a most privileged position, now increasingly eroded by incontrovertible facts uncovered by science.

    This is not the first time that established narratives have had to be replaced. The Pantheon of innumerable Gods of ancient Rome gave way to a tripartite One around 400. Earth no longer being at the center of God’s Creation due to the work of Copernicus, Bruno and Galileo is another example. Now we realize that our minds are not as ‘clear and distinct’ as the philosophers had promised.

    A new understanding of the human is unavoidable because that is where progress lies. We will probably survive just fine, and be the better for it.

  40. @Callan: While I may be wrong I think the issue is that Scott is describing the problem of intentionality and people are offering solutions to much simpler problems.

    Admittedly I may be off, but intentionality is (IMO) a hard concept to really grasp and just as hard (again, IMO) to then explain in the space of a blog comment box. As such I’d recommend

    On the problem itself, I think it comes down to Fodor’s belief that no naturalistic account is possible vs. Rosenberg’s assertion(1) that Fodor in actuality paves the road for the kind of post-intentional landscape Scott is talking about.

    Where Scott deserves credit is recognizing just how problematic this is, especially when coupled with the spectre of transhumanism haunting the future horizon. As Benjamin Cain would rant, there’s no humanism to be found when you take the naturalism of the secular humanist to its conclusion.

    (1)The Mad Dog Naturalist:

    http://www.3ammagazine.com/3am/the-mad-dog-naturalist/

  41. *Sorry for the cut off sentence, should read:

    As such I’d recommend people at least read the definitions provided by the IEP or SEP.

  42. @Sci: Philosophy and science are products of biology. Neither would exist without the capacity of our cognitive brain mechanisms to represent relevant aspects of what exists outside of our own brain.

  43. @Arnold: Whether something is a product of biology doesn’t remove the need to actually provide an account for intentionality – regardless of whether one allies with Fodor/Searle (Intentionality is real & intrinsic to minds) or Scott/Rosenberg (Intentionality is born of heuristics attempting to account for lack of information.)

    I don’t mean to come across as rude, but when I see the answers offered to the problem of intentionality that Scott raised, it just seems to me that people don’t understand what intentionality is. Again, maybe I’m the one who is the ignoramus here, but as far as I can tell the supposed solutions to intentionality either solve a simpler problem or end up assuming intentionality in some form. (The latter being why Putnam argued there could be no causal account of intentionality.)

  44. @Sci,

    A causal explanation of consciousness/intentionality must be based on the specification of a brain mechanism with the properties that can actually cause particular kinds of intentional/conscious content. Given the current limitations of our technology for direct observation of brain mechanisms, we have to begin by formulating detailed theoretical models of plausible mechanisms with properties that can cause systematic changes in conscious content. This is similar to the formulation of theoretical models such as photons and the Higgs particle in subatomic physics. The retinoid system [1] is just such a theoretical model in the biological realm. Its neuronal structure and dynamics enable us to make specific predictions about induced conscious content under specified experimental conditions.

    The vivid hallucinations induced in the crucial SMTT experiment [1] were predicted by the causal properties of the retinoid mechanisms. In these experiments a vertically oscillating dot is seen by the observer; then it suddenly disappears and is replaced by a vivid conscious experience of a real triangle oscillating horizontally. The triangle appears to be really out there in front of the observer — an intentional event — even though there is no such retinal image. In fact, by turning a knob, the observer is able to control the width of the hallucinated triangle to match its height when the height of the unseen vertically oscillating dot changes. The successful match of the width and height of the hallucinated triangle is verified by independent observers, who experience the same hallucination when they look over the shoulder of the subject. This is strong empirical evidence in support of the retinoid model of consciousness and intentionality.

    1. For details about the neuronal structure and dynamics of the brain’s retinoid system see here:

    http://people.umass.edu/trehub/YCCOG828%20copy.pdf

    and

    http://people.umass.edu/trehub/Where%20Am%20I%20-%202.pdf

  45. Liam: “A new understanding of the human is unavoidable because that is where progress lies. We will probably survive just fine, and be the better for it.”

    I think the former is all but inevitable (so long as we take care regarding that term ‘progress’), but the latter, not so much. The thing is, there really is no reason to assume that our native, heuristic self/other understanding, which we evolved to cope with in the absence of the information tsunami we’re presently experiencing, can reliably discharge its ancestral functions in the aftermath of that tsunami. An example of the kinds of problems we might expect in the near future: http://rsbakker.wordpress.com/2014/05/20/neuroscience-as-socio-cognitive-pollution/

    Arnold: “A causal explanation of consciousness/intentionality must be based on the specification of a brain mechanism with the properties that can actually cause particular kinds of intentional/conscious content. Given the current limitations of our technology for direct observation of brain mechanisms, we have to begin by formulating detailed theoretical models of plausible mechanisms with properties that can cause systematic changes in conscious content.”

    But this is precisely the problem, Arnold. Mechanistic thought out-and-out scrambles so many kinds of intentional thinking. The oxymoron of a ‘free will’ mechanism is a good place to begin. Look at the esoteric conceptual gymnastics compatibilists (conveniently ignoring the fact that *anything* can be argued) have to go through just to overcome intuitions that many adolescents stumble on themselves. But even if you set aside free will, you have the abject failure of semantic externalism, the continuing inability to make naturalistic sense of how information could possibly be ‘about’ anything, and therefore true or false. The list goes on and on. The bottom line is that until someone finds a way of reconciling mechanism with intentionality (as I claim to do via heuristic neglect), we quite simply do not know how *to even begin* looking for the ‘mechanisms of intentionality.’ This is a big part of the reason why so many are wringing their hands about the ‘theory deficit’ in cognitive science. We don’t even have a consensus-commanding definition of ‘cognition’ because of it!

  47. Scott: “The bottom line is that until someone finds a way of reconciling mechanism with intentionality (as I claim to do via heuristic neglect), we quite simply do not know how *to even begin* looking for the ‘mechanisms of intentionality.’”

    I’m frankly puzzled. Why can’t the knowledgeable philosopher simply point to the neuronal structure of the retinoid system and its attached preconscious sensory matrices and semantic networks to understand what intentionality is? Or are philosophers required to wear blinders against the findings of science?

  48. Scott: “The thing is, there really is no reason to assume that our native, heuristic self/other understanding, which we evolved to cope with in the absence of the information tsunami we’re presently experiencing, can reliably discharge its ancestral functions in the aftermath of that tsunami.”

    There is no need to be more pessimistic than necessary. Yes, there are great obstacles to the dissemination of information in our culture. In many countries the illiteracy rate is 30% or even higher; educational levels are dismal, and the ability to think independently is probably a very rare commodity – it is not encouraged in many societies. But I hope and suspect that there is no direct biological limit to our ability to learn. Rather, the impediments are cultural, and these are only indirectly controlled by our biology. Things are complicated, but I sense they are not hopeless. Nothing that a few religious or cultural wars couldn’t settle, which has been the way of the past.

    Educators, like Ronald Barnett, have already been thinking about learning in a super-complex world. It would seem to me that those who prefer to ignore the emerging information reality will simply be left behind, but I am not exactly sure how or if that would happen.

  49. Here is a philosopher — the father of the “explanatory gap” — who understands that intentionality, our phenomenal experience, is just the content of a brain mechanism that instantiates our *Cartesian Theater* (retinoid space?). An excerpt:

    ……………………………………………

    PHENOMENAL EXPERIENCE: A CARTESIAN THEATER REVIVAL

    Joseph Levine

    University of Massachusetts Amherst

    Philosophical Issues, 20, Philosophy of Mind, 2010

    “So one aspect of the Cartesian theater model has to do with the role of the
    subject of experience as sole audience member, constituting a point of view on these appearances. But now, what are these appearances?

    Here’s where the second aspect of the Cartesian theater model comes into play.
    Just as the objects that seemingly appear to us, that we track, in a movie theater are fictional – or, as I prefer to think of them, virtual – so too the objects of experience are virtual. Perhaps a virtual reality device is a better model than the Cartesian theater, but the idea is much the same. As a result of subconscious cognitive interaction with the world of physical reality, then, conscious experience in the form of a Cartesian theater is constructed. The Cartesian theater is what veridical and hallucinatory experiences share.

    So what distinguishes veridical experience from hallucination? Well, it isn’t going
    to be a matter of there really being objects out there in the world instantiating the
    properties experience seems to attribute to them. Rather, assuming, as I do, that there is a physical reality – describable by physics – outside the mind that disciplines the construction of the theater through causal interaction with the brain, then my experiences are largely veridical. If, however, through envatment, or some other weird occurrence, my brain’s constructive activity becomes unmoored from external physical reality, then I’m hallucinating.”
    ……………………………………………………………….

    Notice the last sentence and think of the SMTT hallucination.

  50. “Why do we suppose that human beings uniquely stand outside the basic account of physics, with real agency, free will, intentions and all the rest of it? ”

    Alternatively – why do we assume that physics stands outside the basic structure of the rest of human culture ? Why do we think it is more real than reality ?

    There is a simple answer of course : propaganda. Repeat the line ‘the universe is governed by physical laws’ for three centuries and sooner or later people will believe it. The answer of course is that there are remarkable mathematical regularities in certain physical domains that physics can map. But there are swathes of the universe where the mapping breaks down, and we don’t even have to leave the galaxy to find startling examples of it, such as the rotation of stars in the Milky Way.

    Physics follows reality – it doesn’t determine it. Just because physics doesn’t predict features of mental life doesn’t make them not so. There is no evidence that mental phenomena contradict the ‘laws’ of physics (anybody who has studied physics will know how ad hoc, arbitrary and loosely defined this pantheon of ‘laws’ is in any case).

    The fact that contemporary physics cannot predict mental phenomena is a fault of physics, not reality. Consciousness is commonplace, it abounds on the earth in droves and is completely unmysterious. The true “mystery of consciousness” is actually a “mystery of physics” – why do we take it more seriously than reality and – looking forward – what would a physics look like that was capable of throwing light on mental phenomena ? How would it be structured ? It would almost certainly break down from its current theorem-based format. It might look less neat (although its far less neat than many give it credit for in any case). But at least it would be more complete than now.

  51. Arnold: And what do you take Levine to be explaining here, as opposed to assuming? If the represented exists, he’s saying, then the representing is true – but this is the very thing requiring explanation! Again, no one doubts that the brain is systematically related to its environments. What we want to know is how certain systematically related bits exhibit the property of intrinsic aboutness, such that it can be true or false (semantically evaluable). All relata are caught up in complex causal relations. Since effects do not recapitulate their causes, the notion of some special class of effects that are nevertheless ‘about’ their causes becomes deeply mysterious – to say the least! How could an effect be ‘true or false’ of anything (a ‘content’), let alone this or that environmental cause?

    This is one way of stating what’s typically called the ‘Problem of Content Determination.’ As far as I can tell Arnold, you’re simply saying there is no problem.

    My position says that effects are not true or false of anything, because ‘aboutness’ is simply a cognitive shorthand we use to understand certain local efficacies belonging to systems that are too complex to cognize causally. We attribute efficacies belonging to supercomplicated systems to entities (meanings, representations, contents) within the system as a way to economize local problem-solving. Metacognitive neglect means we cannot see this shorthand AS shorthand, and so we assume that we’re cognizing genuine entities.

  52. John Davey: The propaganda claim cuts both ways, so it really doesn’t do either side of the issue any favours. Mental phenomena are inexplicable in natural scientific terms, meaning that traditional speculation and metacognitive intuition are all we have to go on in our attempts to understand their nature. Given that science has revolutionized every single traditional domain it has colonized, the question is why we should expect anything different this time around. If consciousness were the least mysterious thing, as you say, then one would think that we would have broad agreement on at least some basic issues, when in fact the opposite plainly seems to be the case. Scientific knowledge, on the other hand, is revolutionizing the world as we speak.

  53. Scott: “My position says that effects are not true or false of anything, because ‘aboutness’ is simply a cognitive shorthand we use to understand certain local efficacies belonging to systems that are too complex to cognize causally.”

    But you say that our cognitive heuristics are about nothing at all. I find this incoherent. I say that ‘aboutness’ is satisfied by a representation of *something somewhere* within a particular kind of neuronal brain structure organized around a fixed locus of spatiotopic origin (e.g., retinoid space). I provide evidence in support of this claim in the SMTT experiments in which subjects have a vivid conscious experience *of/about* a triangle out there in front of them when there is no such object in front of them.
    While the naive subject has no inkling why his conscious experience is about a triangle, the neuroscientist knows why the experience is about a triangle. This is often characteristic of the difference between common understanding and scientific understanding.

  54. Scott

    Do not synonymise “science” with “physics”, as they are not the same. “Science” is making progress with mental phenomena, in the field of neuroscience in particular. I don’t doubt that “science” will produce what it has always produced – a growing understanding of the phenomena, specifically its features and causes.

    But physics (as currently formatted) won’t and never will make an ounce of progress in dealing with mental phenomena, and that breaks the back of the over-hyped reductionist paradigm which forms the backbone of three centuries of propaganda and alleged “clockwork universes”. It’s the paradigm that says economics is a form of psychology, which is a form of biology, which is a form of chemistry, which is a form of physics. It’s all physics. The reality is more nuanced, and physics should be appreciated as an art form rather than as an elucidator of complete, all-embracing, determinative natural causes. If it were, people would be happier with the mismatch between consciousness and physics, and their current lack of overall compatibility (although there is no evidence that conscious experience ‘violates’ a law of physics at all). They don’t contradict each other, after all.

    “If consciousness were the least mysterious thing as you say, then one would think that we would have broad agreement on at least some basic issues, when in fact, the opposite plainly seems to be the case.”

    The only disagreement I’m aware of is the fact of the existence of mental phenomena – hardly a disagreement, more a statement of policy by computationalists that they stick with the hopelessly outdated reductionist line and can literally ignore the reality under their own noses. A legitimate philosophical stance but one to be taken as seriously as some other crank-but-valid scheme such as solipsism.

  55. @John Davey: You might have seen it, but check out Chomsky’s reflections on the failure of reductionism & mechanization ->

    https://www.youtube.com/watch?v=wMQS3klG3N0

    I suspect a lot of science’s success hinges on focusing on that which is easily seen as quantitative rather than that which at least seems qualitative.

    Additionally, like Arnold I’m not sure what an eliminativist account of the world would look like, even as I agree with Scott that a naturalist solution likely entails a “sideways” approach. It’s not depressing so much as incoherent, or at least Rosenberg’s depiction is, given what I’ve seen of his work.

  56. Arnold: I don’t think you see the problem I’m raising. Let me try a different tack.

    To be clear, your subjects have an experience, which they then report via an intentional idiom, ‘aboutness.’ You’re assuming the report can be trusted, that because the subjects resort to the aboutness idiom to report the experience, that the experience itself must be intentional, or ‘about.’ You assume, in other words, that your subjects can actually metacognize their conscious experiences for what they are. Why would you assume this?

  57. Sci: All the coherence arguments I’ve seen deployed against eliminativism simply beg the question: they assume that resorting to intentional idioms automatically commits you to some traditional theoretical account of intentionality. This is more problematic for Rosenberg (but still question begging) because Rosenberg has no substantive account of what intentionality is. In the absence of a real positive account, it’s easier to suppose he’s helping himself to the very traditional intentionality he is committed to eliminating. Even still, he’s right to assert that the intentionalist is simply begging the question against him.

    But for Rosenberg, eliminativism comes first, as a consequence of an interpretation of science. His problem is that he has no way of answering the kinds of abductive moves that intentionalists are prone to take: at least intentional realism offers some kind of explanation for intentional idioms, one that is intuitively satisfying, if horrible in other respects.

    My eliminativism, on the other hand, falls out of a prior account of intentionality (BBT). As such the question-begging nature of tu quoque arguments becomes immediately apparent, and the abductive strategy actually backfires on the intentionalist.

    I stick by my pessimism, though I think it’s one of the harder to defend elements of my view.

  58. Scott: “You’re assuming the report can be trusted, that because the subjects resort to the aboutness idiom to report the experience, that the experience itself must be intentional, or ‘about.’ You assume, in other words, that your subjects can actually metacognize their conscious experiences for what they are. Why would you assume this?”

    I assume this because when I look over the subject’s shoulder I have the same conscious experience about a triangle that the subject reports. Moreover, when I ask others to look over the subject’s shoulder, they report the same conscious (hallucinatory) experience about a triangle out there on the computer screen that the subject and I report. What other assumption should I make about these results within the norms of science?

  59. John Davey: “The only disagreement I’m aware of is the fact of the existence of mental phenomena – hardly a disagreement, more a statement of policy by computationalists that they stick with the hopelessly outdated reductionist line and can literally ignore the reality under their own noses.”

    What does the bottom of your nose look like? I ask this question quite seriously, and not to be glib. Put your nose inside your head, and ask yourself, how much of it would you be able to cognize? You certainly wouldn’t be able to cognize it the way you cognize it from the outside – not at all! You would have to cognize it quite differently, perhaps even radically so, given your inability to inspect it from multiple angles via multiple sensory modalities. In fact, you could suppose that your only way of cognizing your ‘inner nose’ would be opportunistically, as far as you needed to. And moreover, since you would have no way of cognizing your cognition of your inner nose, you would have no way of knowing just how opportunistic your cognition was – you would probably just assume you were cognizing your inner nose for what it was.

    So why should we think the case is any different for mental phenomena? The fact that they seem to be ‘right under our noses’ should be cause to doubt our intuitions regarding them, not otherwise. It strikes me as an extraordinary empirical claim to insist that we somehow evolved the capacity to intuit ourselves in anything other than a radically opportunistic manner. The burgeoning research into metacognition certainly suggests this has to be the case.

  60. @Scott: It seems to me that if Rosenberg can’t cash out his intentionality-based idioms in non-intentional terms then the criticisms against him likely have some merit. (Though I’ll grant the refutations by Massimo and Carrier, in trying to reconcile materialism with humanism, were disappointing and possibly even intellectually dishonest.)

    But when Rosenberg writes stuff like the following (1), it’s hard to even picture how anything resembling coherent human thought could arise:

    “Perhaps the most profound illusion introspection foists on us is the notion that our thoughts are actually recorded anywhere in the brain at all in the form introspection reports. This has to be the profoundest illusion of all, because neuroscience has been able to show that networks of human brain cells are no more capable of representing facts about the world the way conscious introspection reports than are the neural ganglia of sea slugs! The real challenge for neuroscience is to explain how the brain stores information when it can’t do so in anything like the way introspection tells us it does—in sentences made up in a language of thought.”

    (1) Disenchanted Naturalist’s Guide to Reality:

    http://onthehuman.org/2009/11/the-disenchanted-naturalists-guide-to-reality/

  61. Addendum: I’d suggest Putnam’s pragmatic pluralism for anyone worried a rejection of materialism requires an embracing of scriptural traditions’ notions about God, souls, all that jazz.

  62. Sci, Maybe Rosenberg has invoked BBT because we do not know just how complex vision, the heuristic king of all senses, actually is. With only a nominal understanding of vision and visual mechanisms, science based in empiricism – visual sense qua mechanism – hits the wall of BBT; the Scientific Image is still so prejudiced by the Manifest Image that it can’t resolve these issues… yet.

  63. Sci: But regarding Rosenberg, no more than the abductive case. There’s nothing incoherent about thinking intentionalist accounts of intentional idioms are false, just as there’s nothing coherent about supposing that intentionalism must be true simply because we find it difficult to imagine otherwise. It is empirically possible (I think probable) that we possess ‘use only’ cognitive modes, ways of navigating the world that can only baffle us. That difficulty does not warrant an abductive case for intentionalism.

    And I think Rosenberg soft-sells the difficulty, which is as much one of explaining why intentionalism seems as intuitively obvious to reflection as it does. This is another place where my eliminativism differs from his: it actually takes making sense of experience to be an important theoretical desideratum. It’s all about linking this, whatever it is, to that. So you get the sense that Rosenberg’s eliminativism simply misses the point. But there’s nothing incoherent about the quote whatsoever – so long as you refrain from assuming some form of intentionalism is true!

    There’s thought, and then there’s thought as revealed to thought. The difficult-to-believe thing, the thing the intentionalist needs to explain, is why thought should have access to the nature of thought, as opposed to something ad hoc and heuristic. Biologically speaking, it’s hard to imagine how the brain’s cognitive self-relation could be anything other than ad hoc and heuristic.

    I’m saying it’s hard to imagine how we could have anything but a radically blinkered self-view (and the research is certainly confirming as much). And that also it’s hard to imagine that we could somehow intuit our view as blinkered. As a cultural exaptation, we have no reason to suppose philosophical reflection would inherit a ready-made error-signalling system. And lo, look at all the endless controversy.

    For the longest time I hedged, but as the research comes in the case keeps getting stronger. We possess blind brains. This is why knowledge is stopped cold every time it hits us, and controversy takes over. Your thinking is so much more vast and complicated than the murk, the smudge of associations and intuitions, that comes whenever you think of thought.

  64. John Davey, From your comment 51: if I claimed the true mystery of engines is the mystery of physics… well, it is not. The “mystery of engines” is the mystery of several mechanisms and how they interplay. For engines the physical mystery is the controlled release of energy from fuel; for brains and consciousness the mystery appears, to me, to be the formation of coherency, or the emergence of the scaled-up environment complex organisms exist in… sense and movement etc.

  65. Arnold: “I assume this because when I look over the subject’s shoulder I have the same conscious experience about a triangle that the subject reports. Moreover, when I ask others to look over the subject’s shoulder, they report the same conscious (hallucinatory) experience about a triangle out there on the computer screen that the subject and I report. What other assumption should I make about these results within the norms of science?”

    But what’s scientific about simply assuming that a theory about intentionality – an immensely controversial one at that – is true?

    But again, you miss the point, Arnold. How do your report and theirs evidence your assumptions over my theory? Not at all. On my theory, everyone should say they’re having an ‘experience of’ because assuming as much is a damn simple way of resolving a restricted set of bogglingly complicated problems. Given that my theory is also consistent with the most recent research into metacognition, given that it actually explains away a good number of conundrums, given that it actually explains, as opposed to simply assumes, intentionality, you’re the one with the theoretical and the empirical burden, are you not?

  66. Scott, Assuming that all of the observers are made from the same genetic code and have the same brain mechanisms, well yes, the observations are corroborated. Even an observer that is a computer made from similar heuristic algorithms may corroborate them; if it isn’t, it may detect something else.

    However Arnold is saying there are mechanisms he has a handle on.

  67. Perhaps the most profound illusion introspection foists on us is the notion that our thoughts are actually recorded anywhere in the brain at all in the form introspection reports.

    Seems fair enough.

    Think of someone saying a phrase.

    Now think of them shouting the same phrase.

    Now, is any shouting going on as you think it? What is ‘volume’ inside your head? No, it’s just the idea of ‘volume’. How many other elements are just ideas of something? And perhaps the ideas themselves are just provocations? À la the Matrix quote: “Do you think that’s air you’re breathing now?”

  68. Part of me very much agrees with Arnold’s stance above, that in the end the structural relations of the world are re-wound in the brain somehow, allowing us to engage in complicated behaviors. In the end, language also plays some part in some of those processes, that is, it too plays complicatedly in the brain to allow for complex behaviors (as hopefully this linguistic response shows).

    On the other side, if intentional posits are being made and defended in the way that say Searle makes semantic posits (like in the Chinese Room), then the Rosenberg-like denials are going to make some sense. I think we can read Rosenberg, including the quote above, and Scott’s stance as well, as arguing against a certain conception of intentionality, one that is misconceived because of phenomenology and past historical theorizing. And there is good reason to think such a misconceiving of intentionality is rampant throughout philosophy. That is, that many philosophers are making an intentionality posit in the way that Searle makes a semantic posit.

  69. Scott: “On my theory, everyone should say they’re having an ‘experience of’ because assuming as much is a damn simple way of resolving a restricted set of bogglingly complicated problems.”

    Your theory claims that everyone *assumes* that they are having an ‘experience of’ something *because* to think otherwise raises too many problems for them. My theory claims that everyone has an immediate ‘experience of’ something *before* thinking anything else about the experience — before any reflection about the experience. This is just the way the brain’s retinoid system works, I say. The problems of “intentionality” are the product of discursive obfuscation that does not take account of the essential role of *representation* in the operations of our cognitive brain mechanisms.

    It seems to me that your Blind Brain Theory is a valid commentary about philosophers rather than a commentary about the rest of the folks.

  70. Arnold: “The problems of “intentionality” are the product of discursive obfuscation that does not take account of the essential role of *representation* in the operations of our cognitive brain mechanisms.”

    Well, you’ve crawled out on a lonely limb claiming this, I fear. Even intentionalists acknowledge the problems of intentionality: this is why they’re so prone to recast their inability to naturalize their terms as ‘irreducibility.’

    So what, in natural, causal terms, does the aboutness relation consist in? How do causal relations suddenly come to evince normative properties, for instance?

    These questions are every bit as relevant to the science as to philosophy, Arnold.

  71. VicP: “However Arnold is saying there are mechanisms he has a handle on.”

    And my question is how these mechanisms explain intrinsic intentionality, as opposed to simply assuming it. This is something he needs to be able to answer before he can claim to have solved consciousness. I personally think that the retinoid theory could be immunized from this problem by adopting something like BBT.

    But then, as Nick Humphrey likes to say, theories are like toothbrushes in this biz: no one wants to use anyone else’s!

  72. Lyndon: “Part of me very much agrees with Arnold’s stance above, that in the end the structural relations of the world are re-wound in the brain somehow, allowing us to engage in complicated behaviors.”

    I agree with this as well. The question is one of what this ‘rewinding’ consists of: mere systematic covariances, or some set of inexplicable intentional and normative relations. I’m offering an account of the former that allows us to understand what is actually going on with the latter.

  73. Scott: “So what, in natural, causal terms, does the aboutness relation consist in?”

    According to my theory of consciousness, “aboutness” is naturally given by a transparent brain representation of the world from a privileged egocentric perspective — the first-person perspective. Retinoid space realizes this perspective within one’s own brain. I discuss this in Chapter 7, “A foundation for the scientific study of consciousness” in *The Unity of Mind, Brain and World*. A. Pereira Jr. & D. Lehmann eds., Cambridge University Press 2013.

    If you go to the following link and open Look Inside, click on Table of Contents, then click on Chapter 7, you might be able to read some relevant parts of my chapter:

    http://www.amazon.com/Unity-Mind-Brain-World-Consciousness/dp/1107026296/ref=sr_1_1?s=books&ie=UTF8&qid=1416423763&sr=1-1&keywords=pereira+jr

  74. Scott: “And my question is how these mechanisms explain intrinsic intentionality as opposed to simply assume it. This is really something he needs to be able to answer to be able to claim to have solved consciousness. I personally think that the retinoid theory could be immunized from this problem by adopting something like BBT.”

    Let’s look at the Leibniz Mill as a real mill. Nobody questions the mill’s ability to take wind and use it to grind wheat. But what if those gears in the mill were not hard wood or metal? What if they were styrofoam? One would question the inner material’s capacity to perform the macrofunction that is assumed. With intentionality the mechanisms may be there, but with no knowledge of the inner mechanism…..

  75. @Callan: ‘And perhaps the ideas themselves are just provocations? À la the Matrix quote “Do you think that’s air you’re breathing now?”’

    I’m a bit unsure what you mean by the ideas themselves being just provocations?

    What’s difficult for me to grasp about eliminativism regarding intentionality is not that we are wrong about reality but that our thoughts about anything are, in some sense, illusory.

    For example if we’re only simulating valid argument forms, how did we ever evolve to assume such forms were valid?

    AFAIK Rosenberg never successfully explains this.

  76. @Arnold: It does *seem* incoherent, though I agree with Scott (and I think Peter?) that to take a materialist stance – as the term is currently defined – is to accept that intentionality has to be subject to some kind of eliminativism.

    Then again, in the past Chomsky’s suggested our definition of “matter” is so malleable that there is no coherent mind-body problem. Perhaps this is where things will turn, once the religious fear of immaterialism subsides?

  77. Regarding the illusory and sometimes delusional content of consciousness: I am working on a coherent explanation of the problem at my other site; it seems rather straightforward when approached from an information-processing perspective. In essence, a living organism by definition interacts with, is conscious of, reality. In us consciousness resembles a theater, mostly illusory, but shared by all to a degree. There lies the rub: how precise is our sharing of this movie?

  78. Liam,

    What makes you think that to interact with reality is to be conscious of reality? Even more so ‘by definition’?

    Even humans, at the peak of conscious complexity on Earth, interact with reality at an unconscious level much, if not most, of the time.

  79. By thinking about the fact that even bacteria respond to stimuli (information): they crawl on surfaces, move away from or towards light, etc. These micro-organisms process information and respond purposively. Obviously their awareness or consciousness is not comparable to ours, except that to a disinterested outside observer they would appear to be intelligent. Many of the biochemical processes in neurons already exist in these little creatures, which are extraordinarily complex.

    I believe that once we have a clearer understanding of human consciousness, it will be easier to see it as a natural process characteristic of life. (My other site on WordPress goes by Johanneslubbe, where I hope to post my little meditation on this subject soon.)

  80. Arnold,

    Thanks for the link. By your definition of consciousness (Consciousness is a transparent brain representation of the world from a privileged egocentric perspective) I would agree completely with your statement above. But I am trying to look at it in a less anthropomorphic and anthropocentric way.

    ‘Tropisms and reflexive behavior’ are the foundations upon which human-like consciousness is layered, I suspect. Unconscious, reflexive, processes are integral to our consciousness, apparently. More information promises to be very interesting.

    Otherwise, I generally agree with your proposal regarding human consciousness.

  81. Sci,

    What’s difficult for me to grasp about eliminativism regarding intentionality is not that we are wrong about reality but that our thoughts about anything are, in some sense, illusory.

    But such illusion is a common and commonly accepted part of everyday life – surely you have shaken a wrapped package at some point, trying to guess from the thumps and bumps it made as you moved it around what was inside?

    And you were perfectly comfortable with the idea that you might guess wrong – that what you thought it was was just an illusion created from a shortage of information.

    It’s just a matter of considering that one’s thoughts, which we might think we know so well, are really just the thumps and bumps of something wrapped away out of sight.

    The only problem then in taking our thoughts as illusions is in whether one feels one has to treat the brain as somehow ‘more’ than the wrapped package.

    A whole life of thumps and bumps and nothing better than that. It’s a hell of a red pill.

    For example if we’re only simulating valid argument forms, how did we ever evolve to assume such forms were valid?

    That’s what we guessed was in the package. And so far, that guess hasn’t gotten us extinct. So far.

  82. @Arnold: Good point on separating false perception and false judgement. I think one issue for those arguing that logic – the apprehension of some kind of universal – is some kind of heuristic is this: how could we ever accept that we are in error about grasping universals without having comprehended universals in the first place?

    How can syntactic engines – however complex – ever get you to semantic engines? Now, since I’m not desperate for dualism, I’m willing to accept this can be done somehow – perhaps via the sideways maneuver Scott describes – but what that means for our mental life is difficult for me to discern.

    The difficulty is why I suggested the alternative is to follow Chomsky in revising how we think of “materialism” versus “immaterialism”.

    I suppose time will tell which position is the right one.

  83. @Arnold: Thanks for the paper. I’ve saved your links and plan to work my way through them.

    @Callan: I think the challenge goes beyond being mistaken with regards to the nature of what we observe or the origins of our thoughts. Consider these lines from Rosenberg’s Atheist Guide to Reality:

    “…History is bunk…Science must even deny the basic notion that we ever really think about the past and the future or even that our conscious thoughts ever give any meaning to the actions that express them…”

    I can see how our brain could rationalize actions we’re taking due to subconscious biological drives, but how can our thinking about the past and future be mistaken in the way Rosenberg describes?

    But the denial of intentionality regarding the past & future is only part of Rosenberg’s assertions:

    “…What you absolutely cannot be wrong about is that your conscious thought was about something. Even having a wildly wrong thought about something requires that the thought be about something.

    It’s this last notion that introspection conveys that science has to deny. Thinking about things can’t happen at all…”

  84. Arnold,

    An illusion is a false *perception*, not a false judgement. If you guess wrong about what is in a package, you are not the victim of an illusion.

    Unless you’re saying your perceptions control your actions directly regardless of what you think, the issue doesn’t end at false perceptions – it’s about your judging those perceptions as accurate (or failing to judge at all).

    Why does perception get the blame for an illusion when it’s judgement that judges whether it takes perception at its word?

    I see the distinction you’re making, but it appears to be buck passing.

    Sci,

    I’m not sure about the way Rosenberg describes it, but it’s easy enough to take a (consciously imperceptible) flaky and fragmented record and treat it as an accurate account of events that occurred (again, it’s rather like anosognosia).

    On the last quote, Rosenberg strikes me as sort of arms around his knees, rocking back and forth in the face of something very traumatic. For instance, here’s a quote a reviewer made “In an interview he claims that “consistent thinkers” will embrace a “left-wing, egalitarian agenda” – an extraordinary statement from someone who claims that morality has no meaning.”

    The reviewer is very astute, I think. Rosenberg, when rocking back towards traditional notions of morality, avoids taking the idea to its full degree in such a case (he takes it to the full degree only as an attempt at reductio ad absurdum, which, if things are absurd, doesn’t work). He rocks back and forth between the various moral pillars he takes as ‘right minded’ and the idea that it’s all a virtual-reality video game. Between the blue and the red.

    However, the last quote is him rocking back toward the red, and biting the bullet that thinking doesn’t happen – a causal chain of events just happens. It’s an extreme rock towards the red pill. There’s a reason we don’t take people out of the matrix after a certain age.

    Really I think the thing one might try to shoot him down for is for rocking back and forth. And really that’s just a kind of PTSD.

  85. Callan: “I see the distinction you’re making, but it appears to be buck passing.”

    It certainly is not “buck passing”. As you read this message, your conscious experience of your inner speech lags your pre-conscious understanding of what your brain has *represented* as being out there on your computer screen by approximately 500 milliseconds. Scott claims that there is no such thing as a conscious brain representation of features of our environment. I claim that we can have no conscious experience of the world around us without brain representations of its relevant spatiotemporal features from a fixed “point” of perspectival origin (I!). For an overview of the minimal cognitive brain mechanisms see this chapter on my ResearchGate page:

    https://www.researchgate.net/publication/249657555_Overview_and_Reflections_In_Trehub_A._The_Cognitive_Brain._MIT_Press_1991

  86. @Callan: Are you then saying Rosenberg is wrong to claim that intentionality is in truth illusory? If you think he’s wrong, are you accepting a different conception of matter than what the modern naturalist goes for, or are you suggesting there is some kind of emergence at the biological level?

    I confess I never understand how strong emergence is supposed to work.

  87. Arnold,

    I don’t understand the 500 millisecond reference in regards to illusions?

    As I understand it, the only way to be under an illusion is if you choose to believe the senses in question. The senses didn’t force you to believe them; you just chose to take them at their word. Or you didn’t judge at all, just accepted. That’s still not the senses’ fault, though.

    Are you saying we believe we are seeing the screen straight away rather than with half a second of lag? But we’re both thinking about this now, of course. And even if we didn’t know before, we could have wondered ‘Am I comprehending the world at the rate I think I am?’ By your definition, if we could have wondered that, is it an illusion or a false judgement? When we could have questioned our senses but didn’t, is that an illusion or is it a false judgement?

    Sci,

    I was referring to the ‘right minded’ stuff as the blue pill.

    The “Thinking about things can’t happen at all” is a very stark statement – as much as one might say one’s computer doesn’t think at all, here one is saying oneself doesn’t think at all (though obviously the computer can and does do a great many things). I was trying to describe how stark it was – I might have drifted off topic slightly in doing so (but in a useful way, I hope).

    I think he’s correct about intentionality being illusory (or you could take it as the product of millennia of false judgement). But even as I do, I shy away from the full brunt of some of what he’s saying – not because it’s false, but because I don’t think it aids in living life. We use broad, kludgey concepts to try and guide our lives – we can’t do without them, but the full extent of his claims eliminates all kludge. So I shy from the full extent, even as I agree it seems pretty much the case.

  88. Callan: “As I understand it the only way to be under an illusion is if you chose to believe the senses in question.[1] The senses didn’t force you to believe them, you just chose to take them at their word. …… Are you saying we believe we are seeing the screen straight away rather than with half a second of lag?[2]”

    1. Your understanding is wrong. “Illusion” and “belief” belong to two different cognitive categories. Consider the moon illusion. You have a conscious experience of the moon seen at the horizon as being much larger than the moon seen high in the sky. This experience is called an illusion because, in fact, the projected size of the moon on the retina is essentially the same (~ 0.5 degrees in visual angle) whether seen at the horizon or overhead. Naive observers believe that the moon appears larger at the horizon either because it is actually closer to the earth or because its image is magnified by the atmosphere. Both of these beliefs are wrong. Knowledgeable observers believe that the moon appears smaller overhead because it is represented in the brain as closer and therefore is not enlarged as much as the horizon moon by the brain’s size-constancy mechanisms. This belief is correct. So the illusion is independent of the belief.

    2. Again, naive observers believe that whatever they consciously believe about their conscious representation of the visual patterns on their computer screen is immediate. This belief is wrong. Knowledgeable observers believe that there is a temporal lag between their conscious representation of the visual patterns on the screen (earlier) and their conscious belief about what is on the screen (later). This belief is correct.

  89. Arnold, I hope you can appreciate that when the illusion is of instantaneous understanding of what’s on the screen and the belief is of having instantaneous understanding of what’s on the screen, I have trouble seeing any distinction between them.

    In such a situation you could say the belief is just a xerox reiteration of the illusion, rather than the original. I’d agree with that – but the higher the quality of the copy, the smaller the difference is.

    And I feel that by referring to an ‘illusion’, we’re distancing ourselves from truly dealing with the impact of cases where there appears to be no illusion. Geocentrism was real. It was obvious – one just had to look at the sun traversing the sky. It was as real as climate change is real to us today. Who would feel comfortable saying climate change is an illusion (apart from the deniers)? That’s exactly how the geocentrism believer would feel – committed to the belief. A belief that’s entirely made of illusion.

    Even your moon example has people questioning what they are seeing – they aren’t thinking ‘The moon actually changes in size!’, they are thinking ‘The moon can’t be changing in size – what is going on?’ Even if they get the reason it appears to change wrong, they are questioning their senses. Are all the examples for the difference between illusion and belief going to be of people who are actually questioning their senses? Will there be any full-blooded geocentrism-like examples? Or will they be like the computer screen example – where illusion and belief are pretty much duplicates?

  90. Callan: “Even your moon example has people questioning what they are seeing”

    That is just the point. They see first and question later. It is the *false seeing* that is the illusion, not the belief that might follow the question about what is seen.

  91. Arnold, I got your point to begin with – and I raised another point about it. You do not have an example of seeing first and questioning never.

    Your examples all involve someone being, to some degree, skeptical of their senses. You don’t have examples of people not being skeptical of their senses and simply taking their senses at their word – examples of people utterly duped and blissfully unaware they are duped. Geocentrism is an example I gave of that.

    I would agree that when people are skeptical of their senses, one might as well draw a distinction between the sense and the belief.

    But until the quite common occurrence of people not being skeptical of their senses is addressed, I don’t agree that sense and belief are always separate.

  92. Callan: “You don’t have examples of people not being skeptical of their senses and simply taking their senses at their word.”

    I gave the example of the moon illusion in which most people are not skeptical of what they see (their visual perception).

  93. @Arnold: “I might add that an illusion is an analogical/imagistic brain event, whereas a belief is a propositional brain event.”

    I have to admit the whole illusion digression seemed orthogonal to the issue of what erasure of intentionality means or if such erasure is even coherent.

    What does it mean to be wrong about our supposedly determinate mental content?

  94. Arnold, I wouldn’t say you gave examples of non-skeptical people – your examples involve some skepticism, because the person isn’t just thinking ‘the moon grows and shrinks’, which is what their eyes are telling them.

    Naive observers believe that the moon appears larger at the horizon either because it is actually closer to the earth or because its image is magnified by the atmosphere.

    To me the whole ‘closer to the earth’ or ‘atmosphere magnification’ theories are made up because the people in these examples feel there is a discrepancy in their senses to account for.

    That’s not an example of someone who does not feel there’s a discrepancy in their senses. Geocentrism is an example of people feeling there is no discrepancy between their senses and how things actually are.

    Surely you have some examples of people not being skeptical at all of their senses, Arnold? People that detect no discrepancy?

    And to tie it back in to the original post, perhaps the sense of ‘qualia’ or ‘intentionality’ would be examples. Very much ‘what you see/sense is what it is’ territory. It is really, really hard to detect discrepancy in regard to geocentrism. It takes a lot of fiddling around with instruments to get around it. The harder it is to detect a discrepancy, the more likely there is no skepticism on the matter.

    So how many discrepancies show up in regard to ‘intentionality’?

    If none appear to, then either A: as Scott might put it, it ‘possesses inexplicable efficacy’, or B: we are failing to detect the discrepancies present, and so insist an entirely false situation is occurring (much like the Anton’s syndrome patient does).

    On the other hand I think I’ve gotten more than my fair share of a turn at describing what I mean and am maybe beginning to chafe with Sci on that. So if I’ve gone on too long I’m happy to end at what has been, IMO, a more than fair hearing on the matter, leaving it for consideration. Thank you for reading 🙂

  95. Sci: “What does it mean to be wrong about our supposedly determinate mental content?”

    It seems clear to me that a claim that we are wrong about our occurrent conscious content is incoherent because any such claim, in effect, denies the conscious content that is supposed to be wrong. Notice that this is different from asserting that our interpretations or our beliefs *about* our conscious content are wrong.

  96. @Arnold: “Notice that this is different from asserting that our interpretations or our beliefs *about* our conscious content are wrong.”

    Yeah, I think this is an important distinction. I would like to see eliminativists of intentionality provide a description of how the supposed trick works. I know Scott has posted some things on his blog, and maybe it’s just my ignorance at work but I’ve never felt I’ve understood how neglect works to produce semantics from syntax.

    Perhaps the science just isn’t there yet?

  97. This is what I wrote on a different forum devoted to the “hard problem” of consciousness. It is an excerpt from my chapter in *The Unity of Mind, Brain and World* (Pereira & Lehmann, eds., Cambridge University Press, 2013). The private phenomenal descriptions mentioned below are our conscious contents:

    “Each of us holds an inviolable secret — the secret of our inner world. It is inviolable not because we vouch never to reveal it, but because, try as we may, we are unable to express it in full measure. The inner world, of course, is our own conscious experience. How can science explain something that must always remain hidden? Is it possible to explain consciousness as a natural biological phenomenon? Although the claim is often made that such an explanation is beyond the grasp of science, many investigators believe, as I do, that we can provide such an explanation within the norms of science. However, there is a peculiar difficulty in dealing with phenomenal consciousness as an object of scientific study because it requires us to systematically relate third person descriptions or measures of brain events to first person descriptions or measures of phenomenal content. We generally think of the former as objective descriptions and the latter as subjective descriptions. Because phenomenal descriptors and physical descriptors occupy separate descriptive domains, one cannot assert a formal identity when describing any instance of a subjective phenomenal aspect in terms of an instance of an objective physical aspect, in the language of science. We are forced into accepting some descriptive slack. On the assumption that the physical world is all that exists, and if we cannot assert an identity relationship between a first-person event and a corresponding third person event, how can we usefully explain phenomenal experience in terms of biophysical processes? I suggest that we proceed on the basis of the following points:

    1. Some descriptions are made public; i.e., in the 3rd person domain (3 pp).

    2. Some descriptions remain private; i.e., in the 1st person domain (1 pp).

    3. All scientific descriptions are public (3 pp).

    4. Phenomenal experience (consciousness) is constituted by brain activity that, as an object of scientific study, is in the 3 pp domain.

    5. All descriptions are selectively mapped to egocentric patterns of brain activity in the producer of a description and in the consumer of a description (Trehub 1991, 2007, 2011).

    6. The egocentric pattern of brain activity – the phenomenal experience – to which a word or image in any description is mapped is the referent of that word or image.

    7. But a description of phenomenal experience (1 pp) cannot be reduced to a description of the egocentric brain activity by which it is constituted (there can be no identity established between descriptions) because private events and public events occupy separate descriptive domains.

    It seems to me that this state of affairs is properly captured by the metaphysical stance of dual-aspect monism (see Fig.1) where private descriptions and public descriptions are separate accounts of a common underlying physical reality (Pereira et al 2010; Velmans 2009). If this is the case then to properly conduct a scientific exploration of consciousness we need a bridging principle to systematically relate public phenomenal descriptions to private phenomenal descriptions.”

  98. “What does it mean to be wrong about our supposedly determinate mental content?”

    It seems clear to me that a claim that we are wrong about our occurrent conscious content is incoherent because any such claim, in effect, denies the conscious content that is supposed to be wrong.

    I’m reading this as saying the denial of conscious content is being made from conscious content. A type of performative contradiction.

    Otherwise I don’t understand it – the claim we are wrong is incoherent because it’s a claim we are wrong?

    Sci,

    Well, it’s a bit like the show or movie ‘The Fugitive’ – you have a large number of forces rallied, all based on an unknown unknown. They don’t know he didn’t do it – they don’t even know that they don’t know that (well, they aren’t shown speculating/philosophising “Hey, what if some complicated series of obscuring events made us think he did the murder?” – if they did, they could at least be said to be speculating about an unknown unknown that faces them).

    Such neglect means a whole bunch of police officers are impassioned about capturing a murderer. It’s not like they are just saying ‘well, he might be a murderer’ – it’s genuine pursuit (though ‘unknown unknowns’ kind of makes a mockery of ‘genuine’).

    It is strange to think the underpinning of what we report might derive from the actions of thousands of (so to speak) one-armed men, occurring every fraction of a second. By one-armed men I mean synapses, neurons, etc.

    I’m not sure there’s any book or movie that has thousands of one-armed-man events going on, so as to show how all the false information could end up generating entirely false narratives of a complex nature. Though it might be what Scott is shooting for with his books. Anyway, I don’t think we normally deal with multiple mysteries colliding to produce more mysteries, which produce more – which become ever harder to unravel. Generally there’s one mystery at a time in media. But that’s what we’d be talking about – the thousandfold fugitive.

  99. But why is that convincing? It simply assumes such a denial HAS to come from conscious content in order to fulfil the contradiction. How can an assumption be that convincing?

    What if such a denial can come from something else? Then there is no contradiction.

    You can see how it might be frustrating for people who are trying to argue something else is going on, only to be shut down with the performative contradiction claim – a claim backed purely by the unquestioned assumption that denials of conscious content can only come from conscious content.

  100. Callan, you can question the assumption that denials of conscious content must come from conscious content, but then the burden is to show where else such a denial comes from. Anything is possible. But we argue on the basis of available evidence.

  101. Arnold –

    Some questions about your comment #204:

    – “first person descriptions or measures of phenomenal content”: do you mean “descriptions” in the usual sense, ie, a sentence like “I see a zebra over there”? And what is an example of a “first person … measures of phenomenal content”?

    – In points 1-3 you use phrases like “public/private descriptions”. Do you mean descriptions that are of publicly/privately accessible objects or events? Point 7 seems to suggest that you do. But if not, what is a “private description”?

    – Point 5 as written says that descriptions are selectively mapped to egocentric patterns of brain activity (AKA PE per point 6) in both producers and consumers of descriptions. In the case of a consumer, that appears to be the conventional view: a heard verbal description of, say, a zebra may result in creation in the hearer of a PE (mental image) of a zebra. In the case of a producer, that doesn’t seem the conventional view, which I take to be that light from a zebra excites a viewer’s visual sensors, which results in creation of a PE, which in turn may result in creation of a verbal description. Ie, causality is in the other direction. I happen to suspect that even in the case of a producer the causal direction may be as in point 5, but since I assume that’s an unconventional view I have to wonder if it’s actually yours.

  102. Arnold, the questioning of the existence of conscious content, or questioning whether it is of an illusory nature, is an invitation to all parties to speculate what else might be producing the questioning/denial of conscious content. Sure, one might not find personal interest in such speculation (and fair enough if anyone doesn’t) – but to insist on available evidence only (especially before true AI has been invented), or to state performative contradiction – to do either is to entirely ignore the invitation to speculate.

  103. Charles, re your #109:

    1. First-person descriptions can be sentential propositions and/or conscious images.

    2. An example of a 1st-person measure of phenomenal content would be the perception of a large moon at the horizon and a smaller moon high in the sky.

    3. A private description is a 1st-person description; i.e., a description that is not fully shared in the public domain.

    4. If I understand your comment, the “unconventional view” is my view.

  104. OK, Arnold, because I would like to be absolutely sure we’re communicating, I’m going to pursue two items in your reply a little further.

    1. I infer that you see (in the producer) processing of the neural activity consequent to visual sensory stimulation as comprising two activities: one that maps the neural activity into “sentential propositions”, ie, a verbal representation (ostensibly of the content of the FOV); and another that takes that verbal representation and forms a mental image, ie, a non-verbal phenomenal experience (ostensibly a representation of the content of the FOV).

    As far as I know, that is definitely an unconventional view – one which I’ve expressed from time to time in this forum, eg, here.

    3. The phenomenal experience is inherently private. The verbal representation may or may not be private, depending on whether it is publicly expressed.

    If you agree with these, then why do you insist on calling private non-verbal phenomenal experiences and unexpressed verbal representations “descriptions”, a seemingly unequivocally public concept? I’m drilling down on this because the idea of “separate descriptive domains” ceases to make sense if the 1pp response to neural activity consequent to visual sensory stimulation isn’t properly called a “description”.

    I’m actually going somewhere with this, but I want to do it in small steps, if only for my own benefit since I find this topic quite confusing despite having thought about it for a long time.

  105. Charles,

    You wrote:

    “3. The phenomenal experience is inherently private. The verbal representation may or may not be private, depending on whether it is publicly expressed.”

    If the [internal] verbal/imagistic representation is publicly expressed, then what is public is a string of characters, sounds, or illustrations in the verbal language or means of depiction of the subject. But, crucially, what is not publicly expressed are the internal/1pp referents for the words and depictions (both are descriptions) used in the expression. So the public has access to and can point to your expressed descriptions (3pp), but does not have access to and cannot point to the phenomenal images that are the 1pp referents of these overt descriptions. Thus, 1pp descriptions and 3pp descriptions occupy separate descriptive domains.

    For more about this, see “A foundation for the scientific study of consciousness” on my Research Gate page, here:

    https://www.researchgate.net/profile/Arnold_Trehub
