Scott’s Aliens return

Scott Bakker’s alien consciousnesses are back, and this time it’s peer-reviewed.  We talked about their earlier appearance at Three Pound Brain a while ago, and now a paper in the JCS sets out a new version.

The new paper foregrounds the idea of using hypothetical aliens as a forensic tool for going after the truth about our own minds; perhaps we might call it xenophenomenology. That opens up a large speculative space, though it’s one which is largely closed down again here by the accompanying assumption that our aliens are humanoid, the product of convergent evolution. In fact, they are now called Convergians, instead of the Thespians of the earlier version.

In a way, this is a shame. On the one hand, one can argue that to do xenophenomenology properly is impractical; it involves consideration of every conceivable form of intelligence, which in turn requires an heroic if not god-like imaginative power which few can aspire to (and which would leave the rest of us struggling to comprehend the titanic ontologies involved anyway). But if we could show that any possible mind would have to be x, we should have a pretty strong case for xism about human beings. In the present case not much is said about the detailed nature of the Convergian convergence, and we’re pretty much left to assume that they are the same as us in every important respect. This means there can be no final reveal in which – aha! – it turns out that all this is true of humans too! Instead it’s pretty clear that we’re effectively talking about humans all along.

Of course, there’s not much doubt about the conclusion we’re heading to here, either: in effect the Blind Brain Theory (BBT). Scott argues that as products of evolution our minds are designed to deliver survival in the most efficient way possible. As a result they make do with a mere trickle of data and apply cunning heuristics that provide a model of the world which is quick and practical but misleading in certain important respects. In particular, our minds are unsuited to metacognition – thinking about thinking – and when we do apply our minds to themselves the darkness of those old heuristics breeds monsters: our sense of our selves as real, conscious agents and the hard problems of consciousness.

This seems to put Scott in a particular bind so far as xenophenomenology is concerned. The xenophenomenological strategy requires us to consider objectively what alien minds might be like; but Scott’s theory tells us we are radically incapable of doing so. If we are presented with any intelligent being, on his view those same old heuristics will kick in and tell us that the aliens are people who think much like us. This means his conclusion that Convergians would surely suffer the same mental limitations as us appears as merely another product of faulty heuristics, and the assumed truth of his conclusion undercuts the value of his evidence.

Are those heuristics really that dominant? It is undoubtedly true that through evolution the brains of mammals and other creatures took some short cuts, and quite a few survive into human cognition, including some we’re not generally aware of. That seems to short-change the human mind a bit though; in a way the whole point of it is that it isn’t the prisoner of instinct and habit. When evolution came up with the human brain, it took a sort of gamble; instead of equipping it with good fixed routines, it set it free to come up with new ones, and even over-ride old instincts. That gamble paid off, of course, and it leaves us uniquely able to identify and overcome our own limitations.

If it were true that our views of human conscious identity were built in by the quirks of our heuristics, surely those views would be universal; but they don’t seem to be. Scott suggests that, for example, the two realms of sky and earth naturally give rise to a sort of dualism, and the lack of visible detail in the distant heavens predisposes Convergians (or us) to see it as pure and spiritual. I don’t know about that as a generalisation across human cultures (didn’t the Greeks, for one thing, have three main realms, with the sea as the third?). More to the point, it’s not clear to me that modern western ways of framing the problems of the human mind are universal. Ancient Egyptians divided personhood into several souls, not just one. I’ve been told that in Hindu thought the question of dualism simply never arises. In Shinto the line between the living and the material is not drawn in quite the Western way. In Buddhism human consciousness and personhood have been taken to be illusions for many centuries. Even in the West, I don’t think the concept of consciousness as we now debate it goes back very far at all – probably no earlier than the nineteenth century, with a real boost in the mid-twentieth (in Italian and French I believe one word has to do duty for both ‘consciousness’ and ‘conscience’, although we mustn’t read too much into that). If our heuristics condemn us to seeing our own conscious existence in a particular way, I wouldn’t have expected that much variation.

Of course there’s a difference between what vividly seems true and what careful science tells us is true; indeed if the latter didn’t reveal the limitations of our original ideas this whole discussion would be impossible. I don’t think Scott would disagree about that; and his claim that our cognitive limitations have influenced the way we understand things is entirely plausible. The question is whether that’s all there is to the problems of consciousness.

As Scott mentions here, we don’t just suffer misleading perceptions when thinking of ourselves; we also have dodgy and approximate impressions of physics. But those misperceptions were not Hard problems; no-one had ever really doubted that heavier things fell faster, for example. Galileo sorted several of these basic misperceptions out simply by being a better observer than anyone previously, and paying more careful attention. We’ve been paying careful attention to consciousness for some time now, and arguably it just gets worse.

In fairness that might rather short-change Scott’s detailed hypothesising about how the appearance of deep mystery might arise for Convergians; those, I think, are the places where xenophenomenology comes close to fulfilling its potential.

 

22 thoughts on “Scott’s Aliens return”

  1. A first trip through recent (2 or 3 centuries of) philosophy might seem to vindicate Scott’s view, considering zombies and secret sauce and the like. But I agree with Peter that this view should not be applied beyond our narrow recent attempts. I completely agree with the idea that our brains, as evolved, are not very good at seeing a global picture of the universe and how it all works. But I do believe we also have the broad generality we need to overcome that weakness. All well said, Peter.

    We humans seem to have a way of jumping out of the old molds when such action finally becomes absolutely necessary. I believe we can use that power to jump over the zombies and the idea that neural correlates of consciousness are something other than neural correlates of perception and model building. We will see that once the mechanisms are in place, by evolution or construction, then consciousness will follow.

  2. Thank you, Peter. That’s an awesome recapitulation of my larger view. I’m not sure I understand a couple of your criticisms of the case made in the article. You write:

    “This seems to put Scott in a particular bind so far as xenophenomenology is concerned. The xenophenomenological strategy requires us to consider objectively what alien minds might be like; but Scott’s theory tells us we are radically incapable of doing so. If we are presented with any intelligent being, on his view those same old heuristics will kick in and tell us that the aliens are people who think much like us. This means his conclusion that Convergians would surely suffer the same mental limitations as us appears as merely another product of faulty heuristics, and the assumed truth of his conclusion undercuts the value of his evidence.”

    But the whole point is that scientific cognition allows us to bootstrap out of this heuristic bind. We don’t know what their ‘minds’ are like, only that they would suffer a similar neglect structure, and so likely be saddled with similar cognitive binds. I’m not sure how this is any more controversial than saying they likely wouldn’t be sensitive to gamma or radio.

    “Are those heuristics really that dominant? It is undoubtedly true that through evolution the brains of mammals and other creatures took some short cuts, and quite a few survive into human cognition, including some we’re not generally aware of. That seems to short-change the human mind a bit though; in a way the whole point of it is that it isn’t the prisoner of instinct and habit. When evolution came up with the human brain, it took a sort of gamble; instead of equipping it with good fixed routines, it set it free to come up with new ones, and even over-ride old instincts. That gamble paid off, of course, and it leaves us uniquely able to identify and overcome our own limitations.”

    I agree with this. ‘Overcoming limitations’ is actually the lynchpin of alien philosophy of the soul: it is precisely because philosophical reflection involves the exaptation of existing capacities, that we can take ourselves to be a theoretical problem. It is precisely because this exaptation does not inherit the checks and balances built into more ancestral applications, that this theoretical problem becomes so impossibly hard. I’m saying that all such exaptations suffer inevitable vulnerabilities, blindspots. How could it be anything but a mixed bag? To argue otherwise is to argue for a biologically unprecedented capacity.

    Everybody assumes everybody else has run afoul of some kind of cognitive illusion–so it’s trivial to note that cognitive illusions abound when it comes to these matters. Illusions in visual cognition turn on heuristics, so it makes sense to assume philosophical cognitive illusions amount to the same. I’m providing a parsimonious way to understand a good chunk of them. The point of this paper wasn’t to have a decisive, knockdown answer at every juncture, but to show how you can eliminate the mysteries and still provide plausible explanations for a wide variety of perplexing phenomena.

    The challenge it poses is abductive. Explain more with less. Something wonky is going on: that much is clear to everyone.

    Alien philosophy lets us see why.

  3. Congrats on the publication, Scott! I think your perspective is a considerable enrichment to the landscape of philosophical debate on the subject, so it’s good to see that it’s getting some air time in a more ‘mainstream’ venue.

    I’ve yet to read the paper, but from a quick perusal, my concerns pretty much align with Peter’s: either we are actually radically cognitively closed with respect to metacognition—in this case, I’m not sure that xenophenomenology is going to be any help, since I’m not sure what confidence we could actually place in conclusions arrived at this way (if we can’t trust our own minds when it comes to contemplating themselves, why would we trust them any more in the contemplation of others?). Indeed, it’s hard to see, if this is the case, how one ever could justifiably and rationally come up with BBT in the first place!

    Or, we’re perhaps not so radically limited—as science has helped us overcome our cognitive biases with respect to many more ordinary kinds of illusions and false, but persistent impressions (e.g. the sun rotating around the Earth, the distinction between sub- and supralunar spheres, and so on), so it will eventually help us overcome our metacognitive limitations. But then, what we should do is simply to try and chip away at the problem bit by bit, eventually finding a completed scientific psychology giving us answers to all those questions that vex us at the moment (or explaining how those questions really just were confused nonsense in the first place).

    But I’m going to give the paper a more thorough read through, and perhaps return with some more well-informed comments…

  4. How bad is the problem, Scott? If scientific cognition lets us bootstrap out, it doesn’t seem so terrible. It looks to us as if heavier things fall faster: more careful, scientific observation shows that everything falls at the same speed (complicating factors aside). People might be surprised by that, but they accept it. In the case of consciousness it seems much, much worse. People are unable to step outside their misleading heuristics; they go on insisting that they are right and that the conflict with scientific observation is a problem, or that these matters are beyond the domain of science.

    Now, you have a good story to tell about how the Convergians end up as they do, and as I say, I probably haven’t given it its full value. But working out what aliens would be like is an exercise in imagination, and we rely here on the assumption that we have imagined all the possibilities. If the problem is as bad as you think, we’re radically unable to imagine any minds that aren’t the way we imagine ours to be. Maybe there are really other sets of heuristics that work just as well as ours and don’t cause the same problems (though they might cause others); maybe there are environments where Convergians evolve to use more data or even ‘enough’, whatever ‘enough’ amounts to. If so, we’ll never notice that possibility.

    There is a kind of circularity going on here which may not be vicious (nothing illogical about circularity) but it seems sort of disabling to the enterprise of what I’ve called xenophenomenology.

  5. Thanks, Jochen. I look forward to hearing your thoughts! Despite all my whining I found our last debate invaluable in a number of different ways.

    “I’ve yet to read the paper, but from a quick perusal, my concerns pretty much align with Peter’s: either we are actually radically cognitively closed with respect to metacognition—in this case, I’m not sure that xenophenomenology is going to be any help, since I’m not sure what confidence we could actually place in conclusions arrived at this way (if we can’t trust our own minds when it comes to contemplating themselves, why would we trust them any more in the contemplation of others?). Indeed, it’s hard to see, if this is the case, how one ever could justifiably and rationally come up with BBT in the first place!”

    The difficulty in formulating BBT falls out of BBT, which I take as a signature virtue of the approach. If I’m right, people should find my position intuitively outrageous. If I’m right, we should expect a good chunk of the population to be incapable of accepting BBT. What causes me dialectical worry, here, isn’t any vicious circularity (because there is none), but the ‘conspiracy theory’ structure of my position. It’s the combination of parsimony and comprehensiveness–which ‘Convergians’ are simply a vehicle to demonstrate–that convinces me I’m onto something very real with heuristic neglect.

    One can poke holes in any individual claim in innumerable ways. To genuinely answer the challenge of alien philosophy is to adduce an approach that can *explain more with less.* This is why I think this piece is probably the most devious thing I’ve ever written.

  6. Peter (4): “There is a kind of circularity going on here which may not be vicious (nothing illogical about circularity) but it seems sort of disabling to the enterprise of what I’ve called xenophenomenology.”

    Placing xenophenomenology (as opposed to xenophilosophy) front and centre does generate discomfort for my view, I admit, but only because it reframes the issue in a manner that I think the article deftly avoids. I can say aliens suffering a similar metacognitive and sociocognitive neglect structure will likely find their phenomenology inexplicable without imagining what that phenomenology is ‘like.’ (For me, this is a central insight, if not the way out of the maze: ignorance, unlike knowledge, is very easy to naturalize).

    But the important thing to remember, I think, is that my position is not mysterian (in either its honest or its computationalist guises). I’m not talking about ‘cognitive closure’ in any clunky, McGinnesque sense; I’m talking about heuristic neglect, the way certain, very powerful problem-solving modes turn exclusively on cues. This allows me to postulate an architecture of incapacities, some biologically fixed, some variable, and this provides the basis of my position’s explanatory scope.

    Eliminativism, when it’s given honest consideration, is generally rejected because it ‘throws the baby out with the bathwater,’ gets rid of the explananda. As a result, it simply has no horses to run in the abductive race that is the cornerstone of scientific theory. My approach, which can be thought of as a ‘critical eliminativism,’ turns the tables on the intentional realist, demanding they demonstrate their relative abductive credentials.

  7. Jochen,

    Indeed, it’s hard to see, if this is the case, how one ever could justifiably and rationally come up with BBT in the first place!

    Or, we’re perhaps not so radically limited

    If I can throw something in, I think it’s the leverage gained by admitting incapacity. Like the story of two men who both had exactly the same set of knowledge, but the second man knew he did not know everything and the first did not. So the second, by admitting his incapacity, knew one more thing than the first. It’s only when you admit you are radically limited that you become less radically limited.

    On a convergent species coming up with an incapacity theory, I think if internal introspective access was not just a probe but a ruler as well, what you’d get is the species determining how far introspective access can go by measuring it with that very same ruler. And so determining that their access plumbs their absolute depths, rather than, say, being a rather shallow probe. That measure of capacity would be on one side of the scales of judgement, but on the other side of the scales would be a historically growing number of stories and even carefully scrutinised observations of inner access failures (Anton’s syndrome, for example). Eventually, at some point in time, the scales tip in an individual.

  8. Rorty’s Antipodean aliens are the opposite:

    “Consider now how the Antipodeans would view ‘acts of imperfect apprehension.’ They would see them not as cloudy portions of the Mirror of Nature but as a result of learning a second-rate language…Antipodeans could offer noninferential reports of their own neural states, [not surprising] since it had been learned long since that psychophysiologists could train human subjects to report alpha-rhythms, as well as various other physiologically describable cortical states.”

  9. Scott, I’m currently reading through your paper (and wishing that more authors of philosophy had some grasp of narrative structure and construction, since despite its difficult subject matter, and in addition to being intellectually challenging, it also manages to be a fine read), so I’ll just jot down some notes here… But first:

    If I’m right, people should find my position intuitively outrageous. If I’m right, we should expect a good chunk of the population to be incapable of accepting BBT. […] To genuinely answer the challenge of alien philosophy is to adduce an approach that can *explain more with less.*

    These are somewhat dangerous positions to get yourself into. They read like an inoculation of your ideas against criticism: everybody being against it just proves you’re right; inability to come up with a better idea means yours should be accepted. But of course, everybody might be against it because it’s wrong; and an idea may be recognized as wrong without necessarily being able to supply a replacement. (But it seems you realize this.)

    Anyway, on to (perhaps) more substantive things. One issue I don’t think I’ve seen raised in this context before is the question of whether we actually do have some set of specialized, purpose-oriented tools with which we attempt to cognize the world (and ourselves). Taking computers as an analogy, there are many special computers uniquely suited to perform certain tasks; but there also are universal computers, capable of performing any task that can be performed by a computer at all.

    I think, or at least consider it to be plausible, that our cognitive tools are much more like a universal machine, than they are like a collection of special-purpose devices. So if that’s the case, then your narrative would have to be amended: we might have started out as a collection of special-purpose facilities, but at some point, some critical amount of complexity (to grossly gloss over all the relevant details), there’s a qualitative turning point, whereupon we develop reasoning tools that are, at least in principle, capable of handling any subject matter whatsoever. I think it’s very plausible that we’re in fact past this threshold—with language and mathematics, we have tools that are capable of simulating universal computation; and ultimately, working out how something works is just giving an algorithm—a structural model, or simulation of it.

    Of course, you may be completely unimpressed by that: even if we’re ‘universal reasoners’, in this sense, we’re still simply bounded by our finite nature, and the limited resources we have access to. The vast majority of even simple problems are completely out of the scope of being feasibly solved by such devices—consider just the traveling salesman problem: already for a few dozen cities, the problem becomes infeasible to solve for modern-day computers in a time comparable to something like the present age of the universe.
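The blow-up described here can be made concrete with a minimal brute-force sketch; the four-city distance matrix below is invented purely for illustration.

```python
import math
from itertools import permutations

def tour_length(order, dist):
    """Length of the closed tour visiting cities in the given order."""
    return sum(dist[order[i]][order[(i + 1) % len(order)]]
               for i in range(len(order)))

def brute_force_tsp(dist):
    """Check every ordering of the cities; return (best_order, best_length)."""
    n = len(dist)
    # Fix city 0 as the starting point to avoid counting rotations twice.
    candidates = ((0,) + p for p in permutations(range(1, n)))
    best = min(candidates, key=lambda order: tour_length(order, dist))
    return best, tour_length(best, dist)

# A four-city instance is trivial: only 3! = 6 orderings to check.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]
order, length = brute_force_tsp(dist)

# But the number of candidate tours grows factorially: at 30 cities there
# are already 29!/2 distinct tours (about 4.4e30), far beyond enumeration.
tours_30 = math.factorial(29) // 2
```

Each added city multiplies the number of tours rather than merely adding to it, which is why ‘completely transparent in principle’ and ‘feasibly solvable in practice’ come apart so quickly.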

    But that’s just a quantitative difference; I’m not sure that will suffice for the sort of argument you’re raising. We can solve simple instances of the problem—say, with four cities—without any difficulty, and even though we can’t actually solve the problem in more complex cases, how one would go about solving it is completely transparent. While at least to all appearances, not even a simple instance of the problems thrown up by conscious experience seems to have been solved, and the way one might go about solving them appears completely opaque.

    So it’s not clear to me that your arguments actually work, if one supposes universal reasoning capacities.

    This also basically harkens back to the question of whether you’re meaning to propose that the problems of metacognition are in principle unsolvable, or whether they’re just difficult—i.e. whether we’re necessarily confused about our own nature, thus conjuring up phantasms of ‘qualia’ or ‘intentionality’, or whether we’re eventually going to get around these limitations. I used to attribute the former stance to you, but I’m no longer sure if I was right in that—but then, I’m not sure in which sense the latter is really ‘eliminativist’.

    After all, we haven’t eliminated the apparent distinction between sub- and supralunar spheres—we’ve merely explained how it comes about. Moreover, it’s still a very useful way of conceptualizing the world, at least as everyday dealings are concerned—we talk about ‘down’ as if it picks out a specific direction, for instance. Our knowledge of the true nature of space and our place within it hasn’t eliminated ‘down’ as a meaningful category. Everybody’s going to understand what I’m talking about when I say, ‘I fell down the stairs’.

    So if qualia and intentionality eventually come to occupy a place in our parlance similar to ‘down’, then I think we haven’t eliminated anything—rather, we’ve merely found an explanation, which is something I’d enthusiastically welcome. We’re not deceived or under some grand illusion when we talk about things falling down, any more than we are when we talk about a certain assortment of molecules as a dinner table, in order to eschew the rather complicated description in terms of individual molecule positions and velocities. Intentionality and qualia may come to play the same role: once, before we understood their nature, they seemed mysterious, brute elements of the world; now that we do, they just occupy the same space of useful coarse-grainings and approximations as ‘down’ and dinner tables do.

    However, with the reference to anosognosia, you seem to be implying, again, the other interpretation: whatever a patient exhibiting blindness denial thinks they see, is simply confabulation; these things will not play any useful future role after their illusory nature has been demonstrated. They have to be tossed to the flames wholesale.

    It’s this sort of interpretation that I’ve in the past argued to be what David Albert calls ‘cognitively unstable’: if it’s right, then the evidence we seem to have for it isn’t actually evidence for it. This is also, I think, what you allude to when you speak about vicious circularity.

    I’m not sure I should really go down that road again, but I think that in the intervening time, I’ve at least found one source of misunderstanding between the two of us. So I’ll try to tease that out, and if it’s to no avail, we can abandon this for more productive debates.

    So, first of all, I don’t think your position is circular. But I do think it’s self-undermining to a certain degree (at least, on the strongest interpretation above: we’re massively deluded, and the concepts we presently use to coordinatize the mental realm won’t even feature as useful shortcuts in some hypothetical completed scientific psychology). However, I think you’ve in the past taken me to make a different argument than I intend to make.

    That argument roughly runs as follows, in short cartoon form: BBT proposes that the concept of ‘aboutness’ is erroneous; BBT itself is about this proposal; hence, BBT pulls the rug out from under itself. This might appear plausible on first sight, but it’s circular: there’s an assumption that the concept of ‘being about’, as usually understood, is necessary in order for BBT to have any content, or express any sort of fact, or what have you; but that’s just the sort of folk-psychological belief that would be invalidated on some completed scientific psychology.

    But that’s not the argument I’m making (or at least, not the argument I’m intending to make). Rather, it’s something like the following: let’s say that there’s some sort of matters of fact, which can either obtain, or not—x. We don’t have direct access to the facts of the world; we need some means of expressing them. So, for instance, I could express facts about my dinner table in the usual, everyday way—say, ‘my dinner table weighs 25kg’. This asserts a certain fact about the world, x, in everyday parlance—let’s write E(x) for that.

    Now, on a completed scientific theory, there’s an equivalent statement, expressing that matter of fact in terms of, say, molecules, or quantum fields, or superstrings, or whatever turns out to be the right fundamental-level description, presuming such a thing exists. Call that S(x).

    E(x) is reducible to S(x); and it’s only because of this reducibility that we can assert E(x) as being true, as being a useful shorthand for the presumably unmanageably complex description S(x). The same goes for descriptions including things like ‘down’, and so on: we can get away with using the everyday mode of talking because it’s ultimately grounded in the proper fundamental description.

    That’s also how it would be with BBT on the weaker interpretation: where intentions, qualia, mental content etc. are merely convenient shorthand to be eventually reduced to the parlance of the completed scientific psychology. In this sense, we could also understand BBT itself, since it’s unavoidably a description of the form E(b), where b is the factual matter expressed by BBT, namely, the idea that we’re unable to generate sufficient insight into our own inner workings in order to generate the full description S(b), which wouldn’t include things like intentions, qualia and so on, any more than the fundamental theory of physics contains dinner tables. We could use E(b) in place of S(b), the same way we use ‘my dinner table weighs 25kg’ in place of the full scientific description of this fact.

    But that’s only because E(b) is reducible to S(b)—that is, because, like dinner tables, the concepts used have a well-defined meaning in terms of S. Otherwise, if E(b) is a story such that it does not reduce to some S(b), there’s simply no meaning to it—E would be like the confabulations of the blindness denial patient, without any corresponding fundamental-level account. That doesn’t mean that it’s false, of course—after all, the blindness denial patient also could come up with a perfectly accurate description of the room he’s in by accident. But it means that E(b), if b holds, can’t give us any reason for believing in it; S(b) could, but if we’re forever barred from discovering this description, we’ll just never know.

    It’s like the Boltzmann brain idea: we observe our surroundings, coming to a certain theory of the world, including it being a statistical fluctuation; but if that theory is true, then it’s enormously more likely that we’re just disembodied brains, briefly fluctuating into existence; but if we are such brains, we’ve never made the observations leading us to come up with the theory of statistical fluctuations in the first place.

    In the same sense, on the strongest reading, the narrative of BBT, E(b), provides an argument which, if true, entails that arguments of the form E(x) should not convince us of anything (just like the descriptions of the room from a blindness denial patient shouldn’t convince us of the room being any particular way).

    Anyway. Despite this criticism, I think it’s a good paper, and a valuable addition to broaden the discourse. Even if I think it’s ultimately wrong (I haven’t fully decided), it’s not stupid wrong, like most positions on the subject are, and it may even be usefully wrong, in the same sense that Newtonian mechanics (or indeed, Aristotelian physics) is.

  10. Pingback: Framing “On Alien Philosophy”… | Three Pound Brain

  11. Jochen (9) – Classic wall of text, and a lovely one! Let me divide my responses by dividing your points:

    “I think, or at least consider it to be plausible, that our cognitive tools are much more like a universal machine, than they are like a collection of special-purpose devices. So if that’s the case, then your narrative would have to be amended: we might have started out as a collection of special-purpose facilities, but at some point, some critical amount of complexity (to grossly gloss over all the relevant details), there’s a qualitative turning point, whereupon we develop reasoning tools that are, at least in principle, capable of handling any subject matter whatsoever. I think it’s very plausible that we’re in fact past this threshold—with language and mathematics, we have tools that are capable of simulating universal computation; and ultimately, working out how something works is just giving an algorithm—a structural model, or simulation of it.”

    You could be right. But for the life of me, I have no idea what a ‘universal problem solver’ would look like, and I don’t think anyone else does either. So long as we’re talking physical systems, we’re talking finite systems possessing finite access and finite resources. If you take this as your starting point, then the discussion should be *generalizability*, not universality. I agree that human cognitive capacities are generalizable (applicable to novel problems) in a way nonhuman cognitive capacities are not. This strikes me as a far more modest approach to take.

  12. Jochen (9.2) – “So if qualia and intentionality eventually come to occupy a place in our parlance similar to ‘down’, then I think we haven’t eliminated anything—rather, we’ve merely found an explanation, which is something I’d enthusiastically welcome. We’re not deceived or under some grand illusion when we talk about things falling down, anymore than we are when we talk about a certain assortment of molecules as a dinner table, in order to eschew the rather complicated description in terms of individual molecule positions and velocities. Intentionality and qualia may come to play the same role: once, before we understood their nature, they seemed mysterious, brute elements of the world; now that we do, they just occupy the same space of useful coarse-grainings and approximations as ‘down’ and dinner tables do.”

    There’s no one-size-fits-all for the hacks we use, but they do come in families, I think. This is why, for instance, I think Dennett’s analogy between ‘belief’ and ‘centre of gravity’ is so deceptive (see https://rsbakker.wordpress.com/2016/12/05/real-systems/ ).

    ‘Elimination’ for me always refers to theoretical contexts, our attempts to solve things in general, and as it turns out intentional cognition cannot theoretically solve the nature of intentional cognition. I consider myself an intentional eliminativist because I don’t think intentional posits do anything but scuttle our attempts to theoretically solve for cognition.

    We’ll continue to use the words to cut explanatory corners, to troubleshoot everyday practical problems, and as handy hacks in various scientific contexts, but, as artifacts of our ancestral needs, they will be gradually abandoned as those ancestral ecologies vanish (hopefully in the distant future).

  13. Jochen (9.3) – “That argument roughly runs as follows, in short cartoon form: BBT proposes that the concept of ‘aboutness’ is erroneous; BBT itself is about this proposal; hence, BBT pulls the rug out from under itself. This might appear plausible on first sight, but it’s circular: there’s an assumption that the concept of ‘being about’, as usually understood, is necessary in order for BBT to have any content, or express any sort of fact, or what have you; but that’s just the sort of folk-psychological belief that would be invalidated on some completed scientific psychology.”

    >Yes: this is the way the tu quoque usually runs. There is no such thing as aboutness, but it is the case that humans need to track and communicate environmental relations while remaining utterly blind to their nature. The term ‘about’ is how we do this. Aboutness, on the other hand, is philosophy’s hopeless attempt to understand the application of this kluge, this way to game our physical relation to our environment without actually cognizing it. If you begin by assuming that applications of ‘about’ necessarily imply aboutness, then I have to be running afoul of the tu quoque somewhere, somehow. But then this assumes that ‘about’ is not just a kluge, not just a way to get a handle on otherwise intractable complexities, but a special type of inexplicable relation. Since this is the very issue to be decided between the eliminativist and the intentionalist, it quite clearly begs the question.

  14. Jochen (9.4) – “But that’s only because E(b) is reducible to S(b)—that is, because, like dinner tables, the concepts used have a well-defined meaning in terms of S. Otherwise, if E(b) is a story such that it does not reduce to some S(b), there’s simply no meaning to it—E would be like the confabulations of the blindness denial patient, without any corresponding fundamental-level account. That doesn’t mean that it’s false, of course—after all, the blindness denial patient also could come up with a perfectly accurate description of the room he’s in by accident. But it means that E(b), if b holds, can’t give us any reason for believing in it; S(b) could, but if we’re forever barred from discovering this description, we’ll just never know.”

    >A few points. My commitment to reduction is merely methodological: I’m a big fan of Wimsatt (no surprise) and the messiness of it all. I would argue that once you understand the high-dimensional biology of cognition, you understand its contingency, and more importantly, the impossibility of ‘perfect knowledge machines,’ or perfect translations between ‘levels of description’ (whatever they amount to), even when framed in nonintentional idioms. Since the communication of knowledge generally elides these contingencies, we find ourselves continually tempted by the possibility of perfect systematicity, and the ability to give ‘complete descriptions’ and translate claims across all angles of high-dimensional epistemic interaction.

    So your argument builds in a number of assumptions I just don’t share. The sciences abound with hacks, with solutions turning on cue correlation, and with the rise of AI, this is only going to intensify. We are surrounded by astronomical complexities, which is why human cognition is primarily geared to cue detection, isolating ‘handles,’ bits of information correlated to inaccessible systems (like other humans, or ourselves), that allow multifarious forms of engagement with those systems, despite our insensitivity to their high dimensional functions.

    Now, does a theory of cognition describing the operation of some given intentional handle (Eb), such as ‘thought,’ say, count as a ‘reduction’ of thought (translation into Sb), or does it explain thought away?

    One of the reasons I’ve been so obsessed with BBT (or its extension, HNT (heuristic neglect theory)) has to do with its explanatory potential: there is a whole bestiary of cognitive modes (we now know), and it provides a common framework for understanding (at least something of) the differences between them.

    The irony, Jochen, is that although I disagree with the way you frame the problem, I do think the problem you point to is a very real one. The difference is that where your frame casts the problem as a theoretical one for BBT, I see the problem as one that is *revealed by BBT*, a way to understand, for the first time, why the ‘neuroscientific explananda problem’ is so devilishly difficult for beings such as us. Once we (finally!) set aside our attempts to use intentional cognition to theoretically solve intentional cognition (as opposed to gerrymander it into other kinds of cue-correlative tools), we’re still stranded with the problem of ‘function fixing,’ understanding what does what–especially in the context of everyday life. It seems to me that all such functions proposed must begin their life as massively underdetermined, and therefore dialectically unstable.

  15. Jochen,

    That doesn’t mean that it’s false, of course—after all, the blindness denial patient also could come up with a perfectly accurate description of the room he’s in by accident.

    To me that’s the key. Measure whether the patient’s description aligns with the room. If I’m reading you right, if the patient’s description aligns with the room a large percentage of the time, there is something to believe there.

    But it means that E(b), if b holds, can’t give us any reason for believing in it; S(b) could, but if we’re forever barred from discovering this description, we’ll just never know.

    It seems you’re saying that discrediting the speaker (to the point that they are likened to a blindness denial patient) means there’s nothing that can be believed of what they say. But you can discredit the speaker and then use measurement to determine how well aligned with the situation the speaker’s claims are, as the method by which they regain credibility. You can empty the cup without it remaining utterly empty forever.

  16. Scott:

    Classic wall of text, and a lovely one!

    Yeah, that came out longer than I thought it would while jotting it down… Anyway, thanks for wading through it, and giving your replies! Unfortunately, I’m rather busy at the moment, so I can’t really work out a good response—but I’m not sure that’s needed, anyway; I probably won’t convert to BBT any time soon, but I think it’s a worthwhile idea to develop, and I’m happy it has found such a capable and eloquent defender. So, keep up the good work!

    Callan:

    If I’m reading you right, if the patients description aligns with the room a large percentage of the time, there is something to believe there.

    It would also basically be miraculous—somebody apparently possessing knowledge (if there’s a truly significant agreement between description and the room) without there being any conceivable justification for it.

    It’s not necessarily that I think the idea is wrong, but it seems (to me, anyway) that if it’s right, then we lack justification to believe it.

  17. Jochen,

    Then what comes next after not believing?

    I remember an account of scientists in Britain tearing apart a platypus specimen sent back from early colonial Australia, searching for the stitches of a counterfeit. They couldn’t believe it.

    There have to be steps after not believing, otherwise those scientists would be platypus deniers to this day.

    I’m not saying you’d have to have the same steps as everyone else, you might have your own method that suits you. But if you thought the specimen counterfeit to begin with, how would you avoid being a platypus denier? You don’t believe the specimen is real – what comes next?

  18. Callan:

    I remember an account of scientists in Britain tearing apart a platypus specimen sent back from early colonial Australia, searching for the stitches of a counterfeit. They couldn’t believe it.

    Well, that’s a bit of a false analogy, though. First of all, it assumes that what’s being presented is actually the truth; second, it insinuates that I don’t believe it just ’cause. Both are misleading. Take for instance the case of the Piltdown Man, or Barnum’s Fiji Mermaid, or even the recent kerfuffle about superluminal neutrinos: in each of these cases, skepticism was entirely warranted, as the first two turned out to be hoaxes, and the third one an honest mistake. Just believing something should never be the default stance.

    Second, I’ve provided what seem to me good reasons for my disbelief: I don’t think we’re the bundle of heuristic tools Scott takes us to be, at least not with respect to ‘system two’-style conscious, laborious, deliberate cognition (I’m more easily persuaded regarding ‘system one’). I’m also not sure precisely how neglect and metacognitive opacity equate to apparent phenomenology, or a belief therein.

    But mostly, as I said, the proposal seems cognitively unstable to me: think about somebody coming up with a theory of vision, based on what they (think they) ‘see’, and concluding that they are, in fact, afflicted with blindsight; then of course, all the visual evidence leading them to formulate that theory is null and void, and the conclusion can’t be justified. (The analogy is imperfect, since sightedness isn’t a part of our theory-building apparatus the way that intentionality, which BBT tries to do away with, is, but I hope it gets the point across.)

    This doesn’t directly imply that the conclusion can’t be right—it may both be the case that the person is sighted, and mistaken, or suffers from blindsight. But in the latter case, they could only be right by accident—the theory is built on visual evidence, but they don’t (and can’t) actually have visual evidence.

    As to what comes ‘after’ not believing: well, the same as what everybody else does when they reject a particular model of the mind (and we all reject many more than we accept, of course)—try to find a better one.

  19. Jochen,

    I raised the scientists example because it shows how we can encounter something, think it false and not believe it, when it is true.

    How to deal with that? Because just continuing to not believe X and not treat it as true doesn’t work, does it? Not overall, anyway.

    The point you raised previously is that discrediting a speaker (to the point that they are like a blindness denial patient) means, if I understood you rightly, that we can’t believe anything that speaker says about themselves. But surely there’s a third option beyond A: believe everything a speaker self-reports, and B: don’t believe anything the speaker says.

    But mostly, as I said, the proposal seems cognitively unstable to me: think about somebody coming up with a theory of vision, based on what they (think they) ‘see’, and concluding that they are, in fact, afflicted with blindsight; then of course, all the visual evidence leading them to formulate that theory is null and void, and the conclusion can’t be justified.

    I think I get what you’re saying – it’s that a form of knowledge cannot prove its own form untrue. It’d have to be true in order to prove anything untrue – and to prove itself untrue, it’d have to be true… and the programmers in the audience are getting nervous tics around their eyes about now!

    I think I agree, but what about a contrast – if you see a wall, but when you use your tactile senses by reaching forward you feel absolutely nothing but thin air, one of these senses is not working, surely?

    Is there anything outside of intentionality that can measure things?

    As to what comes ‘after’ not believing: well, the same as what everybody else does when they reject a particular model of the mind (and we all reject many more than we accept, of course)—try to find a better one.

    But it’s not just a model involved. Instead of a platypus on a table in front of us, we have long texts describing an animal in another room (we can’t have the animal in front of us in this case). Those long texts aren’t just a model to discard, because the animal is still there whether we discard the text or not – treating the text as just a discardable model doesn’t work, surely? Because the text might just align with the animal – and in that case, if you’ve discarded it as just a model, that doesn’t work out.

    I don’t want to shoot myself in the foot, but of course the fact that there’s some chance the text aligns with the animal in the other room doesn’t mean you have to believe it definitely will. But the animal is out there – the evidence is out there – and it will eventually sort some texts as fairly aligned with the animal and some not. Since evidence will do that, the question is how one is discarding texts – is the discarding based on evidence?

  20. Peter

    “In Buddhism human consciousness and personhood have been taken to be illusions for many centuries. ”

    I’m not so sure this is true. The idea of self is rejected, but consciousness is “rejected” only in the sense that it shouldn’t be distinguished from the body (as in the Hinduism from which it originates). The idea of mental phenomena is as real as the material – in fact many meditations will focus on the material as being illusory, and only the mental as real.

    JBD

  21. First,
    my apologies to everyone (Peter first and foremost) for disappearing. Life keeps sidetracking me, and will not stop. Thus, I’ll keep this unusually short and sweet.

    Second, an aside for Jochen (re the travelling salesman): I’m guessing you’ll appreciate this paper, in case you haven’t come across it already: http://bi.snu.ac.kr/Publications/Journals/International/BioSystems_SYSHIN04.pdf
    In this context, it weakly suggests that ad-hoc, specialised (and biology-based) “solutions” can be surprisingly powerful. In turn, that’s a weak argument in favour of the “bag of tricks” view that Scott favours (as do I, with significant caveats).
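    As a toy illustration of the “bag of tricks” point (my own sketch, not taken from the linked paper): a greedy nearest-neighbour heuristic for the travelling salesman problem ignores global structure entirely, yet reliably produces far shorter tours than chance. A cheap, special-purpose hack, with no guarantee of optimality, still earns its keep:

```python
import math
import random

def nearest_neighbour_tour(points):
    """Greedy heuristic: always visit the closest unvisited city next."""
    unvisited = list(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(points, tour):
    """Total length of the closed tour through the given points."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(50)]
greedy = tour_length(cities, nearest_neighbour_tour(cities))
naive = tour_length(cities, list(range(50)))
print(greedy < naive)  # the cheap heuristic beats an arbitrary ordering
```

    No claim that this is how biological cognition works, of course; it just shows that a myopic heuristic can massively outperform having no heuristic at all, at negligible cost.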

    Scott: congrats for the paper! Really glad you went for it and got it published. I still have to read it, though :-(. Shame it wasn’t included in the whole “Illusionism” issue (which I also have to finish reading, aarrgh!).
    As you know, I think I broadly agree with you, but I do see that Jochen is onto something as well. Perhaps one way to rehash what I think Jochen is trying to say is to point out the following. You mention “nonintentional idioms” (#14), eliciting my immediate reaction: “isn’t that an oxymoron?” – i.e. all idioms are by definition descriptions of something else, so there is no such thing as a nonintentional idiom (I think you meant it in a much weaker sense, so I’m not criticising your response here!). This isn’t (intended as) an argument against the “‘about’ is just a kluge” stance: I’m fully convinced it is just that, i.e. our way to track/cognise/use (or even better: abstract and re-use) the regularities that we encounter in the real world.

