Scott’s Aliens

Scott Bakker has taken an interesting new approach to his Blind Brain Theory (BBT): in two posts on his blog he considers what kind of consciousness aliens could have, and concludes that the process of evolution would put them into the same hole where, on his view, we find ourselves.

BBT, in sketchy summary, says that we have only a starvation diet of information about the cornucopia that really surrounds us; but the limitations of our sources and cognitive equipment mean we never realise it. To us it looks as if we’re fully informed, and the glitches of the limited heuristics we use to cobble together a picture of the world, when turned on ourselves in particular, look to us like real features. Our mental equipment was never designed for self-examination and attempting metacognition with it generates monsters; our sense of personhood, agency, and much about our consciousness comes from the deficits in our informational resources and processes.

Scott begins his first post by explaining his own journey from belief in intentionalism to eliminativist scepticism about it, and sternly admonishes those of us still labouring in intentionalist error for our failure to produce a positive account of how human minds could have real intentionality.

What about aliens – Scott calls the alien players in his drama ‘Thespians’ – could they be any better off than we are? Evolution would have equipped them with senses designed to identify food items, predators, mates, and so on; there would be no reason for them to have mental or sensory modules designed to understand the motion of planets or stars, and turning their senses on their own planet would surely tell them, incorrectly, that it was motionless. Scott points out that Aristotle’s argument against the movement of the Earth is rather good: if the Earth were moving, we should see shifts in the relative position of the stars, just as the relative position of objects in a landscape shifts when we view them from the window of a moving train; yet the stars remain precisely fixed. The reasoning is sound; Aristotle simply did not know and could not imagine the mind-numbingly vast distances that make the effect invisibly small to unaided observation. The unrealised lack of information led Aristotle into misapprehension, and it would surely do the same for the Thespians; a nice warm-up for the main argument.
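
Aristotle’s reasoning can be made concrete with a little arithmetic. The following is only an illustrative sketch (the distance figure for Alpha Centauri is the standard one, but the script and its names are mine), showing how far below naked-eye resolution the annual parallax of even the nearest star falls:

```python
import math

AU = 1.496e11            # Earth-Sun distance, metres
LIGHT_YEAR = 9.461e15    # metres

def parallax_arcsec(distance_m):
    """Annual parallax half-angle, in arcseconds, for a star at the given distance."""
    return math.degrees(math.atan(AU / distance_m)) * 3600

# Alpha Centauri, the nearest star system, is about 4.37 light-years away
p = parallax_arcsec(4.37 * LIGHT_YEAR)
naked_eye_limit = 60.0   # rough naked-eye angular resolution, ~1 arcminute

print(f"parallax: {p:.2f} arcsec")                     # ≈ 0.75 arcsec
print(f"times finer than the eye resolves: {naked_eye_limit / p:.0f}")
```

Even the nearest star’s parallax comes out at under an arcsecond, dozens of times finer than the eye can resolve; stellar parallax was not actually measured until Bessel managed it with instruments in 1838. Given only unaided senses, Aristotle’s conclusion was the rational one.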

Now it’s a reasonable assumption that the Thespians would be social animals, and they would need to be able to understand each other. They’d get good at what is often somewhat misleadingly called theory of mind; they’d attribute motives and so on to each other and read each other’s behaviour in a fair bit of depth. Of course they would have no direct access to other Thespians’ actual inner workings. What happens when they turn their capacity for understanding other people on themselves? In Scott’s view, plausibly enough, they end up with quite a good practical understanding whose origins are completely obscure to them; the lashed-up mechanisms that supply the understanding are neither available to conscious examination nor, in fact, even visible.

This is likely enough, and in fact doesn’t even require us to think of higher cognitive faculties. How do we track a ball flying through the air so we can catch it? Most people would be hard put to describe what the brain does to achieve that, though in practice we do it quite well. In fact, those who could put down an algorithm would most likely get it wrong too, because it turns out the brain doesn’t use the optimal method: it uses a quick and easy one that works OK in practice but doesn’t get your hand to the right place as quickly as it could.
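
The quick-and-easy method usually cited here is the ‘gaze heuristic’ (or optical acceleration cancellation): run so that the tangent of your gaze angle to the ball rises at a constant rate. A minimal sketch of the geometric fact the heuristic exploits, with launch parameters invented purely for illustration: the rate of rise is constant only for an observer standing exactly where the ball will land.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def tan_elevation_series(observer_x, vx=15.0, vy=20.0, dt=0.01):
    """Tangent of the gaze elevation angle for a fixed observer watching the ball."""
    T = 2 * vy / G                          # total flight time
    series = []
    for i in range(1, int(T / dt)):
        t = i * dt
        x = vx * t                          # ball's horizontal position
        y = vy * t - 0.5 * G * t * t        # ball's height
        series.append(y / (observer_x - x))
    return series

def max_curvature(series):
    """Largest second difference: zero means tan(angle) rises at a constant rate."""
    return max(abs(series[i] - 2 * series[i - 1] + series[i - 2])
               for i in range(2, len(series)))

landing_x = 15.0 * (2 * 20.0 / G)           # where the ball comes down, ~61.2 m
for label, pos in [("at landing point", landing_x),
                   ("5 m too deep", landing_x + 5),
                   ("15 m too deep", landing_x + 15)]:
    print(f"{label}: curvature = {max_curvature(tan_elevation_series(pos)):.6f}")
```

Stand too deep and the angle’s rise decelerates; stand too shallow and it accelerates (the ball will sail overhead). A fielder who simply adjusts their run to cancel that acceleration arrives at the catch without ever computing the trajectory – effective, but slower to the spot than solving the physics would be.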

For Scott all this leads to a gloomy conclusion: much of our view about what we are and about our mental capacities is really attributable to systematic error, even to something we could regard as a cognitive deficit or disease. He cogently suggests how dualism and other errors might arise from our situation.

I think the Thespian account is the most accessible and persuasive account Scott has given to date of his view, and it perhaps allows us to situate it better than before. I think the scope of the disaster is a little less than Scott supposes, in two ways. First, he doesn’t deny that routine intentionality actually works at a practical level, and I think he would agree we can even hope to give a working-level description of how that goes. My own view is that it’s all a grand extension of our capacity for recognition (and I was more encouraged than not by my recent friendly disagreement with Marcus Perlman over on Aeon Ideas; I think his use of the term ‘iconic’ is potentially misleading, but in essence I think the views he describes are right and enlightening), but people here have heard all that many times. Whether I’m right or not, we probably agree that some practical account of how the human mind gets its work done is possible.

Second, on a higher level, it’s not completely hopeless. We are indeed prone to dreadful errors and to illusions about how our minds work that cannot easily be banished. But we kind of knew that. We weren’t really struggling to understand how dualism could possibly be wrong, or why it seemed so plausible. We don’t have to resort to those flawed heuristics; we can take our pure basic understanding and try again, either through some higher meta-meta-cognition (careful) or by simply standing aside and looking at the thing from a scientific third-person point of view. Aristotle was wrong, but we got it right in the end, and why shouldn’t Scott’s own view be part of getting it righter about the mind? I don’t think he would disagree on that either (he’ll probably tell us); but he feels his conclusions have disastrous implications for our sense of what we are.

Here we strike something that came up in our recent discussion of free will and the difference between determinists and compatibilists. It may be more a difference of temperament than belief. People like me say, OK, no, we don’t have the magic abilities we looked to have, so let’s give those terms a more sensible interpretation and go merrily on our way. The determinists, the eliminativists, agree that the magic has gone – in fact they insist – but they sit down by the roadside, throw ashes on their heads, and mourn it. They share with the naive, the libertarians, and the believers in a magic power of intentionality, the idea that something essential and basically human is lost when we move on in this way. Perhaps people like me came in to have the magic explained and are happy to see the conjuring tricks set out; others wanted the magic explained and for it to remain magic?

13 thoughts on “Scott’s Aliens”

  1. @Peter: But as a determinist you would agree with those that say the determinists were always going to sit by the roadside and couldn’t do otherwise?

    I guess to me the determinism question seems too low level, especially in our reality where things seem much weirder than atoms colliding in billiard ball fashion. I realize this runs into the supposed “indeterminism” horn but I don’t think anyone would take it seriously that our consistency is a kind of Boltzmann brain situation.

    BBT, on the other hand, seems sufficiently high-level to be disturbing, though I’m not sure it’s going to be as terrible as Scott seems to think. But it is waiting for confirmation from findings in neuroscience, which seems the right level at which to think, unless we find some more support for Orch-OR or the other quantum-consciousness theories?

  2. The BBT fails to answer the subtitle of this blog.

    If we are incapable of really understanding our own workings, isn’t it possible that part of our blindness is the inability to picture a third ground between determinism and randomness?

    Once you claim that conclusions based on metacognition are much like blind people arguing about the perception of color, the whole discussion stops, including the legitimacy of his own ability to argue his thesis.

  3. Given how ornery my style can be, I always feel embarrassed by the generosity of your own, Peter. I try to take you as my example–I really do! But I think I spent too much time reading Nietzsche at too tender a stage in my intellectual development–or something!

    “We don’t have to resort to those flawed heuristics; we can take our pure basic understanding and try again, either through some higher meta-meta-cognition (careful) or by simply standing aside and looking at the thing from a scientific third-person point of view. Aristotle was wrong, but we got it right in the end, and why shouldn’t Scott’s own view be part of getting it righter about the mind? I don’t think he would disagree on that either (he’ll probably tell us); but he feels his conclusions have disastrous implications for our sense of what we are.”

    I agree, with caveats. The primary reason I keep flogging the view is that I think it would allow cognitive science to move past all the interminable debate regarding representation, mental functions, phenomenology, and so on. I think blind brain theory provides a very parsimonious way to square the circle of intentionality and to finally–finally–move on. So for example, how are (purported) mental functions related to neurobiology? Heuristically (and how else could they be related, given that they neglect neurobiology?). Get rid of the ‘mental,’ and simply talk about the psychological, where ‘psychology’ is understood as the science of exapting folk idioms to regimented, experimental contexts. Stop insisting that these ‘functions’ cut nature at some spooky joints, and get to work figuring out their scope of application outside the laboratory. Is the tool peculiar to the lab, or can it be generalized?

    The list goes on and on.

    Culturally, though, I’m not nearly so sanguine. To give but one example: Blind brain theory argues that our intuitive understanding of self and other, aside from consisting of correlative kluges, is the adaptive product of what I call ‘shallow information environments.’ They function by taking tremendous amounts of background information for granted. This means:

    1) that we should look at the technical transformation of our environments as an unravelling of all the work evolution (natural and cultural) has put into overcoming the Frame Problem for humans. All the sleeping dogs are being kicked awake.

    2) that we should regard the proliferation of ‘deep information’–the deliverances of the cognitive sciences–as a kind of potential ‘socio-cognitive pollution.’ The reason causal considerations scotch our intentional cognition, BBT claims, is simply that intentional cognition amounts to a way to solve *absent causal information.* It only functions smoothly when we neglect information pertaining to what, in the highest dimensional sense, we happen to be.

    But this is a different set of arguments from the ones I lay out in Alien Philosophy. I just wanted to show that my pessimism is far from unwarranted, even though, as you say, Peter, it’s not warranted by the conclusions drawn in the piece.

  4. Micha (2): “Once you claim that conclusions based on metacognition are much like blind people arguing about the perception of color, the whole discussion stops, including the legitimacy of his own ability to argue his thesis.”

    I’m not sure how this follows, short of assuming that metacognition is somehow all or nothing. This certainly isn’t my position.

  5. Scott, if you were doing the neuroscience of color and showed the linkage of light wavelengths, retinal activation, V1 etc. for the various colors, you would have no problem with either the scientific details or intentional descriptions of “blue”, “red” and “green”. However, the same does not hold for “desire” and “belief” if the same scientific details become available to you?

    Why is intentionality not just another function we have not scientifically mapped in the brain yet?

  6. Pingback: Alienating Philosophies | Three Pound Brain

  7. VicP (5): “Why is intentionality not just another function we have not scientifically mapped in the brain yet?”

    Basically, because there’s no such thing–no such thing as ‘intentional functions’ (such as those Sellarsian normativists, for instance, are apt to posit). There’s nothing to map. Intentional cognition is real, on the other hand, and that’s being mapped as we speak.

  8. Scott (7), If intentional states were emotional states or attitudinal states that were triggered by the higher brain into a more fundamental area, module of the brain or old brain, would you still stand by your statement?

  9. Still, you are arguing that many of our intuitions are wrong because we lack the equipment to draw proper conclusions about how our minds work.

    I am saying that once you posit that our minds weren’t designed to properly analyze how we think, we really can’t have this conversation. You could be right, you could be wrong; but if you are right we lack the equipment to ever be sure you were.

  10. micha (9): “I am saying that once you posit that our minds weren’t designed to properly analyze how we think, we really can’t have this conversation. You could be right, you could be wrong; but if you are right we lack the equipment to ever be sure you were.”

    Again, this turns on an oversimplification. Pretty clearly, I think. It’s like saying that because the human eye alone cannot determine the nature of stars, we can never discuss the nature of stars. The fact that metacognition alone cannot determine the nature of consciousness says nothing about the possibility of determining the nature of consciousness, only that metacognition, like the human eye, needs help.

  11. VicP (8): “If intentional states were emotional states or attitudinal states that were triggered by the higher brain into a more fundamental area, module of the brain or old brain, would you still stand by your statement?”

    Your formulation answers your question: If it were *really the case* that aboutness was a high-dimensional property of certain neural states (as say, LoT theories presume), then BBT would be wrong, and we should expect it to be falsified by a more mature cognitive science.

    But we need to be wary of what I’ve begun to think of as ‘Deacon’s Fallacy’: the assumption that we are doing anything more than reading intentionality *into* these systems (which happen to be exactly the kinds of systems requiring heuristic cognition to be understood).

  12. Scott (11), If it acts like it’s intentional, moves like it’s intentional… well, it may not be biologically intentional but a non-biological simulation. My point from Peter’s last blog: the core self may be something we share with other species, a core of ‘being’ for simple insects or a core being with basic emotional states for higher species. Heuristics is just shorthand for emergent space-time behaviour. There may be no clear demarcation between the core self and higher functions except for ouch!
    Just look at the stock market, all that technology but the main function can be basic human fear. However I think all of these worldwide business news channels are there to temper emotions. A financial singularity.

  13. Scott,
    Again, this turns on an oversimplification.

    To be fair, you do call it “blind brain theory”, not “legally blind brain theory (but can still find the biscuit tin by itself)”
