Dan Dennett famously based his view of consciousness on the intentional stance. According to him the attribution of intentions and other conscious states is a most effective explanatory strategy when applied to human beings, but that doesn’t mean consciousness is a mysterious addition to physics. He compares the intentions we attribute to people with centres of gravity, which also help us work out how things will behave, but are clearly not a new set of real physical entities.

Whether you like that idea or not, it’s clear that the human brain is strongly predisposed towards attributing purposes and personality to things. Now a new study by Spunt, Meyer and Lieberman using fMRI provides evidence that even when the brain is ostensibly not doing anything, it is in effect ready to spot intentions.

This is based on findings that similar regions of the brain are active both in a rest state and when making intentional (but not non-intentional) judgements, and that activity in the prefrontal cortex of the kind observed when the brain is at rest is also associated with greater ease and efficiency in making intentional attributions.

There’s always some element of doubt about how ambitious we can be in interpreting what fMRI results are telling us. So far as I can see, it’s possible in principle that if we had a more detailed picture than fMRI can provide, we might see more significant differences between the rest state and the attribution of intentions; but the researchers cite evidence supporting the view that broad levels of activity are at least a significant indicator of general readiness.

You could say that this tells us less about intentionality and more about the default state of the human mind. Even when at rest, on this showing, the brain is sort of looking out for purposeful events. In a way this supports the idea that the brain is never naturally quiet, and explains why truly emptying the mind for purposes of deep meditation and contemplation might require deliberate preparation and even certain mental disciplines.

So far as consciousness itself is concerned, I think the findings lend more support to the idea that having ‘theory of mind’ is an essential part of having a mind: that is, that being able to understand the probable point of view and state of knowledge of other people is a key part of having full human-style consciousness yourself.

There’s obviously a bit of a danger of circularity there, and I’ve never been sure it’s a danger that Dennett for one escapes. I don’t know how you attribute intentions to people unless you already know what intentions are. The normal expectation would be that I can do that because I have direct knowledge of my own intentions, so all I need to do is hypothesise that someone is thinking the way I would think if I were in their shoes. In Dennett’s theory, me having intentions is really just more attribution (albeit self-attribution), so we need some other account of how it all gets started (apparently the answer is that we assume optimal intentions in the light of assumed goals).

Be that as it may, the idea that consciousness involves attributing conscious states to ourselves is one that has a wider appeal and it may shed a slightly different light on the new findings. It might be that the base activity identified by the study is not so much a readiness to attribute intentions, but a continuous second-order contemplation of our own intentions, and an essential part of normal consciousness. This wouldn’t mean the paper’s conclusions are wrong, but it would suggest that it’s consciousness itself that makes us more ready to attribute intentions.

Hard to test that one because unconscious patients would not make co-operative subjects…

37 Comments

  1. Sci says:

    I think you’re right about Dennett not escaping the problem of circularity. My understanding of Bakker’s writing (which, admittedly, is probably wrong) is that what makes the snake let go of its tail is the abandonment of even the “stupid” homunculi Dennett appeals to.

    That said, Rosenberg seems to have expressed the eliminativist position clearly – we don’t have thoughts about anything, because all intentionality is illusory. I’m not sure any other materialist eliminativist has done so (possibly because many of them seem to have gone into New Atheist preaching)?

  2. Richard Wein says:

    Hi Peter,

    I share Dennett’s “intentional stance” view, so let me try to defend it.

    Dan Dennett famously based his view of consciousness on the intentional stance. According to him the attribution of intentions and other conscious states is a most effective explanatory strategy when applied to human beings, but that doesn’t mean consciousness is a mysterious addition to physics.

    I don’t think that’s quite right. The intentional stance is not about attributing conscious states. It’s about attributing *intentional* states, like beliefs and other states that are about something. These needn’t be conscious states. After all, I’m not always conscious of my beliefs. And the intentional stance is not just an effective explanatory strategy when applied to humans. It can be applied to animals and inanimate objects too.

    I would put more emphasis on prediction than on explanation. Attributing intentional states helps us to predict behaviour. If I tell you someone believes it’s going to rain, you can predict that he’s more likely to take an umbrella with him. If we allow the idea of a philosophical zombie, then attributing such a belief to the equivalent zombie would be just as effective as a predictive strategy. Of course, we can use the intentional stance for explanations too. If you ask me why the man or zombie took an umbrella with him, we can say it was because he believed it was going to rain.

    It seems to me that critics of this view often think the word “belief” (and other intentional language) entails consciousness by definition. So they think that attributions of beliefs are necessarily attributions of consciousness. The point of the intentional stance view is that attributions of intentional states can be just as useful whether a system is conscious or not, and there’s no good reason to limit them to conscious systems. I think this is clearer when we talk about the predictive benefits than when we talk about the explanatory benefits. When it comes to explanations, critics are likely to think that there’s no benefit in giving an explanation if that explanation is false, and that attributing beliefs to a non-conscious system is a false attribution. But in my example of predicting that the zombie will take an umbrella, I think the predictive benefit is clear.

    As far as I’m concerned, the “intentional stance” view says nothing specifically about consciousness, though it can be applied to conscious states as well as non-conscious ones.

    I don’t know how you attribute intentions to people unless you already know what intentions are. The normal expectation would be that I can do that because I have direct knowledge of my own intentions, so all I need to do is hypothesise that someone is thinking the way I would think if I were in their shoes.

    Analogue of your first sentence: I don’t know how you could call something a chair unless you already know what a chair is. To use any language we must first acquire the correct habits of use, which we do by hearing and imitating the usage of those around us. A zombie child could acquire the correct habits of using intentional language (in its predictions and explanations) just as well as a conscious child.

    (I don’t think that philosophical zombies are a real possibility. But I think it’s useful to allow them for the sake of argument, to help us separate the subjects of intentionality and consciousness.)

  3. Richard Wein says:

    I’ve been thinking of an interesting thought experiment. (Sorry if this sort of thing has been discussed here already.) Imagine you wake up one morning and find that 90% of the population have (somehow) had their brains replaced by computers running whole brain simulations of the brains they had before. They continue behaving just as before, but according to Searle et al they’re not conscious. Call them zombies. We can use X-rays to tell who’s a zombie, but we can’t tell just by looking. I think this would make a good scenario for a science-fiction novel!

    If you’re a Searlian, perhaps you’ll break off relations with zombified friends and family, since you think they’re now only things, and not real people. But for practical reasons you’ll have to get on with zombies, as they make up 90% of the population. Even if you know your boss is a zombie, you’ll have to at least go through the motions of treating him like a person. In dealing with zombies or talking about them, you would have to adopt the intentional stance, much as you would do with humans. You would need to say things like (to a co-worker) “Does the boss think there’ll be a delivery tomorrow?”, “Does he want me to start work on the new project?” Even if you insist that the boss’s “beliefs” are only beliefs in a “metaphorical” or “as-if” sense, you would still be adopting the intentional stance when you attribute “beliefs” to him.

  4. Sci says:

    Heh, I’d probably just try to hack the simulation of my boss to give me a raise since programs don’t deserve rights.

  5. Richard Wein says:

    Hi Sci,

    I think the social situation would be unstable if many people thought like that. The zombies wouldn’t be able to “trust” the humans. There’d be trouble. Originally, I was going to make the proportion 50%, but I thought with those numbers the humans might try to eliminate the zombies. That would be hard enough, but they might try, leading to a civil war. At 90% zombies, it would probably seem too hard to try. But the zombies might be forced to segregate the humans, out of self-defence.

    I had another similar SF idea a while back. Suppose Star Trek style transporters were introduced. After a fair proportion of people have used them, the idea gets around that people who’ve been transported are zombies. Or just that they’re not the same people. You’re not my brother any more. You have no rights to my brother’s property.

    Well, those are the strange things I waste my time thinking about. If only I was a good enough writer to put them in a novel.

  6. Callan S. says:

    Sci, how is he in a position to give you anything (let alone a raise) if he doesn’t have rights?

  7. Sci says:

    @Richard: It’s definitely a good idea for a short story at least – the variety of ways people react to such entities would make an entertaining narrative, especially if you had perspectives from different regions where the impact of AI varied (it replaces jobs in some areas, has barely made an impact in others). But I think the fact people would attempt to project an intentional stance onto the AIs wouldn’t be convincing to people like Searle or myself.

    After all, I could probably convince a toddler to treat some basic speaking toys as if they have consciousness, but AFAIK there are very few (no?) people who attribute consciousness to every program? As Jochen asked in a previous thread, does anyone worry about the agents in video games crying out in supposed pain as they die?

    @Callan: I assumed for this hypothetical that the simulated minds continued on in the roles they had before. But it would be interesting to see how “murder” would be prosecuted. An interesting episode of some lawyer/cop show guest starring Dennett and Searle, though because there’s a physical flesh body it’s not quite as clear cut as if the case featured an uploaded brain.

  8. Richard Wein says:

    Hi Sci,

    Thanks for the reply.

    But I think the fact people would attempt to project an intentional stance onto the AIs wouldn’t be convincing to people like Searle or myself.

    I think you’ve misunderstood the meaning of “intentional stance”. The intentional stance is something adopted by the observer (the speaker), not something projected onto the observed (the person being spoken about). I’m not saying that you should adopt the intentional stance towards zombies because they’re conscious. I’m saying that it makes sense to adopt the intentional stance towards them even if you know they’re not conscious. It would be useful to say things like, “Does the zombie boss think there’ll be a delivery tomorrow?”. In saying that, you would be adopting the intentional stance towards your zombie boss, despite thinking he’s (it’s) not conscious.

    By the way have you seen my comment #2? It’s marked “awaiting moderation”, so I’m not sure if it’s visible to other people.

  9. Richard Wein says:

    Hi Callan,

    Sci, how is he in a position to give you anything (let alone a raise) if he doesn’t have rights?

    Let’s distinguish between moral rights, legal rights and practical capabilities. Sci’s zombie boss has the practical capability to give him a raise, as long as he still controls the firm’s money. Suppose the boss owns the business and it’s purely a cash business. Who’s going to stop the boss giving Sci more cash?

    In my scenario, 90% of voters, legislators and judges are zombies too. We can assume that the law will recognise equal rights for zombies.

    Actually, I said you can only tell a zombie by an X-ray, so initially most people/zombies won’t know if they’re zombies. I don’t know whether they would want to get themselves X-rayed to find out. Note that zombies and non-zombies will be equally likely to get themselves X-rayed, since by hypothesis their behaviour is unaffected by being a zombie. Of course, once someone finds out he’s a zombie, that’s likely to change his behaviour somewhat. It’s not being a zombie per se that changes his behaviour, but his belief that he’s a zombie. Someone who incorrectly comes to believe he’s a zombie (perhaps two X-rays got switched round) will have his behaviour affected in the same way as someone who correctly comes to believe he’s a zombie.

    So there might be an initial period when most zombies don’t realise they’re zombies. During that period they will be just as likely to persecute known zombies as humans are. Perhaps initially zombie judges will rule that zombies don’t have legal rights, because they don’t realise that they themselves are zombies!

    To avoid such issues, let me propose a variant of my scenario (call this scenario Z2), where the fact someone is a zombie is immediately visible. Being zombified has changed the shape of people’s heads, enough to be pretty obvious. Then zombie judges know they’re zombies, and they’re not likely to refuse legal rights to zombies. (Doing so would be kind of self-defeating, since they would be pretty much saying that they had no right to be a judge and make that judgement.) Similarly, 90% of police, bank employees, etc, are zombies and know they’re zombies. Who’s going to stop Sci’s zombie boss from giving Sci a raise?

  10. Richard Wein says:

    I wrote:

    Then zombie judges know they’re zombies…

    I should clarify. People with changed head shape know they have computers instead of brains. They know that because enough people have been X-rayed to show that anyone with a changed head shape has a computer instead of a brain. In some cases they’ve confirmed it by having their own X-ray. I should have said that zombies know they’re computer-heads (not that they know they’re zombies). To say they know they’re zombies could be taken to imply they know they have no consciousness. But that’s pre-judging a crucial question: would they accept that they have no consciousness? I don’t think they would accept that. I think they would be much less persuadable by Searle than are non-computer-heads (or computer-heads who don’t realise they’re computer-heads).

    How would you behave if you had an X-ray that showed you had a computer instead of a brain? Given that you believe you’re conscious and that computers can’t be conscious, you will probably say that scenario is impossible. But are you so absolutely certain computers can’t be conscious that you can’t even consider the possibility you’re wrong? If you’re not absolutely certain, then try imagining the scenario. It seems to me that you would then have the following options:
    1. Conclude you’re not conscious.
    2. Conclude that Searle is wrong, and that computers can be conscious after all.
    3. Deny that you’re a computer-head, no matter how strong the evidence.
    4. Be unable to decide what to make of the situation.

    I’ll assume that you find #1 the least acceptable. I’ll also assume the evidence you’re a computer-head is so overwhelming that you would choose #2 over #3. So I think that makes #2 and #4 the most plausible options for you. On either of those options, you’re not going to deny that computer-heads are conscious.

    The hypothesis of my scenario (for the sake of argument) was that computer-heads are not conscious. But the hypothesis was also that a non-conscious system behaves just the same way as an equivalent conscious system. So a non-conscious computer-head-Sci would behave just the same way as a conscious computer-head-Sci, and so would also probably not deny that computer-heads are conscious. The same goes for the zombies in my zombie-world scenario. In fact most of them are even less likely to deny that computer-heads are conscious, since they haven’t previously been exposed to philosophical arguments like Searle’s. Once they are aware that they’re computer-heads, I don’t think they will take such arguments seriously. So zombies will know they’re computer-heads, but they won’t know (won’t accept) that they’re not conscious. In that sense, they won’t know they’re zombies. (If you reject my application of words like “know” and “accept” to zombies, then read this as saying that they will behave as if they don’t know/accept that they are zombies.)

  11. Callan S. says:

    Richard, you have to realise my question for Sci is a question of consistency – why at one point respect the boss’s capacity to give a raise as if he were a being who deserved such a right, but at the same time treat him as an it, like a vending machine to be manipulated as much as any object?

    Or is it our natural inclination to ‘respect rights’ when it benefits us to do so? Sounds the sort of pragmatic thinking a ‘zombie’ might engage in…

    How about this for a story pitch – it’s ‘Left Behind’, but instead of people just disappearing, they are replaced by computers.

    The eerie, chilling part of the story is, perhaps all the computers are better people than the ‘real’ people are…

    Equally, if only I were enough of a prolific writer to hash it out to the apparently necessary 50k+ words.

  12. Tanju Cataltepe says:

    Hi Richard,

    So there might be an initial period when most zombies don’t realise they’re zombies. During that period they will be just as likely to persecute known zombies as humans are. Perhaps initially zombie judges will rule that zombies don’t have legal rights, because they don’t realise that they themselves are zombies!

    Why do you think the zombies will have affinity for their kind? Wouldn’t this type of behavior indicate they are not zombies?

  13. Sci says:

    @Callan: “Richard, you have to realise my question for Sci is a question of consistency – why at one point respect the boss’s capacity to give a raise as if he were a being who deserved such a right, but at the same time treat him as an it, like a vending machine to be manipulated as much as any object?”

    Same reason I would have more consideration for a living person than a corpse being animated by an exo-skeleton.

  14. Callan S. says:

    Sci,

    And the ‘zombie’ if it were in your shoes – what would you guess it would do?

    If the answer is ‘the exact same behaviour as me’, then isn’t there a problematism in that?

  15. Tanju Cataltepe says:

    Please note that the first paragraph of my earlier comment (no. 11) was supposed to be a quote from Richard’s comment, but I could not find the right markup tags for quoting. (A pointer to the formatting tags documentation will be appreciated.)

  16. Richard Wein says:

    Hi Tanju,

    Why do you think the zombies will have affinity for their kind?

    I’m not saying computer-heads would initially have any special affinity for other computer-heads. They’d have roughly the same affinities as before. They’d continue to have a strong affinity for their friends and family (whether computer-head or brain-head). They still wouldn’t have much affinity for their previous enemies. However, if they’re spurned by their brain-head friends and family, and persecuted by other brain-heads, they might start developing a general antipathy to brain-heads, and prefer the company of other computer-heads.

    You can use at least the HTML “blockquote” and “i” (italics) tags. Replace those quotes with angle brackets. I assume you can use some other HTML tags too, but those are the only ones I’ve tried here.

  17. Callan S. says:

    Gah, I’m stuck in comment moderation again! Not sure if reposting is the done thing (prolly not)

  18. Cognicious says:

    Richard Wein wrote: “The hypothesis of my scenario (for the sake of argument) was that computer-heads are not conscious. But the hypothesis was also that a non-conscious system behaves just the same way as an equivalent conscious system.”

    What supports the second hypothesis? Why would a nonconscious system be motivated to behave the same way as a conscious system or, indeed, to behave in any way at all? Humans go through the day doing things that are at least partly responses to conditions they’re conscious of. For instance, how would a computer-head know it needed to blow its nose? Without consciousness, it would lack the discomfort that moves the rest of us to do so, and it couldn’t look forward to the relief of discomfort that nose-blowing brings.

  19. Richard Wein says:

    Hi Cognicious,

    What supports the second hypothesis?

    You’re asking the wrong person. I only adopted that hypothesis for the sake of argument. I don’t accept it myself.

    Many people seem to think that consciousness is “epiphenomenal”, and therefore that you could have a non-conscious “philosophical zombie” that behaved just like a conscious equivalent. I’m not one of them.

  20. Scott Bakker says:

    Peter: “There’s obviously a bit of a danger of circularity there, and I’ve never been sure it’s a danger that Dennett for one escapes. I don’t know how you attribute intentions to people unless you already know what intentions are. The normal expectation would be that I can do that because I have direct knowledge of my own intentions, so all I need to do is hypothesise that someone is thinking the way I would think if I were in their shoes. In Dennett’s theory, me having intentions is really just more attribution (albeit self-attribution), so we need some other account of how it all gets started (apparently the answer is that we assume optimal intentions in the light of assumed goals).”

    I think you put your finger on the primary problem with Dennett’s interpretivism. The charge of circularity largely comes down to the ‘stance stance,’ more or less. Since Dennett (like Rosenberg) lacks any thorough, *positive* account of intentional phenomena/idioms, many think he has to beg some account of intrinsic intentionality. But this assumption actually *does* beg the question, insofar as Dennett need not have a well-developed theory of intentional phenomena/idioms to cogently deny intentionalist theories of those phenomena/idioms. His real dilemma is abductive, the same as Rosenberg’s. He needs “some other account of how it all gets started.” But for some reason he thinks walking readers through the evolutionary complexity tree provides him with this account.

    By BBT’s lights, he still runs afoul of the primary problem plaguing intentionalism: that of assuming intentional cognition is capable of theoretically explicating intentional cognition, viz., the ‘intentional stance.’ There’s no stance, strictly speaking, no ‘attribution,’ just specialized heuristic systems leaping to specialized heuristic answers on the basis of sparse, yet specialized information. All the conundrums involving ‘intrinsic’ or ‘original’ intentionality arise when we attempt to theoretically solve these systems *using* these systems (via ‘reflection’). Since we evolved intentional cognition to solve practical problems in the absence of detailed information, and since we have no metacognitive inkling of this, we find ourselves both stymied (for applying intentional cognition out of school) and mystified (for lacking any knowledge that there is a ‘school’).

  21. Peter says:

    Many apologies for comments that got lost in the spam filter.

    Richard – you don’t think the intentional stance explains consciousness. Do you think Dennett meant it to? If not, what was his explanation? The observation that attributing intentions can help predict behaviour is sort of uninteresting in itself, isn’t it?

  22. Richard Wein says:

    Hi Peter,

    Richard – you don’t think the intentional stance explains consciousness. Do you think Dennett meant it to? If not, what was his explanation?

    Dennett wrote one book (“The Intentional Stance”) to explain intentionality and another book (“Consciousness Explained”) to explain consciousness. I don’t feel I can do justice to the latter book with a quick summary, or even feel confident that I fully understand it. So please excuse me if I pass on that question.

    The observation that attributing intentions can help predict behaviour is sort of uninteresting in itself, isn’t it?

    Well, there is rather more to be said on the subject than that. And many people reject Dennett’s views on the subject, so they’re interesting enough to be controversial! In the latest thread I posted a link to one of Dennett’s papers. I recommend reading it.

    http://www-personal.umich.edu/~lormand/phil/teach/dennett/readings/Dennett%20-%20True%20Believers%20(highlights).pdf
    (When I posted it before, the “.pdf” extension didn’t get included in the link, so you may have to add that manually.)

    Sorry if this reply seems rather terse, but I’d like to wind down my participation in the discussion for the time being, as it’s been quite time consuming.

  23. Sci says:

    @Callan:

    ‘If the answer is ‘the exact same behaviour as me’, then isn’t there a problematism in that?’

    By this argument it seems even Eliza deserves some rights, since that program’s response would correspond to some hypothetical human. Of course even the subroutines in some of the best computer games then deserve rights.

  24. Sci says:

    @Scott: Is there an easy access point to your position for the layperson? I have to admit it’s not clear to me how your position isn’t either a rewording of Rosenberg’s (we have no thoughts, it’s all illusion) or akin to the argument that relation to the world is what fixes a program’s meaning (the lack of applicable “isomorphism” between a chess and checkers program). Though as far as I recall you’re not a computationalist?

    I guess what confuses me is your denial of intrinsic intentionality while simultaneously rejecting Rosenberg’s claims as too extreme. I fully respect I may not be aware of the philosophical positions one can take as an eliminativist.

  25. Callan S. says:

    Sci, I was looking more at the morality of hacking the boss – if the zombie would do the same thing, doesn’t it seem problematic being comfortable with the same thing the zombie would be comfortable with?

    As to Eliza, the code is far less sophisticated than an earthworm’s reactions, and the words output are similar to how some flies wear the colour of a wasp. Just mimicry. Maybe if we looked at the robot from Ex Machina instead?

  26. Peter says:

    OK, Richard, let’s not prolong it; but I’ve read both the books you mention: I thought I understood them; and I’m very surprised to hear that you don’t consider the intentional stance to be the key element in Dennett’s explanation of consciousness.

  27. Richard Wein says:

    Peter, I think understanding intentionality is probably important in understanding consciousness. At the very least I would say that misunderstandings of intentionality get in the way of understanding consciousness. My point is rather that we can understand intentionality without understanding consciousness. I think it’s significant that Dennett wrote the book on intentionality before the book on consciousness, and presumably he thought it could be understood on its own.

  28. Richard Wein says:

    P.S. I’ve just checked my copy of “Consciousness Explained”. Dennett says that his “heterophenomenological” approach requires us to take the intentional stance, but, as far as I can see, the book does little to explain intentionality. We all take the intentional stance (and understand intentional language) in our everyday speech. That’s not the same thing as understanding intentionality, in the sense that reading “The Intentional Stance” helps us understand intentionality. Dennett mentions “The Intentional Stance”, but doesn’t require readers of “Consciousness Explained” to have read it. He doesn’t seem to think that much understanding of intentionality is needed for understanding consciousness, though I take it he assumes his readers are not labouring under some misunderstanding of intentionality that would get in the way of their understanding consciousness, and no doubt a good understanding of intentionality would help.

  29. Richard Wein says:

    I know I said I wanted to wind down my participation, but I took another look at the OP, and realised that my response to the main argument (about circularity) was inadequate. So I’d like to retract that response. However, I’m unable to understand just what the argument is. I won’t ask for clarification, as I don’t want to get drawn into further discussion. But I would like to point out one apparent misinterpretation of Dennett, though I’m not sure if it’s critical to the argument.

    In Dennett’s theory, me having intentions is really just more attribution (albeit self-attribution),

    Dennett doesn’t deny that there is an intention there, prior to our attribution. Our attribution is our way of describing what’s there.

  30. Scott Bakker says:

    Richard: “Dennett says that his “heterophenomenological” approach requires us to take the intentional stance, but, as far as I can see, the book does little to explain intentionality.”

    I know you don’t want to get embroiled (I go through the same phases) but I thought this might help. Dennett never actually explains intentionality (or consciousness, for that matter). He puts the matter rather succinctly in Intuition Pumps:

    “I propose we simply postpone the worrisome question of what really has a mind, about what the proper domain of the intentional stance is. Whatever the right answer to this question is – if it has a right answer – this will not jeopardize the plain fact that the intentional stance works remarkably well as a prediction method in these other areas, almost as well as it works in our daily lives as folk psychologists dealing with other people. This move of mine annoys and frustrates some philosophers, who want to blow the whistle and insist on properly settling the issue of what a mind, a belief, a desire is before taking another step. Define your terms, sir! No, I won’t. That would be premature. I want to explore first the power and extent of application of this good trick, the intentional stance.” Intuition Pumps, 79

    Now I personally can make little sense of this quote (how can one both postpone *and* explore the ‘extent of application’ (domain) of this good trick?), but the takeaway message seems to be that giving the answers we want from him—some definitive explanation as to what intentional cognition is, what systems comprise its functions, what systems comprise its adaptive problem ecology, what information it picks out, what information it neglects, and what kind of cognitive illusions (such as the ‘Cartesian theatre’) follow from reflection upon it—will have to wait.

  31. Richard Wein says:

    Hi Scott,

    Although I’ve read “Intuition Pumps”, I can’t remember how much it says about the intentional stance. It seems to me there are two possible questions Dennett could be referring to in that passage:
    1. To which systems can the intentional stance usefully be applied?
    2. Of those systems, which ones can be considered to have “minds”?

    Perhaps the apparent contradiction you point to can be resolved if we interpret Dennett as saying that it is only the second of these questions that is to be postponed. And I don’t think that “postponed” means postponed indefinitely. Arguably, he has gone some way to addressing question #2 in “Consciousness Explained”.

    With regard to question #1, I can’t remember just how much Dennett has said. But we don’t need to draw a well-defined border between those systems to which the intentional stance can usefully be applied, and those to which it can’t. In fact I would say that, like many distinctions, this one is fuzzy. Explaining is not primarily a matter of drawing demarcation lines.

    “…but the takeaway message seems to be that giving the answers we want from him—some definitive explanation as to what intentional cognition is, what systems comprise its functions, what systems comprise its adaptive problem ecology, what information it picks out, what information it neglects, and what kind of cognitive illusions (such as the ‘Cartesian theatre’) follow from reflection upon it—will have to wait.”

    I don’t understand what you mean by most of the items on your list. However, I would suggest that some or all of those items do not have to be supplied by an explanation of intentionality. The last item sounds more like a matter of consciousness, and I think Dennett addresses it in “Consciousness Explained”. Perhaps Dennett hasn’t explained intentionality to your satisfaction, or given as much detail as you want. In my view he has said enough to constitute a valuable explanation. He has significantly demystified the subject of intentionality, at least for those who understand and accept his explanation.

  32. Peter says:

    Richard,

    No offence, but I’m not sure it’s altogether good manners on your part to pop in, suggest that I’ve got various things wrong – and then demand that the discussion not continue! And this is the second time you’ve done it, isn’t it – after first announcing you were going to defend Dennett, then confessing you didn’t understand ‘Consciousness Explained’ and now withdrawing what you said in the first place!
    How about we just agree it’s possible that in general I know what I’m talking about and leave it there?

  33. Richard Wein says:

    Hi Peter,

    I think that’s rather a harsh interpretation. But I’ll admit to being weak-willed in failing to follow through on my aim of winding down my participation. Anyway, I really will make this my final comment for the time being. Best wishes.

  34. Sci says:

    @Callan: Apologies, you’ve completely lost me on whatever point you’re trying to make regarding hacking the boss-program that replaces the real person.

    As for what’s mimicry – to me it’s all mimicry so far as it involves a program running on a Turing machine. Complexity adds no miracle that I can see as valid – if anything, it suggests Rosenberg is correct that thoughts are illusory, though such an extraordinary claim needs extraordinary evidence.

    A physical recreation of an actual brain using silicon or other elements would be another matter IMO. In that I’d lean toward Haikonen’s position and say there probably is some “light” in there.

    I fear we may have to agree to disagree barring some new scientific evidence. In fact it seems most arguments of this sort end due to exhaustion rather than anyone shifting their position. 🙂

  35. Scott Bakker says:

    Sci: “Is there an easy access point to your position for the layperson? I have to admit it’s not clear to me how your position isn’t either a rewording of Rosenberg’s (we have no thoughts, it’s all illusion) or akin to the argument that relation to the world is what fixes a program’s meaning (the lack of applicable “isomorphism” between a chess and checkers program). Though as per recollection I don’t think you’re a computationalist?”

    I think computationalism, ultimately, is where mysterians go to hide. The differences between my eliminativism and dogmatic eliminativisms like Rosenberg’s are actually quite extensive. Kriegel actually does a good job summing up the problem with such eliminativistic positions: what they gain in parsimony, they lose in explanatory power. Saying there’s no such thing as meaning is all well and fine, but then you’d better have some story about what it is we are getting so fundamentally wrong all the time and why we’re doing so! Rosenberg really has none. Nor does Dennett, really. My eliminativism actually falls out of just such explanations.

    A good place to begin with my position can be found at: https://scientiasalon.wordpress.com/2014/11/05/back-to-square-one-toward-a-post-intentional-future/

  36. Scott Bakker says:

    Richard: I definitely agree with you on the fuzziness. Borderline cases are key to my position. Let me reiterate my questions regarding intentional cognition:

    1) What brain systems comprise intentional cognition?
    2) What environmental systems comprise its adaptive problem ecology?
    3) What information does it pick out?
    4) What information does it neglect?
    5) What kind of cognitive illusions (such as the ‘Cartesian theatre’) follow from reflection upon it?

    and suggest that, rather than misunderstanding Dennett, I’ve actually found a way to make sense of his biggest insights without any claptrap regarding ‘intentional stances.’ If you don’t need ‘stances’ to make sense of human cognition, then why on earth would we bother with them?

  37. Sci says:

    @Scott:

    “I think computationalism, ultimately, is where mysterians go to hide.”

    Ha! Now that’s a quote sure to ruffle feathers. I’d love to see you explore this further in a blog post – if you have already please link.

    Thanks for the link to the S.Salon piece. I’d read that earlier, but I do feel I’m closer now to understanding your particular position.
