Physical determinism is implausible according to Richard Swinburne in the latest JCS; he cunningly attacks via epiphenomenalism.

Swinburne defines physical events as public and mental ones as private – we could argue about that, but as a bold, broad view it seems fair enough. Mental events may be phenomenal or intentional, but for current purposes the distinction isn’t important. Physical determinism is defined as the view that each physical event is caused solely by other physical events; here again we might quibble, but the idea seems basically OK to be going on with.

Epiphenomenalism, then, is the view that while physical events may cause mental ones, mental ones never cause physical ones. Mental events are just, as they say, the whistle on the locomotive (though the much-quoted analogy is not exact: prolonged blowing of the whistle on a steam locomotive can adversely affect pressure and performance). Swinburne rightly describes epiphenomenalism as an implausible view (in my view, anyway – many people would disagree), but for him it is entailed by physical determinism, because physical events are only ever caused by other physical events. In his eyes, then, if he can prove that epiphenomenalism is wrong, he has also shown that physical determinism is ruled out. This is an unusual, perhaps even idiosyncratic perspective, but not illogical.

Swinburne offers some reasonable views about scientific justification, but what it comes down to is this: to know that epiphenomenalism is true we have to show that mental events cause no physical events; but that very fact would mean we could never register when they had occurred – so how would we prove it? In order to prove epiphenomenalism true, we must assume that what it says is false!

Swinburne takes it that epiphenomenalism means we could never speak of our private mental events – because our words would have to have been caused by the mental events, and ex hypothesi they don’t cause physical events like speech. This isn’t clearly the case – as I’ve mentioned before, we manage to speak of imaginary and non-existent things which clearly have no causal powers. Intentionality – meaning – is weirder and more powerful than Swinburne supposes.

He goes on to discuss the famous findings of Benjamin Libet, which seem to show that decisions are detectable in the brain before we are aware of having made them. These results point towards epiphenomenalism being true after all. Swinburne is not impressed; he sees no basic causal problem in the idea that a brain event precedes the mental event of the decision, which in turn precedes action. Here he seems to me to miss the point a bit, which is that if Libet is right, the mental experience of making a decision has no actual effect, since the action is already determined.

The big problem, though, is that Swinburne never engages with the normal view; i.e. that in one way or another mental events have two aspects. A single brain event is at the same time a physical event which is part of the standard physical story, and a mental event in another explanatory realm. In one way this is unproblematic; we know that a mass of molecules may also be a glob of biological structure, and an organism; we know that a pile of paper, a magnetised disc, or a reel of film may all also be “A Christmas Carol”. As Scrooge almost puts it, Marley’s ghost may be undigested gravy as well as a vision of the grave.

It would be useless to pretend there is no residual mystery about this, but it’s overwhelmingly how most people reconcile physical determinism with the mental world, so for Swinburne to ignore it is a serious weakness.

Aeon Magazine has published my Opinion piece on brain simulation. Go on over there and comment! Why not like me while you’re at it!!!

I’m sorry about that outburst – I got a little over-excited…

Coming soon (here) Babbage’s forgotten rival…

Is cosmopsychism the panpsychism we’ve all been waiting for? Itay Shani thinks so and sets out the reasons in this paper. While others start small and build up, he starts with the cosmos and works down. But he rejects the Blobject…

To begin at the beginning. Panpsychism is the belief that consciousness is everywhere; that it is in some sense a basic part of the world. Typically when people try to explain consciousness they start with the ingredients supplied by physics and try to build a mind out of them in a way which plausibly accounts for all the remarkable features of consciousness. Panpsychists just take awareness for granted, the way we often take matter or energy for granted; they take it to be primary, and this arguably gets them out of a very difficult explanatory task. There are a number of variants – panexperientialism, panentheism, and so on – which tend to be bracketed with panpsychism as similar considerations apply to all members of the family.

This kind of thinking has enjoyed quite a good level of popularity in recent years, perhaps a rising one. Regular readers may recall, though, that I’m not attracted by panpsychism. If stones have consciousness, we still have to explain how human consciousness comes to be different from what the stones have got. I suspect that that task is going to be just as difficult as explaining consciousness from scratch, so that adopting the panpsychist thesis leaves us worse off rather than better.

Shani, however, thinks some of the problems are easily dealt with; others he takes very seriously. He points out quite fairly that panpsychists are not bound to ascribe awareness to every entity at every level; they’re OK just so long as there is, as it were, universal coverage at some level. Most panpsychists, as he rightly observes, tend to push the basic home of consciousness down to a micro level, which leaves us with the problem of how these simple micro-consciousnesses can come together to form a higher level one – or sometimes not form a higher one.

This combination issue is a difficult one that comes in many forms: Shani picks out particularly the questions of how micro-subjects can combine to form a macro-subject; how phenomenal experiences can combine; and how the structure of experience can combine. Cutting to the chase, he finds the most difficult of the three to be the problems with subjects, and in particular he quotes an argument of Coleman’s. This is, in brief, that distinct subjects require distinct points of view, but that in merging, points of view lose their identity. He mentions the simplified case of a subject that only sees red and one that only sees blue: the combined point of view includes both blue and red and the ‘just-red’ and ‘just-blue’ points of view are lost.

I think it requires a good deal more argumentation than Shani offers to make all this really convincing. He and Coleman, for example, take it as given that the combination of subjects must preserve the existence of the combined elements, more or less as the combination of hydrogen and oxygen to make water does not annihilate the component elements. Maybe that is the case, but the point seems very arguable.

Shani also seems to give way to Coleman without much of a fight, although there’s plenty of scope for one. But after all these are highly complex issues and Shani only has so much space: moreover I’m inclined to go along with him because I agree that the combination problem is very bad; perhaps worse than Shani thinks.

It just seems intuitively very unlikely that two micro-minds can be combined. Two things seem clearest about our own minds: they combine terrific complexity with a strong overall unity; and both of those factors seem to throw up problems for a merger. To me it seems that two minds are like two clocks: you cannot meaningfully merge them except by taking them apart into their basic components and putting something completely new together – which is no use at all for panpsychism.

For Shani, of course, combination must fail so that he can offer his cosmic solution as an alternative route to a viable panpsychism. He sets out his stall with seven postulates.

  1. The cosmos as a whole is the only ontological ultimate there is, and it is conscious.
  2. It is prior to its parts.
  3. It is laterally dual in nature, having a concealed and a revealed side (the concealed side being phenomenal experience while the revealed side is the apparently objective world around us).
  4. It is like a fluctuating ocean, with waves, ripples and vortices assuming temporary identity of their own.
  5. The cosmic consciousness grounds the smaller consciousnesses within it.
  6. Conscious entities are dynamic configurations within the cosmic whole.
  7. These consciousnesses are severally related to particular surges or vortices of the cosmic consciousness and never fully separate from it.

That seems at least a vision we can entertain, but it immediately faces the challenge of the Blobject. This is the universal cosmic object championed by Terry Horgan & Matjaž Potrč. They are happy with the grand cosmic unity proposed by Shani but they go further; how can it have any parts? They believe the great cosmic consciousness is the Blobject; the only thing that truly exists; the idea that there are really other things is deluded.

The austere ontology of the Blobject and its splendid parsimony can only be admired. We might talk more about it another time; but for now I’m inclined to agree with Shani that the task of reconciling it with actual experience is just too fraught with difficulty.

So does Shani succeed? He does, I think, set out, albeit briefly, a coherent and interesting view; but it does not have the advantages he supposes. He believes that starting at the top and working down avoids the difficult problems we encounter if we start at the bottom and work up. I think that is an illusion derived from the fact that the bottom-up approach has just been discussed more. I think in fact that just the same problems must recur whichever way we approach things.

Take the Coleman point. Coleman’s objection is that in combining, two points of view lose their separate identity, while it needs to be preserved. But surely, if we take his blue-and-red pov and split it into just-blue and just-red we get a similar loss of the original identity. Now as I said, I’m not altogether sure that this need be a problem, but it seems to me clear that it doesn’t really matter which way we move through the problem; and the same must be true of all arguments which relate different levels of panpsychist consciousness. Is there really any fundamental asymmetry that makes the top-down view stronger?

Over the years many variants and improvements to the Turing Test have been proposed, but surely none more unexpected than the one put forward by Andrew Smart in this piece, anticipating his forthcoming book Beyond Zero and One. He proposes that in order to be considered truly conscious, a robot must be able to take an acid trip.

He starts out by noting that computers seem to be increasing in intelligence (whatever that means), and that many people see them attaining human levels of performance by 2100 (actually quite a late date compared to the optimism of recent decades; Turing talked about 2000, after all). Some people, indeed, think we need to be concerned about whether the powerful AIs of the future will like us or behave well towards us. In my view these worries tend to blur together two different things; improving processing speeds and sophistication of programming on the one hand, and transformation from a passive data machine into a spontaneous agent, quite a different matter. Be that as it may, Smart reasonably suggests we could give some thought to whether and how we should make machines conscious.
It seems to me – this may be clearer in the book – that Smart divides things up in a slightly unusual way. I’ve got used to the idea that the big division is between access and phenomenal consciousness, which I take to be the same distinction as the one defined by the terminology of Hard versus Easy Problems. In essence, we have the kind of consciousness that’s relevant to behaviour, and the kind that’s relevant to subjective experience.
Although Smart alludes to the Chalmersian zombies that demonstrate this distinction, I think he puts the line a bit lower; between the kind of AI that no-one really supposes is thinking in a human sense and the kind that has the reflective capacities that make up the Easy Problem. He seems to think that experience just goes with that (which is a perfectly viable point of view). He speaks of consciousness as being essential to creative thought, for example, which to me suggests we’re not talking about pure subjectivity.
Anyway, what about the drugs? Smart seems to think that requiring robots to be capable of an acid trip is raising the bar, because it is in these psychedelic regions that the highest, most distinctive kind of consciousness is realised. He quotes Hofmann as believing that LSD…

…allows us to become aware of the ontologically objective existence of consciousness and ourselves as part of the universe…

I think we need to be wary here of the distinction between becoming aware of the universal ontology and having the deluded feeling of awareness. We should always remember the words of Oliver Wendell Holmes Sr:

…I once inhaled a pretty full dose of ether, with the determination to put on record, at the earliest moment of regaining consciousness, the thought I should find uppermost in my mind. The mighty music of the triumphal march into nothingness reverberated through my brain, and filled me with a sense of infinite possibilities, which made me an archangel for the moment. The veil of eternity was lifted. The one great truth which underlies all human experience, and is the key to all the mysteries that philosophy has sought in vain to solve, flashed upon me in a sudden revelation. Henceforth all was clear: a few words had lifted my intelligence to the level of the knowledge of the cherubim. As my natural condition returned, I remembered my resolution; and, staggering to my desk, I wrote, in ill-shaped, straggling characters, the all-embracing truth still glimmering in my consciousness. The words were these (children may smile; the wise will ponder): “A strong smell of turpentine prevails throughout.”…

A second problem is that Smart believes (with a few caveats) that any digital realisation of consciousness will necessarily have the capacity for the equivalent of acid trips. This seems doubtful. To start with, LSD is clearly a chemical matter and digital simulations of consciousness generally neglect the hugely complex chemistry of the brain in favour of the relatively tractable (but still unmanageably vast) network properties of the connectome. Of course it might be that a successful artificial consciousness would necessarily have to reproduce key aspects of the chemistry and hence necessarily offer scope for trips, but that seems far from certain. Think of headaches; I believe they generally arise from incidental properties of human beings – muscular tension, constriction of the sinuses, that sort of thing – I don’t believe they’re in any way essential to human cognition and I don’t see why a robot would need them. Might not acid trips be the same, a chance by-product of details of the human body that don’t have essential functional relevance?

The worst thing, though, is that Smart seems to have overlooked the main merit of the Turing Test; it’s objective. OK, we may disagree over the quality of some chat-bot’s conversational responses, but whether it fools a majority of people is something testable, at least in principle. How would we know whether a robot was really having an acid trip? Writing a chat-bot to sound as if it were tripping seems far easier than the original test; but other than talking to it, how can we know what it’s experiencing? Yes, if we could tell it was having intense trippy experiences, we could conclude it was conscious… but alas, we can’t. That seems a fatal flaw.

Maybe we can ask tripbot whether it smells turpentine.

Tom has written a nice dialogue on the subject of qualia: it’s here.

Could we in fact learn useful lessons from talking to a robot which lacked qualia?

Perhaps not; one view would be that since the robot’s mind presumably works in the same way as ours, it would have similar qualia: or would think it did. We know that David Chalmers’ zombie twin talked and philosophised about its qualia in exactly the same way as the original.

It depends on what you mean by qualia, of course. Some people conceive of qualia as psychological items that add extra significance or force to experience; or as flags that draw attention to something of potential interest. Those play a distinct role in decision making and have an influence on behaviour. If robots were really to behave like us, they would have to have some functional analogue of that kind of qualia, and so we might indeed find that talking to them on the subject was really no better or worse than talking to our fellow human beings.

But those are not real qualia, because they are fully naturalised and effable things, measurable parts of the physical world. Whether you are experiencing the same blue quale as me would, if these flags or intensifiers were qualia, be an entirely measurable and objective question, capable of a clear answer. Real, philosophically interesting qualia are far more slippery than that.

So we might expect that a robot would reproduce the functional, a-consciousness parts of our mind, and leave the phenomenal, p-consciousness ones out. Like Tom’s robot they would presumably be puzzled by references to subjective experience. Perhaps, then, there might be no point in talking to them about it because they would be constitutionally incapable of shedding any light on it. They could tell us what the zombie life is like, but don’t we sort of know that already? They could play the kind of part in a dialogue that Socrates’ easily-bamboozled interlocutors always seemed to do, but that’s about it, presumably?

Or perhaps they would be able to show us, by providing a contrasting example, how and why it is that we come to have these qualia? There’s something distinctly odd about the way qualia are apparently untethered from physical cause and effect, yet only appear in human beings with their complex brains.  Or could it be that they’re everywhere and it’s not that only we have them, it’s more that we’re the only entities that talk about them (or about anything)?

Perhaps talking to a robot would convince us in the end that in fact, we don’t have qualia either: that they are just a confused delusion. One scarier possibility though, is that robots would understand them all too well.

“Oh,” they might say, “Yes, of course we have those. But scanning through the literature it seems to us you humans only have a very limited appreciation of the qualic field. You experience simple local point qualia, but you have no perception of higher-order qualia; the qualia of the surface or the solid, or the complex manifold that seems so evident to us. Gosh, it must be awful…”

This paper from Chawke and Kanai reports unexpected effects on subjects’ political views, brought about by stimulation of the dorsolateral prefrontal cortex (DLPFC). It seems to make people more conservative.

The research set out to build on earlier studies. Those seemed to suggest that the DLPFC had a role in flagging up conflicts; noting for us where the evidence was suggesting our views might need to be changed. Generally people stick to a particular outlook (the researchers suggest that avoidance of cognitive dissonance and similar stresses is one reason) but every now and then a piece of evidence comes along that suggests we really have to do a little bit of reshaping, and the DLPFC helps with that unwelcome process.

If that theory is right, then gingering up the DLPFC ought to make people readier to change their existing views. To test this, the authors set up arrangements to deliver random trans-cranial noise stimulation bilaterally to the relevant areas. They tested subjects’ political views beforehand; showed them a party political broadcast, and then checked to see whether the subjects’ views had in fact changed.

This was at Sussex, so the political framework was a British one of Labour versus Conservative. The expectation was that stimulating the DLPFC would make the subjects more receptive to persuasion and so more inclined to adjust their views slightly in response to what they were seeing; so Labour-inclined subjects would move to the right while Conservative-inclined ones moved to the left.

Briefly, that isn’t what happened: instead there was a small but significant general shift to the right. Why could that be? To be honest it’s impossible to say, but hypothetically we might suppose that the DLPFC is not, after all, responsible for helping us change our view in the face of contrary evidence, but simply a sceptical or disbelieving module that allows us to doubt or discard political opinions. Arguably – and I hope I’m not venturing into controversial territory – right wing views tend to correspond with general doubt about political projects and a feeling that things are best left alone; we could say that the fewer politics you have the more you tend to be on the right?

Whether that’s true or not it seems alarming that stimulating the brain directly with random noise can affect your political views; it suggests an unscrupulous government could change the result of an election by irradiating the polling stations.

What did it feel like for the subjects? Nothing, it seems; the experimenters were careful to ensure that control subjects got the same kind of experience although their DLPFC was left alone. Subjects were apparently unaware of any change in their views (and we’re only talking shifts on a small scale, not Damascene conversions to the opposite party).

Perhaps in the end it’s not quite as alarming as it seems. Suppose we played our subjects bursts of ordinary random acoustic noise? That would be rather irritating; it might make them overall a little angrier and less patient – might that not also have a small temporary effect on their voting pattern…?

Interesting exchange about Eric Schwitzgebel’s view that we have special obligations to robots…

Are ideas conscious at all? Neuroscience of Consciousness is a promising new journal from OUP introduced by the editor Anil Seth here. It has an interesting opinion piece from David Kemmerer which asks – are we ever aware of concepts, or is conscious experience restricted to sensory, motor and affective states?

On the face of it a rather strange question? According to Kemmerer there are basically two positions. The ‘liberal’ one says yes, we can be aware of concepts in pretty much the same kind of way we’re aware of anything. Just as there is a subjective experience when we see a red rose, there is another kind of subjective experience when we simply think of the concept of roses. There are qualia that relate to concepts just as there are qualia that relate to colours or smells, and there is something it is like to think of an idea. Kemmerer identifies an august history for this kind of thinking stretching back to Descartes.

The conservative position denies that concepts enter our awareness. While our behaviour may be influenced by concepts, they actually operate below the level of conscious experience. While we may have the strong impression that we are aware of concepts, this is really a mistake based on awareness of the relevant words, symbols, or images. The intellectual tradition behind this line of thought is apparently a little less stellar – Kemmerer can only push it back as far as Wundt – but it is the view he leans towards himself.

So far so good – an interesting philosophical/psychological issue. What’s special here is that in line with the new journal’s orientation Kemmerer is concerned with the neurological implications of the debate and looks for empirical evidence. This is an unexpected but surely commendable project.

To do it he addresses three particular theories. Representing the liberal side he looks at Global Neural Workspace Theory (GNWT) as set out by Dehaene, and Tononi’s Integrated Information Theory (IIT); on the conservative side he picks the Attended Intermediate-Level Representation Theory (AIRT) of Prinz. He finds that none of the three is fully in harmony with the neurological evidence, but contends that the conservative view has distinct advantages.

Dehaene points to research that identified specific neurons in a subject’s anterior temporal lobes that fire when the subject is shown a picture of, say, Jennifer Aniston (mentioned on CE – rather vaguely). The same neuron fires when shown photographs, drawings, or other images, and even when the subject is reporting seeing a picture of Aniston. Surely then, the neuron in some sense represents not an image but the concept of Jennifer Aniston? Against this, Kemmerer argues that while a concept may be at work, imagery is always present in the conscious mind; indeed, he contends, you cannot think of ‘Anistonicity’ in itself without a particular image of Aniston coming to mind. Secondly, he quotes further research which shows that deterioration of this portion of the brain impairs our ability to recognise, but not to see, faces. This, he contends, is good evidence that while these neurons are indeed dealing with general concepts at some level, they are contributing nothing to conscious awareness, reinforcing the idea that concepts operate outside awareness. According to Tononi we can be conscious of the idea of a triangle, but how can we think of a triangle without supposing it to be equilateral, isosceles, or scalene?

Turning to the conservative view, Kemmerer notes that AIRT has awareness at a middle level, between the jumble of impressions delivered by raw sensory input on the one hand, and the invariant concepts which appear at the high level. Conscious information must be accessible but need not always be accessed. It is implemented as gamma vector waves. This is apparently easier to square with the empirical data than the global workspace, which implies that conscious attention would involve a shift into the processing system in the lateral prefrontal cortex where there is access to working memory – something not actually observed in practice. Unfortunately although the AIRT has a good deal of data on its side the observed gamma responses don’t in fact line up with reported experience in the way you would expect if it’s correct.

I think the discussion is slightly hampered by the way Kemmerer uses ‘awareness’ and ‘consciousness’ as synonyms. I’d be tempted to reserve awareness for what he is talking about, and allow that concepts could enter consciousness without our being (subjectively) aware of them. I do think there’s a third possibility being overlooked in his discussion – that concepts are indeed in our easy-problem consciousness while lacking the hard-problem qualia that go with phenomenal experience. Kemmerer alludes to this possibility at one point when he raises Ned Block’s distinction between access and phenomenal consciousness (a- and p-consciousness), but doesn’t make much of it.

Whatever you think of Kemmerer’s ambivalently conservative conclusion, I think the way the paper seeks to create a bridge between the philosophical and the neurological is really welcome and, to a degree, surprisingly successful. If the new journal is going to give us more like that it will definitely be a publication to look forward to.


Are babies solipsists? Ali, Spence and Bremner at Goldsmiths say their recent research suggests that they are “tactile solipsists”.

To be honest that seems a little bit of a stretch from the actual research. In essence this tested how good babies were at identifying the location of a tactile stimulus. The researchers spent their time tickling babies and seeing whether the babies looked in the direction of the tickle or not (the life of science is tough, but somebody’s got to do it). Perhaps surprisingly, perhaps not, the babies were in general pretty good at this. In fact the youngest ones were less likely to be confused by crossing their legs before tickling their feet, something that reduced the older ones’ success rate to chance levels, and in fact impairs the performance of adults too.

The reason for this is taken to be that long experience leads us to assume a stimulus to our right hand will match an event in the right visual field, and so on. After the correlations are well established the brain basically stops bothering to check and is then liable to be confused when the right hand (or foot) is actually on the left, or vice versa.

This reminded me a bit of something I noticed with my own daughters: when they were very small their fingers were often splayed out, each one moving quite independently; but in due course they seemed to learn that not much is achieved in most circumstances by using the four digits separately, and that you might as well use them in concert by default to help with grasping, as most of us mostly do except when using a keyboard.

Very young babies haven’t had time to learn any of this and so are not confused by laterally inconsistent messages. The Goldsmiths’ team read this as meaning that they are in essence just aware of their own bodies, not aware of them in relation to the world. It could be so, but I’m not sure it’s the only interpretation. Perhaps it’s just not that complex.

There are other reasons to think that babies are sort of solipsistic. There’s some suggestive evidence these days that babies are conscious of their surroundings earlier than we once thought, but until recently it’s been thought that self-awareness didn’t dawn until around fifteen months, with younger babies unaware of any separation between themselves and the world. This was partly based on the popular mirror test, where a mark is covertly put on the subject’s face. When shown themselves in a mirror, some touch the mark; this is taken to show awareness that the reflection is them, and hence a clear sign of self awareness. The test has been used to indicate that such self-awareness is mainly a human thing, though also present in some apes, elephants, and so on.

The interpretation of the mirror test always seemed dubious to me. Failure to touch your own face might not mean you’ve failed to recognise yourself; contrariwise, you might think the reflection was someone else but still be motivated to check your own face to see whether you too had a mark. If people out there are getting marked, wouldn’t you want to check?

Sure enough about five years ago evidence emerged that the mirror test is in fact very much affected by cultural factors and that many human beings outside the western world react quite differently to a mirror. It’s not all that surprising that if you’ve seen people use mirrors to put on make-up (or shave) regularly your reactions to one might be affected. If we were to rely on the mirror test, it seems many Kenyan six-year-olds would be deemed unaware of their own existence.

Of course the question is in one sense absurd: to be any kind of solipsist is, strictly speaking, to hold an explicit philosophical position which requires quite advanced linguistic and conceptual apparatus which small infants certainly don’t have. For the question to be meaningful we have to have a clear view about what kinds of beliefs babies can be said to hold. I don’t doubt that they hold some inexplicit ones, and that we go on holding beliefs in the same way alongside others at many different levels. If we reach out to catch a ball we can in some sense be said to hold the belief that it is following a certain path, although we may not have entertained any conscious thoughts on the matter. At the other end of the spectrum, where we solemnly swear to tell the truth, the whole truth, and nothing but the truth, the belief has been formulated with careful specificity and we have (one hopes) deliberated inwardly at the most abstract levels of thought about the meaning of the oath. The complex and many-layered ways in which we can believe things have yet to be adequately clarified I think; a huge project and, since introspection is apparently the only way to tackle it, a daunting one.

For me the only certain moral to be drawn from all the baby-tickling is one which philosophers might recognise: the process of learning about the world is at root a matter of entering into worse and grander confusions.

The more you know about the science of the mind, the less appealing our common sense ideas seem. Ideas about belief and desire motivating action just don’t seem to match up in any way with what you see going on. So, at least, says Jose Luis Bermudez in Arguing for Eliminativism (freely available on Academia, but you might need to sign in). Bermudez sympathises with Paul Churchland’s wish to sweep the whole business of common sense psychology away; but he wants to reshape Churchland’s attack, standing down the ‘official’ arguments and bringing forward others taken from within Churchland’s own writing on the subject.

Bermudez sketches the complex landscape with admirable clarity. He notes Boghossian has argued that eliminativism of this kind is incoherent: but Boghossian construed eliminativism as an attack on all forms of content. Bermudez has no desire to be so radical and champions a purely psychological eliminativism.

If something’s wrong with common sense psychology it could either be that what it says is false, or that what it says is not even capable of being judged true or false. In the latter case it could, for example, be that all common sense talk of mental states is nothing more than a complex system of reflexive self-expression like grunts and moans. Bermudez doesn’t think it’s like that: the propositions of common sense psychology are meaningful, they just happen to be erroneous.

It therefore falls to the eliminativist to show what the errors are. Bermudez has a two-horned strategy: first, we can argue that as a matter of fact, we don’t rely on common sense understanding as much as we think. Second, we can look for ways to show that the kind of propositional content implied by common sense views is just incompatible with the mechanisms that actually underlie human action and behaviour as revealed by scientific investigation.

There are, in fact, two different ways of construing common sense psychology. One is that our common sense understanding is itself a kind of theory of mind: this is the ‘theory theory’ line. To disprove this we might try to bring out what the common sense theory is and then attack it. The other way of construing common sense is that we just use our own minds as a model: we put ourselves in the other person’s shoes and imagine how we should think and react. To combat this one we should need a slightly different approach; but it seems Bermudez’s strategy is good either way.

I think the first horn of the attack works better than the second – but not perfectly. Bermudez rightly says it is very generally accepted that to negotiate complex social interactions we need to ascribe beliefs and desires to other people and draw conclusions about their likely behaviour. It ain’t necessarily so. Bermudez quotes the Prisoner’s Dilemma, the much-cited example where we have been arrested: if we betray our partner in crime we’ll get better terms whatever the other one does. We don’t, Bermudez points out, need to have any particular beliefs about what the other person will do: we can work out the strategy from just knowing the circumstances.
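To make the point concrete, here is a minimal sketch of the Prisoner’s Dilemma reasoning: the choice to betray can be computed from the payoff structure alone, with no hypothesis about the partner’s beliefs or desires. (The payoff numbers are the standard illustrative ones, not taken from Bermudez’s paper.)

```python
# Sentences in years (lower is better), keyed by (my_move, partner_move).
# These are the conventional textbook payoffs, used purely for illustration.
SENTENCES = {
    ("defect", "defect"): 5,
    ("defect", "stay_silent"): 0,
    ("stay_silent", "defect"): 10,
    ("stay_silent", "stay_silent"): 1,
}


def dominant_move():
    """Return the move that is at least as good whatever the partner does.

    Note that nothing here models the partner's mental states: the answer
    falls out of the circumstances (the payoff table) alone.
    """
    moves = {mine for mine, _ in SENTENCES}
    for mine in moves:
        if all(
            SENTENCES[(mine, theirs)] <= SENTENCES[(other, theirs)]
            for theirs in moves
            for other in moves
        ):
            return mine
    return None


print(dominant_move())  # "defect" dominates regardless of the partner's choice
```

The sketch just checks each move against every alternative under every possible partner move; that a dominant strategy exists is what lets us dispense with belief-ascription in this case.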

More widely, Bermudez contends, we often don’t really need to know what an individual has in mind. If we know that person is a butcher, or a waiter, then the relevant social interaction can be managed without any hypothesising about beliefs and desires. (In fact we can imagine a robot butcher/waiter who would certainly lack any beliefs or desires but could execute the transactions perfectly well.)

That is fine as far as it goes, but it isn’t that hard to think of examples where the ascription of beliefs seems relevant. In particular, the interpretation of speech, especially the reading of Gricean implicatures, seems to rely on it. Sometimes it also seems that the ascription of emotional states is highly relevant, or hypotheses about what another person knows, something Bermudez doesn’t address.

It’s interesting to reflect on what a contrast this is with Dennett. I think of Dennett and Churchland as loosely allied: both sceptics about qualia, both friendly to materialist, reductive thinking. Yet here Bermudez presents a Churchlandish view which holds that ascriptions of purpose are largely useless in dealing with human interaction, while Dennett’s Intentional Stance of course requires that they are extremely useful.

Bermudez doesn’t think this kind of argument is sufficient, anyway, hence the second horn, in which he tries to sketch a case for saying that common sense and neurons don’t fit well. The real problem here for Bermudez is that we don’t really know how neurons represent things. He makes a case for kinds of representation other than the sort of propositional representation he thinks is required by the standard common sense view (ie, we believe or desire that xxx…). It’s true that a mess of neurons doesn’t look much like a set of well-formed formulae, but to cut to the chase I think Bermudez is pursuing a vain quest. We know that neurons can deal with ascriptions of propositional belief and desire (otherwise how would we even be able to think and talk about them?), so it’s not going to be possible to rule them out neurologically. Bermudez presents some avenues that could be followed, but even he doesn’t seem to think the case can be clinched as matters stand.

I wonder if he needs to? It seems to me that the case for elimination does not rest on proving the common sense concepts false, only on their being redundant. If Bermudez can show that all ascriptions of belief and desire can for practical purposes be cashed out or replaced by cognition about the circumstances and game-theoretic considerations, then simple parsimony will get him the elimination he seeks.

He would still, of course, be left with explaining why the human mind adopts a false theory about itself instead of the true one: but we know some ways of explaining that – for example, ahem, through the Blindness of the Brain (ie that we’re trapped within our limitations and work with the poor but adequate heuristics gifted to us, or perhaps foisted on us, by evolution).