Just deserts

Dan Dennett and Gregg Caruso had a thoughtful debate about free will on Aeon recently. Dennett makes the compatibilist case in admirably pithy style. You need, he says, to distinguish between causality and control. I can control my behaviour even though it is ultimately part of a universal web of causality. My past may in the final sense determine who I am and what I do, but it does not control what I do; for that to be true my past would need things like feedback loops to monitor progress against its previously chosen goals, which is nonsensical. This concept of being in control, or not being in control, is quite sufficient to ground our normal ideas of responsibility for our actions, and freedom in choosing them.

Caruso, who began by saying he thought their views might turn out closer than they seemed, accepts pretty well all of this, agreeing that it is possible to come up with conceptions of responsibility that can be used to underpin talk of free will in acceptable terms. But he doesn’t want to do that; instead he wants to jettison the traditional outlook.

At this point Caruso’s motivation may seem puzzling. Here we have a way of looking at freedom and responsibility which provides a philosophically robust basis for our normal conception of those two moral basics – ideas we could not easily do without in our everyday lives. Now sometimes philosophy may lead us to correct or reject everyday ideas, but typically only when they appear to be without rational justification. Here we seem to have a logically coherent justification for some everyday moral concepts. Isn’t that a case of ‘job done’?

In fact, as he quickly makes clear, Caruso’s objections mainly arise from his views on punishment. He does not believe that compatibilist arguments can underpin ‘basic desert’ in the way that would be needed to justify retributive punishment. Retribution, as a justification for punishment, is essentially backward looking; it says, approximately, that because you did bad things, bad things must happen to you. Caruso completely rejects this outlook, and all justifications that focus on the past (after all, we can’t change the past, so how can it justify corrective action?). If I’ve understood correctly, he favours a radically new regime which would seek to manage future harms from crime in broadly the way we seek to manage the harms that arise from ill-health.

I think we can well understand the distaste for punishments which are really based on anger or revenge, which I suspect lies behind Caruso’s aversion to purely retributive penalties. However, do we need to reject the whole concept of responsibility to escape from retribution? It seems we might manage to construct arguments against retribution on a less radical basis – as indeed, Dennett seeks to do. No doubt it’s right that our justification for punishments should be forward looking in their aims, but that need not exclude the evidence of past behaviour. In fact, I don’t know quite how we should manage if we take no account of the past. I presume that under a purely forward-looking system we assess the future probability of my committing a crime; but if I emerge from the assessment with a clean bill of health, it seems to follow that I can then go and do whatever I like with impunity. As soon as my criminal acts are performed, they fall into the past, and can no longer be taken into account. If people know they will not be punished for past acts, doesn’t the (supposedly forward-looking) deterrent effect evaporate?

That must surely be wrong one way or another, but I don’t really see how a purely future-oriented system can avoid unpalatable features like the imposition of restrictions on people who haven’t actually done anything, or the categorisation of people into supposed groups of high or low risk. When we imagine such systems we imagine them being run justly and humanely by people like ourselves; but alas, people like us are not always and everywhere in charge, and the danger is that we might be handing philosophical credibility to people who would love the chance to manage human beings in the same way as they might manage animals.

Nothing, I’m sure, could be further from Gregg Caruso’s mind; he only wants to purge some of the less rational elements from our system of punishment. I find myself siding pretty much entirely with Dennett, but it’s a stimulating and enlightening dialogue.

A Meeting of Minds

The self is real – it just, like Walt Whitman, contains multitudes. That’s the case made by Serife Tekin in Aeon. She begins by rightly pointing out the current popularity of disbelief in the self. She traces antirealist thinking right back to Hume, who said he was never able to spot his self by introspection; all he ever came up with was a bundle of perceptions. Interestingly she picks out Dennett as a contemporary example of antirealism, but she could readily have pointed to several others who think the self is an illusion or misinterpretation, perhaps stemming from our cognitive limitations, or from the reflexivity that arises when we turn our mind on itself.

Tekin by contrast suggests the self is both real and open to proper scientific investigation. It’s just that it has many forms; it is multitudinous. Borrowing from Neisser, she suggests five main dimensions of the self…

…the ecological self, or the embodied self in the physical world, which perceives and interacts with the physical environment; the interpersonal self, or the self embedded in the social world, which constitutes and is constituted by intersubjective relationships with others; the temporally extended self, or the self in time, which is grounded in memories of the past and anticipation of the future; the private self which is exposed to experiences available only to the first person and not to others; and finally the conceptual self, which (accurately or falsely) represents the self to the self by drawing on the properties or characteristics of not only the person but also the social and cultural context to which she belongs.

I don’t think these five types are meant to exhaust the variety of the self, which actually comes in a huge range of shifting shapes. Nor are we meant to think that there is no basic unity; the five work together to provide an overall coherence of agency, though not without retaining some inner tensions and contradictions (there is nothing too strange psychologically in the idea that we may entertain contradictory thoughts and feelings in certain contexts).

The fivefold structure pays off because Tekin can give a separate account of how each can be addressed scientifically. The ecological self is easily observable, for example; for the interpersonal self we need to pay attention to social aspects, but there is no great problem there. The most difficult seems likely to be the private self; Tekin seems to think we can get at it simply by interviewing people about ‘what it is like’, which perhaps underrates the problems.

Overall, it’s a sensible and appealing position. The curious thing is how close it seems to the kind of position taken by Dennett, here quoted as an example of antirealism. In fact, Dennett’s ideas are more nuanced than some. He doesn’t believe in a continuous, coherent self like a soul, but he is content to liken the self to a centre of gravity: not a real physical entity as such, but a useful and harmless construction. As the author of the ‘multiple drafts’ theory of consciousness, he might, I think, rather like Tekin’s multitudinousness; and her approach to the private self looks quite like his ‘heterophenomenology’, in which we give up trying to study ineffable inner experience, but happily give consideration to what people tell us about ineffable inner experience.

This raises the attractive possibility that sceptics and believers might end up constructing effectively identical models of the self, the only difference being that one side regards the model as an eliminative reduction while the other sees it as simply analysis. I find that a strangely cheering prospect.


Disastrous Consciousness

Hugh Howey gives a bold, amusing, and hopelessly optimistic account of how to construct consciousness in Wired. He thinks it wouldn’t be particularly difficult. Now you might think that a man who knows how to create artificial consciousness shouldn’t be writing articles; he should be building the robot mind. Surely that would make his case more powerfully than any amount of prose? But Howey thinks an artificial consciousness would be disastrous. He thinks even the natural kind is an unfortunate burden, something we have to put up with because evolution has yet to find a way of getting the benefits of certain strategies without the downsides. But he doesn’t believe that conscious AI would take over the world, or threaten human survival, so I would still have thought one demonstration piece was worth the effort? Consciousness sucks, but here’s an example just to prove the point?

What is the theory underlying Howey’s confidence? He rests his ideas on Theory of Mind (which he thinks is little discussed): the ability to infer the thoughts and intentions of others. In essence, he thinks that was a really useful capacity for us to acquire, helping us compete in the cut-throat world of human society; but when we turn it on ourselves it disastrously generates wrong results, in particular about our own possession of conscious states.

It remains a bit mysterious to me why he thinks a capacity that is so useful applied to others should be so disastrously and comprehensively wrong when applied to ourselves. He mentions priming studies, where our behaviour is actually determined by factors we’re unaware of; priming’s reputation has suffered rather badly recently in the crisis of non-reproducibility, but I wouldn’t have thought even ardent fans of priming would claim our mental content is entirely dictated by priming effects.

Although Dennett doesn’t get a mention, Howey’s ideas seem very Dennettian, and I think they suffer from similar difficulties. So our Theory of Mind leads us to attribute conscious thoughts and intentions to others; but what are we attributing to them? The theory tells us that neither we nor they actually have these conscious contents; all any of us has is self-attributions of conscious contents. So what, we’re attributing to them some self-attributions of self-attributions of… The theory covertly assumes we already have and understand the very conscious states it is meant to analyse away. Dennett, of course, has some further things to say about this, and he’s not as negative about self-attributions as Howey.

But you know, it’s all pretty implausible intuitively. Suppose I take a mouthful of soft-boiled egg which tastes bad, and I spit it out. According to Howey what went on there is that I noticed myself spitting out the egg and thought to myself: hm, I infer from this behaviour that it’s very probable I just experienced a bad taste, or maybe the egg was too hot, can’t quite tell for sure.

The thing is, there are real conscious states irrespective of my own beliefs about them (which indeed may be plagued by error). They are characterised by having content and intentionality, but these are things Howey does not believe in, or rather, it seems, has never thought of; his view that a big bank of indicator lights shows a language capacity suggests he hasn’t gone into this business of meaning and language quite deeply enough.

If he had to build an artificial consciousness, he might set up a community of self-driving cars, let one make inferences about the motives of the others, and then apply that capacity to itself. But it would be a stupid thing to do because it would get it wrong all the time; in fact at this point Howey seems to be tending towards a view that all Theory of Mind is fatally error-prone. It would be better, he reckons, if all the cars could have access to all of each other’s internal data, just as universal telepathy would be better for us (though in the human case it would be undermined by mind-masking freeloaders).

Would it, though? If the cars really had intentions, their future behaviour would not be readily deducible simply from reading off all the measurements. You really do have to construct some kind of intentional extrapolation, which is what the Dennettian intentional stance is supposed to do.
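To make the contrast concrete, here is a minimal sketch (my own illustration, with made-up numbers; nothing from Howey’s article or Dennett’s books): the same snapshot of a car’s state supports quite different predictions depending on whether we merely extrapolate the raw measurements or attribute a goal and assume the car will act to achieve it.

```python
# A toy illustration (all names and numbers hypothetical): predicting a
# car's next position from raw state alone versus from an attributed goal.

def physical_extrapolation(pos: float, vel: float) -> float:
    # 'Reading off the measurements': assume current motion just continues.
    return pos + vel

def intentional_prediction(pos: float, vel: float, goal: float) -> float:
    # The intentional stance: assume the car wants to reach its goal and
    # will brake rather than overshoot it.
    step = goal - pos
    step = max(-vel, min(vel, step))  # bounded by current speed
    return pos + step

pos, vel, goal = 100.0, 10.0, 103.0            # car 3m short of a stop line
print(physical_extrapolation(pos, vel))        # 110.0: sails past the line
print(intentional_prediction(pos, vel, goal))  # 103.0: stops at the line
```

The point, such as it is, is that the second prediction only becomes available once a goal has been attributed; no amount of extra telemetry substitutes for that attribution.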

I worry just slightly that some of the things Howey says seem to veer close to saying, hey a lot of these systems are sort of aware already; which seems unhelpful. Generally, it’s a vigorous and entertaining exposition, even if, in my view, on the wrong track.

Greatest Hits

Give up on real comprehension, says Daniel Dennett in From Bacteria to Bach and Back: commendably honest, but a little discouraging to the reader? I imagine it set out like the warning above the gates of Hell: ‘Give up on comprehension, you who turn these pages’. You might have to settle for acquiring some competences.

What have we got here? In this book, Dennett is revisiting themes he developed earlier in his career, retelling the grand story of the evolution of minds. We should not expect big new ideas or major changes of heart (but see last week’s post for one important one). It would have been good at this stage to have a distillation: a perfect, slim little volume presenting a final crystalline formulation of what Dennett is all about. This isn’t that. It’s more like a sprawling Greatest Hits album. In there somewhere are the old favourites that will always have the fans stomping and shouting (there’s some huffing and puffing from Dennett about how we should watch out because he’s coming for our deepest intuitions with scary tools that may make us flinch, but honestly by now this stuff is about as shocking and countercultural as your dad’s Heavy Metal collection); but we’ve also got unnecessary cover versions of ideas by other people, some stuff that was never really a hit in the first place, and unfortunately one or two bum notes here and there.

And, oh dear, another attempt to smear Descartes by association. First Dennett energetically promoted the phrase “Cartesian theatre” – so hard that some people suppose it actually comes from Descartes; now we have ‘Cartesian gravity’, more or less a boo-word for any vaguely dualistic tendency Dennett doesn’t like. This is surely not good intellectual manners; and it wouldn’t be quite so bad if it weren’t for the fact that Descartes actually had a theory of gravity, so that the phrase already has a meaning. Should a responsible professor be spreading new-minted misapprehensions like this? Any meme will do?

There’s a lot about evolution here that rather left me cold (but then I really, really don’t need it explained again, thanks); I don’t think Dennett’s particular gift is for popularising other people’s ideas, and his take seems a bit dated. I suspect that most intelligent readers of the book will already know most of this stuff and maybe more, since they will probably have kept up with epigenetics and the various proposals for extension of the modern synthesis that have emerged in the current century (and the fascinating story of viral intervention in human DNA, surely a natural for anyone who likes the analogy of the selfish gene?), none of which gets any recognition here (in fairness, I suppose, this is not intended to be full of new stuff). Instead we hear again the tired and in my opinion profoundly unconvincing story about how leaping (‘stotting’) gazelles are employing a convoluted strategy of wasting valuable energy as a lion-directed display of fitness. It’s just an evasive manoeuvre; get over it.

For me it’s the most Dennettian bits of the book that are the best, unsurprisingly. The central theme that competence precedes, and may replace, comprehension is actually well developed. Dennett claims that evolution and computation both provide ‘inversions’ in which intentionless performance can give the appearance of intentional behaviour. He has been accused of equivocating over the reality of intentionality, consciousness and other concepts, but I like his attitude over this and his defence of the reality of ‘free-floating rationales’ seems good to me. It gives us permission to discuss the ‘purposes’ of things without presupposing an intelligent designer whose purposes they are, and I’m completely with Dennett when he argues that this is both necessary and acceptable. I’ve suggested elsewhere that talking about ‘the point’ of things, and in a related sense, what they point to, is a handy way of doing this. The problem for Dennett, if there is one, is that it’s not enough for competence to replace comprehension often; he needs it to happen every time by some means.

Dennett sets out a theoretical space with ‘bottom-up vs top-down’, ‘random vs directed search’, and ‘comprehension’ as its axes; at one corner of the resulting cube we have intentionless structures like a termite colony; at the other we have fully intentional design like Gaudi’s church of the Sagrada Familia, which to Dennett’s eye resembles a termite colony. Gaudi’s perhaps not the ideal choice here, given his enthusiasm for natural forms; it makes Dennett seem curiously impressed by the underwhelming fact that buildings by an architect who borrowed forms from the natural world turn out to have forms resembling those found in nature.

Still, the space suggests a real contrast between the mindless processes of evolution and deliberate design, which at first sight looks refreshingly different and unDennettian. It’s not, of course; Dennett is happy to embrace that difference so long as we recognise that the ‘deliberate design’ is simply a separate evolutionary process powered by memes rather than genes.

I’ve never thought that memes, Richard Dawkins’s proposed cultural analogue of genes, were a particularly helpful addition to Dennett’s theoretical framework, but here he mounts an extended defence of them. One of the worst flaws in the theory as it stands – and there are several – is its confused ontology. What are memes – physical items of culture or abstract ideas? Dennett, as a professional philosopher, seems more sensitive to this problem than some of the more metaphysically naive proponents of the meme. He provides a relatively coherent vision by invoking the idea that memes are ‘tokens’; they may take all sorts of physical forms – written words, pictures, patterns of neuronal firing – but each form is a token of a particular way of behaving. The problem here is that anything at all can serve as a token of any meme; we only know that a given noise or symbol tokens a specific meme because of its meaning. There may be – there certainly are – some selective effects that bite on the actual form of particular tokens. A word that is long or difficult to pronounce is more likely to be forgotten. But the really interesting selections take place at the level of meanings; that requires a much more complex level of explanation. There may still be mechanisms involved that are broadly selective if not exactly Darwinian – I think there are – but I believe any move up to this proper level of complexity inevitably edges the simplistic concept of the meme out of play.

The original Dawkinsian observation that the development of cultural items sometimes resembles evolution was sound, but it implicitly called for the development of a general theory which, in spite of some respectable attempts, has simply failed to appear. Instead, the supporters of memetics have tended simply to insist, perhaps trapped by the insistent drumbeat of the Dawkinsian circus, that it’s all Darwinian natural selection. How a genetic theory can be Darwinian when Darwin never heard of genes is just one of the lesser mysteries here (should we call it ‘Mendelian’ instead? But Darwin’s name is the hooray word here just as Descartes’ is the cue for boos). Among the many ways in which cultural selection does not resemble biological evolution, Dennett notes the cogent objection that there is nothing that corresponds to DNA: no general encoding of culture on which selection can operate. One of the worst “bum notes” in the book is Dennett’s strange suggestion that HTML might come to be our cultural DNA. This is, shall we say, an egregious misconception of the scope of a text mark-up language.

Anyway, it’s consciousness we’re interested in (check out Tom Clark’s thoughtful take here), and the intentional stance is the number the fans have been waiting for, cunningly kept till last by Dennett. When we get there, though, we get a remix instead of the classic track. Here he has a new metaphor, calculated to appeal to the youth of today: it’s all about apps. Our impression of consciousness is a user illusion created by our gift for language; it’s like the icons that activate the stuff on your phone. You may object that a user illusion already requires a user, but hang on. Your ability to talk about yourself is initially useful for other people, telling them useful stuff about your internal states and competences; but once the system is operating, you can read it too. It seems plausible to me that something like that is indeed an important part of the process of consciousness, though in this version I felt I had rather lost track of what was illusory about it.

Dennett moves on to a new attack on qualia. This time he offers an explanation of why people think they occur – it’s because of the way we project our impressions back out into the world, where they may seem unaccountable. He demonstrates the redundancy of the idea by helpfully sketching out how we could run up a theory of qualia and noting how pointless they are. I was nodding along with this. He suggests that qualia and our own sense of being the intelligent designers in our own heads are the same kind of delusion, simply applied externally or internally. I suppose that’s where the illusion is.

He goes on to defend a sort of compatibilist view of free will and responsibility; another example of what Descartes might be tempted to label Dennettian Equivocation, but as before, I like that posture and I’m with him all the way. He continues with a dismissal of mysterianism, leaning rather more than I think is necessary on the interesting concept of joint understanding, where no one person gets it all perfectly, but nothing remains to be explained; and he takes a relatively sceptical view of the practical prospects for artificial general intelligence, even given recent advances in machine learning. Does Google Translate display understanding (in some appropriate sense)? No, or rather, not yet. This is not Dennett as we remember him; he speaks disparagingly of the cheerleaders for AI and says that “some of us” always discounted the hype. Hmm. Daniel Dennett, long-time AI sceptic?

What’s the verdict then? There is some good stuff in here, but as always the true fan will favour the classic album; if you want Dennett at his best, the aficionado will still tell you to buy Consciousness Explained.


Dennett recants

Yes, Dennett has recanted. Alright, he hasn’t finally acknowledged that Jesus is his Lord and Saviour. He hasn’t declared that qualia are the real essence of consciousness after all. But his new book From Bacteria to Bach and Back does include a surprising change of heart.

The book is big and complex: to be honest it’s a bit of a ragbag (and a bit of a curate’s egg). I’ll give it the fuller review it deserves another time, but it seems worth addressing this interesting point separately. The recantation arises from a point on which Dennett has changed his mind once before. This is the question of homunculi. Homunculi are ‘little people’, and the term is traditionally used to criticise certain kinds of explanation, the kind that assume some module in the brain is just able to do everything a whole person could do. Those modules are effectively ‘little people in your head’, and they require just as much explanation as your brain did in the first place. At some stage many years ago, Dennett decided that homunculi were alright after all, on certain conditions. The way he thought it could work was a hierarchy of ever stupider homunculi. Your eyes deliver a picture to the visual homunculus, who sees it for you; but we don’t stop there; he delivers it to a whole group of further colleagues: line-recognising homunculi, colour-recognising homunculi, and so on. Somewhere down the line we get to an homunculus whose only job is to say whether a spot is white or not-white. At that point the function is fully computable and our explanation can be cashed out in entirely non-personal, non-mysterious, mechanical terms. So far so good, though we might argue that Dennett’s ever stupider routines are not actually homunculi in the correct sense of being complete people; they’re more like ‘black boxes’, perhaps: a stage of a process you can’t explain yet, but plan to analyse further.
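Purely as an illustration of how that bottoming-out might look (a toy example of my own, not Dennett’s), here is such a ‘hierarchy’ written as functions: the apparently clever top level, a line-spotter, merely delegates to ever stupider colleagues, ending with one whose only job is the white/not-white test.

```python
# A toy 'hierarchy of ever stupider homunculi' (illustrative only).
# The bottom level is trivially computable; nothing person-like remains.

WHITE_THRESHOLD = 200  # hypothetical brightness cut-off (0-255 greyscale)

def is_white(pixel: int) -> bool:
    # The stupidest homunculus: a mechanical white/not-white test.
    return pixel >= WHITE_THRESHOLD

def row_is_bright(row: list[int]) -> bool:
    # A slightly smarter colleague: are most pixels in this row white?
    return sum(is_white(p) for p in row) > len(row) // 2

def sees_horizontal_line(image: list[list[int]]) -> bool:
    # The 'line-recognising homunculus': merely delegates downwards.
    return any(row_is_bright(row) for row in image)

image = [
    [12, 30, 25, 18],
    [230, 240, 225, 250],   # a bright stripe
    [20, 15, 22, 19],
]
print(sees_horizontal_line(image))  # True
```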

Be that as it may, he now regrets taking that line. The reason is that he no longer believes that neurons work like computers! This means that even at the bottom level the reduction to pure computation doesn’t quite work. The reason for this remarkable change of heart is that Terrence Deacon and others have convinced Dennett that the nature of neurons as entities with metabolism and a lifecycle is actually relevant to the way they work. The fact that neurons, at some level, have needs and aims of their own may ground a kind of ‘nano-intentionality’ that provides a basis for human cognition.

The implications are large; if this is right then surely, computation alone cannot give rise to consciousness! You need metabolism and perhaps other stuff. That Dennett should be signing up to this is remarkable, and of course he has a get-out. This is that we could still get computer consciousness by simulating an entire brain and reproducing every quirk of every neuron. For now that is well beyond our reach – and it may always be, though Dennett speaks with misplaced optimism about Blue Brain and other projects. In fact I don’t think the get-out works even on a theoretical level; simulations always leave out some aspect of the thing simulated, and if this biological view is sound, we can never be sure that we haven’t left out something important.

But even if we allow the get-out to stand this is a startling change, and I’ve been surprised to see that no review of the book I’ve seen even acknowledges it. Does Dennett himself even appreciate quite how large the implications are? It doesn’t really look as if he does. I would guess he thinks of the change as merely taking him a bit closer to, say, the evolution-based perspective of Ruth Millikan, not at all an uncongenial direction for him. I think, however, that he’s got more work to do. He says:

The brain is certainly not a digital computer running binary code, but it is still a kind of computer…

Later on, however, he rehashes the absurd (and surely digitally computational) view he put forward in Consciousness Explained:

You can simulate a virtual serial machine on a parallel architecture, and that’s what the brain does… and virtual parallel machines can be implemented on serial machines…

This looks pretty hopeless in itself, by the way. You can do those things if you don’t mind doing something really egregiously futile. You want to ‘simulate’ a serial machine on a parallel architecture? Just don’t use more than one of its processors. The fact is, parallel and serial computing do exactly the same job, run the same algorithms, and deliver the same results. Parallel processing by computers is just a practical engineering tactic, of no philosophical interest whatever. When people talk about the brain doing parallel processing they are talking about a completely different and much vaguer idea, and often confusing themselves in the process. Why on earth does Dennett think the brain is simulating serial processing on a parallel architecture, a textbook example of pointlessness?
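For what it’s worth, the point is easy to demonstrate. In this minimal sketch (mine, not Dennett’s), the same function is run serially and then across a pool of worker processes; the results are identical, and only the elapsed time can differ.

```python
# Serial and parallel runs of the same algorithm give the same answers;
# parallelism here is purely an engineering tactic. (Illustrative sketch.)
from concurrent.futures import ProcessPoolExecutor

def collatz_steps(n: int) -> int:
    # Number of Collatz steps needed to reach 1 from n.
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

if __name__ == "__main__":
    numbers = range(1, 5000)
    serial = [collatz_steps(n) for n in numbers]
    with ProcessPoolExecutor() as pool:  # same algorithm, many processors
        parallel = list(pool.map(collatz_steps, numbers))
    print(serial == parallel)  # True: nothing philosophical has changed
```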

It is true that the brain’s architecture is massively parallel… but many of the brain’s most spectacular activities are (roughly) serial, in the so-called stream of consciousness, in which ideas, or concepts or thoughts float by not quite in single file, but through a Von Neumann bottleneck of sorts…

It seems that Dennett supposes that only serial processing can deliver a serially coherent stream of consciousness, but that is just untrue. On display here too is Dennett’s bad habit of using ‘Von Neumann’ as a synonym for ‘serial’. As I understand it, the term ‘Von Neumann architecture’ actually relates to a long-gone rivalry between very early computer designs. Historically the Von Neumann design used the same storage for programs and data, while the more tidy-minded Harvard architecture provided separate storage. The competition was resolved in Von Neumann’s favour long ago and is as dead as a doornail. It simply has no relevance to the human brain: does the brain have a Von Neumann or a Harvard architecture? The only tenable answer is ‘no’.
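For the curious, the design difference is easy to show in miniature. This toy sketch (purely illustrative, not a claim about any real machine) runs a three-instruction program out of a single memory that also holds its data, which is all a Von Neumann design really amounts to; a Harvard machine would keep the program in a separate store.

```python
# A toy von Neumann machine: instructions and data share one memory.
memory = [("LOAD", 3), ("ADD", 1), ("HALT", None), 42]  # cell 3 is data

pc, acc = 0, 0                     # program counter, accumulator
while True:
    op, arg = memory[pc]           # instructions fetched from the same
    if op == "LOAD":               # store that holds the data
        acc = memory[arg]
    elif op == "ADD":
        acc += arg
    elif op == "HALT":
        break
    pc += 1
print(acc)  # 43

# A Harvard design would use two stores instead:
#   program = [("LOAD", 0), ("ADD", 1), ("HALT", None)]
#   data    = [42]
# so code and data could never be confused with one another.
```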

Anyway, whatever you may think of that, if Dennett now says the brain is not a digital computer, he just cannot go on saying it has a Von Neumann architecture or simulates a serial processor. Simple consistency requires him to drop all that now – and a good thing too. Dennett has to find a way of explaining the stream of consciousness that doesn’t rely on concepts from digital computing. If he’s up for it, we might get something really interesting – but retreat to the comfort zone must look awfully appealing at this stage. There is, of course, nothing shameful in changing your mind; if only he can work through the implications a bit more thoroughly, Dennett will deserve a lot of credit for doing so.

More another time.

Illusionism

Consciousness – it’s all been a terrible mistake. In a really cracking issue of the JCS (possibly the best I’ve read) Keith Frankish sets out and defends the thesis of illusionism, with a splendid array of responses from supporters and others.

How can consciousness be an illusion? Surely an illusion is itself a conscious state – a deceptive one – so that the reality of consciousness is a precondition of anything being an illusion? Illusionism, of course, is not talking about the practical, content-bearing kind of consciousness, but about phenomenal consciousness, qualia, the subjective side, what it is like to see something. Illusionism denies that our experiences have the phenomenal aspect they seem to have; it is in essence a sceptical case about phenomenal experience. It aims to replace the question of what phenomenal experience is, with the question of why people have the illusion of phenomenal experience.

In one way I wonder whether it isn’t better to stick with raw scepticism than to frame the whole thing in terms of an illusion. There is a danger that the illusion itself becomes a new topic and inadvertently builds the confusion further. One reason the whole issue is so difficult is that it’s hard to see one’s way through the dense thicket of clarifications thrown up by philosophers, all demanding to be addressed and straightened out. There’s something to be said for the bracing elegance of the two-word formulation of scepticism offered by Dennett (who provides a robustly supportive response here, treating illusionism as the default case): ‘What qualia?’ Perhaps we should just listen to the ‘tales of the qualophiles’ – there is something it is like, Mary knows something new, I could have a zombie twin – and just say a plain ‘no’ to all of them. If we do that, the champions of phenomenal experience have nothing to offer; all they can do is, as Pete Mandik puts it here, gesture towards phenomenal properties. (My imagination whimpers in fear at being asked to construe the space in which one might gesture towards phenomenal qualities, let alone the ineffable limb with which the movement might be performed; it insists that we fall back on Mandik’s other description: that phenomenalists can only invite an act of inner ostension.)

Eric Schwitzgebel relies on something like this gesturing in his espousal of definition by example as a means of getting the innocent conception of phenomenal experience he wants without embracing the dubious aspects. Mandik amusingly and cogently assails the scepticism of the illusionist case from an even more radical scepticism – meta-illusionism. Sceptics argue that the phenomenal can’t be specified meaningfully (we just circle around a small group of phrases and words that provide a set of synonyms with no definition outside the loop), but if that’s true, how do we even start talking about it? Whereof we cannot speak…

Introspection is certainly the name of the game, and Susan Blackmore has a nifty argument here; perhaps it’s the very act of introspecting that creates the phenomenal qualities? Her delusionism tells us we are wrong to think that there is a continuous stream of conscious experience going on in the absence of introspection, but stops short of outright scepticism about the phenomenal. I’m not sure. William James told us that introspection must be retrospection – we can only mentally examine the thought we just had, not the one we are having now – and it seems odd to me to think that a remembered state could be given a phenomenal aspect after the fact. Easier, surely, to consider that the whole business is consistently illusory?

Philip Goff is perhaps the toughest critic of illusionism; if we weren’t in the grip of scientism, he says, we should have no difficulty in seeing that the causal role of brain activity also has a categorical nature which is the inward, phenomenal aspect. If this view is incoherent or untenable in any way, we’re owed a decent argument as to why.

Myself, I think Frankish is broadly on the right track. He sets out three ways we might approach phenomenal experience. The first is to accept its reality and look for an explanation that significantly modifies our understanding of the world. The second is to look for an explanation that reconciles it with our current understanding, finding explanations within the world of physics of which we already have a general understanding. The third is to dismiss it as an illusion. I think we could add ‘approach zero’: we accept the reality of phenomenal experience and just regard it as inexplicable. This sounds like mysterianism – but mysterians think the world itself makes sense; we just don’t have the brains to see it. Option zero says there is actual irreducible mystery in the real world. This conclusion is surely thoroughly repugnant to most philosophers, who aspire to clear answers even if they don’t achieve them; but I think it is hard to avoid unless we take the sceptical route. Phenomenal experience is, on most mainstream accounts, something over and above the physical account just by definition. A physical explanation is automatically ruled out; even if good candidates are put forward, we can always retreat and say that they explain some aspects of experience, but not the ineffable one we are after. I submit that this same strategy of retreat means that there cannot be any satisfactory rational account of phenomenal experience, because it can always be asserted that something ineffable is missing.

I say philosophers will find this repugnant, but I can sense some amiable theologians sidling up to me. Those light-weight would-be scientists can’t deal with mystery and the ineffable, they say, but hey, come with us for a bit…

Regular readers may possibly remember that I think the phenomenal aspect of experience is actually just its reality; that the particularity or haecceity of real experience is puzzling to those who think that theory must accommodate everything. That reality is itself mysterious in some sense, though: not easily accounted for, and not susceptible to satisfactory explanation either by induction or deduction. It may be that to understand it in full we have to give up on these more advanced mental tools and fall back on the basic faculty of recognition: in my view the basis of all our thinking, and the capacity of which both deduction and induction are specialised forms. That implies that we might have to stop applying logic and science and just contemplate reality; I suppose that might mean in turn that meditation and the mystic tradition of some religions are not exactly a rejection of philosophy as understood in the West, but a legitimate extension of the same enquiry.

Yeah, but no; I may be irredeemably Western and wedded to scientism, but rightly or wrongly, meditation doesn’t scratch my epistemic itch. Illusionism may not offer quite the right answer, but for me it is definitely asking the right questions.

Inside Out

The homunculus returns? I finally saw Inside Out (possible spoilers – I seem to be talking about films a lot recently). Interestingly, it foregrounds a couple of problematic ways of thinking about the mind.

One, obviously, is the notorious homuncular fallacy. This is the tendency to explain mental faculties, say consciousness, by attributing them to a small entity within the mind – a “little man” that just has all the capacities of the whole human being. It’s almost always condemned because it appears to do no more than defer the real explanation. If it’s really a little man in your head that does consciousness, where does his consciousness come from? An even smaller man, in his head?

Inside Out of course does the homuncular thing very explicitly. The mind of the young girl Riley, the main character, where most of the action is set, is controlled by five primal emotions who are all fully featured cartoon people – Joy, Sadness, Anger, Fear, and Disgust – little people who walk around inside Riley’s head doing the kind of thing people do. (Is it actually inside her head? In the Beano’s Numskulls cartoon, touted as a forerunner of Inside Out, much of the humour came from the definite physicality of the way they worked; here the five emotions view the world through a screen rather than eyeholes and use a console rather than levers. They could in fact be anywhere, or in some undefined conceptual space.) It’s an odd set (aren’t Joy and Sadness the extremes of a single spectrum?). Unexpectedly negative, too: this is technically a Disney film, and it rates anger, fear, and disgust as more important and powerful than love? If it were full-on Disney the leading emotions would surely be Happy-go-lucky Feelin’s and Wishing on a Star.

There are some things to be said in favour of homunculi. Most people would agree that we contain a smaller entity that does all the thinking: the brain, or maybe something even narrower than that (proponents of the Extended Mind would very much not agree, of course). Daniel Dennett has also spoken out for homunculi, suggesting that they’re fine so long as the homunculi in each layer get simpler; in the end we get to ones that need no explanation. That’s alright, except that I don’t think the beings in this Dennettian analysis are really homunculi – they’re more like black boxes. The true homunculus has all the capacities of a full human being rather than a simpler subset.

We see the problem that arises from that in Inside Out. The emotions are too rounded; they all seem to have a full set of feelings themselves: they all show fear, and Joy gets sad. How can that work?

The other thing that seems not quite right to me is unfortunately the climactic revelation that Sadness has a legitimate role. It is, apparently, to signal for help. In my view that can’t really be the whole answer, and the film unintentionally shows us the absurdity of the idea; it asks us to believe that being joyless, angry and withdrawn, behaving badly and running away are not enough to evoke concern and sympathetic attention from parents; you don’t get their attention, and your hug, till they see the tears.

No doubt sadness does often evoke support, but I can’t think that’s its main function. Funnily enough, Sadness herself briefly articulates a somewhat better idea early in the film. It’s muttered so quickly I didn’t quite get it, but it was something about providing an interval for adjustment and emotional recalibration. That sounds a bit more promising; I suspect it was what a real psychologist told Pixar at some stage; something they felt they should mention for completeness but that didn’t help the story.

Films and TV do shape our mental models; The Matrix laid down tramlines for many metaphysical discussions and Star Trek’s transporters are often invoked in serious discussions of personal identity. Worse, fears about AI have surely been helped along by Hollywood’s relentless and unimaginative use of the treacherous robot that turns on its creators. I hope Inside Out is not going to reintroduce homunculi to general thinking about the mind.

Are robots people or people robots?

I must admit I generally think of the argument over human-style artificial intelligence as a two-sided fight. There are those who think it’s possible, and those who think it isn’t. But a chat I had recently made it clear that there are really more differences than that, in particular among those who believe we shall one day have robot chums.

The key difference I have in mind is over whether there really is consciousness at all, or at least whether there’s anything special about it.

One school of thought says that there is indeed a special faculty of consciousness, but that eventually machines of sufficient complexity will have it too. We may not yet have all the details of how this thing works; maybe we even need some special new secret. But one thing is perfectly clear: there’s no magic involved, nothing outside the normal physical account, and in fact nothing that isn’t ultimately computable. One day we will be able to build into a machine all the relevant qualities of a human mind. Perhaps we’ll do it by producing an actual direct simulation of a human brain, perhaps not; the point is, when we switch on that ultimate robot, it will have feelings and qualia, it will have moral rights and duties, and it will perceive itself as a real existing personality, just as we do.

The second school of thought agrees that we shall be able to produce a robot that looks and behaves exactly like a human being. But that robot will not have qualia or feelings or free will or any of the rest of it, because in reality human beings don’t have them either! That’s one of the truths about ourselves that has been helpfully revealed by the progress of AI: all those things are delusions and always have been. Our feelings that we have a real self, that there is phenomenal experience, and that somehow we have a special kind of agency, those things are just complicated by-products of the way we’re organised.

Of course we could split the sceptics too, between those who think that consciousness requires a special spiritual explanation, or is inexplicable altogether, and those who think it is a natural feature of the world, just not computational or not explained by any properties of the physical world known so far. There is clearly some scope for discussion between the former kind of believer and the latter kind of sceptic because they both think that consciousness is a real and interesting feature of the world that needs more explanation, though they differ in their assumptions about how that will turn out. Although there’s less scope for discussion, there’s also some common ground between the two other groups because both basically believe that the only kind of discussion worth having about consciousness is one that clarifies the reasons it should be taken off the table (whether because it’s too much for the human mind or because it isn’t worthy of intelligent consideration).

Clearly it’s possible to take different views on particular issues. Dennett, for example, thinks qualia are just nonsense, and that the best possible thing would be to stop even talking about them; yet he regards the ability of human beings to deal with the Frame Problem as real and interesting, something robots don’t have but could and will have once it’s clarified sufficiently.

I find it interesting to speculate about which camp Alan Turing would have joined; did he think that humans had a special capacity which computers could one day share, or did he think that the vaunted consciousness of humans turned out to be nothing more than the mechanical computational abilities of his machines? It’s not altogether clear, but I suspect he was of the latter school of thought. He notes that the specialness of human beings has never really been proved, and a disbelief in the specialness of consciousness might help explain his caginess about answering the question “can machines think?”. He preferred to put the question aside: perhaps that was because the answer he would have preferred to give was: yes, machines can think, but only so long as you realise that ‘thinking’ is not the magic nonsense you take it to be…

Now That’s What I Call Dennett

Professors are too polite. So Daniel Dennett reckons. When leading philosophers or other academics meet, they feel it would be rude to explain their theories thoroughly to each other, from the basics up. That would look as if you thought your eminent colleague hadn’t grasped some of the elementary points. So instead they leap in and argue on the basis of an assumed shared understanding that isn’t necessarily there. The result is that they talk past each other and spend time on profitless misunderstandings.

Dennett has a cunning trick to sort this out. He invites the professors to explain their ideas to a selected group of favoured undergraduates (‘Ew; he sounds like Horace Slughorn’ said my daughter); talking to undergraduates they are careful to keep it clear and simple and include an exposition of any basic concepts they use. Listening in, the other professors understand what their colleagues really mean, perhaps for the first time, and light dawns at last.

It seems a good trick to me (and for the undergraduates, yes, by ‘good’ I mean both clever and beneficial); in his new book Intuition Pumps and Other Tools for Thinking Dennett seems covertly to be playing another. The book presents itself as a manual or mental tool-kit, offering tricks and techniques for thinking about problems and giving examples of how to use them. In the examples, Dennett runs through a wide selection of his own ideas, and the cunning old fox clearly hopes that in buying his tools, the reader will also take up his theories. (Perhaps this accessible popular presentation will even work for some of those recalcitrant profs, with whom Dennett has evidently grown rather tired of arguing… heh, heh!)

So there’s a hidden agenda, but in addition the ‘intuition pumps’ are not always as advertised. Many of them actually deserve a more flattering description, because they address the reason, not the intuition. Dennett is clear enough that some of the techniques he presents are rather more than persuasive rhetoric, but at least one reviewer was confused enough to think that Reductio ad Absurdum was being presented as an intuition pump – which is rather a slight on a rigorous logical argument: a bit like saying Genghis Khan was among the more influential figures in Mongol society.

It seems to me, moreover, that most of the tricks on offer are not really techniques for thinking, but methods of presentation or argumentation. I find it hard to imagine someone trying to solve a problem by diligently devising thought-experiments and working through the permutations; that’s a method you use when you think you know the answer and want to find ways to convince others.

What we get in practice is a pretty comprehensive collection of snippets; a sort of Dennettian Greatest Hits. Some of the big arguments in philosophy of mind are dropped as being too convoluted and fruitless to waste more time on, but we get the memorable bits of many of Dennett’s best thought-experiments and rebuttals. Not all of these arguments benefit from being taken out of the context of a more systematic case, and here and there – it’s inevitable, I suppose – we find the remix or late cover version is less successful than the original. I thought this was especially so in the case of the Giant Robot: to preserve yourself in a future emergency, you build a wandering robot to carry you around in suspended animation for a few centuries. The robot needs to survive in an unpredictable world, so you end up having to endow it with all the characteristics of a successful animal; and you are in a sense playing the part of the Selfish Gene. Such a machine would be able to deal with meanings and intentionality just the way you do, wouldn’t it? Well, in this brief version I don’t really see why or, perhaps more important, how.

Dennett does a bit better with arguments against intrinsic intentionality, though I don’t think his arguments succeed in establishing that there is no difference between original and derived intentionality. If Dennett is right, meaning would be built up in our brains through the interaction of gradually more meaningful layers of homunculi; OK (maybe), but that’s still quite different to what happens with derived intentionality, where things get to mean something because of an agreed convention or an existing full-fledged intention.

Dennett, as he acknowledges, is not always good at following the maxims he sets out. An early chapter is given over to the rules set out by Anatol Rapoport, most notably:

You should attempt to re-express your target’s position so clearly, vividly and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”

As someone on Metafilter said, when Dan Dennett does that for Christianity, I’ll enjoy reading it; but there was one place in the current book where I thought Dennett fell short on understanding the opposition. He suggests that Kasparov’s way of thinking about chess is probably the same as Deep Blue’s in the end. What on earth could provoke one to say that they were obviously different, he protests. Wishful thinking? Fear? Well, no need to suppose so: we know that the hardware (brain versus computer) is completely different and runs a different kind of process; we know the capacities of computer and brain are different and, in spite of an argument from Dennett to the contrary, we know the heuristics are significantly different. We know that decisions in Kasparov’s case involve consciousness, while Deep Blue lacks it entirely. So, maybe the processes are the same in the end, but there are some pretty good prima facie reasons to say they look very different.

One section of the book naturally talks about evolution, and there’s good stuff, but it’s still a twentieth-century, Dawkinsian vision Dennett is trading in. Can it be that Dennett of all people is not keeping up with the science? There’s no sign here of the epigenetic revolution; we’re still in a world where it’s all about discrete stretches of DNA. That DNA, moreover, got to be the way it is through random mutation; no news has come in of the great struggle with the viruses which we now know has left its wreckage all across the human genome and, more amazingly, has contributed some vital functional stretches without which we wouldn’t be what we are. It’s a pity, because that seems like a story that should appeal to Dennett, with his pandemonic leanings.

Still, there’s a lot to like; I found myself enjoying the book more and more as it went on, as the pretence of being a thinking manual dropped away a bit. Naturally some of Dennett’s old attacks on qualia are here, and for me they still get the feet tapping. I liked Mr Clapgras – either a new argument or, more likely, one I missed first time round; he suffers a terrible event in which all his emotional and empathic responses to colour are inverted without his actual perception of colour changing at all. Have his qualia been inverted – or are they yet another layer of experience? There’s really no way of telling, and for Dennett the question is hardly worth asking. When we got to Dennett’s reasonable defence of compatibilism over free will, I was on my feet and cheering.

I don’t think this book supersedes Consciousness Explained if you want to understand Dennett’s views on consciousness. You may come away from reading it with your thinking powers enhanced, but it will be because your mental muscles have been stretched and used, not really because you’ve got a handy new set of tools. But if you’re a Dennett fan or just like a thoughtful and provoking read, it’s worth a look.

Feral neurons

Dan Dennett confesses to a serious mistake here, about homuncular functionalism.

An homunculus is literally a “little man”. Some explanations of how the mind works include modules which are just assumed to be capable of carrying out the kind of functions which normally require the abilities of a complete human being. This is traditionally regarded as a fatal flaw, equivalent to saying that something is done by “a little man in your head”; which is no use because it leaves us the job of explaining how the little man does it.

Dennett, however, has defended homuncular explanations in certain circumstances. We can, he suggests, use a series of homunculi so long as they get gradually simpler with each step, and we end up with homunculi who are so simple we can see that they are only doing things a single neuron, or some other simple structure, might do.

That seems fair enough to me, except that I wouldn’t call those little entities homunculi; they could better be called black boxes, perhaps. I think it is built into the concept of an homunculus that it has the full complement of human capacities. But that’s sort of a quibble, and it could be that Dennett’s defence of the little men has helped prevent people being scared away from legitimate “homuncular” hypotheses.

Anyway, he now says that he thinks he underestimated the neuron. He had been expecting that his chain or hierarchy of homunculi would end up with the kind of simple switch that a neuron was then widely taken to be; but he (or ‘we’, as he puts it) radically underestimated the complexity of neurons and their behaviour. He now thinks that they should be considered agents in their own right, competing for control and resources in a kind of pandemonium. This, of course, is not a radical departure for Dennett, harmonising nicely with his view of consciousness as a matter of ‘multiple drafts’.

It has never been really clear to me how, in Dennett’s theory, the struggle between multiple drafts ends up producing well-structured utterances, let alone a coherent personality, and the same problem is bound to arise with competing neurons. Dennett goes further and suggests, in what he presents as only the wildest of speculations, that human neurons might have some genetic switch turned on which re-enables some of the feral, selfish behaviour of their free-swimming cellular ancestors.

A resounding no to that, I think, for at least three reasons. First, it confuses their behaviour as cells, happily metabolising and growing, with their function as neurons, firing and transmitting across synapses. If neurons went feral it is the former that would go out of control, and as Dennett recognises, that’s cancer rather than consciousness. Second, neurons are just too dependent to strike out on their own; they are surrounded, supported, and nurtured by a complex of glial cells which is often overlooked but which may well exert quite a detailed influence on neuronal firing. Neurons have neither the incentive nor the capacity to go it alone. Third, although the evolution of neurons is rather obscure, it seems probable that they are an opportunistic adaptation of cells originally specialised for detecting elusive chemicals in the environment; so they may well be domesticated twice over, and not at all likely to retain any feral leanings. As I say, Dennett doesn’t offer the idea very seriously, so I may be using a sledgehammer on butterflies.

Unfortunately Dennett repeats here a different error which I think he would do well to correct: the idea that the brain does massively parallel processing. This is only true, as I’ve said before, if by ‘parallel processing’ you mean something completely different from what it normally means in computing. Parallel processing in computers involves careful management of processes which are kept discrete, whereas the brain provides processes with complex and promiscuous linkages. The distinction between parallel and serial processing, moreover, just isn’t that interesting at a deep theoretical level; parallel processing is just a handy technique for getting the same processes done a bit sooner, not something that could tell us anything about the nature of consciousness.

Always good to hear from Dennett, though. He says his next big project is about culture, probably involving memes. I’m not a big meme fan, but I look forward to it anyway.