Alison Gopnik suggests that David Hume was inspired by Buddhist ideas; she means well, but what atrocious nonsense! Hume is among the most important of Western philosophers and deserves to be widely appreciated, but if you wanted to strike a really damaging blow at popular understanding of what he says you could hardly do better than tangle him up with Buddha. Alas, the damage is probably done; the bogus linkage of the sceptical British empiricist with world-rejecting mysticism is probably lodged at the back of many minds.

Gopnik’s piece here (the same ideas are set out in a paper here) suggests that Hume got an important idea – that the self is an illusion – from Buddhist thought. Her focus is narrow, with relatively little about the philosophy. Nearly all of her account is directed towards establishing the historical possibility of a particular route by which Hume could have come to hear of Buddhist doctrine, from Jesuits at La Flèche. She spends a lot of time on the details and parades the results as a success, but actually finds no evidence for anything more than the bare possibility that Hume might have, could have discussed Buddhism with someone who might have picked up a knowledge of it from someone else who had produced unpublished translations of certain texts and who had been at La Flèche some years earlier.

If Gopnik had found proof that Hume showed an interest in Buddhism, or that it was ever mentioned to him at La Flèche, her research might be of some value. As it is, it’s irrelevant, I think, because it’s not all that unlikely that Hume could have found out about Buddhism anyway, from other sources, if he were at all interested. There is, of course, a vast historical chasm between could have and did. As one of the West’s leading sceptics and the author of some of the most slyly biting sarcasm about religious beliefs, Hume is not particularly likely to have sought to learn from Eastern religions, but let it pass; for the sake of argument we can grant that he might have heard the gist of Buddhist doctrine.

Was that the only place Hume could have got sceptical beliefs about the self? Well, no: there’s a far more likely place. It was, after all, Descartes’ most celebrated claim – then as now, one of the best-known theses of Western philosophy – that the existence of the self was the most certain thing we knew, and that it could be established simply by thinking about it. All we need do is negate that – and negating other people’s claims is, after all, what philosophers do – and we’re pretty much there. Descartes rests his whole system on his perception of the self; Hume comes along and says, when I think about it I find nothing there. Surely Hume, the commonsensical British empiricist, is inverting the celebrated foundation of the Frenchman’s continental-style a priori reasoning?

Ah, but he could still have been influenced, couldn’t he? Gopnik floats the idea that Hume could have forgotten about Buddhist ideas consciously while still having them working away in his subconscious mind. Such things do happen, and if the influence were subconscious it would handily acquit Hume, the most honest and modest of men, of dishonesty or culpable silence over his sources.

The trouble is, Hume’s source was explicitly his own mind, and it matters. He presents his view of the self, not as an interesting argument he heard somewhere that might be true, but as the direct result of his own inner observation. He was simply reporting how things looked to him. Was he wrong? What are we to say, that his introspections were systematically determined by his prior beliefs? That his unremembered conversation about Buddhism somehow rendered him incapable of seeing actual key features of his own mental landscape? Or that these same forgotten words enabled him to perceive an absence which his mind would otherwise have filled with a confabulated construct? Gopnik talks as though she supports Hume, but to discredit his introspection so radically would invalidate the grounds he is claiming; it would be a vigorous attack on Hume. It seems far simpler all round to believe that he was telling the simple truth: he looked into his mind and found no self there.

That is the real killer for me; Hume did not, in fact, say that the self was an illusion; he saw nothing. On this he may well be unique; he is surely original. Buddhists, and some modern philosophers, contend that the self is actually an illusion; a powerful one which it is hard to shake off, but one which is ultimately misleading. Hume, on the contrary, just saw no self.

The distinction may not seem important, but it is; let me offer an analogy. Suppose we live near the great Nemonic Desert. Priests warn us that the fabled city of Nemonia, in the middle of the desert, is an hallucination. We will be tempted to stop and drink from its fountains, enjoy the generous hospitality provided, and perhaps even stay; if we do we shall die, because the food and water are delusions and we’ll perish of thirst. Modern scientists have offered theories which explain how the mind constructs the delusion of the Nemonian city and why we should stop sending expeditions to look for it. In certain conditions our cognitive apparatus just constructs an encouraging but false perception for us, they say.

David Hume, on the other hand, tells us he walked across the desert keeping his eyes open and never saw anything but sand. Maybe it looks different to other people, he says, I can’t argue with them about that; but to me there’s just nothing there, simple as that.

To say, then, that Hume got his disbelief in the city from listening to the priests is a dreadful error, a confusing misrepresentation, and really a bit of a slight against a man whose originality and honesty deserve better.

We know all about the theory espoused by Roger Penrose and Stuart Hameroff, that consciousness is generated by a novel (and somewhat mysterious) form of quantum mechanics which occurs in the microtubules of neurons. For Penrose this helps explain how consciousness is non-computational, a result he has provided a separate proof for.

I have always taken it that this theory is intended to explain consciousness as we know it; but there are those who also think that quantum theory can provide a model which helps explain certain non-rational features of human cognition. This feature in the Atlantic nicely summarises what some of them say.

One example of human irrationality that might be quantumish is apparently provided by the good old Prisoner’s Dilemma. Two prisoners who co-operated on a crime are offered a deal. If one rats, that one goes free while the other serves three years. If both rat, they serve two years each; if neither does, they each do one year. From a selfish point of view it’s always better to rat, even though overall the no-rat strategy leads to the least total time in jail for the prisoners. Rationally, everyone should always rat, but in fact people quite often behave against their selfish interest by failing to do so. Why would that be?
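The arithmetic above is easy to check mechanically. This little sketch (using exactly the sentences the deal is described in – years served, lower is better) verifies that ratting dominates whatever the other prisoner does, while mutual silence gives the least total jail time:

```python
# Payoffs from the text: (my years, other's years). "rat" = defect,
# "silent" = co-operate.
YEARS = {
    ("rat", "rat"): (2, 2),
    ("rat", "silent"): (0, 3),
    ("silent", "rat"): (3, 0),
    ("silent", "silent"): (1, 1),
}

def my_years(me, other):
    return YEARS[(me, other)][0]

# Whatever the other prisoner does, ratting means fewer years for me...
for other in ("rat", "silent"):
    assert my_years("rat", other) < my_years("silent", other)

# ...and yet the jointly best outcome is mutual silence.
total = {moves: sum(YEARS[moves]) for moves in YEARS}
assert min(total, key=total.get) == ("silent", "silent")
print("ratting dominates; mutual silence minimises total jail time")
```

That tension – individual dominance pulling against collective optimality – is the whole puzzle.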

Quantum theorists suggest that it makes more sense if we think of the prisoners as being in a superposition of states between ratting and not-ratting, just as Schrödinger’s cat superposes life and death. Instead of contemplating the possible outcomes separately, we see them productively entangled (no, I’m not sure I quite get it, either).

There is of course another explanation; if the prisoners see the choice, not as a one-off, but as one in a series of similar trade-offs, the balance of advantage may shift, because those who are known to rat will be punished while those who don’t may be rewarded by co-operation. Indeed, since people who seek to establish implicit agreements to co-operate over such problems will tend to do better overall in the long run, we might expect such behaviour to have positive survival value and be favoured by evolution.
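The repeated-game point can be illustrated with a minimal simulation (the strategies here are my own standard examples, not anything from the quantum literature): over a series of rounds, a reciprocating “tit-for-tat” player paired with its own kind racks up far less jail time than habitual rats do:

```python
# Iterated Prisoner's Dilemma sketch. Payoffs (years served) as in the text.
YEARS = {("rat", "rat"): (2, 2), ("rat", "silent"): (0, 3),
         ("silent", "rat"): (3, 0), ("silent", "silent"): (1, 1)}

def always_rat(history):
    return "rat"

def tit_for_tat(history):
    # Co-operate first, then simply copy the opponent's last move.
    return history[-1] if history else "silent"

def play(strat_a, strat_b, rounds=20):
    hist_a, hist_b = [], []   # each side's record of the *other's* moves
    total_a = total_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        ya, yb = YEARS[(a, b)]
        total_a += ya
        total_b += yb
        hist_a.append(b)
        hist_b.append(a)
    return total_a, total_b

# Mutual reciprocators each serve 20 years; mutual rats each serve 40.
print(play(tit_for_tat, tit_for_tat))  # (20, 20)
print(play(always_rat, always_rat))    # (40, 40)
```

So once the game repeats, the “irrational” refusal to rat starts to look like good sense, with no quantum machinery required.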

A second example of quantum explanation is provided by the fact that question order can affect responses. There’s an obvious explanation for this if one question affects the context by bringing something to the forefront of someone’s mind. Asking someone whether they plan to drive home before asking them whether they want another drink may produce different results from asking the other way round, for reasons that are not really at all mysterious. However, it’s not always so clear cut, and research demonstrates that a quantum model based on complementarity is really pretty good at making predictions.
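For what it’s worth, the mechanism behind such models is easy to demonstrate in miniature. The toy below is my own construction, not the researchers’ actual model: answering a question is modelled as projecting the belief state onto a “yes” direction, and because the two projectors don’t commute, the probability of saying yes to both questions depends on which is asked first – something a single classical joint probability can’t do:

```python
import numpy as np

psi = np.array([1.0, 0.0])  # initial belief state (arbitrary assumption)

def projector(theta):
    # Projector onto the "yes" direction at angle theta.
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

A, B = projector(0.3), projector(1.0)  # two non-commuting "yes" projectors

def p_yes_then_yes(P, Q, state):
    # Probability of "yes" to the first question, then "yes" to the second:
    # project, renormalise, project again.
    first = P @ state
    p1 = first @ first
    if p1 == 0:
        return 0.0
    second = Q @ (first / np.sqrt(p1))
    return p1 * (second @ second)

print(p_yes_then_yes(A, B, psi))  # A asked first
print(p_yes_then_yes(B, A, psi))  # B asked first - a different number
```

The angles here are arbitrary; the point is only that order effects fall out of the formalism automatically.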

How seriously are we to take this? Do we actually suppose that exotic quantum events in microtubules are directly responsible for the ‘irrational’ decisions? I don’t know exactly how that would work and it seems rather unlikely. Do we go to the other extreme and assume that the quantum explanations are really just importing a useful model – that they are in fact ultimately metaphorical? That would be OK, except that metaphors typically explain the strange by invoking something understood. It’s a little weird to suppose we could helpfully explain the incomprehensible world of human motivation by appealing to the readily understood realm of quantum physics.

Perhaps it’s best to simply see this as another way of thinking about cognition, something that surely can’t be bad?

We’ve done so much here towards clearing up the problems of consciousness I thought we might take a short excursion and quickly sort out ethics?

It’s often thought that philosophical ethics has made little progress since ancient times; that no firm conclusions have been established and that most of the old schools, along with a few new ones, are still at perpetual, irreconcilable war. There is some truth in that, but I think substantial progress has been made. If we stop regarding the classic insights of different philosophers as rivals and bring them together in a synthesis, I reckon we can put together a general ethical framework that makes a great deal of sense.

What follows is a brief attempt to set out such a synthesis from first principles, in simple non-technical terms. I’d welcome views: it’s only fair to say that the philosophers whose ideas I have nicked and misrepresented would most surely hate it.




The deepest questions of philosophy are refreshingly simple. What is there? How do I know? And what should I do?

We might be tempted to think that that last question, the root question of ethics, could quickly be answered by another simple question; what do you want to do? For thoughtful people, though, that has never been enough. We know that some of the things people want to do are good, and some are bad. We know we should avoid evil deeds and try to do good ones – but it’s sometimes truly hard to tell which they are. We may stand willing to obey the moral law but be left in real doubt about its terms and what it requires. Yet, coming back to our starting point, surely there really is a difference between what we want to do and what we ought to do?

Kant thought so: he drew a distinction between categorical and hypothetical imperatives. For the hypothetical ones, you have to start with what you want. If you’re thirsty, then you should drink. If you want to go somewhere, then you should get in your car. These imperatives are not ethical; they’re simply about getting what you want. The categorical imperative, by contrast, sets out what you should do anyway, in any circumstances, regardless of what you want; and that, according to Kant, is the real root of morality.

Is there anything like that? Is there anything we should unconditionally do, regardless of our aims or wishes? Perhaps we could say that we should always do good; but even before we get on to the difficult task of defining ‘good’, isn’t that really a hypothetical imperative? It looks as if it goes: if you want to be good, behave like this…? Why do we have to be good? Let’s imagine that Kant, or some other great man, has explained the moral law to us so well, and told us what good is, so correctly and clearly that we understand it perfectly. What’s to stop us exercising our freedom of choice and saying “I recognise what is good, and I choose evil”?

To choose evil so radically and completely may sound more like a posture than a sincere decision – too grandly operatic, too diabolical to be altogether convincing – but there are other, appealing ways we might want to rebel against the whole idea of comprehensive morality. We might just seek some flexibility, rejecting the idea that morality rules our lives so completely, always telling us exactly what to do at every turn. We might go further and claim unrestricted freedom, or we might think that we may do whatever we like so long as we avoid harm to others, or do not commit actual crimes. Or we might decide that morality is simply a matter of useful social convention, which we propose to go along with just so long as it suits our chosen path, and no further. We might come to think that a mature perspective accepts that we don’t need to be perfect; that the odd evil deed here and there may actually enhance our lives and make us more rounded, considerable and interesting people.

Not so fast, says Kant, waving a finger good-naturedly; you’re missing the point; we haven’t yet even considered the nature of the categorical imperative! It tells us that we must act according to the rules we should be happy to see others adopt. We must accept for ourselves the rules of behaviour we demand of the people around us.

But why? It can be argued that some kind of consistency requires it, but who said we had to be consistent? Equally, we might well argue that fairness requires it, but we haven’t yet been given a reason to be fair, either. Who said that we had to act according to any rule? Or even if we accept that, we might agree that everyone should behave according to rules we have cunningly slanted in our own favour (Don’t steal, unless you happen to be in the special circumstances where I find myself to be) or completely vacuous rules (Do whatever you want to do). We still seem to have a serious underlying difficulty: why be good? Another simple question, but it’s one we can’t answer properly yet.

For now, let’s just assume there is something we ought to do. Let’s also assume it is something general, rather than a particular act on a particular occasion. If the single thing we ought to do were to go up the Eiffel Tower at least once in our life, our morality would be strangely limited and centred. The thing we ought to do, let’s assume, is something we can go on doing, something we can always do more of. To serve its purpose it must be the kind of behaviour that racks up something we can never have too much of.

There are people who have ethical theories which are exactly based on general goals like that, namely consequentialists. They believe the goodness of our acts depends on their consequences. The idea is that our actions should be chosen so that as a consequence some general desideratum is maximised. The desired thing can vary, but the most famous example is the happiness which Jeremy Bentham embodied in the Utilitarians’ principle: act so as to bring about the greatest happiness of the greatest number of people.

Old-fashioned happiness Utilitarianism is a simple and attractive theory, but there are several problems with the odd results it seems to produce in unusual cases. Putting everyone in some kind of high-tech storage ward while constantly stimulating the pleasure centres in their brains with electrodes appears a very good thing indeed if we’re simply maximising happiness. All those people spend their existence in a kind of blissful paralysis: the theory tells us this is an excellent result, something we must strive to achieve, but it surely isn’t. Some kinds of ecstatic madness, indeed, would be high moral achievements according to simple Utilitarianism.

Less dramatically, people with strong desires, who get more happiness out of getting what they want, are awarded a bigger share of what’s available under utilitarian principles. In the extreme case the needs of ‘happiness monsters’, whose emotional response is far greater than anyone else’s, come to dominate society. This seems strange and unjust; but perhaps not to everyone. Bentham frowns at the way we’re going: why, he asks, should people who don’t care get the same share as those who do?
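The monster’s domination is just arithmetic. In this illustrative toy (the numbers are mine, purely for demonstration), total happiness is maximised by a corner solution: with linear utilities, the whole budget goes to whoever converts resources into happiness fastest:

```python
def total_happiness(allocation, rates):
    # Linear utilities: each unit of resource yields 'rate' units of happiness.
    return sum(a * r for a, r in zip(allocation, rates))

rates = [1, 1, 1, 100]  # three ordinary people and one happiness monster
budget = 12

# The happiness-maximising split hands everything to the monster...
monster_takes_all = [0, 0, 0, budget]
# ...while an even split looks fair but scores far lower.
even_split = [3, 3, 3, 3]

print(total_happiness(monster_takes_all, rates))  # 1200
print(total_happiness(even_split, rates))         # 309
```

Simple happiness-summing, in other words, has no resources for calling the first allocation unjust; that verdict has to come from somewhere outside the theory.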

That case can be argued, but it seems the theory now wants to tutor and reshape our moral intuitions, rather than explaining them. It seems a real problem, as some later utilitarians recognised, that the theory provides no way at all of judging one source or kind of happiness better or worse than another. Surely this reduces and simplifies too much; we may suspect in fact that the theory misjudges and caricatures human nature. The point of life is not that people want happiness; it’s more that they usually get happiness from having the things they actually want.

With that in mind, let’s not give up on utilitarianism; perhaps it’s just that happiness isn’t quite the right target? What if, instead, we seek to maximise the getting of what you want – the satisfaction of desires? Then we might be aiming a little more accurately at the real desideratum, and putting everyone in pleasure boxes would no longer seem to be a moral imperative; instead of giving everyone sweet dreams, we have to fulfil the reality of their aspirations as far as we can.

That might deal with some of our problems, but there’s a serious practical difficulty with utilitarianism of all kinds; the impossibility of knowing clearly what the ultimate consequences of any action will be. To feed a starving child seems to be a clear good deed; yet it is possible that by evil chance the saved infant will grow up to be a savage dictator who will destroy the world. If that happens the consequences of my generosity will turn out to be appalling. Even if the case is not as stark as that, the consequences of saving a child roll on through further generations, perhaps forever. The jury will always be out, and we’ll never know for sure whether we really brought more satisfaction into the world or not.

Those are drastic cases, but even in more everyday situations it’s hard to see how we can put a numerical value on the satisfaction of a particular desire, or find any clear way of rating it against the satisfaction of a different one. We simply don’t have any objective or rigorous way of coming up with the judgements which utilitarianism nevertheless requires us to make.

In practice, we don’t try to make more than a rough estimate of the consequences of our actions. We take account of the obvious immediate consequences: beyond that the best we can do is to try to do the kind of thing that in general is likely to have good consequences. Saving children is clearly good in the short term, and people on the whole are more good than bad (certainly for a utilitarian – each person adds more satisfiable desires to the world), so that in most cases we can justify the small risk of ultimate disaster following on from saving a life.

Moreover, even if I can’t be sure of having done good, it seems I can at least be sure of having acted well; I can’t guarantee good deeds but I can at least guarantee being a good person. The goodness of my acts depends on their real consequences; my own personal goodness depends only on what I intended or expected, whether things actually work out the way I thought they would or not. So if I do my best to maximise satisfaction I can at least be a good person, even if I may on rare occasions be a good person who has accidentally done bad things.

Now though, if I start to guide my actions according to the kind of behaviour that is likely to bring good results, I am in essence going to adopt rules, because I am no longer thinking about individual acts, but about general kinds of behaviour. Save the children; don’t kill; don’t steal. Utilitarianism of some kind still authorises the rules, but I no longer really behave like a Utilitarian; instead I follow a kind of moral code.

At this point some traditional-looking folk step forward with a smile. They have always understood that morality was a set of rules, they explain, and proceed to offer us the moral codes they follow, sanctified by tradition or indeed by God. Unfortunately on examination the codes, although there are striking broad resemblances, prove to be significantly different both in tone and detail. Most of them also seem, perhaps inevitably, to suffer from gaps, rules that seem arbitrary, and ones which seem problematic in various other ways.

How are we to tell what the correct code is? Our code is to be authorised and judged on the basis of our preferred kind of utilitarianism, so we will choose the rules that tend to promote the goal we adopted provisionally; the objective of maximising the satisfaction of desires. Now, in order to achieve the maximal satisfaction of desires, we need as many people as possible living in comfortable conditions with good opportunities and that in turn requires an orderly and efficient society with a prosperous economy. We will therefore want a moral code that promotes stable prosperity. There turns out to be some truth in the suggestion that morality in the end consists of the rules that suit social ends! Many of these rules can be worked out more or less from first principles. Without consistent rules of property ownership, without reasonable security on the streets, we won’t get a prosperous society and economy, and this is a major reason why the codes of many cultures have a lot in common.

There are also, however, many legitimate reasons why codes differ. In certain areas the best rules are genuinely debatable. In some cases, moreover, there is genuinely no fundamental reason to prefer one reasonable rule over another. In these cases it is important that there are rules, but not important what they are – just as for traffic regulations it is not important whether the rule is to drive on the left or the right, but very important that it is one or the other. In addition, the choice of rules for our code embodies some assumptions about human nature and behaviour and which arrangements work best with it. Ethical rules about sexual behaviour are often of this kind, for example. Tradition and culture may have a genuine weight in these areas, another potentially legitimate reason for variation in codes.

We can also make a case for one-off exceptions. If we believed our code was the absolute statement of right and wrong, perhaps even handed down by God, we should have no reason to go against it under any circumstances. Anything we did that didn’t conform with the code would automatically be bad. We don’t believe that, though; we’ve adopted our code only as a practical response to difficulties with working out what’s right from first principles – the impossibility of determining what the final consequences of anything we do will be. In some circumstances, that difficulty may not be so great. In some circumstances it may seem very clear what the main consequences of an action will be, and if it looks more or less certain that following the code will, in a particular exceptional case, have bad consequences, we are surely right to disobey the code; to tell white lies, for example, or otherwise bend the rules. This kind of thing is common enough in real life, and I think we often feel guilty about it. In fact we can be reassured that although the judgements required may sometimes be difficult, breaking the code to achieve a good result is the right thing to do.

The champions of moral codes find that hard to accept. In their favour we must accept that observance of the code generally has a significant positive value in itself. We believe that following the rules will generally produce the best results; it follows that if we set a bad example or undermine the rules by flouting them we may encourage disobedience by others (or just lax habits in ourselves) and so contribute to bad outcomes later on. We should therefore attach real value to the code and uphold it in all but exceptional cases.

Having got that far on the basis of a provisional utilitarianism, we can now look back and ask whether the target we chose, that of maximising the satisfaction of desires, was the right one. We noticed that odd consequences follow if we seek to maximise happiness; direct stimulation of the pleasure centres looks better than living your life, happiness monsters can have desires so great that they overwhelm everything else. It looked as if these problems arise mainly in situations where the pursuit of simple happiness is too narrowly focused, over-riding other objectives which also seem important.

In this connection it is productive to consider what follows if we pursue some radical alternative to happiness. What, indeed, if we seek to maximise misery? The specifics of our behaviour in particular circumstances will change, but the code authorised by the pursuit of unhappiness actually turns out to be quite similar to the one produced by its opposite. For maximum misery, we still need the maximum number of people. For the maximum number of people, we still need a fairly well-ordered and prosperous society. Unavoidably we’re going to have to ban disorderly and destructive behaviour and enforce reasonable norms of honesty. Even the armies of Hell punish lying, theft, and unauthorised violence – or they would fall apart. To produce the society that maximises misery requires only a small but pervasive realignment of the one that produces most happiness.

If we try out other goals we find that whatever general entity we want to maximise, consequentialism will authorise much the same moral code. Certain negative qualities seem to be the only exceptions. What if we aim to maximise silence, for example? It seems unlikely that we want a bustling, prosperous society in that case: we might well want every living thing dead as soon as possible, and so embrace a very different code. But I think this result comes from the fact that negative goals like silence – the absence of noise – covertly change our principle from one of maximising to one of minimising, and that makes a real difference. Broadly, maximising anything yields the same moral code.

In fact, the vaguer we are about what we seek to maximise, the fewer the local distortions we are likely to get in the results. So it seems we should do best to go back now and replace the version of utilitarianism we took on provisionally with something we might call empty consequentialism, which simply enjoins us to choose actions that maximise our own legacy as agents, without tying us to happiness or any other specific desideratum. We should perform those actions which have the greatest consequences – that is, those that tend to produce the largest and most complex world.

We began by assuming that something was worth doing and have worked round to the view that everything is: or at least, that everything should be treated as worth doing. The moral challenge is simply to ensure our doing of things is as effective as possible. Looking at it that way reveals that even though we have moved away from the narrow specifics of the hypothetical imperatives we started with, we are still really in the same territory and still seeking to act effectively and coherently; it’s just that we’re trying to do so in a broader sense.

In fact what we’re doing by seeking to maximise our consequential legacy is to affirm and enlarge ourselves as persons. Personhood and agency are intimately connected. Acts, to deserve the name, must be intentional: things I do accidentally, unknowingly, or under direct constraint don’t really count as actions of mine. Intentions don’t exist in a free-floating state; they always have their origin in a person; and we can indeed define a person as a source of intentions. We need not make any particular commitment about the nature of intentions, or about how they are generated. Whether the process is neural, computational, spiritual, or of some other nature is not important here, so long as we can agree that in some significant sense new projects originate in minds, and that such minds are people. By adopting our empty consequentialism and the moral code it authorises, we are trying to imprint our personhood on the world as strongly as we can.

We live in a steadily unrolling matrix of cause and effect, each event following on from the last. If we live passively, never acting on intentions of our own, we never disturb the course of that process and really we do not have a significant existence apart from it. The more we act on projects of our own, the stronger and more vivid our personal existence becomes. The true weight of these original actions is measured by their consequences, and it follows that acting well in the sense developed above is the most effective way to enhance and develop our own personhood.

To me, this is a congenial conclusion. Being able to root good behaviour and the observance of an ethical code in the realisation and enlargement of the self seems a satisfying result. Moreover, we can close the circle and see that this gives us at last some answer to the question we could not deal with at first – why be good? In the end there is no categorical imperative, but there is, as it were, a mighty hypothetical: if you want to exist as a person, and if you want your existence as a person to have any significance, you need to behave well. If you don’t, then neither you nor anyone else need worry about the deeper reasons or ethics of your behaviour.

People who behave badly do not own the outcomes of their own lives; their behaviour results from the pressures and rewards that happen to be presented by the world. They themselves, as bad people, play little part in the shaping of the world, even when, as human beings, they participate in it. The first step in personal existence and personal growth is to claim responsibility and declare yourself, not merely reactive, but a moral being and an aspiring author of your own destiny. The existentialists, who have been sitting patiently smoking at a corner table, smile and raise an ironic eyebrow at our belated and imperfect enlightenment.

What about the people who rejected the constraints of morality and to greater or lesser degrees wanted to be left alone? Well, the system we’ve come up with enjoins us to adopt a moral code – but it leaves us to work out which one and explicitly allows for exceptions. Beyond that it consists of the general aspiration of ‘empty consequentialism’, but it is for us to decide how our consequential legacy is to be maximized. So the constraints are not tight ones. More important, it turns out that moral behaviour is the best way to escape from the tyranny of events and imprint ourselves on the world; obedience to the moral law, it turns out, is really the only way to be free.

Thinking without concepts is a strange idea at first sight; isn’t it always the concept of a thing that’s involved in thought, rather than the thing itself? But I don’t think it’s really as strange as it seems. Without looking too closely into the foundations at this stage, I interpret conceptual thought as simply being one level higher in abstraction than its non-conceptual counterpart.

Dogs, I think, are perfectly capable of non-conceptual thinking. Show them the lead or rattle the dinner bowl and they assent enthusiastically to the concrete proposal. Without concepts, though, dogs are tied to the moment and the actual; it’s no good asking the dog whether it would prefer walkies tomorrow morning or tomorrow afternoon; the concept cannot gain a foothold – nothing more abstract than walkies now can really do so. That doesn’t mean we should deny a dog consciousness – the difference between a conscious and an unconscious dog is pretty clear – only certain distinctively human levels of it. The advanced use of language and symbols certainly requires concepts, though I don’t think the two are synonymous, and conceptual but inexplicit thought seems a viable option to me. Some, though, have thought that it takes language to build meaningful self-consciousness.

Kristina Musholt has been looking briefly at whether self-consciousness can be built out of purely non-conceptual thought, particularly in response to Bermudez, and summarising the case made in her book Thinking About Oneself.
Musholt suggests that non-conceptual thought reflects knowledge-how rather than knowledge-that; without agreeing completely about that, we can agree that non-conceptual thought can only be expressed through action and so is inherently about interaction with the world, which I take to be her main point.

Following Bermudez it seems we are to look for three things from any candidate for self-awareness; namely,

(1) non-accidental self-reference,
(2) immediate action relevance, and
(3) immunity to error through misidentification.

That last one may look a bit scary; it’s simply the point that you can’t be wrong about the identity of the person thinking your own thoughts. I think there are some senses in which this ain’t necessarily so, but for present purposes it doesn’t really matter. Bermudez was concerned to refute those who think that self-consciousness requires language; he thought any such argument collapses into circularity; to construct self-consciousness out of language you have to be able to talk about yourself, but talking about yourself requires the very self-awareness you were supposed to be building.

Bermudez, it seems, believes we can go elsewhere and get our self-awareness out of something like that implicit certainty we mentioned earlier. As thought implies the thinker, non-conceptual thoughts will serve us perfectly well for these purposes. Musholt, though broadly in sympathy, isn’t happy with that. While the self may be implied simply by the existence of non-conceptual thoughts, she points out that it isn’t represented, and that’s what we really need. For one thing, it makes no sense to her to talk about immunity from error when it applies to something that isn’t even represented – it’s not that error is impossible, it’s that the whole concept of error or immunity doesn’t even arise.

She still wants to build self-awareness out of non-conceptual thought, but her preferred route is social. As we noted she thinks non-conceptual thought is all about interaction with the world, and she suggests that it’s interaction with other people that provides the foundation for our awareness of ourselves. It’s our experience of other people that ultimately grounds our awareness of ourselves as people.

That all seems pretty sensible. I find myself wondering about dogs, and about the state of mind of someone who grew up entirely alone, never meeting any other thinking being. It’s hard even to form a plausible thought experiment about that, I think. The human mind being what it is, I suspect that if no other humans or animals were around inanimate objects would be assigned imaginary personalities and some kind of substitute society cobbled up. Would the human being involved end up with no self-awareness, some strangely defective self-awareness (perhaps subject to some kind of dissociative disorder?), or broadly normal? I don’t even have any clear intuition on the matter.

Anyway, we should keep track of the original project, which essentially remains the one set out by Bermudez. Even if we don’t like Musholt’s proposal better than his, it all serves to show that there is actually quite a rich range of theoretical possibilities here, which tends to undermine the view that linguistic ability is essential. To me it just doesn’t seem very plausible that language should come before self-awareness, although I think it does come before certain forms of self-awareness. The real take-away, perhaps, is that self-awareness is a many-splendoured thing and different forms of it exist on all three of the levels mentioned and surely more, too. This conclusion partly vindicates the attack on language as the only basis for self-awareness, undercutting Bermudez’s case for circularity. If self-awareness actually comes in lots of forms, then the sophisticated, explicit, language-based kind doesn’t need to pull itself up by its bootstraps; it can grow out of less explicit versions.

Anyway, Musholt has at least added to our repertoire a version of self-awareness which seems real and interesting – if not necessarily the sole or most fundamental version.

Do we care whether the mind is extended? The latest issue of the JCS features papers on various aspects of extended and embodied consciousness.

In some respects I think the position is indicated well in a paper by Michael Wheeler, which tackles the question of whether phenomenal experience is realised, at least in part, outside the brain. One reason I think this attempt is representative is its huge ambition. The general thesis of extension is that it makes sense to regard tools and other bodily extensions – the iPad in my hand now, but also simple notepads, and even sticks – as genuine participating contributors to mental events. This is sort of appealing if we’re talking about things like memory, or calculation, because recording data and doing sums are the kind of thing the iPad does. Even for sensory experience it’s not hard to see how the camera and Skype might reasonably be seen as extended parts of my perceptual apparatus. But phenomenal experience, the actual business of how something feels? Wheeler notes a strong intuition that this, at least, must be internal (here read as ‘neural’), and this surely comes from the feeling that while the activity of a stick or pad looks like the sort of thing that might be relevant to “easy problem” cognition, it’s hard to see what it could contribute to inner experience. Granted, as Wheeler says, we don’t really have any clear idea what the brain is contributing either, so the intuition isn’t necessarily reliable. Nevertheless it seems clear that tackling phenomenal consciousness is particularly ambitious, and well calculated to put the overall idea of extension under stress.

Wheeler actually examines two arguments, both based on experiments. The first, from Noë, relies on sensory substitution. Blind people fitted with apparatus that delivers optical data in tactile form begin to have what seems like visual experience (How do we know they really do? Plenty of scope for argument, but we’ll let that pass.) The argument is that the external apparatus has therefore transformed their phenomenal experience.

Now of course it’s uncontroversial that changing what’s around you changes the content of your experience, and changing the content changes your phenomenal experience. The claim here is that the whole modality has been transformed, and without a parallel transformation in the brain. It’s the last point that seems particularly vulnerable. Apparently the subjects adapt quickly to the new kit, too quickly for substantial ‘neural rewiring’, but what’s substantial in this context? There are always going to be some neural changes during any experience, and who’s to say that those weren’t the crucial ones?

The second argument is due to Kiverstein and Farina, who report that when macaques are trained to use rakes to retrieve food, the rakes are incorporated into their body image (as reflected in neural activity). This is easy enough to identify with – if you use a stick to probe the ground, you quickly start to experience the ‘feel’ of the ground as being at the end of the stick, not in your hand. Does it prove that your phenomenal experience is situated partly in the stick? Only in a sense that isn’t really the one required – we already felt it as being in the hand. We never experience tactile sensation as being in the brain: the anti-extension argument is merely that the brain is uniquely the substrate where the feeling is generated.

Wheeler, rather anti-climactically but I think correctly, thinks neither argument is successful; and that’s another respect in which I think his paper represents the state of the extended mind thesis: both ambitious and unproven.
Worse than that, though, it illustrates the point which kills things for me; I don’t really care one way or the other. Shall we call these non-neural processes mental? What if we do? Will we thereby get a new insight into how mental processes work? Not really, so why worry? The thesis that experience is external in a deeper sense, external to my mind, is strange and mind-boggling; the thesis that it’s external in the flatly literal sense of having some of its works outside the brain is just not that philosophically interesting.

OK, it’s true that what we know about the brain doesn’t seem to explicate phenomenal experience either, and perhaps doesn’t even look like the kind of thing that in principle might do so. But if there are ever going to be physical clues, that’s kind of where you’d bet on them being.

Is phenomenal experience extended? Well, I reckon phenomenal experience is tied to the non-phenomenal variety. Red qualia come with the objective perception of red. So if we accept the extended mind for the latter, we should probably accept it for the former. But please yourself; in the absence of any additional illumination, who cares where it is?

Why does the question of determinism versus free will continue to trouble us? There’s nothing too strange, perhaps, about a philosophical problem remaining in play for a while – or even for a few hundred years: but why does this one have such legs and still provoke such strong and contrary feelings on either side?

For me the problem itself is solved – and the right solution, broadly speaking, has been known for centuries: determinism is true, but we also have free choice in a meaningful sense. St Augustine, to go no earlier, understood that free will and predestination are not contradictory, but people still find it confusing that he spoke up for both.

If this view – compatibilism – is right, why hasn’t it triumphed? I’m coming to think that the strongest opposition on the question might not in fact be between the hard-line determinists and the uncompromising libertarians but rather a matter of both ends against the middle. Compatibilists like me are happy to see the problem solved and determinism reconciled with common sense, whereas people from both the extremes insist that that misses something crucial. They believe the ‘loss’ of free will radically undercuts and changes our traditional sense of what we are as human beings. They think determinism, for better or worse, wipes away some sacred mark from our brows. Why do they think that?

Let’s start by quickly solving the old problem. Part one: determinism is true. It looks, with some small reservations about the interpretation of some esoteric matters, as if the laws of physics completely determine what happens. Actually even if contemporary physics did not seem to offer the theoretical possibility of full determination, we should be inclined to think that some set of rules did. A completely random or indeterminate world would seem scarcely distinguishable from a nullity; nothing definite could be said about it and no reliable predictions could be made because everything could be otherwise. That kind of scenario, of disastrous universal incoherence, is extreme, and I admit I know of no absolute reason to rule out a more limited, demarcated indeterminacy. Still, the idea of delimited patches of randomness seems odd, inelegant and possibly problematic. God, said Einstein, does not play dice.

Beyond that, moreover, there’s a different kind of point. We came into this business in pursuit of truth and knowledge, so it’s fair to say that if there seemed to be patches of uncertainty we should want to do our level best to clarify them out of existence. In this sense it’s legitimate to regard determinism not just as a neutral belief, but as a proper aspiration. Even if we believe in free will, aren’t we going to want a theory that explains how it works, and isn’t that in the end going to give us rules that determine the process? Alright, I’m coming to the conclusion too soon: but in this light I see determinism as a thing that lovers of truth must strive towards (even if in vain) and we can note in passing that that might be some part of the reason why people champion it with zeal.

We’re not done with the defence, anyway. One more thing we can do against indeterminacy is to invoke the deep old principle which holds that nothing comes of nothing, and that nothing therefore happens unless it must; if something particular must happen, then the compulsion is surely encapsulated in some laws of nature.

Further still, even if none of that were reliable, we could fall back on a fatalistic argument. If it is true that on Tuesday you’ll turn right, then it was true on Monday that you would turn right on Tuesday; so your turning that way rather than left was already determined.

Finally, we must always remember that failure to establish determinism is not success in establishing liberty. Determinism looks to be true; we should try to establish its truth if by any means we can: but even if we fail, that failure in itself leaves us not with free will but with an abhorrent void of the unknowable.

Part two: we do actually make free decisions. Determinism is true, but it bites firmly only at a low level of description; scarcely at all above the level of particles and forces. To look for decisions or choices at that level is simply a mistake, of the same general kind as looking for bicycles down there. Their absence from the micro level does not mean that cyclists are systematically deluded. Decisions are processes of large neural structures, and I suggest that when we describe them as free we simply mean the result was not constrained externally. If I had a gun to my head or my hands were tied, then turning left was not a free decision. If no-one could tell which way I would go without knowledge of what was going on in the large neural structures that realise my mind, then it was free. There are of course degrees of freedom and plenty of grey areas, but the essential idea is clear enough. Freedom is just the absence of external constraint on a level of description where people and decisions are salient, useful concepts.

For me, and I suppose other compatibilists, that’s a satisfying solution and matches well with what I think I’ve always meant when I talk about freedom. Indeed, it’s hard for me to see what else freedom could mean. What if God did play dice after all? Libertarians don’t want their free decisions to be random, they want them to belong to them personally and reflect consideration of the circumstances; the problem is that it’s challenging for them to explain in that case how the decisions can escape some kind of determination. What unites the libertarians and the determinists is the conviction that it’s that inexplicable, paradoxical factor we are concerned to affirm or deny, and that its presence or absence says something important about human nature. To quietly do without the magic, as compatibilists do, is on their view to shoot the fox and spoil the hunt. What are they both so worried about?

I speculate that one factor here is a persistent background confusion. Determinism, we should remember, is an intellectual achievement, both historically and often personally. We live in a world where nothing much about human beings is certainly determined; only careful reflection reveals that in the end, at the lowest level of detail and at the very last knockings of things, there must be certainty. This must remain a theoretical conclusion, certainly so far as human beings are concerned; our behaviour may be determinate, but it is not determinable; certainly not in practice and very probably not even in theory, given the vast complexity, chaotic organisation and marvellously emergent properties of our brains. Some of those who deny determinism may be moved, not so much by explicit rejection of the true last-ditch thesis, but by the certainty that our minds are not determinable by us or by anyone. This muddying of the waters is perpetuated even now by arguments about how our minds may be strongly influenced by high-level factors: peer pressure, subliminal advertising, what we were given to read just before making a decision. These arguments may be presented in favour of determinism together with the water-tight last-ditch case, but they are quite different, and the high-level determinism they support is not certainly true but rather an eminently deniable hypothesis. In the end our behaviour is determined, but can we be programmed like robots by higher level influences? Maybe in some cases – generally, probably not.

The second, related factor is a certain convert enthusiasm. If determinism is a personal intellectual achievement it may well be that we become personally invested in it. When we come to appreciate its truth for the first time it may seem that we have grasped a new perspective and moved out of the confused herd to join the scientifically enlightened. I certainly felt this on my first acquaintance with the idea; I remember haranguing a friend about the truth of determinism in a way that must, with hindsight, have resembled religious conviction and been very tiresome.

“Yes, yes, OK, I get it,” he would say in a vain attempt to stop the flow.

Now no-one lives pure determinism; we all go on behaving as if agency and freedom were meaningful. The fact that this involves an unresolved tension between your philosophy and the ideas about people you actually live by was not a deterrent to me then, however; in fact it may even have added a glamorous sheen of esoteric heterodoxy to the whole thing. I expect other enthusiasts may feel the same today. The gradual revelation, some years later, that determinism is true but actually not at all as important as you thought is less exciting: it has rather a dying fall to it and may be more difficult to assimilate. Consistency with common sense is perhaps a game for the middle-aged.

“You know, I’ve been sort of nuancing my thinking about determinism lately…”

“Oh, what, Peter? You made me live through the conversion experience with you – now I have to work through your apostasy, too?”

On the libertarian side, it must be admitted that our power of decision really does look sort of strange, with a power far exceeding that of mere absence of constraint. There are at least two reasons for this. One is our ability to use intentionality to think about anything whatever, and base our decisions on those thoughts. I can think about things that are remote, non-existent, or even absurd, without any difficulty. Most notably, when I make decisions I am typically thinking about future events: will I turn left or right tomorrow? How can future events influence my behaviour now?

It’s a bit like the time machine case where I take the text of Hamlet back in time and give it to Shakespeare – who never actually produced it but now copies it down and has it performed. Who actually wrote it, in these circumstances? No-one; it just appeared at some point. Our ability to think about the future, and so use future goals as causes of actions now, seems in the same way to bring our decisions into being out of nowhere inside us. There was no prior cause, only later ones, so it really seems as if the process inverts and disrupts the usual order of causality.

We know this is indeed remarkable but it isn’t really magic. On my view it’s simply that our recognition of various entities that extend over time allows a kind of extrapolation. The actual causal processes, down at that lowest level, tick away in the right order, but our pattern-matching capacity provides processes at a higher level which can legitimately be said to address the future without actually being caused by it. Still, the appearance is powerful, and we may be impatient with the kind of materialist who prefers to live in a world with low ceilings, insists on everything being material and denies any independent validity to higher levels of description. Some who think that way even have difficulty accepting that we can think directly about mathematical abstractions – quite a difficult posture for anyone who accepts the physics that draws heavily on them.

The other thing is the apparent, direct reality of our decisions. We just know we exercise free will, because we experience the process immediately. We can feel ourselves deciding. We could be wrong about all sorts of things in the world, but how could I be wrong about what I think? I believe the feeling of something ineffable here comes from the fact that we are not used to dealing with reality. Most of what we know about the world is a matter of conscious or unconscious inference, and when we start thinking scientifically or philosophically it is heavily informed by theory. For many people it starts to look as if theory is the ultimate bedrock of things, rather than the thin layer of explanation we place on top. For such a mindset the direct experience of one’s own real thoughts looks spooky; its particularity, its haecceity, cannot be accounted for by theory and so looks anomalous. There are deep issues here, but really we ought not to be foxed by simple reality.

That’s it, I think, in brief at least. More could be said of course; more will be said. The issues above are like optical illusions: just knowing how they work doesn’t make them go away, and so minds will go on boggling. People will go on furiously debating free will: that much is both determined and determinable.

Scott Bakker has given an interesting new approach to his Blind Brain Theory (BBT): in two posts on his blog he considers what kind of consciousness aliens could have, and concludes that the process of evolution would put them into the same hole where, on his view, we find ourselves.

BBT, in sketchy summary, says that we have only a starvation diet of information about the cornucopia that really surrounds us; but the limitations of our sources and cognitive equipment mean we never realise it. To us it looks as if we’re fully informed, and the glitches of the limited heuristics we use to cobble together a picture of the world, when turned on ourselves in particular, look to us like real features. Our mental equipment was never designed for self-examination and attempting metacognition with it generates monsters; our sense of personhood, agency, and much about our consciousness comes from the deficits in our informational resources and processes.

Scott begins his first post by explaining his own journey from belief in intentionalism to eliminativist scepticism about it, and sternly admonishes those of us still labouring in intentionalist error for our failure to produce a positive account of how human minds could have real intentionality.

What about aliens – Scott calls the alien players in his drama ‘Thespians’ – could they be any better off than we are? Evolution would have equipped them with senses designed to identify food items, predators, mates, and so on; there would be no reason for them to have mental or sensory modules designed to understand the motion of planets or stars, and turning their senses on their own planet would surely tell them incorrectly that it was motionless. Scott points out that Aristotle’s argument against the movement of the Earth is rather good: if the Earth were moving, we should see shifts in the relative position of the stars, just as the relative position of objects in a landscape shifts when we view them from the window of a moving train; yet the stars remain precisely fixed. The reasoning is sound; Aristotle simply did not know and could not imagine the mind-numbingly vast distances that make the effect invisibly small to unaided observation. The unrealised lack of information led Aristotle into misapprehension, and it would surely do the same for the Thespians; a nice warm-up for the main argument.

Now it’s a reasonable assumption that the Thespians would be social animals, and they would need to be able to understand each other. They’d get good at what is often somewhat misleadingly called theory of mind; they’d attribute motives and so on to each other and read each other’s behaviour in a fair bit of depth. Of course they would have no direct access to other Thespians’ actual inner workings. What happens when they turn their capacity for understanding other people on themselves? In Scott’s view, plausibly enough, they end up with quite a good practical understanding whose origins are completely obscure to them; the lashed-up mechanisms that supply the understanding are neither available to conscious examination nor in fact even visible.

This is likely enough, and in fact doesn’t even require us to think of higher cognitive faculties. How do we track a ball flying through the air so we can catch it? Most people would be hard put to describe what the brain does to achieve that, though in practice we do it quite well. In fact, those who could put down an algorithm would most likely get it wrong too, because it turns out the brain doesn’t use the optimal method: it uses a quick and easy one that works OK in practice but doesn’t get your hand to the right place as quickly as it could.
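The quick-and-easy method in question is usually identified as the ‘gaze heuristic’: the fielder never solves the projectile equations, but simply watches how fast the ball’s image climbs. On Chapman’s classic analysis, if the tangent of the gaze elevation angle rises at a constant rate you are standing in the right place; if it accelerates the ball will pass overhead, and if it decelerates it will drop short. Here is a minimal sketch of that idea; the parameter values and thresholds are purely illustrative, not empirical:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def landing_point(v0, angle_deg):
    """Where a projectile launched from ground level comes down."""
    a = math.radians(angle_deg)
    return v0 * math.cos(a) * (2 * v0 * math.sin(a) / G)

def gaze_advice(fielder_x, v0, angle_deg, t_frac=0.5, h=0.01):
    """Chapman's criterion: sample tan(elevation) of the ball as seen
    by a stationary fielder and inspect its acceleration mid-flight.
    Accelerating -> the ball will pass overhead, so run back;
    decelerating -> it will drop short, so run in;
    a steady rate of rise -> you are already in the right spot."""
    a = math.radians(angle_deg)
    vx, vy = v0 * math.cos(a), v0 * math.sin(a)
    flight_time = 2 * vy / G

    def tan_theta(t):
        x, y = vx * t, vy * t - 0.5 * G * t * t
        return y / (fielder_x - x)

    t = t_frac * flight_time
    # discrete second derivative of tan(theta): the 'optical acceleration'
    acc = (tan_theta(t + h) - 2 * tan_theta(t) + tan_theta(t - h)) / h ** 2
    if acc > 1e-3:
        return "run back"
    if acc < -1e-3:
        return "run in"
    return "stay put"
```

Seen from the landing point itself, tan θ rises exactly linearly, so the heuristic tells a well-placed fielder to stay put. Note that nothing here ever computes where the ball will land – which is the point: good practical performance with no access to the underlying mechanics.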

For Scott all this leads to a gloomy conclusion; much of our view about what we are and our mental capacities is really attributable to systematic error, even to something we could regard as a cognitive deficit or disease. He cogently suggests how dualism and other errors might arise from our situation.

I think the Thespian account is the most accessible and persuasive account Scott has given to date of his view, and it perhaps allows us to situate it better than before. I think the scope of the disaster is a little less than Scott supposes, in two ways. First, he doesn’t deny that routine intentionality actually works at a practical level, and I think he would agree we can even hope to give a working level description of how that goes. My own view is that it’s all a grand extension of our capacity for recognition (and I was more encouraged than not by my recent friendly disagreement with Marcus Perlman over on Aeon Ideas; I think his use of the term ‘iconic’ is potentially misleading, but in essence I think the views he describes are right and enlightening), but people here have heard all that many times. Whether I’m right or not we probably agree that some practical account of how the human mind gets its work done is possible.

Second, on a higher level, it’s not completely hopeless. We are indeed prone to dreadful errors and to illusions about how our minds work that cannot easily be banished. But we kind of knew that. We weren’t really struggling to understand how dualism could possibly be wrong, or why it seemed so plausible.  We don’t have to resort to those flawed heuristics; we can take our pure basic understanding and try again, either through some higher meta-meta-cognition (careful) or by simply standing aside and looking at the thing from a scientific third-person point of view. Aristotle was wrong, but we got it right in the end, and why shouldn’t Scott’s own view be part of getting it righter about the mind? I don’t think he would disagree on that either (he’ll probably tell us); but he feels his conclusions have disastrous implications for our sense of what we are.

Here we strike something that came up in our recent discussion of free will and the difference between determinists and compatibilists. It may be more a difference of temperament than belief. People like me say, OK, no, we don’t have the magic abilities we looked to have, so let’s give those terms a more sensible interpretation and go merrily on our way. The determinists, the eliminativists, agree that the magic has gone – in fact they insist – but they sit down by the roadside, throw ashes on their heads, and mourn it. They share with the naive, the libertarians, and the believers in a magic power of intentionality, the idea that something essential and basically human is lost when we move on in this way. Perhaps people like me came in to have the magic explained and are happy to see the conjuring tricks set out; others wanted the magic explained and for it to remain magic?

My Aeon Ideas Viewpoint on ‘Is the Self an Illusion?’.

I do sort of get why people are so keen on the idea that the self is illusory, but what puzzles me slightly is the absence of any middling, commonsensical camp. When it comes to Free Will, we have the hard-nosed deniers on the one hand and the equally uncompromising people who think determinism debases human nature; but there are quite a lot of people in the middle offering various compatibilist arguments that seek to let us have more or less the traditional concept of freedom and rigorous scientific materialism at the same time. I’m one, really. There just doesn’t seem to be the same school of thought in respect of the self; people who recognise the problem but regard the mission as sorting it out rather than erasing the concept from our vocabulary.

A better neurophenomenology, the answer to the Hard Problem? Kirchhoff and Hutto propose a slightly different way forward.

The Hard Problem, of course, is about reconciling the physical description of a conscious event with the way it feels from inside. This is the ‘explanatory gap’. Most of us these days are monists of one kind or another; we believe the world ultimately consists of one kind of thing, usually matter, without a second realm of spirits or other metaphysical entities on top. Some people would, accordingly, seek to reduce the mental to the physical, perhaps even eliminating the mental so that our monism can be tidy (I’m a messy monist myself). Neurophenomenology, as formulated by Varela and briefly described in Kirchhoff and Hutto’s paper, does not look for a reduction, merely an explanation.
It does this by putting aside any idea of representations or computations; instead it proposes a practical research programme in which introspective reports of experience are matched with scans or other physical investigations. By elucidating the structure of both experience and physical event, the project aims to show how the two sides of experience constrain each other.

This, though, doesn’t seem enough for Kirchhoff and Hutto. Researching the two sides of the matter together is fine, but how will it ever show constraints, or generate an explanation? It seems it will be doomed to merely exhibiting correlation. Moreover, rather than resolving the explanatory gap, this approach seems to consolidate it.
These are reasonable objections, but I don’t think it’s quite as hopeless as that. The aspiration must surely be that the exploration comes together by exhibiting, not just correlation, but an underlying identity of structure? We might hope that the physical structure of the visual cortex tells us something about our colour space and visual experience that matches the structure of our direct experience of colour, for example, in such a way that the mysterious quality of that experience is attenuated and eventually even dispelled. Other kinds of explanation might emerge. When I take off my glasses and look at the surface of a brightly lit swimming pool, I see a host of white circles, all the same size and filled with the suggestion of a moiré pattern, bobbing daintily about. In a pre-scientific era, this would have been hard to account for, but now I know it is entirely the result of some facts about the shape of my eyes and the lenses in them, and phenomenological worries don’t even get started. It could be that neurophenomenology can succeed in offering explanations good enough to remove the worries that currently exist. The great thing about it, of course, is that even if that hope is philosophically misplaced, elucidating the structure of experience from both ends is a very worthwhile project anyway, one that can surely only yield valuable new understanding.

However, what Kirchhoff and Hutto propose is that we go a little further and abolish the gap. Instead of affirming the separateness of the physical and the phenomenal, they suggest, we should recognise that they represent two different descriptions of a single thing.

That might seem a modest adjustment, but they also assert that the phenomenal character of experience actually arises not from the mere physics, but from the situation of that experience, taking place in an enactive, embodied context. So if we hold a book, we can see it; if we shut our eyes, we continue to feel it; but we also have a more complex engagement with it from our efforts to hold up what we know is a book, the feel of pages, and so on. There’s all sorts of stuff going on that isn’t the mere physical contact, and that’s what yields the character of the experience.

I see that, I think, but it’s a little odd. If we imagine floating in a sensory deprivation tank and gazing at a smooth, uniform red wall, we seem to be free of much of the context we’d normally have, and on this view it’s hard to see where the phenomenal intensity would be coming from (perhaps from the remembered significance of red?). We might suspect that Kirchhoff and Hutto are getting their phenomenal content smuggled in with the richer experience they implicitly demand by requiring context, an illicit supplement that remains unexplained.

On this, why not let a thousand flowers bloom; go ahead and develop explanations according to any exploratory project you prefer, and then we’ll have a look. Some of them might be good even if your underlying theory is wrong.

I think it is, incidentally. For me the explanatory gap is always misconstrued; the real gap is not between physics and phenomenology, it’s between theory and actuality, something that shouldn’t puzzle us, or at least not in the way it always does.

The homunculus returns? I finally saw Inside Out (possible spoilers – I seem to be talking about films a lot recently). Interestingly, it foregrounds a couple of problematic ways of thinking about the mind.

One, obviously, is the notorious homuncular fallacy. This is the tendency to explain mental faculties – consciousness, say – by attributing them to a small entity within the mind, a “little man” who simply has all the capacities of the whole human being. It’s almost always condemned because it appears to do no more than defer the real explanation. If it’s really a little man in your head that does consciousness, where does his consciousness come from? An even smaller man, in his head?

Inside Out of course does the homuncular thing very explicitly. The mind of the young girl Riley, the main character, where most of the action is set, is controlled by five primal emotions who are all fully featured cartoon people – Joy, Sadness, Anger, Fear, and Disgust – little people who walk around inside Riley’s head doing the kind of thing people do. (Is it actually inside her head? In the Beano’s Numskulls cartoon, touted as a forerunner of Inside Out, much of the humour came from the definite physicality of the way they worked; here the five emotions view the world through a screen rather than eyeholes and use a console rather than levers. They could in fact be anywhere, or in some undefined conceptual space.) It’s an odd set (aren’t Joy and Sadness the extremes of a single spectrum?), and unexpectedly negative too: this is technically a Disney film, and it rates anger, fear, and disgust as more important and powerful than love? If it were full-on Disney the leading emotions would surely be Happy-go-lucky Feelin’s and Wishing on a Star.

There are some things to be said in favour of homunculi. Most people would agree that we contain a smaller entity that does all the thinking: the brain, or maybe something even narrower than that (proponents of the Extended Mind would very much not agree, of course). Daniel Dennett has also spoken up for homunculi, suggesting that they’re fine so long as the homunculi at each level get simpler; in the end we get to ones that need no explanation. That’s alright, except that I don’t think the beings in this Dennettian analysis are really homunculi – they’re more like black boxes. The true homunculus has all the capacities of a full human being rather than a simpler subset.

We see the problem that arises from that in Inside Out. The emotions are too rounded; they all seem to have a full set of feelings themselves; they all show fear, and Joy gets sad. How can that work?

The other thing that seems not quite right to me is unfortunately the climactic revelation that Sadness has a legitimate role. It is, apparently, to signal for help. In my view that can’t really be the whole answer, and the film unintentionally shows us the absurdity of the idea; it asks us to believe that being joyless, angry and withdrawn, behaving badly and running away are not enough to evoke concern and sympathetic attention from parents; you don’t get attention, or your hug, till they see the tears.

No doubt sadness does often evoke support, but I can’t think that’s its main function. Funnily enough, Sadness herself briefly articulates a somewhat better idea early in the film. It’s muttered so quickly I didn’t quite get it, but it was something about providing an interval for adjustment and emotional recalibration. That sounds a bit more promising; I suspect it was something a real psychologist told Pixar at some stage – something they felt they should mention for completeness but that didn’t help the story.

Films and TV do shape our mental models; The Matrix laid down tramlines for many metaphysical discussions and Star Trek’s transporters are often invoked in serious discussions of personal identity. Worse, fears about AI have surely been helped along by Hollywood’s relentless and unimaginative use of the treacherous robot that turns on its creators. I hope Inside Out is not going to reintroduce homunculi to general thinking about the mind.