Why does the question of determinism versus free will continue to trouble us? There’s nothing too strange, perhaps, about a philosophical problem remaining in play for a while – or even for a few hundred years: but why does this one have such legs and still provoke such strong and contrary feelings on either side?

For me the problem itself is solved – and the right solution, broadly speaking, has been known for centuries: determinism is true, but we also have free choice in a meaningful sense. St Augustine, to go no earlier, understood that free will and predestination are not contradictory, but people still find it confusing that he spoke up for both.

If this view – compatibilism – is right, why hasn’t it triumphed? I’m coming to think that the strongest opposition on the question might not in fact be between the hard-line determinists and the uncompromising libertarians but rather a matter of both ends against the middle. Compatibilists like me are happy to see the problem solved and determinism reconciled with common sense, whereas people from both the extremes insist that that misses something crucial. They believe the ‘loss’ of free will radically undercuts and changes our traditional sense of what we are as human beings. They think determinism, for better or worse, wipes away some sacred mark from our brows. Why do they think that?

Let’s start by quickly solving the old problem. Part one: determinism is true. It looks, with some small reservations about the interpretation of some esoteric matters, as if the laws of physics completely determine what happens. Actually even if contemporary physics did not seem to offer the theoretical possibility of full determination, we should be inclined to think that some set of rules did. A completely random or indeterminate world would seem scarcely distinguishable from a nullity; nothing definite could be said about it and no reliable predictions could be made because everything could be otherwise. That kind of scenario, of disastrous universal incoherence, is extreme, and I admit I know of no absolute reason to rule out a more limited, demarcated indeterminacy. Still, the idea of delimited patches of randomness seems odd, inelegant and possibly problematic. God, said Einstein, does not play dice.

Beyond that, moreover, there’s a different kind of point. We came into this business in pursuit of truth and knowledge, so it’s fair to say that if there seemed to be patches of uncertainty we should want to do our level best to clarify them out of existence. In this sense it’s legitimate to regard determinism not just as a neutral belief, but as a proper aspiration. Even if we believe in free will, aren’t we going to want a theory that explains how it works, and isn’t that in the end going to give us rules that determine the process? Alright, I’m coming to the conclusion too soon: but in this light I see determinism as a thing that lovers of truth must strive towards (even if in vain) and we can note in passing that that might be some part of the reason why people champion it with zeal.

We’re not done with the defence, anyway. One more thing we can do against indeterminacy is to invoke the deep old principle which holds that nothing comes of nothing, and that nothing therefore happens unless it must; if something particular must happen, then the compulsion is surely encapsulated in some laws of nature.

Further still, even if none of that were reliable, we could fall back on a fatalistic argument. If it is true that on Tuesday you’ll turn right, then it was true on Monday that you would turn right on Tuesday; so your turning that way rather than left was already determined.

Finally, we must always remember that failure to establish determinism is not success in establishing liberty. Determinism looks to be true; we should try to establish its truth if by any means we can: but even if we fail, that failure in itself leaves us not with free will but with an abhorrent void of the unknowable.

Part two: we do actually make free decisions. Determinism is true, but it bites firmly only at a low level of description; not truly above the level of particles and forces. To look for decisions or choices at that level is simply a mistake, of the same general kind as looking for bicycles down there. Their absence from the micro level does not mean that cyclists are systematically deluded. Decisions are processes of large neural structures, and I suggest that when we describe them as free we simply mean the result was not constrained externally. If I had a gun to my head or my hands were tied, then turning left was not a free decision. If no-one could tell which way I should go without knowledge of what was going on in the large neural structures that realise my mind, then it was free. There are of course degrees of freedom and plenty of grey areas, but the essential idea is clear enough. Freedom is just the absence of external constraint on a level of description where people and decisions are salient, useful concepts.

For me, and I suppose other compatibilists, that’s a satisfying solution and matches well with what I think I’ve always meant when I talk about freedom. Indeed, it’s hard for me to see what else freedom could mean. What if God did play dice after all? Libertarians don’t want their free decisions to be random, they want them to belong to them personally and reflect consideration of the circumstances; the problem is that it’s challenging for them to explain in that case how the decisions can escape some kind of determination. What unites the libertarians and the determinists is the conviction that it’s that inexplicable, paradoxical factor we are concerned to affirm or deny, and that its presence or absence says something important about human nature. To quietly do without the magic, as compatibilists do, is on their view to shoot the fox and spoil the hunt. What are they both so worried about?

I speculate that one factor here is a persistent background confusion. Determinism, we should remember, is an intellectual achievement, both historically and often personally. We live in a world where nothing much about human beings is certainly determined; only careful reflection reveals that in the end, at the lowest level of detail and at the very last knockings of things, there must be certainty. This must remain a theoretical conclusion, certainly so far as human beings are concerned; our behaviour may be determinate, but it is not determinable; certainly not in practice and very probably not even in theory, given the vast complexity, chaotic organisation and marvellously emergent properties of our brains. Some of those who deny determinism may be moved, not so much by explicit rejection of the true last-ditch thesis, but by the certainty that our minds are not determinable by us or by anyone. This muddying of the waters is perpetuated even now by arguments about how our minds may be strongly influenced by high-level factors: peer pressure, subliminal advertising, what we were given to read just before making a decision. These arguments may be presented in favour of determinism together with the water-tight last-ditch case, but they are quite different, and the high-level determinism they support is not certainly true but rather an eminently deniable hypothesis. In the end our behaviour is determined, but can we be programmed like robots by higher level influences? Maybe in some cases – generally, probably not.

The second, related factor is a certain convert’s enthusiasm. If determinism is a personal intellectual achievement it may well be that we become personally invested in it. When we come to appreciate its truth for the first time it may seem that we have grasped a new perspective and moved out of the confused herd to join the scientifically enlightened. I certainly felt this on my first acquaintance with the idea; I remember haranguing a friend about the truth of determinism in a way that must, with hindsight, have resembled religious conviction and been very tiresome.

“Yes, yes, OK, I get it,” he would say in a vain attempt to stop the flow.

Now no-one lives pure determinism; we all go on behaving as if agency and freedom were meaningful. The fact that this involves an unresolved tension between your philosophy and the ideas about people you actually live by was not a deterrent to me then, however; in fact it may even have added a glamorous sheen of esoteric heterodoxy to the whole thing. I expect other enthusiasts may feel the same today. The gradual revelation, some years later, that determinism is true but actually not at all as important as you thought is less exciting: it has rather a dying fall to it and may be more difficult to assimilate. Consistency with common sense is perhaps a game for the middle aged.

“You know, I’ve been sort of nuancing my thinking about determinism lately…”

“Oh, what, Peter? You made me live through the conversion experience with you – now I have to work through your apostasy, too?”

On the libertarian side, it must be admitted that our power of decision really does look sort of strange, with a scope far exceeding that of mere absence of constraint. There are at least two reasons for this. One is our ability to use intentionality to think about anything whatever, and base our decisions on those thoughts. I can think about things that are remote, non-existent, or even absurd, without any difficulty. Most notably, when I make decisions I am typically thinking about future events: will I turn left or right tomorrow? How can future events influence my behaviour now?

It’s a bit like the time machine case where I take the text of Hamlet back in time and give it to Shakespeare – who never actually produced it but now copies it down and has it performed. Who actually wrote it, in these circumstances? No-one, it just appeared at some point. Our ability to think about the future, and so use future goals as causes of actions now, seems in the same way to bring our decisions into being out of nowhere inside us. There was no prior cause, only later ones, so it really seems as if the process inverts and disrupts the usual order of causality.

We know this is indeed remarkable but it isn’t really magic. On my view it’s simply that our recognition of various entities that extend over time allows a kind of extrapolation. The actual causal processes, down at that lowest level, tick away in the right order, but our pattern-matching capacity provides processes at a higher level which can legitimately be said to address the future without actually being caused by it. Still, the appearance is powerful, and we may be impatient with the kind of materialist who prefers to live in a world with low ceilings, insists on everything being material and denies any independent validity to higher levels of description. Some who think that way even have difficulty accepting that we can think directly about mathematical abstractions – quite a difficult posture for anyone who accepts the physics that draws heavily on them.

The other thing is the apparent, direct reality of our decisions. We just know we exercise free will, because we experience the process immediately. We can feel ourselves deciding. We could be wrong about all sorts of things in the world, but how could I be wrong about what I think? I believe the feeling of something ineffable here comes from the fact that we are not used to dealing with reality. Most of what we know about the world is a matter of conscious or unconscious inference, and when we start thinking scientifically or philosophically it is heavily informed by theory. For many people it starts to look as if theory is the ultimate bedrock of things, rather than the thin layer of explanation we place on top. For such a mindset the direct experience of one’s own real thoughts looks spooky; its particularity, its haecceity, cannot be accounted for by theory and so looks anomalous. There are deep issues here, but really we ought not to be foxed by simple reality.

That’s it, I think, in brief at least. More could be said of course; more will be said. The issues above are like optical illusions: just knowing how they work doesn’t make them go away, and so minds will go on boggling. People will go on furiously debating free will: that much is both determined and determinable.

Scott Bakker has taken an interesting new approach to his Blind Brain Theory (BBT): in two posts on his blog he considers what kind of consciousness aliens could have, and concludes that the process of evolution would put them into the same hole where, on his view, we find ourselves.

BBT, in sketchy summary, says that we have only a starvation diet of information about the cornucopia that really surrounds us; but the limitations of our sources and cognitive equipment mean we never realise it. To us it looks as if we’re fully informed, and the glitches of the limited heuristics we use to cobble together a picture of the world, when turned on ourselves in particular, look to us like real features. Our mental equipment was never designed for self-examination and attempting metacognition with it generates monsters; our sense of personhood, agency, and much about our consciousness comes from the deficits in our informational resources and processes.

Scott begins his first post by explaining his own journey from belief in intentionalism to eliminativist scepticism about it, and sternly admonishes those of us still labouring in intentionalist error for our failure to produce a positive account of how human minds could have real intentionality.

What about aliens – Scott calls the alien players in his drama ‘Thespians’ – could they be any better off than we are? Evolution would have equipped them with senses designed to identify food items, predators, mates, and so on; there would be no reason for them to have mental or sensory modules designed to understand the motion of planets or stars, and turning their senses on their own planet would surely tell them incorrectly that it was motionless. Scott points out that Aristotle’s argument against the movement of the Earth is rather good: if the Earth were moving, we should see shifts in the relative position of the stars, just as the relative position of objects in a landscape shifts when we view them from the window of a moving train; yet the stars remain precisely fixed. The reasoning is sound; Aristotle simply did not know and could not imagine the mind-numbingly vast distances that make the effect invisibly small to unaided observation. The unrealised lack of information led Aristotle into misapprehension, and it would surely do the same for the Thespians; a nice warm-up for the main argument.

Now it’s a reasonable assumption that the Thespians would be social animals, and they would need to be able to understand each other. They’d get good at what is often somewhat misleadingly called theory of mind; they’d attribute motives and so on to each other and read each other’s behaviour in a fair bit of depth. Of course they would have no direct access to other Thespians’ actual inner workings. What happens when they turn their capacity for understanding other people on themselves? In Scott’s view, plausibly enough, they end up with quite a good practical understanding whose origins are completely obscure to them; the lashed-up mechanisms that supply the understanding are neither available to conscious examination nor in fact even visible.

This is likely enough, and in fact doesn’t even require us to think of higher cognitive faculties. How do we track a ball flying through the air so we can catch it? Most people would be hard put to describe what the brain does to achieve that, though in practice we do it quite well. In fact, those who could put down the algorithm would most likely get it wrong too, because it turns out the brain doesn’t use the optimal method: it uses a quick and easy one that works OK in practice but doesn’t get your hand to the right place as quickly as it could.
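The contrast can be made concrete with a toy simulation. Everything here is invented for illustration – the launch velocities, the fielder’s speed, and the ‘run toward wherever the ball is now’ heuristic itself, which is a deliberately crude stand-in rather than the strategy real fielders actually use. The point is just that a cheap moment-to-moment rule can get you roughly to the right place, while precomputing the landing spot gets you there exactly and with time to spare:

```python
G = 9.81    # gravity, m/s^2
DT = 0.001  # simulation timestep, s

def landing_point(vx, vy):
    """Closed-form landing spot and flight time of a ball
    launched from the origin with velocity (vx, vy)."""
    t_flight = 2 * vy / G
    return vx * t_flight, t_flight

def run_fielder(vx, vy, start, max_speed, strategy):
    """Step the ball's flight forward; the fielder moves along the
    ground each tick according to the chosen strategy. Returns the
    fielder's distance from the ball when it lands."""
    x_land, t_flight = landing_point(vx, vy)
    p = start
    t = 0.0
    while t < t_flight:
        bx = vx * t  # ball's current ground position
        # 'precompute': head straight for the known landing spot.
        # 'chase': head for wherever the ball is right now.
        target = x_land if strategy == "precompute" else bx
        step = max_speed * DT
        if abs(target - p) <= step:
            p = target
        else:
            p += step if target > p else -step
        t += DT
    return abs(x_land - p)

# Same made-up ball and fielder, two strategies.
d_optimal = run_fielder(10.0, 15.0, start=40.0, max_speed=8.0,
                        strategy="precompute")
d_chase = run_fielder(10.0, 15.0, start=40.0, max_speed=8.0,
                      strategy="chase")
```

With these invented numbers the precomputing fielder arrives on the spot early and waits, while the chasing fielder ends up a metre or two short of the catch: good enough to come close, but not the optimal route.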

For Scott all this leads to a gloomy conclusion; much of our view about what we are and our mental capacities is really attributable to systematic error, even to something we could regard as a cognitive deficit or disease. He cogently suggests how dualism and other errors might arise from our situation.

I think the Thespian account is the most accessible and persuasive account Scott has given to date of his view, and it perhaps allows us to situate it better than before. I think the scope of the disaster is a little less than Scott supposes, in two ways. First, he doesn’t deny that routine intentionality actually works at a practical level, and I think he would agree we can even hope to give a working level description of how that goes. My own view is that it’s all a grand extension of our capacity for recognition (and I was more encouraged than not by my recent friendly disagreement with Marcus Perlman over on Aeon Ideas; I think his use of the term ‘iconic’ is potentially misleading, but in essence I think the views he describes are right and enlightening), but people here have heard all that many times. Whether I’m right or not we probably agree that some practical account of how the human mind gets its work done is possible.

Second, on a higher level, it’s not completely hopeless. We are indeed prone to dreadful errors and to illusions about how our minds work that cannot easily be banished. But we kind of knew that. We weren’t really struggling to understand how dualism could possibly be wrong, or why it seemed so plausible.  We don’t have to resort to those flawed heuristics; we can take our pure basic understanding and try again, either through some higher meta-meta-cognition (careful) or by simply standing aside and looking at the thing from a scientific third-person point of view. Aristotle was wrong, but we got it right in the end, and why shouldn’t Scott’s own view be part of getting it righter about the mind? I don’t think he would disagree on that either (he’ll probably tell us); but he feels his conclusions have disastrous implications for our sense of what we are.

Here we strike something that came up in our recent discussion of free will and the difference between determinists and compatibilists. It may be more a difference of temperament than belief. People like me say, OK, no, we don’t have the magic abilities we looked to have, so let’s give those terms a more sensible interpretation and go merrily on our way. The determinists, the eliminativists, agree that the magic has gone – in fact they insist – but they sit down by the roadside, throw ashes on their heads, and mourn it. They share with the naive, the libertarians, and the believers in a magic power of intentionality, the idea that something essential and basically human is lost when we move on in this way. Perhaps people like me came in to have the magic explained and are happy to see the conjuring tricks set out; others wanted the magic explained and for it to remain magic?

My Aeon Ideas Viewpoint on ‘Is the Self an Illusion?’.

I do sort of get why people are so keen on the idea that the self is illusory, but what puzzles me slightly is the absence of any middling, commonsensical camp. When it comes to Free Will, we have the hard-nosed deniers on the one hand and the equally uncompromising people who think determinism debases human nature; but there are quite a lot of people in the middle offering various compatibilist arguments that seek to let us have more or less the traditional concept of freedom and rigorous scientific materialism at the same time. I’m one, really. There just doesn’t seem to be the same school of thought in respect of the self; people who recognise the problem but regard the mission as sorting it out rather than erasing the concept from our vocabulary.

A better neurophenomenology, the answer to the Hard Problem? Kirchhoff and Hutto propose a slightly different way forward.

The Hard Problem, of course, is about reconciling the physical description of a conscious event with the way it feels from inside. This is the ‘explanatory gap’. Most of us these days are monists of one kind or another; we believe the world ultimately consists of one kind of thing, usually matter, without a second realm of spirits or other metaphysical entities on top. Some people would, accordingly, seek to reduce the mental to the physical, perhaps even eliminating the mental so that our monism can be tidy (I’m a messy monist myself). Neurophenomenology, as formulated by Varela and briefly described in Kirchhoff and Hutto’s paper, does not look for a reduction, merely an explanation.
It does this by putting aside any idea of representations or computations; instead it proposes a practical research programme in which introspective reports of experience are matched with scans or other physical investigations. By elucidating the structure of both experience and physical event, the project aims to show how the two sides of experience constrain each other.

This, though, doesn’t seem enough for Kirchhoff and Hutto. Researching the two sides of the matter together is fine, but how will it ever show constraints, or generate an explanation? It seems it will be doomed to merely exhibiting correlation. Moreover, rather than resolving the explanatory gap, this approach seems to consolidate it.
These are reasonable objections, but I don’t think it’s quite as hopeless as that. The aspiration must surely be that the exploration comes together by exhibiting, not just correlation, but an underlying identity of structure? We might hope that the physical structure of the visual cortex tells us something about our colour space and visual experience that matches the structure of our direct experience of colour, for example, in such a way that the mysterious quality of that experience is attenuated and eventually even dispelled. Other kinds of explanation might emerge. When I take off my glasses and look at the surface of a brightly lit swimming pool, I see a host of white circles, all the same size and filled with the suggestion of a moiré pattern, bobbing daintily about. In a pre-scientific era, this would have been hard to account for, but now I know it is entirely the result of some facts about the shape of my eyes and the lenses in them, and phenomenological worries don’t even get started. It could be that neurophenomenology can succeed in offering explanations good enough to remove the worries that currently exist. The great thing about it, of course, is that even if that hope is philosophically misplaced, elucidating the structure of experience from both ends is a very worthwhile project anyway, one that can surely only yield valuable new understanding.

However, what Kirchhoff and Hutto propose is that we go a little further and abolish the gap. Instead of affirming the separateness of the physical and the phenomenal, they suggest, we should recognise that they represent two different descriptions of a single thing.

That might seem a modest adjustment, but they also assert that the phenomenal character of experience actually arises not from the mere physics, but from the situation of that experience, taking place in an enactive, embodied context. So if we hold a book, we can see it; if we shut our eyes, we continue to feel it; but we also have a more complex engagement with it from our efforts to hold up what we know is a book, the feel of pages, and so on. There’s all sorts of stuff going on that isn’t the mere physical contact, and that’s what yields the character of the experience.

I see that, I think, but it’s a little odd. If we imagine floating in a sensory deprivation tank and gazing at a smooth, uniform red wall, we seem to be free of a lot of the context we’d normally have, and on this view it’s a bit hard to see where the phenomenal intensity would be coming from (perhaps from the remembered significance of red?). We might suspect that Kirchhoff and Hutto are getting their phenomenal content smuggled in with the more complex phenomenal experience that they implicitly demand by requiring context, an illicit supplement that remains unexplained.

On this, why not let a thousand flowers grow; go ahead and develop explanations according to any exploratory project you prefer, and then we’ll have a look. Some of them might be good even if your underlying theory is wrong.
I think it is, incidentally. For me the explanatory gap is always misconstrued; the real gap is not between physics and phenomenology, it’s between theory and actuality, something that shouldn’t puzzle us, or at least not in the way it always does.

The homunculus returns? I finally saw Inside Out (possible spoilers – I seem to be talking about films a lot recently). Interestingly, it foregrounds a couple of problematic ways of thinking about the mind.

One, obviously, is the notorious homuncular fallacy. This is the tendency to explain mental faculties, say consciousness, by attributing them to a small entity within the mind – a “little man” that just has all the capacities of the whole human being. It’s almost always condemned because it appears to do no more than defer the real explanation. If it’s really a little man in your head that does consciousness, where does his consciousness come from? An even smaller man, in his head?

Inside Out of course does the homuncular thing very explicitly. The mind of the young girl Riley, the main character, where most of the action is set, is controlled by five primal emotions who are all fully featured cartoon people – Joy, Sadness, Anger, Fear, and Disgust, little people who walk around inside Riley’s head doing the kind of thing people do. (Is it actually inside her head? In the Beano’s Numskulls cartoon, touted as a forerunner of Inside Out, much of the humour came from the definite physicality of the way they worked; here the five emotions view the world through a screen rather than eyeholes and use a console rather than levers. They could in fact be anywhere, or in some undefined conceptual space.) It’s an odd set (aren’t Joy and Sadness the extremes of a single spectrum?). Unexpectedly negative too: this is technically a Disney film, and it rates anger, fear, and disgust as more important and powerful than love? If it were full-on Disney the leading emotions would surely be Happy-go-lucky Feelin’s and Wishing on a Star.

There are some things to be said in favour of homunculi. Most people would agree that we contain a smaller entity that does all the thinking; the brain, or maybe even narrower than that (proponents of the Extended Mind would very much not agree, of course). Daniel Dennett has also spoken out for homunculi, suggesting that they’re fine so long as the homunculi in each layer get simpler; in the end we get to ones that need no explanation. That’s alright, except that I don’t think the beings in this Dennettian analysis are really homunculi – they’re more like black boxes. The true homunculus has all the capacities of a full human being rather than a simpler subset.

We see the problem that arises from that in Inside Out. The emotions are too rounded; they all seem to have a full set of feelings themselves; they all show fear, and Joy gets sad. How can that work?

The other thing that seems not quite right to me is unfortunately the climactic revelation that Sadness has a legitimate role. It is, apparently, to signal for help. In my view that can’t really be the whole answer, and the film unintentionally shows us the absurdity of the idea; it asks us to believe that being joyless, angry and withdrawn, behaving badly and running away are not enough to evoke concern and sympathetic attention from parents; you don’t get the attention, or the hug, till they see the tears.

No doubt sadness does often evoke support, but I can’t think that’s its main function. Funnily enough, Sadness herself briefly articulates a somewhat better idea early in the film. It’s muttered so quickly I didn’t quite get it, but it was something about providing an interval for adjustment and emotional recalibration. That sounds a bit more promising; I suspect it was what a real psychologist told Pixar at some stage; something they felt they should mention for completeness but that didn’t help the story.

Films and TV do shape our mental models; The Matrix laid down tramlines for many metaphysical discussions and Star Trek’s transporters are often invoked in serious discussions of personal identity. Worse, fears about AI have surely been helped along by Hollywood’s relentless and unimaginative use of the treacherous robot that turns on its creators. I hope Inside Out is not going to reintroduce homunculi to general thinking about the mind.

An interesting piece from Evan Thompson on the ‘stream of consciousness’. The phrase is probably best known now as the name for a style of modern literary prose, but it originates with William James. Thompson compares James’ concept of a smoothly rolling stream with the view taken by the Buddhist Abhidharma tradition, which holds that closer consideration shows the stream to consist of discrete parts.
Thompson quotes two pieces of experimental evidence which broadly suggest the Abhidharma view is closer to the truth. Experiments conducted by Francisco Varela on the young Thompson himself suggested that perception varied in harmony with the brain’s alpha waves, although it seems the results have not been successfully replicated since. The other study related to the ‘attentional blink’ in which a stimulus rapidly following another is likely to be missed. It seems successful attempts by the subjects were accompanied by a kind of phase locking with theta rhythms; certain meditative techniques of mindfulness improved both the theta phase locking and the ability to perceive the following stimulus.
Overall, Thompson concludes that conscious perception isn’t smoothly regular, but comes in pulses. Perhaps we could say that it’s more like the flow of a bloodstream than that of a river.
Still, though – is consciousness actually continuous? Suppose in fact that it was composed of a series of static moments, like the succeeding frames of a film. In a film the frames follow quickly, but we can imagine longer intervals if we like. However long the gaps, the story told by the film is unaffected and retains all its coherence; the discontinuity can only be seen by an observer outside the film. In the case of consciousness our experience actually is the succession of moments, so if consciousness were discontinuous we should never be aware of it directly. If we noticed anything at all, it would seem to us to be discontinuity in the external world.
It’s not, of course, as simple as that; there are two particular issues. One is that consciousness is not automatically self-consciousness. To draw conclusions about our conscious state requires a second conscious state which is about the first one. We’ve remarked here before on Comte’s objection that the second state necessarily disrupts the first, making reliable introspection impossible: James’ view was that the second state had to be later, so that introspection was always retrospection.
This obviously raises many potential complications; all I want to do is pick out one possibility: that when we introspect the first and second order states alternate. Perhaps what we do is a moment of first-order thinking, then a moment of second order reflection on the moment just past, then another moment of simple first-order thought and so on; a process a bit like an artist flicking his gaze back and forth between subject and canvas.
If that’s what happens, then it would clearly introduce a kind of pulse into our thoughts. This raises the curious possibility that our normal thoughts run smoothly, but start to pulsate exactly when we start to think about them. The pulse would be an artefact of our own introspection.
The other issue is more fundamental. Both James and the Abhidharma school apparently assume that our thoughts seem to come in a continuous flow. Well, mine don’t. Yes, at times there is a coherent narrative sequence or a flowing perceptual experience, but these often seem like achievements of my concentration rather than the natural state of my mind. At least as often, things pop up unbidden, stop and start, and generally behave less like a flow and more like one damn thing after another. It’s noteworthy that the stream of consciousness in literature is not characterised by smooth logical development, but by a succession of fragmentary ideas and perceptions, a more realistic picture in many ways.
However, reflecting on a train of thought afterwards we can sometimes see links that we didn’t notice before. Several of our thoughts which seemed unrelated all bear on a particular anxiety or concern, say; scarcely a novel phenomenon in either psychology or literature. Hypothetically we might guess that our conscious moments are indeed part of a coherent stream, but one which includes important unconscious or subconscious elements. If we could see the whole process it might make fine logical sense, but all we get are the points where the undulating serpent’s back breaks the surface.
Neither of those issues disturbs Thompson’s modest conclusion that there is a kind of pulse on the surface of the stream; but there is deep water underneath, I think.

The Stanford Encyclopaedia of Philosophy is twenty years old. It gives me surprisingly warm feelings towards Stanford that this excellent free resource exists. It’s written by experts, continuously updated, and amazingly extensive. Long may it grow and flourish!

Writing an encyclopaedia is challenging, but an encyclopaedia of philosophy must take the biscuit. For a good encyclopaedia you need a robust analysis of the topics in the field so that they can be dealt with systematically, comprehensively, and proportionately. In philosophy there is never a consensus, even about how to frame the questions, never mind about what kind of answers might be useful. This must make it very difficult: do you try to cover the most popular schools of thought in an area? All the logically possible positions one might take up?  A purely historical survey? Or summarise what the landscape is really like, inevitably importing your own preconceptions?

I’ve seen people complain that the SEP is not very accessible to newcomers, and I think the problem is partly that the subject is so protean. If you read an article in the SEP, you’ll get a good view and some thought-provoking ideas; but what a noob looks for are a few pointers and landmarks. If I read a biography I want to know quickly about the subject’s main works, their personal life, their situation in relation to other people in the field, the name of their theory or school, and so on. Most SEP subject articles cannot give you this kind of standard information in relation to philosophical problems. There is a real chance that if you read up on a SEP article and then go and talk to professionals, they won’t really get what you’re talking about. They’ll look at you blankly and then say something like:

“Oh, yes, I see where you’re coming from, but you know, I don’t really think of it that way…”

It’s not because the article you read was bad, it’s because everyone has a unique perspective on what the problem even is.

Let’s look at Consciousness. The content page has:

consciousness (Robert Van Gulick)

  • animal (Colin Allen and Michael Trestman)
  • higher-order theories (Peter Carruthers)
  • and intentionality (Charles Siewert)
  • representational theories of (William Lycan)
  • seventeenth-century theories of (Larry M. Jorgensen)
  • temporal (Barry Dainton)
  • unity of (Andrew Brook and Paul Raymont)

All interesting articles, but clearly not a systematic treatment based on a prior analysis. It looks more like the set of articles that just happened to get written with consciousness as part of the subject. Animal consciousness, but no robot consciousness? Temporal consciousness, but no qualia or phenomenal consciousness? But I’m probably looking in the wrong place.

In Robert Van Gulick’s main article we have something that looks much more like a decent shot at a comprehensive overview, but though he’s done a good job it won’t be a recognisable structure to anyone who hasn’t read this specific article. I really like the neat division into descriptive, explanatory, and functional questions; it’s quite helpful and illuminating: but you can’t rely on anyone recognising it. (Next time you meet a professor of philosophy, ask him: if we divide the problems of consciousness into three, and the first two are descriptive and explanatory, what would the third be? Maybe he’ll say ‘Functional’, but maybe he’ll say ‘Reductive’ or something else – ‘Intentional’ or ‘Experiential’; I’m pretty sure he’ll need to think about it.) Under ‘Concepts of Consciousness’ Van Gulick has ‘Creature Consciousness’: our noob would probably go away imagining that this is a well-known topic which can be mentioned in confident expectation of the implications being understood. Alas, no: I’ve read quite a few books about consciousness and can’t immediately call to mind any other substantial reference to ‘Creature Consciousness’. I’m pretty sure that unless you went on to explain that you were differentiating it from ‘State Consciousness’ and ‘Consciousness as an Entity’, you might be misunderstood.

None of this is meant as a criticism of the piece: Van Gulick has done a great job on most counts (the one thing I would really fault is that the influence of AI in reviving the topic and promoting functionalist views is, I think, seriously underplayed). If you read the piece you will get about as good a view of the topic as that many words could give you, and if you’re new to it you will run across some stimulating ideas (and some that will strike you as ridiculous). But when you next read a paper on philosophy of mind, you’ll still have to work out from scratch how the problem is being interpreted. That’s just the way it is.

Does that mean philosophy of mind never gets anywhere? No, I really don’t think so, though it’s outstandingly hard to provide proof of progress. In science we hope to boil down all the hypotheses to a single correct theory: in philosophy perhaps we have to be happy that we now have more answers (and more problems) than ever before.

And the SEP has got most of them! Happy Birthday!

I’ve been having fun recently, writing Viewpoints and commenting on Aeon Ideas; if you can bear even more noodling from me, why not have a look – or sign up and join in?

I’ve also been attempting some long-overdue upgrades. As a result Conscious Entities should now be fully compatible with mobile devices. There’s also a Facebook page and a Twitter feed (@peter_hankins), though I have no idea what I’m doing with those, so stand by for embarrassing mistakes.

There were a number of reports recently that a robot had passed ‘one of the tests for self-awareness’. They seem to stem mainly from this New Scientist piece (free registration may be required to see the whole thing, but honestly I’m not sure it’s worth it). That in turn reported an experiment conducted by Selmer Bringsjord of Rensselaer, due to be presented at the Ro-Man conference in a month’s time. The programme for the conference looks very interesting and the experiment is due to feature in a session on ‘Real Robots That Pass Human Tests of Self Awareness’.

The claim is that Bringsjord’s bot passed a form of the Wise Man test. The story behind the Wise Man test has three WMs tested by the king; he makes them wear hats which are either blue or white: they cannot see their own hat but can see both of the others. They’re told that there is at least one blue hat, and that the test is fair; to be won by the first WM who correctly announces the colour of his own hat. There is a chain of logical reasoning which produces the right conclusion: we can cut to the chase by noticing that the test can’t be fair unless all the hats are the same colour, because all other arrangements give one WM an advantage. Since at least one hat is blue, they all are.
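The fairness shortcut can be checked by brute force. Here is a toy sketch (my own illustration, nothing to do with Bringsjord’s code) that enumerates every hat assignment consistent with the king’s announcement and keeps only the ones that give no wise man an advantage:

```python
from itertools import product

# Enumerate the eight possible hat assignments (True = blue, False = white)
# and keep those satisfying the king's announcement: at least one blue hat.
assignments = [hats for hats in product([True, False], repeat=3) if any(hats)]

# A "fair" assignment gives no wise man an informational advantage, so it
# must look the same from every seat: all three hats the same colour.
fair = [hats for hats in assignments if len(set(hats)) == 1]

print(fair)  # [(True, True, True)] -- the only fair option is all blue
```

Since all-white is ruled out by the announcement, the only symmetric assignment left is all blue, which is exactly the shortcut conclusion.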

You’ll notice that this is essentially a test of logic, not self awareness. If solving the problem required being aware that you were one of the WMs, then we who merely read about it wouldn’t be able to come up with the answer – because we’re not one of the WMs and couldn’t possibly have that awareness. But there’s sorta, kinda something about working with other people’s points of view in there.

Bringsjord’s bots actually did something rather different. They were apparently told that two of the three had been given a ‘dumbing’ pill that stopped them from being able to speak (actually a switch had been turned off; were the robots really clever enough to understand that distinction and the difference between a pill and a switch?); then they were asked ‘did you get the dumbing pill?’ Only one, of course, could answer, and duly answered ‘I don’t know’: then, having heard its own voice, it was able to go on to say ‘Oh, wait, now I know…!’
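The self-knowledge update involved can be caricatured in a few lines. This is purely my own toy model (the details of Bringsjord’s actual implementation are not given in the reports); the function and its parameters are invented for illustration:

```python
# Toy model of the dumbing-pill test (an invented sketch, not Bringsjord's code).
def answer(robot_id, dumbed, heard_speaker=None):
    """Return what a robot can truthfully say about its own state."""
    if robot_id in dumbed:
        return None                      # dumbed robots cannot speak at all
    if heard_speaker is None:
        return "I don't know"            # no evidence yet about its own state
    if heard_speaker == robot_id:
        # Hearing its own voice settles the question.
        return "I was not given the dumbing pill"

# Robots 1 and 2 are dumbed; robot 0 speaks, hears itself, and updates.
first = answer(0, dumbed={1, 2})
second = answer(0, dumbed={1, 2}, heard_speaker=0)
print(first, "->", second)
```

The whole trick, on this reading, is the second call: the robot treats its own audible answer as new evidence about itself, which is all the ‘self-awareness’ the test requires.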

This test is obviously different from the original in many ways; it doesn’t involve the same logic. Fairness, an essential factor in the original version, doesn’t matter here, and in fact the test is egregiously unfair; only one bot can possibly win. The bot version seems to rest mainly on the robot being able to distinguish its own voice from those of the others (of course the others couldn’t answer anyway; if they’d been really smart they would all have answered ‘I wasn’t dumbed’, knowing that if they had been dumbed the incorrect conclusion would never be uttered). It does perhaps have a broadly similar sorta, kinda relation to awareness of points of view.

I don’t propose to try to unpick the reasoning here any further: I doubt whether the experiment tells us much, but as presented in the New Scientist piece the logic is such a dog’s breakfast and the details are so scanty it’s impossible to get a proper idea of what is going on. I should say that I have no doubt Bringsjord’s actual presentation will be impeccably clear and well-justified in both its claims and its reasoning; foggy reports of clear research are more common than vice versa.

There’s a general problem here about the slipperiness of defining human qualities. Ever since Plato attempted to define a man as ‘a featherless biped’ and was gleefully refuted by Diogenes with a plucked chicken, every definition of the special quality that defines the human mind seems to be torpedoed by counter-examples. Part of the problem is a curious bind whereby the task of definition requires you to give a specific test task; but it is the very non-specific open-ended generality of human thought you’re trying to capture. This, I expect, is why so many specific tasks that once seemed definitively reserved for humans have eventually been performed by computers, which perhaps can do anything which is specified narrowly enough.

We don’t know exactly what Bringsjord’s bots did, and it matters. They could have been programmed explicitly just to do exactly what they did do, which is boring: they could have been given some general purpose module that does not terminate with the first answer and shows up well in these circumstances, which might well be of interest; or they could have been endowed with massive understanding of the real world significance of such matters as pills, switches, dumbness, wise men, and so on, which would be a miracle and raise the question of why Bringsjord was pissing about with such trivial experiments when he had such godlike machines to offer.

As I say, though, it’s a general problem. In my view, the absence of any details about how the Room works is one of the fatal flaws in John Searle’s Chinese Room thought experiment; arguably the same issue arises for the Turing Test. Would we award full personhood to a robot that could keep up a good conversation? I’m not sure I would unless I had a clear idea of how it worked.

I think there are two reasonable conclusions we can draw, both depressing. One is that we can’t devise a good test for human qualities because we simply don’t know what those qualities are, and we’ll have to solve that imponderable riddle before we can get anywhere. The other possibility is that the specialness of human thought is permanently indefinable. Something about that specialness involves genuine originality, breaking the system, transcending the existing rules; so just as the robots will eventually conquer any specific test we set up, the human mind will always leap out of whatever parameters we set up for it.

But who knows, maybe the Ro-Man conference will surprise us with new grounds for optimism.

The new film Self/less is based around the transfer of consciousness. An old man buys a new body to transfer into, and then finds that contrary to what he was told, it wasn’t grown specially: there was an original tenant who, moreover, isn’t really gone. I understand that this is not a film that probes the metaphysics of the issue very deeply; it’s more about fight scenes; but the interesting thing is how readily we accept the idea of transferred consciousness.
In fact, it’s not at all a new idea; if memory serves, H. G. Wells wrote a short story on a similar theme: a fit young man with no family is approached by a rich old man in poor health who apparently wants to leave him all his fortune; then he finds himself transferred unwillingly to the collapsing ancient body and the old man making off in his fresh young one. In Wells’ version the twist is that the old man gets killed in a chance traffic accident, thereby dying before his old body does anyway.
The thing is, how could a transfer possibly work? In Wells’ story it’s apparently done with drugs, which is mysterious; more normally there’s some kind of brain-transfer helmet thing. It’s pretty much as though all you needed to do was run an EEG and then reverse the polarity. That makes no sense. I mean, scanning the brain in sufficient detail is mind-boggling to begin with, but the idea that you could then use much the same equipment to change the content of the mind is in another league of bogglement. Weather satellites record the meteorology of the world, but you cannot use them to reset it. This is why uploading your mind to a computer, while highly problematic, is far easier to entertain than transferring it to another biological body.
The big problem is that part of the content of the brain is, in effect, structural. It depends on which neurons are attached to which (and for that matter, which neurons there are), and on the strength and nature of that linkage. It’s true that neural activity is important too, and we can stimulate that electrically; even with induction gear that resembles the SF cliché: but we can’t actually restructure the brain that way.
The intuition that transfers should be possible perhaps rests on an idea that the brain is, as it were, basically hardware, and consciousness is essentially software; but it isn’t really like that. You can’t run one person’s mind on another’s brain.
There is in fact no reason to suppose that there’s much of a read-across between brains: they may all be intractably unique. We know that there tends to be a similar regionalisation of functions in the brain, but there’s no guarantee that your neural encoding of ‘grandmother’ resembles mine or is similarly placed. Worse, it’s entirely possible that the ‘meaning’ of neural assemblages differs according to context and which other structures are connected, so that even if I could identify my ‘grandmother’ neurons, and graft them in in place of yours, they would have a different significance, or none.
Perhaps we need a more sophisticated and bespoke approach. First we thoroughly decipher both brains, and learn how their own idiosyncratic encodings work. Then we work out a translator. This is a task of unimaginable complexity and particularity, but it’s not obviously impossible in principle. I think it’s likely that for each pair of brains you would need a unique translator: a universal one seems such an heroic aspiration that I really doubt its viability: a universal mind map would be an achievement of such interest and power that merely transferring minds would seem like time-wasting games by comparison.
I imagine that even once a translator had been achieved, it would normally only achieve partial success. There would be a limit to how far you could go with nano-bot microsurgery; and there might be certain inherent limitations. Certain ideas, certain memories, might just be impossible to accommodate in the new brain because of their incompatibility with structural or conceptual features that were too deeply embedded to change. The task you were undertaking would be like the job of turning Pride and Prejudice into Don Quixote simply by moving the words around and perhaps in a few key cases allowing yourself one or two anagrams: the result might be recognisable, but it wouldn’t be perfect. The transfer recipient would believe themselves to be Transferee, but they would have strange memory gaps and certain cognitive deficits, perhaps not unlike Alzheimer’s, as well as artefacts: little beliefs or tendencies that existed neither in Transferee nor in Recipient, but were generated incidentally through the process of reconstruction.
It’s a much more shadowy and unappealing picture, and it makes rather clearer the real killer: that though Recipient might come to resemble Transferee, they wouldn’t really be them.
In the end, we’re not data, or a program; we’re real and particular biological entities, and as such our ontology is radically different. I said above that the plausibility of transfers comes from thinking of consciousness as data, which I think is partly true: but perhaps there’s something else going on here; a very old mental habit of thinking of the soul as detachable and transferable. This might be another case where optimists about the capacity of IT are unconsciously much less materialist than they think.