The Philosophy of Delirium

Is there any philosophy of delirium? I remember asserting breezily in the past that there was philosophy of everything – including the actual philosophy of everything and the philosophy of philosophy. But when asked recently, I couldn’t come up with anything specifically on delirium, which in a way is surprising, given that it is an interesting mental state.

Hume, I gather, described two diseases of philosophy, characterised by either despair or unrealistic optimism in the face of the special difficulties a philosopher faces. The negative over-reaction he called melancholy, the positive delirium, in its euphoric sense. But that is not what we are after.

Historically I think that if delirium came up in discussion at all, it was bracketed with other delusional states, hallucinations and errors. Those, of course, have an abundant literature going back many centuries. The possibility of error in our perceptions has been responsible for the persistent (but surely erroneous) view that we never perceive reality, only sense-data, or only our idea of reality, or only a cognitive model of reality. The search for certainty in the face of the constant possibility of error motivated Descartes and arguably most of epistemology.

Clinically, delirium is an organically caused state of confusion. Philosophically, I suggest we should seize on another feature, namely that it can involve derangement of both perception and cognition. Let’s invoke the special power of fiat that philosophers use to create new races of zombies, generate second earths, and enslave the population of China, and say that philosophical delirium is defined as exactly that particular conjunction of derangements. We can then define three distinct kinds of mental disturbance. First, delusion, where our thinking mind is working fine but has bizarre perceptions presented to it. Second, madness, where our perceptions are fine, but our mental responses make no sense. Third, delirium, in which distorted perceptions meet with distorted cognition.

The question then is: can delirium, so defined, actually be distinguished from delusion and madness? Suppose we have a subject who persistently tries to eat their hat. One reading is that the subject perceives the Homburg as a hamburger. A second reading is that they perceive the hat correctly, but think it appropriate to eat hats. The delirious reading might be that they see the hat as a shoe and believe shoes are to be eaten. For any possible pattern of behaviour, it seems, readings can be found that make it consistent with any of the three states.

That’s from a third-person point of view, of course, but surely the subject knows which state applies? They can’t reliably tell us, because their utterances are open to multiple interpretations too, but inwardly they know, don’t they? Well, no. The deluded person thinks the world really is bizarre; the mad one is presumably unaware of the madness, and the delirious subject is barred from knowing the true position on both counts. Does it, then, make any sense to uphold the existence of any real distinction? Might we not better say that the three possibilities are really no more than rival diagnostic strategies, which may or may not work better in different cases, but have no absolute validity?

Can we perhaps fall back on consistency? Someone with delusions may see a convincing oasis out in the desert, but if a moment later it becomes a mountain, rational faculties will allow them to notice that something is amiss, and hypothesise that their sensory inputs are unreliable. However, a subject of Cartesian calibre would have to consider the possibility that they are actually just mistaken in their beliefs about their own experiences; in fact it always seemed to be a mountain. So once again the distinctions fall away.

Delusion and madness are all very well in their way, but delirium has a unique appeal in that it could be invisible. Suppose my perceptions are all subject to a consistent but complex form of distortion; but my responses have an exquisitely apposite complementary twist, which means that the two sets of errors cancel out, and my actual behaviour, and everything that I say, come out pretty much like those of some tediously sane and normal character. I am as delirious as can be, but you’d never know. Would I know? My mental states are so addled and my grip on reality so contorted, it hardly seems I could know anything; but if you question me about what I’m thinking, my responses all sound perfectly fine, just like those of my twin who doesn’t have invisible delirium.
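
The cancellation can be made explicit with a toy sketch (purely my own illustration, with nothing clinical about it): if cognition happens to apply exactly the complementary twist to the one perception applies, outward behaviour comes out normal even though both stages are deranged.

```python
# Toy illustration (my own, purely hypothetical): perception applies a consistent
# distortion, cognition applies the exactly complementary twist, and the two
# errors cancel in outward behaviour.

def distorted_perception(stimulus: str) -> str:
    # every input is scrambled in a consistent way (here, simply reversed)
    return stimulus[::-1]

def distorted_cognition(percept: str) -> str:
    # responses are generated from a complementary re-scrambling of the percept
    return f"I see a {percept[::-1]}."

def outward_behaviour(stimulus: str) -> str:
    return distorted_cognition(distorted_perception(stimulus))

print(outward_behaviour("hat"))  # -> "I see a hat." -- indistinguishable from sanity
```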

We might be tempted to say that invisible delirium is no delirium; my thoughts are determined by the functioning of my cognitive processes, and since those end up working fine, it makes no sense to believe in some inner place where things go all wrong for a while.

But what if I get super invisible delirium? In this wonderful syndrome, my inputs and outputs are mangled in complementary ways again, but by great good fortune the garbled version actually works faster and better than normal. Far from seeming confused, I now seem to understand stuff better and more deeply than before. After all, isn’t reaching this kind of state why people spend time meditating and doing drugs?

But perhaps I am falling prey to the euphoric condition diagnosed by Hume…

Chomsky’s Mysterianism

Or perhaps Chomsky’s endorsement of Isaac Newton’s mysterianism. We tend to think of Newton as bringing physics to a triumphant state of perfection, one that lasted until Einstein and, with qualifications, still stands. Chomsky says that in fact Newton shattered the ambitions of mechanical science, which have never recovered; and in doing so he placed permanent limits on the human mind. He quotes Hume:

While Newton seemed to draw off the veil from some of the mysteries of nature, he shewed at the same time the imperfections of the mechanical philosophy; and thereby restored her ultimate secrets to that obscurity, in which they ever did and ever will remain.

What are they talking about? Above all, the theory of gravity, which relies on the unexplained notion of action at a distance. Contemporary thinkers regarded this as nonsensical, almost logically absurd: how could object A affect object B without contacting it and without any intermediating substance? Newton, according to Chomsky, agreed in essence; but defended himself by saying that there was nothing occult in his own work, which stopped short where the funny stuff began. Newton, you might say, described gravity precisely and provided solid evidence to back up his description; what he didn’t do at all was explain it.

The acceptance of gravity, according to Chomsky, involved a permanent drop in the standard of intelligibility that scientific theories required. This has large implications for the mind: it suggests there might be matters beyond our understanding, and provides a particular example. It may well be that the mind itself is, or involves, similar intractable difficulties.

Chomsky reckons that Darwin reinforced this idea. We are not angels, after all, only apes; all other creatures suffer cognitive limitations; why should we be able to understand everything? In fact our limitations are as important as our abilities in making us what we are; if we were bound by no physical limitations we should become shapeless globs of protoplasm instead of human beings, and the same goes for our minds. Chomsky distinguishes between problems and mysteries. What is forever a mystery to a dog or rat may be a solvable problem for us, but we are bound to have mysteries of our own.

I think some care is needed over the idea of permanent mysteries. We should recognise that in principle there may be several things that look mysterious, notably the following.

  1. Questions that are, as it were, out of scope: not correctly definable as questions at all; these are not answerable even by God.
  2. Mysterian mysteries; questions that are not in themselves unanswerable, but which are permanently beyond the human mind.
  3. Questions that are answerable by human beings, but very difficult indeed.
  4. Questions that would be answerable by human beings if we had further information, which we (a) just don’t happen to have, or (b) could never have in principle.

I think it’s just an assumption that the problem of mind, and indeed, the problem of gravity, fall into category 2. There has been a bit of movement in recent decades, I think, and the possibility of 3 or 4(a) remains open.

I don’t think the evolutionary argument is decisive either. Implicitly Chomsky assumes an indefinite scale of cognitive abilities matched by an indefinite scale of problems. Creatures that are higher up the first get higher up the second, but there’s always a higher problem.  Maybe, though, there’s a top to the scale of problems? Maybe we are already clever enough in principle to tackle them all.

If this seems optimistic, think of Chomsky the Lizard, millions of years ago. Some organisms, he opines, can stick their noses out of the water. Some can leap out, briefly. Some crawl out on the beach for a while. Amphibians have to go back to reproduce. But all creatures have a limit to how far they can go from the sea. We lizards, we’ve got legs, lungs, and the right kind of eggs; we can go further than any other. That does not mean we can go all over the island. Evolution guarantees that there will always be parts of the island we can’t reach.

Well, depending on the island, there may be inaccessible parts, but that doesn’t mean legs and lungs have inbuilt limits. So just because we are products of evolution, it doesn’t mean there are necessarily questions of type 2 for us.

Chomsky mocks those who claim that the idea of reducing the mind to activity of the brain is new and revolutionary; it has been widely espoused for centuries, he says. He mentions remarks of Locke which I don’t know, but which resemble the famous analogy of Leibniz’s mill.

If we imagine that there is a machine whose structure makes it think, sense, and have perceptions, we could conceive it enlarged, keeping the same proportions, so that we could enter into it, as one enters a mill. Assuming that, when inspecting its interior, we will find only parts that push one another, and we will never find anything to explain a perception.

The thing about that is, we’ll never find anything to explain a mill, either. Honestly, Gottfried, all I see is pieces of wood and metal moving around; none of them have any milliness? How on earth could a collection of pieces of wood – just by virtue of being arranged in some functional way, you say – acquire completely new, distinctively molational qualities?

Where am I?

A treat for lovers of bizarre thought experiments, Rick Grush’s paper Some puzzles concerning relations between minds, brains, and bodies asks where the mind is, in a sequence of increasingly weird scenarios. Perhaps the simplest way into the subject is to list these scenarios, so here we go.

Airhead. This is a version of that old favourite, brain-in-a-vat. Airhead’s brain has been removed from its skull, but every nerve ending is supplied with little radio connectors that ensure the neural signals to and from the brain continue. The brain is supplied with everything it needs, and to Airhead everything seems pretty normal.

Rip. A similar set-up, but this time the displacement is in time. Signals from body to brain are delayed, but signals in the opposite direction are sent back in time (!) by the same interval, so again everything works fine. Rip seems to be walking around three seconds – or, why not, a thousand years – hence.

Scatterbrain. This time the brain is split into many parts and the same handy little radio gizmos are used to connect everything back up. The parts of the brain are widely separated. Still Scatterbrain feels OK, but what does the thought ‘I am here’ even mean?

Raid. We have two clones, identical in every minutest detail; both brains are in vats and both are connected up to the same body – using an ‘AND’ gate, so the signal passes on only if both brains send the same one (though because they are completely identical, they always will anyway). Two people? One with a backup?

Janus. One brain, two bodies, connected up in the way we’re used to by now; the bodies inhabit scrupulously identical worlds: Earth and Twin Earth if you like.

Bourdin. This time we keep the identical bodies and brains, wire them up, and let both live in their identical worlds. But of course we’re not going to leave them alone, are we? (The violence towards the brain displayed in the stories told by philosophers of mind sometimes seems to betray an unconscious hatred of the organ that poses such intractable questions – not that Grush is any worse than his peers.) No: what we do is switch the signals from time to time, so that the identical brains are linked up with first one, then the other, of the identical bodies.

Theseus. Now Grush tells us things are going to get a little weird… We start with the two brains and bodies, as for Bourdin, and we divide up the brains as for Scatterbrain. Now we can swap over not just whole brains, but parts and permutations. Does this create a new individual every time? If we put the original set of disconnected brain parts back together, does the first Theseus come back, or a fresh individual who is merely his hyper-identical twin?

Grush tests these situations against a range of views, from ‘brain-lubbers’ who believe the mind and the brain are the same thing, through functionalists of various kinds, to ‘contentualists’ who think mind is defined by broad content; I think Grush is inventing this group, but he says Dennett’s view of the self as ‘a centre of narrative gravity’ is a good example. It’s clearly this line of thinking he favours himself, but he admits that some of his thought experiments throw up real problems, and in the end he offers no final conclusion. I agree with him that a discussion can have value without being tied to the defence of a particular position; his exploration is quite well done and did suggest to me that my own views may not be quite as coherent as I had supposed.

He defends the implausibility of his scenarios on a couple of grounds. We need to test our ideas in extreme circumstances to make sure they are truly getting at the fundamentals and setting aside any blinkers we may have on. Physical implausibilities may be metaphysically irrelevant (nobody believes I’m going to toss a coin and get tails 10,000 times in a row, but nobody sees a philosophical problem with talking about that case). To be really problematic, objections would have to raise nomological or logical impossibilities.

Well, yes but… All of the thought experiments rely on the idea that nerves work like wires, that you can patch the signal between them and all will be well. The fact that this instant patching of millions of nerves is not physically possible even in principle may not matter. But it might be that the causal relations have to be intact; Grush says he has merely extended some of them, but picking up the signal and then relaying it is a real intervention, as is shown when later thought experiments use various kinds of switching. It could be argued that just patching in radio transmitters destroys the integrity of the causality and turns the whole thing into a simulation from the off.

The other thing is, should we ask where the mind is at all? Might not the question just be a category mistake? There are various proxies for the mind that can be given locations, but the thing itself? Many years ago, on the way to nursery, my daughter told me a story about a leopard. What is the location of that story? You could say it is the streets where she told it; you could say it is in the non-existent forest where the leopard lived. You could say it is in my memory, and therefore maybe in my brain in some sense. But putting the metaphors aside, isn’t it plausible that location is a property these entities made of content just don’t have?

 

Jochen’s Intentional Automata

Jochen’s paper Von Neumann Minds: Intentional Automata has been published in Mind and Matter.

Intentionality is meaningfulness, the quality of being directed at something, aboutness. It is in my view one of the main problems of consciousness, up there with the Hard Problem but quite distinct from it; but it is often under-rated or misunderstood. I think this is largely because our mental life is so suffused with intentionality that we find it hard to see the wood for the trees; certainly I have read more than one discussion by very clever people who seemed to me to lose their way half-way through without noticing and end up talking about much simpler issues.

That is not a problem with Jochen’s paper, which is admirably clear. He focuses on the question of how to ground intentionality and in particular how to do so without falling foul of an infinite regress or the dreaded homunculus problem. There are many ways to approach intentionality and Jochen briefly mentions and rejects a few (basing it in phenomenal experience or in something like Gricean implicature, for example) before introducing his own preferred framework, which is to root meaning in action: the meaning of a symbol is, or is to be found in, the action it evokes. I think this is a good approach; it interprets intentionality as a matter of input/output relations, which is clarifying and also has the mixed blessing of exposing the problems in their worst and most intractable form. For me it recalls the approach taken by Quine to the translation problem – he of course ended up concluding that assigning certain meanings to unknown words was impossible because of radical under-determination; there are always more possible alternative meanings which cannot be eliminated by any logical procedure. Under-determination is a problem for many theories of intentionality and Jochen’s is not immune, but his aim is narrower.

The real target of the paper is the danger of infinite regress. Intentionality comes in two forms, derived on the one hand and original or intrinsic on the other. Books, words, pictures and so on have derived intentionality; they mean something because the author or the audience interprets them as having meaning. This kind of intentionality is relatively easy to deal with, but the problem is that it appears to defer the real mystery to the intrinsic intentionality in the mind of the person doing the interpreting. The clear danger is that we then go on to defer the intentionality to an homunculus, a ‘little man’ in the brain who again is the source of the intrinsic intentionality.

Jochen quotes the arguments of Searle and others who suggest that computational theories of the mind fail because the meaning, and even the existence, of a computation is a matter of interpretation; hence, without the magic input of intrinsic intentionality from an interpreter, computation falls prey to radical under-determination. Jochen dramatises the point using an extension of Searle’s Chinese Room thought experiment in which it seems the man inside the room can really learn Chinese – but only because he has become, in effect, the required homunculus.

Now we come to the really clever and original part of the paper; Jochen draws an analogy with the problem of how things reproduce themselves. To do so it seems they must already have a complete model of themselves inside themselves… and so the problem of regress begins. It would be OK if the organism could scan itself, but a proof by Svozil seems to rule that out because of problems with self-reference. Jochen turns to the solution proposed by the great John Von Neumann (a man who might be regarded as the inventor of the digital computer if Turing had never lived). Von Neumann’s solution is expressed in terms of a two-dimensional cellular automaton (very simplistically, a pattern on a grid that evolves over time according to certain rules – Conway’s Game of Life surely provides the best-known examples). By separating the functions of copying and interpretation, and distinguishing active and passive states, Von Neumann managed to get round Svozil successfully.
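
The trick of letting one and the same description play both a passive role (data to be copied) and an active role (instructions to be executed) can be illustrated, very loosely, by a program quine. This is just my own analogy for the copy/interpretation split, not Von Neumann’s actual construction or anything from Jochen’s paper.

```python
# A quine: a program that prints its own source. The string s is used
# passively (copied verbatim via %r) and actively (as the template from which
# the whole program is generated), so the program needs no complete inner
# model of itself and no regress of descriptions-of-descriptions.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Von Neumann’s constructor exploits the same division of labour on a grander scale: in one phase the description tape is copied blindly, in another it is interpreted as instructions for building the machine.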

Now by importing this distinction between active and passive into the question of intentionality, Jochen suggests we can escape the regress. If symbols play either an active or a passive role (in effect, as semantics or as syntax) we can have a kind of automaton which, in a clear sense, gives its own symbols their interpretation, and so escapes the regress.

This is an ingenious move. It is not a complete solution to the problem of intentionality (I think the under-determination monster is still roaming around out there), but it is a novel and very promising solution to the regress. More than that, it offers a new perspective which may well yield further insights when fully absorbed; I certainly haven’t managed to think through what the wider implications might be, but if a process so central to meaningful thought truly works in this unexpected dual way it seems there are bound to be some. For that reason, I hope the paper gets wide attention from people whose brains are better at this sort of thing than mine…

Equivocating uploads

Information about the brain is not the same as information in the brain; yet in discussions of mind uploading, brain simulation, and mind reading the two are quite often conflated or confused. Equivocating between the two makes the task seem far easier than it really is. Scanners of various kinds exist, after all, and have greatly improved in recent years; technology usually goes on improving over time. If all we need is to get a really good scan of the brain in order to understand it, then surely it can only be a matter of time? Alas, information about the brain is an inadequate proxy for the information in the brain that we really need.

We’re often asked to imagine a scanner that can read off the structural details of a brain down to any required level of detail. Usually we’re to assume this can be done non-destructively, or even without disturbing the content and operation of the brain at all. These are of course unlikely feats, not just beyond existing technology but rather hard to imagine even on the most optimistic view of future developments. Sometimes the ready confidence that this miracle will one day be within our grasp is so poorly justified I find it tempting to think that the belief in the possibility of such magic scans is being buoyed up not by sanguine technological speculation but unconsciously by much older patterns of thinking; that the mind is located in breath, or airy spirits, or some other immaterial substance that can be sucked out of a body and replaced without physical consequences. Of course on the other side it’s perfectly true that lots of things once considered impossible are now routinely achieved.

But even allowing ourselves the most miraculous knowledge of the brain’s structure, so what? We could have an exquisite plan of the structure of a disk or a book without knowing what story it contained. Indeed, it would only take a small degree of inaccuracy or neglect in copying to leave us with a duplicate that strongly resembled the original but actually reproduced none of the information bearing elements; a disk with nothing on it, a book with random ink patterns.

Yeah but, say the optimists: the challenge may be huge, the level of detail required orders of magnitude beyond anything previously attempted, but if we copy something with enough fidelity the information just is going to come along with the copy necessarily. A perfect copy just has to include a perfect copy of the information. Granted, in the case of a book it’s not much use if we have the information but don’t know how to read it. The great thing about simulating a brain, though, is that we don’t even need to understand it. We can just set it up and start it going. We may never know directly what the information in the brain was, but it’ll do its job; the mind will upload, the simulation will run.

In the case of mind reading the marvellous flexibility of the mind also offers us a chance to cheat by taking some measurable, controllable brain function and simply using it as a signalling device. It works, up to a point, but it isn’t clear why brain communication by such lashed-up indirect routes is any more telepathy than simply talking to someone; in both cases the actual information in the brain remains inaccessible except through a deliberate signalling procedure.

Now of course a book or a disk is in some important ways actually a far simpler challenge than a brain. The people who designed, made, and use the disk or the book take great care to ensure that a specific, readily detectable set of properties encodes the required information in a regular, readable form. These are artefacts designed to carry information, as is a computer. The brain is not artefactual and does not need to be legible. There’s no need for a clear demarcation between information-bearing elements and the rest, and there’s no need for a standardised encoding or intelligible structures. There are, in fact, many complex elements that might have a role in holding information.

Suppose we recalibrated our strategy and set out to scan just the information in the brain; what would we target? The first candidate these days is the connectome: the overall pattern of neural connections within the brain. There’s no doubt this kind of research is currently very lively and interesting – see for example this recent study. Current research remains pretty broad-brush stuff and it’s not really clear how much detail will ever be manageable; but what if we could map the connections perfectly? How could we read off the content? It’s actually highly unlikely that all the information in the brain is encoded as properties of a network. The functional state of a neuron depends on many things, in particular its receptors and transmitters; the known repertoire of these, already large, has grown considerably in recent years. We know that the brain does not operate simply through electrical transmission, with chemical controls from the endocrine system and elsewhere playing a large and subtle part. It’s not at all unlikely that astrocytes, non-neuronal cells in the brain, have a significant role in modulating and even controlling its activity. Nor is it at all unlikely that ephaptic coupling or other small electrical induction effects have a significant role, too. And while I wouldn’t myself place any bets on exotic quantum physics being relevant, as some certainly believe, I think it would be very rash to assume that biology has no further tricks up its sleeve in the shape of important mechanisms we haven’t even noticed yet.

None of that can currently be ruled out of court as irrelevant. A computer has a specified way of working and if electrical interference starts changing the value of some bits in working memory you know it’s a bug, not a feature. In the brain, it could be either; the only way to judge is whether we like the results or not. There’s no reason why astrocyte states, say, can’t be key for one brain region or for one personality, and irrelevant for others, or even legitimate at one time and unhelpful interference at others. We just cannot know what to point our magic scanner at, and it may well be that the whole idea of information recorded in but distinct from a substrate just isn’t appropriate.

Yeah but again, total perfect copy? In principle if we get everything, we get everything, don’t we? The problem is that we can’t have everything. Copying, simulating, or transmitting necessarily involve transitions during which some features are unavoidably left out. Faith in the possibility of a good copy rests on the belief that we can identify a sufficient set of relevant features; so long as those are preserved, we’re good. We’re optimistic that one day we can do a job on the physical properties which is good enough. But that’s just information about the brain.

Thatter way to consciousness

‘Aping Mankind’ is a large scale attack by Raymond Tallis on two reductive dogmas which he characterises as ‘Neuromania’ and ‘Darwinitis’. He wishes especially to refute the identification of mind and brain, and as an expert on the neurology of old age, his view of the scientific evidence carries a good deal of weight. He also appears to be a big fan of Parmenides, which suggests a good acquaintance with the philosophical background. It’s a vigorous, useful, and readable contribution to the debate.

Tallis persuasively denounces exaggerated claims made on the basis of brain scans, notably claims to have detected the ‘seat of wisdom’ in the brain. These experiments, it seems, rely on what are essentially fuzzy and ambiguous pictures, arrived at by subtraction under very simple experimental conditions, which are then used as the basis for claims of a profound and detailed understanding far beyond anything they could possibly support. This is no longer such a controversial debunking as it would have been a few years ago, but it’s still useful.

Of course, the fact that some claims to have reduced thought to neuronal activity are wrong does not mean that thought cannot nevertheless turn out to be neuronal activity, but Tallis pushes his scepticism a long way. At times he seems reluctant to concede that there is anything more than a meaningless correlation between the firing of neurons in the brain and the occurrence of thoughts in the mind. He does agree that possession of a working brain is a necessary condition for conscious thought, but he’s not prepared to go much further. Most people, I think, would accept that Wilder Penfield’s classic experiments, in which the stimulation of parts of the brain with an electrode caused an experience of remembered music in the subject, pretty much show that memories are encoded in the brain one way or another; but Tallis does not accept that neurons could constitute memories. For memory you need a history, you need to have formed the memories in the first place, he says: Penfield’s electrode was not creating but merely reactivating memories which already existed.

Tallis seems to start from a kind of Brentanoesque incredulity about the utter incompatibility of the physical and the mental. Some of his arguments have a refreshingly simple (or if you prefer, naive) quality: when we experience yellow, he points out, our nerve impulses are not yellow. True enough, but then a word need not be printed in yellow ink to encode yellowness either. Tallis quotes Searle offering a dual-aspect explanation: water is H2O, but H2O molecules do not themselves have watery properties: you cannot tell what the back of a house looks like from the front, although it is the same house. In the same way our thoughts can be neural activity without the neurons themselves resembling thoughts. Tallis utterly rejects this: he maintains that to have different aspects requires a conscious observer, so we’re smuggling in the very thing we need to explain. I think this is an odd argument. If things don’t have different aspects until an observer is present, what determines the aspects they eventually have? If it’s the observer, we seem to be slipping towards idealism or solipsism, which I’m sure Tallis would not find congenial. Based on what he says elsewhere, I think Tallis would say the thing determines its own aspects in that it has potential aspects which only get actualised when observed; but in that case didn’t it really sort of have those aspects all along? Tallis seems to be adopting the view that an appearance (say yellowness) can only properly be explained by another thing that already has that same appearance (is yellow). It must be clear that if we take this view we’re never going to get very far with our explanations of yellow or any other appearance.

But I think that’s the weakest point in a sceptical case which is otherwise fairly plausible. Tallis is Brentanoesque in another way in that he emphasises the importance of intentionality – quite rightly, I think. He suggests it has been neglected, which I think is also true, although we must not go overboard: both Searle and Dennett, for example, have published whole books about it. In Tallis’ view the capacity to think explicitly about things is a key unique feature of human mindfulness, and that too may well be correct. I’m less sure about his characterisation of intentionality as an outward arrow. Perception, he says, is usually represented purely in terms of information flowing in, but there is also a corresponding outward flow of intentionality. The rose we’re looking at hits our eye (or rather a beam of light from the rose does so), but we also, as it were, think back at the rose. Is this a useful way of thinking about intentionality? It has the merit of foregrounding it, but I think we’d need a theory of intentionality  in order to judge whether talk of an outward arrow was helpful or confusing, and no fully-developed theory is on offer.

Tallis has a very vivid evocation of a form of the binding problem, the issue of how all our different sensory inputs are brought together in the mind coherently. As normally described, the binding problem seems like lip-synch issues writ large: Tallis focuses instead on the strange fact that consciousness is united and yet composed of many small distinct elements at the same time.  He rightly points out that it’s no good having a theory which merely explains how things are all brought together: if you combine a lot of nerve impulses into one you just mash them. I think the answer may be that we can experience a complex unity because we are complex unities ourselves, but it’s an excellent and thought-provoking exposition.

Tallis’ attack on ‘Darwinitis’ takes on Cosmidoobianism, memes and the rest with predictable but entertaining vigour. Again, he presses things quite a long way. It’s one thing to doubt whether every feature of human culture is determined by evolution: Tallis seems to suggest that human culture has no survival value, or at any rate, had none until recently, too recently to account for human development. This is reminiscent of the argument put by Alfred Russel Wallace, Darwin’s co-discoverer of the principle of natural selection: he later said that evolution could not account for human intelligence because a caveman could have lived his life perfectly well with a much less generous helping of it. The problem is that this leaves us needing a further explanation of why we are so brainy and cultured; Wallace, alas, ended up resorting to spiritualism to fill the gap (we can feel confident that Tallis, a notable public champion of disbelief, will never go that way). It seems better to me to draw a clear distinction between the capacity for human culture, which is wholly explicable by evolutionary pressure, and the contents of human culture, which are largely ephemeral, variable, and non-hereditary.

Tallis points out that some sleight of hand with vocabulary is not unknown in this area, in particular the tactic of the transferred epithet: a word implying full mental activity is used metaphorically – a ‘smart’ bomb is said to be ‘hunting down’ its target – and the important difference is covertly elided. He notes the particular slipperiness of the word ‘information’, something we’ve touched on before.

It is a weakness of Tallis’ position that he has no general alternative theory to offer in place of those he is attacking – consciousness remains a mystery (he sympathises with Colin McGinn’s mysterianism to some degree, incidentally, but reproves him for suggesting that our inability to understand ourselves might be biological). However, he does offer positive views of selfhood and free will, both of which he is concerned to defend. Rather than the brain, he chooses to celebrate the hand as a defining and influential human organ: opposable thumbs allow it to address itself and us: it becomes a proto-tool and this gives us a sense of ourselves as acting on the world in a tool-like manner. In this way we develop a sense of ourselves as a distinct entity and an agent, an existential intuition. This is OK as far as it goes, though it does sound in places like another theory of how we get a mere impression, or dare I say an illusion, of selfhood and agency, the very position Tallis wants to refute. We really need more solid ontological foundations. In response to critics who have pointed to the elephant’s trunk and the squid’s tentacles, Tallis grudgingly concedes that hands alone are not all you need and a human brain does have something to contribute.

Turning to free will, Tallis tackles Libet’s experiments (which seem to show that a decision to move one’s hand is actually made a measurable time before one becomes aware of it). So, he says, the decision to move the hand can be tracked back half a second? Well, that’s nothing: if you like you can track it back days, to when the experimental subject decided to volunteer; moreover, the aim of the subject was not just to move the hand, but also to help that nice Dr Libet, or to forward the cause of science. In this longer context of freely made decisions the precise timing of the RP (the readiness potential) is of no account.

To be free according to Tallis, an act must be expressive of what the agent is, the agent must seem to be the initiator, and the act must deflect the course of events. If we are inclined to doubt that we can truly deflect the course of events, he again appeals to a wider context: look at the world around us, he says, and who can doubt that collectively we have diverted the course of events pretty substantially? I don’t think this will convert any determinists. The curious thing is that Tallis seems to be groping for a theory of different levels of description – or, indeed, a dual-aspect theory. I would have thought dual-aspect theories ought to be quite congenial to Tallis, as they represent a rejection of ‘nothing but’ reductionism in favour of an attempt to give all levels of interpretation parity of esteem, but alas it seems not.

As I say, there is no new theory of consciousness on offer here, but Tallis does review the idea that we might need to revise our basic ideas of how the world is put together in order to accommodate it. He is emphatically against traditional dualism, and he firmly rejects the idea that quantum physics might have the explanation, too. Panpsychism may have a certain logic, but generates more problems than it solves. Instead he points again to the importance of intentionality and the need for a new view that incorporates it: in the end ‘Thatter’, his word for the indexical, intentional quality of the mental world, may be as important as matter.

More mereology

Peter Hacker made a surprising impact with his recent interview in the TPM, which was reported and discussed in a number of other places. Not that his views aren’t of interest; and the trenchant terms in which he expressed them probably did no harm: but he seemed mainly to be recapitulating the views he and Max Bennett set out in 2003; notably the accusation that the study of consciousness is plagued by the ‘mereological fallacy’ of taking a part for the whole and ascribing to the brain alone the powers of thought, belief, etc, which are properly ascribed only to whole people.

There’s certainly something in Hacker’s criticism, at least so far as popular science reporting goes. I’ve lost count of the number of times I’ve read newspaper articles that explain in breathless tones the latest discovery: that learning, or perception, or thought are really changes in the brain!  Let’s be fair: the relationship between physical brain and abstract mind has not exactly been free of deep philosophical problems over the centuries. But the point that the mind is what the brain does, that the relationship is roughly akin to the relationship between digestion and gut, or between website and screen, surely ought not to trouble anyone too much?

You could say that in a way Bennett and Hacker have been vindicated since 2003 by the growth of the ‘extended mind’ school of thought. Although it isn’t exactly what they were talking about, it does suggest a growing acknowledgement that too narrow a focus on the brain is unhelpful. I think some of the same counter-arguments also apply. If we have a brain in a vat, functioning as normally as possible in such strange circumstances, are we going to say it isn’t thinking? If we take the case of Jean-Dominique Bauby, trapped in a non-functioning body, but still able to painstakingly dictate a book about his experience, can’t we properly claim that his brain was doing the writing? No doubt Hacker would respond by asking whether we are saying that Bauby had become a mere brain? That he wasn’t a person any more? Although his body might have ceased to function fully he still surely had the history and capacities of a person rather than simply those of a brain.

The other leading point which emerges in the interview is a robust scepticism about qualia. Nagel in particular comes in for some stick, and the phrase ‘there is something it is like’, often invoked in support of qualia, is given a bit of a drubbing. If you interpret the phrase as literally invoking a comparison, it is indeed profoundly obscure; on the other hand we are dealing with the ineffable here, so some inscrutability is to be expected. Perhaps we ought to concede that most people readily understand what it is that Nagel and others are getting at. I quite enjoyed the drubbing, but the issue can’t be dismissed quite as easily as that.

From the account given in the interview (and I have the impression that this is typical of the way he portrays it) you would think that Hacker was alone in his views, but of course he isn’t. On the substance of his views, you might expect him to weigh in with some strong support for Dennett; but this is far from the case.  Dennett is too much of a brainsian in Hacker’s view for his ideas to be anything other than incoherent.  It’s well worth reading Dennett’s own exasperated response (pdf), where he sets out the areas of agreement before wearily explaining that he knows, and has always said, that care needs to be taken in attributing mental states to the brain; but given due care it’s a useful and harmless way of speaking.

John Searle also responded to Bennett and Hacker’s book and, restrained by no ties of underlying sympathy, dismissed their claims completely. Conscious states exist in the brain, he asserted: Hacker got this stuff from misunderstanding Wittgenstein, who says that observable behaviour (which only a whole person can provide) is a criterion for playing the language game, but never said that observable behaviour was a criterion for conscious experience.  Bennett and Hacker confuse the criterial basis for the application of mental concepts with the mental states themselves. Not only that, they haven’t even got their mereology right: they’re talking about mistaking the part for the whole, but the brain isn’t a part of a person, it’s a part of a body.

Hacker clearly hasn’t given up, and it will be interesting to see the results of his current ‘huge project, this time on human nature’.

Supersized Mind

I was agreeably surprised by Andy Clark’s ‘Supersizing the Mind’. I had assumed it would be a fuller treatment of the themes set out in ‘The Extended Mind’, the paper he wrote with David Chalmers, and which is included in the book as an Appendix. In fact, it ranges more widely and has a number of interesting points to make on the general significance of embodiment and mind extension. Various flavours of externalism, the doctrine that the mind ain’t in the head, seem to be popular at the moment, but Clark’s philosophical views are clearly just part of a coherent general outlook on cognition.

Early on, Clark contrasts different approaches to the problem of robotic walking. Asimo, Honda’s famous robot, does a pretty impressive job of walking, but achieves it through an awful lot of very carefully controlled mechanical kit. The humanoid shape of the body, apart from giving the robot its anthropomorphic appeal, is treated as more or less an incidental feature. By contrast, Clark cites Collins, Wisse and Ruina on passive dynamic walkers whose design gives them a natural tendency to waddle along with no control system at all.  These machines are far less complex (one walking toy patented by Fallis in 1888 consists entirely of two pieces of cleverly bent wire – I’m going to try making my own later), but they embody a different kind of sophistication.  As always, evolution got there first, and the relatively good levels of energy efficiency achieved by the ambulant human body, in contrast to the energy-guzzling needs of an Asimo-style approach, show that although human walking is more controlled than two pieces of wire, walking has been cunningly built into the design.

All this serves to introduce the idea that form has more importance than we may think, that legs are not just tools strapped on to the bottom of a standard unit.  The example of feeling with a stick is often quoted; when we use a stick to probe the texture and shape of an object, it doesn’t feel as if we’re registering the motion of the stick against our hand, but rather as if we were really feeling with the stick, as though the stick itself had for the moment become a sensory extension, with feeling at the end, not in the handle.

Clark wants to say that this feeling is not an illusion, that the stick, and other extensions we may use, really are incorporated into us temporarily. He quotes a dystopian vision of Bruce Sterling in which powerful machines are driven by increasingly feeble and senile humans; why shouldn’t the combination of feeble human and carefully designed robotics amount instead to a newly invigorated and intelligent person?

Among the most powerful extensions available for human beings is, of course, language. Words are often considered solely as a means of communication, but he makes a convincing case for the conclusion that they actually add massively to our cognitive abilities. He rightly says that the complexity generated by our ability to think about the way we think, or to think of new worlds from which other new worlds can be found, is staggering.

One of the ways our extended cognition helps us to extend ourselves further is by the construction of niches – re-engineering parts of the world to enhance our own performance. Clark here gives a fascinating account of how Elizabethan actors coped with the impossible load imposed on their memories – in those days it was apparently customary for a company to play six different plays a week with few repetitions and entirely new plays arriving at regular intervals.

Key factors were the standard layout of the stage and documents known as ‘plots’, a kind of high-level map of who comes in when and where and does what. These would enable actors to get a quick overall grasp of a new play; with a script which gave only their own lines, and an over-learned grasp of the dramatic conventions of the day which gave them an in-built sense of the sort of thing that could be expected, they were able to pick up plays and switch from one to another almost on the fly as they went. This is just a particularly striking example of the way we often make things easier for our brains by reducing the epistemic complexity of the environment. Putting that piece of IKEA furniture together may be tough, but if we take out all the components and lay them out neatly the way they’re shown in the diagram, it actually becomes easier.

Clark draws a useful distinction, in many ways a key point, between three grades of embodiment. In mere embodiment, the body and senses are simply pieces of equipment with which things can be done; in basic embodiment, their own features and dynamics become resources in themselves which the entity in question can exploit. Finally, in the profound embodiment which humans and some other animals, especially primates, have developed, new parts of the world and new uses of parts of the body are constantly discovered and recruited to serve previously unimagined purposes. This capacity to pick up things and make them part of your own mental apparatus might be a defining feature of human cognition and human consciousness.

All this is fine, but we may ask whether it is enough to make us think cognitive extension is anything more than some fancy metaphors about plain old tool use. Clark now goes on to tackle some objections, first those made by Adams and Aizawa, who in a nutshell say that the kinds of processes Clark is talking about are just not the right kind of thing to be considered cognitive. Various grounds are advanced, but the strongest part of the negative case is perhaps the claim that cognitive processes are distinguished by their involving non-derived representations. In other words, real minds deal in stuff which is inherently meaningful, and the kinds of things Clark is on about, if they deal in meaning at all, do so only because a real brain interprets them as doing so. This claim is clearly aimed at the example of Otto, used by Clark and Chalmers: Otto, briefly, uses a notebook as a supplementary memory. It’s not memory, or memory-like, Adams and Aizawa would say, because the information in Otto’s brain is live, it means something all on its own, whereas his notebook only means something through the convention of writing and when brains write or read in it.

Clark suggests that meaningfulness, non-derived representations, can perhaps be found independently of brains in some cases. This is highly contentious territory where I think it might actually be more prudent not to go. There’s no need in any case, since he also has the argument that brains frequently use arbitrary, conventional symbols in internal cognitive activities – whenever we use words or numbers to think with. If those processes are cognitive, surely similar ones using external symbols deserve the same recognition. Is there any real difference between someone who visualises a map in memory in order to work out where to go, and someone who looks at the paper equivalent, such that we can dismiss one as not cognitive?

A different challenge comes from Robert Rupert, who proposes that embedded cognition is enough: we can say that cognitive processes are heavily dependent on external props and prompts without having to make the commitments required by extended cognition. The props and prompts may be essential, but that doesn’t mean we have to bring them within the pale of cognition itself – and in fact if we do we may risk losing sight of the thing we set out to study in the first place. If the mind includes all the teeming series of objects we use to help us think clearly, then the science of the mind is going to be more intractable than ever.

This latter objection is easily dealt with, since Clark is not concerned to deny that there is a recognisable, continuing core of cognition in the brain. On the other point, I’m not sure that choosing embedding over extension has any real advantage here: embedding might give us a cleanly delineated target for the study of the mind, but it explicitly does so by narrowing the scope of that study to exclude all the allegedly intractable swarm of prompts and props which help us out. But aren’t the props and prompts an interesting and essential area of study, even if we deny them the status of being truly cognitive? Clark seems to accept that there is no knock-out case for extension rather than embedding, but he claims a number of advantages for the former, notably that it steers us safely away from ‘magic dust’ and the seductive old idea of the inner homunculus, the little man in our head who controls everything.

In a final section, Clark considers two ‘Limits of Embodiment’. The first has to do with Noë’s argument that perception is active; that seeing is like painting. This Strong Sensorimotor model has many virtues in Clark’s eyes: in particular it moves attention away from the vexed issue of qualia and on to skills. In fact, it is associated with the claim that experience is just constituted by the activity of perception, which leaves no ambiguous ineffable inner reality to trouble us, dismissing the menacing hordes of philosophical zombies that qualophiles would unleash on us. The snag, in Clark’s eyes, is ‘hypersensitivity’ – if we take this line at face value we find ourselves saying that only identical people could have identical experiences, and therefore that no two actual people ever see the same things.

The other limit has to do with an argument that if embodiment is true, functionalism must be false. It is a basic claim of functionalism, after all, that the substrate doesn’t matter; that in principle you could make a brain out of anything so long as it had the right functional properties at a high level. But embodiment says that the nature of the substrate is crucial – so if one is true the other must be false, right? Put as baldly as that, I think the weakness of the argument is apparent; why shouldn’t it be the case that the things which the theory of embodiment says are crucial are in fact, in the final analysis,  functional properties?

The book is a fascinating read, and full of stuff which this short sketch can’t really do any justice to. I don’t know whether I’ve become a convert to mind extension, though. The problem for me, at a philosophical level, is that none of this really worries me. There are a number of issues to do with consciousness and the mind where my inability to see the answers is, in a quiet but deep way, quite troubling to me. Cognitive extension, somehow, not so much.

I suppose part of the reason is that lurking at the back of my mind is the feeling that the location of the mind is a false issue. Where is the mind? In some Platonic realm, or zone of abstraction – or perhaps the question is meaningless. When we ask whether the mind is within the brain or out in the external world, the answer is in fact ‘no’. Of course I understand that Clark is not operating at this naive level, but it leaves a lingering doubt somewhere at the back of my mind about how much the answer really matters, which is reinforced by the apparent softness of the issue. Clark himself seems to concede that it’s not exactly a question of absolute right and wrong, more of whether this is a useful and enlightening way to look at things. On that level, I certainly found his account convincing.

Perhaps I’ve ended up becoming one of those frivolous people who always want a theory to boggle their minds a bit.

Fear in a handful of dust?

Eric Schwitzgebel had an interesting post the other day asking why the universe isn’t permeated with minds made of complex conjunctions of dust or other stuff. Most people would accept that ‘brains’ can in principle be made out of anything so long as the functional relationships are right; those relationships can be spread out over time and space, and so long as the right relationships are maintained, we don’t even have to keep them in the right order. So shouldn’t there be copies of your mind, and lots of other minds, spread all over the Universe? But that couldn’t be true…

For a more careful exposition and an interesting discussion, see the original post. I think I fall on the sceptical side of this dialogue, more or less for some of the reasons articulated in the comments over there, so I won’t repeat the points already made. Briefly, thought – consciousness – is surely a process, and if you split it up over time and space the process isn’t really there. Never mind cogitation; would we even say that digestion could be instantiated by such a set-up? Here is a set of atoms; they are scattered over a huge area and thousands of years, but taken together in the right order they could constitute a steak and kidney pie; if you don’t like that, perhaps the pie could appear momentarily, or even for several minutes, as the result of a bizarre but providential quantum accident inside my fridge, say. Then here’s another set, or another bizarre occurrence, which duplicates the pie in a state of beginning to be bitten. And so on through to the unmentionable end. That’s not really digestion, is it (and to be honest, perhaps not a totally accurate translation of Eric’s argument either)?

But it’s interesting to take a different tack and bite the bullet instead of the pie. Yes, OK, all that dust does have mental experience. But surely it only has the most tenuous and ethereal kind. To take another ridiculous example, suppose we were talking about an axe. Normally we require all the parts of an axe to be well co-ordinated in space and time, but we could have a dust-axe which was made up of different parts from different places and centuries. If we make the right selection of parts, we can have a series of axe-slices which even constitute its swinging and cutting wood. But it’s not really a very good axe; its existence relies completely on the support of our imagination, even though it consists entirely of unobjectionable physical items. Its existence is thin and dependent; it’s like the hrönir in the Borges story Tlön, Uqbar, Orbis Tertius, objects that exist only because someone thought they did. The experiences of the dust mind are almost as insubstantial as the experiences of fictional characters – or so it seems they ought to be.

But somehow I find myself reluctant to say even that much; reluctant to grant the dust even the most evanescent form of experience. Why is that? It’s because I feel a mind is something real, not just an abstraction which can arise as well out of a mistily conceptual object as out of a solid, fleshy brain. Strangely, I worry less about the axe because it’s clear to me that whether something has axehood is a matter of how we use and think of it. Various mis-shapen stones can have a significant degree of axe-worthiness; you can say they are axes if that suits you, and deny it another time. Surely you can’t endow something with a mind and then take it away as your convenience requires?

But at the same time another part of my mind is taking the opposite tack. All this nonsense about pies and axes misses the point completely, because no-one supposes that such things are constituted by high-order functional relations. Minds, on the other hand, seem very likely to be entities of that kind, and so they are uniquely suited to be the abstract result of some reinterpretation of suitable sections of the world. It’s just your hankering for some soul, some magic homunculus, that prevents you from realising this.

Come to that, what’s the point of the dust? Suppose we couldn’t find a suitable candidate for the role of one mote. OK, we say, it doesn’t really matter, let’s just arrange things so that the other parts of our dust mind behave as if this missing one were actually there. We can easily do that. But if we can remove one mote, why not remove them all? Let the functional relationships remain without physical realisation. If you’re worried that imaginary dust has no causal powers, reflect that at this very moment it is having effects in making me describe it; even as I write, and you in some remote place and time read, it is rearranging in its insidious way the contents of both our minds. But if we can do without the physical realisation, space and time become irrelevant; the Universe is full of minds; in fact every point is a point of view, and Leibniz’s Monadology is vindicated.

Once again I have reached that familiar state of complete confusion…