Meh-bots

Do robots care? Aeon has an edited version of the inaugural Margaret Boden Lecture, delivered by Boden herself. You can see the full lecture above. Among other things, she tells us that the robots are not going to take over because they don’t care. No computer has actual motives the way human beings do, and computers are indifferent to what happens (if we can even speak of indifference in a case where no desire or aversion is possible).

No doubt Boden is right; it’s surely true at least that no current computer has anything that’s really the same as human motivation. For me, though, she doesn’t provide a convincing account of why human motives are special, and why computers can’t have them, and perhaps doesn’t sufficiently engage with the possibility that robots might take over the world (or at least, do various bad out-of-control things) without having human motives, or caring what happens in the fullest sense. We know already that learning systems given goals by humans are prone to finding cheats or expedients never envisaged by the people who set up the task; while it seems a bit of a stretch to suppose that a supercomputer might enslave all humanity in pursuit of its goal of filling the world with paperclips (about which, however, it doesn’t really care), it seems quite possible that real systems might do some dangerous things. Might a self-driving car (have things gone a bit quiet on that front, by the way?) decide that its built-in goal of not colliding with other vehicles can be pursued effectively by forcing everyone else off the road?

What is the ultimate source of human motivation? There are two plausible candidates that Boden doesn’t mention. One is qualia; I think John Searle might say, for example, that it’s things like the quale of hunger, how hungriness really feels, that are the roots of human desire. That nicely explains why computers can’t have them, but for me the old dilemma looms. If qualia are part of the causal account, then they must be naturalisable and in principle available to machines. If they aren’t part of the causal story, how do they influence human behaviour?

Less philosophically, many people would trace human motives to the evolutionary imperatives of survival and reproduction. There must be some truth in that, but isn’t there also something special about human motivation, something detached from the struggle to live?

Boden seems to rest largely on social factors, which computers, as non-social beings, cannot share in. No doubt social factors are highly important in shaping and transmitting motivation, but what about Baby Crusoe, who somehow grew up with no social contact? His mental state may be odd, but would we say he has no more motives than a computer? Then again, why can’t computers be social, either by interacting with each other, or by joining in human society? It seems they might talk to human beings, and if we disallow that as not really social, we are in clear danger of begging the question.

For me the special, detached quality of human motivation arises from our capacity to imagine and foresee. We can randomly or speculatively envisage future states, decide we like or detest them, and plot a course accordingly, coming up with motives that don’t grow out of current circumstances. That capacity depends on the intentionality or aboutness of consciousness, which computers entirely lack – at least for now.

But that isn’t quite what Boden is talking about, I think; she means something in our emotional nature. That – human emotions – is a deep and difficult matter on which much might be said; but at the moment I can’t really be bothered…

 

Mary and the Secret Stones

Adam Pautz has a new argument to show that consciousness is irreducible (that is, it can’t be analysed down into other terms like physics or functions). It’s a relatively technical paper – a book-length treatment is forthcoming, it seems – but at its core is a novel variant on the good old story of Mary the Colour Scientist. Pautz provides several examples in support of his thesis, and I won’t address them all, but a look at this new Mary seems interesting.

Pautz begins by setting out a generalised version of how plausible reductive accounts must go. His route goes over some worrying territory – he is quite Russellian, and he seems to take for granted the old and questionable distinction between primary and secondary qualities. However, if the journey goes through some uncomfortable places, the destination seems to be a reasonable place to be. This is a moderately externalist kind of reduction which takes consciousness of things to involve a tracking relation to qualities of real things out there. We need not worry about what kinds of qualities they are for current purposes, and primary and secondary qualities must be treated in a similar way. Pautz thinks that if he can show that reductions like this are problematic, that amounts to a pretty good case for irreducibility.

So in Pautz’s version, Mary lives on a planet where the outer surfaces of everything are black, grey, or white. However, on the inside they are brilliantly coloured, with red, reddish orange, and green respectively. All the things that are black outside are red inside, and so on, and this is guaranteed by a miracle ‘chemical’ process such that changes to the exterior colour are instantly reflected in appropriate changes inside. Mary only sees the outsides of things, so she has never seen any colours but black, white and grey.

Now Mary’s experience of black is a tracking relation to black reflectances, but in this world it also tracks red interiors. So does she experience both colours? If not, then which? A sensible reductionist will surely say that she only experiences the external colour, and they will probably be inclined to refine their definitions a little so that the required tracking relation requires an immediate causal connection, not one mediated through the oddly fixed connection of interior and exterior colours. But that by no means solves the problem, according to Pautz. Mary’s relation to red is only very slightly different to her relation to black. Similar relations ought to show some similarity, but in this case Mary’s relation to black is a colour experience, whereas her relation to red, intrinsically similar, is not a colour experience – or an experience of any kind! If we imagine Martha in another world experiencing a stone with a red exterior, then Martha’s relation to red and Mary’s are virtually identical, but have no similarity whatever. Suppose you had a headache this morning, suggests Pautz, could you then say that you were in a nearly identical state this afternoon, but that it was not the state of experiencing a headache; in fact it was no experience at all (not even, presumably, the experience of not having a headache).

Pautz thinks that examples of this kind show that reductive accounts of consciousness cannot really work, and we must therefore settle for non-reductive ones. But he is left offering no real explanation of the relation of being conscious of something; we really have to take that as primitive, something just given as fundamental. Here I can’t help but sympathise with the reductionists; at least they’re trying! Yes, no doubt there are places where explanation has to stop, but here?

What about Mary? The thing that troubles me most is that remarkable chemical connection that guarantees the internal redness of things that are externally black. Now if this were a fundamental law of nature, or even some logical principle, I think we might be willing to say that Mary does experience red – she just doesn’t know yet (perhaps can never know?) that that’s what black looks like on the inside. If the connection is a matter of chance, or even guaranteed by this strange local chemistry, I’m not sure the similarity of the tracking relations is as great as Pautz wants it to be. What if someone holds up for me a series of cards with English words on one side? On the other, they invariably write the Spanish equivalent. My tracking relation to the two words is very similar, isn’t it, in much the same way as above? So is it plausible to say I know what the English word is, but that my relation to the Spanish word is not that of knowing it – that in fact that relation involves no knowledge of any kind? I have to say I think that is perfectly plausible.

I can’t claim these partial objections refute all of Pautz’s examples, but I’m keeping the possibility of reductive explanations open for now.

 

Intentionality and Introspection

Some people, I know, prefer to get their philosophy in written form; but if you like videos it’s well worth checking out Richard Brown’s YouTube series Consciousness This Month.

This one, Ep 4, is about mental contents, with Richard setting out briefly but clearly a couple of the major problems (look at the camera, Richard!).

Introspection, he points out, is often held to be incorrigible or infallible on certain points. You can be wrong about being at the dentist, but you can’t be wrong about being in pain. This is because of the immediacy of the experience. In the case of the dentist, we know there is a long process between light hitting your retina and the dentist being presented to consciousness. Various illusions and errors provide strong evidence for the way all sorts of complex ‘inferences’ and conclusions have been drawn by your unconscious visual processing system before the presence of the dentist gets inserted into your awareness in the guise of a fact. There is lots of scope for that processing to go wrong, so that the dentist’s presence might not be a fact at all. There’s much less processing involved in our perception of someone tugging on a tooth, but still maybe you could be dreaming or deluded. But the pain is inside your mind already; there’s no scope for interpretation and therefore no scope for error.

My own view on this is that it isn’t our sense data that have to be wrong, it’s our beliefs about our experiences. If the results of visual processing are misleading, we may end up with the false belief that there is a dentist in the room. But that’s not the only way for us to pick up false beliefs, and nothing really prevents our holding false beliefs about being in pain. There is some sense in which the pain can’t be wrong, but that’s more a matter of truth and falsity being properties of propositions, not of pains.

Richard also sketches the notion of intentionality, or ‘aboutness’, reintroduced to Western philosophy as a key idea by Brentano, who took it to be the distinguishing feature of the mental. When we think about things it seems as if our thought is directed towards an external object. In itself that seems to require some explanation, but it gets especially difficult when you consider that we can easily talk about non-existent or even absurd things. This is the kind of problem that caused Meinong to introduce a distinction between existence and subsistence, so that the objects of thought could have a manageable ontological status without being real in the same way as physical objects.

Regulars may know that my own view is that consciousness is largely a matter of recognition. Humans, we might say, are superusers of recognition. Not only can we recognise objects, we can recognise patterns and use them for a sort of extrapolation. The presence of a small entity is recognised, but also a larger entity of which it is part. So we recognise dawn, but also see that it is part of a day. From the larger entity we can recognise parts not currently present, such as sunset, and this allows us to think about entities that are distant in time or space. But the same kind of extrapolation allows us to think about things that do not, or even could not, exist.

I’m looking forward to seeing Richard’s future excursions.

Secrets of Consciousness

Here’s an IAI discussion between Philip Goff, Susan Blackmore, and Nicholas Humphrey, chaired by Barry Smith. There are some interesting points made, though overall it may have been too ambitious to try to get a real insight into three radically different views on the broad subject of phenomenal consciousness in a single short discussion. I think Goff’s panpsychism gets the lion’s share of attention and comes over most clearly. In part this is perhaps because Goff is good at encapsulating his ideas briefly; in part it may be because of the noticeable bias in all philosophical discussion towards the weirdest idea getting most discussion (it’s more fun and more people want to contradict it); it may be partly just a matter of Goff being asked first and so getting more time.

He positions panpsychism (the view, approximately, that consciousness is everywhere) attractively as the alternative to the old Scylla and Charybdis of dualism on one hand and over-enthusiastic materialist reductionism on the other. He dodges some of the worst of the combination problem by saying that his version of panpsychism doesn’t say that every arbitrary object – like a chair – has to be conscious, only that there is a general, very simple form of awareness in stuff generally – maybe at the level of elementary particles. Responding to the suggestion that panpsychism is the preference for theft over honest toil (just assume consciousness) he rightly says that not all explanations have to be reductive explanations, but makes a comparison I think is dodgy by saying that James Clerk Maxwell, after all, did not reduce electromagnetism to mass or other known physical entities. No, but didn’t Maxwell reduce light, electricity, and magnetism to one phenomenon? (He also provided elegant equations, which I think no-one is about to do for consciousness (Yes, Tononi, put your hand down, we’ll talk about that another time)).

Susan Blackmore is a pretty thorough sceptic: there really is no such thing as subjective consciousness. If we meditate, she says, we may get to a point where we understand this intuitively, but alas, it is hard to explain so convincingly in formal theoretical terms. Maybe that’s just what we should expect though.

Humphrey is also a sceptic, but of a more cautious kind: he doesn’t want to say that there is no such thing as consciousness, but he agrees it is a kind of illusion and prefers to describe it as a work of art (thereby, I suppose, avoiding objections along the lines that consciousness can’t be an illusion because the having of illusions presupposes the having of consciousness by definition). He positions himself as less of a sceptic in some ways than the other two, however: they, he says, hold that consciousness cannot be observed through behaviour: but if not, what are we even doing talking about it?

Flat thinking

Nick Chater says The Mind Is Flat in his recent book of that name. It’s an interesting read which quotes a good deal of challenging research (although quite a lot is stuff that I imagine would be familiar to most regular readers here). But I don’t think he establishes his conclusion very convincingly. Part of the problem is a slight vagueness about what ‘flatness’ really means – it seems to mean a few different things and at times he happily accepts things that seem to me to concede some degree of depth. More seriously, the arguments range from pretty good through dubious to some places where he seems to be shooting himself in the foot.

What is flatness? According to Chater the mind is more or less just winging it all the time, supplying quick interpretations of the sensory data coming in at that moment (which by the way is very restricted) but having no consistent inner core; no subconscious, no self, and no consistent views. We think we have thoughts and feelings, but that’s a delusion; the thoughts are just the chatter of the interpreter, and the feelings are just our interpretation of our own physiological symptoms.

Of course there is plenty of evidence that the brain confabulates a great deal more than we realise, making up reasons for our behaviour after the fact. Chater quotes a number of striking experiments, including the famous ones on split-brain patients, and tells surprising stories about the power of inattentional blindness. But it’s a big leap from acknowledging that we sometimes extemporise dramatically to the conclusion that there is never a consistent underlying score for the tune we are humming. Chater says that if Anna Karenina were real, her political views would probably be no more settled than those of the fictional character, about whom there are no facts beyond what her author imagined. I sort of doubt that; many people seem to me to have relatively consistent views, and even where the views flip around they’re not nugatory. Chater quotes remarkable experiments on this, including one in which subjects asked political questions via a screen with a US flag displayed in the corner gave significantly more Republican answers than those who answered the same questions without the flag – and moreover voted more Republican eight months later. Chater acknowledges that it seems implausible that this one experiment could have conditioned views in a way that none of the subjects’ later experiences could do (though he doesn’t seem to notice that being able to hold to the same conditioning for eight months rather contradicts his main thesis about consistency); but in the end he sort of thinks it did. These days we are more cautious about the interpretation of psychological experiments than we used to be, and the most parsimonious explanation might be something wrong with the experiment. An hypothesis Chater doesn’t consider is that subjects are very prone to guessing the experimenter’s preferences and trying to provide what’s wanted. It could plausibly be the case that subjects whose questions were accompanied by patriotic symbols inferred that more right-wing answers would be palatable here, and thought the same when asked about their vote much later by the same experimenter (irrespective of how they actually voted – something we can’t in fact know, given the secrecy of the ballot).

Chater presents a lot of good evidence that our visual system uses only a tiny trickle of evidence; as little as one shape and one colour at a time, it seems. He thinks this shows that we’re not having a rich experience of the world; we can’t be, because the data isn’t there. But isn’t this a perverse interpretation? He accepts that we have the impression of a rich experience; given the paucity of data, which he evidences well, this impression can surely only come from internal processing and internal models – which looks like mental depth after all. Chater, I think, would argue that unconscious processing does take place but doesn’t count as depth; but it’s hard to see why not. In a similar way he accepts that we remember our old interpretations and feed them into current interpretations, even extrapolating new views, but this process, which looks a bit like reflection and internal debate, does not count as depth either. Here, it begins to look as if Chater’s real views are more commonsensical than he will allow.

But not everywhere. On feelings, Chater is in a tradition stretching back to William James, who held that our hair doesn’t stand on end because we’re feeling fear; rather, we feel fear because we’re experiencing hair-rise (along with feelings in the gut, goose bumps, and other physiological symptoms). The conscious experience comes after the bodily reaction, not before. Similar views were held by the behaviourists, of course; these reductive ideas are congenial because they mean emotions can be studied objectively from outside, without resort to introspection. But they are not very plausible. Again, we can accept that it sometimes happens that way. If stomach ache makes me angry, I may well go on to find other things to be angry about. If I go out at night and feel myself tremble, I may well decide it is because I am frightened. But if someone tells a ghost story, it is not credible that the fear doesn’t come from my conscious grasp of the narrative.

I think Chater’s argument for the non-existence of the self is perhaps his most alarming. It rests on a principle (or a dogma; he seems to take it as more or less self-evident) that there is nothing in consciousness but the interpretation of sensory inputs. He qualifies this at once by allowing dreams and imagination, a qualification which would seem to give back almost everything the principle took away, if we took it seriously; but Chater really does mean to restrict us to thinking about entities we can see, touch or otherwise sense. He says we have no conscious knowledge of abstractions, not even such everyday ones as the number five. The best we can do is proxies such as an image of five dots, or of the numeral ‘5’. But I don’t think that will wash. A collection of images is not the same as the number five; in fact, without understanding what five is, we wouldn’t be able to pick out which groups of dots belonged to the collection. Chater says we rely on precedent, not principle, but precedents are useless without the interpretive principles that tell us when they apply. I don’t know how Chater, on his own account, is even aware of such things as the number five; he refuses to address the metaphysical issues beyond his own assertions.

I think Chater’s principle rules out arithmetic, let alone higher maths, and a good deal besides, but he presumably thinks we can get by somehow with the dots. Later, however, he invokes the principle again to dismiss the self. Because we have no sensory impressions of the self, it must be incoherent nonsense. But there are proxies for the self – my face in the mirror, the sound of my voice, my signature on a document – that seem just as good as the dots and numerals we’ve got for maths. Consistency surely requires that Chater either accepts the self or dumps mathematics.

As a side comment, it’s interesting that Chater introduces his principle early and only applies it to the self much later, when he might hope we have forgotten the qualifications he entered, and the issues over numbers. I do not suggest these are deliberate presentational tactics, rather they seem good evidence of how we often choose the most telling way of giving an argument unconsciously, something that is of course impossible in his system.

I’m much happier with Chater’s view of AI. Early on, he gives a brief account of the failure of the naive physics project, which attempted to formalise our ‘folk’ understanding of science. He seems to conflate this with the much wider project of artificial general intelligence, but he is right about the limitations it points to. He thinks computers lack the ‘elasticity’ of human thought, and are unlikely to acquire it any time soon.

A bit of a curate’s egg, then. A lot of decent, interesting science, with some alarming stuff that seems philosophically naive (a charge I hesitate to make because it is always possible to adduce sophisticated justifications for philosophical positions that seem daft on the face of it; indeed, that’s something philosophers particularly enjoy doing).

Robot Memory

A new model for robot memory raises some interesting issues. It’s based on three networks, for identification, localization, and working memory. I have a lot of quibbles, but the overall direction looks promising.

The authors (Christian Balkenius, Trond A. Tjøstheim, Birger Johansson and Peter Gärdenfors) begin by boldly proposing four kinds of content for consciousness: emotions, sensations, perceptions (ie, interpreted sensations), and imaginations. They think that may be the order in which each kind of content appeared during evolution. Of course this could be challenged in various ways. The borderline between sensations and perceptions is fuzzy (I can imagine some arguing that there are no uninterpreted sensations in consciousness, and the degree of interpretation certainly varies greatly), and imagination here covers every kind of content which is about objects not present to the senses, especially the kind of foresight which enables planning. That’s a lot of things to pack into one category. However, the structure is essentially very reasonable.

Imaginations and perceptions together make up an ‘inner world’. The authors say this is essential for consciousness, though they seem to have also said that pure emotion is an early content of consciousness. They propose two tests often used on infants as indicators of such an inner world: tests of the sense of object permanence, and ‘A-not-B’. Both essentially test whether infants (or other cognitive entities) have an ongoing mental model of things which goes beyond what they can directly see. This requires a kind of memory to keep track of the objects that are no longer directly visible, and of their location. The aim of the article is to propose a system for robots that establishes this kind of memory-based inner world.
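For the non-philosophers: the kind of memory these tests demand can be sketched very simply. The following toy code is my own illustration, not the authors’ implementation (their model uses three neural networks, not a lookup table); it just shows the minimal thing an A-not-B task requires – a record of each object’s last observed location that persists, and keeps being updated, even when the object is out of sight.

```python
class ObjectMemory:
    """Minimal 'inner world': the last known location of each object,
    retained even when the object is no longer visible."""

    def __init__(self):
        self.locations = {}  # object name -> last observed location

    def observe(self, obj, location):
        # Update the record whenever the object is actually seen.
        self.locations[obj] = location

    def where_is(self, obj):
        # Recall works even if the object is currently hidden --
        # this is the object-permanence part. Unknown objects give None.
        return self.locations.get(obj)


# A-not-B: the toy is hidden at A twice, then visibly moved to B.
memory = ObjectMemory()
memory.observe("toy", "A")
memory.observe("toy", "A")
memory.observe("toy", "B")   # the crucial final observation

# An agent searching from memory (rather than habit) looks at B;
# the classic infant error is to reach for A instead.
print(memory.where_is("toy"))
```

The point of the test is precisely that habit (two reaches toward A) must not override the updated record; a system that searches wherever it last acted, instead of where it last looked, fails A-not-B.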

Imitating the human approach is an interesting and sensible strategy. One pitfall for those trying to build robot consciousness is the temptation to use the power of computers in non-human ways. We need our robot to do arithmetic: no problem! Computers can already do arithmetic much faster and more accurately than mere humans, so we just slap in a calculator module. But that isn’t at all the way the human brain tackles explicit arithmetic, and by not following the human model you risk big problems later.

Much the same is true of memory. Computers can record data in huge quantities with great accuracy and very rapid recall; they are not prone to confabulation, false memories, or vagueness. Why not take advantage of that? But human memory is much less clear-cut; in fact ‘memory’ may be almost as much of a ‘mongrel’ term as consciousness, covering all sorts of abilities to repeat behaviour or summon different contents. I used to work in an office whose doors required a four-digit code. After a week or so we all tapped out each code without thinking, and if we had to tell someone what the digits were we would be obliged to mime the action in mid-air and work out which numbers on the keypad our fingers would have been hitting. In effect, we were using ‘muscle memory’ to recall a four-digit number.

The authors of the article want to produce the same kind of ‘inner world’ used in human thought to support foresight and novel combinations. (They seem to subscribe to an old theory that says new ideas can only be recombinations of things that got into the mind through the senses. We can imagine a gryphon that combines lion and eagle, but not a new beast whose parts resemble nothing we have ever seen. This is another point I would quibble over, but let it pass.)

In fact, the three networks proposed by the authors correspond plausibly with three brain regions; the ventral, dorsal, and prefrontal areas of the cortex. They go on to sketch how the three networks play their role and report tests that show appropriate responses in respect of object permanence and other features of conscious cognition. Interestingly, they suggest that daydreaming arises naturally within their model and can be seen as a function that just arises unavoidably out of the way the architecture works, rather than being something selected for by evolution.

I’m sometimes sceptical about the role of explicit modelling in conscious processes, as I think it is easily overstated. But I’m comfortable with what’s being suggested here. There is more to be said about how processes like these, which in the first instance deal with concrete objects in the environment, can develop to handle more abstract entities; but you can’t deal with everything in detail in a brief article, and I’m happy that there are very believable development paths that lead naturally to high levels of abstraction.

At the end of the day, we have to ask: is this really consciousness? Yes and no, I’m afraid. Early on in the piece we find:

On the first level, consciousness contains sensations. Our subjective world of experiences is full of them: tastes, smells, colors, itches, pains, sensations of cold, sounds, and so on. This is what philosophers of mind call qualia.

Well, maybe not quite. Sensations, as usually understood, are objective parts of the physical world (though they may be ones with a unique subjective aspect), processes or events which are open to third-person investigation. Qualia are not. It is possible to simply identify qualia with sensations, but that is a reductive, sceptical view. Zombie twin, as I understand him, has sensations, but he does not have qualia.

So what we have here is not a discussion of ‘Hard Problem’ consciousness, and it doesn’t help us in that regard. That’s not a problem; if the sceptics are right, there’s no Hard stuff to account for anyway; and even if the qualophiles are right, an account of the objective physical side of cognition is still a major achievement. As we’ve noted before, the ‘Easy Problem’ ain’t easy…

Anthropic Consciousness

Stephen Hawking’s recent death caused many to glance regretfully at the unread copies of A Brief History Of Time on their bookshelves. I don’t even own one, but I did read The Grand Design, written with Leonard Mlodinow, and discussed it here. It’s a bold attempt to answer the big questions about why the universe even exists, and I suggested back then that it showed signs of an impatience for answers which is characteristic of scientists, at least as compared with philosophers. One sign of Hawking’s impatience was his readiness to embrace a version of the rather dodgy Anthropic Principle as part of the foundations of his case.

In fact there are many flavours of the Anthropic Principle. The mild but relatively uninteresting version merely says we shouldn’t be all that surprised about being here, because if we hadn’t been here we wouldn’t have been thinking about it at all. Is it an amazing piece of luck that from among all the millions of potential children our parents were capable of engendering, we were the ones who got born? In a way, yes, but then whoever did get born would have had the same perspective. In a similar way, it’s not that surprising that the universe seems designed to accommodate human beings, because if it hadn’t been that way, no-one would be worrying about it.

That’s alright, but the stronger versions of the Principle make much more dubious claims, implying that our existence as observers really might have called the world into existence in some stronger sense. If I understood them correctly, Hawking and Mlodinow pitched their camp in this difficult territory.

Here at Conscious Entities we do sometimes glance at the cosmic questions, but our core subject is of course consciousness. So for us the natural question is, could there be an Anthropic-style explanation of consciousness? Well, we could certainly have a mild version of the argument, which would simply say that we shouldn’t be surprised that consciousness exists, because if it didn’t no-one would be thinking about it. That’s fine but unsatisfying.

Is there a stronger version in which our conscious experience creates the preconditions for itself? I can think of one argument which is a bit like that. Let me begin by proposing an analogy in the supposed Problem of Focus.

The Problem of Focus notes that the human eye has the extraordinary power of drawing in beams of light from all the objects around it. Somehow every surface around us is impelled to send rays right in to that weirdly powerful metaphysical entity which resides in our eyes, the Focus. Some philosophers deny that there is a single Focus in each eye, suggesting it changes constantly. Some say the whole idea of a Focus with special powers is an illusion, a misconception of perfectly normal physical processes. Others respond that the facts of optometry and vision just show that denying the existence of Focus is in practice impossible; even the sceptics wear glasses!

I don’t suppose anyone will be detained for long by worries about the Problem of Focus; but what if we remove the beams of light and substitute instead the power of intentionality, i.e. our mental ability to think about things? Being able to focus on an item mentally is clearly a useful ability, allowing us to target our behaviour more effectively. We can think of intentionality as a system of pointers, or lines connecting us to the object being thought of. Lines, however, have two ends, so the back end of these ones must converge in a single point. Isn’t it remarkable that this single focus point is able to draw together the contents of consciousness in a way which in fact generates that very state of awareness?

Alright, I’m no Hawking…

The Philosophy of Delirium

Is there any philosophy of delirium? I remember asserting breezily in the past that there was philosophy of everything – including the actual philosophy of everything and the philosophy of philosophy. But when asked recently, I couldn’t come up with anything specifically on delirium, which in a way is surprising, given that it is an interesting mental state.

Hume, I gather, described two diseases of philosophy, characterised by either despair or unrealistic optimism in the face of the special difficulties a philosopher faces. The negative over-reaction he characterised as melancholy, the positive as delirium, in its euphoric sense. But that is not what we are after.

Historically I think that if delirium came up in discussion at all, it was bracketed with other delusional states, hallucinations and errors. Those, of course, have an abundant literature going back many centuries. The possibility of error in our perceptions has been responsible for the persistent (but surely erroneous) view that we never perceive reality, only sense-data, or only our idea of reality, or only a cognitive model of reality. The search for certainty in the face of the constant possibility of error motivated Descartes and arguably most of epistemology.

Clinically, delirium is an organically caused state of confusion. Philosophically, I suggest we should seize on another feature, namely that it can involve derangement of both perception and cognition. Let’s use the special power of fiat used by philosophers to create new races of zombies, generate second earths, and enslave the population of China, and say that philosophical delirium is defined exactly as that particular conjunction of derangements. So we can then define three distinct kinds of mental disturbance. First, delusion, where our thinking mind is working fine but has bizarre perceptions presented to it. Second, madness, where our perceptions are fine, but our mental responses make no sense. Third, delirium, in which distorted perceptions meet with distorted cognition.

The question then is: can delirium, so defined, actually be distinguished from delusion and madness? Suppose we have a subject who persistently tries to eat their hat. The first reading is that the subject perceives the Homburg as a hamburger. The second is that they perceive the hat correctly, but think it is appropriate to eat hats. The delirious reading might be that they see the hat as a shoe and believe shoes are to be eaten. For any possible pattern of behaviour, it seems, some reading can be found that makes it consistent with each of the three possible states.

That’s from a third-person point of view, of course, but surely the subject knows which state applies? They can’t reliably tell us, because their utterances are open to multiple interpretations too, but inwardly they know, don’t they? Well, no. The deluded person thinks the world really is bizarre; the mad one is presumably unaware of the madness, and the delirious subject is barred from knowing the true position on both counts. Does it, then, make any sense to uphold the existence of any real distinction? Might we not better say that the three possibilities are really no more than rival diagnostic strategies, which may or may not work better in different cases, but have no absolute validity?

Can we perhaps fall back on consistency? Someone with delusions may see a convincing oasis out in the desert, but if a moment later it becomes a mountain, rational faculties will allow them to notice that something is amiss, and hypothesise that their sensory inputs are unreliable. However, a subject of Cartesian calibre would have to consider the possibility that they are actually just mistaken in their beliefs about their own experiences; in fact it always seemed to be a mountain. So once again the distinctions fall away.

Delusion and madness are all very well in their way, but delirium has a unique appeal in that it could be invisible. Suppose my perceptions are all subject to a consistent but complex form of distortion, but my responses have an exquisitely apposite complementary twist, which means that the two sets of errors cancel out and my actual behaviour, and everything that I say, come out pretty much like those of some tediously sane and normal character. I am as delirious as can be, but you’d never know. Would I know? My mental states are so addled and my grip on reality so contorted, it hardly seems I could know anything; but if you question me about what I’m thinking, my responses all sound perfectly fine, just like those of my twin who doesn’t have invisible delirium.

We might be tempted to say that invisible delirium is no delirium; my thoughts are determined by the functioning of my cognitive processes, and since those end up working fine, it makes no sense to believe in some inner place where things go all wrong for a while.

But what if I get super invisible delirium? In this wonderful syndrome, my inputs and outputs are mangled in complementary ways again, but by great good fortune the garbled version actually works faster and better than normal. Far from seeming confused, I now seem to understand stuff better and more deeply than before. After all, isn’t reaching this kind of state why people spend time meditating and doing drugs?

But perhaps I am falling prey to the euphoric condition diagnosed by Hume…

Secret Harmonies

We need a richer idea of consciousness and of our minds: Jenny Judge suggests that our experience of music in particular points to a need for an expanded conception of the mind to include visceral apprehension.  Many who have championed the idea of an extended mind that isn’t just identifiable with the brain alone will be nodding along, perhaps rhythmically.

For those of us who are a little entrenched in the limited idea of the mind as a matter of the brain doing computations on representations, Judge cunningly offers a couple of pieces of bait in the form of solid cognitive insights her wider perspective can yield. One is that we perceive and respond to rhythm in important ways, even when we don’t consciously notice it. The timing of our utterances can actually change the way they are interpreted and carry significant information. A delay in giving assent can qualify the level of agreement, and apparently this is even culturally variable; the Japanese expect a snappy response, while in Denmark you can take your time without the risk of seeming grudging.

A second insight concerns entrainment, the tendency of connected vibrating systems to synchronise rhythm. Judge presents plausible evidence that a form of entrainment plays an important role in governing the activity of our minds and even of the neurons in the brain (so it ought not to be ignored, even by those who are initially happy with a narrower conception of cognition).

Judge discusses the challenges in perceiving music, with its complexity and its inherent sequentiality. The way we perceive time and motion is complex (we could add that sometimes the visual system just seems to label some things as ‘moving’ even though they are not perceived as changing place). But, wisely I think, she does not quite make the further case that the phenomenology of musical experience is peculiarly intractable. It’s true that great music can cause us to experience emotional and cognitive states that we could never otherwise explore, and it would certainly be possible to base an argument from incredulity on this. Just as Leibniz professed disbelief about the possibility of a mill-type mechanism, however complex, producing awareness, or Brentano declared that intentionality was something else altogether, we could claim that musical experience just is not the kind of thing that physical processes could create. Such arguments are powerfully persuasive, but without some further explanation as to exactly why consciousness cannot arise from physical processing, they don’t prove anything.

It would be hard to disagree, however, with the suggestion that our phenomenological experience really needs to be properly charted in a way that does justice to its complexity. I’d have a go myself if I had any idea of how to set about it.

Not really feeling it

It’s not just that we don’t know how anaesthetics work – we don’t even know for sure that they work. Joshua Rothman’s review of a new book on the subject by Kate Cole-Adams quotes poignant stories of people on the operating table who may have been aware of what was going on. In some cases the chance remarks of medical staff seem to have worked almost like post-hypnotic suggestions: so perhaps all surgeons should loudly say that the patient is going to recover and feel better than ever, with new energy and confidence.

How is it that after all this time, we don’t know how anaesthetics work? As the piece aptly remarks, it’s about losing consciousness, and since we don’t know clearly what that is or how we come to have it, it’s no surprise that its suspension is also hard to understand. To add to the confusion, it seems that common anaesthetics paralyse plants, too. Surely it’s our nervous system anaesthetics mainly affect – but plants don’t even have a nervous system!

But come on, don’t we at least know that it really does work? Most of us have been through it, after all, and few have weird experiences; we just don’t feel the pain – or anything. The problem, as we’ve discussed before, is telling whether we don’t feel the pain, or whether we feel it but don’t remember it. This is an example of a philosophical problem that is far from being a purely academic matter.

It seems anaesthetics really do (at least) three different things. They paralyse the patient, making it easier to cut into them without adverse reactions, they remove conscious awareness or modulate it (it seems some drugs don’t stop you being aware of the pain, they just stop you caring about it somehow), and they stop the recording of memories, so you don’t recall the pain afterwards. Anaesthetists have a range of drugs to produce each of these effects. In many cases there is little doubt about their effectiveness. If a drug leaves you awake but feeling no pain, or if it simply leaves you with no memory, there’s not that much scope for argument. The problem arises when it comes to anaesthetics that are supposed to ‘knock you out’. The received wisdom is that they just blank out your awareness for a period, but as the review points out, there are some indications that instead they merely paralyse you and wipe your memory. The medical profession doesn’t have a good record of taking these issues very seriously; I’ve read that for years children were operated on after being given drugs that were known to do little more than paralyse them (hey, kids don’t feel pain, not really; next thing you’ll be telling me plants do…).

Actually, views about this are split; a considerable proportion of people take the view that if their memory is wiped, they don’t really care about having been in pain. It’s not a view I share (I’m an unashamed coward when it comes to pain), but it has some interesting implications. If we can make a painful operation OK by giving amnestic drugs to remove all recollection, perhaps we should routinely do the same for victims of accidents. Or do doctors sometimes do that already…?