Getting Emotional

Are emotions an essential part of cognition? Luiz Pessoa thinks so (or perhaps he feels it is the case). He argues that although emotions have often been regarded as something quite separate from rational thought, a kind of optional extra which can be bolted on but plays no essential role, they are in fact essential to it.

I don’t find his examples very convincing. He says robots on a Mars mission might have to plan alternative routes and choose between priorities, but I don’t really see why that requires them to get emotional. He says that if your car breaks down in the desert, you may need a quick fix. A sense of urgency will impel you to look for that, while a calm AI might waste time planning and implementing a proper repair. Well, I don’t know. On the one hand, it seems perfectly possible to appreciate the urgency of the situation in a calm rational way. On the other, it’s easy to imagine that panic and many other emotional responses might be very unhelpful, blocking the search for solutions or interfering with the ability to focus on implementation.

Yet there must be something in what he says, mustn’t there? Otherwise, why would we have emotions? I suppose in principle it could be that they really have no role; that they are epiphenomenal, a kind of side effect of no real importance. But they seem to influence behaviour in ways that make that implausible.

Perhaps they add motivation? In the final analysis, pure reason gives us no reason to do anything. It can say, if you want A, then the best way to get it is through X, Y, and Z. But if you ask, should I want A, pure reason merely shrugs, or at best it says, you should if you want B.

However, it doesn’t take much to provide a basic set of motivations. If we just assume that we want to survive, the need to obtain secure sources of food, shelter, and so on soon generates a whole web of subordinate motivations. Throw in a few very simple built-in drives – avoidance of pain, seeking sex, maintenance of good social relations – and we’re pretty much there in terms of human motivation. Do we need complex and distracting emotions?

Some argue that emotions add more ‘oomph’, that they intensify action or responses to stimuli. I’ve never quite understood the actual causal process there, but granting the possibility, it seems emotions must either harmonise with rational problem solving, or conflict with it. Rational problem solving is surely always best, so they must either be irrelevant or harmful?

One fairly traditional view is that emotions are a legacy of evolution, a system that developed before rational problem solving was available. So different emotional states affect the speed and intensity of certain sets of responses. If you get angry, you become more ready to fight, which may be helpful. Now, we would be better off deciding rationally on our responses, but we’re lumbered with the system our ancestors evolved. Moreover, some of the preparatory stuff, like a more rapid heartbeat, has never come under rational control, so without emotions it wouldn’t be accessible. It can be argued that emotions are really little more than certain systems getting into certain potentially useful ready states like this.

That might work for anger, but I still don’t understand how grief, say, is a useful state to be in. There’s one more possible role for emotions, which is social co-ordination. Just as laughter or yawning tends to spread around the group, it can be argued that emotional displays help get everyone into a similar state of heightened or depressed responsiveness. But if that is truly useful, couldn’t it be accomplished more easily and in less disabling/distracting ways? For human beings, talking seems the better tool for the job.

It remains a fascinating question, but I’ve never heard of a practical problem in AI that really seemed to need an emotional response.

Robot Memory

A new model for robot memory raises some interesting issues. It’s based on three networks, for identification, localization, and working memory. I have a lot of quibbles, but the overall direction looks promising.

The authors (Christian Balkenius, Trond A. Tjøstheim, Birger Johansson and Peter Gärdenfors) begin by boldly proposing four kinds of content for consciousness: emotions, sensations, perceptions (ie, interpreted sensations), and imaginations. They think that may be the order in which each kind of content appeared during evolution. Of course this could be challenged in various ways. The borderline between sensations and perceptions is fuzzy (I can imagine some arguing that there are no uninterpreted sensations in consciousness, and the degree of interpretation certainly varies greatly), and imagination here covers every kind of content which is about objects not present to the senses, especially the kind of foresight which enables planning. That’s a lot of things to pack into one category. However, the structure is essentially very reasonable.

Imaginations and perceptions together make up an ‘inner world’. The authors say this is essential for consciousness, though they seem to have also said that pure emotion is an early content of consciousness. They propose two tests often used on infants as indicators of such an inner world: tests of the sense of object permanence and ‘A-not-B’. Both essentially test whether infants (or other cognitive entities) have an ongoing mental model of things which goes beyond what they can directly see. This requires a kind of memory to keep track of the objects that are no longer directly visible, and of their location. The aim of the article is to propose a system for robots that establishes this kind of memory-based inner world.

Imitating the human approach is an interesting and sensible strategy. One pitfall for those trying to build robot consciousness is the temptation to use the power of computers in non-human ways. We need our robot to do arithmetic: no problem! Computers can already do arithmetic much faster and more accurately than mere humans, so we just slap in a calculator module. But that isn’t at all the way the human brain tackles explicit arithmetic, and by not following the human model you risk big problems later.

Much the same is true of memory. Computers can record data in huge quantities with great accuracy and very rapid recall; they are not prone to confabulation, false memories, or vagueness. Why not take advantage of that? But human memory is much less clear-cut; in fact ‘memory’ may be almost as much of a ‘mongrel’ term as consciousness, covering all sorts of abilities to repeat behaviour or summon different contents. I used to work in an office whose doors required a four-digit code. After a week or so we all tapped out each code without thinking, and if we had to tell someone what the digits were we would be obliged to mime the action in mid-air and work out which numbers on the keypad our fingers would have been hitting. In effect, we were using ‘muscle memory’ to recall a four-digit number.

The authors of the article want to produce the same kind of ‘inner world’ used in human thought to support foresight and novel combinations. (They seem to subscribe to an old theory that says new ideas can only be recombinations of things that got into the mind through the senses. We can imagine a gryphon that combines lion and eagle, but not a new beast whose parts resemble nothing we have ever seen. This is another point I would quibble over, but let it pass.)

In fact, the three networks proposed by the authors correspond plausibly with three brain regions: the ventral, dorsal, and prefrontal areas of the cortex. They go on to sketch how the three networks play their roles and report tests that show appropriate responses in respect of object permanence and other features of conscious cognition. Interestingly, they suggest that daydreaming arises naturally within their model, as something that falls unavoidably out of the way the architecture works rather than something selected for by evolution.
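To make the division of labour concrete, here is a minimal sketch (entirely my own illustration, not the authors’ implementation, which is built from neural networks) of the three roles in plain Python: a ‘what’ signal, a ‘where’ signal, and a working memory that keeps a belief about an object alive after it disappears from view – which is just what the object permanence and A-not-B tests probe.

```python
# Toy sketch only: class and parameter names are my own, not the paper's.
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

@dataclass
class Percept:
    """One visible object, as reported by the 'what' and 'where' channels."""
    identity: str               # identification ("what") output
    location: Tuple[int, int]   # localization ("where") output

class WorkingMemory:
    """Keeps a belief about each object's last known location, with decay."""

    def __init__(self, decay_steps: int = 5):
        self.decay_steps = decay_steps
        # identity -> (last known location, steps since last seen)
        self.beliefs: Dict[str, Tuple[Tuple[int, int], int]] = {}

    def update(self, visible: List[Percept]) -> None:
        seen = {p.identity for p in visible}
        for p in visible:                         # refresh objects in view
            self.beliefs[p.identity] = (p.location, 0)
        for identity in list(self.beliefs):       # age the hidden ones
            if identity not in seen:
                location, age = self.beliefs[identity]
                if age + 1 >= self.decay_steps:
                    del self.beliefs[identity]    # the belief finally fades
                else:
                    self.beliefs[identity] = (location, age + 1)

    def where_is(self, identity: str) -> Optional[Tuple[int, int]]:
        entry = self.beliefs.get(identity)
        return entry[0] if entry else None

# An A-not-B style episode: the toy is seen at A, stays tracked while hidden,
# and the belief only moves to B when the move is actually observed.
wm = WorkingMemory()
wm.update([Percept("toy", (0, 0))])   # toy visible at location A
wm.update([])                         # toy occluded; the belief persists
print(wm.where_is("toy"))             # (0, 0) -- object permanence
wm.update([Percept("toy", (5, 5))])   # the move to B is observed
print(wm.where_is("toy"))             # (5, 5)
```

Even this caricature shows why the memory component carries the weight: without the persistence step, the robot ‘forgets’ anything it cannot currently see and fails both tests.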

I’m sometimes sceptical about the role of explicit modelling in conscious processes, as I think it is easily overstated. But I’m comfortable with what’s being suggested here. There is more to be said about how processes like these, which in the first instance deal with concrete objects in the environment, can develop to handle more abstract entities; but you can’t deal with everything in detail in a brief article, and I’m happy that there are very believable development paths that lead naturally to high levels of abstraction.

At the end of the day, we have to ask: is this really consciousness? Yes and no, I’m afraid. Early on in the piece we find:

On the first level, consciousness contains sensations. Our subjective world of experiences is full of them: tastes, smells, colors, itches, pains, sensations of cold, sounds, and so on. This is what philosophers of mind call qualia.

Well, maybe not quite. Sensations, as usually understood, are objective parts of the physical world (though they may be ones with a unique subjective aspect), processes or events which are open to third-person investigation. Qualia are not. It is possible to simply identify qualia with sensations, but that is a reductive, sceptical view. Zombie twin, as I understand him, has sensations, but he does not have qualia.

So what we have here is not a discussion of ‘Hard Problem’ consciousness, and it doesn’t help us in that regard. That’s not a problem; if the sceptics are right, there’s no Hard stuff to account for anyway; and even if the qualophiles are right, an account of the objective physical side of cognition is still a major achievement. As we’ve noted before, the ‘Easy Problem’ ain’t easy…

Forgotten Crimes

Should people be punished for crimes they don’t remember committing? Helen Beebee asks this meaty question.

Why shouldn’t they? I take the argument to have two main points. First, because they don’t remember, it can be argued that they are no longer the same person as the one who committed the crime. Second, if you’re not the person who committed the crime, you can’t be responsible and therefore should not be punished. Both of these points can be challenged.

The idea that not remembering makes you a different person takes memory to be the key criterion of personal identity, a view associated with John Locke among others. But memory is in practice a very poor criterion. If I remember later, do I then become responsible for the crime? We remember unconsciously things we cannot recall explicitly; does unconscious memory count, and if so, how would we know? If I remember only unconsciously, is my unconscious self the same while my conscious one is not, so that perhaps I ought to suffer punishment I’m only aware of unconsciously? If I do not remember details, but have that sick sense of certainty that, yes, I did it alright, am I near enough the same person? What if I have a false, confabulated memory of the crime, but one that happens to be veridical, to tell the essential truth, if inaccurately? Am I responsible? If so, and if false memories will therefore do, then ought I to be held responsible even if in fact I did not commit the crime, so long as I ‘remember’ doing it?

Moreover, aren’t the practical consequences unacceptable? If forgetting the crime exculpates me, I can commit a murder and immediately take amnesic drugs that will make me forget it. If that tactic is itself punishable, I can take a few more drugs and forget even coming up with the idea. Surely few people think it really works as selectively as that. In order to be free of blame, you really need to be a different person, and that implies losing much more than a single memory. Perhaps it requires the loss of most memories, or more plausibly a loss of mentally retained things that go a lot wider than mere factual or experiential memory: my habits of thought, or the continuity of my personality. I think it’s possible that Locke would say something like this if he were still around. So perhaps the case ought to be that if you do not remember the crime, and other features of your self have suffered an unusual discontinuity, such that you would no longer commit a similar crime in similar circumstances, then you are off the hook. How we could establish such a thing forensically is quite another matter, of course.

What about the second point, though? Does the fact that I am now a different, and also a better person, one who doesn’t remember the crime, mean I shouldn’t be punished? Not necessarily. Legally, for example, we might look to the doctrines of joint enterprise and strict liability to show that I can sometimes be held responsible in some degree for crimes I did not directly commit, and even ones which I was powerless to prevent, if I am nevertheless associated with the crime in the required ways.

It partly depends on why we think people should be punished at all. Deterrence is a popular justification, but it does not require that I am really responsible. Being punished for a crime may well deter me and others from attempting similar crimes in future, even if I didn’t do it at all, never mind cases where my responsibility is merely attenuated by loss of memory. The prevention of revenge is another justification that doesn’t necessarily require me to have been fully guilty. Or there might be doctrines of simple justice that hold to the idea of crime being followed by punishment, not because of any consequences that might follow, but just as a primary ethical principle. Under such a justification, it may not matter whether I am responsible in any strong sense. Oedipus did not know he had killed his father, and so could not be held responsible for patricide, at least on most modern understandings; but he still put out his own eyes.

Much more could be said about all that, but for me the foregoing arguments are enough to suggest that memory is not really the point, either for responsibility or for personal identity. Beebee presents an argument about Bruce Banner and the Hulk; she feels Banner cannot directly be held responsible for the mayhem caused by the Hulk. Perhaps not, but surely the issue there is control, not memory. It’s irrelevant whether Banner remembers what the Hulk did; all that matters is whether he could have prevented it. Beebee makes the case for a limited version of responsibility which applies if Banner can prevent the metamorphosis into Hulk in the first place, but I think we have already moved beyond memory, so the fact that this special responsibility does not apply in the real-life case she mentions is not decisive.

One point which I think should be added to the account, though it too is not decisive, is that the loss of responsibility may entail loss of personhood in a wider sense. If we hold that you are no longer the person who committed the crime, you are not entitled to their belongings or rights either. You are not married to their spouse, nor the heir to their parents. Moreover, if we think you are liable to turn into someone else again at some stage, and we know that criminals are, as it were, in your repertoire of personalities, we may feel justified in locking you up anyway; not as a punishment, but as prudent prevention. To avoid consequences like these and retain our integrity as agents, we may feel it is worth claiming our responsibility for certain past crimes, even if we no longer recall them.

Bot Love

John Danaher has given a robust defence of robot love that might cause one to wonder for a moment whether he is fully human himself. People reject the idea of robot love because they say robots are merely programmed to deliver certain patterns of behaviour, he says. They claim that real love would require the robot to have feelings, and freedom of choice. But what are those things, even in the case of human beings? Surely patterns of behaviour are all we’ve got, he suggests, unless you’re some nutty dualist. He quotes Michael Hauskeller…

[I]t is difficult to see what this love… should consist in, if not a certain kind of loving behaviour … if [our lover’s] behaviour toward us is unfailingly caring and loving, and respectful of our needs, then we would not really know what to make of the claim that they do not really love us at all, but only appear to do so.

But on the contrary, such claims are universally accepted and understood as part of normal human life. Literature and reality are full of situations where we suspect and fear (perhaps because of ideas about our own unworthiness rather than anything at all in the lover’s behaviour) that someone may not really love us in the way their behaviour would suggest – and indeed, cases where we hope in the teeth of all behavioural evidence that someone does love us. Such hopes are not meaninglessly incoherent.

It seems, according to Danaher, that behaviourism is not bankrupt and outmoded, as you may have thought. On the contrary, it is obviously true, and further, it is really the only way we have of talking about the mind at all! If there were any residual doubt about his position, he explains…

I have defended this view of human-robot relations under the label ‘ethical behaviourism’, which is a position that holds that the ultimate epistemic grounding for our beliefs about the value of relationships lies in the detectable behavioural and functional patterns of our partners, not in some deeper metaphysical truths about their existence.

The thing is, behaviourism failed because it became too clear that the relevant behavioural patterns are unintelligible or even unrecognisable except in the light of hypotheses about internal mental states (not necessarily internal in any sense that requires fancy metaphysics). You cannot give a list of behavioural responses which correspond to love. Given the right set of background beliefs about what is in someone’s mind, pretty well any behaviour can be loving. We’ve all read those stories where someone believes that their beloved’s safety can only be preserved by willing separation, and so, out of true love, behaves as if they, for their part, were not in love any more. Yes, evidence for emotions is generally behavioural; but it grounds no beliefs about emotions without accompanying beliefs about internal, ‘mentalistic’ states.

The robots we currently have do not by any means have the required internal states, so they are not even candidates to be considered loving; and in fact, they don’t really produce convincing behaviour patterns of the right sort either. Danaher is right that the lack of freedom or of subjective experience looks like a fatal objection to robot love for most people, but myself I would rest most strongly on their lack of intentionality. Nothing means anything to our current, digital computer robots; they don’t, in any meaningful sense, understand that anyone exists, much less have strong feelings about it.

At some points, Danaher seems to be talking about potential future robots rather than anything we already have (I’m beginning to wish philosophers could rein in their habit of getting their ideas about robots from science fiction films). Yes, it’s conceivable that some new future technology might produce robots with genuine emotions; the human brain is, after all, a physical machine in some sense, albeit an inconceivably complex one. But before we can have a meaningful discussion about those future bots, we need to know how they are going to work. It can’t just be magic.

Myself, I see no reason why people shouldn’t have sex bots that perform puppet-level love routines. If we mistake machines for people we run the risk of being tempted to treat people like machines. But at the moment I don’t really think anyone is being fooled, beyond the acknowledged Pygmalion capacity of human beings to fall in love with anything, including inanimate objects that display no behaviour at all. If we started to convince ourselves that we have no more mental life than they do, if somehow behaviourism came lurching zombie-like from the grave – well, then I might be worried!

An AI driving test?

The first case of a pedestrian death caused by a self-driving vehicle has provoked an understandably strong reaction. Are we witnessing the questioning of the Emperor’s new clothes? Have we been living through the latest and greatest wave of hype around AI and its performance? Will self-driving cars come off the road for a generation, or even forever? Or, on the contrary, will we quickly come to accept that fatal accidents are unavoidable, as we have done for so long in the case of human drivers, after all?

It does seem to me that the dialogue around self-driving cars has been a bit unsatisfactory to date. There’s been a surprising amount of discussion about the supposed ethical issues; should the car save the lives of the occupants if doing so involves killing a greater number of bystanders? I think some people have spent too long on trolley problems; these situations never really come up in practice, and ‘try not to have an accident at all’ is probably a perfectly adequate strategy.

More remarkable is the way the cars have been allowed to move on to public roads quite quickly. No doubt the desire of the relevant authorities in various places to stay ahead of the technological curve has something to do with this. But so far as I know there has been almost no sceptical examination of the technology by genuinely impartial observers. The designers have generally retained quite tight control, and by and large their claims have been accepted rather uncritically. I’ve pointed out before that the idea that self-driving machines are safer than humans sits oddly with the fact that current versions all use the human driver as the emergency fall-back. There may in fact have been some tendency to treat the human as a handy repository for all blame; if they don’t intervene, they should have done; if they do intervene, then any accident is their responsibility because the AI was not in control at the moment of disaster.

Our confidence in autonomous vehicles is the stranger because it is quite well known that AI tends to have problems dealing with the unrestricted and unformalised domain of the real world. In the controlled environment of rail systems, AI works fine, and even in the more demanding case of aviation, autopilots have an excellent record – although a plane is out in the world, it normally only deals with predictable conditions and carefully designed runways. To a degree roads can be considered similarly standardised and predictable, of course, but not all roads and certainly not the human beings that frequent them.

It can be argued that AI does not need human levels of understanding to function well; machine translation now turns in a useful performance without even attempting to fathom what the words are about, after all. But even now it has a significant failure rate, and while an egregious mistranslation here and there probably does little harm, a driving mistake is another matter.

Do we therefore need a driving test for AI? Should autonomous vehicles be put through rigorous tests designed to exploit likely weaknesses and administered by neutral or even hostile examiners? I would have thought something like that would be a natural requirement.

The problem might be whether effective tests are possible. Human drivers have recognisable patterns of failure that can be addressed with more training. That may or may not be the case with AIs. I don’t know how advanced the recognition technology being used actually is, but we know that some of the best available systems can behave in ways that are weirdly unpredictable to human beings, with a few pixels in an image exerting unexpected influence. It might be very difficult to test an AI in ways that give good assurance that performance will degrade, if at all, only gradually and manageably, rather than suddenly and catastrophically.
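For the sake of illustration, here is a toy example of that fragility (my own, and nothing to do with any real vehicle’s perception stack): a simple linear detector whose confident decision is flipped by altering just ten of the pixels it happens to weight most heavily, a change a human observer would hardly notice.

```python
# Toy demonstration only: a linear 'detector' on a 28x28 image, standing in
# for a vastly more complex real recognition system.
import numpy as np

rng = np.random.default_rng(0)
d = 28 * 28
w = rng.normal(size=d)              # stand-in for a trained detector's weights
x = rng.uniform(0.0, 1.0, size=d)   # a toy "image", flattened
b = 3.0 - float(w @ x)              # bias chosen so the clean image scores +3

def score(img):
    """Detector output: positive means 'object present'."""
    return float(w @ img + b)

print(round(score(x), 2))           # 3.0 -- a confident detection

# Change only the ten pixels the detector weights most heavily, pushing each
# to whichever extreme (0 or 1) works against the current decision.
idx = np.argsort(-np.abs(w))[:10]
x_adv = x.copy()
x_adv[idx] = np.where(w[idx] > 0, 0.0, 1.0)

print(round(score(x_adv), 2))       # typically strongly negative: decision flips
print(int(np.sum(x_adv != x)))      # 10 -- out of 784 pixels changed
```

Real perception networks are nonlinear, of course, but the same sensitivity to small, targeted input changes has been demonstrated for them repeatedly, which is part of what makes gradual, predictable degradation hard to guarantee.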

In this context, footage of the recent accident is disturbing. The car simply ploughs right into a clearly visible pedestrian wheeling a bike across the road. It’s hard to see how this can be explained or how it can be consistent with a predictably safe system. I hope we’re not just going to be told it was the fault of the human co-pilot (who reacted too slowly, but surely isn’t supposed to have to be ready for an emergency stop at any moment?) – or worse, of the victim.

Anthropic Consciousness

Stephen Hawking’s recent death caused many to glance regretfully at the unread copies of A Brief History Of Time on their bookshelves. I don’t even own one, but I did read The Grand Design, written with Leonard Mlodinow, and discussed it here. It’s a bold attempt to answer the big questions about why the universe even exists, and I suggested back then that it showed signs of an impatience for answers which is characteristic of scientists, at least as compared with philosophers. One sign of Hawking’s impatience was his readiness to embrace a version of the rather dodgy Anthropic Principle as part of the foundations of his case.

In fact there are many flavours of the Anthropic Principle. The mild but relatively uninteresting version merely says we shouldn’t be all that surprised about being here, because if we hadn’t been here we wouldn’t have been thinking about it at all. Is it an amazing piece of luck that from among all the millions of potential children our parents were capable of engendering, we were the ones who got born? In a way, yes, but then whoever did get born would have had the same perspective. In a similar way, it’s not that surprising that the universe seems designed to accommodate human beings, because if it hadn’t been that way, no-one would be worrying about it.

That’s alright, but the stronger versions of the Principle make much more dubious claims, implying that our existence as observers really might have called the world into existence in some stronger sense. If I understood them correctly, Hawking and Mlodinow pitched their camp in this difficult territory.

Here at Conscious Entities we do sometimes glance at the cosmic questions, but our core subject is of course consciousness. So for us the natural question is, could there be an Anthropic-style explanation of consciousness? Well, we could certainly have a mild version of the argument, which would simply say that we shouldn’t be surprised that consciousness exists, because if it didn’t no-one would be thinking about it. That’s fine but unsatisfying.

Is there a stronger version in which our conscious experience creates the preconditions for itself? I can think of one argument which is a bit like that. Let me begin by proposing an analogy in the supposed Problem of Focus.

The Problem of Focus notes that the human eye has the extraordinary power of drawing in beams of light from all the objects around it. Somehow every surface around us is impelled to send rays right in to that weirdly powerful metaphysical entity which resides in our eyes, the Focus. Some philosophers deny that there is a single Focus in each eye, suggesting it changes constantly. Some say the whole idea of a Focus with special powers is an illusion, a misconception of perfectly normal physical processes. Others respond that the facts of optometry and vision just show that denying the existence of Focus is in practice impossible; even the sceptics wear glasses!

I don’t suppose anyone will be detained for long by worries about the Problem of Focus; but what if we remove the beams of light and substitute instead the power of intentionality, ie our mental ability to think about things? Being able to focus on an item mentally is clearly a useful ability, allowing us to target our behaviour more effectively. We can think of intentionality as a system of pointers, or lines connecting us to the object being thought of. Lines, however, have two ends, so the back end of these ones must converge in a single point. Isn’t it remarkable that this single focus point is able to draw together the contents of consciousness in a way which in fact generates that very state of awareness?

Alright, I’m no Hawking…

Brain Preservation Prize

The prize offered by the Brain Preservation Foundation has been won by 21st Century Medicine (21CM) with the Aldehyde-Stabilized Cryopreservation (ASC) technique they have developed. In essence this combines chemical and cryogenic approaches and is apparently capable of preserving the whole connectome (or neural network) of a large mammalian brain (a pig brain here) in full detail and indefinitely. That is a remarkable achievement. A paper is here.

I am an advisor to the BPF, though I should make it clear that they don’t pay me and I haven’t given them a great deal of advice. I’ve always said I would be a critical friend, in that I doubt this research is ever going to lead to personal survival of the self whose brain is preserved. However, in my opinion it is much more realistic than a pure scan-and-upload approach, and has the potential to yield many interesting benefits even if it never yields personal immortality.

One great advantage of preserving the brain like this is that it defers some choices. When we model a brain or attempt to scan it into software, we have to pick out the features we think are salient, and concentrate on those. Since we don’t yet have any comprehensive and certain account of how the brain functions, we might easily be missing something essential. If we keep the whole of an actual brain, we don’t have to make such detailed choices and have a better chance of preserving features whose importance we haven’t yet recognised.

It’s still possible that we might lose something essential, of course. ASC, not unreasonably, concentrates on preserving the connectome. I’m not sure whether, for example, it also keeps the brain’s astrocytes in good condition, though I’d guess it probably does. These are the non-neural cells which have often been regarded as mere packing, but which may in fact have significant roles to play. Recently we’ve heard that neurons appear to signal with RNA packets; again, I don’t know whether ASC preserves any information about that – though it might. But even on a pessimistic view, ASC must in my view be a far better preservation proposition than digital models that explicitly drop the detailed structure of individual neurons in favour of an unrealistically standardised model, and struggle with many other features.

Preserving brains in fine detail is a worthy project in itself, which might yield big benefits to research in due course. But of course the project embodies the hope that the contents of a mind and even the personality of an individual could be delivered to posterity. I do not think the contents of a mind are likely to be recoverable from a preserved brain yet awhile, but in the long run, why not? On identity, I am a believer in brute physical continuity. We are our brains, I believe (I wave my hands to indicate various caveats and qualifications which need not sideline us here). If we want to retain our personal identity, then, the actual physical preservation of the brain is essential.

Now, once your brain has been preserved by ASC, it really isn’t going to be started up again in its existing physical form. The Foundation looks to uploading at this stage, but because I don’t think digital uploading as we now envision it is possible in principle, I don’t see that ever working. However, there is a tiny chink of light at the end of that gloomy tunnel. My main problem is with the computational nature of uploading as currently envisaged. It is conceivable that the future will bring non-computational technologies which just might allow us to upload, not ourselves, but a kind of mental twin at least. That’s a remote speculation, but still a fascinating prospect. Is it just conceivable that ways might be found to go that little bit further and deliver some kind of actual physical interaction between these hypothetical machines and the essential slivers of a preserved brain, some echo such that identity was preserved? Honestly, I think not, but I won’t quite say it is inconceivable. You could say that in my view the huge advantage of the brain preservation strategy for achieving immortality is that unlike its rivals it falls just short of being impossible in principle.

So I suppose, to paraphrase Gimli the dwarf: certainty of death; microscopic chance of success – what are we waiting for?

Postscript: I meant by that last bit that we should continue research, but I see it is open to misinterpretation. I didn’t dream people would actually do this, but I read that Robert McIntyre, lead author of the paper linked above, is floating a startup to apply the technique to people who are not yet dead. That would surely be unethical. If suicide were legal and if you had decided that was your preferred option, you might reasonably choose a method with a tiny chance of being revived in future. But I don’t think you can ask people to pay for a technique (surely still inadequately tested and developed for human beings) where the prospects of revival are currently negligible and most likely will remain so.

On the phone or in the phone?

At Aeon, Karina Vold asks whether our smartphones have truly become part of us, and if so whether they deserve new legal protections. She quotes grisly examples of the authorities using a dead man’s finger to try to activate fingerprint recognition on protected devices.

There are several parts to the argument here. One is derived fairly straightforwardly from the extended mind theory. According to this point of view, we are not simply our brains, nor even our bodies. When we use virtual reality devices we may feel ourselves to be elsewhere; a computer can give us cognitive abilities that we can use naturally but would not have been available from our simple biological nervous system. Even in the case of simpler technologies we may feel we are extended. Driving, I sometimes think of the car as ‘me’ in at least a limited sense. If I feel my way with a stick, I feel the ground through the stick, rather than feeling the movement of the stick and making conscious inferences about the ground. Our mind goes out further than we might have thought.

We can probably accept that there is at least some truth in that outlook. But we should also note an important qualification, namely that these things are a matter of degree. A stick in my hand may temporarily become like an extension of my limbs, but it remains temporary and liminal. It never becomes a core part of me in the way that my frontal lobes are. The argument for an extended mind is for a looser and more ambivalent border to the self, not just a wider one.

The second part of the argument is that while the authorities can legitimately seize our property, our minds are legally protected. Vold cites the right to silence, as well as restrictions on the use of drugs and lie detectors. She also quotes a judge to the effect that we are secure in the sanctum of our minds anyway, because there simply isn’t any way the authorities can intervene in there. They can control our behaviour, but not our thoughts.

One problem is that the ethical rationale for the right to remain silent is completely opaque to me. I have no idea what justifies our letting people remain silent in cases where they have information that is legitimately needed. A duty to disclose makes a lot more sense to me. Perhaps this principle is just a strongly-reinforced protection against the possibility of torture, in that removing the right of the authorities to have the information at all cuts off at the root any right to use pain as a means of prising it out? If so, the protection still seems excessive to me.

I also think the distinction between the ability to control behaviour and the ability to control thoughts is less absolute than might appear. True, we cannot read or implant thoughts themselves. But then it’s extremely difficult to control every action, too. The power of brainwashing techniques has often been overestimated, but the authorities can control information, use persuasive methods and even those forbidden drugs to get what they want. The Stoics, enjoying a bit of a revival in popularity these days, thought that in a broken world you could at least stay in control of your own mind; but it ain’t necessarily so; if they really want to, they can make you love Big Brother.

Still, let’s broadly accept that attempts at direct intervention in the mind are repugnant in ways that restraint of the body is not, and let’s also accept that my smart phone can in some extended sense be regarded as part of my mind. Does it then follow that my phone needs new legal protections in order to preserve the integrity of my personal boundaries?

The word ‘new’ in there is the one that gives me the final problem. Mind extension is not a new thing; if sticks can be part of it, then it’s nearly as old as humanity. Notebooks and encyclopaedias add to our minds, and have been around for a long time. Virtual reality has a special power, but films and even oil paintings sort of do the same job. What’s really new?

I think there is an implicit claim here that phones and other devices are special, because what they do is computation, and that’s what your brain does too. So they become one with our minds in a way that nothing else does. I think that’s just false. Regulars will know I don’t think computation is the essence of thought anyway. But even if it were, the computations done in a phone are completely disconnected from those going on in the brain. Virtual reality may merge with our experience, but what it gives our mind is the outputs of the computation; we never experience the computations themselves. It may hypothetically be the case that future technology will do this, and genuinely merge our thoughts into the data of some advanced machine (I think not, of course); but the idea that we are already at that point and that in fact smartphones already do this is a radical overstatement.

So although existing law may well be improvable, I don’t see a case in principle for any new protections.


The Philosophy of Delirium

Is there any philosophy of delirium? I remember asserting breezily in the past that there was philosophy of everything – including the actual philosophy of everything and the philosophy of philosophy. But when asked recently, I couldn’t come up with anything specifically on delirium, which in a way is surprising, given that it is an interesting mental state.

Hume, I gather, described two diseases of philosophy, characterised by either despair or unrealistic optimism in the face of the special difficulties a philosopher faces. The negative over-reaction he characterised as melancholy, the positive as delirium, in its euphoric sense. But that is not what we are after.

Historically I think that if delirium came up in discussion at all, it was bracketed with other delusional states, hallucinations and errors. Those, of course, have an abundant literature going back many centuries. The possibility of error in our perceptions has been responsible for the persistent (but surely erroneous) view that we never perceive reality, only sense-data, or only our idea of reality, or only a cognitive model of reality. The search for certainty in the face of the constant possibility of error motivated Descartes and arguably most of epistemology.

Clinically, delirium is an organically caused state of confusion. Philosophically, I suggest we should seize on another feature, namely that it can involve derangement of both perception and cognition. Let’s use the special power of fiat used by philosophers to create new races of zombies, generate second earths, and enslave the population of China, and say that philosophical delirium is defined exactly as that particular conjunction of derangements. So we can then define three distinct kinds of mental disturbance. First, delusion, where our thinking mind is working fine but has bizarre perceptions presented to it. Second, madness, where our perceptions are fine, but our mental responses make no sense. Third, delirium, in which distorted perceptions meet with distorted cognition.

The question then is: can delirium, so defined, actually be distinguished from delusion and madness? Suppose we have a subject who persistently tries to eat their hat. One reading is that the subject perceives the Homburg as a hamburger. The second reading is that they perceive the hat correctly, but think it is appropriate to eat hats. The delirious reading might be that they see the hat as a shoe and believe shoes are to be eaten. For any possible set of behaviour it seems different readings will achieve consistency with any of the three possible states.

That’s from a third person point of view, of course, but surely the subject knows which state applies? They can’t reliably tell us, because their utterances are open to multiple interpretations too, but inwardly they know, don’t they? Well, no. The deluded person thinks the world really is bizarre; the mad one is presumably unaware of the madness, and the delirious subject is barred from knowing the true position on both counts. Does it, then, make any sense to uphold the existence of any real distinction? Might we not better say that the three possibilities are really no more than rival diagnostic strategies, which may or may not work better in different cases, but have no absolute validity?

Can we perhaps fall back on consistency? Someone with delusions may see a convincing oasis out in the desert, but if a moment later it becomes a mountain, rational faculties will allow them to notice that something is amiss, and hypothesise that their sensory inputs are unreliable. However, a subject of Cartesian calibre would have to consider the possibility that they are actually just mistaken in their beliefs about their own experiences; in fact it always seemed to be a mountain. So once again the distinctions fall away.

Delusion and madness are all very well in their way, but delirium has a unique appeal in that it could be invisible. Suppose my perceptions are all subject to a consistent but complex form of distortion; but my responses have an exquisitely apposite complementary twist, which means that the two sets of errors cancel out and my actual behaviour and everything that I say come out pretty much like those of some tediously sane and normal character. I am as delirious as can be, but you’d never know. Would I know? My mental states are so addled and my grip on reality so contorted, it hardly seems I could know anything; but if you question me about what I’m thinking, my responses all sound perfectly fine, just like those of my twin who doesn’t have invisible delirium.

We might be tempted to say that invisible delirium is no delirium; my thoughts are determined by the functioning of my cognitive processes, and since those end up working fine, it makes no sense to believe in some inner place where things go all wrong for a while.

But what if I get super invisible delirium? In this wonderful syndrome, my inputs and outputs are mangled in complementary ways again, but by great good fortune the garbled version actually works faster and better than normal. Far from seeming confused, I now seem to understand stuff better and more deeply than before. After all, isn’t reaching this kind of state why people spend time meditating and doing drugs?

But perhaps I am falling prey to the euphoric condition diagnosed by Hume…

The Meta-Problem

Maybe there’s a better strategy on consciousness? An early draft paper by David Chalmers suggests we turn from the Hard Problem (explaining why there is ‘something it is like’ to experience things) and address the Meta-Problem of why people think there is a Hard Problem: why we find the explanation of phenomenal experience problematic. While the paper does make broadly clear what Chalmers’ own views are, it primarily seeks to map the territory, and does so in a way that is very useful.

Why would we decide to focus on the Meta-Problem? For sceptics, who don’t believe in phenomenal experience or think that the apparent problems about it stem from mistakes and delusions, it’s a natural piece of tidying up. In fact, for sceptics why people think there’s a problem may well be the only thing that really needs explaining or is capable of explanation. But Chalmers is not a sceptic. Although he acknowledges the merits of the broad sceptical case about phenomenal consciousness which Keith Frankish has recently championed under the label of illusionism, he believes it is indeed real and problematic. He believes, however, that illuminating the Meta-Problem through a programme of thoughtful and empirical research might well help solve the Hard Problem itself, and is a matter of interest well beyond sceptical circles.

To put my cards on the table, I think he is over-optimistic, and seems to take too much comfort from the fact that there have to be physical and functional explanations for everything. It follows from that that there must indeed at least be physical and functional explanations for our reports of experience, our reports of the problem, and our dispositions to speak of phenomenal experience, qualia, etc. But it does not follow that there must be adequate and satisfying explanations.

Certainly physical and functional explanations alone would not be good enough to banish our worries about phenomenal experience. They would not make the itch go away. In fact, I would argue that they are not even adequate for issues to do with the ‘Easy Problem’, roughly the question of how consciousness allows us to produce intelligent and well-directed behaviour. We usually look for higher-level explanations even there; notably explanations with an element of teleology – ones that tell us what things are for or what they are supposed to do. Such explanations can normally be cashed out safely in non-teleological terms, such as strictly-worded evolutionary accounts; but that does not mean they are dispensable, or that we could understand properly without them.

How much more challenging things are when we come to Hard Problem issues, where a claim that they lie beyond physics is of the essence. Chalmers’ optimism is encapsulated in a sentence when he says…

Presumably there is at least a very close tie between the mechanisms that generate phenomenal reports and consciousness itself.

There’s your problem. Illusionists can be content with explanations that never touch on phenomenal consciousness because they don’t think it exists, but no explanation that does not connect with it will satisfy qualophiles. But how can you connect with a phenomenon explanatorily without diagnosing its nature? It really seems that for believers, we have to solve the Hard Problem first (or at least, simultaneously) because believers are constrained to say that the appearance of a problem arises from a real problem.

Logically, that is not quite the case; we could say that our dispositions to talk about phenomenal experience arise from merely material causes, but just happen to be truthful about a second world of phenomenal experience, or are truthful in light of a Leibnizian pre-established harmony. Some qualophiles are similarly prepared to say that their utterances about qualia are not caused by qualia, so that position might seem appealing in some quarters. To me the harmonised second world seems hopelessly redundant, and that is why something like illusionism is, at the end of the day, the only game in town.

I should make it clear that Chalmers by no means neglects the question of what sort of explanation will do; in fact he provides a rich and characteristically thorough discussion. It’s more that in my opinion, he just doesn’t know when he’s beaten, which to be fair may be an outlook essential to the conduct of philosophy.

I say that something like illusionism seems to be the only game in town, though I don’t quite call myself an illusionist. There’s a presentational difficulty for me because I think the reality of experience, in an appropriate sense, is the nub of the matter. But you could situate my view as the form of illusionism which says the appearance of ineffable phenomenal experience arises from the mistaken assumption that particular real experiences should be within the explanatory scope of general physical theory.

I won’t attempt to summarise the whole of Chalmers’ discussion, which is detailed and illuminating; although I think he is doomed to disappointment, the project he proposes might well yield good new insights; it’s often been the case that false philosophical positions were more fecund than true ones.