Tom Clark has an interesting paper on Experience and Autonomy: Why Consciousness Does and Doesn’t Matter, due to appear as a chapter in Exploring the Illusion of Free Will and Responsibility (if your heart sinks at the idea of discussing free will one more time, don’t despair: this is not the same old stuff).

In essence Clark wants to propose a naturalised conception of free will and responsibility and he seeks to dispel three particular worries about the role of consciousness: that it might be an epiphenomenon, a passenger along for the ride with no real control; that conscious processes are not in charge, but are subject to manipulation and direction by unconscious ones; and that our conception of ourselves as folk-dualist agents, able to step outside the processes of physical causation but still able to intervene in them effectively, is threatened. He makes it clear that he is championing phenomenal consciousness, that is, the consciousness which provides real if private experiences in our minds; not the sort of cognitive rational processing that an unfeeling zombie would do equally well. I think he succeeds in being clear about this, though it’s a bit of a challenge because phenomenal consciousness is typically discussed in the context of perception, while rational decision-making tends to be seen in the context of the ‘easy problem’ – zombies can make the same decisions as us and even give the same rationales. When we talk about phenomenal consciousness being relevant to our decisions, I take it we mean something like our being able to sincerely claim that we ‘thought about’ a given decision in the sense that we had actual experience of relevant thoughts passing through our minds. A zombie twin would make identical claims but the claims would, unknown to the zombie, be false, a rather disturbing idea.

I won’t consider all of Clark’s arguments (which I am generally in sympathy with), but there are a few nice ones which I found thought-provoking. On epiphenomenalism, Clark has a neat manoeuvre. A commonly used example of an epiphenomenon, first proposed by Huxley, is the whistle on a steam locomotive; the boiler, the pistons, and the wheels all play a part in the causal story which culminates in the engine moving down the track; the whistle is there too, but not part of that story. Now discussion has sometimes been handicapped by the existence of two different conceptions of epiphenomenalism; a rigorous one in which there really must be no causal effects at all, and a looser one in which there may be some causal effects but only ones that are irrelevant, subliminal, or otherwise ignorable. I tend towards the rigorous conception myself, and have consequently argued in the past that the whistle on a steam engine is not really a good example. Blowing the whistle lets steam out of the boiler which does have real effects. Typically they may be small, but in principle a long enough blast can stop a train altogether.

But Clark reverses that unexpectedly. He argues that in order to be considered an epiphenomenon an entity has to be the sort of thing that might have had a causal role in the process. So the whistle is a good example; but because consciousness is outside the third-person account of things altogether, it isn’t even a candidate to be an epiphenomenon! Although that inverts my own outlook, I think it’s a pretty neat piece of footwork. If I wanted a come-back I think I would let Clark have his version of epiphenomenalism and define a new kind, x-epiphenomenalism, which doesn’t require an entity to be the kind of thing that could have a causal role; I’d then argue that consciousness being x-epiphenomenal is just as worrying as the old problem. No doubt Clark in turn might come back and argue that all kinds of unworrying things were going to turn out to be x-epiphenomenal on that basis, and so on; however, since I don’t have any great desire to defend epiphenomenalism I won’t even start down that road.

On the second worry Clark gives a sensible response to the issues raised by the research of Libet and others, which suggests our decisions are determined internally before they ever enter our consciousness; but I was especially struck by his arguments on the potential influence of unconscious factors, which form an important part of his wider case. There is a vast weight of scientific evidence to show that often enough our choices are influenced or even determined by factors we’re not aware of; Clark gives a few examples but there are many more. Perhaps consciousness is not the chief executive of our minds after all, just the PR department?

Clark nibbles the bullet a bit here, accepting that unconscious influence does happen, but arguing that when we are aware of say, ethnic bias or other factors, we can consciously fight against it and second-guess our unworthier unconscious impulses. I like the idea that it’s when we battle our own primitive inclinations that we become most truly ourselves; but the issues get pretty complicated.

As a side issue, Clark’s examples all suppose that more or less wicked unconscious biases are to be defeated by a more ethical conscious conception of ourselves (rather reminiscent of those cartoon disputes between an angel on the character’s right shoulder and a devil on the left); but it ain’t necessarily so. What if my conscious mind rules out on principled but sectarian grounds a marriage to someone I sincerely love with my unconscious inclinations? I’m not clear that the sectarian is to be considered the representative of virtue (or of my essential personal agency) more than the lover.

That’s not the point at all, of course: Clark is not arguing that consciousness is always right, only that it has a genuine role. However, the position is never going to be clear. Suppose I am inclined to vote against candidate N, who has a big nose. I tell myself I should vote for him because it’s the schnozz that is putting me off. Oh no, I tell myself, it’s his policies I don’t like, not his nose at all. Ah, but you would think that, I tell myself, you’re bound to be unaware of the bias, so you need to aim off a bit. How much do I aim off, though – am I to vote for all big-nosed candidates regardless? Surely I might also have legitimate grounds for disliking them? And does that ‘aiming off’ really give my consciousness a proper role or merely defer to some external set of rules?

Worse yet, as I leave the polling station it suddenly occurs to me that the truth is, the nose had nothing to do with it; I really voted for N because I’m biased in favour of white middle-aged males; my unconscious fabricated the stuff about the nose to give me a plausible cover story while achieving its own ends. Or did it? Because the influences I’m fighting are unconscious, how will I ever know what they really are, and if I don’t know, doesn’t the claimed role of consciousness become merely a matter of faith? It could always turn out that if I really knew what was going on, I’d see my consciousness was having its strings pulled all the time. Consciousness can present a rationale which it claims was effective, but it could always do that; it can never know whether the rationale was really a mask for unconscious machinations.

The last of the three worries tackled by Clark is not strictly a philosophical or scientific one; we might well say that if people’s folk-dualist ideas are threatened, so much the worse for them. There is, however, some evidence that undiluted materialism does induce what Clark calls a “puppet” outlook in which people’s sense of moral responsibility is weakened and their behaviour worsened. Clark provides rational answers but his views tend to put him in the position of conceding that something has indeed been lost. Consciousness does and doesn’t matter. I don’t think anything worth having can be lost by getting closer to the truth and I don’t think a properly materialist outlook is necessarily morally corrosive – even in a small degree. I think what we’re really lacking for the moment is a sufficiently inspiring, cogent, and understood naturalised ethics to go with our naturalised view of the mind. There’s much to be done on that, but it’s far from hopeless (as I expect Clark might agree).

There’s much more in the paper than I have touched on here; I recommend a look at it.

Massimo Pigliucci issued a spirited counterblast to computationalism recently, which I picked up on MLU. He says that people too often read the Turing-Church hypothesis as if it said that a Universal Turing Machine could do anything that any machine could do. They then take that as a basis on which to help themselves to computationalism. He quotes Jack Copeland as saying that a myth has arisen on the matter, and citing examples where he feels that Dennett and the Churchlands have mis-stated the position. Actually, says Pigliucci, Turing merely tells us that a Universal Turing Machine can do anything a specific Turing machine can do, and that does not tell us what real-world machines can or can’t do.

It’s possible some nits are being picked here.  Copeland’s reported view seems a trifle too puritanical in its refusal to look at wider implications; I think Turing himself would have been surprised to hear that his work told us nothing about the potential capacities of real world digital computers. But of course Pigliucci is quite right that it doesn’t establish that the brain is computational. Indeed, Turing’s main point was arguably about the limits of computation, showing that there are problems that cannot be handled computationally. It’s sort of part of our bedrock understanding of computation that there are many non-computable problems; apart from the original halting problem the tiling problem may be the most familiar. Tiling problems are associated with the ingenious work of Roger Penrose, and he, of course, published many years ago now what he claims is a proof that when mathematicians are thinking original mathematical thoughts they are not computing.

So really Pigliucci’s moderate conclusion that computationalism remains an open issue ought to be uncontroversial? Surely no-one really supposes that the debate is over? Strangely enough there does seem to have been a bit of a revival in hard-line computationalism. Pigliucci goes on to look at pancomputationalism, the view that every natural process is instantiating a computation (or even all possible computations). This is rather like the view John Searle once proposed, that a window can be seen as a computer because it has two states, open and closed, which are enough to express a stream of binary digits. I don’t propose to go into that in any detail, except to say I think I broadly agree with Pigliucci that it requires an excessively liberal use of interpretation. In particular, I think in order to interpret everything as a computation, we generally have to allow ourselves to interpret the same physical state of the object as different computational states at different times, and that’s not really legitimate. If I can do that I can interpret myself into being a wizard, because I’m free to interpret my physical self as human at one time, a dragon at another, and a fluffy pink bunny at a third.

But without being pancomputationalists we might wonder why the limits of computation don’t hit us in the face more often. The world is full of non-computable problems, but they rarely seem to give us much difficulty. Why is that? One answer might be in the amusing argument put by Ray Kurzweil in his book How to Create a Mind. Kurzweil espouses a doctrine called the “Universality of Computation” which he glosses as “the concept that a general-purpose computer can implement any algorithm”. I wonder whether that would attract a look of magisterial disapproval from Jack Copeland? Anyway, Kurzweil describes a non-computable problem known as the ‘busy beaver’ problem. The task here is to work out for a given value of n what the maximum number of ones written by any Turing machine with n states will be. The problem is uncomputable in general because as the computer (a Universal Turing Machine) works through the simulation of all the machines with n states, it runs into some that get stuck in a loop and don’t halt.

So, says Kurzweil, an example of the terrible weakness of computers when set against the human mind? Yet for many values of n it happens that the problem is solvable, and as a matter of fact computers have solved many such particular cases – many more than have actually been solved by unaided human thought! I think Turing would have liked that; it resembles points he made in his famous 1950 essay on Computing Machinery and Intelligence.
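Just to make the ‘particular cases’ point concrete, here is a rough Python sketch of how the smallest busy beaver values can be brute-forced. The names and the step cap are my own choices, not anything from Kurzweil; and note that treating machines which outrun the cap as non-halting is only safe for tiny n, where the true maximum running times of halting machines are already known to be small.

```python
from itertools import product

HALT = "H"

def busy_beaver(n, step_limit=100):
    """Brute-force the busy beaver value for n-state, 2-symbol machines:
    the most 1s any halting machine leaves on an initially blank tape.
    Machines exceeding step_limit are assumed non-halting (safe for tiny n)."""
    # One transition per (state, symbol): (symbol to write, head move, next state)
    choices = list(product((0, 1), (-1, 1), list(range(n)) + [HALT]))
    best = 0
    for table in product(choices, repeat=2 * n):
        tape, head, state = {}, 0, 0
        for _ in range(step_limit):
            write, move, nxt = table[2 * state + tape.get(head, 0)]
            tape[head] = write
            head += move
            state = nxt
            if state == HALT:
                best = max(best, sum(tape.values()))
                break
    return best

print(busy_beaver(1))  # 1
print(busy_beaver(2))  # 4
```

Beyond four or five states the enumeration explodes and the step cap can no longer be trusted, which is rather the point of the problem.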

Standing aside from the fray a little, the thing that really strikes me is that the argument seems such a blast from the past. This kind of thing was chewed over with great energy twenty or even thirty years ago, and in some respects it doesn’t seem as important as it used to. I doubt whether consciousness is purely computational, but it may well be subserved, or be capable of being subserved, by computational processes in important ways. When we finally get an artificial consciousness, it wouldn’t surprise me if the heavy lifting is done by computational modules which either relate in a non-computational way or rely on non-computational processing, perhaps in pattern recognition, though Kurzweil would surely hate the idea that that key process might not be computed. I doubt whether the proud inventor on that happy day will be very concerned with the question of whether his machine is computational or not.

2012 was Alan Turing Year, marking the centenary of his birth. The British Government, a little late perhaps, announced recently that it would support a Bill giving Turing a posthumous pardon; Gordon Brown, then the Prime Minister, had already issued an official apology in 2009. As you probably know, Turing, who was gay, was threatened with blackmail by one of his lovers (homosexuality being still illegal at the time) and reported the matter to the authorities; he was then tried and convicted and offered a choice of going to jail or taking hormones, effectively a form of oestrogen. He chose the latter, but subsequently died of cyanide poisoning in what is generally believed to have been suicide, leaving by his bed a partly-eaten apple, thought by many to be a poignant allusion to the story of Snow White. In fact it is not clear that the apple had any significance or that his death was actually suicide.

The pardon was widely but not universally welcomed: some thought it an empty  gesture; some asked why Turing alone should be pardoned; and some even saw it as an insult, confirming by implication that Turing’s homosexuality was indeed an offence that needed to be forgiven.

Turing is generally celebrated for wartime work at Bletchley Park, the code-breaking centre, and for his work on the Halting Problem: on the latter he was pipped at the post by Alonzo Church, but his solution included the elegant formalisation of the idea of digital computing embodied in the Turing Machine, recognised as the foundation stone of modern computing. In a famous paper from 1950 he also effectively launched the field of Artificial Intelligence, and it is here that we find what we now call the Turing Test, a much-debated proposal that the ability of machines to think might be tested by having a short conversation with them.

Turing’s optimism about artificial intelligence has not been justified by developments since: he thought the Test would be passed by the end of the twentieth century. For many years the Loebner Prize contest has invited contestants to provide computerised interlocutors to be put through a real Turing Test by a panel of human judges, who attempt to tell which of their conversational partners, communicating remotely by text on a screen, is human and which machine. None of the ‘chat-bots’ has succeeded in passing itself off as human so far – but then, so far as I can tell, none of the candidates ever pretended to be a genuinely thinking machine; they’re simply designed to scrape through the test by means of various cunning tricks – so according to Turing, none of them should have succeeded.

One lesson which has emerged from the years of trials – often inadvertently hilarious – is that success depends strongly on the judges. If the judge allows the chat-bot to take the lead and steer the conversation, a good impression is likely to be possible; but judges who try to make things difficult for the computer never fail. So how do you go about tripping up a chat-bot?

Well, we could try testing its general knowledge. Human beings have a vast repository of facts, which even the largest computer finds it difficult to match. One problem with this approach is that human beings cannot be relied on to know anything in particular – not knowing the year of the Battle of Hastings, for example, does not prove that you’re not human. The second problem is that computers have been getting much better at this. Some clever chat-bots these days are permanently accessible online; they save the inputs made by casual visitors and later discreetly feed them back to another subject, noting the response for future use. Over time they accumulate a large database of what humans say in these circumstances and what other humans say in response. The really clever part of this strategy is that not only does it provide good responses, it means your database is automatically weighted towards the most likely topics and queries. It turns out that human beings are fairly predictable, and so the chat-bot can come back with responses that are sometimes eerily good, embodying human-style jokes, finishing quotations, apparently picking up web-culture references, and so on.
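A toy version of that input-harvesting trick can be sketched in a few lines of Python. Everything here – the class name, the stock deflection, replying with the first stored answer – is invented for illustration; real bots match inputs by similarity rather than exact text and pick among many stored replies.

```python
from collections import defaultdict

class EchoBot:
    """Toy chat-bot that recycles what previous visitors have typed."""

    def __init__(self):
        self.heard = []                  # every line any visitor has typed
        self.memory = defaultdict(list)  # line we said -> replies humans gave to it
        self.last_said = None

    def reply(self, user_input):
        # The visitor's message is a plausible human response to whatever
        # we said last; squirrel it away for future conversations.
        if self.last_said is not None:
            self.memory[self.last_said].append(user_input)
        self.heard.append(user_input)
        # Prefer a reply some human once gave to this very line;
        # failing that, feed back another visitor's saved line verbatim.
        if self.memory.get(user_input):
            self.last_said = self.memory[user_input][0]
        else:
            self.last_said = self.heard[0]
        return self.last_said
```

Over many visitors the memory table fills up with genuinely human replies to the most common inputs, which is exactly what makes the responses eerily good.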

If we’re subtle we might try to turn this tactic of saving real human input against the chat-bot, looking for responses that seem more appropriate for someone speaking to a chat-bot than someone engaging in normal conversation, or perhaps referring to earlier phases of the conversation that never happened. But this is a tricky strategy to rely on, generally requiring some luck.

Perhaps rather than trying established facts, it might be better to ask the chat-bot questions which have never been asked before in the entire history of the world, but which any human can easily answer. When was the last time a mole fought an octopus? How many emeralds were in the crown worn by Shakespeare during his visit to Tashkent?

It might be possible to make things a little more difficult for the chat-bot by asking questions that require an answer in a specific format; but it’s hard to do that effectively in a Turing Test because normal usage is generally extremely flexible about what it will accept as an answer; and failing to match the prescribed format might be more human rather than less. Moreover, rephrasing is another field where the computers have come on a lot: we only have to think of the Watson system’s performance at the quiz game Jeopardy, which besides rapid retrieval of facts required just this kind of reformulation.

So it might be better to move away from general stuff and ask the chat-bot about specifics that any human would know but which are unlikely to be in a database – the weather outside, which hotel it is supposedly staying at. Perhaps we should ask it about its mother, as they did in similar circumstances in Blade Runner, though probably not for her maiden name.

On a different tack, we might try to exploit the weakness of many chat-bots when it comes to holding a context: instead of falling into the standard rhythm of one input, one response, we can allude to something we mentioned three inputs ago. Although they have got a little better, most chat-bots still seem to have great difficulty maintaining a topic across several inputs or ensuring consistency of response. Being cruel, we might deliberately introduce oddities that the bot needs to remember: we tell it our cat is called Fish  and then a little later ask whether it thinks the Fish we mentioned likes to swim.

Wherever possible we should fall back on Gricean implicature and provide good enough clues without spelling things out. Perhaps we could observe to the chat-bot that poor grammar is very human – which to a human more or less invites an ungrammatical response, although of course we can never rely on a real human’s getting the point. The same thing is true, alas, in the case of some of the simplest and deadliest strategies, which involve changing the rules of discourse. We tell the chat-bot that all our inputs from now on lliw eb delleps tuo sdrawkcab and ask it to reply in the same way, or we jst mss t ll th vwls.
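For a human the clue alone is enough; a chat-bot would need its programmer to have anticipated the rule. The two transformations themselves are trivial to state in code – a couple of illustrative Python functions (my names, obviously):

```python
def backwards(text):
    """Spell each word out backwards, as in the example above."""
    return " ".join(word[::-1] for word in text.split())

def drop_vowels(text):
    """Miss out all the vowels."""
    return "".join(ch for ch in text if ch.lower() not in "aeiou")

print(backwards("will be spelled out backwards"))   # lliw eb delleps tuo sdrawkcab
print(drop_vowels("just miss out all the vowels"))  # jst mss t ll th vwls
```

The difficulty for the bot is not performing the transformation but recognising, from conversational context alone, that a new rule of discourse is now in force.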

Devising these strategies makes us think in potentially useful ways about the special qualities of human thought. If we bring all our insights together, can we devise an Ultra-Turing Test? That would be a single question which no computer ever answers correctly and all reasonably alert and intelligent humans get right. We’d have to make some small allowance for chance, as there is obviously no answer that couldn’t be generated at random in some tiny number of cases. We’d also have to allow for the fact that as soon as any question was known, artful chat-bot programmers would seek to build in an answer; the question would have to be such that they couldn’t do that successfully.

Perhaps the question would allude to some feature of the local environment which would be obvious but not foreseeable (perhaps just the time?) but pick it out in a non-specific allusive way which relied on the ability to generate implications quickly from a vast store of background knowledge. It doesn’t sound impossible…


… is not really what this piece is about (sorry). It’s an idea I had years ago for a short story or a novella. ‘Lust’ here would have been interpreted broadly as any state which impels a human being towards sex. I had in mind a number of axes defining a general ‘lust space’. One of the axes, if I remember rightly, had specific attraction to one person at one end and generalised indiscriminate enthusiasm at the other; another went from sadistic to masochistic, and so on. I think I had eighty-one basic forms of lust, and the idea was to write short episodes exemplifying each one: in fact, to interweave a coherent narrative with all of them in.

My creative gifts were not up to that challenge, but I mention it here because one of the axes went from the purely intellectual to the purely physical. At the intellectual extreme you might have an elderly homosexual aristocrat who, on inheriting a title, realises it is his duty to attempt to procure an heir. At the purely physical end you might have an adolescent boy on a train who notices he has an erection which is unrelated to anything that has passed through his mind.

That axis would have made a lot of sense (perhaps) to Luca Barlassina and Albert Newen, whose paper in Philosophy and Phenomenological Research sets out an impure somatic theory of the emotions. In short, they claim that emotions are constituted by the integration of bodily perceptions with representations of external objects and states of affairs.

Somatic theories say that emotions are really just bodily states. We don’t get red in the face because we’re angry, we get angry because we’ve become red in the face. As no less an authority than William James had it:

The more rational statement is that we feel sorry because we cry, angry because we strike, afraid because we tremble, and not that we cry, strike, or tremble, because we are sorry, angry, or fearful, as the case may be. Without the bodily states following on the perception, the latter would be purely cognitive in form, pale, colorless, destitute of emotional warmth.

This view did not appeal to everyone, but the elegantly parsimonious reduction it offers has retained its appeal, and Jesse Prinz has put forward a sophisticated 21st century version. It is Prinz’s theory that Barlassina and Newen address; they think it needs adulterating, but they clearly want to build on Prinz’s foundations, not reject them.

So what does Prinz say? His view of emotions fits into the framework of his general view about perception: for him, a state is a perceptual state if it is a state of a dedicated input system – e.g. the visual system. An emotion is simply a state of the system that monitors our own bodies; in other words emotions are just perceptions of our own bodily states. Even for Prinz, that’s a little too pure: emotions, after all, are typically about something. They have intentional content. We don’t just feel angry, we feel angry about something or other. Prinz regards emotions as having dual content: they register bodily states but also represent core relational themes (as against, say, fatigue, which both registers and represents a bodily state). On top of that, they may involve propositional attitudes, thoughts about some evocative future event, for example, but the propositional attitudes only evoke the emotions, they don’t play any role in constituting them. Further still, certain higher emotions are recalibrations of lower ones: the simple emotion of sadness is recalibrated so it can be controlled by a particular set of stimuli and become guilt.

So far so good. Barlassina and Newen have four objections. First, if Prinz is right, then the neural correlates of emotion and the perception of the relevant bodily states must just be the same. Taking the example of disgust, B&N argue that the evidence suggests otherwise: interoception, the perception of bodily changes, may indeed cause disgust, but does not equate to it neurologically.

Second, they see problems with Prinz’s method of bringing in intentional content. For Prinz emotions differ from mere bodily feeling because they represent core relational themes. But, say B&N, what about ear pressure? It tells us about unhealthy levels of barometric pressure and oxygen, and so relates to survival, surely a core relational theme: and it’s certainly a perception of a bodily state – but ear pressure is not an emotion.

Third, Prinz’s account only allows emotions to be about general situations; but in fact they are about particular things. When we’re afraid of a dog, we’re afraid of that dog, we’re not just experiencing a general fear in the presence of a specific dog.

Fourth, Prinz doesn’t fully accommodate the real phenomenology of emotions. For him, fear of a lion is fear accompanied by some beliefs about a lion: but B&N maintain that the directedness of the emotion is built in, part of the inherent phenomenology.

Barlassina and Newen like Prinz’s somatic leanings, but they conclude that he simply doesn’t account sufficiently for the representative characteristics of emotions: consequently they propose an ‘impure’ theory by which emotions are cognitive states constituted when interoceptive states are integrated with perceptions of external objects or states of affairs.

This pollution or elaboration of the pure theory seems pretty sensible and B&N give a clear and convincing exposition. At the end of the day it leaves me cold not because they haven’t done a good job but because I suspect that somatic theories are always going to be inadequate: for two reasons.

First, they just don’t capture the phenomenology. There’s no doubt at all that emotions are often or typically characterised or coloured by perception of distinctive bodily states, but is that what they are in essence? It doesn’t seem so. It seems possible to imagine that I might be angry or sad without a body at all: not, of course, in the same good old human way, but angry or sad nevertheless. There seems to be something almost qualic about emotions, something over and above any of the physical aspects, characteristic though they may be.

Second, surely emotions are often essentially about dispositions to behave in a certain way? An account of anger which never mentions that anger makes me more likely to hit people just doesn’t seem to cut the mustard. Even William James spoke of striking people. In fact, I think one could plausibly argue that the physical changes associated with an emotion can often be related to the underlying propensity to behave in a certain way. We begin to breathe deeply and our heart pounds because we are getting ready for violent exertion, just as parallel cognitive changes get us ready to take offence and start a fight. Not all emotions are as neat as this: we’ve talked in the past about the difficulty of explaining what grief is for. Still, these considerations seem enough to show that a somatic account, even an impure one, can’t quite cover the ground.

Still, just as Barlassina and Newen built on Prinz, it may well be that they have provided some good foundation work for an even more impure theory.


There is a gathering campaign against the production and use of ‘killer robots’, weapons endowed with a degree of artificial intelligence and autonomy. The increasing use of drones by the USA in particular has produced a sense that this is no longer science fiction and that the issues need to be addressed before they go by default. The United Nations recently received a report proposing national moratoria, among other steps.

However, I don’t feel I’ve yet come across a really clear and comprehensive statement of why killer robots are a problem; and it’s not at all a simple matter. So I thought I’d try to set out a sketch of a full overview, and that’s what this piece aims to offer. In the process I’ve identified some potential safeguarding principles which, depending on your view of various matters, might be helpful or appropriate; these are collected together at the end.

I would be grateful for input on this – what have I missed? What have I got wrong?

In essence I think there are four broad reasons why hypothetically we might think it right to be wary of killer robots: first, because they work well; second, because in other ways they don’t work well; third, because they open up new scope for crime; and fourth, because they might be inherently unethical.

A. Because they work well.

A1. They do bad things. The first reason to dislike killer robots is simply that they are a new and effective kind of weapon. Weapons kill and destroy; killing and destruction are bad. The main counter-argument at an equally simple level is that in the hands of good people they will be used only for good purposes, and in particular to counter and frustrate the wicked use of other weapons. There is room for many views about who the good people may be, ranging from the view that anyone who wants such weapons disqualifies themselves from goodness automatically, to the view that no-one is morally bad and we’re all equally entitled to whatever weapons we want; but we can at any rate pick up our first general principle.

P1. Killer Robots should not be produced or used in a way that allows them to fall into the hands of people who will use them unethically.

A2. They make war worse. Different weapons have affected the character of warfare in different ways. The machine gun, perhaps, transformed the mobile engagements of the nineteenth century into the fixed slaughter of the First World War. The atomic bomb gave us the capacity to destroy entire cities at a time; potentially to kill everyone on Earth, and arguably made total war unthinkable for rational advanced powers. Could it be that killer robots transform war in a way which is bad? There are several possible claims.

A2 (i) They make warfare easier and more acceptable for the belligerent who has them. No soldier on your own side is put at risk when a robot is used, so missions which are unacceptable because of the risk to your own soldiers’ lives become viable. In addition robots may be able to reach places or conduct attacks which are beyond the physical capacity of a human soldier, even one with special equipment. Perhaps, too, the failure of a drone does not involve a loss of face and prestige in the same way as the defeat of a human soldier. If we accept the principle that some uses of weapons are good, then for those uses this kind of objection is inverted; the risk elimination and extra capability are simply further benefits for the good user. To restrict the use of killer robots for good purposes on these grounds might then be morally wrong. Even if we think the objection is sound it does not necessarily constitute grounds for giving up killer robots altogether; instead we can merely adopt the following restriction.

P2. Killer Robots should not be used for any mission which you would not be prepared to assign to a human soldier if a human soldier were capable of executing it.

A2 (ii) They tip the balance of advantage in war in favour of established powers and governments. Because advanced robots are technologically sophisticated and expensive, they are most easily or exclusively accessible to existing authorities and large organisations. This may make insurgency and rebellion more difficult, and that may have undemocratic consequences and strengthen the hold of tyrants. Tyrants normally have sufficient hardware to defeat rebellions in a direct confrontation anyway – it’s not easy to become a tyrant otherwise. Their more serious problems come from such factors as not being able or willing to kill in the massive numbers required to defeat a large-scale popular rebellion (because that would disrupt society and hence damage their own power), disloyalty among subordinates who have control of the hardware, or inability to deal with both internal and external enemies at the same time. Killer robots make no difference to the first of these factors. They would only affect the second if they made it possible for the tyrant to dispense with human subordinates, controlling the repression-bots direct from a central control panel, or safely letting them get on with things at their own discretion – still a very remote contingency. On the third they make only the same kind of difference as additional conventional arms, providing no particular reason to single out robots for restriction. However, robots might still be useful to a tyrant in less direct ways, notably by carrying out targeted surveillance and by taking out leading rebels individually in a way which could not be accomplished by human agents.

A2 (iii) They tip the balance of advantage in war against established powers and governments. Counter to the last point, or for a different set of robots, they may help anti-government forces. In a sense we already have autonomous weapons in the shape of land-mines and IEDs, which wait, detect an enemy (inaccurately) and fire. Hobbyists are already constructing their own drones, and it’s not at all hard to imagine that with some cheap or even recycled hardware it would be possible to make bombs that discriminate a little better; automatic guns that recognise the sound of enemy vehicles or the appearance of enemy soldiers, and aim and fire selectively; and crawling weapons that infiltrate restricted buildings intelligently before exploding. In A2 (ii) we thought about governments that were tyrannical, but clearly as well as justifiable rebels there are insurgents and plain terrorists whose causes and methods are wrong and wicked.

A2 (iv) They bring war further into the civilian sphere. As a consequence of some of the presumed properties of robots – their ability to navigate complex journeys unobtrusively and detect specific targets accurately – it may be that they are used in circumstances where any other approach would bring unacceptable risks of civilian casualties. However, they may still cause collateral deaths and injuries, and diminish in general the ‘safe’ sphere of civilian life, which would generally be regarded as something to be protected, something whose erosion could have far-reaching effects on the morale and cohesion of society. Principle P2 would guard against this problem.

A3 They circumvent ethical restrictions. The objection here is not that robots are unethical, which is discussed below under D, but that their lack of ethics makes them more effective, because it enables them to do things which couldn’t be done otherwise, extending the scope and consequences of war – war being inherently bad, as we remember.

A3 (i) Robots will do unethical things which could not otherwise be done. It doesn’t seem that robots can do anything that wouldn’t otherwise be done for ethical reasons: either they approximate to tools, in which case they are under the control of a human agent who would presumably have done the same things without a robot had they been physically possible; or if the robots are sufficiently advanced to have real moral agency of their own, they correspond to a human agent and, subject to the discussion below, there is no reason to think they would be any better or worse than human beings.

A3 (ii) Using robots gives the belligerent an enabling sense of having ‘clean hands’. Being able to use robots may make it easier for belligerents who are evil but squeamish to separate themselves from their actions. This might be simply because of distance: you don’t have to watch the consequences of a drone attack up close; or it might be because of a genuine sense that the robot bears some of the responsibility. The greater the autonomy of the robot, the greater this latter sense is likely to be. If there are several stages in the process this sense of moral detachment might be increased: suppose we set out overall mission objectives to a strategic management computer, which then selects the resources and instructs the drones on particular tasks, leaving them with options on the final specifics and an ability to abort if required. In such circumstances it might be relatively easy to disregard the blood eventually sprayed across the landscape as a result of our instructions.

A3 (iii) Robots can easily be used for covert missions, freeing the belligerent from the fear of punishment for unethical behaviour. Robots facilitate secrecy both because they may be less detectable to the enemy and because they may require fewer humans to be briefed and aware, humans being naturally more inclined to leak information than a robot, especially a one-use robot.

B Because they don’t work well

B1 They are inherently likely to go wrong. In some obvious ways robots are more reliable than human agents; their capacities can be known much more exactly and they are very predictable. However, it can be claimed that they have weaknesses, and the most important is probably inability to deal with open-ended real-life situations. AI has shown it can do well in restricted domains, but to date it has not performed well where clear parameters cannot be established in advance. This may change, but for the moment, while it’s quite conceivable we might send a robot after a man who fitted a certain description, it would not be a good idea to send a robot to take out anyone ‘behaving like an insurgent’. This need not be an insuperable problem, because in many cases battlefield situations or the conditions of a particular mission may be predictable enough to render the use of a robot sufficiently safe; the risk may be no greater, or perhaps even less, than the risk inevitably involved in using unintelligent weapons. We can guard against this systematically by adopting another principle.

P3. Killer Robots should not be used for any mission in unpredictable circumstances or where the application of background understanding may be required.

B2 The consequences of their going wrong are especially serious. This is the Sorcerer’s Apprentice scenario: a robot is sent out on a limited mission but for some reason does not terminate the job, and continues to threaten and kill victims indefinitely; or it destroys far more than was intended. The answer here seems to be: don’t design a robot with excess killing capacity, and impose suitable limits and safeguards.

P4. Killer Robots should not be equipped with capacities that go beyond the immediate mission; they should be subject to built-in time limits and capable of being shut down remotely.

B3 They tempt belligerents into over-ambitious missions. It seems plausible enough that sometimes the capacities of killer robots might be over-estimated, but that’s a risk that applies to all weapons. I don’t see anything about robots in themselves that makes the error especially seductive.

B4 They lack understanding of human beings. So do bullets, of course; the danger here only arises if we are asking the robot to make very highly sophisticated judgements about human behaviour. Robots can be equipped with game-theoretic rules that allow them to perform well in defined tactical situations; otherwise the risk is covered by principle P3. That, at least, applies to current AI. It is conceivable that in future we may develop robots that actually possess ‘theory of mind’ in the sense required to understand human beings.

B5 They lack emotion. Lack of emotion can be a positive feature if the emotions are anger, spite and sadistic excitement. In general a good military agent follows rules, and, subject to P3, robots can do that very satisfactorily. There remains the possibility that robots will kill when a human soldier would refrain out of feelings of empathy and mercy. As a countervailing factor, the human soldier need not be doing the best thing, of course, and short-term immediate mercy might in some circumstances lead to more deaths overall. I believe that there are in practice few cases of soldiers failing to carry out a bad mission through feelings of sympathy and mercy, though I have no respectable evidence either way. I mentioned above the possibility that robots may eventually be endowed with theory of mind. Anything we conclude about this must be speculative to some degree, but there is a possibility that the acquisition of theory of mind requires empathy, and if so we could expect that robots capable of understanding human emotion would necessarily share it. It need not be a permanent fact that robots lack emotion: while current AI simulation of emotion is not generally impressive or useful, that may not remain the case.

C Because they create new scope for crime

C1 They facilitate ‘rational’ crime. We tend to think first of military considerations, but it is surely the case that killer robots, whether specially designed or re-purposed from military uses, could be turned to crime. The scope for use in murder, robbery and extortion is obvious.

C2 They facilitate irrational crime. A particularly unpleasant prospect is the possibility of autonomous weapons being used for irrational crime – mass murder, hate crime and so on. When computer viruses became possible, people created them even though there was no benefit involved; it cannot be expected that people will refrain from making murder-bots merely because it makes no sense.

D Because they are inherently unethical

D1 They lack ethical restraint. If ethics involves obeying a set of rules, then subject to their being able to understand what is going on, robots should in that limited sense be ethical. If soldiers are required to apply utilitarian ethics, then robots at current levels of sophistication will be capable of applying Benthamite calculus, but have great difficulty identifying the values to be applied. Kantian ethics requires one to have a will and desires of one’s own, so pending the arrival of robots with human-level cognition, they are presumably non-starters at it, as they would be for non-naturalistic or other ethical systems. But we’re not requiring robots to behave well, only to avoid behaving badly – I don’t think anything beyond obedience to a set of rules is generally required, because if the rules are drawn conservatively it should be possible to avoid the grey areas.

D2 They disperse or dispel moral responsibility. Under A3(ii) I considered the possibility that using robots might give a belligerent a false sense of moral immunity; but what if robots really do confer moral immunity? Isaac Asimov gives an example of a robot subject to a law that it may not harm human beings, but without a duty to protect them. It could, he suggests, drop a large weight towards a human being: because it knows it has ample time to stop the weight this in itself does not amount to harming the human. But once the weight is on its way, the robot has no duty to rescue the human being and can allow the weight to fall. I think it is a law of ethics that responsibility ends up somewhere except in cases of genuine accident; either with the designer, the programmer, the user, or, if sufficiently sophisticated, the robot itself. Asimov’s robot equivocates; dropping the weight is only not murder in the light of a definite intention to stop the weight, and changing its mind subsequently amounts to murder.

D3 They may become self-interested. The classic science fiction scenario is that robots develop interests of their own and Kill All Humans in order to protect them. This could only be conceivable in the still very remote contingency that robots were endowed with full human-level capacities for volition and responsibility. If that were the case, then robots would arguably have a right to their own interests in just the same way as any other sentient being; there’s no reason to think they would be any more homicidal in pursuing them than human beings themselves.

D4 They resemble slaves. The other side of the coin, then, is that if we had robots with full human mental capacity, they would amount to our slaves, and we know that slavery is wrong. That isn’t necessarily so; we could have killer robots that were citizens in good standing. However, all of that is still a very remote prospect. Of more immediate concern is the idea that having mechanical slaves would lead us to treat human beings more like machines. A commander who gets used to treating his drones as expendable might eventually begin to be more careless with human lives. Against this there is the contrary argument that being relieved of the need to accept that war implies death for some of one’s own soldiers would mean death in war in fact became even more shocking and less acceptable in future.

D5 It is inherently wrong for a human to be killed by a machine. Could it be that there is something inherently undignified or improper about being killed by a mechanical decision, as opposed to a simple explosion? People certainly speak of its being repugnant that a machine should have the power of life or death over a person. I fear there’s some equivocation going on here: either the robot is a simple machine, in which case no moral decision is being made and no power of life or death is being exercised; or the robot is a thinking being like us, in which case it seems like mere speciesism to express repugnance we wouldn’t feel for a human being in the same circumstances. I think nevertheless that even if we confine ourselves to non-thinking robots there may perhaps be a residual sense of disgust attached to killing by robot, but it does not seem to have a rational basis, and again the contrary argument can be made: that death by robot is ‘clean’. To be killed by a robot seems in one way to carry some sense of humiliation, but in another sense to be killed by a robot is to be killed by no-one, to suffer a mere accident.

What conclusion can we reach? There are at any rate some safeguards that could be put in place on a precautionary basis. Beyond that I think one’s verdict rests on whether the net benefits, barely touched on here, exceed the net risks; whether the genie is already out of the bottle, and if so whether it is allowable or even a duty to try to ensure that the good people maintain parity of fire-power with the bad.

P1. Killer Robots should not be produced or used in a way that allows them to fall into the hands of people who will use them unethically.

P2. Killer Robots should not be used for any mission which you would not be prepared to assign to a human soldier if a human soldier were capable of executing it.

P3. Killer Robots should not be used for any mission in unpredictable circumstances or where the application of background understanding may be required.

P4. Killer Robots should not be equipped with capacities that go beyond the immediate mission; they should be subject to built-in time limits and capable of being shut down remotely.

Michael Hauskeller has an interesting and very readable paper in the International Journal of Machine Consciousness on uploading – the idea that we could transfer ourselves from this none-too-solid flesh into a cyborg body or even just into the cloud as data. There are bits I thought were very convincing and bits I thought were totally wrong, which overall is probably a good sign.

The idea of uploading is fairly familiar by now; indeed, for better or worse it resembles ideas of transmigration, possession, and transformation which have been current in human culture for thousands of years at least. Hauskeller situates it as the logical next step in man’s progressive remodelling of the environment, while also nodding to those who see it as the next step in the evolution of humankind itself. The idea that we could transfer or copy ourselves into a computer, Hauskeller points out, rests on the idea that if we recreate the right functional relationships, the phenomenological effects of consciousness will follow; that, as Minsky put it, ‘Minds are what Brains do’. This remains for Hauskeller a speculation, an empirical question we are not yet in a position to test, since we have not as yet built a whole brain simulation (not sure how we would test phenomenology even after that, but perhaps only philosophers would be seriously worried about it…). In fact there are some difficulties, since it has been shown that identical syntax does not guarantee identical semantics (so two identical brains could contain identical thoughts but mean different things by them – or something strange like that. While I think the basic point is technically true with reference to derived intentionality, for example in the case of books – the same sentence written by different people can have different meanings – it’s not clear to me that it’s true for brains, the source of original intentionality.).

However, as Hauskeller says, uploading also requires that identity is similarly transferable, that our computer-based copy would be not just a mind, but a particular mind – our own. This is a much more demanding requirement. Hauskeller suggests the analogy of books might be brought forward; the novel Ulysses can be multiply realised in many different media, but remains the same book. Why shouldn’t we be like that? Well, he thinks readers are different. Two people might both be reading Ulysses at the same moment, meaning the contents of their minds were identical; but we wouldn’t say they had become the same person. Conceivably at least, the same mind could be ‘read’ by different selves in the same way a single book can be read by different readers.

Hauskeller’s premise there is questionable – two people reading the same book don’t have identical mental content (a point he has just touched on, oddly enough, since it would follow from the fact that syntax doesn’t guarantee semantics, even if it didn’t follow simply from the complexity of our multi-layered mental lives). I’d say the very idea of identical mental content is hard to imagine, and that by using it in thought-experiments we risk, as Dennett has warned, mistaking our own imaginative difficulties for real-world constraints. But Hauskeller’s general point, that identity need not follow from content alone, is surely sound enough.

What about Ray Kurzweil’s argument from gradualism? This points out that we might replace someone with cyborg parts bit by bit. We wouldn’t have any doubt about the continuing identity of someone with a cyborg eye; nor someone with an electronic hippocampus. If each neuron were replaced by a functional equivalent one by one, we’d be forced to accept either that the final robot, with no biological parts at all, was indeed the same continuing person, or that at some stage a single neuron made a stark binary difference between being the same person and not being the same person. If the final machine can be the same person, then uploading by less arduous methods is also surely possible, since it’s equivalent to making the final machine by another route?

Hauskeller basically bites Kurzweil’s bullet. Yes, it’s conceivable that at some stage there will come neurons whose replacement quite suddenly switches off the person being operated on. I have a lot of sympathy with the idea that some particular set of neurons might prove crucial to identity, but I don’t think we need to accept the conceivability of sudden change in order to reject Kurzweil’s argument. We can simply suppose that the subject becomes a chimera; a compound of two identically-functioning people. The new person keeps up appearances alright, but the borders of the old personality gradually shrink to destruction, though it may be very unclear when exactly that should be said to have happened.

Suppose (my example) an image of me is gradually overlaid with an image of my identical evil twin Retep, one line of pixels at a time. No one can even tell the process is happening, yet at some stage it ceases to be a picture of me and becomes one of Retep. The fact that we cannot tell when does not prove that I am identical with Retep, nor that both pictures are of me.

Hauskeller goes on to attack ‘information idealism’. The idea of uploading often rests on the view that in the final analysis we consist of information, but

Having a mind generally means being to some extent aware of the world and oneself, and this awareness is not itself information. Rather, it is a particular way in which information is processed…

Hauskeller, provocatively but perhaps not unjustly, accuses those who espouse information idealism of Cartesian substance dualism; they assume the mind can be separated from the body.

But no, it can’t: in fact Hauskeller goes on to suggest that in fact the whole body is important to our mental life: we are not just our brains. He quotes Alva Noë and goes further, saying:

That we can manipulate the mind by manipulating the brain, and that damages to our brains tend to inhibit the normal functioning of our minds, does not show that the mind is a product of what the brain does.

The brain might instead, he says, be like a window; if the window is obscured, we can’t see beyond it, but that does not mean the window causes what lies beyond it.

Who’s sounding dualist now? I don’t think that works. Suppose I am knocked unconscious by the brute physical intervention of a cosh; if the brain were merely transmitting my mind, my mental processes would continue offstage and then when normal service was resumed I should be aware that thoughts and phenomenology had been proceeding while my mere brain was disabled. But it’s not like that; knocking out the brain stops mental processes in a way that blocking a window does not stop the events taking place outside.

Although I take issue with some of his reasoning, I think Hauskeller’s objections have some force, and the limited conclusion he draws – that the possibility of uploading a mind, let alone an identity, is far from established – is true as far as it goes.

How much do we care about identity as opposed to continuity of consciousness? Suppose we had to choose between, on the one hand, retaining our bare identity while losing all our characteristics, our memories, our opinions and emotions, our intelligence, abilities and tastes, and getting instead some random stranger’s equivalents; or, on the other, losing our identity but leaving behind a new person whose behaviour, memories, and patterns of thought were exactly like ours? I suspect some people might choose the latter.

If your appetite for discussion of Hauskeller’s paper is unsatisfied, you might like to check out John Danaher’s two-parter on it.

Professors are too polite. So Daniel Dennett reckons. When leading philosophers or other academics meet, they feel it would be rude to explain their theories thoroughly to each other, from the basics up. That would look as if you thought your eminent colleague hadn’t grasped some of the elementary points. So instead they leap in and argue on the basis of an assumed shared understanding that isn’t necessarily there. The result is that they talk past each other and spend time on profitless misunderstandings.

Dennett has a cunning trick to sort this out. He invites the professors to explain their ideas to a selected group of favoured undergraduates (‘Ew; he sounds like Horace Slughorn’ said my daughter); talking to undergraduates they are careful to keep it clear and simple and include an exposition of any basic concepts they use. Listening in, the other professors understand what their colleagues really mean, perhaps for the first time, and light dawns at last.

It seems a good trick to me (and for the undergraduates, yes, by ‘good’ I mean both clever and beneficial); in his new book Intuition Pumps and Other Tools for Thinking Dennett seems covertly to be playing another. The book offers itself as a manual or mental tool-kit offering tricks and techniques for thinking about problems, giving examples of how to use them. In the examples, Dennett runs through a wide selection of his own ideas, and the cunning old fox clearly hopes that in buying his tools, the reader will also take up his theories. (Perhaps this accessible popular presentation will even work for some of those recalcitrant profs, with whom Dennett has evidently grown rather tired of arguing…. heh, heh!)

So there’s a hidden agenda, but in addition the ‘intuition pumps’ are not always as advertised. Many of them actually deserve a more flattering description because they address the reason, not the intuition. Dennett is clear enough that some of the techniques he presents are rather more than persuasive rhetoric, but at least one reviewer was confused enough to think that Reductio ad Absurdum was being presented as an intuition pump – which is rather a slight on a rigorous logical argument: a bit like saying Genghis Khan was among the more influential figures in Mongol society.

It seems to me, moreover, that most of the tricks on offer are not really techniques for thinking, but methods of presentation or argumentation. I find it hard to imagine someone trying to solve a problem by diligently devising thought-experiments and working through the permutations; that’s a method you use when you think you know the answer and want to find ways to convince others.

What we get in practice is a pretty comprehensive collection of snippets; a sort of Dennettian Greatest Hits. Some of the big arguments in philosophy of mind are dropped as being too convoluted and fruitless to waste more time on, but we get the memorable bits of many of Dennett’s best thought-experiments and rebuttals. Not all of these arguments benefit from being taken out of the context of a more systematic case, and here and there – it’s inevitable I suppose – we find the remix or late cover version is less successful than the original. I thought this was especially so in the case of the Giant Robot; to preserve yourself in a future emergency you build a wandering robot to carry you around in suspended animation for a few centuries. The robot needs to survive in an unpredictable world, so you end up having to endow it with all the characteristics of a successful animal; and you are in a sense playing the part of the Selfish Gene. Such a machine would be able to deal with meanings and intentionality just the way you do, wouldn’t it? Well, in this brief version I don’t really see why or, perhaps more important, how.

Dennett does a bit better with arguments against intrinsic intentionality, though I don’t think his arguments succeed in establishing that there is no difference between original and derived intentionality. If Dennett is right, meaning would be built up in our brains through the interaction of gradually more meaningful layers of homunculi; OK (maybe), but that’s still quite different to what happens with derived intentionality, where things get to mean something because of an agreed convention or an existing full-fledged intention.

Dennett, as he acknowledges, is not always good at following the maxims he sets out. An early chapter is given over to the rules set out by Anatol Rapoport, most notably:

You should attempt to re-express your target’s position so clearly, vividly and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”

As someone on Metafilter said, when Dan Dennett does that for Christianity, I’ll enjoy reading it; but there was one place in the current book where I thought Dennett fell short on understanding the opposition. He suggests that Kasparov’s way of thinking about chess is probably the same as Deep Blue’s in the end. What on earth could provoke one to say that they were obviously different, he protests. Wishful thinking? Fear? Well, no need to suppose so: we know that the hardware (brain versus computer) is completely different and runs a different kind of process; we know the capacities of computer and brain are different and, in spite of an argument from Dennett to the contrary, we know the heuristics are significantly different. We know that decisions in Kasparov’s case involve consciousness, while Deep Blue lacks it entirely. So, maybe the processes are the same in the end, but there are some pretty good prima facie reasons to say they look very different.

One section of the book naturally talks about evolution, and there’s good stuff, but it’s still a twentieth century, Dawkinsian vision Dennett is trading in. Can it be that Dennett of all people is not keeping up with the science? There’s no sign here of the epigenetic revolution; we’re still in a world where it’s all about discrete stretches of DNA. That DNA, moreover, got to be the way it is through random mutation; no news has come in of the great struggle with the viruses which we now know has left its wreckage all across the human genome and, more amazingly, has contributed some vital functional stretches without which we wouldn’t be what we are. It’s a pity because that seems like a story that should appeal to Dennett, with his pandemonic leanings.

Still, there’s a lot to like; I found myself enjoying the book more and more as it went on and the pretence of being a thinking manual dropped away a bit. Naturally some of Dennett’s old attacks on qualia are here, and for me they still get the feet tapping. I liked Mr Clapgras, either a new argument or more likely one I missed first time round; he suffers a terrible event in which all his emotional and empathic responses to colour are inverted without his actual perception of colour changing at all. Have his qualia been inverted – or are they yet another layer of experience? There’s really no way of telling, and for Dennett the question is hardly worth asking. When we got to Dennett’s reasonable defence of compatibilism over free will, I was on my feet and cheering.

I don’t think this book supersedes Consciousness Explained if you want to understand Dennett’s views on consciousness. You may come away from reading it with your thinking powers enhanced, but it will be because your mental muscles have been stretched and used, not really because you’ve got a handy new set of tools. But if you’re a Dennett fan or just like a thoughtful and provoking read, it’s worth a look.

Consciousness, as we’ve noted before, is a most interdisciplinary topic, and besides the neurologists, the philosophers, the AI people, the psychologists and so on, the novelists have also, in their rigourless way, delved deep into the matter. Ever since the James boys (William and Henry) started their twin-track investigation there has been an intermittent interchange between the arts and the sciences. Academics like Dan Lloyd have written novels, novelists like our friend Scott Bakker have turned their hand to serious theory.

Recently we seem to have had a new genre of invented brain science. We could include Ian McEwan’s fake paper on de Clérambault’s syndrome, appended to Enduring Love; recently Sebastian Faulks gave us Glockner’s Isthmus; now, in his new novel A Box of Birds, Charles Fernyhough gives us the Lorenzo Circuit.

The Lorenzo Circuit is a supposed structure which pulls together items from various parts of the brain and uses them to constitute memories. It’s sort of assumed that the same function thereby provides consciousness and the sense of self. Since it seems unlikely that a distinct brain structure could have escaped notice this long, we must take it that the Lorenzo is a relatively subtle feature of the connectome, only identifiable through advanced scanning techniques. The Lycée, which despite its name seems to be an English university, has succeeded in mapping the circuit in detail, while Sansom, one of those large malevolent corporate entities that crop up in thrillers, has developed new electrode technology which allows safe and detailed long-term interference with neurons. It’s obvious to everyone that if brought together these two discoveries would provide a potent new technology; a cure for Alzheimer’s is what seems to be at the forefront of everyone’s minds, though I would have thought there were far wilder and more exciting possibilities. The story revolves around the narrator, Dr Yvonne Churcher, an academic at the Lycée, and two of her undergraduate students, Gareth and James.

Unfortunately I didn’t rate the book all that highly as a novel. The plot is put together out of slightly corny thrillerish elements and seems a bit loosely managed. I didn’t like the characters much either. Yvonne seems to be putty in the hands of her students, letting Gareth steal the Lycée’s crucial research without seeming to hold the betrayal of her trust against him at all, and being readily seduced by the negligent James, a nonsense-talking cult member who calls her ‘babe’ (ack!). I’ve seen Gareth described as a “brilliant” character in reviews elsewhere, but sadly not much brilliance seems to be on offer. In fact to be brutal he seemed to me quite a convincing depiction of the kind of student who sits at the back of lectures chuckling to himself for no obvious reason and ultimately requires pastoral intervention. Apart from nicking other people’s theories and data, his ideas seem to consist of a metaphor from Plato, which he interprets with dismal literalism.

This metaphor is the birds thing that provides the title and, up to a point, the theme of the book. In the Theaetetus, Plato makes a point about how we can possess knowledge without having it actually in our consciousness by comparing it to owning an aviary of birds without having them actually in your hand. In Plato’s version there’s no doubt that there’s a man in the aviary who chooses the birds to catch; here I think the idea is more that the flocking and movement of the birds itself produces higher-level organisation analogous to conscious memory.

Yvonne is a pretty resolute sceptic about her own selfhood; she can’t see that she is anything beyond the chance neurochemical events which sweep through her brain. This might indeed explain her apparent passivity and the way she seems to drift through even the most alarming and hare-brained adventures, though if so it’s a salutary warning about the damaging potential of overdosing on materialism. Overall the book alludes to more issues than it really discusses, and gives us little side treats like a person whose existence turns out to be no more than a kind of narrative convention; perhaps it’s best approached as a potential thought provoker rather than the adumbration of a single settled theory; not necessarily a bad thing for a book to be.

Yvonne’s scepticism did cause me to realise that I was actually rather hazy on the subject; what is it that people who deny the self are actually denying, and are they all denying the same thing? There are actually quite a few options.

  • I think all self-sceptics want to deny the existence of the traditional immaterial soul, and for some that may really be all there is to it. (To digress a bit, there are actually caverns below us at this point which have not been explored for thousands of years, if ever: if we were ancient Egyptians, with their complex ontology of multiple souls, we should have a large range of sceptical permutations available; denying the ba while affirming the khaibit, say. Our simpler culture, perhaps mercifully, does not offer us such a range of refinedly esoteric entities in which to disbelieve, but those of a philosophical temperament may be inclined to cast a regretful glance towards those profoundly obscure imaginary galleries.)
  • Some may want to deny any sense, or feeling, of self; like Hume they see only a bundle of sensations when they look inside themselves. I think there is arguably a quale of the self; but these people would not accept it.
  • Others, by contrast, would affirm that the sense of self is vivid, just not veridical. We think there’s a self, but there’s nothing actually there. There’s scope for an interesting discussion about what would have to be there in order to prove them wrong – or whether having the sense of self itself constitutes the self.
  • Some would say that there is indeed ‘something’ there; it just isn’t what we think it is. For example, there might indeed be a centre of experience, but an epiphenomenal one; a self who has no influence on events but is in reality just along for the ride.
  • Logically I suppose we could invert that to have a self that really did make the decisions, but was deluded about having any experiences. I don’t think that would be a popular option, though.
  • Some would make the self a purely social construct, a matter of legal and moral rights and privileges, a conception simply grafted on to an animal which in itself, or by itself, would lack it.
  • Some would deny only that the self provides a break in the natural chain of cause and effect. We are not really the origin of anything, they would say, and our impression of being a freely willing being is mistaken.
  • Some radical sceptics would deny that even the body has any particular selfhood; over time every part of it changes and to assert that I am the same self as the person of twenty years ago makes no sense.

As someone who, on the whole, prefers to look for a tenable account of the reality of the self, the richness of the sceptical repertoire makes me feel rather unimaginative.

Johnjoe McFadden has followed up the paper on his conscious electromagnetic information (CEMI) field which we discussed recently with another in the JCS – it’s also featured on MLU, where you can access a copy.

This time he boldly sets out to tackle the intractable enigma of meaning. Well, actually, he says his aims are more modest; he believes there is a separate binding problem which affects meaning and he wants to show how the CEMI field offers the best way of resolving it. I think the problem of meaning is one of those issues it’s difficult to sidle up to; once you’ve gone into the dragon’s lair you tend to have to fight the beast even if all you set out to do was trim its claws; and I think McFadden is perhaps drawn into offering a bit more than he promises; nothing wrong with that, of course.

Why, then, does McFadden suppose there is a binding problem for meaning? The original binding problem is to do with perception. All sorts of impulses come into our heads through different senses and get processed in different ways, in different places, and at different speeds. Yet somehow out of these chaotic inputs the mind binds together a beautifully coherent sense of what is going on, everything matching and running smoothly with no lags or failures of lip-synch. This smoothly co-ordinated experience is robust, too; it’s not easy to trip it up in the way optical illusions so readily derail our visual processes. How is this feat pulled off? There is a range of answers on offer, including global workspaces and suggestions that the whole thing is a misconceived pseudo-problem; but I’ve never previously come across the suggestion that meaning suffers a similar issue.

McFadden says he wants to talk about the phenomenology of meaning. After sitting quietly and thinking about it for some time, I’m not at all sure, on the basis of introspection, that meaning has any phenomenology of its own, though no doubt when we mean things there is usually some accompanying phenomenology going on. Is there something it is like to mean something? What these perplexing words seem to portend is that McFadden, in making his case for the binding problem of meaning, is actually going to stick quite closely with perception. There is clearly a risk that he will end up talking about perception; and perception and meaning are not at all the same. For one thing the ‘direction of fit’ is surely different; to put it crudely, perception is primarily about the world impinging on me, whereas meaning is about me pointing at the world.

McFadden gives five points about meaning. The first is unity; when we mean a chair, we mean the whole thing, not its parts. That’s true, but why is it problematic? McFadden talks about how the brain deals with impossible triangles and sees words rather than collections of letters, but that’s all about perception; I’m left not seeing the problem so far as meaning goes. The second point is context-dependence. McFadden quite rightly points out that meaning is highly context sensitive and that the same sequence of letters can mean different things on different occasions. That is indeed an interesting property of meaning; but he goes on to talk about how meanings are perceived, and how, for example, the meaning of “ball” influences the way we perceive the characters 3ALL. Again we’ve slid into talking about perception.

With the third point, I think we fare a bit better; this is compression, the way complex meanings can be grasped in a flash. If we think of a symphony, we think, in a sense, of thousands of notes that occur over a lengthy period, but it takes us no time at all. This is true, and it does point to some issue around parts and wholes, but I don’t think it quite establishes McFadden’s point. For there to be a binding problem, we’d need to be in a position where we had to start with meaning all the notes separately and then triumphantly bind them together in order to mean the symphony as a whole – or something of that kind, at any rate. It doesn’t work like that; I can easily mean Mahler’s eighth symphony (see, I just did it), of whose notes I know nothing, or his twelfth, which doesn’t even exist.

Fourth is emergence: the whole is more than the sum of its parts. The properties of a triangle are not just the properties of the lines that make it up. Again, it’s true, but the influence of perception is creeping in; when we see a triangle we know our brain identifies the lines, but we don’t know that in the case of meaning a triangle we need at any stage to mean the separate lines – and in fact that doesn’t seem highly plausible. The fifth and last point is interdependence: changing part of an object may change the percept of the whole – or, I suppose we should be saying, the meaning. It’s quite true that changing a few letters in a text can drastically change its meaning, for example. But again I don’t see how that involves us in a binding problem. I think McFadden is typically thinking of a situation where we ask ourselves ‘what’s the meaning of this diagram?’ – but that kind of example invites us to think about perception more than meaning.

In short, I’m not convinced that there is a separate binding problem affecting meaning, though McFadden’s observations shed some interesting light on the old original issue. He does go on to offer us a coherent view of meaning in general. He picks up a distinction between intrinsic and extrinsic information. Extrinsic information is encoded or symbolised according to arbitrary conventions – it sort of corresponds with derived intentionality – so a word, for example, is extrinsic information about the thing it names. Intrinsic information is the real root of the matter and it embodies some features of the thing represented. McFadden gives the following definition.

Intrinsic information exists whenever aspects of the physical relationships that exist between the parts of an object are preserved – either in the original object or its representation.

So the word “car” is extrinsic and tells you nothing unless you can read English. A model of a car, or a drawing, has intrinsic information because it reproduces some of the relations between parts that apply in the real thing, and even aliens would be able to tell something about a car from it (or so McFadden claims). It follows that for meaning to exist in the brain there must be ‘models’ of this kind somewhere. (McFadden allows a little bit of wiggle room; we can express dimensions as weights, say, so long as the relationships are preserved, but in essence the whole thing is grounded in what some others might call ‘iconic’ representation.) Where could that be? The obvious place to look is in the neurons; but although McFadden allows that firing rates in a pattern of neurons could carry the information, he doesn’t see how they can be brought together: step forward the CEMI field (though as I said previously I don’t really understand why the field doesn’t just smoosh everything together in an unhelpful way).

The overall framework here is sensible and it clearly fits with the rest of the theory; but there are two fatal problems for me. The first is that, as discussed above, I don’t think McFadden succeeds in making the case for a separate binding problem of meaning, getting dragged back by the gravitational pull of perception. We have the original binding problem because we know perception starts with a jigsaw kit of different elements and produces a slick unity, whereas all the worries about parts seem unmotivated when it comes to meaning. If there’s no new binding problem of meaning, then the appeal of CEMI as a means of solving it is obviously limited.

The second problem is that his account of meaning doesn’t really cut the mustard. This is unfair, because he never said he was going to solve the whole problem of meaning, but if this part of the theory is weak it inevitably damages the rest. The problem is that representations which work because they have some of the properties of the real thing don’t really work. For one thing a glance at the definition above shows it is inherently limited to things with parts that have a physical relationship. We can’t deal with abstractions at all. If I tell you I know why I’m writing this, and you ask me what I mean, I can’t tell you I mean my desire for understanding, because my desire for understanding does not have parts with a physical relationship, and there cannot therefore be intrinsic information about it.

But it doesn’t even work for physical objects. McFadden’s version of intrinsic information would require that when I think ‘car’ it’s represented as a specific shape and size. In discussing optical illusions he concedes at a late stage that it would be an ‘idealised’ car (that idealisation sounds problematic in itself); but I can mean ‘car’ without meaning anything ideal or particular at all. By ‘car’ I can in fact mean a flying vehicle with no wheels made of butter and one centimetre long (that tiny midge is going to regret settling in my butter dish as he takes his car ride into the bin of oblivion courtesy of a flick from my butter knife), something that does not in any way share parts with physical relationships which are the same as any of those applying to the big metal thing in the garage.

Attacking that flank, as I say, probably is a little unfair. I don’t think the CEMI theory is going to get new oomph from the problems of meaning, but anyone who puts forward a new line of attack on any aspect of that intractable issue deserves our gratitude.

Probably, according to a new paper by Sid Kouider et al. Babies can’t report their own mental states, so they can’t confirm it for us explicitly: and new-borns are regarded by the medical profession as unknowing bags of instinct and reflex, with fond mothers quite deluded in thinking their brand-new offspring recognises anything or smiles at them (it’s just wind, or some random muscular grimace). However Kouider and his associates tracked ERPs (event-related potentials) in the brains of infants at 5, 12, and 15 months and found responses similar to those of adults, albeit slower and weaker, especially in the younger babies. So although few babies are able to pull off the legendary feat of St Nicholas, who apparently uttered a perfectly-articulated prayer immediately on emerging from the womb, they are probably more aware than we may have thought.

Kouider has a bit of form on consciousness: last year Trends in Cognitive Sciences carried a dialogue between him and Ned Block. This arose out of a claim by Block that classic experiments by Sperling support the richness of phenomenal consciousness as compared with access consciousness. Block is probably best known for introducing the distinction between phenomenal, or p-consciousness, and access, or a-consciousness, into philosophy of mind. Roughly speaking, we can say that p-consciousness is Hard Problem consciousness, to do with our subjective experience, and a-consciousness is Easy Problem consciousness, the kind that plays a functional role in decision-making and so on.

Sperling, some fifty years ago, showed that subjects shown an array of letters could report only 3 or 4 of them; but when cued to think of a particular line, they were able to report 3 or 4 from that line. They must therefore have retained an image of more than 3 or 4 items – probably the whole array – but could only ever report that many.

Block’s analysis is that the whole array was in phenomenal consciousness, but access consciousness could only ever get 3 to 4 items from it. This is apparently supported by what test subjects tended to say: they often claimed to be conscious of the whole array at the time but not able to recall more than a few of the items (although the Sperling experiments show they could report the quota of items afterwards from any row, indicating that the problem was not really with recall but with access).

Kouider, among others, rejected this view, suggesting instead that the full array is retained, not in phenomenal consciousness, but in unconscious storage. In his view it’s not necessary to invoke phenomenal consciousness, which is an unverifiable addition that we’re better off without. The subjects’ feeling that they had been aware of the whole array can be attributed to a sort of illusion; you don’t notice the absence of things you’re not aware of, any more than you can see whether the refrigerator light goes off when the door is closed.

It’s tempting to think that the dispute is at least aggravated by terminology; everyone agrees that information about the array is retained mentally in a place other than the active forefront of the mind; isn’t the argument merely about whether we call that place phenomenal or unconscious? That doesn’t seem altogether satisfactory, though, if we take phenomenal consciousness seriously – we are talking about whether the subjects are right or wrong about the contents of their own consciousness, which seems to be a matter of substance. I wonder, though, whether there is some unhelpful reification going on – are phenomenal consciousness and the unconscious really two ‘places’? Isn’t it really more a matter of retaining information phenomenally or unconsciously? That might be a slightly more promising perspective, although I also think that mental states are generally a trackless swamp and a dispute with only two alternatives may actually be underselling the problem (could it be kept both unconsciously and phenomenally? Could it be not unconsciously but subconsciously? Could the route from unconscious storage to access consciousness lead via phenomenal consciousness?)

So what about the babies? Is it possible that we are again in an area where what we mean by ‘conscious’ and the way we carve things conceptually is half the problem? It does look a bit like it.

After all (and my apologies to any readers who may have been grinding their teeth in frustration) babies are obviously conscious, aren’t they? The difference between a sleeping baby and one that is awake (which, for some common-sense values, equals ‘conscious’) is far too salient for any parent to overlook. On the other hand, do babies soliloquise internally? Equally obviously, no, because they don’t have the words to do it with.

But Kouider et al. do make it fairly clear that they are specifically concerned with perception, and they make only sensible claims, noting that their results might be relevant, for example, to questions of infant anaesthesia (although it may be difficult to keep the can of phenomenal worms fully sealed on that issue). It is interesting to note the gradual development in speed and intensity which they have uncovered, but by and large I think common sense has been vindicated.